Open Access
9 May 2016 Development of the local magnification method for quantitative evaluation of endoscope geometric distortion
Abstract
With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature-identification-related diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscopic industry and the medical device regulatory agency. However, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data based on complex mathematical models, which are difficult to understand. Some commonly used distortion evaluation methods, such as the picture height distortion (DPH) or radial distortion (DRAD), are either too simple to accurately describe the distortion or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion. Based on this method, we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and can potentially be adopted in an international endoscope standard.

1.

Introduction

Most endoscopes have a short focal length and a wide field of view (FOV) in order to observe a broad area with minimum moving or bending of the endoscope, at the expense of severe geometric distortion. The distorted images can adversely affect the accuracy of size and shape estimations. Therefore, a global standard providing a quantitative and simple method to evaluate endoscope distortion is essential. However, such a standard to characterize all possible types of endoscope distortion, along with instructions on how to evaluate and present the distortion results, has yet to be developed, which makes it difficult to accurately evaluate the quality of new and existing endoscopic technologies. This in turn leads to delays in the availability of technologically superior endoscopes in the market. Such a domino effect can be avoided by the development of a consistent and accurate standardized method to characterize endoscope distortion.

1.1.

Endoscope Distortion

Optical aberration includes two main types: chromatic aberrations and monochromatic aberrations. The former arises from the fact that the refractive index is actually a function of wavelength. The latter occurs even with quasimonochromatic light and falls into two subgroupings: monochromatic aberrations that deteriorate the image, making it unclear (e.g., spherical aberration, coma, and astigmatism), and monochromatic aberrations that deform the image (e.g., Petzval field curvature and distortion).1

In this paper, we focus on the monochromatic aberrations that deform the image. We call such aberrations geometric distortions. A geometric distortion is a deviation from the rectilinear projection, a projection in which straight lines in a scene remain straight in their image. While similar distortions can also be seen in display (display distortion, especially in cathode ray tube display), we mainly focus on the geometric distortions caused by geometric optics. Among different types of geometric distortions, radial distortions are the most commonly encountered and most severe. They cause an inward (barrel distortions) or outward (pincushion distortions) displacement of a given image point along the radial direction from its undistorted location (Fig. 4). A radial distortion can also be a combination of both barrel and pincushion distortions, which is called a mustache (or wave) distortion. In an image with a radial distortion, a straight line that runs through the image center (usually also being the center of distortion) remains straight. Since most radial distortions are circularly symmetric (i.e., rotationally symmetric with respect to any angle), or approximately so, arising from the circular symmetry of the optical imaging systems, a circle that is concentric with the image center remains a circle in its image, although its radius may be affected. Some complex distortions include both radial and tangential components, i.e., a given image point displaces along both radial (radial distortion) and tangential (tangential distortion) directions (Fig. 1). Such distortions, called radial-tangential distortions in this paper, include decentering distortions and thin prism distortions.2,3 Unless otherwise specified, distortions hereafter mentioned in this paper mean radial geometric distortions—the focus of this paper.

Endoscopes usually exhibit severe barrel distortions. An endoscope needs a short focal length and a wide FOV in order to observe a broad area with minimal moving or bending of the endoscope, which is essential for steady and smooth manipulation because of the restricted space, the limited degrees of freedom of movement, and the limitation in hand-eye coordination during surgical cases.4 However, lenses with a short focal length (a few millimeters only) and a wide FOV (ranging from 100 to 170 deg) inevitably cause severe distortions.5 Occasionally, an endoscope exhibits a mustache distortion that varies between barrel and pincushion across the image, mostly because a mathematical algorithm is used to correct distortion at the maximum image height or other parts of the image. Since endoscope distortions can negatively affect size estimation and feature-identification-related diagnosis,5–7 quantitative evaluation of endoscope distortions and proper understanding of the evaluation results are essential.

1.2.

Need for an Endoscope Distortion Evaluation Method

The millions of endoscopic procedures conducted monthly in the United States for a wide range of indications are driving the advancement of endoscopic imaging technology. With new diagnostic capabilities and more complex optical designs, technological advances in endoscopes promise significant improvements in both safety and effectiveness. Endoscope optical performance (OP) can be evaluated by OP characteristics (OPCs), including resolution, distortion, FOV, direction of view, depth of field, optimal working distance, image noise, detection uniformity, veiling glare, and so on.

Current consensus standards provide limited information on validated and quantitative test methods for assessing endoscope OP. There is no standardized method to evaluate endoscope distortions. An international standard specifies methods of determining distortion in optical systems.8 The methods require the use of complex devices, such as an autocollimator or an instrument to measure the object and image pupil field angles and heights. While the standard provides complex equations, it does not clarify how the distortion results should be presented and evaluated. Also, the picture height distortion value mentioned in the standard is insufficient for the evaluation of severe barrel or pincushion distortions, and fails for the evaluation of mustache distortions. The definitions of angular magnification and lateral magnification in this standard are based only on a small area near the optical axis of the test specimen, which cannot be extended to endoscopes whose magnification changes significantly within the FOV. The endoscope working group (WG) of the International Organization for Standardization (ISO), ISO/TC172/SC5/WG6, develops and oversees endoscope standards (the ISO 8600 series) that cover the endoscope OPCs of FOV, direction of view, and optical resolution. However, an endoscope distortion standard has not yet been developed by this WG.

While endoscopic technology is developing fast, the regulatory science for endoscope OP evaluation has been unable to keep pace. Every year, the U.S. Food and Drug Administration receives a large number of endoscope submissions for premarket notification or premarket approval. However, the evaluation of new video endoscopic equipment is difficult because of the lack of objective OP standards. The industry lacks consensus standards on objective test methods to evaluate distortion. The resulting patchwork of tests conducted by different device manufacturers leads to delays in bringing important endoscopic technology to market and may allow the clearance of a less optically robust system that negatively impacts patient care.

In this paper, we tried to establish a quantitative, objective, and simple distortion evaluation method for endoscopes, with the goal of applying the method in an international endoscope standard. We reviewed some common methods described in prior journal articles for distortion evaluation of an optical imaging system and analyzed the relationships between these methods. Based on the review, a quantitative test method for assessing endoscope radial distortion was developed and validated based on the local magnification idea. The method will help facilitate performance characterization and device intercomparison for a wide variety of standards and endoscopic imaging products. The method has the potential to facilitate the product development and regulatory assessment processes in a least burdensome approach by reducing the workload on both the endoscope manufacturers and the regulatory agency. As a result, novel, high-quality endoscopic systems can be swiftly brought to market. The method can also be used to facilitate the rapid identification and understanding of the causes of poorly performing endoscopes, and benefit quality control during manufacturing as well as quality assurance during clinical practice.

2.

Review of Common Methods for Distortion Evaluation

In this section, common methods for distortion evaluation are reviewed. The distortion pattern on an image sensor might not be the same as that shown on a display device because of the effects of hardware, such as a cathode ray tube (CRT), or software, such as an image processing algorithm. For simplicity, this paper focuses on distortions of digital images from an image sensor, which might or might not have been processed. However, the methods can also be extended to evaluate display distortions.

Theoretically, a geometric distortion might include both radial and tangential components, i.e., a given image point displaces along both radial (radial distortion) and tangential (tangential distortion) directions (Fig. 1). Such distortions, called radial-tangential distortions in this paper, include decentering distortions and thin prism distortions.2,3 A radial-tangential distortion can be evaluated by comparing the positions of two-dimensional (2-D) points on the distorted images with their positions on an ideally nondistorted image. It can be described with a 2-D matrix showing the relative position change of each point as a function of x-y coordinates. In an optical imaging system, the tangential component of a geometric distortion is basically conditioned by imperfect circular symmetry. However, an optical imaging system manufactured in accordance with the present state of the art has a negligible tangential distortion.8,9 Therefore, this paper only focuses on radial distortions.

2.1.

Picture Height Distortion and Related Methods

There are several methods for distortion evaluation. The picture height distortion method (DPH, where D means distortion) is defined by the European Broadcasting Union (EBU)10 and recommended by the ISO 9039 International Standard.8 It quantifies the bending of the image of a horizontal straight line that is tangent to the circumscribed (for barrel distortion) or inscribed (for pincushion distortion) rectangle of the distorted image (Fig. 2). As shown in Fig. 2, it is calculated as

Eq. (1)

DPH(%) = ΔH/H × 100 = (A − B)/H × 100,
with B being half of H, and H being the height of the circumscribed (for barrel distortion) or inscribed (for pincushion distortion) rectangle of the distorted image. DPH values are negative for barrel distortions and positive for pincushion distortions. The reported DPH value should be the mean value of all four corners. While DPH was initially defined for the vertical direction, it is applicable to horizontal distortion as well. DPH is also called the television (TV) distortion method (DTV) or traditional TV distortion method. The term TV distortion has been used because such geometric distortion was often observed on a traditional CRT television due to the effect of an internal or external magnetic field, or because this method is often used to evaluate the distortion on a display device. While CRT televisions are nearly obsolete, the term TV distortion is still widely used, though its meaning is no longer the original TV-related one. An open standard for self-regulation of mobile imaging device manufacturers, named Standard Mobile Imaging Architecture (SMIA),11,12 defines a distortion evaluation method in a similar way to DPH. We call this method the SMIA TV distortion method (DSTV) to distinguish it from DTV or DPH. DSTV can be calculated as

Eq. (2)

DSTV(%) = ΔH/B × 100 = (A − B)/B × 100.
The reported DSTV value should also be the mean value of all four corners. Obviously, the DSTV value is twice the DPH value for the same distorted image, i.e., DSTV = DPH × 2.
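The two definitions in Eqs. (1) and (2) can be sketched in a few lines of Python; the measured lengths A and B below are hypothetical pixel values for a barrel distortion, not data from any real endoscope image.

```python
def picture_height_distortion(a, b):
    """D_PH (%) per Eq. (1): (A - B)/H * 100, with H = 2B the picture height."""
    h = 2.0 * b
    return 100.0 * (a - b) / h

def smia_tv_distortion(a, b):
    """D_STV (%) per Eq. (2): (A - B)/B * 100; twice D_PH by construction."""
    return 100.0 * (a - b) / b

# Hypothetical barrel-distorted measurement (pixels): the corner height A is
# smaller than the mid-edge height B, so both values come out negative.
a, b = 470.0, 500.0
print(picture_height_distortion(a, b))  # -3.0
print(smia_tv_distortion(a, b))         # -6.0
```

As the code confirms, the sign identifies the distortion type (negative for barrel, positive for pincushion) and DSTV is exactly twice DPH.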

Fig. 1

Definitions of radial-tangential distortion on the 2-D plane.


Fig. 2

Picture height distortion (DPH): (a) barrel distortion and (b) pincushion distortion.


Another distortion evaluation method is presented in Fig. 3.13 If we draw a straight line connecting the two ends of a curved line (the image of a straight line in the target), with L being the length of the drawn line and l being the largest distance from this line to any point on the curved image line [Fig. 3(a)], then the distortion is defined as

Eq. (3)

DPH2(%)=l/L×100.

As opposed to DPH, the DPH2 values are positive for barrel distortions and negative for pincushion distortions. Otherwise, the definition of DPH2 is similar to that of DPH. The absolute value of l in DPH2 is the same as that of ΔH in the DPH method for the lower horizontal edge of a distorted image. Comparing Figs. 2 and 3, we obtain the relation DPH2 = DPH × H/L. Since the DPH2 method has no significant advantage over the DPH method, we do not recommend it for distortion evaluation.

Fig. 3

DPH2: (a) definition of DPH2, (b) the relation between DPH2 and DPH.


The aforementioned distortion evaluation methods calculate the largest positional error of barrel or pincushion distortions over the whole image. They are meaningful only if the optical system has a steadily increasing distortion (barrel or pincushion) from the image center to the edges. For a complex distortion pattern, it is impossible to evaluate the distortion in detail with a single value, and the value might be misleading. Taking a mustache distortion as an example, it is possible that the image displays little or virtually zero distortion at the edges as measured by any of these methods, but a maximum distortion at midfield. These methods are also related to the aspect ratio of the distorted image. The EBU defines DPH for an aspect ratio of 4:3, the traditional television and computer monitor standard. However, there are other widely used aspect ratios, such as the 3:2 ratio of classic 35 mm still camera film and the 16:9 ratio of HD video. We cannot directly compare the distortion values of two images with different aspect ratios.

2.2.

Radial Distortion Method

Another distortion evaluation method is based on comparing the radii of distorted (Rd) and undistorted (Ru) images. It is assumed that the distortion close to the optical center is zero. Therefore, an undistorted image can be calculated based on the information at the center of the distorted image. The distorted image is then evaluated with the undistorted image as a reference along the radial direction. Since this method can be applied to any radial distortion, we call it DRAD. As shown in Fig. 4,

Eq. (4)

DRAD(%) = (Rd − Ru)/Ru × 100,
where Rd is the distance of a point in the distorted image from the image center and Ru is the distance of the corresponding point in the calculated undistorted image from the image center.14,15 The point can come from any location in the distorted image except for the image center where Ru can be infinitely small, although Fig. 4 shows only the top-right corner as an example. If the absolute values of Rd and Ru are magnified at the same scale, the distortion evaluation results will not be affected.

Fig. 4

Definition of DRAD: (a) undistorted image, (b) barrel distortion, and (c) pincushion distortion.


DRAD can be used to evaluate complex distortions (e.g., mustache distortions) with a distortion profile along a radius line. Mustache distortions can be caused by countermeasures in the design or by an image processing algorithm to limit or remove distortion. If we calculate the DRAD along a diagonal from the image center to a corner, we can obtain a curve of DRAD versus Ru or Rd, as shown in Fig. 5. For a simple barrel or pincushion distortion, we can identify a barrel distortion if the DPH or DSTV value is negative and a pincushion distortion if the value is positive. However, this criterion will fail for identifying a mustache distortion. The key point is that the identification of a distortion type should not depend on the sign of the distortion value but on the slope of the radial distortion curve. Typical radial distortion curves are shown in Fig. 5. Generally, these distortion curves start at zero, matching the assumption that the distortion close to the optical center is zero. For a barrel distortion [Fig. 5(a)], the curve slope is always negative; therefore, the distortion values are also negative. For a pincushion distortion [Fig. 5(b)], the curve slope is always positive, resulting in the distortion values being positive. For a mustache distortion, the curve has both positive and negative slope values at different regions. From Fig. 5(c), the curve has negative slope when Rd<11.7  mm and positive slope when Rd>11.7  mm. This means that the image has a barrel distortion for Rd<11.7  mm and a pincushion distortion for Rd>11.7  mm even though the distortion values are still negative for Rd>11.7  mm. A higher absolute slope value means more pronounced distortion at this radius.
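The slope criterion above is easy to automate. The sketch below, using hypothetical radii rather than measured endoscope data, computes a DRAD curve per Eq. (4) and labels each interval by the sign of its slope; a mustache distortion shows both labels even though every distortion value is negative.

```python
import numpy as np

def drad(rd, ru):
    """Radial distortion (%) per Eq. (4); rd and ru are paired nonzero radii."""
    rd, ru = np.asarray(rd, float), np.asarray(ru, float)
    return (rd - ru) / ru * 100.0

def classify_intervals(ru, d):
    """Label each interval of a D_RAD curve by the sign of its slope:
    negative -> barrel-like, positive -> pincushion-like."""
    slopes = np.diff(d) / np.diff(ru)
    return ['barrel' if s < 0 else 'pincushion' for s in slopes]

# Hypothetical mustache distortion: the curve first falls, then rises,
# even though all distortion values stay negative.
ru = np.array([2.0, 6.0, 10.0, 14.0])   # undistorted radii (mm)
rd = np.array([1.96, 5.4, 8.9, 13.3])   # distorted radii (mm)
d = drad(rd, ru)
print(classify_intervals(ru, d))  # ['barrel', 'barrel', 'pincushion']
```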

Fig. 5

Typical DRAD curves: (a) barrel distortion, (b) pincushion distortion, and (c) mustache distortion.


For a simple barrel or pincushion distortion, the absolute value of the radial distortion calculated from an image corner is usually larger than that of DPH/DTV or DSTV.15 This can be theoretically explained with the barrel distortion in Fig. 4 as an example. In Fig. 4(a), points A and B are, respectively, the middle point and the right corner of the upper edge of the undistorted image, with their distances to the image center in the vertical direction both being Ruy. In Fig. 4(b), points A′ and B′ are the images of A and B, with their distances to the image center in the vertical direction being RA′ and Rdy, respectively. RA′ is larger than Rdy for a barrel distortion and smaller for a pincushion distortion. DSTV at corner B′ can be calculated as DSTV = (Rdy − RA′)/RA′, and DRAD for corner B′ can be calculated as DRAD = (Rd − Ru)/Ru. For a barrel distortion, both A and B are distorted toward the image center in the vertical direction by the amounts (RA′ − Ruy) and (Rdy − Ruy), respectively, with the negative values signifying a barrel distortion. At the same time, B′ moved by (Rdy − RA′) in the vertical direction toward the image center relative to A′. Therefore, the equation DSTV = (Rdy − RA′)/RA′ evaluates only the position of B′ relative to A′. On the other hand, the equation DRAD = (Rd − Ru)/Ru can be split into a horizontal component (Rdx − Rux)/Rux and a vertical component (Rdy − Ruy)/Ruy. The vertical component is the movement of B′ and can also be expressed as [(Rdy − RA′) + (RA′ − Ruy)]/Ruy, which is the sum of the movement of B′ relative to A′ and the movement of A. Since the values of Ruy and RA′ are close, DSTV is roughly equal to the movement of B′ relative to A′ in the vertical component of DRAD for corner B′, without considering the movement of A in the vertical direction and the movement of B in the horizontal direction. That explains why the absolute DSTV value is usually smaller than the absolute DRAD value. A similar conclusion can be obtained for a pincushion distortion.

The main problem with the DRAD method is that the values of Ru are not available, since the undistorted image does not physically exist. The Ru values can be approximated from the data at the center of the distorted image, under the assumption that there is little or no distortion near the optical axis.16 Taking the image of a grid target as an example, if we know that the distance of one grid (the reference grid) at the center of the distorted image corresponds to P pixels on the image sensor, then the distance of G grids from the image center in the undistorted image should be P × G pixels, based on which we can calculate the undistorted image of the grid target and then the DRAD values. However, there is a problem with this approach: the size of the assumed undistorted area in the distorted image will affect the final results for a severe distortion, as seen in most endoscopic images. If the reference grid is too big, it might itself be distorted.

Figure 6(a) shows a barrel-distorted image. From the image center to the right edge in the horizontal direction, we obtained the distance (Rd) of each cross-point from the center. By assuming that there was no distortion between point 0 and point 1 (0–1), point 0 and point 2 (0–2), or point 0 and point 3 (0–3), respectively, we calculated the undistorted distance (Ru) of the cross-points under these different assumptions and obtained the three DRAD curves shown in Fig. 6(b). For a barrel distortion, the distortion values should monotonically decrease with radius, with the maximum value of zero at the center. However, the 0–2 and 0–3 curves in Fig. 6(b) show positive values at shorter radial distances, which indicates that the assumptions of no distortion from point 0 to point 2 or from point 0 to point 3 are less accurate than the no-distortion assumption from point 0 to point 1; a bigger assumed nondistortion area causes a bigger error. On the other hand, if the assumed nondistortion area is too small, the reading error for Rd at the center is magnified by the large number of grids at farther radial distances when calculating Ru.
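This sensitivity to the size of the assumed-undistorted area can be reproduced numerically. The sketch below uses the first eight cross-point radii later listed in Table 1 and derives the reference pixel pitch from the first one, two, or three grids; which points are grouped per curve is our illustrative choice, not the exact grouping used for Fig. 6(b).

```python
import numpy as np

# Cross-point radii (pixels) along a horizontal line of a barrel-distorted
# grid image (first eight entries of Table 1); index = grid count from center.
rd = np.array([0, 90, 177, 260, 332, 394, 449, 489], float)
grids = np.arange(len(rd), dtype=float)

def drad_curve(rd, grids, k):
    """D_RAD (%) assuming the first k grids are undistorted, i.e., taking
    rd[k] / k pixels per grid as the reference pitch for computing Ru."""
    pitch = rd[k] / k
    ru = grids * pitch
    d = np.full_like(rd, np.nan)
    d[1:] = (rd[1:] - ru[1:]) / ru[1:] * 100.0  # skip the center (Ru = 0)
    return d

# With k = 1 the curve decreases monotonically from zero, as a barrel
# distortion should; with k = 2 or 3 the first point turns positive,
# showing that the larger reference area was itself already distorted.
for k in (1, 2, 3):
    print(k, np.round(drad_curve(rd, grids, k)[1:4], 2))
```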

Fig. 6

Effect of the size of the assumed-undistorted area on the DRAD results: (a) a barrel distorted image, and (b) DRAD curves with the undistorted radius data calculated based on different sizes of the assumed-undistorted area near the image center (the distance from image center to left or right edge is normalized as 1).


3.

Development of the Local Magnification Method for Distortion Evaluation of Endoscopes

Distortion evaluation is always related to camera calibration techniques—a rather mature area for an optical imaging device.17,18 While these techniques are useful for image correction with the help of powerful computational capability, the two- or three-dimensional image data as well as the transform matrixes are complex, lack direct physical meaning, and are hard to understand for most users. Therefore, these calibration methods are not a good choice for a consensus evaluation method that could be potentially adopted by an international standard. The local magnification method we developed is mathematically and experimentally simple, and can better describe the distortion characteristic of an endoscope than the commonly used methods. The method can also provide valuable information to help a physician to interpret a distorted medical image.

3.1.

Experimental Measurements

We established a distortion evaluation method using an endoscopic system (Olympus EVIS EXERA II) that includes a high-intensity xenon light source (CLV-180), a gastrointestinal videoscope (GIF-H180), and a video system center (CV-180). This system has the common barrel distortion seen in most endoscopes. A series of grid targets with three different grid sizes (0.5 mm × 0.5 mm, 1.5 mm × 1.5 mm, and 3.0 mm × 3.0 mm) was designed and printed with a laser printer. The total size of each target was large enough to cover the whole FOV. During tests, the target should be planar and able to move along the optical axis. To hold the endoscope securely in place, a customized mold with adjustable height was used. The direction of the endoscope distal end was adjusted with a fiber-optic positioner (Newport, FPR2-C1A).

The setup was adjusted so that the endoscope optical axis was perpendicular to the test target and aligned with the test target center (Fig. 7). Criteria for acceptable adjustment are as follows: (1) the center of the target, the point on the target located at the FOV center, was located at the center of the captured image and (2) two pairs of centrally symmetric points (e.g., AA′ and BB′ in Fig. 7) on the target were also centrally symmetric in the image. It was assumed that the distortion center (i.e., where the optical axis passes through the image plane) overlaps with the image center. The assumption was evaluated in Sec. 4.1. For the first criterion, the target was positioned at a given distance to take its image. The image was then analyzed with software (e.g., MATLAB) to find the target center (xt, yt) in the image. The distance from the target center to the image center (xi, yi) was calculated as sqrt[(xt − xi)^2 + (yt − yi)^2], in units of pixels. The distance was controlled to within 1% of the picture height [∼10 pixels for an image of 1008 (H) × 1280 (W) pixels]. For the second criterion, the same method was used to ensure that the midpoint between each pair of points was within 1% of the picture height of the target center in the image. The two criteria were satisfied through an iterative trial-and-error process.
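A minimal sketch of the first alignment criterion, assuming the two centers have already been located in pixel coordinates (the coordinate values below are hypothetical):

```python
import math

def centered_within_tolerance(target_center, image_center, picture_height, tol=0.01):
    """Return True if the imaged target center lies within tol (1% by default)
    of the picture height from the image center; coordinates are in pixels."""
    (xt, yt), (xi, yi) = target_center, image_center
    dist = math.hypot(xt - xi, yt - yi)
    return dist <= tol * picture_height

# For a 1008 (H) x 1280 (W) image, the tolerance is ~10 pixels.
print(centered_within_tolerance((646, 502), (640, 504), 1008))  # True (~6.3 px)
print(centered_within_tolerance((660, 520), (640, 504), 1008))  # False (~25.6 px)
```

The second criterion can reuse the same function, testing the midpoint of each pair of symmetric points against the target center.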

Fig. 7

Distortion measurement setup: (a) test target and (b) test setup.


3.2.

Local Magnification Method to Quantify Distortion

To avoid the aforementioned problem of calculating Ru in Sec. 2.2, we proposed to evaluate radial distortions with a new approach—the local magnification (ML) method. For a small object (e.g., a small cross) placed at a local point on the test target, the ratio of the object length on the image sensor (or on a display device from the sensor) to its actual length on the test target is called ML. The term "local magnification" is borrowed from the IEC 1262-4 Standard, which addresses determination of image distortion in electro-optical x-ray image intensifiers.19 In the standard, discrete ML values are obtained by measuring size changes in small hash marks, and their accuracy can be affected by the sizes of the marks. In this paper, ML is expressed with an equation that can accurately and continuously describe distortion at any location in the FOV. ML can be separated into the local radial magnification (MLR) and the local tangential magnification (MLT). MLR is the local magnification of a small one-dimensional (1-D) object oriented radially toward the FOV center. MLT is the local magnification of a small 1-D object tangentially oriented to a radial direction.

Figure 8 shows the MLR and MLT of a cross-shaped object (ideally, the object should be infinitely small) with width lr and height lt at radius R on the test target. The image of the object is located at radius R′ in the target image, with width l′r and height l′t. The MLR and MLT at the cross-point can then be calculated as MLR = l′r/lr and MLT = l′t/lt.
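In code, the two local magnifications are simple length ratios. The arm lengths below are hypothetical values for a small cross near the edge of a barrel-distorted image, where radial compression is stronger than tangential compression:

```python
def local_magnifications(lr, lt, lr_image, lt_image):
    """M_LR = l'_r / l_r and M_LT = l'_t / l_t for a small cross-shaped
    object with radial arm lr and tangential arm lt (same length units)."""
    return lr_image / lr, lt_image / lt

# Hypothetical 3 mm arms imaged at 0.063 mm (radial) and 0.147 mm (tangential).
mlr, mlt = local_magnifications(3.0, 3.0, 0.063, 0.147)
print(round(mlr, 3), round(mlt, 3))  # 0.021 0.049
```

A value below one means the local image is compressed relative to the object, as in a barrel distortion; MLR < MLT indicates stronger compression along the radial direction.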

Fig. 8

MLR and MLT: (a) a small cross-shape object at radius R on the test target and (b) image of the object at radius R.


Assuming the distortion is radial, data from any straight line crossing the image center can be used to evaluate the distortion of the whole image if the target is well aligned with the device. We used the horizontal line from the image center to the right edge [Fig. 6(a)] as an example to explain the ML method. Since the distance from the image center to the right edge is not the maximum radius in the whole image, the final evaluation results mainly reflect the distortion characteristics within a circular area whose radius equals the distance from the image center to the right edge. Other straight lines (e.g., a vertical or diagonal line) crossing the image center can also be used to cover a bigger circular area or obtain more accurate results. The method is described in detail as follows.

After proper alignment, an image of the target at an established distance from the endoscope is taken [Fig. 6(a)]. The horizontal line from the image center to the right edge is then used to analyze the radial distortion. Following this, the coordinates of each cross-point on the line are read with image analysis software, and the distances (Rd) of the points from the image center are calculated in terms of pixels. The actual distance (Ru) of these cross-points from the center on the target can also be obtained. For simplicity, Ru is expressed as the number of grids from the center to each cross-point (Table 1), instead of as a measured physical distance. A matrix of Rd is then mapped to a matrix of Ru.

Table 1

Example of evaluating geometric distortion.

Rd (# of pixels): 0, 90, 177, 260, 332, 394, 449, 489, 530, 559, 586, 608, 626, 640
Ru (# of grids): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 12.2
Normalized Rd: 0, 0.14, 0.28, 0.41, 0.52, 0.62, 0.70, 0.76, 0.83, 0.87, 0.92, 0.95, 0.98, 1.00
Normalized Ru: 0, 0.08, 0.16, 0.25, 0.33, 0.41, 0.49, 0.58, 0.66, 0.74, 0.82, 0.90, 0.99, 1.00
MLR: 0.084, 0.084, 0.079, 0.072, 0.063, 0.054, 0.046, 0.037, 0.031, 0.026, 0.023, 0.021, 0.021, 0.021
Normalized MLR: 1.00, 1.00, 0.95, 0.85, 0.75, 0.65, 0.54, 0.44, 0.36, 0.31, 0.27, 0.26, 0.25, 0.26
MLT: 0.084, 0.083, 0.080, 0.077, 0.074, 0.070, 0.065, 0.061, 0.058, 0.054, 0.052, 0.049, 0.049
Normalized MLT: 1.00, 0.99, 0.96, 0.92, 0.88, 0.83, 0.78, 0.73, 0.69, 0.65, 0.62, 0.58, 0.58
DRAD: 0.01, 0.01, 0.04, 0.08, 0.12, 0.17, 0.22, 0.27, 0.31, 0.35, 0.38, 0.42, 0.42

The Ru value at the image edge is needed in order to evaluate the distortion at the edge. In most cases, however, the image near the edge is blurred due to severe distortion and vignetting effects. Additionally, the edge may not lie exactly on a cross-point, and the number of cross-points from the center to the edge would therefore not be an integer. Based on the available Rd and Ru data from the cross-points, a polynomial equation Ru = f(Rd) is calculated. The maximum pixel number from the image center to the image edge (i.e., half the picture width, 640 in our images) is then used as Rd to calculate Ru at the edge (bold numbers in Table 1). The Ru value of 12.2 is obtained in this example.
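The edge extrapolation can be sketched with NumPy's polynomial fit, using the cross-point data of Table 1 and the degree-5 choice of the example. The exact edge value depends on the normalization and fitting details, so this sketch may differ slightly from the 12.2 reported for the example image.

```python
import numpy as np

# Cross-point data from Table 1: pixel radii (Rd) and grid counts (Ru)
# measured from the image center along a horizontal line.
rd = np.array([0, 90, 177, 260, 332, 394, 449, 489,
               530, 559, 586, 608, 626], float)
ru = np.arange(13, dtype=float)  # 0 to 12 grids

# Fit Ru = f(Rd), with Rd normalized by the half picture width (640 pixels)
# to keep the fit well conditioned, then evaluate at the edge (Rd = 640).
coeffs = np.polyfit(rd / 640.0, ru, deg=5)
ru_edge = float(np.polyval(coeffs, 1.0))
print(round(ru_edge, 1))
```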

Both Rd and Ru are normalized, setting their maximum value to 1 (Table 1). From the curve of normalized Rd versus normalized Ru (Fig. 9), or vice versa, a polynomial fitting equation Rd = fd(Ru) or Ru = fu(Rd) is created to define the relation between Ru and Rd.

Fig. 9

Relations of normalized Rd versus normalized Ru based on Fig. 6(a): (a) Rd=fd(Ru) and (b) Ru=fu(Rd). The dashed lines are the fitting curves. By default, y represents the variable on the vertical axis and x represents the variable on the horizontal axis in the fitting equations.


The normalized Rd versus normalized Ru polynomial function of Rd=fd(Ru) can be easily converted to other scales. Assume the function has a polynomial form9,20,21 of

Eq. (5)

y = k_n x^n + k_{n−1} x^{n−1} + ⋯ + k_2 x^2 + k_1 x + k_0,
where n is the degree of the polynomial equation, x ∈ [0,1] represents normalized Ru, and y ∈ [0,1] represents normalized Rd. The constant term k_0 is assumed to be zero because the centers of the distorted and undistorted images are assumed to coincide. If x and y need to be scaled to X and Y, so that k_y Y = y ∈ [0,1] and k_x X = x ∈ [0,1], then

Eq. (6)

k_y Y = k_n (k_x X)^n + k_{n-1} (k_x X)^{n-1} + \cdots + k_2 (k_x X)^2 + k_1 (k_x X),
Y = \frac{1}{k_y} \left( k_n k_x^n X^n + k_{n-1} k_x^{n-1} X^{n-1} + \cdots + k_2 k_x^2 X^2 + k_1 k_x X \right).
Equations (6) and (5) are the same if kx=1 and ky=1.
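The conversion in Eq. (6) amounts to rescaling each coefficient, k_i → k_i k_x^i / k_y. This identity can be checked numerically; the sketch below uses arbitrary coefficients, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normalized polynomial y = k_n x^n + ... + k_1 x (k_0 = 0), as in Eq. (5).
k = rng.normal(size=5)            # k_5 ... k_1, highest degree first
poly_norm = np.append(k, 0.0)     # append the zero constant term

kx, ky = 0.0273, 0.5580           # scale factors from Sec. 3.2

# Eq. (6): rescale each coefficient k_i -> k_i * kx**i / ky.
degrees = np.arange(len(poly_norm) - 1, -1, -1)   # 5, 4, ..., 0
poly_scaled = poly_norm * kx**degrees / ky

# Check: Y(X) computed directly agrees with y(kx * X) / ky for arbitrary X.
X = rng.uniform(0.0, 1.0 / kx, size=100)
Y_direct = np.polyval(poly_scaled, X)
Y_via_norm = np.polyval(poly_norm, kx * X) / ky
assert np.allclose(Y_direct, Y_via_norm)
```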

3.2.1.

Local radial magnification method

If X and Y are the actual lengths of Ru and Rd, MLR is defined as follows from the derivative of Eq. (6):

M_{LR} = \frac{dY}{dX} = \frac{1}{k_y} \left[ n k_n k_x^n X^{n-1} + (n-1) k_{n-1} k_x^{n-1} X^{n-2} + \cdots + 2 k_2 k_x^2 X + k_1 k_x \right].
Substituting X=x/kx in the above equation, we get

Eq. (7)

M_{LR} = \frac{k_x}{k_y} \left[ n k_n x^{n-1} + (n-1) k_{n-1} x^{n-2} + \cdots + 2 k_2 x + k_1 \right].
Taking the data in Table 1 as an example, we can get the following fitting equation based on Eq. (5), with the normalized Rd as y and normalized Ru as x.

Eq. (8)

y = -0.6896 x^5 + 2.5386 x^4 - 2.8434 x^3 + 0.2744 x^2 + 1.7125 x.
Degree 5 is the lowest degree for which the R-squared value of the fitting equation exceeds 0.9999. The target grid size is 3 mm × 3 mm, so the maximum X is 36.6 mm (3 × 12.2) and kx is 0.0273 (1/36.6). Assuming the image sensor has a pixel size of 2.8 μm × 2.8 μm, the maximum Y is 1.792 mm (2.8 × 640/1000) and ky is 0.5580 (1/1.792). Equation (7) for these data then becomes

Eq. (9)

M_{LR} = -0.1688 x^4 + 0.4972 x^3 - 0.4177 x^2 + 0.0269 x + 0.0838,
from which the MLR at each cross-point can be calculated, as shown in Table 1. The MLR data can be normalized so that the maximum magnification, at the image center, is one. The MLR and normalized MLR versus normalized Rd curves are shown in Fig. 10; the two curves overlap completely when the y-axis scales are adjusted.
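As a cross-check, the Eq. (9) coefficients follow from the Eq. (8) coefficients by differentiating the normalized polynomial and applying the kx/ky factor of Eq. (7). A minimal sketch:

```python
import numpy as np

# Coefficients of Eq. (8), highest degree first, zero constant term.
p8 = np.array([-0.6896, 2.5386, -2.8434, 0.2744, 1.7125, 0.0])
kx, ky = 0.0273, 0.5580   # from the 3 mm grid and 2.8 um pixels (Sec. 3.2)

# Eq. (7): MLR(x) = (kx/ky) * dy/dx of the normalized polynomial;
# the resulting coefficients reproduce Eq. (9).
mlr_poly = (kx / ky) * np.polyder(p8)

x = np.linspace(0.0, 1.0, 14)        # normalized Ru positions
mlr = np.polyval(mlr_poly, x)
print(f"MLR at center: {mlr[0]:.4f}, at edge: {mlr[-1]:.4f}")
```

At the center this gives MLR ≈ 0.084 and at the edge ≈ 0.021, matching Table 1.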

Fig. 10

MLR and normalized MLR versus normalized Rd.

JBO_21_5_056003_f010.png

3.2.2.

Local tangential magnification

MLT as defined in Fig. 8 is equal to the ratio of two circumferences, with the distorted radius R′ and the undistorted radius R: MLT = (2πR′)/(2πR) = R′/R. The equation for MLT can be derived from Eq. (6). If X and Y are the actual lengths of Ru and Rd, we define MLT as

M_{LT} = \frac{Y}{X} = \frac{1}{k_y} \left( k_n k_x^n X^{n-1} + k_{n-1} k_x^{n-1} X^{n-2} + \cdots + k_2 k_x^2 X + k_1 k_x \right).
Substituting X=x/kx in the above equation, we get

Eq. (10)

M_{LT} = \frac{k_x}{k_y} \left( k_n x^{n-1} + k_{n-1} x^{n-2} + \cdots + k_2 x + k_1 \right).
There is no MLT value at x=0, because the ratio of two zero-length circumferences is undefined. Again taking the data in Table 1 as an example, with kx=0.0273 and ky=0.5580, we obtain the MLT equation from Eqs. (8) and (10) as follows:

Eq. (11)

M_{LT} = -0.0337 x^4 + 0.1242 x^3 - 0.1391 x^2 + 0.0134 x + 0.0838,
from which the MLT and normalized MLT data can be calculated as shown in Table 1.

Based on Table 1, we can compare MLR and MLT, as shown in Fig. 11. While we use the normalized Rd as the x axis, the normalized Ru could be used instead as required. From Fig. 11, MLR and MLT are essentially the same near the center, where normalized Rd is less than 0.2. However, MLR decreases faster than MLT with increasing Rd, which indicates that when normalized Rd is greater than 0.2, the image of an object is compressed more in the radial direction than in the tangential direction. This property is important for a physician interpreting an endoscopic image.
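The curves in Fig. 11 can be recomputed from Eqs. (7) and (10). Because k_0 = 0, the bracketed sum in Eq. (10) equals y(x)/x, which the sketch below uses directly:

```python
import numpy as np

# Coefficients of Eq. (8) and the scale factors from Sec. 3.2.
p8 = np.array([-0.6896, 2.5386, -2.8434, 0.2744, 1.7125, 0.0])
kx, ky = 0.0273, 0.5580

x = np.linspace(0.08, 1.0, 50)                    # skip x = 0 (MLT undefined)
mlr = (kx / ky) * np.polyval(np.polyder(p8), x)   # Eq. (7)
mlt = (kx / ky) * np.polyval(p8, x) / x           # Eq. (10): y(x)/x since k0 = 0

# Radial compression outpaces tangential compression away from the center.
print(f"edge: MLR = {mlr[-1]:.3f}, MLT = {mlt[-1]:.3f}")
```

Over this range MLR stays below MLT, reproducing the gap between the two curves in Fig. 11.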

Fig. 11

MLR and MLT versus normalized Rd.

JBO_21_5_056003_f011.png

3.3.

Deriving DRAD and DPH (or DSTV) from MLR

The local magnification method could also help improve the traditional way of calculating DRAD and DPH. Assume that X and Y are the actual sizes of the undistorted and distorted images, respectively. (Please note that X represents the actual size of the target in Sec. 3.2, but the equations are exactly the same.) The radial distortion equation can be obtained from Eq. (6) as

D_{RAD} = \frac{Y - X}{X} = \frac{1}{k_y} \left( k_n k_x^n X^{n-1} + k_{n-1} k_x^{n-1} X^{n-2} + \cdots + k_2 k_x^2 X + k_1 k_x \right) - 1.
As is apparent from the above equation, there is no DRAD value at x=0. Substituting X=x/kx into the above equation, we get

Eq. (12)

D_{RAD} = \frac{k_x}{k_y} \left( k_n x^{n-1} + k_{n-1} x^{n-2} + \cdots + k_2 x + k_1 \right) - 1.

Taking the data in Table 1 as an example, we take the normalized Rd as Y (so ky=1) and the undistorted image to have a maximum radius of 1.71, giving kx = 0.585 (1/1.71). (Assuming the center grid of the distorted image is undistorted, the undistorted image has a radius of 0.14 × 12.2 = 1.71 in normalized-Rd units.) The DRAD values can then be calculated (Table 1) from Eqs. (8) and (12). While an image can be magnified to different sizes by changing the values of kx and ky, the ratio of kx to ky, and therefore DRAD, remains constant.
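A sketch of the DRAD calculation from Eqs. (8) and (12). Note that for this barrel distortion, Eq. (12) yields negative values away from the center (the image is compressed); their magnitudes correspond to the tabulated DRAD, reaching 0.42 at the edge:

```python
import numpy as np

# Coefficients of Eq. (8); normalized Rd as Y, undistorted radius 1.71.
p8 = np.array([-0.6896, 2.5386, -2.8434, 0.2744, 1.7125, 0.0])
kx, ky = 1.0 / 1.71, 1.0

# Eq. (12), using y(x)/x for the bracketed sum since k0 = 0.
x = np.array([0.08, 0.25, 0.5, 1.0])    # normalized Ru (x = 0 undefined)
drad = (kx / ky) * np.polyval(p8, x) / x - 1.0
print(np.round(drad, 3))
```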

We can also calculate the traditionally used DPH or DSTV. Take the barrel distortion in Fig. 4(b) as an example. Assume that the distorted image is 2Wd wide and 2RA high, and that we have obtained the two functions Ru=fu(Rd) and Rd=fd(Ru) with Rd and Ru being normalized values. Then the DPH or DSTV of a barrel distortion can be calculated with the procedure shown in Fig. 12; a similar method can be used for pincushion distortion. The advantage of calculating DPH or DSTV this way is that it does not require lines that are tangent to the image boundaries [for barrel distortion, Fig. 13(a)] or whose end points coincide with the image corners (for pincushion distortion). For example, while Fig. 13(a) is an ideal image for analyzing DSTV in the traditional way, Fig. 13(b) is not, because no lines are tangent to the edges. The method in Fig. 12, however, works for both images. Most importantly, DPH or DSTV can be calculated directly once the polynomial equations describing the relationship between Rd and Ru are known.

Fig. 12

Procedure used to calculate DPH or DSTV of a barrel distortion.

JBO_21_5_056003_f012.png

Fig. 13

Distorted images of a grid target: (a) four lines tangent to the image edges and (b) no line tangent to the edges.

JBO_21_5_056003_f013.png

4.

Discussion

4.1.

Assumptions of Circular Symmetry and Overlap of the Image Center with the Distortion Center

As mentioned before, we assumed that the endoscope is circularly symmetric, and therefore, the tangential component of the distortion can be ignored. We also assume that the distortion center overlaps with the image center. These assumptions can be verified. In Sec. 3, we used the data from the image center to right edge of Fig. 6(a) to demonstrate the distortion evaluation methods and their results (Table 1). Similar results can also be obtained based on data on any other radius. If the endoscope is circularly symmetric and the distortion center overlaps with the image center, the distortion results based on data from different radii should be close.

Four sets of data on four radii, i.e., the image center to the right, left, top, and bottom edges, were obtained from Fig. 6(a) to derive the normalized Rd versus normalized Ru curves (Fig. 14). The data were normalized with the longest radius as one. From Fig. 14, the four curves overlap, meaning the assumption of a circularly symmetric optical system without tangential distortion is correct and the distortion center overlaps with the image center.

Fig. 14

Normalized Rd versus normalized Ru based on data from radial lines at four different directions on the same distorted image.

JBO_21_5_056003_f014.png

4.2.

Accuracy of the Local Magnification Method

To evaluate the accuracy of the obtained MLR results, we applied them in a MATLAB routine to correct a distorted image from the same endoscope. The distorted and corrected images are shown in Fig. 15. Since the points selected to establish the distortion function did not cover the four corners of the image, the equation only covers a circular area [Fig. 6(a)] whose radius is the distance from the image center to the furthest point. Therefore, only the part of the image within the circle was corrected, and the four corners outside the circle were discarded. Overall, the correction removed the vast majority of the distortion originally present. Some errors remained near the image boundary, mainly because coordinate readings far from the center are less accurate than those near it: the magnification is smaller, the resolution lower, and the illumination dimmer. The accuracy can be improved by adjusting the illuminating light.
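The correction step can also be sketched outside MATLAB. The following Python function (illustrative, not the authors' routine; the name and the nearest-neighbor sampling are our choices) radially remaps an image using a fitted normalized polynomial rd = fd(ru), such as Eq. (8):

```python
import numpy as np

def undistort(img, fd_coeffs, r_max):
    """Correct a radially distorted image.

    fd_coeffs: coefficients of normalized rd = fd(normalized ru), highest
    degree first [e.g., Eq. (8)]; r_max: radius in pixels covered by the
    fit. Each output (undistorted) pixel samples the input at the distorted
    radius along the same direction; pixels outside the fitted circle are
    left black, as in the corrected image of Fig. 15.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    ru = np.hypot(yy - cy, xx - cx) / r_max         # normalized undistorted radius
    rd = np.polyval(fd_coeffs, np.clip(ru, 0, 1))   # normalized distorted radius
    scale = np.divide(rd, ru, out=np.ones_like(rd), where=ru > 0)
    src_y = np.rint(cy + (yy - cy) * scale).astype(int)
    src_x = np.rint(cx + (xx - cx) * scale).astype(int)
    valid = (ru <= 1) & (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    out = np.zeros_like(img)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out
```

For example, passing the Eq. (8) coefficients and the 640-pixel half-width as `r_max` corrects the circular region covered by the fit; a higher-order interpolator would reduce the resampling artifacts of nearest-neighbor sampling.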

Fig. 15

Original and corrected images: (a) Original distorted image taken with the endoscope and (b) the corrected image according to MLR.

JBO_21_5_056003_f015.png

4.3.

Number of Data Needed for Distortion Evaluation and the Formats of Polynomial Equations

The effect of the number of grids imaged from the image center to the edge (i.e., the number of points used to derive the polynomial equation of normalized Rd versus normalized Ru) on distortion evaluation was studied. A grid target with a grid size of 3 mm × 3 mm was placed 6.4 cm from the endoscope distal end, perpendicular to the optical axis of the endoscope. The distorted image of the target was used to analyze the effect of the number of data points on the distortion curves/equations. From the image center to the right edge, 34 radial distances were obtained from 34 cross-points. From these, subsets of 18, 12, 8, 6, and 5 roughly evenly distributed points were selected to obtain the normalized Rd versus normalized Ru curves shown in Fig. 16. All the curves overlap with the curve based on all 34 data points. This means that, provided the cross-points on the target image can be read clearly and the equation format is correct, the minimum number of data points for distortion evaluation can be as small as the number of unknown constant parameters in the distortion equation [e.g., the five parameters k1, k2, …, k5 in Eq. (5) with k0 set to zero].
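This observation can be illustrated with the Table 1 data standing in for the 34-point set (a sketch; the subset indices are an arbitrary, roughly even selection):

```python
import numpy as np

# Normalized (Ru, Rd) pairs from Table 1.
ru = np.array([0, 0.08, 0.16, 0.25, 0.33, 0.41, 0.49,
               0.58, 0.66, 0.74, 0.82, 0.90, 0.99, 1.00])
rd = np.array([0, 0.14, 0.28, 0.41, 0.52, 0.62, 0.70,
               0.76, 0.83, 0.87, 0.92, 0.95, 0.98, 1.00])

# Degree-5 fit using all points, and using a sparse, roughly even subset
# with barely more points than the number of unknown coefficients.
full_fit = np.polyfit(ru, rd, deg=5)
idx = [0, 2, 4, 6, 8, 10, 13]
sub_fit = np.polyfit(ru[idx], rd[idx], deg=5)

# The two fitted curves should nearly coincide over the whole field.
grid = np.linspace(0, 1, 200)
gap = np.max(np.abs(np.polyval(full_fit, grid) - np.polyval(sub_fit, grid)))
print(f"max difference between full and subset fits: {gap:.4f}")
```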

Fig. 16

Normalized Rd versus normalized Ru based on different number of cross-point data on the image (the legend shows the number of cross-points used to get the fitting lines).

JBO_21_5_056003_f016.png

For all the normalized and non-normalized fittings, a polynomial fitting equation of degree 5 is accurate enough for most severe barrel or pincushion distortions, with the R-squared value >0.9999. However, the actual degree of a fitting equation can be flexible based on the required R-squared value. For example, the degree can be 2, 3, and 4 if the required R-squared values are 0.9898, 0.9987, and 0.9998, respectively, for the endoscope we evaluated. The degree can be >5 for more complex distortion. The equations can have all the terms from degree 0 to the degree of the equation or only some of the terms (e.g., the constant term is not necessarily zero or the equation can only have terms with degrees of odd numbers22).

4.4.

Projection Methods of Endoscopes

Distortion is the consequence of the projection method used in an optical design. Theoretically, the distortion pattern can be derived based on the known projection method, which is usually unknown to consumers. On the other hand, if the distortion pattern is known, the projection method can be inversely derived.

Most consumer cameras have rectilinear lenses based on the perspective projection (also called gnomonic projection) that renders a straight line in the object space as a straight line in the image. The perspective projection obeys the mapping function of r=f·tan(θ), where θ is the angle between the optical axis and the line from an object point to the entrance pupil center, r is the distance from the image of the object point to the image center, and f is the focal length of the optical system.15 For a 2-D object that is perpendicular to the optical axis of the camera, perspective projection can produce an image that faithfully reflects the geometry of the object. However, it is difficult to make a rectilinear lens with more than 100 deg of FOV. Therefore, some other projection methods (Fig. 17), such as stereographic [r=2f·tan(θ/2)], equidistant (r=f·θ), equisolid angle [r=2f·sin(θ/2)], and orthographic/orthogonal [r=f·sin(θ)] projections, are used to design lenses with a wide FOV, such as fisheye lenses and lenses in endoscopes.23

Fig. 17

Some projection methods for lenses with a wide FOV (assuming f is 1): (a) r versus θ, and (b) normalized r versus θ.

JBO_21_5_056003_f017.png

The projection method of an endoscope can be derived from its distortion evaluation results. The 34 sets of data in Sec. 4.3 were used as an example. These data were obtained from 33 cross-points plus the image center in the distorted image of a grid target with 3 mm × 3 mm grid size. The distance (d) from each cross-point to the target center can be calculated by multiplying the 3 mm grid size by the grid number. The distance (l) from the target to the distal end of the endoscope is 6.39 cm. The angle θ was then calculated for each cross-point as θ = arctan(d/l). Strictly speaking, l should be the distance from the target to the endoscope entrance pupil, which is not necessarily at the distal end. However, if the distance from the distal end to the entrance pupil is much smaller than the distance from the target to the distal end, the distal end location can be used to approximate the entrance pupil location. The entrance-pupil-to-distal-end distance should be considered when the target-to-distal-end distance is short, or for special designs where the distal end is not a lens, such as a capsule endoscope. We thus obtained 34 θ values, including 0 deg at the target center, with a maximum angle of 0.9976 rad (57 deg) at the 33rd cross-point from the target center. We also had 34 normalized Rd values [the same as normalized r in Fig. 17(b)] corresponding to these angles; we used normalized r because the image sensor size needed to calculate the actual r values was unknown. This gave a curve of normalized Rd versus θ for the endoscope.

To determine the projection method of the endoscope, we normalized the r values in Fig. 17(a) for θ from 0 to 0.9976 rad and compared these curves with the curve from the endoscope data, as shown in Fig. 17(b). From Fig. 17(b), the endoscope design adopted the orthographic/orthogonal projection, since the measured curve almost overlaps the curve of r = sin(θ). For a given image sensor size, this projection also achieves a larger FOV than the other curves in Fig. 17.
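The model-matching step can be sketched as follows. The `measured` array is synthesized from sin(θ) purely to make the example self-contained; in practice it would hold the measured normalized Rd values:

```python
import numpy as np

# Candidate wide-FOV mapping functions r(theta), with f = 1.
projections = {
    "perspective":   lambda t: np.tan(t),
    "stereographic": lambda t: 2 * np.tan(t / 2),
    "equidistant":   lambda t: t,
    "equisolid":     lambda t: 2 * np.sin(t / 2),
    "orthographic":  lambda t: np.sin(t),
}

# Angles as in Sec. 4.3: 3 mm grids, target at l = 63.9 mm, 33 cross-points
# plus the center; theta = arctan(d / l).
theta = np.arctan(3.0 * np.arange(34) / 63.9)

# Hypothetical measured normalized radii, synthesized here from sin(theta);
# real data would come from the normalized Rd readings.
measured = np.sin(theta) / np.sin(theta[-1])

# Pick the model whose normalized curve is closest in the least-squares sense.
errors = {name: np.sum((fn(theta) / fn(theta[-1]) - measured) ** 2)
          for name, fn in projections.items()}
best = min(errors, key=errors.get)
print(f"closest projection model: {best}")
```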

We can also obtain other optical parameters of the endoscope while analyzing its projection method. The endoscope FOV in the horizontal direction is twice the maximum angle, i.e., 1.98 rad (114 deg). If the size of the image sensor is known, the actual values of r can be calculated, and in turn the focal length from r = f·sin(θ).

5.

Conclusions

In this paper, we reviewed specific test methods for radial distortion evaluation and developed an objective and quantitative test method—the local magnification method—based on well-defined experimental and data processing steps to evaluate the radial distortion in the whole FOV of an endoscopic imaging system. To the best of our knowledge, this is the first time the local magnification method has been introduced to evaluate endoscope distortion. Our results showed that this method can describe the radial distortion of a traditional endoscope with high precision. Additionally, the image correction results showed that the local magnification method was accurate in correcting distorted images.

The local magnification method overcomes the error of estimating an ideal image used in the traditional distortion evaluation method and also has advantages over other distortion evaluation methods. The commonly used distortion evaluation methods such as the picture height distortion and the radial distortion methods are integral methods, because they evaluate distortion according to the distance between two points separated by a relatively large distance. The local magnification method, on the other hand, is a differential method showing distortion results at any given local point. The local magnification has a clear physical meaning. For an infinitely small object placed at a local point in the object space, the ratio of its length in the image (on a sensor or in any display format) to its actual length is its local magnification. Therefore, the size information at each local point can be easily interpreted without having to consider the information at other points. This feature can directly help a physician to estimate the size of a lesion during diagnosis. The local magnification method is inclusive, in the sense that it can be used to derive other distortion parameters: based on the local magnification data, the picture height distortion and radial distortion data can be derived.

A well-designed setup and procedure are essential for accurate distortion measurement. The key points are (1) the test target should be planar; (2) the optical axis of the endoscope should be perpendicular to the test target and aligned with the target center; and (3) the measuring distance should be properly chosen within the depth of field to obtain sufficient data for deriving fitting equations, while avoiding the large reading error caused by high magnification at a close distance and edge-blurred grids at a large distance. The endoscope’s own light source was used in our study; external light sources are recommended for more uniform and intense illumination, which reduces the reading error from distorted images. Also, the endoscope used in this study has a prime (fixed focal length) lens. For medical devices with a zoom lens, the distortion should be determined as a function of the focal length.

Our results showed that a polynomial equation of degree 5 could well describe the radial distortion curve of a traditional endoscope with severe barrel distortion. The image correction results of distorted images showed that our local magnification method was accurate for distortion evaluation. The method could be applied to evaluate medical devices with different distortion patterns (barrel, pincushion, mustache, and so on). While the equation format for other distortion patterns might be different, the derivation method would be the same.

In sum, the local magnification method is a quantitative and objective distortion evaluation method for endoscopes. It has significant benefits over the existing standards, in terms of being mathematically easy to understand and experimentally simple. It also has clear physical meaning that could potentially help a physician to interpret the size of a lesion from a distorted image. Therefore, it is a good choice for an international endoscope standard that has the potential to facilitate the product development and regulatory assessment processes in a least burdensome approach by reducing the burden on both endoscope manufacturers and the regulatory agency. As a result, high-quality endoscopic systems can be swiftly brought to market. The method can also be used to facilitate the rapid identification and understanding of the cause for poorly performing endoscopes, and benefit quality control during manufacturing as well as quality assurance during clinical use. While this study was based on endoscope imaging, the developed methods can be extended to any circularly symmetric imaging device. Software based on this paper will soon be developed and will be available to the public upon request.

References

1. E. Hecht, “More on geometrical optics,” in Optics, p. 253, Pearson Education, San Francisco (2002).

2. J. Y. Weng et al., “Camera calibration with distortion models and accuracy evaluation,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 965–980 (1992). http://dx.doi.org/10.1109/34.159901

3. S. S. Beauchemin and R. Bajcsy, “Modelling and removing radial and tangential distortions in spherical lenses,” in Multi-Image Analysis, pp. 1–21, Springer, Berlin, Heidelberg (2001).

4. E. Kobayashi et al., “A wide-angle view endoscope system using wedge prisms,” in Third Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, pp. 661–668 (2000).

5. M. Liedlgruber et al., “Statistical analysis of the impact of distortion (correction) on an automated classification of celiac disease,” in Int. Conf. on Digital Signal Processing, pp. 1–6 (2011).

6. A. Sonnenberg et al., “How reliable is determination of ulcer size by endoscopy?,” Br. Med. J. 2(6201), 1322–1324 (1979). http://dx.doi.org/10.1136/bmj.2.6201.1322

7. S. H. Park et al., “Polyp measurement reliability, accuracy, and discrepancy: optical colonoscopy versus CT colonography with pig colonic specimens,” Radiology 244(1), 157–164 (2007). http://dx.doi.org/10.1148/radiol.2441060794

8. ISO 9039: Optics and photonics—Quality evaluation of optical systems—Determination of distortion, International Organization for Standardization, Geneva, Switzerland (2008).

9. C. Zhang et al., “Nonlinear distortion correction in endoscopic video images,” in Proc. of 2000 Int. Conf. on Image Processing, pp. 439–442 (2000).

10. Measurement and Analysis of the Performance of Film and Television Camera Lenses, European Broadcasting Union, Geneva, Switzerland (1995).

13. DxO Labs, “DxOMark measurements for lenses and camera sensors,” http://www.dxomark.com/About/In-depth-measurements/Measurements/Distortion (2016), accessed April 2016.

14. Image Engineering, “What is lens geometric distortion?,” http://www.image-engineering.de/library/technotes/752-what-is-lens-geometric-distortion (2011), accessed July 2016.

15. B. Hönlinger and H. H. Nasse, “Distortion,” Zeiss, https://www.alpa.ch/_files/Zeiss%20About%20Lens%20Distortion%20cln33.pdf (2009), accessed April 2016.

16. E. P. Efstathopoulos et al., “A protocol-based evaluation of medical image digitizers,” Br. J. Radiol. 74(885), 841–846 (2001). http://dx.doi.org/10.1259/bjr.74.885.740841

17. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987). http://dx.doi.org/10.1109/JRA.1987.1087109

18. Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proc. of the Seventh IEEE Int. Conf. on Computer Vision, pp. 666–673 (1999). http://dx.doi.org/10.1109/ICCV.1999.791289

19. IEC 1262-4: Medical Electrical Equipment—Characteristics of Electro-Optical X-Ray Image Intensifiers—Part 4: Determination of the Image Distortion, International Electrotechnical Commission, Geneva, Switzerland (1994).

20. J. P. Barreto et al., “Non parametric distortion correction in endoscopic medical images,” in 3DTV Conf., pp. 1–4 (2007).

21. H. Haneishi et al., “A new method for distortion correction of electronic endoscope images,” IEEE Trans. Med. Imaging 14(3), 548–555 (1995). http://dx.doi.org/10.1109/42.414620

22. D. C. Brown, “Close-range camera calibration,” Photogramm. Eng. 37, 855–866 (1971).

23. J. Kannala and S. S. Brandt, “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses,” IEEE Trans. Pattern Anal. Mach. Intell. 28(8), 1335–1340 (2006). http://dx.doi.org/10.1109/TPAMI.2006.153

Biography

Quanzeng Wang received his PhD in chemical and biomolecular engineering from the University of Maryland at College Park in 2009. He is a scientist and engineer in the Center for Devices and Radiological Health of U.S. Food and Drug Administration. His research interests include optical spectroscopy and imaging, tissue optics, fiber optics, optical diagnostics, computational biophotonics, image quality, and thermography.

Wei-Chung Cheng received his PhD in electrical engineering from the University of Southern California in 2003. He was an assistant professor in the Department of Photonics, National Chiao-Tung University, Taiwan, before joining the U.S. Food and Drug Administration. He is a color scientist in the Center for Devices and Radiological Health. His current research interests include color science, applied vision, and medical imaging systems.

Nitin Suresh is an MS student in the Department of Electrical and Computer Engineering at the University of Maryland, College Park. His research interests include signal and image processing, pattern recognition, and machine learning.

Hong Hua is a professor with the College of Optical Sciences (OSC), University of Arizona. She has over 20 years of experience in researching and developing advanced display and imaging technologies. As the principal investigator of the 3-D Visualization and Imaging Systems Laboratory (3DVIS Lab), her current research interests include various head-worn displays and 3-D displays, endoscopy, microscopy, optical engineering, biomedical imaging, and virtual and augmented reality technologies. She is a fellow of SPIE.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Quanzeng Wang, Wei-Chung Cheng, Nitin Suresh, and Hong Hua "Development of the local magnification method for quantitative evaluation of endoscope geometric distortion," Journal of Biomedical Optics 21(5), 056003 (9 May 2016). https://doi.org/10.1117/1.JBO.21.5.056003
Published: 9 May 2016