Development of the local magnification method for quantitative evaluation of endoscope geometric distortion

Abstract. Endoscopic technologies are advancing with improved diagnostic capabilities and increasingly complex optical designs. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature-identification-related diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscope industry and medical device regulatory agencies. However, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data based on complex mathematical models, and they are therefore difficult to understand. Commonly used distortion evaluation methods, such as picture height distortion (DPH) or radial distortion (DRAD), are either too simple to describe the distortion accurately or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion. Based on this method, we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has a clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.


Introduction
Most endoscopes have a short focal length and a wide field of view (FOV) in order to observe a broad area with minimal movement or bending of the endoscope, at the expense of severe geometric distortion. The distorted images can adversely affect the accuracy of size and shape estimations. Therefore, a global standard providing a quantitative and simple method to evaluate endoscope distortion is essential. However, such a standard, characterizing all possible types of endoscope distortion along with instructions on how to evaluate and present the distortion results, has yet to be developed, which makes it difficult to accurately evaluate the quality of new and existing endoscopic technologies. This in turn delays the availability of technologically superior endoscopes in the market. Such a domino effect can be avoided by developing a consistent and accurate standardized method to characterize endoscope distortion.

Endoscope Distortion
Optical aberration includes two main types: chromatic aberrations and monochromatic aberrations. The former arises from the fact that the refractive index is actually a function of wavelength. The latter occurs even with quasimonochromatic light and falls into two subgroupings: monochromatic aberrations that deteriorate the image, making it unclear (e.g., spherical aberration, coma, and astigmatism), and monochromatic aberrations that deform the image (e.g., Petzval field curvature and distortion). 1 In this paper, we focus on the monochromatic aberrations that deform the image. We call such aberrations geometric distortions. A geometric distortion is a deviation from the rectilinear projection, a projection in which straight lines in a scene remain straight in their image. While similar distortions can also be seen in display (display distortion, especially in cathode ray tube display), we mainly focus on the geometric distortions caused by geometric optics. Among different types of geometric distortions, radial distortions are the most commonly encountered and most severe. They cause an inward (barrel distortions) or outward (pincushion distortions) displacement of a given image point along the radial direction from its undistorted location (Fig. 4). A radial distortion can also be a combination of both barrel and pincushion distortions, which is called a mustache (or wave) distortion. In an image with a radial distortion, a straight line that runs through the image center (usually also being the center of distortion) remains straight. Since most radial distortions are circularly symmetric (i.e., rotationally symmetric with respect to any angle), or approximately so, arising from the circular symmetry of the optical imaging systems, a circle that is concentric with the image center remains a circle in its image, although its radius may be affected. 
Some complex distortions include both radial and tangential components, i.e., a given image point displaces along both radial (radial distortion) and tangential (tangential distortion) directions (Fig. 1). Such distortions, called radial-tangential distortions in this paper, include decentering distortions and thin prism distortions. 2,3 Unless otherwise specified, distortions hereafter mentioned in this paper mean radial geometric distortions-the focus of this paper.
Endoscopes usually have severe barrel distortions. An endoscope needs a short focal length and a wide FOV in order to observe a broad area with minimal movement or bending of the endoscope, which is essential for steady and smooth manipulation given the restricted space, the limited degrees of freedom of movement, and the limitation in hand-eye coordination during surgical cases. 4 Consequently, lenses used in endoscopes usually have a short focal length (only a few millimeters) and a wide FOV (ranging from 100 to 170 deg), which inevitably causes severe distortions. 5 Typically, endoscopes exhibit barrel distortions. Occasionally, an endoscope exhibits a mustache distortion that varies between barrel and pincushion across the image, mostly because a mathematical algorithm is used to correct distortion at the maximum image height or in other parts of the image. Since endoscope distortions can negatively affect size estimation and feature-identification-related diagnosis, 5-7 quantitative evaluation of endoscope distortions and proper understanding of the evaluation results are essential.

Need for an Endoscope Distortion Evaluation Method
The millions of endoscopic procedures conducted monthly in the United States for a wide range of indications are driving the advancement of endoscopic imaging technology. With new diagnostic capabilities and more complex optical designs, technological advances in endoscopes promise significant improvements in both safety and effectiveness. Endoscope optical performance (OP) can be evaluated by OP characteristics (OPCs), including resolution, distortion, FOV, direction of view, depth of field, optimal working distance, image noise, detection uniformity, veiling glare, and so on. Current consensus standards provide limited information on validated and quantitative test methods for assessing endoscope OP. There is no standardized method to evaluate endoscope distortions. An international standard specifies methods of determining distortion in optical systems. 8 The methods require the usage of complex devices, such as an autocollimator, or an instrument to measure the object and image pupil field angles and height. While the standard provides complex equations, it does not clarify how the distortion results should be presented and evaluated. Also, the picture height distortion value mentioned in the standard is insufficient for the evaluation of severe barrel or pincushion distortions, and fails for the evaluation of mustache distortions. The definitions of angular magnification and lateral magnification in this standard are only based on a small area near the optical axis of the test specimen, which cannot be extended to endoscopes whose magnification changes significantly within the FOV. The endoscopes working group (WG) of the International Organization for Standardization (ISO), ISO/TC172/SC5/WG6, develops and oversees endoscope standards (the ISO 8600 serial standards) that cover the endoscope OPCs of FOV, direction of view, and optical resolution. However, an endoscope distortion standard has not yet been developed by this WG.
While endoscopic technology is developing fast, the regulatory science for endoscope OP evaluation has been unable to keep pace. Every year, the U.S. Food and Drug Administration receives a large number of endoscope submissions for premarket notification or premarket approval. However, the evaluation of new video endoscopic equipment is difficult because of the lack of objective OP standards. The industry lacks consensus standards on objective test methods to evaluate distortion. The resulting patchwork of tests conducted by different device manufacturers leads to delays in bringing important endoscopic technology to market and may allow the clearance of a less optically robust system that negatively impacts patient care.
In this paper, we establish a quantitative, objective, and simple distortion evaluation method for endoscopes, with the goal of applying the method in an international endoscope standard. We reviewed common methods described in prior journal articles for distortion evaluation of an optical imaging system and analyzed the relationships between these methods. Based on this review, a quantitative test method for assessing endoscope radial distortion was developed and validated based on the local magnification idea. The method will help facilitate performance characterization and device intercomparison for a wide variety of standards and endoscopic imaging products. It has the potential to streamline the product development and regulatory assessment processes in a least burdensome approach by reducing the workload on both endoscope manufacturers and the regulatory agency. As a result, novel, high-quality endoscopic systems can be brought to market more swiftly. The method can also be used to facilitate the rapid identification and understanding of the causes of poor endoscope performance, and it benefits quality control during manufacturing as well as quality assurance during clinical practice.

Review of Common Methods for Distortion Evaluation
In this section, common methods for distortion evaluation are reviewed. The distortion pattern on an image sensor might not be the same as shown on a display device because of the effects of hardware, such as a cathode ray tube (CRT), or software, such as an image processing algorithm. For simplicity, this paper focuses on distortions of digital images from an image sensor, whether or not they have been processed. However, the methods can also be extended to evaluate display distortions. Theoretically, a geometric distortion might include both radial and tangential components, i.e., a given image point displaces along both radial (radial distortion) and tangential (tangential distortion) directions (Fig. 1). Such distortions, called radial-tangential distortions in this paper, include decentering distortions and thin prism distortions. 2,3 A radial-tangential distortion can be evaluated by comparing the positions of two-dimensional (2-D) points on the distorted image with their positions on an ideally nondistorted image. It can be described with a 2-D matrix showing the relative position change of each point as a function of x-y coordinates. In an optical imaging system, the tangential component of a geometric distortion arises mainly from imperfect circular symmetry. However, an optical imaging system manufactured in accordance with the present state of the art has negligible tangential distortion. 8,9 Therefore, this paper focuses only on radial distortions.

Picture Height Distortion and Related Methods
There are several methods for distortion evaluation. The picture height distortion method (D PH, where D means distortion) is defined by the European Broadcasting Union (EBU) 10 and recommended by the ISO 9039 International Standard. 8 It quantifies the bending of the image of a horizontal straight line that is tangent to the circumscribed (for barrel distortion) or inscribed (for pincushion distortion) rectangle of the distorted image (Fig. 2). As shown in Fig. 2, it is calculated as

D PH (%) = ΔH/H × 100, (1)

where ΔH is the bending of the line image and H is the picture height. While D PH was initially defined for the vertical direction, it is applicable to the horizontal distortion as well. D PH is also called the television (TV) distortion method (D TV) or traditional TV distortion method. The term TV distortion has been used because such geometric distortion was often observed on a traditional CRT television due to the effect of an internal or external magnetic field, or because this method is often used to evaluate the distortion on a display device. While CRT televisions have become all but obsolete, the term TV distortion is still widely used, though its meaning is no longer tied to television. An open standard for self-regulation of mobile imaging device manufacturers, named Standard Mobile Imaging Architecture (SMIA), 11,12 defines a distortion evaluation method in a similar way as D PH. We call this method the SMIA TV distortion method (D STV) to distinguish it from D TV or D PH. D STV can be calculated in the same geometry as

D STV (%) = 2ΔH/H × 100, (2)

i.e., using the full picture height change measured at the corners. The reported D STV value should be the mean of the values at all four corners. Obviously, the D STV value is twice as large as the D PH value for the same distorted image, i.e., D STV = 2 × D PH. Another distortion evaluation method is presented in Fig. 3. 13 If we draw a straight line connecting the two ends of a curved line (the image of a straight line in the target), its length is L, and the largest distance from this drawn line to any point on the curved image line is l [Fig. 3(a)]; then the distortion is defined as

D PH2 (%) = l/L × 100. (3)

As opposed to D PH, the D PH2 values are positive for barrel distortions and negative for pincushion distortions. Otherwise, the definition of D PH2 is similar to that of D PH. The absolute value of l in D PH2 is the same as that of ΔH in the D PH method for the lower horizontal edge of a distorted image. Comparing Figs. 2 and 3, we obtain the relation D PH2 = −D PH × H/L. Since the D PH2 method has no significant advantage over the D PH method, we do not recommend it for distortion evaluation.
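To make these single-value metrics concrete, the sketch below computes D PH, D STV, and D PH2 from a few measurements; the numeric inputs (bending ΔH, picture height H, sagitta l, chord length L) are hypothetical values, not data from this study:

```python
def d_ph(delta_h, h):
    """Picture height distortion (EBU): bending of an edge line
    relative to the picture height, in percent."""
    return delta_h / h * 100.0

def d_stv(delta_h, h):
    """SMIA TV distortion: uses the full height change measured at
    the corners, hence twice the picture height distortion."""
    return 2.0 * delta_h / h * 100.0

def d_ph2(l, big_l):
    """Fig. 3 method: largest deviation l of a curved line image
    from its chord of length L, in percent."""
    return l / big_l * 100.0

# Hypothetical measurements from a barrel-distorted image
delta_h, h = -3.0, 100.0   # bending and picture height, same units
print(d_ph(delta_h, h))    # -3.0 -> negative: barrel distortion
print(d_stv(delta_h, h))   # -6.0 -> twice D_PH, as expected
```

The sign convention follows the text: negative values indicate barrel distortion for D PH and D STV, while D PH2 flips the sign.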
The aforementioned distortion evaluation methods calculate the largest positional error of barrel or pincushion distortions over the whole image. They are meaningful only if the optical system has a steadily increasing distortion (barrel or pincushion) from the image center to the edges. For a complex distortion pattern, a single value cannot describe the distortion in detail and might even be misleading. Taking a mustache distortion as an example, it is possible that the image displays little or virtually zero distortion at the edges as measured by any of these methods, but maximum distortion at midfield. These methods also depend on the aspect ratio of the distorted image. The EBU defines D PH for an aspect ratio of 4:3, the traditional television and computer monitor standard. However, there are other widely used aspect ratios, such as the 3:2 ratio of classic 35 mm still camera film and the 16:9 ratio of HD video. We cannot directly compare the distortion values of two images with different aspect ratios.

Radial Distortion Method
Another distortion evaluation method is based on comparing the radii of the distorted (R d) and undistorted (R u) images. It is assumed that the distortion close to the optical center is zero. Therefore, an undistorted image can be calculated based on the information at the center of the distorted image. The distorted image is then evaluated along the radial direction with the undistorted image as a reference. Since this method can be applied to any radial distortion, we call it D RAD. As shown in Fig. 4,

D RAD (%) = (R d − R u)/R u × 100, (4)

where R d is the distance of a point in the distorted image from the image center and R u is the distance of the corresponding point in the calculated undistorted image from the image center. 14,15 The point can come from any location in the distorted image except for the image center, where R u can be infinitely small, although Fig. 4 shows only the top-right corner as an example. If the absolute values of R d and R u are magnified at the same scale, the distortion evaluation results will not be affected. D RAD can be used to evaluate complex distortions (e.g., mustache distortions) with a distortion profile along a radius line. Mustache distortions can be caused by countermeasures in the design or by an image processing algorithm to limit or remove distortion. If we calculate D RAD along a diagonal from the image center to a corner, we can obtain a curve of D RAD versus R u or R d, as shown in Fig. 5. For a simple barrel or pincushion distortion, we can identify a barrel distortion if the D PH or D STV value is negative and a pincushion distortion if the value is positive. However, this criterion fails for identifying a mustache distortion. The key point is that the identification of a distortion type should not depend on the sign of the distortion value but on the slope of the radial distortion curve.

Journal of Biomedical Optics 056003-3 May 2016 • Vol. 21 (5)
Typical radial distortion curves are shown in Fig. 5. Generally, these distortion curves start at zero, matching the assumption that the distortion close to the optical center is zero. For a barrel distortion [Fig. 5(a)], the curve has a negative slope throughout; for a pincushion distortion [Fig. 5(b)], the slope is positive; a mustache distortion [Fig. 5(c)] includes both negative- and positive-slope regions. From Fig. 5(c), the curve has a negative slope when R d < 11.7 mm and a positive slope when R d > 11.7 mm. This means that the image has a barrel distortion for R d < 11.7 mm and a pincushion distortion for R d > 11.7 mm, even though the distortion values are still negative for R d > 11.7 mm. A higher absolute slope value means a more pronounced distortion at that radius. For a simple barrel or pincushion distortion, the absolute value of the radial distortion calculated from an image corner is usually larger than that of D PH/D TV or D STV. 15 This can be explained theoretically with the barrel distortion in Fig. 4 as an example. In Fig. 4(a), points A and B are, respectively, the middle point and the right corner of the upper edge of the undistorted image, with their vertical distance to the image center being R uy. In Fig. 4(b), points A′ and B′ are the images of A and B, with their vertical distances to the image center being R A′ and R dy, respectively. R A′ is larger than R dy for a barrel distortion and smaller for a pincushion distortion. D STV at corner B′ can be calculated as

D STV = (R dy − R A′)/R A′.

For a barrel distortion, both A and B are distorted toward the image center in the vertical direction by the amounts (R A′ − R uy) and (R dy − R uy), respectively, with the negative values signifying a barrel distortion. At the same time, B′ moved by the amount (R dy − R A′) in the vertical direction toward the image center relative to A′. Therefore, the equation D STV = (R dy − R A′)/R A′ evaluates only the position of B′ relative to A′. On the other hand, the equation D RAD = (R d − R u)/R u can be split into a horizontal component (R dx − R ux)/R ux and a vertical component (R dy − R uy)/R uy. The vertical component is the movement of B and can also be expressed as [(R dy − R A′) + (R A′ − R uy)]/R uy, which is the sum of the movement of B relative to A′ and the movement of A. Since the values of R uy and R A′ are close, D STV is roughly equal to the movement of B relative to A′ in the vertical component of D RAD for corner B, without considering the movement of A in the vertical direction and the movement of B in the horizontal direction. That explains why the absolute D STV value is usually smaller than the absolute D RAD value. A similar conclusion can be obtained for a pincushion distortion.
The main problem with the D RAD method is that the values of R u are not available, since the undistorted image does not physically exist. The R u values can be approximated from the data at the center of the distorted image, with the assumption that there is no or little distortion near the optical axis. 16 Taking the image of a grid target as an example, if we know that the distance of one grid (the reference grid) at the center of the distorted image corresponds to P pixels on the image sensor, then the distance of G grids from the image center in the undistorted image should be P × G pixels, from which we can calculate the undistorted image of the grid target and then the D RAD values. However, there is a problem with this approach. The size of the assumed undistorted area in the distorted image affects the final results for severe distortions such as those seen in most endoscopic images. If the reference grid is too big, it might already be distorted. Figure 6(a) shows a barrel-distorted image. From the image center to the right edge in the horizontal direction, we obtained the distance (R d) of each cross-point from the center. By assuming that there was no distortion between point 0 and point 1 (0-1), point 0 and point 2 (0-2), or point 0 and point 3 (0-3), respectively, we calculated the undistorted distances (R u) of the cross-points under these different assumptions and obtained the three D RAD curves shown in Fig. 6(b). For a barrel distortion, the distortion values should monotonically decrease with radius, with the maximum value of zero at the center. However, the 0-2 and 0-3 curves in Fig. 6(b) show positive values at shorter radial distances, which indicates that the assumptions of no distortion from point 0 to point 2 or from point 0 to point 3 are less accurate than the no-distortion assumption from point 0 to point 1, and that a larger assumed nondistortion area causes a larger error. On the other hand, if the assumed nondistortion area is too small, the reading error for R d at the center is amplified by the large number of grids at farther radial distances when calculating R u.
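The reference-grid procedure just described can be sketched in a few lines; the cross-point radii below are hypothetical pixel readings, and the only assumption made is that the innermost grid (0-1) is undistorted:

```python
# D_RAD from a grid image: assume the first grid at the center is
# undistorted, so one grid corresponds to r_d[1] - r_d[0] pixels.
def d_rad_curve(r_d_pixels):
    pixels_per_grid = r_d_pixels[1] - r_d_pixels[0]  # reference grid
    curve = []
    for g, r_d in enumerate(r_d_pixels):
        if g == 0:
            continue  # no D_RAD at the center (R_u -> 0)
        r_u = g * pixels_per_grid   # undistorted radius in pixels
        curve.append((r_d - r_u) / r_u * 100.0)
    return curve

# Hypothetical barrel-distorted readings (pixels from the image center)
r_d = [0, 100, 195, 280, 350]
print(d_rad_curve(r_d))  # -> 0.0, -2.5, ~-6.67, -12.5
```

For a well-behaved barrel distortion, the curve decreases monotonically from zero, matching the behavior expected from Fig. 6(b).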

Development of the Local Magnification Method for Distortion Evaluation of Endoscopes
Distortion evaluation is closely related to camera calibration, a rather mature area for optical imaging devices. 17,18 While these calibration techniques are useful for image correction with the help of powerful computational capability, the two- or three-dimensional image data as well as the transformation matrices are complex, lack direct physical meaning, and are hard to understand for most users. Therefore, these calibration methods are not a good choice for a consensus evaluation method that could potentially be adopted by an international standard. The local magnification method we developed is mathematically and experimentally simple and can describe the distortion characteristics of an endoscope better than the commonly used methods. The method can also provide valuable information to help a physician interpret a distorted medical image.

Experimental Measurements
We established a distortion evaluation method using an endoscopic system (Olympus EVIS EXERA II) that includes a high-intensity xenon light source (CLV-180), a gastrointestinal videoscope (GIF-H180), and a video system center (CV-180). This system has the common barrel distortion seen in most endoscopes. A series of grid targets with three different grid sizes (0.5 mm × 0.5 mm, 1.5 mm × 1.5 mm, and 3.0 mm × 3.0 mm) was designed and printed with a laser printer. The total size of each target was large enough to cover the whole FOV. During tests, the target should be planar and able to move along the optical axis. To hold the endoscope securely in place, a customized mold with adjustable height was used. The direction of the endoscope distal end was adjusted with a fiber optic positioner (Newport, FPR2-C1A).
The setup was adjusted so that the endoscope optical axis was perpendicular to the test target and aligned with the test target center (Fig. 7). The criteria for acceptable adjustment are as follows: (1) the center of the target (the point on the target located at the FOV center) was located at the center of the captured image, and (2) two pairs of centrally symmetric points (e.g., A − A′ and B − B′ in Fig. 7) on the target were also centrally symmetric in the image. It was assumed that the distortion center (i.e., where the optical axis passes through the image plane) overlaps with the image center. This assumption was evaluated in Sec. 4.1. For the first criterion, the target was positioned at a given distance to take its image. The image was then analyzed with software (e.g., MATLAB) to find the target center (x t, y t) in the image. The distance from the target center to the image center (x i, y i) was calculated as √[(x t − x i)² + (y t − y i)²], in units of pixels. The distance was controlled to within 1% of the picture height [∼10 pixels for an image of 1008 (H) × 1280 (W) pixels]. For the second criterion, the same method as for the first criterion was used to make sure that the midpoint between each pair of points was within 1% of the picture height of the target center in the image. The two criteria were satisfied through an iterative trial-and-error process.
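The two alignment criteria can be checked programmatically once the relevant points have been located in the image. A minimal sketch follows; the function name and all coordinates are illustrative, not values from our setup:

```python
import math

def within_tolerance(p, ref, picture_height, tol=0.01):
    """True if point p lies within tol * picture_height of ref."""
    d = math.hypot(p[0] - ref[0], p[1] - ref[1])
    return d <= tol * picture_height

image_center = (640.0, 504.0)   # (x_i, y_i) for a 1280 x 1008 image
picture_height = 1008.0

# Criterion 1: target center close to the image center
target_center = (643.0, 506.0)  # (x_t, y_t), hypothetical reading
print(within_tolerance(target_center, image_center, picture_height))  # True

# Criterion 2: midpoint of a centrally symmetric pair close to the
# target center in the image
a, a_prime = (300.0, 200.0), (983.0, 810.0)
mid = ((a[0] + a_prime[0]) / 2, (a[1] + a_prime[1]) / 2)
print(within_tolerance(mid, target_center, picture_height))  # True
```

In practice both checks are repeated after each mechanical adjustment until they pass, mirroring the iterative trial-and-error process described above.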

Local Magnification Method to Quantify Distortion
To avoid the aforementioned problem of calculating R u in Sec. 2.2, we propose to evaluate radial distortions with a new approach: the local magnification (M L) method. For a small object (e.g., a small cross) placed at a local point on the test target, the ratio of the object length on the image sensor (or on a display device fed from the sensor) to its actual length on the test target is called M L. The term "local magnification" is borrowed from the IEC 1262-4 Standard, which addresses the determination of image distortion in electro-optical x-ray image intensifiers. 19 In that standard, discrete M L values are obtained by measuring size changes in small hash marks, and their accuracy can be affected by the sizes of the marks. In this paper, M L is expressed with an equation that can accurately and continuously express distortion at any location in the FOV. M L can be separated into the local radial magnification (M LR) and the local tangential magnification (M LT). M LR is the local magnification of a small one-dimensional (1-D) object oriented radially toward the FOV center. M LT is the local magnification of a small 1-D object oriented tangentially to a radial direction. Figure 8 shows the M LR and M LT of a cross-shaped object (ideally, the object should be infinitely small) with width lr and height lt at radius R on the test target. The image of the object is located at radius R′ in the target image, with width lr′ and height lt′. The M LR and M LT at the cross-point can then be calculated as M LR = lr′/lr and M LT = lt′/lt.
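For a single small cross, the discrete local magnifications are simply length ratios. A minimal sketch with hypothetical arm lengths (millimeters on the target and millimeters on the sensor):

```python
# Local magnifications of a small cross at radius R on the target:
# the radially oriented arm (length lr) images to lr_img, and the
# tangentially oriented arm (length lt) images to lt_img.
lr, lt = 0.5, 0.5              # arm lengths on the target (mm), hypothetical
lr_img, lt_img = 0.030, 0.041  # arm lengths on the sensor (mm), hypothetical

m_lr = lr_img / lr  # local radial magnification M_LR
m_lt = lt_img / lt  # local tangential magnification M_LT
print(m_lr, m_lt)   # 0.06 0.082 -> radial arm compressed more (barrel)
```

The equation-based form developed below replaces such discrete measurements with a continuous expression over the whole FOV, avoiding the mark-size dependence noted for the IEC approach.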
Assuming the distortion is radial, data from any straight line crossing the image center can be used to evaluate the distortion of the whole image if the target is well aligned with the device. We use the horizontal line from the image center to the right edge [Fig. 6(a)] as an example to explain the M L method. Since the distance from the image center to the right edge is not the maximum radius in the whole image, the final evaluation results mainly reflect the distortion characteristics within a circular area whose radius equals the distance from the image center to the right edge. Other straight lines (e.g., a vertical or diagonal line) crossing the image center can also be used to cover a bigger circular area or to obtain more accurate results. The method is described in detail as follows.
After proper alignment, an image of the target at an established distance from the endoscope is taken [Fig. 6(a)]. The horizontal line from the image center to the right edge is then used to analyze the radial distortion. Following this, the coordinates of each cross-point on the line are read with image analysis software, and the distances (R d) of the points from the image center are calculated in terms of pixels. The actual distance (R u) of these cross-points from the center on the target can also be obtained. For simplicity, R u is expressed as the number of grids from the center to each cross-point (Table 1) instead of as a measured physical distance. A matrix of R d is then mapped to a matrix of R u.
The R u value at the image edge is needed in order to evaluate the distortion at the edge. In most cases, however, the image near the edge is blurred due to severe distortion and vignetting effects. Additionally, the edge may not lie exactly on a cross-point, so the number of cross-points from the center to the edge is generally not an integer. Based on the available R d and R u data from the cross-points, a polynomial equation R u = f(R d) is calculated. The maximum pixel number from the image center to the image edge (i.e., half the picture width, 640 in our images) is then used as R d to calculate R u at the edge (bold numbers in Table 1). An R u value of 12.2 is obtained in this example. Both R d and R u are normalized by setting their maximum values to 1 (Table 1). From the curve of normalized R d versus normalized R u (Fig. 9), or vice versa, a polynomial fitting equation R d = f d(R u) or R u = f u(R d) is created to fit and define the relation between R u and R d.
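The fit-and-extrapolate step can be sketched with NumPy as follows; the cross-point radii here are hypothetical pixel readings, not the Table 1 data:

```python
import numpy as np

# Hypothetical cross-point radii (pixels from the image center) for
# grids 0, 1, ..., 12 of a barrel-distorted grid image
r_d = np.array([0.0, 90, 178, 262, 340, 410, 471, 522, 562, 592, 614, 629, 638])
r_u = np.arange(len(r_d), dtype=float)  # grid counts from the center

# Fit R_u = f(R_d) on the normalized radius (better conditioned than
# fitting on raw pixels) and extrapolate to the image edge
# (half the picture width, 640 pixels here)
coeffs = np.polyfit(r_d / 640.0, r_u, 5)
r_u_edge = float(np.polyval(coeffs, 1.0))  # noninteger grid count at the edge

# Normalize both axes so that their maxima (at the edge) equal 1
r_d_norm = np.append(r_d, 640.0) / 640.0
r_u_norm = np.append(r_u, r_u_edge) / r_u_edge
```

Because the last cross-point sits only a few pixels inside the edge, the extrapolation is short and the fitted grid count at the edge stays close to the last measured count.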
The normalized R d versus normalized R u polynomial function R d = f d(R u) can easily be converted to other scales. Assume the function has the polynomial form 9,20,21

y = k_n x^n + k_{n−1} x^{n−1} + … + k_2 x^2 + k_1 x, (5)

where n is the degree of the polynomial equation, x ∈ [0, 1] represents normalized R u, and y ∈ [0, 1] represents normalized R d. The degree-zero term (the constant term k_0) is assumed to be zero, since we assume that the centers of the distorted and undistorted images overlap. If x and y need to be scaled to X and Y, so that k_y Y = y ∈ [0, 1] and k_x X = x ∈ [0, 1], then

k_y Y = k_n (k_x X)^n + k_{n−1} (k_x X)^{n−1} + … + k_2 (k_x X)^2 + k_1 (k_x X),
Y = (1/k_y) [k_n k_x^n X^n + k_{n−1} k_x^{n−1} X^{n−1} + … + k_2 k_x^2 X^2 + k_1 k_x X]. (6)

Equations (6) and (5) are the same if k_x = 1 and k_y = 1.
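The rescaling in Eq. (6) amounts to multiplying each normalized coefficient k_i by k_x^i/k_y. A quick numerical check of this equivalence, using the Eq. (8) coefficients and the example scale factors (3 mm grid, 2.8 μm pixels) from later in this section as inputs:

```python
import numpy as np

# Normalized polynomial y = f(x), highest degree first, zero constant term
k = np.array([-0.6896, 2.5386, -2.8434, 0.2744, 1.7125, 0.0])
k_x, k_y = 1 / 36.6, 1 / 1.792  # x = k_x * X, y = k_y * Y

# Physical-units polynomial Y = g(X): coefficient of X^i is k_i * k_x**i / k_y
degrees = np.arange(len(k) - 1, -1, -1)  # 5, 4, 3, 2, 1, 0
k_phys = k * k_x**degrees / k_y

X = 20.0                                 # some radius in mm
Y_direct = np.polyval(k, k_x * X) / k_y  # evaluate via Eq. (5) and rescale
Y_scaled = np.polyval(k_phys, X)         # evaluate the rescaled polynomial
print(bool(np.isclose(Y_direct, Y_scaled)))  # True
```

This is exactly the coefficient bookkeeping used in the derivations that follow, so the two evaluation routes must agree to machine precision.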

Local radial magnification method
If X and Y are the actual lengths of R u and R d, M LR is defined as the derivative of Eq. (6):

M_LR = dY/dX = (1/k_y) [n k_n k_x^n X^{n−1} + (n − 1) k_{n−1} k_x^{n−1} X^{n−2} + … + 2 k_2 k_x^2 X + k_1 k_x].

Substituting X = x/k_x in the above equation, we get

M_LR = (k_x/k_y) [n k_n x^{n−1} + (n − 1) k_{n−1} x^{n−2} + … + 2 k_2 x + k_1]. (7)

Taking the data in Table 1 as an example, we can get the following fitting equation based on Eq. (5), with the normalized R d as y and the normalized R u as x:
y = −0.6896x^5 + 2.5386x^4 − 2.8434x^3 + 0.2744x^2 + 1.7125x. (8)

Degree 5 is the lowest polynomial degree for which the R-squared value of the fitting equation is >0.9999. The target grid size is 3 mm × 3 mm, so the maximum X is 36.6 mm (3 × 12.2) and k_x is 0.0273 (1/36.6). Assuming the image sensor has a pixel size of 2.8 μm × 2.8 μm, the maximum Y is 1.792 mm (2.8 × 640/1000) and k_y is 0.5580 (1/1.792). Then, Eq. (7) for these data becomes

M_LR = −0.1688x^4 + 0.4972x^3 − 0.4177x^2 + 0.0269x + 0.0838, (9)

from which the M LR data at each cross-point can be calculated, as shown in Table 1. The M LR data can be normalized so that the maximum magnification, at the image center, is one. The M LR and normalized M LR versus normalized R d curves are shown in Fig. 10, from which we can see that the two curves overlap completely when the scale of the y coordinates is adjusted.
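Equation (9) follows mechanically from Eq. (8): differentiate and scale by k_x/k_y. A short NumPy check of the coefficients:

```python
import numpy as np

# Eq. (8): normalized R_d as a polynomial in normalized R_u (no constant term)
k = np.array([-0.6896, 2.5386, -2.8434, 0.2744, 1.7125, 0.0])
k_x, k_y = 1 / 36.6, 1 / 1.792  # from the 3 mm grid and 2.8 um pixels

# Eq. (7): M_LR(x) = (k_x / k_y) * dy/dx
m_lr_coeffs = (k_x / k_y) * np.polyder(k)
print(np.round(m_lr_coeffs, 4))  # ~ [-0.1688  0.4972 -0.4177  0.0269  0.0838]

m_lr_center = float(np.polyval(m_lr_coeffs, 0.0))  # magnification at the center
```

The constant term (≈0.0838) is the local magnification at the image center, where the distortion is assumed to vanish.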

Local tangential magnification
M LT, as defined in Fig. 8, is equal to the ratio of the two circumferences with R′ and R as radii, respectively:

M_LT = 2πR′/(2πR) = R′/R = Y/X = (k_x/k_y) (k_n x^{n−1} + k_{n−1} x^{n−2} + … + k_2 x + k_1). (10)

There is no M LT value at x = 0, because the ratio of two circumferences that are both zero cannot be calculated. Again, take the data in Table 1 as an example. With k_x = 0.0273 and k_y = 0.5580, we can get the M LT equation based on Eqs. (8) and (10):

M_LT = −0.0338x^4 + 0.1243x^3 − 0.1392x^2 + 0.0134x + 0.0838, (11)

from which the M LT and normalized M LT data can be calculated as shown in Table 1.
Based on Table 1, we can compare M_LR and M_LT as shown in Fig. 11. While we use the normalized R_d as the x axis, the x axis can also be the normalized R_u, depending on the requirement. From Fig. 11, M_LR and M_LT are the same at the center position when R_d is less than 0.2. However, M_LR decreases faster than M_LT with increasing R_d, which indicates that, when R_d is greater than 0.2, the image of an object is compressed more in the radial direction than in the tangential direction. This property is important for a physician interpreting an endoscopic image.
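The comparison in Fig. 11 can be reproduced from the fitted polynomial alone. Below is a minimal sketch reusing the Eq. (8) coefficients and the example k_x, k_y values; M_LT is left undefined at x = 0, as noted above, and the function names are ours:

```python
# Evaluate M_LR (Eq. 7) and M_LT (Eq. 10) from the Eq. (8) fit and
# compare them along the normalized radius; illustrative sketch only.
K = [-0.6896, 2.5386, -2.8434, 0.2744, 1.7125]  # k_5 ... k_1 of Eq. (8)
RATIO = (1 / 36.6) / (1 / 1.792)                # k_x / k_y

def m_lr(x):
    # Derivative of the fit, scaled by k_x/k_y.
    n = len(K)
    return RATIO * sum((n - i) * K[i] * x ** (n - i - 1) for i in range(n))

def m_lt(x):
    # y/x scaled by k_x/k_y; undefined at x = 0 (both circumferences vanish).
    if x == 0:
        raise ValueError("M_LT is undefined at the center")
    n = len(K)
    y = sum(K[i] * x ** (n - i) for i in range(n))
    return RATIO * y / x

for x in (0.1, 0.5, 1.0):
    print(round(m_lr(x), 4), round(m_lt(x), 4))
```

Near the center the two values agree; toward the edge m_lr falls below m_lt, i.e., radial compression exceeds tangential compression.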

Deriving D_RAD and D_PH (or D_STV) from M_LR
The local magnification method can also help improve the traditional way of calculating D_RAD and D_PH. Assume that X and Y are the actual sizes of the undistorted and distorted images, respectively. (Please note that X represents the actual size of the target in Sec. 3.2, but the equations are exactly the same.) The radial distortion equation can be obtained from Eq. (6) as

D_RAD = Y/X - 1 = (1/k_y)[k_n·k_x^n·X^(n-1) + k_(n-1)·k_x^(n-1)·X^(n-2) + … + k_2·k_x^2·X + k_1·k_x] - 1.

As is apparent from the above equation, there is no D_RAD value at x = 0. Substituting X = x/k_x into the above equation, we get

D_RAD = (k_x/k_y)(k_n·x^(n-1) + k_(n-1)·x^(n-2) + … + k_2·x + k_1) - 1. (12)

Taking the data in Table 1 as an example, we take the normalized R_d as Y (then k_y = 1) and the undistorted image has a maximum radius of 1.71, so k_x is 0.585 (1/1.71). (Assuming the center grid of the distorted image is undistorted, the undistorted image has a radius of 0.14 × 12.2 = 1.71.) Therefore, the D_RAD values can be calculated (Table 1) based on Eqs. (8) and (12). While an image can be magnified to different sizes by changing the values of k_x and k_y, the ratio of k_x to k_y remains constant, i.e., D_RAD remains constant.
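With k_y = 1 and k_x = 0.585 as in the Table 1 example, Eq. (12) becomes a one-line function of the normalized R_u. A minimal sketch (the printed edge-value summary is our own addition):

```python
# D_RAD from Eq. (12): (k_x/k_y) * (y/x) - 1, with y(x) from Eq. (8).
# Assumes k_y = 1 and k_x = 0.585 as in the Table 1 example.
K = [-0.6896, 2.5386, -2.8434, 0.2744, 1.7125]  # k_5 ... k_1 of Eq. (8)
K_X, K_Y = 0.585, 1.0

def d_rad(x):
    """Radial distortion at normalized R_u = x (undefined at x = 0)."""
    n = len(K)
    y_over_x = sum(K[i] * x ** (n - i - 1) for i in range(n))
    return (K_X / K_Y) * y_over_x - 1

print(round(100 * d_rad(1.0), 1))  # → -41.9, percent distortion at the edge
```

The value near x = 0 approaches zero, consistent with the assumption that the center grid is undistorted.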
We can also calculate the traditionally used D_PH or D_STV. Take the barrel distortion in Fig. 4(b) as an example. Assume that the distorted image is 2W_d wide and 2R_A′ high, and that we have obtained the two functions R_u = f_u(R_d) and R_d = f_d(R_u), with R_d and R_u being normalized values. Then the D_PH or D_STV of a barrel distortion can be calculated with the procedure shown in Fig. 12. For pincushion distortion, a similar method can be used. The advantage of calculating D_PH or D_STV with this method is that it does not need lines that are tangent to the image boundaries [for barrel distortion, Fig. 13(a)] or whose end points overlap with the image corners (for pincushion distortion). For example, while Fig. 13(a) is an ideal image for analyzing D_STV in the traditional way, Fig. 13(b) is not, because no lines are tangent to the edges. However, the method in Fig. 12 works for both images. Most importantly, D_PH or D_STV can be directly calculated if the polynomial equations describing the relationship between R_d and R_u are known.

Assumptions of Circular Symmetry and Overlap of the Image Center with the Distortion Center
As mentioned before, we assumed that the endoscope is circularly symmetric and, therefore, that the tangential component of the distortion can be ignored. We also assumed that the distortion center overlaps with the image center. These assumptions can be verified. In Sec. 3, we used the data from the image center to the right edge of Fig. 6(a) to demonstrate the distortion evaluation methods and their results (Table 1). Similar results can also be obtained from the data along any other radius. If the endoscope is circularly symmetric and the distortion center overlaps with the image center, the distortion results based on data from different radii should be close. Four sets of data along four radii, i.e., from the image center to the right, left, top, and bottom edges, were obtained from Fig. 6(a) to derive the normalized R_d versus normalized R_u curves (Fig. 14). The data were normalized with the longest radius as one. From Fig. 14, the four curves overlap, meaning that the assumption of a circularly symmetric optical system without tangential distortion is correct and that the distortion center overlaps with the image center.
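This verification can be expressed as a small numerical check. The sketch below is illustrative only; the list-of-lists data layout and the 0.02 tolerance are our assumptions, not values from the study:

```python
def curves_overlap(radii_sets, tol=0.02):
    """Return True if the normalized R_d curves measured along several
    radii (e.g., center-to-right/left/top/bottom) agree within tol at
    each cross-point index; the tolerance value is an assumption."""
    n = min(len(s) for s in radii_sets)
    return all(
        max(s[i] for s in radii_sets) - min(s[i] for s in radii_sets) <= tol
        for i in range(n)
    )
```

If the four curves agree within the chosen tolerance, the assumptions of circular symmetry and a centered distortion center are supported for that data set.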

Accuracy of the Local Magnification Method
To evaluate the accuracy of the obtained M_LR results, we applied the results in a MATLAB routine to correct a distorted image from the same endoscope. The distorted and corrected images are shown in Fig. 15. Since the points selected to establish the distortion function did not cover the four corners of the image, the equation only covers a circular area [Fig. 6(a)] whose radius is the distance from the image center to the furthest point. Therefore, only the image located within the circle was corrected, with the four corners outside the circle discarded. Overall, the correction removed the vast majority of the distortion originally present. Some errors were still present near the image boundary. The main reason for these errors is that the coordinate reading for a point farther from the center is less accurate than for a closer point, because of the smaller magnification, lower resolution, and dimmer light intensity at larger distances from the center. By adjusting the illuminating light, the accuracy can be improved.
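Our correction used a MATLAB routine; the following Python sketch illustrates the same inverse-mapping idea in simplified form. The nearest-neighbor sampling, the flat row-major pixel list, and the convention that the normalized distorted and undistorted radii share the same pixel-space maximum are our simplifications, not the authors' implementation:

```python
import math

COEFFS = [-0.6896, 2.5386, -2.8434, 0.2744, 1.7125, 0.0]  # Eq. (8)

def f_d(r_u):
    """Normalized distorted radius for a normalized undistorted radius."""
    acc = 0.0
    for c in COEFFS:
        acc = acc * r_u + c
    return acc

def correct(img, width, height):
    """Inverse-map a distorted image given as a flat row-major pixel list:
    for each corrected pixel, find its normalized undistorted radius r_u,
    convert it to r_d = f_d(r_u), and sample the distorted image there.
    Pixels outside the calibrated unit circle are left black (0)."""
    cx, cy = (width - 1) / 2, (height - 1) / 2
    rmax = math.hypot(cx, cy)
    out = [0] * (width * height)
    for yy in range(height):
        for xx in range(width):
            r_u = math.hypot(xx - cx, yy - cy) / rmax
            if r_u > 1:
                continue  # corner region outside the calibrated circle
            if r_u == 0:
                out[yy * width + xx] = img[yy * width + xx]
                continue
            scale = f_d(r_u) / r_u  # radial remapping factor
            sx = int(round(cx + (xx - cx) * scale))
            sy = int(round(cy + (yy - cy) * scale))
            if 0 <= sx < width and 0 <= sy < height:
                out[yy * width + xx] = img[sy * width + sx]
    return out
```

A production version would use bilinear interpolation rather than nearest-neighbor sampling to avoid blocky artifacts near the boundary.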

Number of Data Needed for Distortion Evaluation and the Formats of Polynomial Equations
The effect of the number of grids imaged from the image center to the edge (i.e., the number of points used to derive the polynomial equation of normalized R_d versus normalized R_u) on distortion evaluation was studied. A grid target with a grid size of 3 mm × 3 mm was placed 6.4 cm away from the endoscope distal end and perpendicular to the optical axis of the endoscope. The distorted image of the target was used to analyze the effect of the number of data points on the distortion curves/equations. From the image center to the right edge, 34 radial distance data were obtained (the image center plus 33 cross-points). From these 34 data, 18, 12, 8, 6, and 5 data were selected, respectively, with roughly even distribution, to obtain the normalized R_d versus normalized R_u curves shown in Fig. 16. From the figure, all the curves overlap with the curve based on all 34 sets of data, which means that a small number of roughly evenly distributed points, as few as five from the image center to the edge, is sufficient to derive the distortion curve.
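The subset comparison can be reproduced numerically. The sketch below is ours, not the authors' processing code: it fits the degree-5 polynomial through synthetic points taken from the Eq. (8) curve, once with all 34 points and once with an evenly spaced 6-point subset, and checks that the two fitted curves agree:

```python
# Fit the degree-5 distortion polynomial from all points and from a
# small evenly spaced subset, then compare the resulting curves.
# Pure-Python least squares (normal equations, Gaussian elimination).

def fit_poly(xs, ys, deg):
    """Least-squares polynomial fit; coefficients highest degree first."""
    m = deg + 1
    # Normal equations (V^T V) c = V^T y, where V is the Vandermonde matrix.
    a = [[sum(x ** (2 * deg - i - j) for x in xs) for j in range(m)]
         for i in range(m)]
    b = [sum(y * x ** (deg - i) for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for i in reversed(range(m)):
        coef[i] = (b[i] - sum(a[i][j] * coef[j] for j in range(i + 1, m))) / a[i][i]
    return coef

def peval(c, x):
    acc = 0.0
    for ci in c:
        acc = acc * x + ci
    return acc

# Synthetic "measurements": 34 points on the Eq. (8) curve.
K8 = [-0.6896, 2.5386, -2.8434, 0.2744, 1.7125, 0.0]
xs = [i / 33 for i in range(34)]
ys = [peval(K8, x) for x in xs]
full = fit_poly(xs, ys, 5)
sub = fit_poly(xs[::6], ys[::6], 5)  # evenly spaced 6-point subset
```

With noise-free points the two fits coincide; with real measurements, the residual between them indicates how many points are needed.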

Projection Methods of Endoscopes
Distortion is the consequence of the projection method used in an optical design. Theoretically, the distortion pattern can be derived based on the known projection method, which is usually unknown to consumers. On the other hand, if the distortion pattern is known, the projection method can be inversely derived.
Most consumer cameras have rectilinear lenses based on the perspective projection (also called gnomonic projection), which renders a straight line in the object space as a straight line in the image. The perspective projection obeys the mapping function r = f · tan(θ), where θ is the angle between the optical axis and the line from an object point to the entrance pupil center, r is the distance from the image of the object point to the image center, and f is the focal length of the optical system.15 For a 2-D object that is perpendicular to the optical axis of the camera, perspective projection can produce an image that faithfully reflects the geometry of the object. However, it is difficult to make a rectilinear lens with more than 100 deg of FOV. Therefore, other projection methods (Fig. 17), such as the stereographic [r = 2f · tan(θ/2)], equidistant (r = f · θ), equisolid angle [r = 2f · sin(θ/2)], and orthographic/orthogonal [r = f · sin(θ)] projections, are used to design lenses with a wide FOV, such as fisheye lenses and lenses in endoscopes.23 The projection method of an endoscope can be derived from its distortion evaluation results. The 34 sets of data in Sec. 4.3 were used as an example of deriving the projection method of an endoscope. These data were obtained from 33 cross-points plus the image center in the distorted image of a grid target with 3 mm × 3 mm grid size. The distance (d) from each cross-point to the target center can be calculated by multiplying the grid size of 3 mm by the grid number. The distance (l) from the target to the distal end of the endoscope is 6.39 cm. The angle θ was then calculated for each cross-point with the equation θ = arctan(d/l). Strictly speaking, l should be the distance from the target to the endoscope entrance pupil, which is not necessarily at the endoscope distal end.
However, if the distance from the endoscope distal end to the entrance pupil is much smaller than the distance from the target to the distal end, the distal end location can be used to approximate the entrance pupil location. The distance from the entrance pupil to the distal end should be considered when the distance from the target to the distal end is short, or for special designs where the distal end is not a lens, such as a capsule endoscope. We thus obtained 34 θ values, including 0 deg at the target center, with the maximum angle being 0.9976 rad (or 57 deg) for the 33rd cross-point from the target center. We also had 34 normalized R_d values [the same as the normalized r in Fig. 17(b)] corresponding to these angles. We used the normalized r because we did not know the image sensor size needed to calculate the actual r values. We thus obtained a curve of normalized R_d versus θ for the endoscope. To determine the projection method of the endoscope, we normalized the r values in Fig. 17(a) for θ from 0 to 0.9976 rad and compared these curves with the curve from the endoscope data, as shown in Fig. 17(b). From Fig. 17(b), the endoscope adopted the orthographic/orthogonal projection in its design, since the measured curve almost overlaps with the curve of r = sin(θ). This projection can achieve a bigger FOV for a given image sensor size than the other projections in Fig. 17.
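The curve comparison of Fig. 17(b) can be automated by scoring the measured (θ, normalized r) data against each candidate mapping function. A hedged sketch (the function name and dictionary layout are ours):

```python
import math

def classify_projection(thetas, r_norm):
    """Compare measured normalized image radii against the candidate
    mapping functions of Fig. 17, each normalized to 1 at the maximum
    angle, and return the best-matching projection name."""
    tmax = max(thetas)
    candidates = {
        "perspective":   lambda t: math.tan(t) / math.tan(tmax),
        "stereographic": lambda t: math.tan(t / 2) / math.tan(tmax / 2),
        "equidistant":   lambda t: t / tmax,
        "equisolid":     lambda t: math.sin(t / 2) / math.sin(tmax / 2),
        "orthographic":  lambda t: math.sin(t) / math.sin(tmax),
    }
    def rms(name):
        f = candidates[name]
        return math.sqrt(sum((f(t) - r) ** 2
                             for t, r in zip(thetas, r_norm)) / len(thetas))
    return min(candidates, key=rms)
```

Feeding in the 34 measured (θ, normalized R_d) pairs would return the projection whose normalized curve lies closest to the data in the root-mean-square sense.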
We can also obtain other optical parameters of the endoscope in the process of analyzing its projection method. The endoscope FOV in the horizontal direction is twice the maximum angle, i.e., 1.9952 rad (114 deg). If we know the size of the image sensor, we can also calculate the actual values of r and, in turn, calculate the focal length using the equation r = f · sin(θ).
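As a worked example of these relations (the 1.792 mm maximum image radius is borrowed from the hypothetical sensor of Sec. 3.2, purely for illustration):

```python
import math

theta_max = 0.9976          # rad, maximum half-angle from the Sec. 4.3 data
fov = 2 * theta_max         # horizontal field of view, in radians
# Hypothetical: reuse the 1.792 mm maximum image radius of the Sec. 3.2
# sensor example to show how f would follow from r = f*sin(theta).
r_max = 1.792               # mm, assumed for illustration only
f = r_max / math.sin(theta_max)
print(round(math.degrees(fov)), round(f, 2))  # → 114 2.13
```

With the real sensor dimensions, the same two lines would yield the actual focal length of the endoscope.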

Conclusions
In this paper, we reviewed specific test methods for radial distortion evaluation and developed an objective and quantitative test method, the local magnification method, based on well-defined experimental and data processing steps to evaluate the radial distortion in the whole FOV of an endoscopic imaging system. To the best of our knowledge, this is the first time that the local magnification method has been introduced to evaluate endoscope distortion. Our results showed that this method can describe the radial distortion of a traditional endoscope to a high degree of precision. Additionally, the image correction results showed that the local magnification method was accurate in correcting distorted images.
The local magnification method overcomes the error of estimating an ideal image used in the traditional distortion evaluation method and also has advantages over other distortion evaluation methods. The commonly used distortion evaluation methods, such as the picture height distortion and the radial distortion methods, are integral methods, because they evaluate distortion according to the distance between two points separated by a relatively large distance. The local magnification method, on the other hand, is a differential method showing distortion results at any given local point. The local magnification has a clear physical meaning. For an infinitely small object placed at a local point in the object space, the ratio of its length in the image (on a sensor or any display format) to its actual length is its local magnification. Therefore, the size information at each local point can be easily interpreted without having to consider the information at other points. This feature can directly help a physician to estimate the size of a lesion during diagnosis. The local magnification method is inclusive, in the sense that it can be used to derive other distortion parameters. Based on the local magnification data, the picture height distortion and radial distortion data can be derived.
A well-designed setup and procedure are essential for the accurate measurement of distortion. The key points are: (1) the test target should be planar; (2) the optical axis of the endoscope should be perpendicular to the test target and aligned with the target center; and (3) the measuring distance should be properly chosen within the depth of field, so as to obtain sufficient data to derive the fitting equations while avoiding the large reading errors caused by high magnification at a close distance and by edge-blurred grids at a large distance. The endoscope's own light source was used in our study. To obtain better illumination, in terms of uniformity and intensity, and thereby reduce the reading error from distorted images, external light sources are recommended. Also, the endoscope used in this study has a prime lens (i.e., a fixed-focal-length lens). For medical devices with a zoom lens, the distortion should be determined as a function of the focal length.
Our results showed that a polynomial equation of degree 5 could well describe the radial distortion curve of a traditional endoscope with severe barrel distortion. The image correction results of distorted images showed that our local magnification method was accurate for distortion evaluation. The method could be applied to evaluate medical devices with different distortion patterns (barrel, pincushion, mustache, and so on). While the equation format for other distortion patterns might be different, the derivation method would be the same.
In sum, the local magnification method is a quantitative and objective distortion evaluation method for endoscopes. It has significant benefits over the existing standards, in that it is mathematically easy to understand and experimentally simple. It also has a clear physical meaning that could help a physician interpret the size of a lesion from a distorted image. Therefore, it is a good candidate for an international endoscope standard, with the potential to facilitate the product development and regulatory assessment processes in a least burdensome approach by reducing the burden on both endoscope manufacturers and the regulatory agency. As a result, high-quality endoscopic systems can be brought to market swiftly. The method can also be used to facilitate the rapid identification and understanding of the causes of poor endoscope performance, and to benefit quality control during manufacturing as well as quality assurance during clinical use. While this study was based on endoscope imaging, the developed methods can be extended to any circularly symmetric imaging device. Software based on this paper will soon be developed and will be available to the public upon request.