Improved iris localization by using wide and narrow field of view cameras for iris recognition

Abstract. Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between the user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which inevitably decreases both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is novel in the following four ways compared to previous studies. First, the device used in our research acquires three images, one of the face and one of each iris, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data on the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple transformation matrices according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the geometric transformation matrix corresponding to the estimated Z distance. Experimental results showed that the proposed iris localization method outperforms conventional methods in terms of accuracy and processing time.


Introduction
With the increasing security requirements of the current information society, personal identification is becoming increasingly important. Conventional methods for the identification of an individual include possession-based methods that use specific items (such as smart cards, keys, and tokens) and knowledge-based methods that use what the individual knows (such as a password or a PIN). These methods have the disadvantage that tokens and passwords can be shared, misplaced, duplicated, lost, or forgotten.1 Therefore, over the last few decades, a new method called biometrics has been attracting attention as a promising identification technology. This technique uses a person's physiological or behavioral traits, such as the iris, face, fingerprint, gait, or voice.4 In particular, iris recognition identifies a person based on the unique iris pattern that exists in the iris region between the sclera and the pupil. As compared with other biometric features, the iris pattern has the characteristics of uniqueness and stability throughout one's life2 and is not affected even by laser eye surgery.3 In addition, not only both irises of the same person but also the irises of identical twins are reported to be different.2 These properties make iris recognition one of the most accurate biometric methods for identification.

An iris recognition system is composed of four steps, namely the acquisition of the iris image, iris localization, feature extraction, and matching. In iris localization, the iris region between the pupil (inner boundary of the iris region) and the sclera (outer boundary of the iris region) in a captured eye image is isolated. It is important to detect the inner and outer boundaries precisely, since the exactness of the iris localization greatly influences the subsequent feature extraction and matching. Furthermore, accurate iris localization consumes much processing time.5 Thus, iris localization plays a key role in an iris recognition system, since the performance of iris recognition depends on the iris localization accuracy. Generally, there are two major approaches for iris localization,6 one of which is based on circular edge detection (CED), such as the Hough transform, and the other on histograms.
Most iris recognition systems have deployed these kinds of methods, and their performances are affected by whether the iris camera is equipped with a zoom lens.10,11 Kim et al. proposed a multimodal biometric system based on the recognition of the user's face and both irises.11 They used one wide field of view (WFOV) and one narrow field of view (NFOV) camera without panning, tilting, and autozooming functionalities, and thereby the system's volume was reduced. Based on the relationship between the iris sizes in the WFOV and NFOV images, they estimated the size of the iris in images captured by the NFOV camera, which was used for determining the searching range of the iris's radius in the iris localization algorithm.
Conversely, iris systems with autozooming functionality12-15 can automatically maintain a similar iris size in the input image regardless of the Z distance, which can greatly facilitate iris localization. However, the size and complexity of these systems are increased due to the autozoom lens. Dong et al. designed an iris imaging system that uses NFOV and WFOV cameras with a pan-tilt-zoom unit with which the user can actively interact and which extends the working range; however, it still has limitations in terms of size and complexity.15 Therefore, in this study, our objective is to improve the performance of iris localization based on the relationship between WFOV and NFOV cameras. Our system has one WFOV and two NFOV cameras with a fixed zoom lens and uses the relation between the WFOV and NFOV cameras obtained by geometric transformation, without complex calibration. Consequently, the searching region for iris localization in the NFOV image is significantly reduced by using the iris region detected in the WFOV image, and thus the performance of iris localization is enhanced.
Table 1 shows a summary of comparisons between previously published methods and the proposed method.
The remainder of this paper is structured as follows. In Sec. 2, the proposed method is presented. Experimental results and conclusions are given in Secs. 3 and 4, respectively.
2 Proposed Method

Overview of the Proposed Method
Figure 1 shows an overview of the proposed method. In an initial calibration stage, we capture the WFOV and NFOV images of the calibration pattern at predetermined Z distances from 19 to 46 cm, in 1-cm steps. Then, the matrices of geometric transformation are calculated from the corresponding four points of each captured image according to the Z distances. In the recognition stage, the WFOV and NFOV images of the user are captured simultaneously.

Table 1 compares iris recognition methods with and without autozooming functionality. Methods with autozooming12-15 capture iris images in which the iris size is almost constant irrespective of the change of Z distance, so only the iris position must be searched in the captured image; however, the structure for autozooming is complicated, large, and expensive. Methods without autozooming use either only narrow field of view (NFOV) cameras,8,9 where a wide field of view (WFOV) camera is not available for detecting the iris region in the NFOV image, or both WFOV and NFOV cameras, as in the proposed method.

The face and eye regions are detected in the WFOV image.
Then, the iris region in the WFOV image is detected to estimate the Z distance and is mapped to the iris candidate area of the NFOV image. In detail, since the detected iris region in the WFOV image contains the size and position data of the iris region, we can estimate the Z distance based on the iris size and anthropometric data. The mapping of the detected iris region to the iris candidate region of the NFOV image is done using a matrix of geometric transformation ($T_Z$: the precalculated matrix corresponding to the estimated Z distance). Then, the iris candidate region is redefined using the position of the corneal specular reflection (SR), and thereby iris localization is performed in the redefined iris region. Finally, iris recognition is conducted based on the iris code generated from the segmented iris region.

Proposed Capturing Device
Figure 2 shows the capturing device for acquiring the images of the WFOV and NFOV cameras simultaneously. It consists of a WFOV camera, two NFOV cameras, cold mirrors, and a near-infrared (NIR) illuminator (including 36 NIR light emitting diodes whose wavelength is 880 nm).11 We used three universal serial bus cameras (Webcam C600 by Logitech Corp.16) for the WFOV and two NFOV cameras, each of which can capture a 1600 × 1200 pixel image. The WFOV camera captures the user's face image, whereas the two NFOV cameras acquire both irises of the user. The NFOV cameras have an additional fixed-focus zoom lens to capture magnified images of the iris. In order to reduce the processing time of iris recognition, the size of the captured iris image is reduced to 800 × 600 pixels. Based on the detected size and position of the iris region, we perform the iris code extraction in the original 1600 × 1200 pixel NFOV image. Since a fixed-focus zoom lens is used, our device meets the resolution requirement for the iris image. The average diameter of the iris captured by the proposed device is 180 to 280 pixels within a Z distance operating range of 25 to 40 cm,11 where the Z distance is the distance between the camera lens and the user's eye. According to ISO/IEC 19794-6, an iris image in which the iris diameter is >200 pixels is regarded as good quality; 150 to 200 pixels is acceptable; and 100 to 150 pixels is marginal.11,17 Based on this criterion, we can consider our iris images to be of acceptable or good quality in terms of iris diameter. The cold mirror has the characteristic of transmitting NIR light while reflecting visible light. Therefore, the user can align his eye to the cold mirror according to the (visible light) reflection of his eye that he sees in the cold mirror, while his eye image illuminated by NIR light can be obtained by the NFOV camera behind the cold mirror. In order to remove environmental visible light from the NFOV image, an additional NIR filter is attached to the NFOV cameras.
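As a small illustration of this quality criterion, the following sketch (ours, not part of the standard or the authors' implementation) bins a measured iris diameter into the ISO/IEC 19794-6 categories cited above:

```python
def iris_quality(diameter_px: int) -> str:
    """Bin an iris diameter (in pixels) by the ISO/IEC 19794-6 criterion
    cited in the text: >200 good, 150-200 acceptable, 100-150 marginal."""
    if diameter_px > 200:
        return "good"
    if diameter_px >= 150:
        return "acceptable"
    if diameter_px >= 100:
        return "marginal"
    return "insufficient"

# The proposed device captures iris diameters of 180 to 280 pixels at
# Z distances of 25 to 40 cm, i.e., in the acceptable-to-good range.
print(iris_quality(180))  # acceptable
print(iris_quality(280))  # good
```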

Estimating the Iris Region of NFOV Image by Using Geometric Transformation
This section describes a method of estimating the iris region of the NFOV image. The objective of this approach is to reduce the searching region for iris localization in the NFOV image. In the calibration stage, two matrices of geometric transformation are obtained at each predetermined Z distance, as shown in Fig. 4 and Eq. (1).18,28 The first matrix relates the region of Fig. 3(a) to the area defined by the four red points in Fig. 3(c), and the second one relates the region of Fig. 3(b) to the area defined by the four blue points in Fig. 3(c). The first matrix ($T_Z$) is calculated by multiplying matrix $N$ with the inverse matrix of $W$, from which the eight parameters $a$ to $h$ can be obtained.
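Equation (1) itself was lost from the extracted text; a plausible reconstruction, assuming the standard four-point bilinear transform implied by the eight parameters $a$ to $h$ and the product of $N$ with the inverse of $W$, is:

$$
N = T_Z W, \qquad
T_Z = \begin{bmatrix} a & b & c & d \\ e & f & g & h \end{bmatrix}, \qquad
W = \begin{bmatrix}
W_{x1} & W_{x2} & W_{x3} & W_{x4} \\
W_{y1} & W_{y2} & W_{y3} & W_{y4} \\
W_{x1}W_{y1} & W_{x2}W_{y2} & W_{x3}W_{y3} & W_{x4}W_{y4} \\
1 & 1 & 1 & 1
\end{bmatrix},
$$

where $N$ holds the four corresponding NFOV points as columns $(N_{xi}, N_{yi})^{T}$, so that $T_Z = N W^{-1}$.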
In the same way, the second matrix ($T'_Z$) can be obtained by using the four blue points of Fig. 3(b) and the corresponding four blue points of Fig. 3(c).
After obtaining the matrices $T_Z$ according to the Z distance in the calibration stage, the user's iris position $(N'_x, N'_y)$ in the left NFOV image can be estimated using matrix $T_Z$ and the iris position $(W'_x, W'_y)$ of the left eye in the WFOV image, as shown in Eq. (2).
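Equation (2) is likewise missing from the extraction; under the same assumed bilinear form, the mapping would read:

$$
\begin{bmatrix} N'_x \\ N'_y \end{bmatrix}
= T_Z \begin{bmatrix} W'_x \\ W'_y \\ W'_x W'_y \\ 1 \end{bmatrix}.
$$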
In a similar way, the user's iris position in the right NFOV image can be estimated using matrix $T'_Z$ and the iris position of the right eye in the WFOV image.
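A minimal numerical sketch of this calibration and mapping under the bilinear form assumed above (NumPy only; the point coordinates are invented for illustration, not measured values):

```python
import numpy as np

def fit_bilinear(wfov_pts, nfov_pts):
    """Fit T (2x4) so that nfov = T @ [x, y, x*y, 1]^T holds for four
    corresponding points, following the assumed form of Eq. (1)."""
    W = np.array([[x, y, x * y, 1.0] for x, y in wfov_pts]).T  # 4x4
    N = np.array(nfov_pts, dtype=float).T                      # 2x4
    return N @ np.linalg.inv(W)                                # T = N W^-1

def map_point(T, x, y):
    """Map a WFOV point into NFOV coordinates, as in the assumed Eq. (2)."""
    return T @ np.array([x, y, x * y, 1.0])

# Invented correspondences for one calibration Z distance: a small iris box
# in the WFOV image maps to a large region of the NFOV image.
wfov = [(700, 550), (740, 550), (740, 590), (700, 590)]
nfov = [(150, 180), (620, 175), (630, 640), (160, 650)]
T_z = fit_bilinear(wfov, nfov)
print(map_point(T_z, 720, 570))  # iris center mapped into the NFOV image
```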
In general, the detected position of the iris in the WFOV image is not accurate because the image resolution of the eye region in the WFOV image is low, as shown in Fig. 5. Therefore, we define the iris region in the WFOV image by four points, as shown in Figs. 5(b) and 5(c), instead of using the one point of the iris center. Figure 5 shows the result of face and iris detection in the WFOV image. First, we use the Adaboost algorithm to detect the face region.11,19 In order to reduce the effects of illumination variations, Retinex filtering is used for illumination normalization in the detected facial region.11,20 Then, the rapid eye detection (RED) algorithm is conducted to detect the iris region. It compares the intensity difference between the iris and its neighboring regions, which occurs because the iris region is usually darker.21 However, since only the approximate position and size of the iris region can be estimated by the RED algorithm, we perform CED to detect the iris position and size accurately.27,29 The iris region is determined at the position where the maximum difference between the gray levels of the inner and outer boundary points of the circular template (whose radius is changeable) is obtained.27 Consequently, we can determine the two iris regions of the left and right eyes in the WFOV image, $[(W'_{x1}, W'_{y1}), (W'_{x2}, W'_{y2}), (W'_{x3}, W'_{y3}), (W'_{x4}, W'_{y4})]$ and $[(W''_{x1}, W''_{y1}), (W''_{x2}, W''_{y2}), (W''_{x3}, W''_{y3}), (W''_{x4}, W''_{y4})]$, based on the center position and radius of the iris detected by CED, as shown in Figs. 5(b) and 5(c). With these two sets of four points, the matrices $T_Z$ and $T'_Z$, and Eq. (2), the corresponding two sets of four points, $[(N'_{x1}, N'_{y1}), \ldots, (N'_{x4}, N'_{y4})]$ and $[(N''_{x1}, N''_{y1}), \ldots, (N''_{x4}, N''_{y4})]$, are calculated in the left and right NFOV images as the iris regions, respectively. To map the two iris regions in the WFOV image onto those in the left and right NFOV images, the matrices $T_Z$ and $T'_Z$ of Eq. (1) should be selected corresponding to the Z distance. However, it is difficult to estimate the Z distance from the camera to the user using only one WFOV camera.
In previous research, Dong et al. used one WFOV camera to detect the face and estimate the Z distance.14 After detecting the rectangular face area, they used the width or height of the face area to estimate the Z distance. However, this method has limitations because there are individual variations in facial size. To overcome this limitation, Lee et al. used the least squares regression method to enhance the accuracy of the Z distance estimation by updating the model parameters for estimating the Z distance.23 However, their method requires user-dependent calibration at the initial stage to obtain the user-specific parameters.
Considering the limitations of the previous research, we propose a new method for estimating the Z distance between the user's eye and the camera lens based on the camera optical model and anthropometric data of the human iris size. Figure 6 shows a conventional camera optical model,23 where Z represents the distance between the camera lens and the object, V is the distance between the image plane and the camera lens, W and w are the object sizes in the scene and in the image plane, respectively, and f is the camera focal point. According to this model,23 we can obtain the relationship among Z, V, W, and w, as shown in Eq. (3). Since a lens of fixed focal length is used in our WFOV camera, V is a constant value. Therefore, in the calibration stage, we can calculate V by using the measured object size in the scene (W), that captured in the image plane (w), and the measured Z distance, based on Eq. (4). Consequently, with the calculated V, we can estimate the Z distance between the user's iris and the lens based on the iris diameter w (detected by CED) in the WFOV image and the anthropometric data of the human iris size W, by using Eq. (3).
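Equations (3) and (4) were also dropped from the extracted text; from the similar-triangles relationship of the camera optical model described above, they presumably read:

$$
Z = \frac{V \cdot W}{w}, \qquad (3)
$$

$$
V = \frac{Z \cdot w}{W}, \qquad (4)
$$

where Eq. (4) is simply Eq. (3) rearranged to recover the constant $V$ during calibration.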
Since the upper and lower iris regions are usually occluded by eyelids, we use the horizontal visible iris diameter (HVID) as the anthropometric data of the human iris size.24-26 Hall et al. measured the HVID, where the range of measured HVIDs was 9.26 to 13.22 mm,24 and we used these values as the range of W. Consequently, we can obtain the minimum and maximum values of the Z distance according to the range of W. Since the transformation matrix [$T_Z$ of Eq. (1)] from the area of the WFOV image to that of the NFOV image is defined according to the Z distance, multiple $T_Z$ values are determined according to the range of estimated Z distances, which produces multiple positions of the iris area in the NFOV images, as shown in Fig. 7.
Figure 7 shows examples of the estimated iris positions in the left NFOV image according to the estimated Z distance range and the corresponding searching region. The red and blue points in Fig. 7 show the mapped iris positions, which are calculated by the transformation matrices [$T_Z$ of Eq. (1)] based on the minimum and maximum Z distances, respectively. We confirmed that the green points, which are mapped by the transformation matrix based on the ground-truth Z distance, are included in the candidate searching region, which is defined by the blue points at the upper left and the red points at the lower right. As shown in Fig. 7, the searching region for iris localization can be reduced from the entire area of the NFOV image to the candidate searching region. The candidate searching region in the right NFOV image can be determined by the same method.
Based on the camera optical model, assume that an iris of 9.26 mm is projected into the WFOV image with size w, and that an iris of 13.22 mm is projected with size w′. Based on Eqs. (3) and (4), we obtain the relationship w = V·W∕Z; hence, at the same Z distance, the w of the 9.26-mm iris (W) is smaller than the w′ of the 13.22-mm iris (W′). Conversely, if w is equal to w′, the 9.26-mm iris must be closer to the camera (smaller Z distance) than the 13.22-mm one. In this research, because we do not know the actual iris size of each user (between 9.26 and 13.22 mm), we use the one w value measured in the WFOV image and Eq. (3) to calculate the minimum value of the Z distance (based on the iris size of 9.26 mm) and the maximum one (based on the iris size of 13.22 mm), respectively. That is, because the iris that is closer to the camera (at the minimum Z distance) has a smaller size (9.26 mm) than that (13.22 mm) at the maximum Z distance, the size of the 9.26-mm iris in the NFOV image is similar to that of the 13.22-mm iris. Therefore, we do not use a larger searching region at the minimum Z distance and define the estimated iris regions to be of the same size in Fig. 7.
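A minimal sketch of this Z-range estimation, assuming the reconstructed Eq. (3) above (the lens constant V and the measured diameter are invented values for illustration):

```python
def z_range(w_px: float, V: float) -> tuple[float, float]:
    """Estimate the min/max Z distance from the iris diameter w (pixels)
    measured in the WFOV image via Z = V*W/w (reconstructed Eq. (3)).
    V is the lens constant recovered in calibration; the units of Z
    follow the units used when V was calibrated."""
    HVID_MIN_MM, HVID_MAX_MM = 9.26, 13.22  # HVID range of Hall et al.
    z_min = V * HVID_MIN_MM / w_px  # a small (9.26-mm) iris would be closer
    z_max = V * HVID_MAX_MM / w_px  # a large (13.22-mm) iris would be farther
    return z_min, z_max

# Invented values: suppose calibration gave V = 90 and the CED measured
# a 35-pixel iris diameter in the WFOV image.
z_min, z_max = z_range(35.0, 90.0)
print(f"Z distance lies in [{z_min:.1f}, {z_max:.1f}] cm")
# Every precalculated T_Z whose 1-cm calibration step falls inside this
# range is then applied; the mapped points bound the candidate region.
```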

Iris Localization and Recognition
We use the candidate searching region of the NFOV image, as shown in Fig. 7, to further redefine the iris searching region by using the position of the SR, in order to improve the performance of iris localization. As shown in Fig. 8, the SR is located near the center of the pupil because the NIR illuminator is positioned close to the NFOV cameras, as shown in Fig. 2, and the user aligns both eyes to the two NFOV cameras through the cold mirrors.
We apply the RED algorithm21 to detect the position of the SR. It compares the intensity difference between the SR and its neighboring regions, which occurs because the SR region is much brighter. After detecting the position of the SR, the final searching region for iris localization is redefined in consideration of the iris size, as shown in Fig. 9.
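A minimal sketch in the spirit of this brightness-difference test (ours, not the authors' RED implementation): threshold the candidate region near saturation and take the centroid of the bright pixels.

```python
import numpy as np

def detect_sr(region: np.ndarray, margin: int = 10) -> tuple[int, int]:
    """Locate the corneal specular reflection (SR) inside a grayscale
    candidate region, assuming the SR is the small near-saturated blob
    that is much brighter than its surroundings."""
    thresh = max(200, int(region.max()) - margin)  # near-saturated pixels
    ys, xs = np.nonzero(region >= thresh)
    if len(xs) == 0:
        raise ValueError("no specular reflection found")
    return int(xs.mean()), int(ys.mean())          # centroid (x, y)

# Usage on a synthetic 200x200 patch with a bright 5x5 spot at (120, 80):
patch = np.full((200, 200), 60, dtype=np.uint8)
patch[78:83, 118:123] = 255
print(detect_sr(patch))  # approximately (120, 80)
```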
Using the final searching region, we perform two CEDs to isolate the iris region.22,27 In contrast with the CED for detecting the iris area in the WFOV image (Sec. 2.3), both the iris and the pupil regions are considered in the two CEDs.22,27 Because the final searching region is much reduced, as shown by the red box in Fig. 9, the consequent searching ranges of the parameters of the two CEDs (the two radii of the iris and pupil, and the two center positions of the iris and pupil) are also much decreased, which enhances both the accuracy and the speed of detecting the iris region.
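A simplified, self-contained sketch of CED over restricted parameter ranges (a toy version of the idea, not the authors' implementation):

```python
import numpy as np

def circular_edge_detect(img, cx_range, cy_range, r_range, n=64):
    """Simplified circular edge detection (CED): over a restricted search
    region, find the circle maximizing the gray-level difference between
    sample points just outside and just inside the circular template."""
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    cos_a, sin_a = np.cos(angles), np.sin(angles)
    h, w = img.shape
    best, best_score = None, -np.inf
    for cy in cy_range:
        for cx in cx_range:
            for r in r_range:
                xo = np.clip((cx + (r + 2) * cos_a).astype(int), 0, w - 1)
                yo = np.clip((cy + (r + 2) * sin_a).astype(int), 0, h - 1)
                xi = np.clip((cx + (r - 2) * cos_a).astype(int), 0, w - 1)
                yi = np.clip((cy + (r - 2) * sin_a).astype(int), 0, h - 1)
                score = (img[yo, xo].astype(float).mean()
                         - img[yi, xi].astype(float).mean())
                if score > best_score:
                    best, best_score = (cx, cy, r), score
    return best

# Synthetic test: a dark iris disk (radius 60) on a bright background.
img = np.full((300, 300), 200, dtype=np.uint8)
yy, xx = np.ogrid[:300, :300]
img[(xx - 150) ** 2 + (yy - 150) ** 2 <= 60 ** 2] = 60
print(circular_edge_detect(img, range(140, 161, 2),
                           range(140, 161, 2), range(50, 71, 2)))
```

Shrinking cx_range, cy_range, and r_range, as the proposed method does, directly cuts the number of circles evaluated, which is the source of the speed and accuracy gains reported in Sec. 3.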
After obtaining the center position and radius of both the iris and pupil regions, we detect the upper and lower eyelids and the eyelash region. The eyelid detection method extracts the candidate points of the eyelid using eyelid detecting masks and then detects an accurate eyelid region using a parabolic Hough transform.27 In addition, eyelash detecting masks are used to detect the eyelashes.27 Figure 10 shows examples of the detected iris, eyelid, and eyelash regions.
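The parabolic Hough step can be sketched as follows, assuming the common eyelid model $y = a(x - h)^2 + k$ (our toy version; the actual masks and parameterization of Ref. 27 may differ):

```python
import numpy as np
from collections import Counter

def parabolic_hough(points, a_vals, h_vals, k_step=2):
    """Toy parabolic Hough transform for eyelid fitting with the model
    y = a*(x - h)**2 + k. For each candidate (a, h), each edge point
    votes for the k it implies; the best-voted (a, h, k) wins."""
    votes = Counter()
    for x, y in points:
        for a in a_vals:
            for h in h_vals:
                k = int(round((y - a * (x - h) ** 2) / k_step)) * k_step
                votes[(a, h, k)] += 1
    (a, h, k), _ = votes.most_common(1)[0]
    return a, h, k

# Synthetic eyelid edge points on y = 0.002*(x - 320)**2 + 100, with noise.
rng = np.random.default_rng(0)
xs = np.arange(200, 441, 10)
pts = [(float(x), 0.002 * (x - 320) ** 2 + 100 + rng.normal(0, 0.5))
       for x in xs]
print(parabolic_hough(pts, a_vals=[0.001, 0.002, 0.003],
                      h_vals=range(300, 341, 5)))
# expected approximately (0.002, 320, 100)
```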
Figure 10(b) shows the resulting image after the detection of the iris, pupil, eyelid, and eyelashes is completed. Because our algorithm knows the detected positions of the eyelashes through the eyelash detection procedure,27 all the detected positions are painted as white pixels (whose gray level is 255). That is, except for the detected iris region, all other areas (such as the pupil, eyelashes, eyelid, sclera, and skin) are painted as white pixels, as shown in Fig. 10(b). Then, the image of Fig. 10(b) is transformed into that of Fig. 11, and our iris recognition system does not use the code bits extracted from the white-pixel areas for matching, because these code bits do not represent the features of the iris texture.27 The segmented iris region is transformed into an image of polar coordinates and is normalized as an image consisting of 256 sectors and 8 tracks.27 Figure 11 shows an example of the normalized image. Finally, an iris code is extracted from each sector and track based on a one-dimensional Gabor filter.27 We use the Hamming distance to calculate the dissimilarity between the enrolled iris code and the recognized code.27,30

3 Experimental Results

The proposed method for iris localization was tested using a desktop computer with an Intel Core i7 with 3.50 GHz speed and 8 GB RAM. The algorithm was developed with Microsoft Foundation Class based on C++ programming, and the image capturing module was implemented using a DirectX 9.0 software development kit.
To evaluate the performance of our proposed method, we acquired WFOV and NFOV images using the proposed capturing device within a Z distance range of 25 to 40 cm. The ground-truth Z distance was measured by a laser rangefinder (BOSCH DLE 70 Professional).7 Figure 12 shows the captured images according to the Z distance. The collected database has 3450 images, consisting of 1150 image sets [WFOV, NFOV (left iris), and NFOV (right iris)].

We compared the performance of the iris localization method according to whether size or position data for the iris were available. When only the two CEDs of Sec. 2.4 are used for detecting the iris region, no data related to the size and position of the iris are applied during the iris localization procedure. On the other hand, when the SR detection (explained in Sec. 2.4) is additionally applied to the CED, the range of the iris position is reduced based on the SR detection. In the case when the relationship between the iris sizes in the WFOV and NFOV images11 is applied, the size of the iris is approximately estimated. Finally, the proposed method uses both the position and the size data of the iris for iris localization. Figure 13 shows examples of the results of iris localization using the methods mentioned above. As shown in Fig. 13, the accuracy of the iris segmentation of the proposed method is better than that of the other methods.
In Fig. 13(b), neither the searching position of the iris center nor the searching range of the iris diameter is restricted. That is, the iris boundary is searched in the entire area of the image, and the searching range of the iris diameter is ∼180 to 320 pixels. Since our iris camera uses a fixed-focus zoom lens, the variation of the iris size in the captured image is large according to the user's Z distance, as shown in Fig. 12. Therefore, this wide searching range of the iris diameter (∼180 to 320 pixels) must be used in order to detect iris areas of various sizes. Accordingly, the searching positions and the range of the diameter are large, the possibility of incorrect detection of the iris region increases, and the incorrect detection of Fig. 13(b) occurs. In Fig. 13(c), the searching position of the iris center is restricted, but the searching range of the iris diameter is not. That is, the iris boundary is searched only in the restricted region (red box), which is defined by the SR positions detected as in Sec. 2.4. Although the searching positions are reduced to the restricted region, the wide searching range of the iris diameter (∼180 to 320 pixels) is still used, which causes the incorrect detection of the iris region shown in Fig. 13(c). In Fig. 13(d), the searching position of the iris center is not restricted, but the searching range of the iris diameter is. That is, the searching range of the iris diameter is reduced to ∼240 to 280 pixels because the iris size is estimated based on the method of Ref. 11. However, because the searching positions are not estimated, the iris boundary is searched in the entire area of the image, as in Fig. 13(b). Consequently, the possibility of incorrect detection of the iris region increases, and the incorrect detection of Fig. 13(d) occurs.
In Fig. 13(e) (proposed method), both the searching position of the iris center (the red box) and the searching range of the iris diameter are restricted by the estimation based on the iris size in the WFOV image, the anthropometric data of the human iris size, and the geometric transformation matrix according to the estimated Z distance. In addition, the searching range of the iris diameter is accurately estimated by considering the camera optical model and the anthropometric data of the human iris size, which differs from the method of Ref. 11. Consequently, the iris region is correctly detected, as shown in Fig. 13(e). In the next experiment, as shown in Table 2, the accuracy of iris recognition with the above iris localization methods was measured in terms of the equal error rate (EER). The EER is defined as the error rate at which the false rejection rate (FRR) and the false acceptance rate (FAR) are almost the same; it has been widely used as the performance criterion of biometric systems.11 The FRR is the error rate of rejecting an enrolled person as an unenrolled one, whereas the FAR is that of accepting an unenrolled person as an enrolled one.
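A minimal sketch of how an EER can be computed from genuine and impostor Hamming-distance distributions (the score distributions below are invented, not the paper's data):

```python
import numpy as np

def eer(genuine, impostor):
    """Sweep the decision threshold over Hamming distances and return the
    operating point where FRR (genuine rejected) and FAR (impostor
    accepted) are closest, i.e., the EER as defined in the text."""
    thresholds = np.linspace(0.0, 1.0, 1001)
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    frr = np.array([(genuine > t).mean() for t in thresholds])
    far = np.array([(impostor <= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2.0, thresholds[i]

# Invented score distributions for illustration only.
rng = np.random.default_rng(1)
gen = rng.normal(0.25, 0.05, 2000)   # genuine Hamming distances
imp = rng.normal(0.45, 0.03, 2000)   # impostor Hamming distances
rate, thr = eer(gen, imp)
print(f"EER = {100 * rate:.2f}% at threshold {thr:.3f}")
```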
Figure 14 shows the receiver operating characteristic (ROC) curves of the proposed method and the other methods. The ROC curve is composed of the values of the genuine acceptance rate (GAR) according to the FAR, where GAR is "100 − FRR" (%). For both the left and right NFOV images, we confirmed that the accuracy of the proposed method is higher than that of the other methods.
When iris localization is performed without the size and position data of the iris (i.e., only by the two CEDs), its accuracy is lower than that of the other methods. On the other hand, when size or position data can be used, the accuracy is higher. Furthermore, when both the size and position data are used, our proposed method is superior to the others, as shown in Figs. 13 and 14 and Table 2.
In addition, we compared the average processing time of detecting the iris region, as shown in Table 3.
The concept of detecting the iris region using the two CEDs without size or position data was used in a previous study.27 When only the two CEDs are used for detecting the iris region without size or position data, the processing time is longer than that of the other methods. Our proposed method accomplished not only accurate detection but also fast processing, as shown in Tables 2 and 3. In Table 4, we show the measured processing time of each part of our method. A comparison of Tables 3 and 4 shows that the processing times of the last two steps of Table 4, "eyelid and eyelash detection" and "iris code extraction and matching," are not included in Table 3.
The reason we search the iris region in the 1600 × 1200 pixel WFOV image rather than in the two 800 × 600 pixel NFOV images is as follows. In order to detect the iris region in the NFOV images, we could use the RED algorithm (Sec. 2.3) or the two-CED method (Sec. 2.4). However, applying the two CEDs to the entire area of the two NFOV images takes too much time. In addition, the detection accuracy of searching the iris region in the entire area of the image with the wide searching range of the iris diameter becomes lower, as shown in Fig. 13(b).
We could then consider the RED method as an alternative. However, since our iris camera uses a fixed-focus zoom lens, the variation of the iris size in the captured image is large according to the user's Z distance, as shown in Fig. 12. Therefore, we would have to use a wide searching range of mask sizes (∼360 to 640 pixels) for the RED method in the NFOV image, which increases the processing time and the detection errors, as in Figs. 13(b) and 13(c). In addition, searching with RED over the entire area of the NFOV image can also increase the detection error and time. Consequently, we search the iris region in the 1600 × 1200 pixel WFOV image.

Conclusions
In this paper, we proposed a new method for enhancing the performance of iris localization based on WFOV and NFOV cameras. We used the relationship between the WFOV and NFOV cameras to perform iris localization accurately and quickly. The size and position data of the iris in the WFOV image are obtained using the Adaboost, RED, and CED methods. Then, the estimated Z distance is used for estimating the iris candidate region of the NFOV image with geometric transformation, where the Z distance is estimated based on the iris size in the WFOV image and anthropometric data. After defining the iris candidate region of the NFOV image, the final searching region is redefined by using the position of the SR, and thereby iris localization and recognition are conducted. Our experimental results showed that the proposed method outperformed other methods in terms of accuracy and processing time.
In future work, we intend to test the proposed method with more people in various environments and apply it to the multimodal biometric system based on recognition of the face and both irises.

Fig. 3
Fig. 3 Calibration pattern used for computing the geometric transformation matrix between the wide field of view (WFOV) and two narrow field of view (NFOV) image planes. (a) Captured image of the right NFOV camera of Fig. 2. (b) Captured image of the left NFOV camera of Fig. 2. (c) Captured WFOV image.

Fig. 5
Fig. 5 Result of face and iris detection in the WFOV image, where the detected left and right iris regions are each defined by four points, as shown in (b) and (c).

Fig. 7
Fig. 7 Examples of the estimated iris positions in the left NFOV image according to the estimated Z distance range and corresponding searching region.

Fig. 11
Fig. 11 Example of the normalized image of the left eye of Fig. 10(b).

Fig. 13
Fig. 13 Examples of the segmented iris region according to the iris localization method. (a) Original image. (b) Result image using only two circular edge detections (CEDs). (c) Result image of the two CEDs with the position data using the SR detection of Sec. 2.4. (d) Result image of the two CEDs with the size data given in Ref. 11. (e) Result image of the two CEDs with the size and position data (proposed method).

Fig. 14
Fig. 14 Receiver operating characteristic curves of the proposed method and other methods. (a) Result of the left NFOV image. (b) Result of the right NFOV image.

Table 1
Comparison of previous methods and the proposed method.10

Table 2
Comparison of equal error rates (EERs) of iris recognition with different iris localization methods (unit: %).

Table 3
Comparison of processing time (unit: ms).

Table 4
Processing time of each part of our method (unit: ms).