Improved iris localization by using wide and narrow field of view cameras for iris recognition
Abstract
Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user’s eye and the Z distance between the user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which inevitably decreases both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new as compared to previous studies in the following four ways. First, the device used in our research acquires three images (one of the face and one of each iris) using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user’s eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data on the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple transformation matrices according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.

1. Introduction

With the increasing security requirements of the current information society, personal identification is becoming increasingly important. Conventional methods for the identification of an individual include possession-based methods that use specific items (such as smart cards, keys, or tokens) and knowledge-based methods that use what the individual knows (such as a password or a PIN). These methods have the disadvantage that tokens and passwords can be shared, misplaced, duplicated, lost, or forgotten.1 Therefore, over the last few decades, a new method called biometrics has been attracting attention as a promising identification technology. This technique uses a person’s physiological or behavioral traits, such as the iris, face, fingerprint, gait, or voice.4 In particular, iris recognition identifies a person based on the unique pattern of the iris region that lies between the sclera and the pupil. As compared with other biometric features, the iris pattern has the characteristics of uniqueness and stability throughout one’s life2 and is not affected even by laser eye surgery.3 In addition, not only the two irises of the same person but also the irises of identical twins are reported to be different.2 These properties make iris recognition one of the most accurate biometric methods for identification. An iris recognition system is composed of four steps: iris image acquisition, iris localization, feature extraction, and matching. In iris localization, the iris region between the pupil (inner boundary of the iris region) and the sclera (outer boundary of the iris region) is isolated in a captured eye image. It is important to detect the inner and outer boundaries precisely, since the exactness of the iris localization greatly influences the subsequent feature extraction and matching. Furthermore, accurate iris localization consumes considerable processing time.5 Thus, iris localization plays a key role in an iris recognition system, since the performance of iris recognition depends on the iris localization accuracy. Generally, there are two major approaches to iris localization,6 one based on circular edge detection (CED), such as the Hough transform, and the other on histograms.

Most iris recognition systems have deployed these kinds of methods, and their performances are affected by whether the iris camera is equipped with a zoom lens. For example, some systems do not use an autozoom lens in order to reduce the size and complexity of the system.8–11 Kim et al. proposed a multimodal biometric system based on the recognition of the user’s face and both irises.11 They used one wide field of view (WFOV) and one narrow field of view (NFOV) camera without panning, tilting, and autozooming functionalities, and thereby the system’s volume was reduced. Based on the relationship between the iris sizes in the WFOV and NFOV images, they estimated the size of the iris in images captured by the NFOV camera, which was used for determining the searching range of the iris radius in the iris localization algorithm.

Conversely, iris systems with autozooming functionality12–15 can automatically maintain a similar iris size in the input image regardless of the Z distance, which greatly facilitates iris localization. However, the size and complexity of these systems are increased by the autozoom lens. Dong et al. designed an iris imaging system that uses NFOV and WFOV cameras with a pan-tilt-zoom unit, with which the user can actively interact and which extends the working range; however, it still suffers from the limitations of size and complexity.15

Therefore, in this study, our objective is to achieve an improvement in the performance of iris localization based on the relationship between WFOV and NFOV cameras. Our system has one WFOV and two NFOV cameras with a fixed zoom lens and uses the relation between the WFOV and NFOV cameras obtained by geometric transformation, without complex calibration. Consequently, the searching region for iris localization in the NFOV image is significantly reduced by using the iris region detected in the WFOV image, and thus the performance of iris localization is enhanced.

Table 1 shows a summary of comparisons between previously published methods and the proposed method.

Table 1

Comparison of previous methods and the proposed method.

| Category | Method | Strengths | Weaknesses |
|---|---|---|---|
| With autozooming functionality12–15 | Capturing iris images whose iris sizes are almost similar irrespective of the change of Z distance | Searching the iris region in the captured image by changing only the iris position, without changing the iris size | The structure for autozooming is complicated, large, and expensive |
| Without autozooming functionality: only a narrow field of view (NFOV) camera,8,9 or wide field of view (WFOV) and NFOV cameras without using the WFOV camera for detecting the iris region in the NFOV image10 | Capturing iris images whose iris sizes change according to the Z distance; searching the iris region in a captured image by changing both iris position and size | The structure without autozooming is relatively small, inexpensive, and less complicated | The processing speed and accuracy of iris localization are limited because the iris region is searched by changing both iris position and size |
| Without autozooming functionality: using the WFOV camera for detecting the iris region in the NFOV image11 | Considering the relationship between WFOV and NFOV images in terms of iris size; searching the iris region in the captured image by changing only the iris position | The structure without autozooming is small, inexpensive, and less complicated; the processing speed and accuracy of iris localization are enhanced compared to systems that do not use a WFOV camera for detecting the iris region in the NFOV image | Iris searching by changing the iris position still causes a decrease in detection accuracy and speed |
| Without autozooming functionality: using the WFOV camera for detecting the iris region in the NFOV image (proposed method) | Considering the relationship between WFOV and NFOV images in terms of both the size and position of the iris; localizing the iris region in the determined region using the WFOV and NFOV cameras | The structure without autozooming is small, inexpensive, and less complicated; the processing speed and accuracy of iris localization are enhanced compared to the previous systems | Calibration between the WFOV and NFOV cameras according to the Z distance is required |

The remainder of this paper is structured as follows. In Sec. 2, the proposed method is presented. Experimental results and conclusions are given in Secs. 3 and 4, respectively.

2. Proposed Method

2.1. Overview of the Proposed Method

Figure 1 shows an overview of the proposed method. In an initial calibration stage, we capture the WFOV and NFOV images of the calibration pattern at predetermined Z distances from 19 to 46 cm, at 1-cm steps. Then, the matrices of geometric transformation are calculated from the corresponding four points of each captured image according to the Z distances. In the recognition stage, the WFOV and NFOV images of the user are captured simultaneously. The face and eye regions are detected in the WFOV image. Then, the iris region in the WFOV image is detected to estimate the Z distance and is mapped to the iris candidate area of the NFOV image. In detail, since the detected iris region in the WFOV image contains the size and position data of the iris region, we can estimate the Z distance based on the iris size and anthropometric data. The mapping of the detected iris region to the iris candidate region of the NFOV image is done using a matrix of geometric transformation (T_Z, the precalculated matrix corresponding to the estimated Z distance). Then, the iris candidate region is redefined using the position of the corneal specular reflection (SR), and thereby iris localization is performed in the redefined iris region. Finally, iris recognition is conducted based on the iris code generated from the segmented iris region.

Fig. 1

Overall procedure of the proposed method.


2.2. Proposed Capturing Device

Figure 2 shows the capturing device for acquiring the images of the WFOV and NFOV cameras simultaneously. It consists of a WFOV camera, two NFOV cameras, cold mirrors, and a near-infrared (NIR) illuminator (including 36 NIR light emitting diodes whose wavelength is 880 nm).11 We used three universal serial bus cameras (Webcam C600 by Logitech Corp.16) for the WFOV and two NFOV cameras, each of which can capture a 1600×1200-pixel image. The WFOV camera captures the user’s face image, whereas the two NFOV cameras acquire both irises of the user. The NFOV cameras have an additional fixed-focus zoom lens to capture magnified images of the iris. In order to reduce the processing time of iris recognition, the size of the captured iris image is reduced to 800×600 pixels. Based on the detected size and position of the iris region, we perform the iris code extraction in the original 1600×1200-pixel NFOV image. Since a fixed-focus zoom lens is used, our device meets the resolution requirement for the iris image. The average diameter of the iris captured by the proposed device is 180 to 280 pixels within a Z distance operating range of 25 to 40 cm.11 The Z distance is the distance between the camera lens and the user’s eye. According to ISO/IEC 19794-6, an iris image in which the iris diameter is >200 pixels is regarded as good quality; 150 to 200 pixels is acceptable; and 100 to 150 pixels is marginal.11,17 Based on this criterion, we can consider our iris images to be of acceptable or good quality in terms of iris diameter. The cold mirror has the characteristic of transmitting NIR light while reflecting visible light. Therefore, the user can align his eye with the cold mirror by means of the visible-light reflection of his eye in the mirror, while his eye image illuminated by NIR light is obtained by the NFOV camera behind the cold mirror. In order to remove environmental visible light from the NFOV image, an additional NIR filter is attached to the NFOV cameras.
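As a small illustration of the ISO/IEC 19794-6 criterion quoted above, the following Python helper (our illustration; the function name and grading labels are assumptions, not part of the authors' system) maps a measured iris diameter in pixels to a quality grade:

```python
def iris_quality_grade(diameter_px: int) -> str:
    """Grade an iris image by its iris diameter, following the
    ISO/IEC 19794-6 thresholds cited in the text: >200 pixels is
    good, 150 to 200 acceptable, and 100 to 150 marginal."""
    if diameter_px > 200:
        return "good"
    if diameter_px >= 150:
        return "acceptable"
    if diameter_px >= 100:
        return "marginal"
    return "insufficient"
```

With the 180- to 280-pixel diameters produced by the proposed device, this grading yields "acceptable" or "good" over the whole working range.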

Fig. 2

The proposed capturing device.


2.3. Estimating the Iris Region of the NFOV Image by Using Geometric Transformation

This section describes the method of estimating the iris region of the NFOV image. The objective of this approach is to map the iris region detected in the WFOV image to the estimated iris region in the NFOV image. As shown in Fig. 3, we captured images of the calibration pattern in the calibration stage in order to obtain the matrices of geometric transformation. The images are captured at the predetermined Z distances from 19 to 46 cm at 1-cm steps, and the matrices are then calculated from the corresponding four (manually selected) points of each captured image. These four points of each NFOV image [the four red and four blue points of Figs. 3(a) and 3(b), respectively] are the outermost points of the calibration pattern. As shown in Fig. 3(c), the corresponding two sets of four points in the WFOV image (four red points and four blue points) are also manually selected. Based on these points, the two transformation matrices (mapping functions) between the region in the WFOV image [(Wx1, Wy1), (Wx2, Wy2), (Wx3, Wy3), and (Wx4, Wy4)] and the region in the NFOV image [(Nx1, Ny1), (Nx2, Ny2), (Nx3, Ny3), and (Nx4, Ny4)] are obtained at each predetermined Z distance by using geometric transformation, as shown in Fig. 4 and Eq. (1).18,28 The first matrix relates the region of Fig. 3(a) to the area defined by the four red points in Fig. 3(c), and the second one relates the region of Fig. 3(b) to the area defined by the four blue points in Fig. 3(c). The first matrix (T_Z) is calculated by multiplying matrix N by the inverse of matrix W, which yields the eight parameters a to h.

Eq. (1)

$N = T_Z W$, i.e.,

$$
\begin{pmatrix}
N_{x1} & N_{x2} & N_{x3} & N_{x4}\\
N_{y1} & N_{y2} & N_{y3} & N_{y4}\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
=
\begin{pmatrix}
a & b & c & d\\
e & f & g & h\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
W_{x1} & W_{x2} & W_{x3} & W_{x4}\\
W_{y1} & W_{y2} & W_{y3} & W_{y4}\\
W_{x1}W_{y1} & W_{x2}W_{y2} & W_{x3}W_{y3} & W_{x4}W_{y4}\\
1 & 1 & 1 & 1
\end{pmatrix}.
$$

Fig. 3

Calibration pattern used for computing geometric transformation matrix between the wide field of view (WFOV) and two narrow field of view (NFOV) image planes. (a) Captured image of the right NFOV camera of Fig. 2. (b) Captured image of the left NFOV camera of Fig. 2. (c) Captured WFOV image.


Fig. 4

Relation between the regions of the WFOV and NFOV image planes.


In the same way, the second matrix (T′_Z) can be obtained by using the four blue points of Fig. 3(b) and the corresponding four blue points of Fig. 3(c).

After obtaining the matrices T_Z and T′_Z for each Z distance in the calibration stage, the user’s iris position (Nx, Ny) in the left NFOV image can be estimated using matrix T′_Z and the iris position (Wx, Wy) of the left eye in the WFOV image, as shown in Eq. (2).

Eq. (2)

$$
\begin{pmatrix} N_x \\ N_y \\ 0 \\ 0 \end{pmatrix}
=
\begin{pmatrix}
a & b & c & d\\
e & f & g & h\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix} W_x \\ W_y \\ W_x W_y \\ 1 \end{pmatrix}.
$$

In a similar way, the user’s iris position in the right NFOV image can be estimated using matrix T_Z and the iris position of the right eye in the WFOV image.
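The matrix construction of Eq. (1) and the point mapping of Eq. (2) reduce to a small linear-algebra routine. The following NumPy sketch (our illustration, not the authors' implementation; the function names are assumptions) fits a transformation matrix from the four manually selected point pairs at one calibration Z distance and then maps a WFOV point into the NFOV image plane:

```python
import numpy as np

def fit_transform(wfov_pts, nfov_pts):
    """Solve N = T_Z * W of Eq. (1) for one calibration Z distance.

    wfov_pts, nfov_pts: four corresponding (x, y) points in the WFOV
    and NFOV images. Returns the 4x4 matrix whose first two rows
    contain the eight parameters a to h.
    """
    W = np.array([[x, y, x * y, 1.0] for (x, y) in wfov_pts]).T  # 4x4
    N = np.array([[x, y, 0.0, 0.0] for (x, y) in nfov_pts]).T    # 4x4
    return N @ np.linalg.inv(W)  # T_Z = N * W^-1

def map_point(T_z, wx, wy):
    """Map a WFOV point to the NFOV image plane via Eq. (2)."""
    nx, ny, _, _ = T_z @ np.array([wx, wy, wx * wy, 1.0])
    return nx, ny
```

One such matrix is fitted per NFOV camera and per calibration Z distance (19 to 46 cm at 1-cm steps), giving the lookup table of T_Z and T′_Z matrices used in the recognition stage.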

In general, the detected position of the iris in the WFOV image is not accurate because the image resolution of the eye region in the WFOV image is low, as shown in Fig. 5. Therefore, we define the iris region in the WFOV image by four points, as shown in Figs. 5(b) and 5(c), instead of by the single point of the iris center. Consequently, four points [(Wx1, Wy1), (Wx2, Wy2), (Wx3, Wy3), and (Wx4, Wy4)] are determined from the left iris of the WFOV image, and the other four points [(W′x1, W′y1), (W′x2, W′y2), (W′x3, W′y3), and (W′x4, W′y4)] are determined from the right iris. With these two sets of four points, the corresponding matrices (T′_Z for the left and T_Z for the right NFOV image), and Eq. (2), the two sets of four points [(Nx1, Ny1), (Nx2, Ny2), (Nx3, Ny3), (Nx4, Ny4)] and [(N′x1, N′y1), (N′x2, N′y2), (N′x3, N′y3), (N′x4, N′y4)] in the left and right NFOV images are calculated as the iris regions, respectively.

Fig. 5

Result of face and iris detection in the WFOV image. (a) The result in the WFOV image. (b) Magnified image showing the detected iris region and four points of the left eye [(Wx1, Wy1), (Wx2, Wy2), (Wx3, Wy3), and (Wx4, Wy4)]. (c) Magnified image showing the detected iris region and four points of the right eye [(W′x1, W′y1), (W′x2, W′y2), (W′x3, W′y3), and (W′x4, W′y4)].


Figure 5 shows the result of face and iris detection in the WFOV image. In the recognition stage, the two iris regions in the WFOV image, [(Wx1, Wy1), (Wx2, Wy2), (Wx3, Wy3), (Wx4, Wy4)] and [(W′x1, W′y1), (W′x2, W′y2), (W′x3, W′y3), (W′x4, W′y4)], are determined based on face and eye detection, as shown in Figs. 5(b) and 5(c).

First, we use the Adaboost algorithm to detect the face region.11,19 In order to reduce the effects of illumination variations, Retinex filtering is used for illumination normalization in the detected facial region.11,20 Then, the rapid eye detection (RED) algorithm is used to detect the approximate iris region. It compares the intensity difference between the iris and its neighboring regions, which occurs because the iris region is usually darker.21 However, since only the approximate position and size of the iris region can be estimated by the RED algorithm, we perform CED to detect the iris position and size accurately.27,29 The iris region is determined at the position where the maximum difference between the gray levels of the inner and outer boundary points of a circular template (whose radius is changeable) is obtained.27 Consequently, we can determine the two iris regions of the left and right eyes in the WFOV image, [(Wx1, Wy1), (Wx2, Wy2), (Wx3, Wy3), (Wx4, Wy4)] and [(W′x1, W′y1), (W′x2, W′y2), (W′x3, W′y3), (W′x4, W′y4)], based on the center position and radius of the iris detected by CED, as shown in Figs. 5(b) and 5(c). Then, in order to map the two iris regions in the WFOV image onto those in the left and right NFOV images, the matrices T′_Z and T_Z of Eq. (1) should be selected according to the Z distance. However, it is difficult to estimate the Z distance from the camera to the user using only one WFOV camera.
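For concreteness, the following sketch shows a brute-force CED of the kind described above: it scores each candidate circle by the gray-level difference between points just outside and just inside a circular template, and keeps the maximum. This is our simplified illustration; the authors' CED27 may differ in sampling and scoring details:

```python
import numpy as np

def circular_edge_detection(gray, centers, radii, n_samples=64):
    """Find the circle (cx, cy, r) maximizing the gray-level difference
    between points just outside and just inside a circular template.

    gray: 2-D uint8 image; centers: iterable of candidate (cx, cy);
    radii: iterable of candidate radii in pixels.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    cos_a, sin_a = np.cos(angles), np.sin(angles)
    h, w = gray.shape
    best, best_score = None, -np.inf
    for cx, cy in centers:
        for r in radii:
            # sample the outer (r + 2) and inner (r - 2) boundary points
            xo = np.clip((cx + (r + 2) * cos_a).astype(int), 0, w - 1)
            yo = np.clip((cy + (r + 2) * sin_a).astype(int), 0, h - 1)
            xi = np.clip((cx + (r - 2) * cos_a).astype(int), 0, w - 1)
            yi = np.clip((cy + (r - 2) * sin_a).astype(int), 0, h - 1)
            # the iris is darker than its surroundings, so outer minus inner is large
            score = gray[yo, xo].mean() - gray[yi, xi].mean()
            if score > best_score:
                best, best_score = (cx, cy, r), score
    return best
```

The benefit of the proposed method is precisely that `centers` and `radii` can be kept small: the smaller these candidate sets, the faster and the more reliable the search.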

In previous research, Dong et al. used one WFOV camera to detect the face and estimate the Z distance.14 After detecting the rectangular face area, they used the width or height of the face area to estimate the Z distance. However, this method is limited because facial size varies among individuals. To overcome this limitation, Lee et al. used the least squares regression method to enhance the accuracy of the Z distance estimation by updating the model parameters for estimating the Z distance.23 However, their method requires user-dependent calibration at the initial stage to obtain the user-specific parameters.

Considering the limitations of these previous studies, we propose a new method for estimating the Z distance between the user’s eye and the camera lens based on the detected iris size in the WFOV image and anthropometric data of the human iris size, which does not require initial user calibration. The details are as follows. Figure 6 shows a conventional camera optical model,23 where Z represents the distance between the camera lens and the object, V is the distance between the image plane and the camera lens, W and w are the object sizes in the scene and in the image plane, respectively, and f is the focal point of the camera. According to this model,23 we can obtain the relationship among Z, V, W, and w, as shown in Eq. (3):

Eq. (3)

$$Z = \frac{V \cdot W}{w}.$$

Fig. 6

Camera optical model.23


Since a lens of fixed focal length is used in our WFOV camera, V is a constant value. Therefore, in the calibration stage, we can calculate V by using the measured object size in the scene (W) and that captured in the image plane (w) with the measured Z distance (Z) based on Eq. (4):

Eq. (4)

$$V = \frac{Z \cdot w}{W}.$$

Consequently, with the calculated V, we can estimate the Z distance between the user’s iris and the lens based on the iris diameter w (detected by CED) in the WFOV image and anthropometric data of the human iris size W by using Eq. (3).

Since the upper and lower iris regions are usually occluded by the eyelids, we use the horizontal visible iris diameter (HVID) as the anthropometric measure of human iris size. Previous research studies have measured the HVID.24–26 Hall et al. measured HVIDs ranging from 9.26 to 13.22 mm,24 and we used these values as the range of W. Consequently, we can obtain the minimum and maximum values of the Z distance according to the range of W. Since the transformation matrix [T_Z of Eq. (1)] from the area of the WFOV image to that of the NFOV image is defined according to the Z distance, multiple T_Z matrices are selected according to the range of estimated Z distances, which produces multiple positions of the iris area in the NFOV images, as shown in Fig. 7.
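In code, the calibration of Eq. (4) and the Z-range estimate of Eq. (3) under the HVID bounds might look as follows (a minimal sketch; the unit conventions and function names are our assumptions):

```python
HVID_MIN_MM, HVID_MAX_MM = 9.26, 13.22  # anthropometric HVID range (Ref. 24)

def calibrate_v(z_cm, object_size_mm, image_size_px):
    """Eq. (4): V = Z * w / W, computed once in the calibration stage
    from an object of known size at a measured Z distance.
    V carries mixed units (cm * px / mm), which cancel in Eq. (3)."""
    return z_cm * image_size_px / object_size_mm

def z_distance_range(v, iris_diameter_px):
    """Eq. (3): Z = V * W / w, evaluated for the two extreme HVID
    assumptions. The smaller assumed iris yields the smaller Z."""
    z_min_cm = v * HVID_MIN_MM / iris_diameter_px
    z_max_cm = v * HVID_MAX_MM / iris_diameter_px
    return z_min_cm, z_max_cm
```

Because the user's true HVID is unknown, the single measured diameter w yields an interval [z_min, z_max] rather than a point estimate, and every transformation matrix within that interval is consulted.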

Fig. 7

Examples of the estimated iris positions in the left NFOV image according to the estimated Z distance range and corresponding searching region.


Figure 7 shows examples of the estimated iris positions in the left NFOV image according to the estimated Z distance range, together with the corresponding searching region. The red and blue points in Fig. 7 show the mapped iris positions calculated by the transformation matrices [T_Z of Eq. (1)] for the minimum and maximum Z distances, respectively. We confirmed that the green points, which are mapped by the transformation matrix for the ground-truth Z distance, are included in the candidate searching region defined by the blue points at the upper left and the red points at the lower right. As shown in Fig. 7, the searching region for iris localization can thus be reduced from the entire area of the NFOV image to the candidate searching region. The candidate searching region in the right NFOV image is determined by the same method.

Based on the camera optical model, suppose that an iris of 9.26 mm is projected into the WFOV image with size w, and that an iris of 13.22 mm is projected with size w′. From Eqs. (3) and (4), we obtain the relationship w = V·W/Z, so at the same Z distance, the projected size w of the 9.26-mm iris is smaller than the size w′ of the 13.22-mm iris. Conversely, if w equals w′, the 9.26-mm iris must be closer to the camera (smaller Z distance) than the 13.22-mm one. In this research, because we do not know the actual iris size of each user (between 9.26 and 13.22 mm), we take the single value of w measured in the WFOV image and, using Eq. (3), calculate the minimum Z distance (assuming an iris size of 9.26 mm) and the maximum Z distance (assuming 13.22 mm). Moreover, because the iris at the minimum Z distance is the smaller one (9.26 mm) while the iris at the maximum Z distance is the larger one (13.22 mm), their projected sizes in the NFOV image are similar. Therefore, we do not use a larger searching region at the minimum Z distance, and the estimated iris regions in Fig. 7 are defined with the same size.
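One way to realize the candidate searching region of Fig. 7 is to map the four WFOV iris points with the matrices for the extreme Z distances and take the bounding box of all mapped points. This sketch assumes the calibration matrices are stored in a dict keyed by the integer Z distance in centimeters (our assumption, matching the 1-cm calibration steps):

```python
import numpy as np

def candidate_region(T_by_z, wfov_iris_pts, z_min_cm, z_max_cm):
    """Bounding box (x1, y1, x2, y2) of the WFOV iris points mapped
    into the NFOV image at the rounded minimum and maximum Z."""
    mapped = []
    for z in (int(round(z_min_cm)), int(round(z_max_cm))):
        T = T_by_z[z]  # matrix fitted at this calibration distance
        for wx, wy in wfov_iris_pts:
            nx, ny, _, _ = T @ np.array([wx, wy, wx * wy, 1.0])
            mapped.append((nx, ny))
    xs, ys = zip(*mapped)
    return min(xs), min(ys), max(xs), max(ys)
```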

2.4. Iris Localization and Recognition

We use the candidate searching region of the NFOV image, as shown in Fig. 7, to further redefine the iris searching region by using the position of the SR in order to improve the performance of iris localization. As shown in Fig. 8, the SR is located near the center of the pupil because the NIR illuminator is positioned close to the NFOV cameras, as shown in Fig. 2, and the user aligns both eyes to the two NFOV cameras through the cold mirror.

Fig. 8

Examples of the detected specular reflection (SR) in the captured NFOV images according to Z distance: (a) 25 cm, (b) 40 cm.


We applied the RED algorithm21 to detect the position of the SR. It compares the intensity difference between the SR and its neighboring regions, which occurs because the SR region is much brighter. After detecting the position of the SR, the final searching region for iris localization is redefined in consideration of the iris size, as shown in Fig. 9.
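A bright-spot detector in the spirit of this RED-based SR search can be sketched with a center-surround filter: score each pixel by the difference between a small local mean and a larger surrounding mean, and keep the maximum inside the candidate region. The kernel sizes here are our assumptions, not values from the paper:

```python
import numpy as np
import cv2

def detect_specular_reflection(gray, region):
    """Return the (x, y) of the strongest bright spot inside region.

    gray: 2-D uint8 NFOV image; region: (x1, y1, x2, y2) candidate box.
    """
    x1, y1, x2, y2 = region
    roi = gray[y1:y2, x1:x2].astype(np.float32)
    inner = cv2.boxFilter(roi, -1, (5, 5))    # local mean over the spot
    outer = cv2.boxFilter(roi, -1, (21, 21))  # mean over the surroundings
    score = inner - outer                     # the SR is much brighter
    dy, dx = np.unravel_index(np.argmax(score), score.shape)
    return x1 + dx, y1 + dy
```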

Fig. 9

Final searching region for iris localization.


Using the final searching region, we perform two CEDs to isolate the iris region.22,27 In contrast with the CED used for detecting the iris area in the WFOV image (Sec. 2.3), both the iris and the pupil boundaries are detected by the two CEDs.22,27 Because the final searching region is greatly reduced, as shown by the red box in Fig. 9, the searching ranges of the parameters of the two CEDs (the radii of the iris and pupil, and the center positions of the iris and pupil) are also greatly reduced, which enhances both the accuracy and the speed of detecting the iris region.

After obtaining the center position and radius of both the iris and pupil regions, we detect upper and lower eyelids, and the eyelash region. The eyelid detection method extracts the candidate points of the eyelid using eyelid detecting masks and then detects an accurate eyelid region using a parabolic Hough transform.27 In addition, eyelash detecting masks are used to detect the eyelashes.27 Figure 10 shows examples of the detected iris, eyelid, and eyelash regions.

Fig. 10

Examples of the detected iris, eyelid, and eyelash regions. (a) Original images. (b) Result images.


Figure 10(b) shows the result image after the detection of the iris, pupil, eyelid, and eyelashes is completed. Because our algorithm knows the detected eyelash positions from the eyelash detection procedure,27 all such positions are painted as white pixels (gray level 255). That is, except for the detected iris region, all other areas (such as the pupil, eyelashes, eyelid, sclera, and skin) are painted white, as shown in Fig. 10(b). The image of Fig. 10(b) is then transformed into that of Fig. 11, and our iris recognition system does not use the code bits extracted from the white areas for matching, because these bits do not represent the texture of the iris.27

Fig. 11

Example of the normalized image of the left eye of Fig. 10(b).


The segmented iris region is transformed into polar coordinates and normalized into an image consisting of 256 sectors and 8 tracks.27 Figure 11 shows an example of the normalized image. Finally, an iris code is extracted from each sector and track based on a one-dimensional Gabor filter.27 We use the Hamming distance to calculate the dissimilarity between the enrolled iris code and the recognized code.27,30
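The normalization and matching steps can be illustrated as follows. This is a simplified rubber-sheet normalization and a masked Hamming distance in the spirit of the cited approach,27,30 not the authors' exact implementation; in particular, the sampling here assumes both fitted circles lie fully inside the image:

```python
import numpy as np

def normalize_iris(gray, pupil, iris, sectors=256, tracks=8):
    """Unwrap the iris ring into a tracks x sectors polar image.

    pupil, iris: circles as (cx, cy, r) from the two CEDs.
    """
    px, py, pr = pupil
    ix, iy, ir = iris
    out = np.zeros((tracks, sectors), dtype=np.uint8)
    for t in range(tracks):
        frac = (t + 0.5) / tracks  # radial position between boundaries
        for s in range(sectors):
            theta = 2.0 * np.pi * s / sectors
            # interpolate between the pupil and iris boundary points
            x = (1 - frac) * (px + pr * np.cos(theta)) + frac * (ix + ir * np.cos(theta))
            y = (1 - frac) * (py + pr * np.sin(theta)) + frac * (iy + ir * np.sin(theta))
            out[t, s] = gray[int(y), int(x)]
    return out

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over bits valid in both codes;
    bits from white-painted (occluded) areas are excluded by the masks."""
    valid = mask_a & mask_b
    if not valid.any():
        return 1.0
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)
```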

3. Experimental Results

The proposed iris localization method was tested on a desktop computer with an Intel Core i7 CPU running at 3.50 GHz and 8 GB of RAM. The algorithm was implemented in C++ using the Microsoft Foundation Class library, and the image-capturing module was implemented using the DirectX 9.0 software development kit.

To evaluate the performance of our proposed method, we acquired WFOV and NFOV images using the proposed capturing device within a Z distance range of 25 to 40 cm. The ground-truth Z distance was measured with a laser rangefinder (Bosch DLE 70 Professional).7 Figure 12 shows the captured images according to the Z distance. The collected database contains 3450 images in total: 1150 images each from the WFOV, left-iris NFOV, and right-iris NFOV cameras, captured from 30 subjects. Because no training procedure is required for our method, the entire database was used for testing.

Fig. 12

Examples of the captured WFOV and NFOV images.


We compared the performance of the iris localization methods according to whether size or position data for the iris were available. When only the two CEDs of Sec. 2.4 are used for detecting the iris region, no data related to the size and position of the iris are applied during iris localization. When the SR detection (explained in Sec. 2.4) is additionally applied to the CED, the range of iris positions is reduced based on the detected SR. When the relationship between the iris sizes in the WFOV and NFOV images11 is applied, the iris size is approximately estimated. Finally, the proposed method uses both the position and size data of the iris for iris localization. Figure 13 shows examples of iris localization results using these methods. As shown in Fig. 13, the iris segmentation accuracy of the proposed method is better than that of the other methods.

Fig. 13

Examples of the segmented iris region according to iris localization method. (a) Original image. (b) Result image using only two circular edge detections (CEDs). (c) Result image of the two CEDs with the position data using the SR detection of Sec. 2.4. (d) Result image of the two CEDs with the size data given in Ref. 11. (e) Result image of the two CEDs with the size and position data (proposed method).


In Fig. 13(b), neither the searching position of the iris center nor the searching range of the iris diameter is restricted. That is, the iris boundary is searched over the entire image, with a diameter searching range of 180 to 320 pixels. Since our iris camera uses a fixed-focus zoom lens, the iris size in the captured image varies widely with the user’s Z distance, as shown in Fig. 12; hence this wide diameter range must be used to detect irises of various sizes. Accordingly, the searching positions and diameter range are large, the possibility of incorrectly detecting the iris region increases, and the incorrect detection of Fig. 13(b) occurs. In Fig. 13(c), the searching position of the iris center is restricted, but the diameter range is not. That is, the iris boundary is searched only in the restricted region (red box) defined by the SR positions detected as in Sec. 2.4. Although the searching positions are reduced, the wide diameter range (180 to 320 pixels) is still used, which causes the incorrect detection seen in Fig. 13(c). In Fig. 13(d), the searching position of the iris center is not restricted, but the diameter range is reduced to 240 to 280 pixels because the iris size is estimated by the method of Ref. 11. However, because the searching positions are not estimated, the iris boundary is searched over the entire image, as in Fig. 13(b); consequently, the possibility of incorrect detection increases, and the incorrect detection of Fig. 13(d) occurs.

In Fig. 13(e) (proposed method), both the searching position of the iris center (the red box) and the searching range of the iris diameter are restricted, based on the iris size measured in the WFOV image, the anthropometric data of human iris size, and the geometric transformation matrix corresponding to the estimated Z distance. In addition, the diameter range is estimated accurately by considering the camera optical model and the anthropometric data, in contrast to the method of Ref. 11. Consequently, the iris region is correctly detected, as shown in Fig. 13(e).

In the next experiment, as shown in Table 2, the accuracy of iris recognition when the above iris localization methods were applied was measured in terms of equal error rate (EER). EER is defined as the error rate where the false rejection rate (FRR) and the false acceptance rate (FAR) are almost the same; it has been widely used as the performance criterion of biometric systems.11 The FRR is the error rate of rejecting an enrolled person as an unenrolled one, whereas the FAR is that of accepting an unenrolled person as an enrolled one.
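The EER can be estimated from the genuine and impostor Hamming-distance distributions by sweeping the decision threshold until the FRR and FAR cross; a standard estimator is sketched below (our illustration, not the authors' evaluation code):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return the EER given genuine and impostor distance samples.

    A sample is falsely rejected if a genuine distance exceeds the
    threshold, and falsely accepted if an impostor distance does not.
    """
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    best_gap, eer = np.inf, 1.0
    for thr in np.unique(np.concatenate([genuine, impostor])):
        frr = np.mean(genuine > thr)    # enrolled users rejected
        far = np.mean(impostor <= thr)  # unenrolled users accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return eer
```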

Table 2

Comparison of equal error rates (EERs) of iris recognition with different iris localization methods (unit: %).

| Iris localization method | Left | Right |
|---|---|---|
| Only two circular edge detections (CEDs) | 10.066 | 13.326 |
| Two CEDs with the position data provided by the specular reflection (SR) detection in Sec. 2.4 | 7.695 | 10.951 |
| Two CEDs with the size data given in Ref. 11 | 4.047 | 3.384 |
| Two CEDs with position and size data (proposed method) | 3.444 | 3.009 |

Figure 14 shows the receiver operating characteristic (ROC) curves of the proposed method and other methods. The ROC curve is composed of the values for the genuine acceptance rate (GAR) according to the FAR, where GAR is “100−FRR” (%). In the case of both the left and right NFOV images, we confirmed that the accuracy level of the proposed method is higher than that of other methods.

Fig. 14

Receiver operating characteristic curves of the proposed method and other methods. (a) Result of left NFOV image. (b) Result of right NFOV image.


When iris localization is performed without the size and position data of the iris (i.e., only by two CEDs), its accuracy is lower than that of other methods. On the other hand, when size or position data can be used, its accuracy is higher. Furthermore, when the size and position data are used, our proposed method is superior to the others, as shown in Figs. 13 and 14 and Table 2.

In addition, we compared the average processing time of detecting the iris region, as shown in Table 3.

Table 3

Comparison of processing time (unit: ms).

| Iris localization method | Left | Right |
|---|---|---|
| Only two CEDs | 2547.89 | 2046.51 |
| Two CEDs with the position data provided by the SR detection of Sec. 2.4 | 561.96 | 446.88 |
| Two CEDs with the size data according to Ref. 11 | 716.05 | 592.75 |
| Two CEDs with position and size data (proposed method) | 402.63 | 400.29 |

The concept of detecting the iris region using the two CEDs without size or position data was used in a previous study.27 When only the two CEDs are used to detect the iris region without size or position data, the processing time is longer than that of the other methods. Our proposed method achieved not only accurate detection but also fast processing, as shown in Tables 2 and 3. Table 4 shows the measured processing time of each part of our method. A comparison of Tables 3 and 4 shows that the processing times of the last two steps of Table 4, “Eyelid and eyelash detection” and “Iris code extraction and matching,” are not included in Table 3.

Table 4

Processing time of each part of our method (unit: ms).

| Stage | Step | Average | Total |
|---|---|---|---|
| Wide field of view (WFOV) image | Face detection by the Adaboost method | 143.75 | 201.28 |
| | Illumination normalization by the Retinex algorithm | 4.59 | |
| | Eye detection by the rapid eye detection method | 15.43 | |
| | Iris detection by CED | 37.51 | |
| Narrow field of view (NFOV) image | Estimating the iris region in the NFOV image | 50.26 | 200.12 |
| | Reducing the searching region by the SR detection | 18.03 | |
| | Iris detection by two CEDs | 131.83 | |
| | Eyelid and eyelash detection | 102.81 | 102.81 |
| | Iris code extraction and matching | 23.02 | 23.02 |
| Total processing time | | | 527.23 |

The reason we search the iris region in the 1600×1200-pixel WFOV image rather than in the two 800×600-pixel NFOV images is as follows. To detect the iris region in the NFOV images, we could use the RED algorithm (Sec. 2.3) or the two-CED method (Sec. 2.4). However, applying the two CEDs over the entire area of the two NFOV images takes too much time. In addition, the detection accuracy of searching the entire image with the wide searching range of the iris diameter is lower, as shown in Fig. 13(b).

The RED method could then be considered as an alternative. However, since our iris camera uses a fixed-focus zoom lens, the variation of the iris size in the captured image is large according to the user’s Z distance, as shown in Fig. 12. Therefore, a wide searching range of mask sizes (360 to 640 pixels) would be needed for the RED method in the NFOV image, which increases the processing time and the detection errors, as in Figs. 13(b) and 13(c). In addition, searching by RED over the entire area of the NFOV image also increases detection error and time. Consequently, we search the iris region in the 1600×1200-pixel WFOV image.

4. Conclusions

In this paper, we proposed a new method for enhancing the performance of iris localization based on WFOV and NFOV cameras. We used the relationship between the WFOV and NFOV cameras to perform iris localization accurately and quickly. The size and position data of the iris in the WFOV image are obtained using the Adaboost, RED, and CED methods. Then, an estimated Z distance is used for estimating the iris candidate region of the NFOV image with geometric transformation, where the Z distance is estimated based on the iris size in the WFOV image and anthropometric data. After defining the iris candidate region of the NFOV image, the final searching region is redefined by using the position of the SR, and thereby, iris localization and recognition are conducted. Our experimental results showed that the proposed method outperformed other methods in terms of accuracy and processing time.

In future work, we intend to test the proposed method with more people in various environments and apply it to the multimodal biometric system based on recognition of the face and both irises.

Acknowledgments

This research was supported by the Ministry of Science, ICT and Future Planning (MSIP), Korea, under the Information Technology Research Center (ITRC) support program (NIPA-2013-H0301-13-4007) supervised by the National IT Industry Promotion Agency (NIPA).

References

1. S. Prabhakar, S. Pankanti, and A. K. Jain, “Biometric recognition: security and privacy concerns,” IEEE Secur. Priv. 1(2), 33–42 (2003). http://dx.doi.org/10.1109/MSECP.2003.1193209

2. J. Daugman and C. Downing, “Epigenetic randomness, complexity and singularity of human iris patterns,” Proc. R. Soc. B 268(1477), 1737–1740 (2001). http://dx.doi.org/10.1098/rspb.2001.1696

3. A. Chandra, R. Durand, and S. Weaver, “The uses and potential of biometrics in health care: are consumers and providers ready for it?,” Int. J. Pharm. Healthcare Mark. 2(1), 22–34 (2008). http://dx.doi.org/10.1108/17506120810865406

4. Z. Zhu and T. S. Huang, Multimodal Surveillance—Sensors, Algorithms, and Systems, Artech House Inc., Norwood, MA (2007).

5. A. Basit and M. Y. Javed, “Localization of iris in gray scale images using intensity gradient,” Opt. Lasers Eng. 45(12), 1107–1114 (2007). http://dx.doi.org/10.1016/j.optlaseng.2007.06.006

6. M. T. Ibrahim et al., “Iris localization using local histogram and other image statistics,” Opt. Lasers Eng. 50(5), 645–654 (2012). http://dx.doi.org/10.1016/j.optlaseng.2011.11.008

7. “DLE 70 Professional,” http://www.bosch-professional.com/gb/en/dle-70-16847-ocs-p (September 2013).

9. J. R. Matey et al., “Iris on the move: acquisition of images for iris recognition in less constrained environments,” Proc. IEEE 94(11), 1936–1947 (2006). http://dx.doi.org/10.1109/JPROC.2006.884091

10. “iCAM 7000 series,” http://irisid.com/icam7000series (September 2013).

11. Y. G. Kim et al., “Multimodal biometric system based on the recognition of face and both irises,” Int. J. Adv. Robotic Syst. 9(65), 1–6 (2012).

12. K. Hanna et al., “A system for non-intrusive human iris acquisition and identification,” in Proc. of IAPR Workshop on Machine Vision Applications, 200–203 (1996).

13. Z. B. Zhang et al., “Fast iris detection and localization algorithm based on AdaBoost algorithm and neural networks,” in Proc. Int. Conf. on Neural Networks and Brain, 1085–1088 (2005).

14. W. Dong et al., “Self-adaptive iris image acquisition system,” Proc. SPIE 6944, 694406 (2008). http://dx.doi.org/10.1117/12.777516

15. W. Dong, Z. Sun, and T. Tan, “A design of iris recognition system at a distance,” in Proc. of Chinese Conference on Pattern Recognition, 1–5 (2009).

17. “Information technology—Biometric data interchange formats—Iris image data,” ISO/IEC 19794-6 (2005).

18. J. W. Lee et al., “3D gaze tracking method using Purkinje images on eye optical model and pupil,” Opt. Lasers Eng. 50(5), 736–751 (2012). http://dx.doi.org/10.1016/j.optlaseng.2011.12.001

19. P. Viola and M. J. Jones, “Robust real-time face detection,” Int. J. Comput. Vis. 57(2), 137–154 (2004). http://dx.doi.org/10.1023/B:VISI.0000013087.49260.fb

20. G. Hines et al., “Single-scale retinex using digital signal processors,” in Proc. of Global Signal Processing Conf. (2004).

21. B.-S. Kim, H. Lee, and W.-Y. Kim, “Rapid eye detection method for non-glasses type 3D display on portable devices,” IEEE Trans. Consum. Electron. 56(4), 2498–2505 (2010). http://dx.doi.org/10.1109/TCE.2010.5681133

22. D. S. Jeong et al., “A new iris segmentation method for non-ideal iris images,” Image Vis. Comput. 28(2), 254–260 (2010). http://dx.doi.org/10.1016/j.imavis.2009.04.001

23. W. O. Lee et al., “Auto-focusing method for remote gaze tracking camera,” Opt. Eng. 51(6), 063204 (2012). http://dx.doi.org/10.1117/1.OE.51.6.063204

24. L. A. Hall et al., “The influence of corneoscleral topography on soft contact lens fit,” Invest. Ophthalmol. Vis. Sci. 52(9), 6801–6806 (2011). http://dx.doi.org/10.1167/iovs.11-7177

25. S. Yoo and R.-H. Park, “Red-eye detection and correction using inpainting in digital photographs,” IEEE Trans. Consum. Electron. 55(3), 1006–1014 (2009). http://dx.doi.org/10.1109/TCE.2009.5277948

26. L. M. Matsuda et al., “Clinical comparison of corneal diameter and curvature in Asian eyes with those of Caucasian eyes,” Optom. Vis. Sci. 69(1), 51–54 (1992). http://dx.doi.org/10.1097/00006324-199201000-00008

27. K. Y. Shin, Y. G. Kim, and K. R. Park, “Enhanced iris recognition method based on multi-unit iris images,” Opt. Eng. 52(4), 047201 (2013). http://dx.doi.org/10.1117/1.OE.52.4.047201

28. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., 270–272, Prentice Hall Inc., Upper Saddle River, NJ (2002).

29. J.-S. Choi et al., “Enhanced perception of user intention by combining EEG and gaze-tracking for brain-computer interfaces (BCIs),” Sensors 13(3), 3454–3472 (2013). http://dx.doi.org/10.3390/s130303454

30. J. Daugman, “How iris recognition works,” IEEE Trans. Circuits Syst. Video Technol. 14(1), 21–30 (2004). http://dx.doi.org/10.1109/TCSVT.2003.818350

Biography


Yeong Gon Kim received a BS degree in computer engineering from Dongguk University, Seoul, South Korea, in 2011. He also received his master’s degree in electronics and electrical engineering at Dongguk University in 2013. He is currently pursuing his PhD degree in electronics and electrical engineering at Dongguk University. His research interests include biometrics and pattern recognition.


Kwang Yong Shin received a BS degree in electronics engineering from Dongguk University, Seoul, South Korea, in 2008. He is currently pursuing a combined course of MS and PhD degree in electronics and electrical engineering at Dongguk University. His research interests include biometrics and pattern recognition.


Kang Ryoung Park received his BS and master’s degrees in electronic engineering from Yonsei University, Seoul, South Korea, in 1994 and 1996, respectively. He also received his PhD degree from the Department of Electrical and Computer Engineering, Yonsei University in 2000. He was an assistant professor in the Division of Digital Media Technology at Sangmyung University until February 2008. He is currently a professor in the Division of Electronics and Electrical Engineering at Dongguk University. His research interests include computer vision, image processing, and biometrics.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yeong Gon Kim, Kwang Yong Shin, and Kang Ryoung Park "Improved iris localization by using wide and narrow field of view cameras for iris recognition," Optical Engineering 52(10), 103102 (3 October 2013). https://doi.org/10.1117/1.OE.52.10.103102
Published: 3 October 2013
KEYWORDS: Iris recognition, Cameras, Eye, Calibration, Image segmentation, Image compression, Matrices
