1. Introduction

Fingerprinting is a classical topic in the biometric and computer vision domains and has gained wide application in daily life. For any fingerprinting technique, capturing a high-quality fingerprint image is always the principal concern. Current fingerprint acquisition instruments usually consist of an image sensor and a touch panel. With such a contact-based operation, the captured fingerprint images are often degraded by improper finger placement, skin deformation or slippage, smearing of fingers, sensor noise, etc.1 To overcome these disadvantages, a technique named touchless fingerprinting has emerged recently.2 By employing a three-dimensional (3-D) scanning procedure, 3-D models of fingerprints can be obtained in a contactless manner. In comparison with traditional two-dimensional (2-D) fingerprint images, richer fingerprint information can be retrieved from the 3-D fingerprint models, thus making subsequent fingerprint recognition more reliable.3

Compared with traditional 2-D image-based fingerprinting techniques, 3-D fingerprinting is still a new research domain that has appeared only in recent years. The major technical challenge involved in 3-D fingerprinting is how to capture the 3-D model of a live fingerprint precisely and efficiently. Various computer vision techniques have been applied for this purpose, such as shape from silhouette, structured light systems (SLS), stereo vision, etc.4–11 In Ref. 6, a multiple-view system that consists of five cameras and a set of 16 green light-emitting diode (LED) arrays is proposed for 3-D fingerprint scanning. Multiple cameras are used to capture the fingerprint images under different viewpoints and LED lightings, and the corresponding silhouettes are extracted for the 3-D modeling of the fingerprints via the shape-from-silhouette method. Based on the 3-D fingerprint acquisition system proposed in Ref.
6, to make the 3-D fingerprint images compatible with current 2-D fingerprint systems, an unwrapping algorithm is presented in Ref. 7. The equidistance unwrapping approach is utilized to minimize the distortion while preserving the ground truth of the fingerprint. Unfolding the 3-D fingerprint in such a way resembles the effect of virtually rolling the 3-D finger on a 2-D plane. In Ref. 8, an SLS is developed for 3-D fingerprint acquisition. While the fringe projector operates at a blue wavelength, a green illumination is used to image the papillary lines. The fringe pattern analysis technique is applied for 3-D depth recovery from phase information,9 and a nonparametric unwrapping approach is applied to preserve the distances between surface points. A 3-D fingertip scanning system that consists of one projector and two industrial cameras is presented in Ref. 10. 3-D fingerprint models are obtained via SLS and stereo-vision means, and the 3-D models are unwrapped for comparison with their 2-D counterparts. In Ref. 11, a high-speed SLS is established for the 3-D scanning of fingerprints. The system is configured with a digital light processing projector and a camera. A shifting sine wave pattern is projected onto the finger surface, and the images under pattern illumination are captured synchronously. Via the proposed decoding algorithm, high-density 3-D points of the finger surface can be obtained. However, such 3-D scanning-based approaches usually suffer from the translucency of finger skin, which degrades the 3-D reconstruction accuracy. Moreover, the complicated structure and high cost of the hardware make these techniques unaffordable to end users and prevent the touchless 3-D fingerprinting technology from wide application.

In this paper, an efficient 3-D fingerprint acquisition method based on a simple hardware setup is presented. The system consists of only one camera and some LED lights.
A shiny ball with known radius is used to calibrate the lighting directions of the different LEDs. Considering the nonuniform lighting conditions of the LEDs, a correction procedure is introduced to calibrate each LED light. In contrast to traditional photometric stereo (PS) methods, which are usually implemented under the Lambertian reflectance assumption, a two-layer Hanrahan–Krueger (HK) model12 is introduced to describe the finger surface's reflectance property more accurately. An objective function is derived from the HK irradiance equation based on the PS principle, and the Levenberg–Marquardt method13 is used to solve it for accurate estimation of the surface normal. Finally, a linear surface normal transformation is adopted to enhance the reconstructed 3-D fingerprint model. Experiments on real fingerprints and palm prints are conducted to demonstrate the method's feasibility and 3-D reconstruction accuracy.

The rest of the paper is organized as follows. The reflectance modeling of the finger surface and how to solve for the surface normal from the model are introduced in Sec. 2. Section 3 describes the adopted methods for lighting direction calibration, nonuniform lighting correction, and 3-D fingerprint model enhancement. Section 4 presents the experimental results on real fingerprints and the comparisons with the traditional method. Conclusions and future work are offered in Sec. 5.

2. Reflectance Property Modeling of Finger Surface

PS is an important approach in computer vision that is used to estimate the surface normal by observing the target surface under various illuminations.
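To make the PS principle concrete, the classical linear formulation under a Lambertian assumption (the baseline that later sections improve upon) can be sketched as follows; the light directions and intensity values here are illustrative, not the paper's calibration results.

```python
import numpy as np

def lambertian_ps(intensities, light_dirs):
    """Classical Lambertian photometric stereo (baseline sketch).

    intensities: (K, P) array of pixel intensities under K lights.
    light_dirs:  (K, 3) array of unit lighting directions.
    Returns per-pixel albedo (P,) and unit surface normals (P, 3).
    """
    # Lambert's law gives I = L @ g with g = albedo * normal, so solve
    # the linear system for g in the least-squares sense (K >= 3).
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-12)).T
    return albedo, normals

# Synthetic check: one pixel with albedo 0.9 and a known normal,
# observed under four illustrative light directions.
n_true = np.array([0.0, 0.6, 0.8])
L = np.array([[0.0, 0.0, 1.0], [1, 0, 1], [-1, 0, 1], [0, 1, 1]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
I = 0.9 * (L @ n_true)[:, None]   # all four dot products are positive here
albedo, normals = lambertian_ps(I, L)
```

With three lights the system is exactly determined; additional lights overconstrain it and improve robustness to noise, which is why the paper's setup uses seven LEDs.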
This technique was first introduced by Woodham.14 Various PS-based methods have been proposed in the past decades, and their major concerns are usually focused on the following issues: calibration of the lighting directions, modeling of the surface reflectance property, 3-D reconstruction under nonuniform lighting conditions, etc.15–20 The modeling of the surface reflectance property is a crucial issue in PS algorithms, since it directly determines how the incident light is modulated by the target surface, as well as the final surface normal estimation. The most popular surface reflectance description is the Lambert model, which assumes the target surface has an ideal diffuse reflection property. Given three or more known illumination conditions, the surface normal can be efficiently calculated by solving a set of linear irradiance equations.21

Skin, including the fingerprint surface, is a translucent material that exhibits a certain amount of multiple scattering and specular reflection. It cannot be well represented by a traditional linear reflection model like Lambert's, especially when high reconstruction precision is demanded. To model human skin more precisely, a nonlinear reflection descriptor named the Torrance and Sparrow (TS) model is introduced in Ref. 22. As a physics-based model, the TS model assumes the skin reflectance consists of a Lambertian component and a purely surface-scattering component. Incorporating an uncalibrated PS method, the reflectance parameters of the skin surface can be well estimated, and the negative effects of the generalized bas-relief ambiguity can be reduced.23 In comparison, the HK model considers the skin as a layered surface based on one-dimensional linear transport theory.12 The underlying principle is that the amount of light reflected by a material that exhibits subsurface scattering can be calculated by summing the amount of light reflected by each layer, weighted by the percentage of light that actually reaches that layer.
Hence, it is a more reasonable description for translucent surfaces like human skin and fingers. With reference to the HK model, in this paper the finger surface is modeled as a two-layer material consisting of the epidermis and the dermis. The two layers have different reflectance parameters that determine how the incident light is reflected, as illustrated in Fig. 1. According to Ref. 12, the irradiance equation of a two-layer HK model can be written as Eq. (1), whose terms are: the irradiance, which is theoretically equal to the pixel intensity; the intensity of the light source; the absorption cross-section and the scattering cross-section; the Fresnel transmittance terms for the light entering and leaving the surface; the thickness of the epidermis layer; the albedo; the incidence angle between the normal vector and the light direction; the outgoing angle between the normal vector and the reflected-light direction; the angle between the light and view directions; the mean cosine value of the phase function; and the phase function given in Eq. (2), which determines in which direction the light is likely to scatter.

Define the surface normal vector, the light direction vector, the reflected-light direction, and the view direction, all normalized. Then the three angles in Eq. (1) can be rewritten in the form of inner products between pairs of these vectors. Suppose all illumination conditions are known and that the remaining quantities above are constant and known over the whole surface. Then there are seven unknown parameters in total involved in Eq. (1). Define a parameter vector collecting these unknowns for each surface point. Given multiple images, each taken under a different illumination condition, the objective function can be formulated as the sum, over all images, of the squared differences between the modeled irradiance value and the observed pixel intensity at each surface point.
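A per-pixel fit of this kind can be sketched as below. Since the full two-layer Eq. (1) is not reproduced in this text, the sketch uses only the first-order single-scattering term of the HK model from Ref. 12 with a Henyey–Greenstein phase function; the fixed parameter values are hypothetical placeholders, and only the surface normal is fit here for brevity (the paper fits seven unknowns per surface point).

```python
import numpy as np
from scipy.optimize import least_squares

def hg_phase(g, cos_gamma):
    # Henyey-Greenstein phase function used by the HK model
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * cos_gamma) ** 1.5)

def hk_irradiance(n, l, v, sigma_s=0.8, sigma_a=0.45, d=1.25, g=0.8, Ft=0.95):
    """First-order single-scattering term of the HK layered model (Ref. 12).
    The numeric parameter values are hypothetical, not the paper's settings."""
    sigma_t = sigma_s + sigma_a
    ci = max(float(n @ l), 1e-6)   # cos of incidence angle
    co = max(float(n @ v), 1e-6)   # cos of outgoing angle
    cg = float(l @ v)              # cos of angle between light and view
    brdf = (sigma_s / sigma_t) * Ft * Ft * hg_phase(g, cg) \
        * ci / (ci + co) * (1 - np.exp(-sigma_t * d * (1 / ci + 1 / co)))
    return brdf * ci               # irradiance ~ BRDF x cos(incidence)

def solve_normal(intensities, lights, v):
    """Fit the surface normal to the observed intensities of one pixel by
    Levenberg-Marquardt, parameterizing the normal by spherical angles."""
    def unpack(ab):                # spherical angles -> unit normal
        a, b = ab
        return np.array([np.sin(a) * np.cos(b),
                         np.sin(a) * np.sin(b), np.cos(a)])
    def residuals(ab):
        n = unpack(ab)
        return np.array([hk_irradiance(n, l, v) for l in lights]) - intensities
    sol = least_squares(residuals, x0=[0.3, 0.3], method="lm")
    return unpack(sol.x)
```

With seven lights and two angle unknowns, the system is overdetermined and the nonlinear least-squares solve converges from a coarse initialization.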
The Levenberg–Marquardt algorithm13 can be used to solve for these unknowns from the above equation.

3. System Calibration

3.1. Calibration of Lighting Directions

In Ref. 24, two cameras and a shiny ball with unknown position and radius are used for the lighting direction calibration. Since only one camera is adopted in our system, a shiny ball with known radius is used instead. With Zhang's camera calibration method,25 the focal length and the camera center can be estimated. With reference to the camera coordinate frame, the lighting direction of each light source can be represented as shown in Fig. 2(a). Two observations hold in the perspective image of a sphere: (1) the line passing through the camera center and any boundary point of the sphere in the image plane is a tangent line to the sphere, and (2) the perpendicular distance from the center of the sphere to a tangent line of the sphere equals the radius of the sphere. From these we obtain Eq. (5), whose terms involve a boundary point on the image plane at a detected position, the distance between the camera center and that point, and the vector that connects the two points. Note that Eq. (5) contains only three unknowns, i.e., the three components of the sphere center. Suppose a set of boundary points can be obtained via an edge detection algorithm;26 then an error function can be defined as the sum of squared deviations of these tangent-line distances from the known radius. By minimizing this error function, we obtain three equations in the sphere-center components, and the optimal values of the sphere location can be solved. The specular point on the image plane can be detected by finding the brightest image point. Then we can get its corresponding surface point as well as the surface normal vector, as illustrated in Fig. 2(b). With the surface normal and the view direction, the lighting direction can be calculated via the mirror-reflection relation.

3.2. Correction for Nonuniform Lighting Conditions

Notice that the distribution of the illumination (DOI) of each LED lamp is usually different and nonuniform. Figure 3(a) shows a homogeneous white paper surface under the illumination of one LED.
Figure 3(b) shows the distribution of the image intensities within a selected image area. The nonuniform distribution of the LED lights has a direct effect on the surface normal estimation. Therefore, a procedure is required to correct these deviations. Notice that when the light position is fixed, the brightest spot has a fixed area. That means that, under the illumination of a fixed light source, the ratio of a pixel's value to the brightest pixel value is constant for that pixel position and independent of the surface shape. Suppose the DOI of a given LED light needs to be calibrated; a white planar board with a Lambertian surface is used for its correction. The procedure can be implemented by the following steps:
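A sketch of this correction, built directly on the ratio property stated above; the exact procedure details are an assumption (image the white board once per LED, form the per-pixel ratio map, then divide captured images by it).

```python
import numpy as np

def build_ratio_map(board_img):
    """Image a white Lambertian board under one LED and normalize by the
    brightest pixel: the per-pixel ratio to the peak characterizes that
    lamp's DOI, independent of the later target's shape."""
    board = np.asarray(board_img, dtype=float)
    return board / board.max()

def correct_image(img, ratio_map, eps=1e-6):
    """Divide a captured image by that LED's ratio map so the effective
    illumination becomes uniform before surface-normal estimation."""
    return np.asarray(img, dtype=float) / np.maximum(ratio_map, eps)

# Illustrative use: a board image with horizontal intensity falloff
# corrupts a flat scene; the ratio map removes the falloff.
r = np.linspace(1.0, 0.5, 8)
board = np.tile(200.0 * r, (8, 1))
ratio = build_ratio_map(board)
true_scene = np.full((8, 8), 100.0)
observed = true_scene * ratio
corrected = correct_image(observed, ratio)
```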
With the above correction, the image intensity under each light can be corrected to a generally uniform distribution. Figure 4 gives a comparison between the original and corrected fingerprint images.

3.3. Enhancement of Surface Normal

The corrected images are used as input to calculate the surface normal at each surface point via the algorithms described in Sec. 2. As a postprocessing step, a linear surface normal transformation is subsequently adopted. In the algorithm, the average of the normal vectors over a local patch is calculated as a local reference, and the difference between the original surface normal and this reference vector is amplified as given in Eq. (9). This procedure aims to amplify the angle between neighboring normal vectors and thus improves the visual effect and contrast of the reconstructed 3-D fingerprint models.

4. Experiments and Discussion

The experimental setup consists of one camera (Prosilica GC650, 90 fps) and seven LED lamps (white color, 0.2 W, emitting angle 120 deg), as shown in Fig. 5(a). The focal length of the lens (Navitar NMV-25M23) is 25 mm. The LED lamps are evenly mounted on a lampshade with a radius of 75 mm, and the LED positions are finely adjusted to point toward the finger placement region. An external I/O board is developed to synchronize the camera and LEDs so as to control the whole capturing time within 0.1 s. The algorithm is implemented under MATLAB 2010a on a PC with a 2.7 GHz CPU and 2 GB RAM. Via the proposed lighting direction calibration method, the directions of the seven LED lamps are calculated as normalized vectors, as illustrated in Fig. 5(b). Real fingerprints and palm prints are used for the experiments, and the results are compared with traditional PS methods in which the Lambert reflectance model is adopted. The experiments are implemented with the following steps:
4.1. Comparison with Traditional PS Method

In our experiment, three of the parameters in Eq. (1) are set to {0.8, 1.25, 0.8}. The unknown parameters in Eq. (1) are initialized separately for the fingerprint experiments and the palm print experiment, and the parameters of Eq. (9) are set to enhance the recovered 3-D model. There are still technical challenges for current instruments in obtaining the 3-D ground truth of elastic surfaces like skin, which means it is difficult to evaluate the precision of the reconstructed 3-D fingerprint models directly. As an alternative, a homogeneous white paper surface is used for the experiment, as shown in Fig. 6. The surface normal at each image pixel is calculated, and a standard deviation of only 1.65 deg is obtained. This shows that high reconstruction accuracy can be guaranteed by the proposed calibration and modeling methods, except for some distinct errors that appear in the marginal image regions affected by inhomogeneous lighting conditions. Similar accuracy evaluations are also performed on synthetic data with known depth maps, as shown in Figs. 7 and 8. The data can be downloaded from Ref. 27. Seven synthetic images with virtual illuminations are generated, as shown in Figs. 7(a) and 8(a). The reconstructed 3-D models under various viewpoints are displayed in Figs. 7(b) to 7(d) and 8(b) to 8(d). The reconstruction errors (in units of pixels) with respect to the ground truth models are presented in Figs. 7(e) and 8(e). They show an average reconstruction deviation far less than one pixel and demonstrate the accuracy of the proposed algorithm. In the experiment with fingerprints, the original four images captured under different lighting conditions are shown in Fig. 9(a). The 3-D model reconstructed by the traditional PS method, which adopts the Lambert reflectance model, is shown in Fig. 9(b), and the 3-D model reconstructed by the proposed method is given in Fig. 9(c). Figure 9(d) shows the albedo map obtained by the proposed method.
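The flat-surface check described above can be sketched as follows. How the 1.65-deg standard deviation was computed is not spelled out in the text, so the statistic below (angles between each estimated normal and the mean normal of the plane) is an assumed reading of that figure.

```python
import numpy as np

def normal_angle_spread_deg(normals):
    """Spread of estimated normals on a nominally flat surface: the angle
    of each normal to the mean normal, summarized as a standard deviation
    in degrees. A perfectly recovered plane yields zero."""
    flat = np.asarray(normals, dtype=float).reshape(-1, 3)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    mean = flat.mean(axis=0)
    mean = mean / np.linalg.norm(mean)
    angles = np.degrees(np.arccos(np.clip(flat @ mean, -1.0, 1.0)))
    return angles.std()
```

Applied to the normal map estimated from the white-paper images, this statistic quantifies how far the recovered surface deviates from planarity.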
Another experiment is conducted on a palm print with the same parameter settings, as shown in Fig. 10. From the results, we can see that skin features like the finger and palm ridges can be clearly retrieved in 3-D space. Because the nonlinear model must be solved for each image pixel, our algorithm takes more time than the traditional linear Lambert model. The whole computation time from image input to the final 3-D model is given in Table 1. The efficiency can be further improved in the future with more efficient development tools like Visual C++ and parallel processing devices like graphics processing units.

Table 1. Calculation time of the nonlinear model and the linear Lambert model (in seconds).
4.2. Enhancement of 3-D Models Under Different Parameters

The two parameters in Eq. (9) play an important role in the rendering of the 3-D fingerprint model. In this experiment, different parameter values are tested, as shown in Fig. 11. Figure 11(a) shows the original fingerprint image, and the reconstructed 3-D fingerprint without surface normal enhancement is shown in Fig. 11(b). Figures 11(c) to 11(e) show the 3-D models under increasing enhancement parameters. From the results, we can see that as the enhancement parameters increase, the fingerprint ridges are clearly emphasized. However, the ridges can also break under overenhancement, as shown in Fig. 11(f).

4.3. Reconstruction Results with Different Image Numbers

In this experiment, three, five, and seven images are used for the fingerprint 3-D reconstruction, and the results of the proposed method and the traditional Lambert method are compared. The seven fingerprint images captured under different LED illuminations are shown in Fig. 12. To make the comparison unbiased, all the 3-D models are presented without enhancement processing. Figure 13 shows the 3-D reconstruction results with seven images: Figs. 13(a) and 13(b) show the results of our method and the traditional Lambert model, respectively, and Figs. 13(c) and 13(d) give close-up views. Even without the enhancement process, Fig. 13(c) demonstrates more finger ridge detail with sharper contrast than Fig. 13(d). This means a higher-quality 3-D model can be achieved with the proposed method. Experimental results with five and three images are given in Figs. 14 and 15, respectively. From these results, we can see that once the number of images declines to three, as illustrated in Fig. 15, there is little difference between our method and the traditional method. This is because there are seven unknowns in Eq. (1), and more images are needed to achieve a stable solution.
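The surface-normal enhancement of Sec. 3.3, whose parameters are varied in Sec. 4.2, can be sketched as follows. Since Eq. (9) is not reproduced in this text, the linear blend form n' = normalize(mean + lam * (n - mean)) and the patch/gain values below are assumptions consistent with the description of amplifying each normal's difference from its local-patch average.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_normals(normals, patch=5, lam=3.0):
    """Amplify each normal's deviation from its local-patch mean.

    normals: (H, W, 3) unit normal map; patch: local window size;
    lam: amplification gain (lam = 1 leaves the map unchanged)."""
    # Local reference: per-channel box average of the normal map
    mean = np.stack(
        [uniform_filter(normals[..., c], size=patch) for c in range(3)],
        axis=-1)
    # Amplify the difference from the reference, then renormalize
    out = mean + lam * (normals - mean)
    norm = np.linalg.norm(out, axis=-1, keepdims=True)
    return out / np.maximum(norm, 1e-12)
```

In uniform regions the normal equals its local mean and is left untouched, while at ridge edges the angular difference between neighboring normals grows with the gain, which matches the observation that too large a gain breaks the ridges.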
5. Conclusions and Future Work

This article presents a novel method for 3-D fingerprint acquisition based on the principle of PS. Compared with previous 3-D scanning-based methods, the proposed system is rather simple and contains only a camera and some LED lamps. To calibrate the lighting directions, a shiny sphere with known radius is used. To model the reflectance property of the finger surface, a two-layer HK reflection model is proposed instead of the traditional Lambert reflectance model. Considering the nonuniform lighting property of the LEDs, a simple correction procedure is also introduced. Finally, the reconstructed 3-D model is enhanced via a linear transformation process. To verify the feasibility of the proposed algorithm and system, experiments are conducted with real finger and palm prints. Results from the traditional Lambert model are also provided for comparison to demonstrate the improvement in 3-D reconstruction accuracy. Future work can address how to improve the correction method for nonuniform lighting conditions, especially in the marginal image regions. Moreover, applying the obtained 3-D fingerprint models in the recognition phase and establishing an extensive 3-D fingerprint database will be urgent tasks in the future.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61002040 and 61375041), the Introduced Innovative R&D Team of Guangdong Province-Robot and Intelligent Information Technology R&D Team (Grant No. 201001D0104648280), and the Shenzhen Key Lab for Computer Vision and Pattern Recognition (Grant No. CXB201104220032A).

References

1. F. A. Fernandez et al.,
“A comparative study of fingerprint image-quality estimation methods,” IEEE Trans. Inf. Forensics Security 2(4), 734–743 (2007). http://dx.doi.org/10.1109/TIFS.2007.908228
2. D. Aouada and H. Krim, “Squigraphs for fine and compact modeling of 3-D shapes,” IEEE Trans. Image Process. 19(2), 306–321 (2010). http://dx.doi.org/10.1109/TIP.2009.2034693
3. R. Lenz and P. L. Carmona, “Octahedral transformations for 3-D image processing,” IEEE Trans. Image Process. 18(12), 2618–2628 (2009). http://dx.doi.org/10.1109/TIP.2009.2029953
4. S. Negahdaripour, H. Sekkati, and H. Pirsiavash, “Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction,” IEEE Trans. Image Process. 18(6), 1203–1214 (2009). http://dx.doi.org/10.1109/TIP.2009.2013081
5. W. Miled, J. C. Pesquet, and M. Parent, “A convex optimization approach for depth estimation under illumination variation,” IEEE Trans. Image Process. 18(4), 813–830 (2009). http://dx.doi.org/10.1109/TIP.2008.2011386
6. G. Parziale, E. Diaz-Santana, and R. Hauke, “The surround imager: a multi-camera touchless device to acquire 3D rolled-equivalent fingerprints,” Lect. Notes Comput. Sci. 3832, 244–250 (2005). http://dx.doi.org/10.1007/11608288
7. Y. Chen et al., “3D touchless fingerprints: compatibility with legacy rolled images,” in IEEE Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference, 1–6 (2006).
8. D. Koller et al., “3D capturing of fingerprints—on the way to a contactless certified sensor,” in Proc. of the Special Interest Group on Biometrics and Electronic Signatures, 33–44 (2011).
9. Z. Zhang, D. Zhang, and X. Peng, “Performance analysis of a 3D full-field sensor based on fringe projection,” Opt. Lasers Eng. 42(3), 341–353 (2004). http://dx.doi.org/10.1016/j.optlaseng.2003.11.004
10. R. D. Labati et al., “Fast 3-D fingertip reconstruction using a single two-view structured light acquisition,” in IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications, 1–8 (2011).
11. Y. C. Wang, L. G. Hassebrook, and D. L. Lau, “Data acquisition and processing of 3-D fingerprints,” IEEE Trans. Inf. Forensics Security 5(4), 750–760 (2010). http://dx.doi.org/10.1109/TIFS.2010.2062177
12. P. Hanrahan and W. Krueger, “Reflection from layered surfaces due to subsurface scattering,” in Proc. of SIGGRAPH, 165–174 (1993).
13. K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Q. Appl. Math. 2(2), 164–168 (1944).
14. R. J. Woodham, “Photometric method for determining surface orientation from multiple images,” Opt. Eng. 19(1), 139–144 (1980). http://dx.doi.org/10.1117/12.7972479
15. H. Kim, B. Wilburn, and M. Ben-Ezra, “Photometric stereo for dynamic surface orientations,” in ECCV, 59–72 (2010).
16. L. Wu et al., “Robust photometric stereo via low-rank matrix completion and recovery,” in ACCV, 703–717 (2011).
17. T. Kuparinen and V. Kyrki, “Optimal reconstruction of approximate planar surfaces using photometric stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 31(12), 2282–2289 (2009). http://dx.doi.org/10.1109/TPAMI.2009.101
18. S. Barsky and M. Petrou, “The 4-source photometric stereo technique for three-dimensional surfaces in the presence of highlights and shadows,” IEEE Trans. Pattern Anal. Mach. Intell. 25(10), 1239–1252 (2003). http://dx.doi.org/10.1109/TPAMI.2003.1233898
19. T. Malzbender et al., “Surface enhancement using real-time photometric stereo and reflectance transformation,” in Proc. of the Eurographics Symposium on Rendering, 245–250 (2006).
20. J. A. Sun et al., “Object surface recovery using a multi-light photometric stereo technique for non-Lambertian surfaces subject to shadows and specularities,” Image Vis. Comput. 25(7), 1050–1057 (2007). http://dx.doi.org/10.1016/j.imavis.2006.04.025
21. W. Osten, “A simple and efficient optical 3D-sensor based on photometric stereo,” in The 5th Int. Workshop on Automatic Processing of Fringe Patterns, Springer, 702–706 (2005).
22. A. S. Georghiades, “Incorporating the Torrance and Sparrow model of reflectance in uncalibrated photometric stereo,” in IEEE ICCV, 591–597 (2003).
23. P. Belhumeur, D. Kriegman, and A. Yuille, “The bas-relief ambiguity,” in IEEE CVPR, 1040–1046 (1997).
24. W. Zhou and C. Kambhamettu, “Estimation of illuminant direction and intensity of multiple light sources,” Lect. Notes Comput. Sci. 2353, 206–220 (2002). http://dx.doi.org/10.1007/3-540-47979-1
25. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718
26. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986). http://dx.doi.org/10.1109/TPAMI.1986.4767851
27. Digital Shape Workbench, “AIM@SHAPE,” http://shapes.aim-at-shape.net/ (March 2007).
Biography

Wuyuan Xie received an MS in information engineering from the South China University of Technology. She is currently a PhD student in the Department of Mechanical and Automation Engineering at the Chinese University of Hong Kong. Her research interests include photometric stereo and stereo vision.

Zhan Song received his PhD degree in mechanical and automation engineering from the Chinese University of Hong Kong, Hong Kong, in 2008. He is currently with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, as an associate researcher. His research interests include structured-light-based sensing and vision-based human computer interaction.

Ronald Chung received a BSEE from the University of Hong Kong, Hong Kong, and a PhD in computer engineering from the University of Southern California, Los Angeles. He is currently with the Chinese University of Hong Kong as director of the Computer Vision Laboratory and professor in the Department of Mechanical and Automation Engineering. His research interests include computer vision and robotics. He is a senior member of IEEE and a member of MENSA. He was the chairman of the IEEE Hong Kong Section Joint Chapter on Robotics & Automation Society and Control Systems Society in the years 2001 to 2003.