Real-time three-dimensional fingerprint acquisition via a new photometric stereo means

Wuyuan Xie, Zhan Song, and Ronald C. Chung
Abstract
A real-time method for three-dimensional (3-D) fingerprint acquisition is presented. The system is configured with only one camera and a set of white light-emitting diode (LED) lamps, and reconstruction is performed on the principle of photometric stereo. In the algorithm, a two-layer Hanrahan–Krueger model is proposed to represent the reflectance property of the finger surface in place of the traditional Lambert model. With the proposed lighting direction calibration and nonuniform lighting correction methods, the surface normal at each image point can be accurately estimated by solving a nonlinear optimization problem. Finally, a linear normal transformation is applied to enhance the 3-D models. Experiments are conducted on real finger and palm prints, and the results are compared with those of traditional methods to demonstrate the feasibility of the approach and its improvement in reconstruction accuracy.

1. Introduction

Fingerprinting is a classical topic in the biometrics and computer vision domains and has gained wide application in daily life. For any fingerprinting technique, capturing a high-quality fingerprint image is always the principal concern. Current fingerprint acquisition instruments usually consist of an image sensor and a touch panel. With such a touch-based operation, the captured fingerprint images are often degraded by improper finger placement, skin deformation or slippage, smearing of fingers, sensor noise, etc.1 To overcome these disadvantages, a technique named touchless fingerprinting has emerged in recent years.2 By employing a three-dimensional (3-D) scanning procedure, 3-D models of fingerprints can be obtained through a contactless operation. In comparison with traditional two-dimensional (2-D) fingerprint images, richer fingerprint information can be retrieved from the 3-D fingerprint models, making the subsequent fingerprint recognition more reliable.3

Compared with traditional 2-D image-based fingerprinting techniques, 3-D fingerprinting is still a young research domain that has appeared only in recent years. The major technical challenge in 3-D fingerprinting is how to capture the 3-D model of a live fingerprint precisely and efficiently. Various computer vision techniques have been applied for this purpose, such as shape from silhouette, structured light systems (SLS), and stereo vision.4–11 In Ref. 6, a multiple-view system that consists of five cameras and a set of 16 green light-emitting diode (LED) arrays is proposed for 3-D fingerprint scanning. Multiple cameras capture the fingerprint images under different viewpoints and LED lightings, and the corresponding silhouettes are extracted for 3-D modeling of the fingerprints via the shape-from-silhouette method. Building on the 3-D fingerprint acquisition system of Ref. 6, an unwrapping algorithm is presented in Ref. 7 to make the 3-D fingerprint images compatible with current 2-D fingerprint systems. The equidistance unwrapping approach is utilized to minimize distortion while preserving the ground truth of the fingerprint; unfolding the 3-D fingerprint in such a way resembles virtually rolling the 3-D finger on a 2-D plane. In Ref. 8, an SLS is developed for 3-D fingerprint acquisition. While the fringe projector operates at a blue wavelength, green illumination is used to image the papillary lines. A fringe pattern analysis technique is applied for 3-D depth recovery from the phase information,9 and a nonparametric unwrapping approach is applied to preserve the distances between surface points. A 3-D fingertip scanning system that consists of one projector and two industrial cameras is presented in Ref. 10; 3-D fingerprint models are obtained via SLS and stereo-vision means, and the 3-D models are unwrapped for comparison with their 2-D counterparts. In Ref. 11, a high-speed SLS configured with a digital light processing projector and a camera is established for 3-D scanning of fingerprints. A shifting sine wave pattern is projected onto the finger surface, and the images under pattern illumination are captured synchronously. Via the proposed decoding algorithm, high-density 3-D points of the finger surface can be obtained. However, such 3-D scanning-based approaches usually suffer from the translucency of finger skin, which degrades the 3-D reconstruction accuracy. Moreover, the complicated structure and high cost of the hardware make these techniques unaffordable to end users and prevent the touchless 3-D fingerprinting technology from wide application.

In this paper, an efficient 3-D fingerprint acquisition method based on a simple hardware setup is presented. The system consists of only one camera and some LED lights. A shiny ball with known radius is used to calibrate the lighting directions of the different LEDs. Considering the nonuniform lighting conditions produced by the LEDs, a correction procedure is introduced to calibrate each LED light. Compared with traditional photometric stereo (PS) methods, which are usually implemented under the Lambert reflectance assumption, a two-layer Hanrahan–Krueger (HK) model12 is introduced to model the reflectance property of the finger surface more accurately. An objective function is derived from the HK irradiance equation based on the PS principle, and the Levenberg–Marquardt method13 is used to solve it for accurate estimation of the surface normal. Finally, a linear surface normal transformation is adopted to enhance the reconstructed 3-D fingerprint model. Experiments are conducted on real fingerprints and palm prints to demonstrate the feasibility of the method and its 3-D reconstruction accuracy.

The rest of the paper is organized as follows. Section 2 introduces the reflectance modeling of the finger surface and how the surface normal is solved from the model. Section 3 describes the adopted methods for lighting direction calibration, nonuniform lighting correction, and 3-D fingerprint model enhancement. Section 4 presents the experimental results on real fingerprints and the comparisons with the traditional method. Conclusions and future work are offered in Sec. 5.

2. Reflectance Property Modeling of Finger Surface

PS is an important approach in computer vision that is usually used to estimate the surface normal by observing the target surface under various illuminations. The technique was first introduced by Woodham.14 Various PS-based methods have been proposed in the past decades, and their major concerns usually focus on the following issues: calibration of the lighting directions, modeling of the surface reflectance property, 3-D reconstruction under nonuniform lighting conditions, etc.15–20 The modeling of the surface reflectance property is a crucial issue in PS algorithms, since it directly determines how the incident light is modulated by the target surface and hence the final surface normal estimation. The most popular surface reflectance description is the Lambert model, which assumes that the target surface has an ideal diffuse reflection property. Given three or more known illumination conditions, the surface normal can be efficiently calculated by solving a set of linear irradiance equations.21
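For concreteness, this Lambertian baseline can be written in a few lines. The following is a minimal illustrative sketch (in Python with NumPy rather than the paper's MATLAB; the function name and array layout are assumptions), solving the linear irradiance equations per pixel in least squares:

```python
import numpy as np

def lambertian_ps(I, L):
    """Classical photometric stereo under the Lambert model.

    I : (K, P) array of pixel intensities, one row per light (K >= 3).
    L : (K, 3) array of normalized lighting directions.
    Returns per-pixel unit normals (P, 3) and Lambertian albedo (P,).
    """
    # Lambert model: I = L @ (albedo * n); solve all pixels in least squares.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, P), G = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    n = (G / np.maximum(albedo, 1e-12)).T       # unit normals, one per pixel
    return n, albedo
```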

Skin surfaces such as fingerprints are translucent materials that exhibit a certain amount of multiple scattering and specular reflection. They cannot be well represented by a traditional linear reflection model such as the Lambert model, especially when high reconstruction precision is demanded. To model human skin more precisely, a nonlinear reflection descriptor named the Torrance and Sparrow (TS) model is introduced in Ref. 22. As a physically based model, the TS model assumes that skin reflectance consists of a Lambertian component and a purely surface-scattering component. Incorporated into an uncalibrated PS method, the reflectance parameters of the skin surface can be well estimated, and the negative effects of the generalized bas-relief ambiguity can be reduced.23 In comparison, the HK model considers the skin as a layered surface based on one-dimensional linear transport theory.12 The underlying principle is that the amount of light reflected by a material that exhibits subsurface scattering can be calculated by summing the amount of light reflected by each layer times the percentage of light that actually reaches that layer. Hence, it is a more reasonable description for translucent surfaces such as human skin and fingers.

With reference to the HK model, in this paper the finger surface is modeled as a two-layer material that consists of the epidermis and the dermis. The two layers have different reflectance parameters that determine how the incident light is reflected, as illustrated in Fig. 1.

Fig. 1

The finger skin is represented by a two-layer Hanrahan–Krueger model. The incident light Li is reflected at each of the epidermis and dermis layers; the reflected light Lr is captured by the camera.


According to Ref. 12, the irradiance equation of a two-layer HK model can be written as

Eq. (1)

L_r = \frac{\sigma_s T_{12} T_{21} \cos\theta_i \left(1 - e^{-(\sigma_a + \sigma_s)\, d\, \frac{\cos\theta_r + \cos\theta_i}{\cos\theta_i \cos\theta_r}}\right) p(\phi, g)}{(\sigma_a + \sigma_s)(\cos\theta_i + \cos\theta_r)}\, L_i + \rho \cos\theta_i,
where $L_r$ is the irradiance, which is theoretically equal to the pixel intensity; $L_i$ is the intensity of the light source; $\sigma_a$ and $\sigma_s$ are the absorption cross-section and scattering cross-section, respectively; $T_{12}$ and $T_{21}$ are the Fresnel transmittance terms for the light entering and leaving the surface; $d$ is the thickness of the epidermis layer; $\rho$ is the albedo; $\theta_i$ is the incidence angle between the normal vector and the light direction; $\theta_r$ is the outgoing angle between the normal vector and the reflected light direction; $\phi$ is the angle between the light and view directions; $g$ is the mean cosine value of the phase function; and $p$, a function of $\phi$ and $g$ given in Eq. (2), determines in which direction the light is likely to scatter.

Eq. (2)

p(\phi, g) = \frac{1}{4\pi} \cdot \frac{1 - g^2}{\left(1 + g^2 - 2g\cos\phi\right)^{3/2}}.

Define the surface normal vector as $\mathbf{n} = (n_x, n_y, n_z)$, the light direction vector as $\mathbf{l} = (l_x, l_y, l_z)$, the reflected-light direction as $\mathbf{r} = (r_x, r_y, r_z)$, and the view direction as $\mathbf{z} = (z_x, z_y, z_z)$. All these vectors are normalized. Then $\cos\theta_i$, $\cos\phi$, and $\cos\theta_r$ in Eq. (1) can be rewritten as inner products between pairs of these vectors:

Eq. (3)

\cos\theta_i = \mathbf{l} \cdot \mathbf{n}, \qquad \cos\phi = \mathbf{l} \cdot \mathbf{z}, \qquad \cos\theta_r = \mathbf{r} \cdot \mathbf{n} = \left[2(\mathbf{l} \cdot \mathbf{n})\,\mathbf{n} - \mathbf{l}\right] \cdot \mathbf{n}.
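Equations (1) to (3) can be combined into a single forward model that predicts the irradiance of a surface point from its normal and material parameters. Below is an illustrative Python sketch under the Henyey–Greenstein reading of Eq. (2); the function names and default arguments are assumptions, not the authors' code:

```python
import numpy as np

def hg_phase(cos_phi, g):
    # Henyey-Greenstein phase function, Eq. (2).
    return (1.0 / (4.0 * np.pi)) * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * cos_phi) ** 1.5

def hk_irradiance(n, l, z, rho, d, sig_s, sig_a, Li=1.0, T12=1.0, T21=1.0, g=0.8):
    """Two-layer HK irradiance of Eq. (1), with the angles taken from Eq. (3)."""
    n = n / np.linalg.norm(n)
    cos_i = np.dot(l, n)                # cos(theta_i) = l . n
    cos_phi = np.dot(l, z)              # cos(phi)     = l . z
    r = 2.0 * cos_i * n - l             # mirror reflection of l about n
    cos_r = np.dot(r, n)                # cos(theta_r) = r . n
    sig_t = sig_a + sig_s
    # Attenuation through the epidermis layer of thickness d.
    atten = 1.0 - np.exp(-sig_t * d * (cos_r + cos_i) / (cos_i * cos_r))
    single = (sig_s * T12 * T21 * cos_i * atten * hg_phase(cos_phi, g)
              / (sig_t * (cos_i + cos_r)))
    # Eq. (1): single-scattering term scaled by the source intensity,
    # plus the surface (albedo) term.
    return single * Li + rho * cos_i
```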

Suppose all illumination conditions are known, and $g$, $T_{12}$, and $T_{21}$ are constant and known over the whole surface. Then a total of seven unknown parameters are involved in Eq. (1), i.e., $n_x$, $n_y$, $n_z$, $\rho$, $d$, $\sigma_s$, and $\sigma_a$. Define the vector of unknowns as $\mathbf{x} = (n_x^j, n_y^j, n_z^j, \rho^j, d^j, \sigma_s^j, \sigma_a^j)$, where $j$ indicates the $j$'th surface point. Given $K$ images, each taken under a different illumination condition, the objective function can be formulated as

Eq. (4)

\arg\min_{\mathbf{x}} E(\mathbf{x}), \quad \text{where } E(\mathbf{x}) = \sum_{j,k} \left(L_r^{j,k} - I^{j,k}\right)^2,
where $L_r^{j,k}$ and $I^{j,k}$ indicate the irradiance value and the pixel intensity of the $j$'th surface point in the $k$'th image. The Levenberg–Marquardt algorithm13 can be used to solve the above equation for $\mathbf{x}$.
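Given the forward model above, Eq. (4) can be minimized per pixel with an off-the-shelf Levenberg–Marquardt routine. The sketch below reuses the hk_irradiance function from the previous sketch and initializes d, σs, and σa with the fingerprint values reported in Sec. 4.1; the albedo initialization and the fallback to a trust-region solver when fewer than seven observations are available are assumptions (SciPy's 'lm' mode requires at least as many residuals as unknowns):

```python
import numpy as np
from scipy.optimize import least_squares

def solve_pixel(I_px, lights, z,
                x0=np.array([0.0, 0.0, 1.0, 0.5, 0.085, 50.0, 3.8])):
    """Estimate x = (nx, ny, nz, rho, d, sig_s, sig_a) for one pixel, Eq. (4).

    I_px   : (K,) observed intensities of this pixel across the K images.
    lights : (K, 3) calibrated, normalized lighting directions.
    z      : (3,) normalized view direction.
    x0     : (7,) initial guess; rho = 0.5 is an assumed starting value,
             the material values follow Sec. 4.1 for fingerprints.
    """
    def residuals(x):
        n, rho, d, sig_s, sig_a = x[:3], x[3], x[4], x[5], x[6]
        pred = np.array([hk_irradiance(n, l, z, rho, d, sig_s, sig_a)
                         for l in lights])
        return pred - I_px

    # Levenberg-Marquardt needs K >= 7 residuals; otherwise fall back to 'trf'.
    method = "lm" if len(I_px) >= 7 else "trf"
    x = least_squares(residuals, x0, method=method).x
    x[:3] /= np.linalg.norm(x[:3])   # re-normalize the recovered normal
    return x
```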

3. System Calibration

3.1. Calibration of Lighting Directions

In Ref. 24, two cameras and a shiny ball with unknown position and radius are used for lighting direction calibration. Since only one camera is adopted in our system, a shiny ball with known radius $r$ is used instead. With Zhang's camera calibration method,25 the focal length $f$ and the camera center $C$ can be estimated. With reference to the camera coordinate frame, the lighting direction of the $i$'th light source can be represented as $\mathbf{l}^i = (l_x^i, l_y^i, l_z^i)$, as shown in Fig. 2(a).

Fig. 2

(a) Illustration of lighting direction calibration. (b) Relation of the normal vector N, the reflection direction R, and the incident light direction l at the specular point $S_1$.


According to the following two observations on the perspective image of a sphere—(1) the line passing through the camera center and any boundary point of the sphere on the image plane is tangent to the sphere, and (2) the perpendicular distance from the sphere center to a tangent line of the sphere equals the sphere radius—we obtain the following equation:

Eq. (5)

\begin{cases} d_i^2 = |CS|^2 - |CS|^2 \cos^2\theta = h_i(s_x, s_y, s_z) \\[4pt] \cos\theta = \dfrac{\vec{CS} \cdot \vec{CB_i}}{|CS|\,|CB_i|} = \dfrac{\sqrt{|CS|^2 - d_i^2}}{|CS|} \end{cases}
where $B_i$ is a boundary point on the image plane with detected position $(i_i, j_i, f)$, $|CS|$ is the distance between points $C$ and $S$, and $\vec{CS}$ is the vector connecting the two points. Note that Eq. (5) contains only three unknowns, $S = (s_x, s_y, s_z)$, i.e., the three components of the sphere center. Suppose $m$ boundary points $B_i$, $i \in (1, \ldots, m)$, can be obtained via an edge detection algorithm;26 then an error function can be defined as

Eq. (6)

E_{OF} = \sum_{i=1}^{m} \left(h_i - r^2\right)^2.

By minimizing this error function $E_{OF}$ (setting its derivatives with respect to the three components of $S$ to zero yields three equations in $S$), the optimal sphere center can be solved. The specular point $P(i_1, j_1)$ on the image plane can be detected by finding the brightest image point. From it, we can obtain the corresponding surface point $S_1$ as well as the surface normal vector $N$, as illustrated in Fig. 2(b). With $N$ and the reflection direction $R$, the lighting direction $\mathbf{l}$ can be calculated as

Eq. (7)

\mathbf{l} = 2(\mathbf{N} \cdot \mathbf{R})\,\mathbf{N} - \mathbf{R}.
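The whole procedure of Eqs. (5) to (7) can be sketched as follows, assuming pixel coordinates are already expressed in the camera frame with the principal point as origin (so a pixel (i, j) corresponds to the ray through (i, j, f)) and that the sphere boundary has been detected beforehand. The ray–sphere intersection used to locate the specular surface point S1 is one plausible reading of Fig. 2(b):

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_light(boundary_px, specular_px, f, r, s0):
    """Recover one LED direction from a shiny sphere of known radius r.

    boundary_px : (m, 2) detected sphere-boundary pixels (i, j).
    specular_px : (2,) brightest (specular) pixel.
    f           : focal length in pixel units; camera center C is the origin.
    s0          : (3,) initial guess of the sphere center S.
    """
    B = np.column_stack([boundary_px, np.full(len(boundary_px), f)])
    Bu = B / np.linalg.norm(B, axis=1, keepdims=True)   # unit tangent rays

    def residuals(S):
        # h_i: squared point-line distance from S to each tangent ray, Eq. (5);
        # at the optimum each h_i equals r^2, Eq. (6).
        proj = Bu @ S                   # |CS| cos(theta) per ray
        return (S @ S - proj**2) - r**2

    S = least_squares(residuals, s0).x

    # Surface point under the specular pixel: intersect its ray with the sphere.
    v = np.array([specular_px[0], specular_px[1], f], dtype=float)
    v /= np.linalg.norm(v)
    tc = v @ S                          # ray parameter of closest approach
    t = tc - np.sqrt(max(tc**2 - (S @ S - r**2), 0.0))  # near intersection
    S1 = t * v
    N = (S1 - S) / r                    # outward unit normal at S1
    R = -v                              # direction from S1 back to the camera
    return 2.0 * (N @ R) * N - R        # lighting direction l, Eq. (7)
```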

3.2. Correction for Nonuniform Lighting Conditions

Notice that the distribution of illumination (DOI) of each LED lamp is usually different and nonuniform. Figure 3(a) shows a homogeneous white paper surface under the illumination of one LED, and Fig. 3(b) shows the distribution of the image intensities within a selected image area. The nonuniform distribution of the LED light directly affects the surface normal estimation; therefore, a procedure is required to correct these deviations. Notice that when the light position is fixed, the brightest spot covers a fixed area. That means that, under the illumination of a fixed light source, the ratio $p(i,j)$ of the pixel value $I(i,j)$ at image point $(i,j)$ to the brightest pixel value $I_{max}$ is constant for that pixel position and independent of the surface shape, i.e.,

Eq. (8)

p(i, j) = \frac{I(i, j)}{I_{max}}.

Fig. 3

(a) A white paper under the illumination of one light-emitting diode (LED); the red circle marks the brightest spot, which indicates that the illumination distribution is not uniform. (b) The distribution of p values calculated from (a).


Suppose $l_k$ is the $k$'th LED light whose DOI needs to be calibrated; a white planar board with a Lambertian surface is used for its correction. The procedure can be implemented by the following steps (a code sketch follows the list):

  • a. Take an image of the planar board under the illumination of $l_k$, and denote it $I_k(i,j)$, $i \le \text{imageHeight}$, $j \le \text{imageWidth}$;

  • b. Find the largest value in $I_k$, i.e., $I_{max}^k = \max_{i,j}\{I_k\}$;

  • c. Substitute $I_{max}^k$ into Eq. (8) to calculate $p_k(i,j)$ for all pixels.
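A minimal sketch of steps a to c follows, with the correction applied by dividing each captured image by the map p_k; the division step is an assumption, since the text states only that intensities are corrected toward a uniform distribution:

```python
import numpy as np

def lighting_correction_map(board_img):
    """Per-pixel correction map p_k for one LED, computed from an image of a
    white Lambertian board (steps a to c)."""
    I_max = board_img.max()                           # step b: brightest value
    return board_img.astype(np.float64) / I_max       # step c: p_k(i,j), Eq. (8)

def correct_image(img, p_k, eps=1e-6):
    """Even out a captured image by dividing by that LED's correction map."""
    return img.astype(np.float64) / np.maximum(p_k, eps)
```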

With the above correction, the image intensity under each light can be corrected to an approximately uniform distribution. Figure 4 gives a comparison between the original and corrected fingerprint images.

Fig. 4

(a) Original fingerprint image. (b) The intensity of the corrected image looks more homogeneous.


3.3. Enhancement of Surface Normal

The corrected images are used as input to calculate the surface normal at each surface point via the algorithm described in Sec. 2. As a postprocessing step, a linear surface normal transformation is subsequently adopted. In the algorithm, the average of the normal vectors $\mathbf{n}_j$ over a local patch $w$ is calculated as a local reference, and the difference between the original surface normal and this reference vector is amplified as

Eq. (9)

\mathbf{n}' = \mathbf{n} + k \left[ \mathbf{n} - \operatorname{normalize}\!\left( \sum_{j=1}^{w} \mathbf{n}_j \right) \right].

This procedure amplifies the angle between neighboring normal vectors and thus improves the visual effect and contrast of the reconstructed 3-D fingerprint models.
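Equation (9) can be sketched on a dense normal map as follows, assuming the local reference is a w×w box average and that the output normals are re-normalized to unit length (the re-normalization is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_normals(n_map, w=5, k=1.5):
    """Linear surface normal transformation of Eq. (9).

    n_map : (H, W, 3) float unit-normal map; w : local patch size; k : gain.
    """
    # Local reference: normalized average of the normals over each w x w patch.
    ref = uniform_filter(n_map, size=(w, w, 1))
    ref /= np.maximum(np.linalg.norm(ref, axis=2, keepdims=True), 1e-12)
    n_new = n_map + k * (n_map - ref)   # amplify deviation from the reference
    return n_new / np.maximum(np.linalg.norm(n_new, axis=2, keepdims=True), 1e-12)
```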

4. Experiments and Discussion

The experimental setup consists of one camera (Prosilica GC650, 90 fps) with a resolution of 659×493 pixels and seven LED lamps (white, 0.2 W, 120-deg emitting angle), as shown in Fig. 5(a). The focal length of the lens (Navitar NMV-25M23) is 25 mm, and the operating distance (from the finger place to the front of the lens) is 90 mm. The LED lamps are evenly mounted on a lampshade with a radius of 75 mm, and the lampshade is 65 mm from the finger place. The LED positions are finely adjusted to point to the finger place region. An external I/O board is developed to synchronize the camera and LEDs so that the whole capture takes less than 0.1 s. The algorithm is implemented in MATLAB 2010a on a PC with a 2.7-GHz CPU and 2 GB of RAM.

Fig. 5

(a) The system consists of one camera and seven LEDs. (b) Plot of the calibrated lighting directions of the seven LEDs, where the camera optical center is the origin and the principal optical axis is the Z direction.


Via the proposed lighting direction calibration method, the directions of the LED lamps are calculated as the following normalized vectors: $\mathbf{l}_1 = \{0.4571, 0.1702, 0.8729\}$, $\mathbf{l}_2 = \{0.7714, 0.1672, 0.6139\}$, $\mathbf{l}_3 = \{0.6645, 0.6205, 0.4163\}$, $\mathbf{l}_4 = \{0.1975, 0.9110, 0.3618\}$, $\mathbf{l}_5 = \{0.3447, 0.8340, 0.4307\}$, $\mathbf{l}_6 = \{0.6193, 0.4498, 0.6434\}$, and $\mathbf{l}_7 = \{0.4701, 0.1366, 0.8719\}$, as illustrated in Fig. 5(b).

Real fingerprints and palm prints are used in the experiments, and the results are compared with the traditional PS method in which the Lambert reflectance model is adopted. The experiments proceed in the following steps (a sketch of the per-pixel selection in step b follows the list):

  • a. Calibration of lighting directions as described in Sec. 3;

  • b. Take seven pictures of the object; then, for each surface point, rank the seven pixel intensities in descending order. The highest and the two lowest values are removed, so as to avoid saturated and shadowed points that do not obey the HK model;

  • c. Use the remaining four images as input to the algorithm described in Sec. 2;

  • d. Apply normal transformation to enhance the reconstructed 3-D model.
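The per-pixel selection of step b can be sketched as follows; returning the indices of the retained lights (a detail the text does not spell out) is assumed here so that each kept intensity can be paired with its calibrated lighting direction in step c:

```python
import numpy as np

def select_images(stack):
    """Step b: per pixel, drop the highest and the two lowest of the seven
    intensities.

    stack : (7, H, W) images captured under the seven LEDs.
    Returns the (4, H, W) retained intensities and, for each, the index of
    the LED it came from.
    """
    order = np.argsort(stack, axis=0)    # ascending intensity order per pixel
    keep = order[2:-1]                   # discard two lowest and the highest
    kept = np.take_along_axis(stack, keep, axis=0)
    return kept, keep                    # keep maps each value to its LED
```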

4.1. Comparison with Traditional PS Method

In our experiments, the parameters $g$, $T_{12}$, and $T_{21}$ in Eq. (1) are set to {0.8, 1.25, 0.8}. The unknown parameters $d^j$, $\sigma_s^j$, and $\sigma_a^j$ in Eq. (1) are initialized to {0.085 mm, 50 mm⁻¹, 3.8 mm⁻¹} for the fingerprint experiments and {0.12 mm, 30 mm⁻¹, 4.5 mm⁻¹} for the palm print experiment. To enhance the recovered 3-D model, $w = 5$ and $k = 1.5$ are set in Eq. (9).

It remains technically challenging for current instruments to obtain the 3-D ground truth of an elastic surface such as skin, which makes it difficult to evaluate the precision of the reconstructed 3-D fingerprint models directly. As an alternative, a homogeneous white paper surface is used for the experiment shown in Fig. 6. The surface normal at each image pixel is calculated, and a standard deviation of only 1.65 deg is obtained. This shows that high reconstruction accuracy can be guaranteed by the proposed calibration and modeling methods, except for some distinct errors in the marginal image regions affected by inhomogeneous lighting conditions. Similar accuracy evaluations are also performed on synthetic data with known depth maps, as shown in Figs. 7 and 8; the data can be downloaded from Ref. 27. Seven synthetic images with virtual illuminations are generated, as shown in Figs. 7(a) and 8(a). The reconstructed 3-D models under various viewpoints are displayed in Figs. 7(b) to 7(d) and 8(b) to 8(d), and the reconstruction errors (in pixel units) with respect to the ground truth models are presented in Figs. 7(e) and 8(e). The average reconstruction deviation is far less than one pixel, which demonstrates the accuracy of the proposed algorithm.

Fig. 6

Accuracy evaluation of three-dimensional (3-D) reconstruction results. (a) A flat paper surface used for the experiment. (b) The reconstructed planar surface. (c) The 3-D plane under various viewpoints. (d) Distribution of the surface normal deviations.


Fig. 7

Accuracy evaluation with synthetic data: saddle shape. (a) Seven saddle images with virtual illuminations. (b) to (d) The reconstructed 3-D model under various viewpoints. (e) Distribution of the reconstruction error in pixel units.


Fig. 8

Accuracy evaluation with synthetic data: cone. (a) Seven cone images with virtual illuminations. (b) to (d) The reconstructed 3-D model under various viewpoints. (e) Distribution of the reconstruction error in pixel units.


In the fingerprint experiment, the four original images with different lighting conditions are shown in Fig. 9(a). The 3-D model reconstructed by the traditional PS method, which adopts the Lambert reflectance model, is shown in Fig. 9(b), and the 3-D model reconstructed by the proposed method is given in Fig. 9(c). Figure 9(d) shows the albedo map obtained by the proposed method. Another experiment is conducted on a palm print with the same parameter settings, as shown in Fig. 10. From the results, we can see that skin features such as the finger and palm ridges are clearly retrieved in 3-D space. Because a nonlinear model must be solved for each image pixel, our algorithm takes more time than the traditional linear Lambert model; the whole computation time from image input to the final 3-D model is given in Table 1. The efficiency can be further improved in the future with more efficient development tools such as Visual C++ and parallel processing devices such as graphics processing units.

Fig. 9

Experimental results on a fingerprint. (a) Four images under illumination directions {0.41,0.45,0.80}, {0.39,0.47,0.80}, {0.12,0.58,0.81}, and {0.52,0.24,0.82}. (b) Reconstruction result via the traditional photometric stereo (PS) method based on the Lambert reflectance model. (c) Reconstruction result by the proposed method. (d) The albedo map obtained by the proposed method.


Fig. 10

3-D reconstruction of a palm print. (a) Four images of the palm print under illumination directions {0.77,0.17,0.61}, {0.20,0.91,0.36}, {0.34,0.83,0.43}, and {0.47,0.14,0.87}. (b) The albedo map obtained by the proposed method. (c) 3-D reconstruction result via the traditional PS method based on the Lambert reflectance model with w=5, k=1.5. (d) 3-D reconstruction result by the proposed method with w=5, k=1.5. (e) to (f) The cropped 3-D images for close observation of (c) and (d).


Table 1

Calculation time by the nonlinear model and linear Lambert model (in seconds).

Dataset       Image size   Image number   Our method   Lambert model
Fingerprint   300×200      7              183.57       23.21
Palm print    659×493      7              726.66       45.83

4.2. Enhancement of 3-D Models Under Different Parameters

The parameters $w$ and $k$ in Eq. (9) play an important role in the rendering of the 3-D fingerprint model. In this experiment, different values of $w$ and $k$ are tested, as shown in Fig. 11. Figure 11(a) shows the original fingerprint image, and Fig. 11(b) shows the reconstructed 3-D fingerprint without surface normal enhancement. Figures 11(c) to 11(e) show the 3-D models with enhancement parameters of $w=5$, $k=1.5$; $w=7$, $k=2.0$; and $w=11$, $k=4.0$, respectively. From the results, we can see that as the enhancement parameters increase, the fingerprint ridges are clearly emphasized. However, the ridges also tend to break under overenhancement, as shown in Fig. 11(f).

Fig. 11

3-D fingerprint models under different surface enhancement parameters w and k. (a) Original fingerprint image under one LED illumination. (b) 3-D fingerprint model without surface normal transformation. (c) Result with w=5, k=1.5. (d) Result with w=7, k=2.0. (e) Result with w=11, k=4.0. (f) Cropped area showing overenhancement.


4.3. Reconstruction Results with Different Image Numbers

In this experiment, three, five, and seven images are used for fingerprint 3-D reconstruction, and the results of the proposed method and the traditional Lambert method are compared. The seven fingerprint images captured under different LED illuminations are shown in Fig. 12. To make the comparison unbiased, all the 3-D models are presented without enhancement processing. Figure 13 shows the 3-D reconstruction results with seven images: Figs. 13(a) and 13(b) show the results of our method and of the traditional Lambert model, respectively, and Figs. 13(c) and 13(d) give close-up views. Without the enhancement process, Fig. 13(c) demonstrates more finger ridge detail and sharper contrast than Fig. 13(d), meaning that a higher-quality 3-D model is achieved with the proposed method. Experimental results with five and three images are given in Figs. 14 and 15, respectively. From these results, we can see that once the number of images declines to three, as illustrated in Fig. 15, there is little difference between our method and the traditional method. This is because Eq. (1) contains seven unknowns, so more images are needed to achieve a stable solution.

Fig. 12

Seven fingerprint images captured under different LED illuminations.


Fig. 13

3-D reconstruction results of a fingerprint with seven images. (a) Result by the proposed method. (b) Result by the traditional Lambert model. (c) and (d) The cropped images for close observation.


Fig. 14

3-D reconstruction results of a fingerprint with five images. (a) Result by the proposed method. (b) Result by the traditional Lambert model. (c) and (d) The cropped images for close observation.


Fig. 15

3-D reconstruction results of a fingerprint with three images. (a) Result by the proposed method. (b) Result by the traditional Lambert model. (c) and (d) The cropped images for close observation.


5. Conclusions and Future Work

This article has presented a novel method for 3-D fingerprint acquisition based on the principle of PS. Compared with previous 3-D scanning-based methods, the proposed system is rather simple, containing only a camera and some LED lamps. To calibrate the lighting directions, a shiny sphere with known radius is used. To model the reflectance property of the finger surface, a two-layer HK reflection model is proposed in place of the traditional Lambert reflectance model. Considering the nonuniform lighting property of the LEDs, a simple correction procedure is also introduced. Finally, the reconstructed 3-D model is enhanced via a linear transformation process. To verify the feasibility of the proposed algorithm and system, experiments are conducted on real finger and palm prints. Results using the traditional Lambert model are also provided for comparison to demonstrate the improvement in 3-D reconstruction accuracy.

Future work will address improving the correction method for nonuniform lighting conditions, especially in the marginal image regions. Moreover, applying the obtained 3-D fingerprint models in the recognition phase and establishing an extensive 3-D fingerprint database will be urgent tasks in the future.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61002040 and 61375041), the Introduced Innovative R&D Team of Guangdong Province-Robot and Intelligent Information Technology R&D Team (Grant No. 201001D0104648280), and the Shenzhen Key Lab for Computer Vision and Pattern Recognition (Grant No. CXB201104220032A).

References

1. F. A. Fernandez et al., "A comparative study of fingerprint image-quality estimation methods," IEEE Trans. Inf. Forensics Security 2(4), 734–743 (2007). http://dx.doi.org/10.1109/TIFS.2007.908228

2. D. Aouada and H. Krim, "Squigraphs for fine and compact modeling of 3-D shapes," IEEE Trans. Image Process. 19(2), 306–321 (2010). http://dx.doi.org/10.1109/TIP.2009.2034693

3. R. Lenz and P. L. Carmona, "Octahedral transformations for 3-D image processing," IEEE Trans. Image Process. 18(12), 2618–2628 (2009). http://dx.doi.org/10.1109/TIP.2009.2029953

4. S. Negahdaripour, H. Sekkati, and H. Pirsiavash, "Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction," IEEE Trans. Image Process. 18(6), 1203–1214 (2009). http://dx.doi.org/10.1109/TIP.2009.2013081

5. W. Miled, J. C. Pesquet, and M. Parent, "A convex optimization approach for depth estimation under illumination variation," IEEE Trans. Image Process. 18(4), 813–830 (2009). http://dx.doi.org/10.1109/TIP.2008.2011386

6. G. Parziale, E. Diaz-Santana, and R. Hauke, "The surround imager™: a multi-camera touchless device to acquire 3D rolled-equivalent fingerprints," Lect. Notes Comput. Sci. 3832, 244–250 (2005). http://dx.doi.org/10.1007/11608288

7. Y. Chen et al., "3D touchless fingerprints: compatibility with legacy rolled images," in IEEE Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference, 1–6 (2006).

8. D. Koller et al., "3D capturing of fingerprints—on the way to a contactless certified sensor," in Proc. of the Special Interest Group on Biometrics and Electronic Signatures, 33–44 (2011).

9. Z. Zhang, D. Zhang, and X. Peng, "Performance analysis of a 3D full-field sensor based on fringe projection," Opt. Lasers Eng. 42(3), 341–353 (2004). http://dx.doi.org/10.1016/j.optlaseng.2003.11.004

10. R. D. Labati et al., "Fast 3-D fingertip reconstruction using a single two-view structured light acquisition," in IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications, 1–8 (2011).

11. Y. C. Wang, L. G. Hassebrook, and D. L. Lau, "Data acquisition and processing of 3-D fingerprints," IEEE Trans. Inf. Forensics Security 5(4), 750–760 (2010). http://dx.doi.org/10.1109/TIFS.2010.2062177

12. P. Hanrahan and W. Krueger, "Reflection from layered surfaces due to subsurface scattering," in Proc. of SIGGRAPH, 165–174 (1993).

13. K. Levenberg, "A method for the solution of certain non-linear problems in least squares," Q. Appl. Math. 2(2), 164–168 (1944).

14. R. J. Woodham, "Photometric method for determining surface orientation from multiple images," Opt. Eng. 19(1), 139–144 (1980). http://dx.doi.org/10.1117/12.7972479

15. H. Kim, B. Wilburn, and M. Ben-Ezra, "Photometric stereo for dynamic surface orientations," in ECCV, 59–72 (2010).

16. L. Wu et al., "Robust photometric stereo via low-rank matrix completion and recovery," in ACCV, 703–717 (2011).

17. T. Kuparinen and V. Kyrki, "Optimal reconstruction of approximate planar surfaces using photometric stereo," IEEE Trans. Pattern Anal. Mach. Intell. 31(12), 2282–2289 (2009). http://dx.doi.org/10.1109/TPAMI.2009.101

18. S. Barsky and M. Petrou, "The 4-source photometric stereo technique for three-dimensional surfaces in the presence of highlights and shadows," IEEE Trans. Pattern Anal. Mach. Intell. 25(10), 1239–1252 (2003). http://dx.doi.org/10.1109/TPAMI.2003.1233898

19. T. Malzbender et al., "Surface enhancement using real-time photometric stereo and reflectance transformation," in Proc. of the Eurographics Symposium on Rendering, 245–250 (2006).

20. J. A. Sun et al., "Object surface recovery using a multi-light photometric stereo technique for non-Lambertian surfaces subject to shadows and specularities," Image Vis. Comput. 25(7), 1050–1057 (2007). http://dx.doi.org/10.1016/j.imavis.2006.04.025

21. W. Osten, "A simple and efficient optical 3D-sensor based on photometric stereo," in Proc. 5th Int. Workshop on Automatic Processing of Fringe Patterns, Springer, 702–706 (2005).

22. A. S. Georghiades, "Incorporating the Torrance and Sparrow model of reflectance in uncalibrated photometric stereo," in IEEE ICCV, 591–597 (2003).

23. P. Belhumeur, D. Kriegman, and A. Yuille, "The bas-relief ambiguity," in IEEE CVPR, 1040–1046 (1997).

24. W. Zhou and C. Kambhamettu, "Estimation of illuminant direction and intensity of multiple light sources," Lect. Notes Comput. Sci. 2353, 206–220 (2002). http://dx.doi.org/10.1007/3-540-47979-1

25. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). http://dx.doi.org/10.1109/34.888718

26. J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986). http://dx.doi.org/10.1109/TPAMI.1986.4767851

27. Digital Shape Workbench, "AIM@SHAPE," http://shapes.aim-at-shape.net/ (March 2007).

Biography

Wuyuan Xie received an MS in information engineering from the South China University of Technology. She is currently a PhD student in the Department of Mechanical and Automation Engineering at the Chinese University of Hong Kong. Her research interests include photometric stereo and stereo vision.

Zhan Song received his PhD degree in mechanical and automation engineering from the Chinese University of Hong Kong, Hong Kong, in 2008. He is currently with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, as an associate researcher. His research interests include structured light-based sensing and vision-based human computer interaction.

Ronald Chung received a BSEE from the University of Hong Kong, Hong Kong, and a PhD in computer engineering from University of Southern California, Los Angeles. He is currently with the Chinese University of Hong Kong as director of the Computer Vision Laboratory and professor in the Department of Mechanical and Automation Engineering. His research interests include computer vision and robotics. He is a senior member of IEEE and a member of MENSA. He was the chairman of the IEEE Hong Kong Section Joint Chapter on Robotics & Automation Society and Control Systems Society in the years 2001 to 2003.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Wuyuan Xie, Zhan Song, and Ronald C. Chung "Real-time three-dimensional fingerprint acquisition via a new photometric stereo means," Optical Engineering 52(10), 103103 (3 October 2013). https://doi.org/10.1117/1.OE.52.10.103103