Fingerprint identification is a well-regarded and widely accepted modality in the field of biometrics owing to its high recognition rates. Legacy 2D contact-based methods, though highly evolved technologically, suffer from certain drawbacks: because they require contact, they are subject to many known issues that affect recognition rates. FlashScan3D/University of Kentucky (UKY) developed state-of-the-art 3D non-contact fingerprint scanners using different structured light illumination (SLI) techniques, namely the SLI single Point Of View (POV) and SLI sub-windowing techniques. Capturing fingerprints in 3D by non-contact means yields much higher-quality fingerprint data, which ultimately improves matching rates over a traditional 2D approach. In this paper, we present a full-hand 3D non-contact scanner using the SLI sub-windowing technique. Sample fingerprint data and experimental results for fingerprint matching on a small 3D fingerprint test set are presented.
As crime prevention and national security remain top priorities, requirements for the use of fingerprints for identification continue to grow. As fingerprint databases continue to expand, new technologies that improve accuracy, and ultimately matching performance, become increasingly critical to maintaining the effectiveness of these systems. FlashScan3D has developed non-contact fingerprint scanners based on the principles of Structured Light Illumination (SLI) that capture 3D fingerprint data quickly, accurately, and independently of an operator. FlashScan3D will present findings from various research projects performed for the US Army and the Department of Homeland Security.
Fingerprint identification is one of the most prolific and well-regarded modalities in the field of biometrics owing to its high recognition rates. Fingerprints remain consistent throughout a person's lifetime and are relatively simple and inexpensive to capture with techniques ranging from inked fingerprint cards to Livescan devices. In this paper, we present an algorithm and a working device capable of capturing high-quality 3D fingerprints based on Structured Light Illumination using a novel approach called the sub-window technique. The various benefits of this unique approach and its applications in fingerprint biometrics are presented.
Structured-light illumination (SLI) involves projecting a series of structured or striped patterns from a projector onto an object and then using a camera, placed at an angle from the projector, to record the target's 3-D shape. Because traditional SLI systems multiplex these structured patterns in time, they require the target object to remain still during the scanning process. The technique of composite-pattern design was therefore introduced as a means of combining multiple SLI patterns, using principles of frequency modulation, into a single pattern that can be continuously projected and from which the 3-D surface can be reconstructed from a single image, thereby enabling the recording of 3-D video. However, the associated process of modulation and demodulation is limited by the spatial bandwidth of the projector-camera pair, which introduces distortion near surface or albedo discontinuities. This paper therefore introduces a postprocessing step to refine the reconstructed depth surface. Simulated experiments show a 78% reduction in depth error.
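As an illustration of the frequency-modulation idea described above, the sketch below combines several phase-shifted sinusoidal patterns into one composite pattern by placing each on its own carrier frequency along the orthogonal axis; a camera-side demodulator would band-pass filter around each carrier to recover the individual patterns. The pattern size, carrier frequencies, and variable names are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of frequency-multiplexed composite-pattern generation.
import numpy as np

H, W = 480, 640               # projector pattern size (assumed)
N = 4                         # number of phase-shifted patterns to combine
f_pattern = 8                 # sinusoid frequency along y (phase direction)
carriers = [40, 60, 80, 100]  # one carrier frequency along x per pattern (assumed)

y = np.arange(H)[:, None] / H
x = np.arange(W)[None, :] / W

composite = np.zeros((H, W))
for n in range(N):
    # n-th phase-shifted sinusoidal pattern, values in [0, 1]
    pattern = 0.5 + 0.5 * np.cos(2 * np.pi * f_pattern * y + 2 * np.pi * n / N)
    # modulate onto its carrier; demodulation band-pass filters around each
    # carrier to recover the individual pattern from a single camera image
    composite += pattern * np.cos(2 * np.pi * carriers[n] * x)

# normalize to the projector's dynamic range [0, 1]
composite = (composite - composite.min()) / (composite.max() - composite.min())
```

Because the whole composite pattern is projected continuously, no projector-camera synchronization is needed, which is what enables 3-D video capture at the camera's frame rate.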
We present an eight-million-point structured light illumination scanner design. It has a single-patch projection resolution of 12,288 lines along the phase direction. The Basler CMOS video cameras have a resolution of 2352 by 1726 pixels. The configuration consists of a custom Boulder Nonlinear Systems Spatial Light Modulator for the projection system and dual four-megapixel digital video cameras. The camera fields of view are tiled with a minimal overlap region, and the system has a potential capture rate of 24 frames per second. This is a status report on a project still under development. We will report on the concept of applying a 1D-square footprint projection chip and give preliminary results of single-camera scans. The structured light illumination technique we use is the multi-pattern, multi-frequency phase measuring profilometry technique already published by our group.
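As a back-of-envelope check of the quoted point count, using only the figures given above, two tiled 2352 x 1726 cameras yield roughly eight million measured points:

```python
# Point-count sanity check from the camera specs quoted in the abstract above.
cam_w, cam_h = 2352, 1726            # Basler CMOS resolution per camera
n_cameras = 2                        # dual cameras, tiled with minimal overlap

points_per_camera = cam_w * cam_h    # 4,059,552 (~4 megapixels)
total_points = n_cameras * points_per_camera
print(points_per_camera, total_points)   # 4059552 8119104 (~8 million points)
```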
Fingerprints are one of the most commonly used and relied-upon biometric technologies. However, the captured fingerprint image is often far from ideal due to imperfect acquisition techniques that can be slow and cumbersome to use while still failing to provide complete fingerprint information. Most of the difficulties arise from the contact of the fingerprint surface with the sensor platen. To overcome these difficulties, we have been developing a non-contact scanning system for acquiring a 3-D scan of a finger with sufficiently high resolution, which is then converted into a 2-D rolled-equivalent image. In this paper, we describe certain quantitative measures for evaluating scanner performance. Specifically, we use image software components developed by the National Institute of Standards and Technology to derive our performance metrics. Out of the eleven identified metrics, three were found to be most suitable for evaluating scanner performance. A comparison is also made between 2D fingerprint images obtained by traditional means and the 2D images obtained after unrolling the 3D scans, and the quality of the acquired scans is quantified using these metrics.
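For illustration only, the sketch below computes a simple gradient-coherence score of the kind often used to compare the ridge clarity of contact 2D prints against unrolled 3D scans. The actual metrics in this work come from the NIST software components mentioned above; this function is a hypothetical stand-in, not one of those eleven metrics.

```python
# Illustrative ridge-clarity measure: mean gradient coherence over blocks.
# Higher values indicate clearer, more consistent ridge flow.
import numpy as np

def orientation_coherence(img, block=16):
    """Mean gradient coherence of a grayscale fingerprint image."""
    gy, gx = np.gradient(img.astype(float))       # gradients along rows, cols
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    scores = []
    for r in range(0, img.shape[0] - block, block):
        for c in range(0, img.shape[1] - block, block):
            sxx = gxx[r:r + block, c:c + block].sum()
            syy = gyy[r:r + block, c:c + block].sum()
            sxy = gxy[r:r + block, c:c + block].sum()
            denom = sxx + syy
            if denom > 0:
                # coherence in [0, 1]: 1 = perfectly oriented ridges in the block
                scores.append(np.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2) / denom)
    return float(np.mean(scores)) if scores else 0.0
```

Applied to both a traditionally captured 2D print and the unrolled 2D image of the same finger, such a score gives one simple per-image number to compare, in the spirit of the comparison described above.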
Interacting with computer technology while wearing a space suit is difficult at best. We present a sensor that can interpret body gestures in three dimensions. Having the depth dimension allows simple thresholding to isolate the hands and allows their position and orientation to be used as input controls for digital devices such as computers and robotic devices. Structured light pattern projection is a well-known method of accurately extracting 3-D information from a scene. Traditional structured light methods require several different patterns to recover depth without ambiguity or albedo sensitivity, and they are corrupted by object motion during the projection/capture process. The authors have developed a methodology for combining multiple patterns into a single composite pattern by using 2-D spatial modulation techniques. A single composite pattern projection does not require synchronization with the camera, so the data acquisition rate is limited only by the video rate. We have incorporated dynamic programming to greatly improve the resolution of the scan. Other applications include machine vision, remote-controlled robotic interfacing in space, advanced cockpit controls, and computer interfacing for the disabled. We will present performance analysis, experimental results, and video examples.
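A minimal sketch of the "simple thresholding" idea mentioned above: given a depth map from the composite-pattern scan, the hands (the surfaces nearest the sensor) can be segmented by depth range alone and their centroid used as a pointer-style control. The depth band and function names are assumptions for illustration, not the authors' implementation.

```python
# Depth-range segmentation of the hands from a metric depth map.
import numpy as np

def isolate_hands(depth_map, near=0.3, far=0.8):
    """Binary mask of pixels whose depth (meters) falls in the band where
    the user's hands are expected (near/far are illustrative values)."""
    return (depth_map > near) & (depth_map < far)

def hand_centroid(mask):
    """Centroid of the segmented hand region, usable as a pointer-style input."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```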
KEYWORDS: Cameras, 3D scanning, Projection systems, Calibration, Phase shifts, 3D modeling, 3D metrology, 3D acquisition, Laser imaging, Structured light
Structured light projection is one of the most accurate non-contact methods for scanning surface topologies. The field of view of such a scan may range from millimeters to several meters. One of the most precise and robust structured light methods is Phase Measuring Profilometry. This method utilizes a sinusoidal pattern that is laterally shifted across a surface. An image is captured at uniform intervals, and the "phase" is recovered for each pixel position by correlating across the shifted patterns. In general, the more pattern shifts and the higher the spatial frequency, the more accurate the depth measurement at each pixel location becomes. However, at high frequencies ambiguity errors can occur, so a dual-frequency approach is commonly used in which a low-frequency pattern provides non-ambiguous depth, followed by a high-frequency pattern. The low-frequency result is used to unwrap the high-frequency phase to yield a non-ambiguous and precise phase. If the high frequency is too high relative to the low frequency, unwrapping errors can still occur; the solution is a multi-frequency method. We present experimental results for several variations of the multi-frequency approach, yielding accuracies of 0.127 mm standard deviation in depth with 0.92 mm pixel spacing. With consumer megapixel camera technology, this equates to a 0.127 mm deviation over a field of view of 2 to 3 meters. Achieving this level of accuracy also requires calibration for radial and perspective distortions. Applications for this technology include non-contact surface measurement and robotic and computer vision.
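The sketch below shows the two core computations described above under assumed names (it is not the authors' code): per-pixel phase recovery from N uniformly shifted sinusoidal images, and temporal unwrapping of the high-frequency phase using the non-ambiguous low (unit) frequency result.

```python
# A minimal sketch of N-step phase recovery and dual-frequency unwrapping.
# Assumes the n-th captured image follows I_n = A + B*cos(phase - 2*pi*n/N).
import numpy as np

def recover_phase(images):
    """Per-pixel wrapped phase from N uniformly phase-shifted images."""
    N = len(images)
    shifts = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(s) for I, s in zip(images, shifts))
    den = sum(I * np.cos(s) for I, s in zip(images, shifts))
    return np.arctan2(num, den)              # wrapped to (-pi, pi]

def unwrap_with_low_freq(phase_hi, phase_lo, f_hi):
    """Resolve the 2*pi ambiguities of the high-frequency phase using the
    non-ambiguous phase of a unit-frequency (low) pattern."""
    k = np.round((f_hi * phase_lo - phase_hi) / (2 * np.pi))
    return phase_hi + 2 * np.pi * k          # non-ambiguous, precise phase
```

In a multi-frequency variant, the same unwrapping step is applied successively from lower to higher frequencies so that the frequency ratio at each step stays small enough to avoid the unwrapping errors noted above.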
Phase-only spatial light modulators provide active pattern projection. Unlike with incoherent techniques, the pattern energy is inversely proportional to the total pattern area. We refer to this flexible pattern/beamsteering system as the real-time adaptive multi-spot laser beamsteering system (RAMS-LBS). The spatial light modulator under investigation is a 512x512-element, phase-only liquid crystal device recently produced by Boulder Non-linear Systems Incorporated. A laser tweezer is a powerful micromanipulation tool in both the physical and life sciences. In this study, we introduce the detection and tracking of the movement of particles that are controlled by the laser tweezers. The detection and tracking philosophy of these methods is to use optical flow and matched-filter techniques. Our discussion will include different tracking protocols and some demonstrations of shape recognition. We have also developed a numerical simulation integrated with the experimental implementation.
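As an illustration of the matched-filter side of the tracking approach (a generic sketch, not the authors' code), the following locates a particle template in a video frame via FFT-based cross-correlation; the correlation peak gives the particle's position in that frame.

```python
# Matched-filter particle detection by FFT-based cross-correlation.
import numpy as np

def matched_filter_detect(frame, template):
    """Locate the template in the frame; returns the (row, col) of the
    template's top-left corner at the best match."""
    f = frame - frame.mean()                 # remove DC so the peak is contrast-driven
    t = template - template.mean()
    # zero-pad the template to the frame size and correlate in the Fourier domain
    T = np.zeros_like(f)
    T[:t.shape[0], :t.shape[1]] = t
    corr = np.real(np.fft.ifft2(np.fft.fft2(f) * np.conj(np.fft.fft2(T))))
    return np.unravel_index(np.argmax(corr), corr.shape)
```

Repeating this per frame, with the search restricted to a window around the previous position, gives a simple tracking loop of the kind the abstract's optical-flow and matched-filter discussion refers to.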
A phase-only spatial light modulator can provide active spot-pattern projection with a high signal-to-noise ratio and form near-arbitrary phase modulation surfaces. As a result, it can diffract laser beams into a near-arbitrary pattern of laser spots. Depending on the sequence of phase images loaded onto the SLM, the spots can be scanned on independent and continuous two-dimensional trajectories. We refer to this flexible beamsteering system as the real-time adaptive multi-spot laser beamsteering system (RAMS-LBS). This paper presents work in progress on developing 2D and 3D calibration algorithms for a spot-pattern projection system. In the 2D calibration process, spot grids are projected with successively more spot locations. After each projection, a higher-order model is determined for the camera-to-projector coordinate transform. The accuracies of different model orders are measured. In the 3D calibration process, grids of spots are projected onto a non-coplanar target grid to construct the transformation matrix between the different coordinate systems. Perspective distortions are included in the transformation vectors after calibration; therefore, 3D information about the target can be obtained with the calibrated system. Applications such as 3D target surface topology measurement and target detection using 2D and 3D information are described in this paper.
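A minimal sketch of the 2D calibration step described above, assuming detected spot centroids in camera coordinates are paired with the projector coordinates that generated them; the basis construction, model order, and function names are illustrative assumptions, not the authors' algorithm.

```python
# Least-squares fit of a higher-order camera-to-projector coordinate model.
import numpy as np

def poly_terms(x, y, order):
    """Monomial basis [1, y, y^2, ..., x, x*y, ..., x^order] up to total degree `order`."""
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_cam_to_proj(cam_xy, proj_xy, order=2):
    """Fit projector (x, y) as a polynomial in camera (x, y).

    cam_xy, proj_xy: (N, 2) arrays of matched spot coordinates.
    Returns the coefficient matrix; apply with poly_terms(x, y, order) @ coeffs.
    """
    A = poly_terms(cam_xy[:, 0], cam_xy[:, 1], order)
    coeffs, *_ = np.linalg.lstsq(A, proj_xy, rcond=None)
    return coeffs
```

The accuracy of different model orders can then be compared by the residual error on held-out spot locations, mirroring the successive-grid procedure described above.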