Integrity of fingerprint data is essential to biometric and forensic applications. Accordingly, the FBI's Criminal Justice
Information Services (CJIS) Division has sponsored development of software tools to facilitate quality control functions
relative to maintaining its fingerprint data assets inherent to the Integrated Automated Fingerprint Identification System
(IAFIS) and Next Generation Identification (NGI). This paper introduces two such tools. The first FBI-sponsored
tool was developed by the National Institute of Standards and Technology (NIST) and examines and detects
the spectral signature of the ridge-flow structure characteristic of friction ridge skin. The Spectral Image
Validation/Verification (SIVV) utility differentiates fingerprints from non-fingerprints, including blank frames or
segmentation failures erroneously included in data; provides a "first look" at image quality; and can identify anomalies
in sample rates of scanned images. The SIVV utility can also detect errors in individual prints of a 10-print record
that were inaccurately segmented from the flat, multi-finger image acquired by the automated collection systems now
increasing in availability and usage. In such cases, the lost fingerprint can be recovered by re-segmentation from the
multi-finger image record, which by then has been compressed. The second FBI-sponsored tool, CropCoeff, was developed
by MITRE and thoroughly tested by NIST. CropCoeff enables cropping of the replacement single print directly from the compressed data file, thus
avoiding decompression and recompression of images that might degrade fingerprint features necessary for matching.
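The ridge-flow principle behind SIVV can be illustrated with a short sketch: friction ridge skin concentrates spectral energy in a narrow annulus of spatial frequencies, so a radially averaged power spectrum of a true fingerprint shows a peak that blank frames and segmentation failures lack. The code below is an illustrative sketch of that idea only, not the NIST implementation; the band limits are assumed values.

```python
import numpy as np

def ridge_spectrum_peak(img, low=0.05, high=0.25):
    """Ratio of radially averaged spectral power inside an assumed
    ridge-frequency band (cycles/pixel) to the power below it.
    Illustrative sketch of the spectral idea behind SIVV only."""
    img = np.asarray(img, dtype=float)
    img = img - img.mean()                        # drop the DC term
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    freq = np.hypot(yy - h / 2, xx - w / 2) / max(h, w)  # approx cycles/pixel
    band = spec[(freq >= low) & (freq < high)].mean()
    rest = spec[(freq > 0) & (freq < low)].mean()
    return band / rest

# Sinusoidal "ridges" (~8-pixel period) vs. featureless noise:
y, x = np.mgrid[0:128, 0:128]
ridges = np.sin(2 * np.pi * 0.12 * x)
noise = np.random.default_rng(0).normal(size=(128, 128))
print(ridge_spectrum_peak(ridges) > ridge_spectrum_peak(noise))   # True
```

A blank or non-fingerprint frame produces no such band concentration, which is the cue the utility exploits.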
JPEG 2000 image compression allows many formatting alternatives, but users frequently lack the knowledge or
experience to guide the choice. At compression time many of these options may appear nearly equivalent, yet during
exploitation the resulting differences in file structure can have a substantial impact on access speed. This is particularly true for very large
images such as those regularly used in remote sensing and many defense systems. This paper examines the impacts of
JPEG 2000 options such as tiling, tile-parts, precincts, and packet ordering on large single band images, particularly in
relationship to random access speed.
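The effect of tiling on random access can be seen with simple index arithmetic: a reader needs to decode only the tiles that overlap a requested region, so the decoded area shrinks from the whole image to a few tiles. A minimal sketch, ignoring the codestream's tile/image grid offsets, which a real JPEG 2000 reader must honor:

```python
def tiles_for_region(x0, y0, x1, y1, tile_w, tile_h):
    """Return (col, row) indices of every tile overlapping the
    half-open pixel region [x0, x1) x [y0, y1).

    Sketch only: real codestreams may place the tile grid at an
    offset from the image origin, which this ignores."""
    cols = range(x0 // tile_w, (x1 - 1) // tile_w + 1)
    rows = range(y0 // tile_h, (y1 - 1) // tile_h + 1)
    return [(c, r) for r in rows for c in cols]

# A 512x512 window deep inside a 50,000 x 50,000 image:
needed = tiles_for_region(40_000, 40_000, 40_512, 40_512, 1024, 1024)
print(len(needed))   # 1 tile of ~1 MPixel decoded instead of 2.5 GPixel
```

Precincts and packet ordering refine this further, allowing sub-tile regions and resolutions to be fetched without introducing tile-boundary artifacts.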
Ideally, a compression algorithm should be idempotent over multiple cycles of compression and decompression when the same set of compression parameters is used. However, this condition is generally not satisfied for most images and compression settings of interest. Furthermore, if the image undergoes cropping before recompression, image quality can degrade severely. In this paper we compare the multiple-compression-cycle performance of JPEG and JPEG 2000. The performance is compared for different quantization tables (shaped or flat) and a variety of bit rates, with or without cropping. It is shown that, in the absence of clipping errors, it is possible to derive conditions on the quantization tables under which the image is idempotent to repeated compression cycles. Simulation results show that when images have the same mean squared error (MSE) after the first compression cycle, there are situations in which images compressed with JPEG 2000 degrade more rapidly than those compressed with JPEG in subsequent compression cycles. The multiple-compression-cycle performance of JPEG 2000 also depends on the specific choice of wavelet filters. Finally, we observe that in the presence of cropping, JPEG 2000 is clearly superior to JPEG, and when images are expected to be cropped between JPEG 2000 compression cycles, we recommend using the canvas coordinate system.
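The idempotence condition on quantization can be seen in a toy sketch: if reconstructed coefficients are requantized with the same uniform step, the original indices are reproduced exactly, so the second cycle is lossless. In a real codec the transform's finite-precision arithmetic and pixel clipping perturb this, which is why the condition holds only in the absence of clipping errors. The step size and coefficient distribution below are illustrative only.

```python
import numpy as np

def quantize(x, q):
    """Uniform midtread quantizer: coefficient -> integer index."""
    return np.round(x / q).astype(int)

def dequantize(k, q):
    """Midpoint reconstruction of a uniform quantizer."""
    return k * q

# Requantizing the reconstruction with the same step returns the
# same indices: round(k * q / q) == k, so cycle two changes nothing.
rng = np.random.default_rng(1)
coeffs = rng.normal(scale=50.0, size=1000)   # stand-in transform coefficients
q = 8.0
k1 = quantize(coeffs, q)
k2 = quantize(dequantize(k1, q), q)
print(np.array_equal(k1, k2))   # True: the second cycle is lossless
```

Mismatched steps between cycles, or clipping of decoded pixel values before recompression, break this exact fixed-point property and cause the cumulative degradation studied in the paper.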
JPEG 2000 inverse scalar quantization includes a reconstruction rounding factor that has a range of allowable values within the standard. Although the standard notes a fixed value that works reasonably well in practice, implementations are allowed to use other values in an effort to improve the reconstructed image quality. This paper discusses some of the issues involved in adjusting the rounding factor.
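In outline, the inverse scalar quantization reconstructs xhat = sign(q) * (|q| + r) * delta for nonzero index q, where the offset r is decoder-selectable. The sketch below, with an assumed coefficient distribution and step size, shows why an r below the bin midpoint of 0.5 can lower MSE for the peaked, zero-centered distributions typical of wavelet coefficients.

```python
import numpy as np

def dequantize_j2k(q, delta, r=0.5):
    """JPEG 2000-style inverse scalar quantization with reconstruction
    offset r: xhat = sign(q) * (|q| + r) * delta for q != 0, else 0.
    r = 0.5 reconstructs at the bin midpoint; smaller offsets bias the
    reconstruction toward zero."""
    q = np.asarray(q, dtype=float)
    return np.sign(q) * (np.abs(q) + r) * delta * (q != 0)

# Laplacian "wavelet coefficients" decay within each bin, so the
# in-bin centroid sits below the midpoint and a smaller r wins.
# Scale and step values here are illustrative assumptions.
rng = np.random.default_rng(2)
x = rng.laplace(scale=4.0, size=200_000)
delta = 8.0
q = np.sign(x) * np.floor(np.abs(x) / delta)     # deadzone quantizer indices
mse = lambda r: np.mean((x - dequantize_j2k(q, delta, r)) ** 2)
print(mse(0.375) < mse(0.5))   # True for this peaked distribution
```

The best offset depends on the subband's coefficient statistics, which is precisely why the standard leaves r open to the implementation.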
The wavelet-based JPEG 2000 image compression standard is flexible enough to handle a large number of imagery types in a broad range of applications. One important application is the use of JPEG 2000 to compress imagery collected by remote sensing systems. This general class of imagery is often larger -- in terms of number of pixels -- than most other classes of imagery. Support for tiling and the embedded, progressively ordered bit stream of JPEG 2000 are very useful in handling very large images. However, the performance of JPEG 2000 on detected SAR (Synthetic Aperture Radar) and other kinds of specular imagery is not as good, from the perspective of visual image quality, as its performance on more 'literal' imagery types. In this paper, we try to characterize the problem by analyzing some statistical and qualitative differences between detected SAR and other, more literal remote sensing imagery types. Several image examples are presented to illustrate the differences. JPEG 2000 offers a wide range of options that can be used to optimize the algorithm for a particular imagery type or application. A number of these options -- including subband weighting, trellis-coded quantization (TCQ), and packet decomposition -- are explored for their impact on SAR image quality. Finally, the anatomy of a texture-preserving wavelet compression scheme is presented, with very impressive visual results. The demonstration system used for this paper is not currently supported by the JPEG 2000 standard, but it is hoped that, with additional research, a variant of the scheme can be fit into the framework of JPEG 2000.
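One statistical difference can be illustrated with a toy comparison: fully developed speckle makes detected SAR intensity approximately exponentially distributed over a uniform scene, giving a coefficient of variation near 1, whereas a 'literal' sensor imaging the same scene fluctuates far less. This noise-like relative variation is part of what makes SAR difficult for wavelet coders tuned to literal imagery. The distributions and parameters below are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

# Toy statistical contrast between detected SAR and "literal" imagery
# over a uniform scene of mean brightness 100 (illustrative values).
rng = np.random.default_rng(3)
sar = rng.exponential(scale=100.0, size=500_000)        # speckled intensity
literal = rng.normal(loc=100.0, scale=5.0, size=500_000)

def cv(a):
    """Coefficient of variation: relative fluctuation about the mean."""
    return a.std() / a.mean()

print(round(cv(sar), 2), round(cv(literal), 2))   # ~1.0 vs ~0.05
```

Wavelet quantization tends to smooth this high-variance texture away, which is why speckle-bearing imagery benefits from the texture-preserving modifications explored in the paper.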