**Publications** (98)

*CR* ≤ 2,500:1) have been reported for selected images. Because a region boundary is often represented with more parameters than the region contents, it is crucial to maximize the boundary compression ratio by reducing these parameters. Researchers have elsewhere shown that established boundary encoding techniques such as chain coding, simplicial complexes, or quadtrees, to name but a few, are inadequate to support OBC within the aforementioned *CR* range. Several existing compression standards such as MPEG support efficient boundary representation, but do not necessarily support OBC at *CR* ≥ 500:1. Siddiqui et al. exploited concepts from fractal geometry to encode and compress region boundaries based on fractal dimension, reporting *CR* = 286.6:1 in one test. However, Siddiqui's algorithm is costly and appears to contain ambiguities. In this paper, we first discuss fractal dimension and OBC compression ratio, then enhance Siddiqui's algorithm, achieving significantly higher *CR* for a wide variety of boundary types. In particular, our algorithm smooths a region boundary *B*, then extracts its inflection or control points *P*, which are compactly represented. The fractal dimension *D* is computed locally for the detrended *B*. By appropriate subsampling, one efficiently segments disjoint clusters of *D* values subject to a preselected tolerance, thereby partitioning *B* into a multifractal. This is accomplished using four possible compression modes. In contrast, previous researchers have characterized boundary variance with one fractal dimension, thereby producing a monofractal. At its most complex, the compressed representation contains *P*, a spatial marker, and a *D* value for each monofractal boundary segment, with slight additional overhead indicating an encoding mode. The simplest representation contains *P* and a pointer into a database of region patterns. Each of these patterns has an associated fractal dimension, thus alleviating storage of segment-specific *D* values. Contour reconstruction during decompression is guided by the smoothed contour. Analysis of this procedure over a database of 73 images reveals 622:1 ≤ *CR* ≤ 1,720:1 is typical for natural scenes, demonstrating the utility of our approach.
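The local fractal dimension *D* of a boundary is commonly estimated by box counting. The following is a minimal, self-contained sketch of that estimator (the function name, scales, and test boundary are illustrative assumptions, not the paper's implementation):

```python
import math

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point set,
    e.g., a sampled region boundary B."""
    samples = []
    for s in box_sizes:
        # Quantize each point to a grid cell of side s; count occupied cells.
        cells = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        samples.append((math.log(1.0 / s), math.log(len(cells))))
    # Least-squares slope of log N(s) versus log(1/s) estimates D.
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    num = sum((x - mx) * (y - my) for x, y in samples)
    den = sum((x - mx) ** 2 for x, _ in samples)
    return num / den

# A straight boundary segment should yield D close to 1.
line = [(t / 10000, t / 10000) for t in range(10000)]
D = box_counting_dimension(line, [0.1, 0.05, 0.02, 0.01, 0.005])
```

Applying the same estimator over a sliding window along the detrended boundary would yield the locally varying *D* values that the paper's multifractal partitioning clusters.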

*Object-based compression* (OBC) promises significantly improved performance via compression ratios ranging from 200:1 to as high as 2,500:1. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. During decompression, such regions can be approximated by objects from a codebook, yielding a reconstructed image that is semantically equivalent to the corresponding source image, but has pixel- and featural-level differences. Semantic equivalence between the source and decompressed image facilitates fast decompression through efficient substitutions, albeit at the cost of codebook search in the compression step. Given small codebooks, OBC holds promise for information-push technologies where approximate context is sufficient, for example, transmission of surveillance images that provide the gist of a scene. However, OBC is not necessarily useful for applications requiring high accuracy, such as medical image processing, because substitution of source content can be inaccurate at small spatial scales. The cost of segmentation is a significant disadvantage in current OBC implementations. Several innovative techniques have been developed for region segmentation, as discussed in a previous paper [4]. Additionally, tradeoffs between representational fidelity, computational cost, and storage requirement occur, as with the vast majority of lossy compression algorithms. This paper analyzes the computational (time) and storage (space) complexities of several recent OBC algorithms applied to single-frame imagery. A time complexity model is proposed, which can be associated theoretically with a space complexity model that we have previously published [2]. The result, when combined with measurements of representational accuracy described in a companion paper [5], supports estimation of a time-space-error bandwidth product that could facilitate dynamic optimization of OBC algorithms. In practice, this would support efficient compression with visually acceptable reconstruction for a wide variety of military and domestic applications.

Although *object-based compression* (OBC) promises significantly improved bit rate and computational efficiency, it is epistemologically distinct in a way that renders existing image quality measures (IQMs) for compression transform optimization less suitable for OBC. In particular, OBC segments source image regions, then efficiently encodes each region's content and boundary. During decompression, region contents are often replaced by similar-appearing objects from a codebook, thus producing a reconstructed image that corresponds semantically to the source image, but has pixel-, featural-, and object-level differences that are apparent visually. OBC thus gains the advantage of fast decompression via efficient codebook-based substitutions, albeit at the cost of codebook search in the compression step and significant pixel- or region-level errors in decompression. Existing IQMs are pixel- and region-oriented, and thus tend to indicate high error due to OBC's lack of pixel-level correlation between source and reconstructed imagery. Thus, current IQMs do not necessarily measure the semantic correspondence that OBC is designed to produce. This paper presents image quality measures for estimating semantic correspondence between a source image and a corresponding OBC-decompressed image. In particular, we examine the semantic assumptions and models that underlie various approaches to OBC, especially those based on textural as well as high-level name and spatial similarities. We propose several measures that are designed to quantify this type of high-level similarity, and can be combined with existing IQMs for assessing compression transform performance. Discussion also highlights how these novel IQMs can be combined with time and space complexity measures for compression transform optimization.
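The mismatch between pixel-level and semantic-level assessment can be made concrete with a toy sketch (all names, values, and the label-agreement measure are hypothetical illustrations, not measures proposed in the paper): a codebook substitution produces large per-pixel error while the object identity at each pixel is fully preserved.

```python
def mse(a, b):
    # Conventional pixel-level IQM: mean squared error.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def label_agreement(labels_a, labels_b):
    # Toy semantic measure: fraction of pixels whose object label survives.
    return sum(x == y for x, y in zip(labels_a, labels_b)) / len(labels_a)

# A textured source region is replaced at decompression by a codebook object
# with different pixel values but the same object identity per pixel.
src_pixels = [120, 130, 125, 128]
dec_pixels = [90, 160, 100, 150]              # large pixel-level differences
src_labels = ["tree", "tree", "sky", "sky"]
dec_labels = ["tree", "tree", "sky", "sky"]   # identity preserved

pixel_error = mse(src_pixels, dec_pixels)              # high error reported
semantic = label_agreement(src_labels, dec_labels)     # perfect correspondence
```

Here a conventional IQM flags the reconstruction as poor even though, at the object level, the decompressed image corresponds exactly to the source, which is the situation the proposed measures are meant to capture.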

*iterated function systems* (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called *object-based compression*, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral signature. In particular, discussion addresses issues such as efficient boundary representation, variance assessment and representation, as well as texture classification and replacement algorithms that can decrease compression overhead and increase reconstruction fidelity in the decompressed image. Contextual extraction of motion patterns in digital video sequences, using a frequency-domain pattern recognition technique based on interframe correlation, is described in a companion paper. This technique can also be extended to multidimensional image domains, to support joint spectral, spatial, and temporal compression.

*compressive processing*, this technique involves computation using compressed data (e.g., signals or imagery) in compressed form, without decompression, primarily for image and signal processing (ISP) applications. Given a source image **a** and a compression transform *T*, a compressed image **c** = *T*(**a**) is subjected to an operation O' that is an analogue over *range*(*T*) of an operation on uncompressed imagery **b** = O(**a**), such that **d** = O'(**c**) = *T*(O(**a**)). By applying the inverse of *T* (denoted by *T*^{-1}) to **d**, we obtain **b**. If *T* does not have an inverse, then an approximation *T*^{*} to the inverse can be applied, such that the decompressed image is given by **d** = *T*^{*}(O'(*T*(**a**))) ≈ **b**, which is the customary case in lossy compression. Analysis emphasizes implementational concerns such as applicable compression transforms (*T*) and theory that relates compressive operations (O') to corresponding image or signal processing operations (O). Discussion also includes error propagation in cascades of compressive operations. Performance results are given for various image processing operations.
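For a linear transform *T*, the defining relation **d** = O'(**c**) = *T*(O(**a**)) can be checked directly. The sketch below uses an assumed toy transform (2×2 block means, loosely in the spirit of block-based coders such as BTC) with a brightness shift as O; these choices are illustrative and are not the transforms analyzed in the paper:

```python
def T(img):
    # Toy "compression" transform: mean of each 2x2 block (assumed, BTC-like).
    h, w = len(img), len(img[0])
    return [[sum(img[i + di][j + dj] for di in (0, 1) for dj in (0, 1)) / 4.0
             for j in range(0, w, 2)] for i in range(0, h, 2)]

def O(img, k=10):
    # Uncompressed-domain operation: add brightness k to every pixel.
    # Because T is linear, the compressive analogue O' is the same shift.
    return [[p + k for p in row] for row in img]

a = [[10, 20, 30, 40],
     [50, 60, 70, 80],
     [15, 25, 35, 45],
     [55, 65, 75, 85]]

d_compressive = O(T(a))   # O' applied directly to compressed data c = T(a)
d_reference   = T(O(a))   # the same result computed the conventional way
assert d_compressive == d_reference   # d = O'(T(a)) = T(O(a))
```

For nonlinear operations or lossy *T*, O' generally only approximates *T*(O(**a**)), which is where the error-propagation analysis mentioned above becomes relevant.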

Keywords: Image compression, Image quality measures, Error analysis

_{D}) = O(X/Y) = O(kl) work, where CR_{D} denotes the domain compression ratio. In practice, typical computational efficiencies of approximately one-half the domain compression ratio have been realized via processing imagery compressed by VQ, block truncation coding (BTC), and visual pattern image coding (VPIC). In this paper, we extend our previous research in compressive ATR to include the processing of compressed stereoscopic imagery. We begin with a brief review of stereo vision and the correspondence problem, as well as theory fundamental to the processing of compressed data. We then summarize VQ, BTC, and VPIC compression. In Part 2 of this series, we map a cepstrum-based stereo matching algorithm to stereoscopic images represented by the aforementioned compressive formats. Analyses emphasize computational cost and stereo disparity error. Algorithms are expressed in terms of image algebra, a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Since image algebra has been implemented on numerous sequential and parallel computers, our algorithms are feasible and widely portable.

_{d}). Defined as the ratio of the number of source data to the number of compressed data, CR_{d} generally exceeds the customary compression ratio. Analyses emphasize computational complexity, information loss, and implementational feasibility.
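The definition above reduces to a simple ratio; as a minimal illustration (the image dimensions and codebook index count are assumed examples, not figures from the paper):

```python
def domain_compression_ratio(n_source, n_compressed):
    # CR_d as defined above: number of source data / number of compressed data.
    return n_source / n_compressed

# E.g., a 512x512 image whose blocks are replaced by 4096 codebook indices:
CR_d = domain_compression_ratio(512 * 512, 4096)   # 64.0
```

Processing in the compressed domain then touches roughly CR_d times fewer data, which is the source of the computational speedup discussed above.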
