Image thumbnails are used in most imaging products and applications, where they allow a quick preview of the content of
the underlying high-resolution images. The question "How best can a high-resolution original image be represented
with a fixed number of thumbnail pixels?" is addressed using both automatically and manually generated thumbnails.
Automatically generated thumbnails that preserve the image quality of the high resolution originals are first reviewed and
subjectively evaluated. These thumbnails allow viewers to judge image quality interactively while applying their own
knowledge to select desired subject matter. Images containing textures are, however, difficult for the automatic
algorithm. Textured images are therefore studied further by using photo editing to manually generate representative thumbnails.
The automatic thumbnails are subjectively compared to standard (filter and subsample) thumbnails using clean, blurry,
noisy, and textured images. Results using twenty subjects find the automatic thumbnails more representative of their
originals for blurry images. In addition, as desired, there is little difference between the automatic and standard thumbnails
for clean images. The noise component improves the results for noisy images, but degrades the results for textured images.
Further studying textured images, the manual thumbnails were subjectively compared to standard thumbnails for four
images. Evaluation using forty judgments found a bimodal distribution for preference between the standard and the manual
thumbnails, with some observers preferring manual thumbnails and others preferring standard thumbnails.
A printed photograph is difficult to reuse because the digital information that generated the print may no longer be
available. This paper describes a mechanism for approximating the original digital image by combining a scan of the
printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary
information consists of a small amount of digital data to enable accurate registration and color reproduction,
followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv
coding techniques. Approximating the original digital image enables many uses, including making good quality
reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the
currency for archiving and repurposing digital images, without requiring computer infrastructure.
Lightening or darkening an image is a fundamental adjustment used to improve aesthetics or correct exposure. This paper describes new geometrical algorithms for lightness adjustment, implementing fast traversal of colors along lightness-saturation curves, applicable when the data starts naturally in YCC space (JPEG images or MPEG videos). Here, YCC refers generically to color spaces with one luminance and two color difference channels, including linear YCC spaces and CIELAB. Our first solution uses a class of curves that allows closed-form computation. Further assuming that saturation is a separable function of luminance and curve parameter simplifies the computations. This approach reduces clipping and better adjusts lightness together with saturation. Preliminary evaluation with 96 images finds good subjective results, and clipping is reduced to about 5% of a prior approach.
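As a rough illustration of the separable-saturation idea (not the paper's actual closed-form curves, which are not given here), the sketch below moves luma by a requested delta and rescales chroma by a hypothetical parabolic saturation envelope s(Y), so that saturation follows lightness instead of being clipped afterwards; all names and the envelope itself are assumptions:

```python
import numpy as np

def adjust_lightness_ycc(y, cb, cr, delta, s_max=1.0):
    """Hypothetical sketch: shift luma Y by `delta` and rescale the chroma
    channels by a separable saturation factor s(Y). Luma is a float in
    [0, 1]; chroma channels are floats in [-0.5, 0.5]."""
    y_new = np.clip(y + delta, 0.0, 1.0)

    # Assumed separable saturation envelope: zero at black and white,
    # peaked at mid-tones, so chroma shrinks as colors approach the
    # gamut boundary instead of being hard-clipped.
    def s(yv):
        return np.clip(4.0 * yv * (1.0 - yv), 0.0, s_max)

    eps = 1e-6
    scale = s(y_new) / np.maximum(s(y), eps)
    scale = np.minimum(scale, 1.0)  # never boost chroma past the original
    return y_new, cb * scale, cr * scale
```

Lightening a near-white pixel with this sketch pulls its chroma toward neutral, which is the clipping-reduction behavior the abstract describes.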
KEYWORDS: Databases, Feature extraction, Signal processing, Convolution, Optical filters, Bandpass filters, Reliability, Control systems, Pattern recognition, Signal to noise ratio
The radio plays a song that you like but do not recognize. How do you find the title and the artist? Previous approaches to finding a song in a database are based on pattern recognition: features are extracted from a hummed song, and decision rules are used to retrieve probable candidates from the database. Feature matching alone has not resulted in reliable searches from microphone samples. In this work, to find the song, we process a short, microphone-recorded sample of it. Both a feature vector and a signal are precomputed for each song in a database and also extracted from the recording. The database songs are first sorted by feature distance to the recording. Then normalized cross-correlation, although a nonlinear operation, is computed using overlap-save FFT convolution. A decision rule presents likely matches to the user for confirmation while controlling the number of false alarms shown. This system, tested on hundreds of recordings, is reliable because signals, not just features, are matched. The addition of the feature-ordered search and the decision rule makes database searches five times faster than signal matching alone.
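The signal-matching core can be sketched generically as FFT-based normalized cross-correlation of a short query against a long recording (the paper's overlap-save blocking, feature ordering, and decision rule are omitted; all names here are hypothetical):

```python
import numpy as np

def normalized_xcorr(signal, query):
    """Slide `query` over `signal` and return the normalized
    cross-correlation at each alignment, using FFT convolution for the
    numerator and cumulative sums for the per-window energy."""
    m = len(query)
    n = len(signal) - m + 1                 # number of valid alignments
    q0 = query - query.mean()               # zero-mean template
    q_norm = np.linalg.norm(q0)

    # Cross-correlation = convolution with the time-reversed template.
    L = len(signal) + m - 1
    corr = np.fft.irfft(np.fft.rfft(signal, L) * np.fft.rfft(q0[::-1], L), L)
    corr = corr[m - 1 : m - 1 + n]

    # Windowed mean/energy of the signal via cumulative sums (O(N)).
    csum = np.concatenate(([0.0], np.cumsum(signal)))
    csum2 = np.concatenate(([0.0], np.cumsum(signal ** 2)))
    win_sum = csum[m:] - csum[:-m]
    win_sum2 = csum2[m:] - csum2[:-m]
    win_var = win_sum2 - win_sum ** 2 / m   # un-normalized window variance

    denom = np.sqrt(np.maximum(win_var, 1e-12)) * q_norm
    return corr / denom
```

Because the template is made zero-mean, the FFT product gives exactly the NCC numerator, so a perfect embedded match scores 1.0 regardless of the recording's local gain.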
Spatial resolution and grayscale resolution are two image parameters that determine image quality. In this study we investigate the trade-off between spatial resolution and grayscale in terms of the discriminability of steps, measured in bits, away from a standard image. A CRT display was used to simulate black-and-white images with a square-pixel geometry. Natural images and a test pattern consisting of a radially symmetric spatial frequency chirp of increasing radial frequency (called a zone plate) were studied. Multiple versions of each image were produced by varying the simulated pixel size and the number of gray levels and by filtering. Discrimination thresholds for pixel size and number of gray levels were measured for several locations in the parameter space of spatial resolution and grayscale resolution for each image. Unfiltered, low-contrast, Nyquist-filtered, and Gaussian-filtered versions of the images were studied. Resolution levels were always integer divisors of the CRT display resolution, produced by subsampling and pixel-replication. Gray levels were steps that were linear in luminance and that spanned the entire CRT luminance range. Discrimination thresholds were measured using a three-alternative forced-choice one-up-two-down double-random-staircase procedure. Simulation device limitations caused some measurements to be less precise than was desired.
The DE-1 satellite has gathered over 500,000 images of the Earth's aurora. This data set provides a realistic testbed for developing algorithms for scientific image databases. Scientists studying the aurora currently need to browse through large numbers of images to find events suitable for further scientific studies. They select or reject an image based on a variety of visual cues, including shape, size, and intensity. This paper describes a system currently under development for selecting interesting events based on image content. We use boundaries from the images to outline the aurora, and then to extract features that relate to shape, size, and intensity. These features are then input into a supervised decision tree classifier. The system retrieves images of potential interest to the user. The user makes the final decision regarding the use of the images retrieved. The algorithm is applied to hundreds of DE-1 satellite images to find `quiet' versus `active' auroras, after being initially trained by the user. The system's advantage over neural networks is that the scientists may inspect the event selection process by studying the decision tree generated.
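The inspectability advantage can be illustrated with a minimal hand-rolled split of the kind decision trees are built from: a single stump chosen by Gini impurity over candidate thresholds. This is a generic sketch, not the paper's classifier; the feature layout (shape, size, intensity columns) is an assumption:

```python
import numpy as np

def best_stump(X, y):
    """Fit a one-level decision tree (a stump) by exhaustive search over
    per-feature thresholds, minimizing weighted Gini impurity. Returns
    (feature_index, threshold) -- a rule a scientist can read directly."""
    def gini(labels):
        if len(labels) == 0:
            return 0.0
        p = np.bincount(labels) / len(labels)
        return 1.0 - np.sum(p ** 2)

    best = (None, None, np.inf)
    for j in range(X.shape[1]):                 # each candidate feature
        for t in np.unique(X[:, j]):            # each candidate threshold
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (j, t, score)
    return best[0], best[1]
```

On a toy set where only the intensity column separates `quiet' from `active' examples, the stump recovers exactly that feature and threshold, which is the kind of transparent rule the abstract contrasts with neural networks.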
This paper describes a technique for interactive, computer-assisted boundary tracking from two-dimensional images. Once the boundaries of an object are known, it is possible to extract any number of features to be used for subsequent search or retrieval based on image content. The technique combines manual inputs from the user with machine inputs, generated from edge detection algorithms, for assisting in the extraction of the boundaries. This allows for the quick extraction of boundaries by combining the capabilities of the user and the computer. The user is adept at quickly locating an object of interest and at drawing a very rough outline of the object. The computer is adept at quickly making a large number of calculations that refine the rough outline generated by the user. The performance of the technique was tested using both computer simulations and real images. The performance of the technique degrades gracefully in the presence of noise.
The DE-1 satellite has gathered over 500,000 images of the Earth's aurora using ultraviolet and visible light photometers. The extraction of the boundaries delimiting the auroral oval allows the computation of important parameters for the geophysical study of the phenomenon such as total area and total integrated magnetic field. This paper describes an unsupervised technique that we call 'minimization-pruning' that finds the boundaries of the auroral oval. The technique is based on concepts that are relevant to a wide range of applications having characteristics similar to this application, namely images with variable background, high noise levels and missing data. Among the advantages of the new technique are the ability to find the object of interest even with intense interfering background noise, and the ability to find the outline of an object even if only a section of it is visible. The technique is based on the assumption that certain regions of the object are less obscured by the background, and hence the information provided by these regions is more important for finding the boundaries. The implementation of the technique consists of an iterative minimization-pruning algorithm, in which a fundamental part is a measure of the quality of the data for different regions along the boundary. Calculation of this measure is simplified by transforming the input image into polar coordinates. The technique has been applied to a set of more than 100 images of the aurora with good results. We also show examples of extraction of the inner and outer boundaries starting from the elliptical approximation and analyzing the image locally around that solution.
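The polar-coordinate transformation that simplifies the per-region quality measure can be sketched generically; this nearest-neighbour resampler around an assumed oval center is an illustration, not the paper's implementation:

```python
import numpy as np

def to_polar(img, center, n_r, n_theta):
    """Resample `img` onto a polar (r, theta) grid about `center` using
    nearest-neighbour lookup; rows index radius, columns index angle.
    Out-of-frame samples are filled with zero (treated as missing data)."""
    h, w = img.shape
    cy, cx = center
    r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)   # stay inside the frame
    r = np.linspace(0, r_max, n_r)[:, None]
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)[None, :]
    yy = np.rint(cy + r * np.sin(theta)).astype(int)
    xx = np.rint(cx + r * np.cos(theta)).astype(int)
    valid = (yy >= 0) & (yy < h) & (xx >= 0) & (xx < w)
    out = np.zeros((n_r, n_theta))
    out[valid] = img[yy[valid], xx[valid]]
    return out
```

In this representation a roughly circular boundary becomes a nearly horizontal ridge, so per-angle data-quality measures reduce to operations along rows and columns.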
The stability of active contour models, or 'snakes', is studied. It is shown that modifying the snake parameters using adaptive systems improves both the stability of the snakes and the boundaries obtained. The adaptive snakes perform better with images of varying contrast, noisy images, and images with different curvatures along the boundaries. The computational cost at each iteration for the adaptive snakes remains of order O(N), where N is the number of points on the snake. Comparisons of the results for non-adaptive and adaptive snakes are shown using both computer simulations and satellite images.
The DE-1 satellite has gathered over 500,000 images of the Earth's aurora. Finding the location and shape of
the boundaries of the oval is of interest to geophysicists, but manual extraction of the boundaries is extremely
time-consuming. This paper describes a computer vision system that automatically provides an estimate of the inner
auroral boundary for winter hemisphere scenes. The system performs automatic checks of its boundary estimate.
If the boundary estimate is deemed inconsistent, the system does not output it. The performance of this system is
evaluated using 44 DE-1 images. The system provides boundary estimates for 37 of the inputs. Of these 37 estimates,
31 are consistent with the corresponding manual estimates. At this level of performance, the supervised use of the
system provides more than one order of magnitude increase in throughput compared to manual extraction of the
boundaries.