The George Eastman House International Museum of Photography Conservation Laboratory and the University of Rochester
Department of Computer Science are researching image analysis techniques to distinguish daguerreotype plate and image
features from deterioration, contaminant particulates, and optical imaging errors introduced by a high-resolution
photomicrography system. The images are captured at up to 30x magnification and composited; they record not only the
plate image but also the ravages of age and the reactivity of the highly polished surface, which obscure and reduce the
readability of the image. The University of Rochester
computer scientists have developed and applied novel techniques for the seamless correction of a variety of problems. The
final output is threefold: an analysis of regular artifacting resulting from imaging conditions and equipment; a fast automatic
identification of problem areas in the original artifact; and an approximate digital restoration. In addition to the
discussion of novel classification and restorative methods for digital daguerreotype restoration, this work highlights the
effective use of large-scale parallelism for restoration (made available through the University of Rochester Center for Research
Computing). This paper will show the application of analytical techniques to the Cincinnati Waterfront Panorama
Daguerreotype, with the intent of making the results publicly available through high-resolution web image navigation.
We propose a quality-aware computational optimization of inpainting based upon the intelligent application of a battery of
inpainting methods. By leveraging the Decision-Action-Reward Network (DARN) formalism and a bottom-up model of
human visual attention, methods are selected for optimal local use via an adjustable quality-time tradeoff and (empirical)
training statistics aimed at minimizing observer foveal attention to inpainted regions. Results are shown for object removal
in high-resolution consumer video, including a comparison of output quality and efficiency with homogeneous inpainting.
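The adjustable quality-time tradeoff described above can be sketched as a simple per-region selection rule. This is a minimal illustration, not the DARN formalism itself: the method names, predicted-quality scores, and runtime costs below are invented for the example.

```python
# Hypothetical sketch of an adjustable quality-time tradeoff for choosing
# an inpainting method per region. Quality estimates and costs are made up;
# in the described system they would come from empirical training statistics.

def select_method(quality, cost, lam):
    """Pick the method maximizing predicted quality minus lam * runtime cost.

    quality, cost: dicts mapping method name -> estimated score / seconds.
    lam: the tradeoff knob; larger lam favors faster methods.
    """
    return max(quality, key=lambda m: quality[m] - lam * cost[m])

# Illustrative numbers only (not from the paper).
quality = {"diffusion": 0.55, "exemplar": 0.80, "patch_match": 0.75}
cost    = {"diffusion": 0.2,  "exemplar": 9.0,  "patch_match": 2.5}

print(select_method(quality, cost, lam=0.0))  # quality only -> "exemplar"
print(select_method(quality, cost, lam=0.1))  # time-sensitive -> "diffusion"
```

Sweeping `lam` from zero upward moves the system smoothly from best-quality to fastest-output behavior, which is the sense in which the tradeoff is adjustable.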
We propose a means of objectively comparing the results of digital image inpainting algorithms by analyzing changes in predicted human attention prior to and following application. Artifacting is generalized into two categories, in-region and out-region, depending on whether attention changes occur primarily within the edited region or in nearby (contrasting) regions. Human qualitative scores are shown to correlate strongly with numerical scores of in-region and out-region artifacting, as is the effectiveness of training supervised classifiers of increasing complexity. Results are shown on two novel human-scored datasets.
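The in-region/out-region split above can be sketched as follows. This is a hedged illustration, assuming mean absolute saliency change as the score and treating all pixels outside the mask as "out-region" (the paper restricts attention changes to nearby contrasting regions).

```python
import numpy as np

def attention_shift_scores(sal_before, sal_after, mask):
    """Mean absolute predicted-saliency change inside vs. outside the edit.

    sal_before, sal_after: 2-D saliency maps predicted before/after inpainting.
    mask: boolean array marking the edited (inpainted) region.
    """
    diff = np.abs(sal_after - sal_before)
    return diff[mask].mean(), diff[~mask].mean()

# Toy example: a fill that draws attention into the edited region itself.
sal_before = np.zeros((8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True
sal_after = sal_before.copy()
sal_after[mask] = 0.5            # saliency rises only inside the edit

in_score, out_score = attention_shift_scores(sal_before, sal_after, mask)
# in_score = 0.5, out_score = 0.0: purely in-region artifacting
```

A large in-region score flags a conspicuous fill; a large out-region score flags an edit that redistributed attention to surrounding content.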
We propose a method for efficiently determining qualitative depth maps for multiple monoscopic videos of the same scene
without explicitly solving for stereo or calibrating any of the cameras involved. By tracking a small number of feature points
and determining trajectory correspondence, it is possible to determine correct temporal alignment as well as establish a
similarity metric for fundamental matrices relating each trajectory. Modeling the matrix relations as a weighted digraph
and performing Markov clustering yields emergent depth layers for the feature points. Finally, pixels
are segmented into depth layers based upon motion similarity to feature point trajectories. Initial experimental results are
demonstrated on stereo benchmark and consumer data.
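The Markov clustering step can be sketched on a toy trajectory-affinity graph. This is a minimal MCL illustration under stated assumptions: the affinity values are invented, the toy graph is symmetric (the paper uses a weighted digraph of fundamental-matrix relations), self-loops are added as is standard for MCL, and each node is labeled by its column's attractor.

```python
import numpy as np

def mcl(A, inflation=2.0, iters=30):
    """Basic Markov clustering: alternate expansion and inflation."""
    M = A / A.sum(axis=0, keepdims=True)   # column-stochastic transitions
    for _ in range(iters):
        M = M @ M                          # expansion: spread flow
        M = np.power(M, inflation)         # inflation: sharpen strong flows
        M = M / M.sum(axis=0, keepdims=True)
    return M

# Toy affinities: trajectories 0-2 move together, 3-4 move together,
# with only weak cross-links (values are illustrative, not from the paper).
A = np.array([
    [1.00, 1.00, 1.00, 0.05, 0.05],
    [1.00, 1.00, 1.00, 0.05, 0.05],
    [1.00, 1.00, 1.00, 0.05, 0.05],
    [0.05, 0.05, 0.05, 1.00, 1.00],
    [0.05, 0.05, 0.05, 1.00, 1.00],
])
M = mcl(A)
labels = M.argmax(axis=0)   # attractor index per trajectory = depth layer id
```

Trajectories sharing an attractor fall into the same emergent cluster, playing the role of a depth layer for the tracked feature points.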
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon
qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we
propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results.
The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual
attention. We demonstrate that these two metrics repeatably correlate with qualitative opinion in a human observer
study, correctly identify the optimal uses for exemplar-based inpainting (as specified in the original publication), and
match qualitative opinion in published examples.
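One plausible reading of the two notions above can be sketched with saliency maps; the formulas here are illustrative stand-ins, not the paper's exact metric definitions: noticeability is taken as the predicted salience remaining inside the filled region, and visual intent as how well salience outside the region is preserved.

```python
import numpy as np

def noticeability(sal_after, mask):
    """Mean predicted salience inside the inpainted region after editing;
    a convincing fill should not draw the eye, so lower is better."""
    return sal_after[mask].mean()

def visual_intent(sal_before, sal_after, mask):
    """Correlation of predicted salience outside the region before vs. after;
    a good result preserves where the rest of the image directs attention."""
    return np.corrcoef(sal_before[~mask], sal_after[~mask])[0, 1]

# Toy example: an ideal fill leaves no salience in the hole and does not
# disturb attention elsewhere.
rng = np.random.default_rng(0)
sal_before = rng.random((16, 16))
mask = np.zeros((16, 16), dtype=bool)
mask[4:8, 4:8] = True
sal_after = sal_before.copy()
sal_after[mask] = 0.0
```

Under these stand-in definitions the ideal fill scores zero noticeability and near-perfect visual intent, matching the intuition the abstract describes.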