In recent years, research has focused on reconstructing three-dimensional point-cloud models from unordered and uncalibrated sets of images. Most proposed solutions rely on the structure-from-motion algorithm, and their performance degrades significantly whenever exchangeable image file format (EXIF) information about focal lengths is missing or corrupted. We propose a preprocessing strategy that allows the focal lengths of a camera to be estimated more accurately. The basic idea is to cluster the input images into separate subsets according to an array of interpolation-related multimedia forensic clues. This yields a more robust focal-length estimate and improves the accuracy of the final model.
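The clustering-then-estimation idea can be sketched in a few lines. The following is a minimal illustration, not the paper's pipeline: the forensic feature vectors and per-image focal guesses are hypothetical placeholders, plain k-means stands in for whatever grouping the authors use, and a median stands in for their robust per-cluster estimator.

```python
# Minimal sketch (hypothetical data): group images by interpolation-
# related forensic features, then estimate one focal length per group
# with a robust statistic (median) instead of trusting each image's
# possibly corrupted EXIF value in isolation.
import numpy as np

def cluster_images(features, k=2, iters=50, seed=0):
    """Plain k-means on per-image forensic feature vectors."""
    rng = np.random.default_rng(seed)
    X = np.asarray(features, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def focal_per_cluster(labels, focal_guesses):
    """Robust (median) focal-length estimate within each cluster."""
    f = np.asarray(focal_guesses, dtype=float)
    return {int(j): float(np.median(f[labels == j])) for j in np.unique(labels)}

# Toy data: two cameras with distinct interpolation signatures.
feats = [[0.1, 0.2], [0.12, 0.19], [0.9, 0.8], [0.88, 0.82]]
focals = [1000.0, 1020.0, 1500.0, 1480.0]
labels = cluster_images(feats, k=2)
print(focal_per_cluster(labels, focals))
```

Grouping first means a single outlier EXIF entry is outvoted by the other images taken with the same camera, which is what makes the downstream structure-from-motion estimate more stable.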
Recently, a new class of distributed source coding (DSC) based video coders has been proposed to enable low-complexity
encoding. However, to date, these low-complexity DSC-based video encoders have been unable to
compress as efficiently as video codecs based on motion-compensated predictive coding, such as H.264/AVC, due to
insufficiently accurate modeling of video data. In this work, we examine achieving H.264-like high compression
efficiency with a DSC-based approach without the encoding complexity constraint. The success of H.264/AVC
highlights the importance of accurately modeling the highly non-stationary video data through fine-granularity
motion estimation. This motivates us to deviate from the popular strategy of approaching the Wyner-Ziv
bound with sophisticated capacity-achieving channel codes that require long block lengths and high decoding
complexity, and instead focus on accurately modeling video data. Such a DSC-based, compression-centric encoder
is an important first step towards building a robust DSC-based video coding framework.
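The distributed-source-coding principle the framework builds on can be illustrated with a textbook coset code; this toy example is not the paper's codec, and the correlation model (side information within distance 1 of the source) is an assumption chosen for the sketch. The encoder compresses a 3-bit value to a 2-bit syndrome without ever seeing the side information, and the decoder recovers it exactly using that side information.

```python
# Toy coset-code illustration of distributed source coding (not the
# paper's method): the encoder sends only the coset index of x; the
# decoder disambiguates using correlated side information y.
def dsc_encode(x):
    """Send only the coset index (2 bits) of a 3-bit source value."""
    return x % 4

def dsc_decode(syndrome, y):
    """Pick the coset member closest to the side information y,
    assuming the correlation guarantee |x - y| <= 1."""
    return min((syndrome, syndrome + 4), key=lambda c: abs(c - y))

for x in range(8):
    for y in (x - 1, x, x + 1):        # correlated side information
        assert dsc_decode(dsc_encode(x), y) == x
print("all 3-bit values recovered from 2-bit syndromes")
```

Decoding succeeds because the two coset members are 4 apart while the side information is at most 1 away from the true value; in video terms, the side information plays the role of a motion-compensated prediction at the decoder.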