Persons captured in real-life scenarios generally appear at non-uniform scales. However, most widely acknowledged person re-identification (Re-ID) methods emphasize matching normal-scale, high-resolution person images. To address this problem, ideas from existing image reconstruction techniques are incorporated, which are expected to contribute to recovering accurate appearance information for low-resolution person Re-ID. Specifically, this paper proposes a joint deep learning approach for Scale-Adaptive person Super-Resolution and Re-identification (SASR<sup>2</sup>). To the best of our knowledge, this is the first time that scale-adaptive learning has been jointly implemented for super-resolution and re-identification without any extra post-processing. With the super-resolution module, high-resolution appearance information can be automatically reconstructed from low-resolution person images of various scales, bringing a direct benefit to the subsequent Re-ID thanks to the joint learning nature of the proposed approach. It is worth noting that SASR<sup>2</sup> is not only simple but also flexible, since it is adaptable to person Re-ID on both multi-scale LR and normal-scale HR datasets. Extensive experimental analysis demonstrates that SASR<sup>2</sup> achieves competitive performance compared with previous low-resolution Re-ID methods, especially on the realistic CAVIAR dataset.
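As a rough illustration of the joint learning idea in this abstract (not the paper's actual implementation), the overall training objective can be sketched as a weighted sum of a pixel-wise super-resolution reconstruction loss and an identity-classification loss; the function names and the weighting parameter `lam` below are hypothetical.

```python
import numpy as np

def sr_loss(sr_img, hr_img):
    # Pixel-wise L2 reconstruction loss for the super-resolution module.
    return np.mean((sr_img - hr_img) ** 2)

def reid_loss(logits, label):
    # Cross-entropy identity-classification loss (softmax over identities),
    # computed with the usual max-shift for numerical stability.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -log_probs[label]

def joint_loss(sr_img, hr_img, logits, label, lam=1.0):
    # Joint objective: SR reconstruction + lam * Re-ID classification,
    # so gradients from the Re-ID branch also shape the SR module.
    return sr_loss(sr_img, hr_img) + lam * reid_loss(logits, label)
```

The key design point sketched here is that both terms are minimized jointly, so the super-resolution output is optimized for re-identification rather than for pixel fidelity alone.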
This work addresses the problem of single image dehazing, particularly towards the goal of better visibility restoration. Although extensive studies have been performed, almost all of them are heavily built on the atmospheric scattering model. Worse still, they usually fail to restore the visibility of densely hazy images convincingly. Inspired by the potential of deep learning, a new end-to-end approach is presented to restore a clear image directly from a hazy image, with an emphasis on real-world weather conditions. Specifically, an Encoder-Decoder is exploited as a generator for restoring the dehazed image in an attempt to preserve more image details. Interestingly, it is further found that the performance of the Encoder-Decoder can be largely boosted via the dual principles of discriminativeness advocated in this paper. On the one hand, the dark channel is re-explored in our framework, resulting in a discriminative prior formulated specifically for the dehazing problem. On the other hand, a critic is incorporated for adversarial training against the autoencoding-based generator, implemented via the Wasserstein GAN (generative adversarial network) regularized by the Lipschitz penalty. The proposed approach is trained on a synthetic dataset of hazy images and evaluated on both synthetic and real hazy images. Objective evaluations show that the proposed approach performs competitively with state-of-the-art approaches, and outperforms them in terms of visibility restoration, especially in scenarios of dense haze.
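The dark channel prior re-explored above has a standard definition: the per-pixel minimum over the RGB channels followed by a minimum filter over a local patch. A minimal sketch of that computation (a plain-numpy illustration, not the paper's network-based prior) might look like:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1]:
    per-pixel minimum over channels, then a min-filter over a local
    patch (implemented here with edge padding and an explicit loop)."""
    min_c = img.min(axis=2)                  # min over the RGB channels
    pad = patch // 2
    padded = np.pad(min_c, pad, mode='edge')
    h, w = min_c.shape
    out = np.empty_like(min_c)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

For haze-free outdoor images the dark channel tends toward zero, while haze lifts it; this contrast is what makes it usable as a discriminative prior for dehazing.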
Proc. SPIE. 11069, Tenth International Conference on Graphics and Image Processing (ICGIP 2018)
KEYWORDS: Super resolution, Visualization, Image segmentation, Matrices, Image processing, Image quality, Telecommunications, Algorithm development, Communication engineering, Current controlled current source
This paper proposes a new variational model for deblurring low-resolution images, a.k.a. single image nonparametric blind super-resolution. Specifically, a new type of adaptive heavy-tailed image prior is presented, incorporating both model discriminativeness and the effectiveness of salient edge pursuit for accurate and reliable blur kernel estimation. With the assistance of appropriate non-blind super-resolution approaches, nonparametric blind super-resolution can be cast as a regularized functional minimization problem. An efficient numerical algorithm is derived by harnessing the alternating direction method of multipliers as well as the conjugate gradient method, with which alternating iterative estimations of the kernel and the image are implemented in a multi-scale manner. Numerous experiments are conducted, with comparisons made between the proposed approach and two recent state-of-the-art ones, demonstrating that the proposed approach better deals with low-resolution images blurred by various possible kernels, e.g., Gaussian-shaped kernels of varying sizes, ellipse-shaped kernels of varying orientations, and curvilinear kernels of varying trajectories.
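The conjugate gradient method mentioned above is typically used to solve the symmetric positive-definite linear systems that arise inside each ADMM sub-problem (e.g., the quadratic kernel- or image-update step). A generic, self-contained sketch of that inner solver, under the assumption that the sub-problem has been reduced to `A x = b` with SPD `A`, is:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=200):
    """Solve A x = b for symmetric positive-definite A.
    This is the textbook CG iteration, usable as the inner solver
    of a quadratic ADMM sub-problem."""
    x = np.zeros_like(b) if x0 is None else x0.astype(float).copy()
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In practice the matrix-vector products `A @ p` would be realized with convolutions rather than an explicit matrix, but the iteration itself is unchanged.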
It is known that the actual performance of most previous face hallucination approaches drops dramatically when a very low-resolution tiny face is provided. Inspired by the latest progress in deep unsupervised learning, this paper works on tiny faces of size 16×16 pixels and magnifies them into their 8× upsampled counterparts by exploiting boundary equilibrium generative adversarial networks (BEGAN). Besides imposing a pixel-wise <i>L</i>2 regularization term on the generative model, it is found that our targeted auto-encoding generator with residual blocks and skip connections is a key component for BEGAN to achieve state-of-the-art hallucination performance. The cropped CelebA face dataset is used in our preliminary experiments. The results demonstrate that the proposed approach not only converges quickly and stably, but is also robust to variations in pose, expression, illumination, and occlusion.
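The "boundary equilibrium" in BEGAN refers to a proportional-control balance between the discriminator's reconstruction losses on real and generated samples. A minimal sketch of that balancing rule (following the published BEGAN formulation; the parameter defaults here are illustrative, not the paper's settings) is:

```python
def began_k_update(k, loss_real, loss_fake, gamma=0.5, lam=1e-3):
    """Proportional-control update of BEGAN's balance variable k_t:
    k_{t+1} = clip(k_t + lam * (gamma * L(x) - L(G(z))), 0, 1),
    where gamma is the target diversity ratio L(G(z)) / L(x)."""
    k = k + lam * (gamma * loss_real - loss_fake)
    return min(max(k, 0.0), 1.0)

def began_discriminator_loss(loss_real, loss_fake, k):
    # Discriminator objective: L_D = L(x) - k_t * L(G(z)).
    return loss_real - k * loss_fake
```

Because `k` adapts each step, neither the generator nor the discriminator can dominate training, which is what gives BEGAN its characteristically stable convergence.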