The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. Because the thermal radiation of coded masks is considerably more severe than in visible imaging systems, burying the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of a mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with coded masks is presented in this paper. The model is composed of two functional components: coded mask imaging with ideal focused lenses, and imperfect imaging with practical lenses. Ignoring the thermal radiation, the system PSF can then be represented by the convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF across the field of view. The influence of the mask cell size is also analyzed to control the diffraction pattern. Numerical results show that mask patterns for direct imaging systems should have more random structures, while more periodic structures are needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangements, the desired diffraction pattern can be achieved.
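The system model above (system PSF = mask diffraction pattern convolved with the practical-lens PSF) can be sketched numerically. This is a minimal illustration, not the authors' implementation: the random binary mask, the zero-padding size, and the Gaussian stand-in for the lens PSF (with an assumed sigma) are all hypothetical choices.

```python
import numpy as np

def convolve_full(a, b):
    """2D linear convolution via zero-padded FFTs (full output)."""
    s = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    return np.real(np.fft.ifft2(np.fft.fft2(a, s=s) * np.fft.fft2(b, s=s)))

rng = np.random.default_rng(0)
mask = rng.integers(0, 2, size=(32, 32)).astype(float)  # hypothetical random binary coded mask

# Fraunhofer (far-field) diffraction pattern of the mask: squared magnitude
# of its zero-padded Fourier transform, normalized to unit energy
diffraction = np.abs(np.fft.fftshift(np.fft.fft2(mask, s=(64, 64)))) ** 2
diffraction /= diffraction.sum()

# Gaussian stand-in for the practical-lens PSF (sigma = 2.0 is an assumed value)
x = np.arange(-8, 9)
g = np.exp(-x**2 / (2 * 2.0**2))
lens_psf = np.outer(g, g)
lens_psf /= lens_psf.sum()

# System PSF = mask diffraction pattern convolved with the lens PSF
system_psf = convolve_full(diffraction, lens_psf)
```

Because both factors are normalized, the resulting system PSF also integrates to one, which makes it directly usable as a blur kernel in simulation.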
Accurate point spread function (PSF) estimation for coded aperture cameras is key to deblurring defocused images. There are mainly two kinds of approaches to PSF estimation: blind-deconvolution-based methods, and measurement-based methods using point light sources. Neither kind provides accurate and convenient PSFs, owing to the limits of blind deconvolution or the imperfection of point light sources. Inaccurate PSF estimation introduces pseudo-ripple and ringing artifacts that degrade image deconvolution. In addition, there are many situations in which PSF measurement is inconvenient.
This paper proposes a novel method of PSF estimation for coded aperture cameras. It is observed and verified that the spatially varying point spread functions are well modeled by the convolution of the aperture pattern with Gaussian blurring of appropriate scale and bandwidth. We use the coded aperture camera to capture a point light source and obtain a rough estimate of the PSF. The PSF estimation is then formulated as the optimization of the scale and bandwidth of the Gaussian blurring kernel to fit the coded pattern to the observed PSF. We also investigate PSF estimation at arbitrary distances from a few observed PSF kernels, which allows us to fully characterize the response of coded imaging systems with limited measurements. Experimental results show that our method accurately estimates PSF kernels, which significantly improves deblurring performance.
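The fitting step described above can be sketched as a simple grid search: convolve the known aperture pattern with Gaussian kernels of candidate bandwidths and keep the one that best matches the observed PSF in the least-squares sense. The kernel radius, candidate grid, and least-squares objective are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_kernel(sigma, radius=8):
    """Separable Gaussian kernel of the given bandwidth, normalized to sum 1."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def convolve_full(a, b):
    """2D linear convolution via zero-padded FFTs (full output)."""
    s = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    return np.real(np.fft.ifft2(np.fft.fft2(a, s=s) * np.fft.fft2(b, s=s)))

def fit_gaussian_sigma(aperture, observed_psf, sigmas):
    """Grid-search the Gaussian bandwidth whose blur of the aperture
    pattern best fits the observed PSF (least-squares error)."""
    best_sigma, best_err = None, np.inf
    for s in sigmas:
        model = convolve_full(aperture, gaussian_kernel(s))
        model /= model.sum()
        err = np.sum((model - observed_psf) ** 2)
        if err < best_err:
            best_sigma, best_err = s, err
    return best_sigma
```

In practice one would refine the grid search with a continuous optimizer over both scale and bandwidth, but the structure of the fit is the same.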
Image denoising aims to recover a digital image from its noisy version by exploiting the statistical features of the given noisy image. Most denoising methods perform well at low noise levels but lose efficiency at higher ones. In this paper, we propose a novel image denoising method that restores an image by exploiting the correlations between the noisy image and images retrieved from the cloud. Given a noisy image, we first retrieve relevant images based on feature-level similarity. These images are then geometrically aligned to the noisy image to increase global statistical correlation. Using the aligned images as references, we propose recovering the image with patch-level noise removal. For each noisy patch, we first retrieve similar patches from the references and stack these patches (including the noisy one) into a three-dimensional (3D) group. We then obtain noise-free (NF) patches by collaborative filtering over the 3D groups. These recovered NF patches are aggregated to produce the desired NF image. Experimental results demonstrate that our scheme achieves significantly better results than state-of-the-art methods in terms of both objective and subjective quality.
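The per-patch pipeline (retrieve similar reference patches, stack into a 3D group, filter collaboratively) can be sketched as follows. This is a toy version under stated assumptions: the search uses a coarse stride for brevity, and a plain average across the group stands in for the transform-domain collaborative filter the abstract refers to.

```python
import numpy as np

def denoise_patch(noisy_patch, reference, patch_size=8, k=8):
    """Retrieve the k reference patches most similar to the noisy patch,
    stack them with it into a 3D group, and filter by averaging across
    the group (a simple stand-in for transform-domain collaborative
    filtering)."""
    h, w = reference.shape
    candidates = []
    for i in range(0, h - patch_size + 1, 4):   # coarse stride to limit the search
        for j in range(0, w - patch_size + 1, 4):
            p = reference[i:i + patch_size, j:j + patch_size]
            candidates.append((np.sum((p - noisy_patch) ** 2), p))
    candidates.sort(key=lambda t: t[0])          # most similar patches first
    group = np.stack([noisy_patch] + [p for _, p in candidates[:k]])  # 3D group
    return group.mean(axis=0)                    # noise-free (NF) patch estimate
```

A full scheme would run this over overlapping patches and aggregate the NF estimates with weights, as the abstract describes for producing the final image.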
Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, because of their powerful sensing and computing capability. In this paper, we describe a novel and accurate image- and video-based remote target localization and tracking system on Android smartphones that leverages their built-in sensors, such as the camera, digital compass, and GPS. Even though many other distance estimation and localization devices are available, our all-in-one, easy-to-use localization and tracking system on low-cost, commodity smartphones is the first of its kind. Furthermore, our system takes advantage of smartphones' user-friendly interface to achieve low complexity and high accuracy. Our experimental results show that our system works accurately and efficiently.