We propose a video stabilization algorithm based on the rotation of a virtual sphere. Unlike traditional video stabilization algorithms that rely on two-dimensional motion models or on reconstructing three-dimensional (3D) camera motion, the proposed virtual sphere model stabilizes video by projecting each frame onto the sphere and performing corrective rotations. Specifically, matched feature points between two adjacent frames are first projected onto two virtual spheres to obtain pairs of spherical points. Then, the rotation matrix between the previous and current frames is calculated. The resulting sequence of 3D rotation matrices represents the camera motion and is smoothed using the geodesic distance on a Riemannian manifold. Finally, the difference between the smoothed and original paths yields the rotation matrix responsible for camera jitter, and the virtual spheres are rotated in the opposite direction to suppress it. Experimental results show that the proposed algorithm effectively reduces random jitter, outperforming state-of-the-art methods.
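Estimating the rotation between two sets of matched points on a unit sphere is an orthogonal Procrustes problem with a standard SVD (Kabsch) solution. Below is a minimal NumPy sketch under that assumption; the function names are illustrative, and the paper's Riemannian-manifold smoothing step is not shown:

```python
import numpy as np

def unit(v):
    """Normalize row vectors onto the unit sphere."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def rotation_between(P, Q):
    """Least-squares rotation R such that R @ P[i] ~ Q[i] (Kabsch/SVD).

    P, Q: (n, 3) arrays of matched unit vectors on the two virtual spheres.
    """
    H = P.T @ Q                            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

# Usage: recover a known rotation from matched spherical points.
rng = np.random.default_rng(0)
P = unit(rng.normal(size=(20, 3)))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T
R_est = rotation_between(P, Q)
```

In a full pipeline, applying this per frame pair yields the rotation sequence that the smoothing stage then operates on.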
Single-image super-resolution (SISR) refers to reconstructing a high-resolution image from a given low-resolution observation. Recently, convolutional neural network (CNN)-based SISR methods have achieved remarkable results in terms of peak signal-to-noise ratio and structural similarity measures. However, these methods are optimized with pixel-wise loss functions, which results in blurry images. The generative adversarial network (GAN), by contrast, can generate visually plausible solutions, and GAN-based SISR methods obtain perceptually better SR results than existing CNN-based methods. However, existing GAN-based SISR methods require a large number of training parameters to obtain good SR performance, which makes them unsuitable for many real-world applications. We propose a computationally efficient enhanced progressive approach to the SISR task using a GAN, which we refer to as E-ProSRGAN. In the proposed method, we introduce a novel residual block design, called the enhanced parallel densely connected residual network, which yields better SR performance with fewer training parameters. The quantitative performance of the proposed E-ProSRNet (i.e., the generator network of E-ProSRGAN) at the higher upscaling factors ×3 and ×4 is better on most datasets than that of CNN-based methods with fewer than 7 M trainable parameters. For upscaling factor ×2, E-ProSRNet obtains the second-highest structural similarity index measure values on the Set5 and BSD100 datasets. The proposed E-ProSRGAN model generates SR samples with better high-frequency detail and perceptual measures than other existing GAN-based SISR methods, with a significant reduction in the number of training parameters at larger upscaling factors.
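For reference, the peak signal-to-noise ratio mentioned above, the metric that pixel-wise losses effectively optimize, can be sketched as follows. This is a generic NumPy implementation of the standard definition, not code from the paper:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image.

    PSNR = 10 * log10(max_val^2 / MSE); higher is better, but it rewards
    pixel-wise fidelity rather than perceptual sharpness.
    """
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Because PSNR averages squared pixel errors, a blurry prediction close to the mean of plausible sharp images can score well, which is precisely the failure mode that motivates adversarial losses.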
The single-pixel camera was developed to mitigate the constraints faced by conventional cameras, especially at invisible wavelengths and in low-light conditions. The Nyquist–Shannon theorem requires as many measurements as image pixels to reconstruct images flawlessly. In practice, obtaining more measurements increases cost and acquisition time, which are the major drawbacks of single-pixel imaging (SPI). Compressive sensing was therefore proposed to enable image reconstruction with fewer measurements. We present a design of sensing patterns that obtains image information using a spatially variant resolution (SVR) technique in SPI. The proposed method reduces the number of measurements by prioritizing resolution in the region of interest (ROI). It achieves a programmable imaging concept in which multiresolution sensing adaptively balances image quality against the number of measurements. Results show that SVR images can be reconstructed from significantly fewer measurements yet achieve better image quality than uniform-resolution images. In addition, SVR images can be further enhanced by integrating a dynamic supersampling technique. Consequently, the concerns of image quality and long acquisition and processing times can be addressed. The proposed method potentially benefits imaging applications in which the target ROI is prioritized over the background and, most importantly, requires fewer measurements.
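The resolution-versus-measurements trade-off can be modeled with a simple sketch: keep the ROI at full resolution and represent the background by block averages, so one measurement covers a whole background block. This is only an illustrative NumPy model of spatially variant resolution, not the paper's actual sensing-pattern design:

```python
import numpy as np

def svr_image(img, roi, block=4):
    """Full resolution inside `roi` = (y0, y1, x0, x1); block-averaged elsewhere.

    Returns the SVR image and the effective number of measurements
    (one per ROI pixel plus one per background block).
    """
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    # Coarse pass: every block is replaced by its mean (one measurement each).
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    # Fine pass: the ROI is measured pixel by pixel.
    y0, y1, x0, x1 = roi
    out[y0:y1, x0:x1] = img[y0:y1, x0:x1]
    n_meas = (y1 - y0) * (x1 - x0) + (h // block) * (w // block)
    return out, n_meas
```

Enlarging `block` shrinks the measurement budget at the cost of background detail, which is the knob the programmable-imaging concept exposes.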
Haze scatters light transmitted through the air and reduces image quality, which greatly decreases the interpretability and intelligibility of an image. To solve these problems, we propose an improved real-time image dehazing algorithm based on the dark channel prior and fast weighted guided filtering. First, the image is divided into dark and bright areas by the K-means clustering algorithm, and the atmospheric light value is calculated according to the proportion of the bright area in the whole image. Second, the fast weighted guided filtering algorithm is employed to generate a refined transmission map, which removes halo artifacts from around sharp edges. Finally, gamma correction and automatic contrast enhancement algorithms are used to adjust the brightness and contrast of the dehazed image. Experimental results demonstrate that the proposed method can effectively remove halo artifacts, reduce color deviation, and retain more detail in the images.
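As context for the dark channel prior step, a minimal NumPy sketch of the standard dehazing pipeline is shown below, assuming a known atmospheric light `A` and using a plain box minimum filter; the paper's K-means atmospheric-light estimation, fast weighted guided filter refinement, and contrast adjustments are omitted:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def dehaze(img, A, omega=0.95, t0=0.1):
    """Recover scene radiance J from the haze model I = J*t + A*(1 - t)."""
    t = 1.0 - omega * dark_channel(img / A)  # coarse transmission estimate
    t = np.maximum(t, t0)                    # floor t to avoid division blow-up
    return (img - A) / t[..., None] + A
```

The coarse, blocky transmission map from the minimum filter is exactly what causes the halo artifacts around sharp edges that the paper's guided-filter refinement is designed to remove.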