Hyperspectral imaging has a wide range of applications relying on remote material identification, including astronomy, mineralogy, and agriculture; however, due to the large volume of data involved, the complexity and cost of hyperspectral imagers can be prohibitive. The exploitation of redundancies along the spatial and spectral dimensions of a hyperspectral image of a scene has created new paradigms that overcome the limitations of traditional imaging systems. While compressive sensing (CS) approaches have been proposed and simulated with success on already acquired hyperspectral imagery, most of the existing work relies on the capability to simultaneously measure the spatial and spectral dimensions of the hyperspectral cube. Most real-life devices, however, are limited to sampling one or two dimensions at a time, which renders a significant portion of the existing work unfeasible. We propose a new variant of the recently proposed serial hybrid vectorial and tensorial compressive sensing (HCS-S) algorithm that, like its predecessor, is compatible with real-life devices both in terms of the acquisition and reconstruction requirements. The newly introduced approach is parallelizable, and we abbreviate it as HCS-P. Together, HCS-S and HCS-P comprise a generalized framework for hybrid tenso-vectorial compressive sensing, or HCS for short. We perform a detailed analysis that demonstrates the uniqueness of the signal reconstructed by both the original HCS-S and the proposed HCS-P algorithms. Last, we analyze the behavior of the HCS reconstruction algorithms in the presence of measurement noise, both theoretically and experimentally.
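The constraint that real devices sample one or two dimensions at a time can be illustrated with a mode-wise (tensorial) measurement. The following is a minimal numpy sketch, assuming a synthetic hyperspectral cube and a random Gaussian spectral sensing matrix; both are illustrative stand-ins, not the actual HCS measurement operator:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, L = 32, 32, 16          # spatial height/width, number of spectral bands
m_spec = 8                    # compressed number of spectral measurements

X = rng.standard_normal((H, W, L))   # stand-in hyperspectral cube

# A device limited to sampling the spectral dimension applies a single
# measurement matrix along mode 2; the spatial modes pass through untouched.
Phi = rng.standard_normal((m_spec, L)) / np.sqrt(L)
Y = np.tensordot(X, Phi, axes=([2], [1]))   # compressed cube, shape (H, W, m_spec)
print(Y.shape)
```

A second matrix applied along a spatial mode would give a fully separable measurement; the point of the sketch is only that each matrix acts on one dimension at a time.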
Background: In binocular stereoscopic displays, stereoscopic distortions due to viewer motion, such as depth distortion, shear distortion, and rotation distortion, lead to misperception of the stereo content and dramatically reduce visual comfort. In the past, perceived depth distortion has been thoroughly addressed, and shear distortion has been investigated in the context of multi-view displays to accommodate motion parallax. However, the impact of rotation distortion has barely been studied, and no technique is available to address stereoscopic distortions due to general viewer motion.
Objective: To preserve an undistorted 3D perception from a fixed viewpoint irrespective of viewing position.
Method: We propose a unified system and method that rectifies stereoscopic distortion due to general affine viewer motion and delivers a fixed perspective of the 3D scene without distortion, irrespective of viewer motion. The system assumes the viewer's eyes are tracked and adjusts the display location of the stereo pair pixel-wise based on the tracked eye locations.
Results: For demonstration purposes, we implement our method to control perceived depth in a binocular stereoscopic display using red-cyan anaglyph 3D. The user first perceives the designed perspective of the 3D scene at the reference position, then moves to six different positions at various distances and angles relative to the screen. At all positions, users report perceiving much more consistent stereo content with the adjusted displays while experiencing improved visual comfort.
Novelty: We address stereoscopic distortions with the goal of maintaining a fixed perspective of the stereo scene, and propose a unified solution that simultaneously rectifies the stereoscopic distortions resulting from arbitrary viewer motion.
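The core geometry behind adjusting the stereo pair to the tracked eye locations can be sketched with simple ray-plane intersection: draw each eye's pixel where the ray from that eye through the desired 3D point crosses the screen plane. This is a minimal illustration of the idea, assuming hypothetical eye positions and a scene point; it is not the paper's actual rectification pipeline:

```python
import numpy as np

def screen_projection(eye, point):
    """Intersect the ray from an eye through a 3-D scene point with the
    screen plane z = 0; returns the (x, y) position at which to draw."""
    eye, point = np.asarray(eye, float), np.asarray(point, float)
    t = eye[2] / (eye[2] - point[2])      # ray parameter where z reaches 0
    return (eye + t * (point - eye))[:2]

# Hypothetical setup: eyes 6.5 cm apart, 60 cm from the screen, and a
# scene point intended to appear 10 cm behind the screen plane.
P = np.array([0.0, 0.0, -10.0])
for offset in (0.0, 15.0):                # viewer centered, then moved right
    left  = screen_projection([offset - 3.25, 0.0, 60.0], P)
    right = screen_projection([offset + 3.25, 0.0, 60.0], P)
    print(offset, left, right)
```

Re-rendering the pair this way for each tracked head position keeps the perceived point fixed in space: the on-screen disparity stays the same while both pixel locations shift with the viewer.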
It is widely believed that, in the long run, three-dimensional (3D) displays should supply stereo to multiple
viewers without requiring viewing aids and while allowing viewers to move freely. Over the last few decades,
great efforts have been made to achieve auto-stereoscopic (AS) display for multiple viewers. Spatial multiplexing
was first employed to accommodate multiple viewers simultaneously in stereoscopic planar displays; however, the
resolution of each view image decreases as the number of viewers increases. The recent development of high-speed
liquid crystal displays (LCDs), capable of operating at a 240-Hz frame rate, makes multi-viewer display via
time multiplexing feasible while improving image quality at the same time. In this paper, we propose a
display adjustment algorithm that enables high-quality auto-stereoscopic display for multiple viewers. The
proposed method relies on a spatio-temporal parallax barrier to channel the desired stereo pair to each
viewer according to their location. We subsequently conduct simulations that demonstrate the effectiveness of
the proposed method.
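The trade-off that time multiplexing introduces can be made concrete with simple arithmetic: the 240-Hz panel budget is divided across viewers, with each viewer needing a left and a right field per cycle. The round-robin scheme below is an illustrative assumption, not the paper's actual scheduling:

```python
PANEL_HZ = 240          # high-speed LCD frame rate cited in the text

def per_eye_rate(num_viewers):
    """Round-robin time multiplexing: each viewer receives one left and
    one right field per cycle, so the per-eye refresh rate divides down."""
    return PANEL_HZ / (2 * num_viewers)

for n in (1, 2, 3):
    print(n, per_eye_rate(n))   # 1 -> 120.0, 2 -> 60.0, 3 -> 40.0
```

Unlike spatial multiplexing, each viewer keeps the panel's full spatial resolution; the cost is paid in temporal rate instead.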
We present a novel method for robust indexing and retrieval of multiple motion trajectories obtained from a
multi-camera system. Motion trajectories describe the motion information by recording the objects' coordinates
in the video sequence. We generate a four-dimensional tensor representation of multiple motion trajectories
from multiple cameras. We subsequently rely on high-order singular value decomposition (HOSVD) for compact
representation and dimensionality reduction of the tensor. We show that HOSVD-based representation provides
a robust framework that can be used for a unified representation of the HOSVD of all subtensors. We thus
demonstrate analytically and experimentally that the proposed HOSVD-based representation can handle flexible
query structure consisting of an arbitrary number of objects and cameras. Simulation results are finally used to
illustrate the superior performance of the proposed approach to multiple trajectory indexing and retrieval from
multi-camera systems compared to the use of a single camera.
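The compact representation step can be sketched with a textbook HOSVD: compute the left singular vectors of each mode-n unfolding of the 4-D trajectory tensor, then project onto the leading singular vectors to obtain a small core. The tensor layout (cameras x objects x time x coordinates) and the ranks below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the left singular vectors of
    each mode-n unfolding, core via multilinear projection."""
    factors, core = [], T
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U = U[:, :r]
        factors.append(U)
        # Project this mode of the core onto its leading r singular vectors.
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Hypothetical 4-D trajectory tensor: (cameras, objects, time, xy-coords).
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 5, 50, 2))
core, factors = hosvd(T, ranks=(3, 5, 10, 2))   # truncate the time mode

# Approximate reconstruction from the compact (core, factors) pair.
R = core
for mode, U in enumerate(factors):
    R = np.moveaxis(np.tensordot(U, np.moveaxis(R, mode, 0), axes=1), 0, mode)
print(core.shape, R.shape)
```

Queries over subsets of cameras or objects then operate on slices of the factor matrices, which is what makes the flexible query structure described above possible without re-decomposing the full tensor.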