This PDF file contains the front matter associated with SPIE Proceedings Volume 10222, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Structured illumination has been used to super-resolve microscopic objects and to provide topographic information in computer vision applications. Motivated by the achievements in these fields, and leveraging techniques from astronomical sparse-aperture systems, an approach is developed to super-resolve macroscopic objects in typical real-world scenarios. The challenges of super-resolving uncontrolled 3D environments are addressed, and an approach is presented that enables the collection of 3D topographic information while super-resolving. These techniques use incoherent illumination to resolve spatial detail in an intensity image. For indirect imaging scenarios, the approach is adapted with structured coherent illumination to super-resolve phase at a distance.
Super-resolution for infrared imaging is motivated by the high cost and practical limitations of large focal plane arrays. Methods in the literature require the optical system to be modified. Here, we propose a compressive-sensing-based method for super-resolution that uses the inherent point spread function of the camera. The proposed method produces high-resolution images and is robust against missing pixels. We compare our method to other super-resolution methods in the literature and show that it performs well for practical use without any modification of the optical system.
We address the mathematical foundations of a special case of the general problem of partitioning an end-to-end sensing algorithm for implementation by optics and by a digital processor for minimal electrical power dissipation. Specifically, we present a non-iterative algorithm for factoring a general k × k real matrix A (describing the end-to-end linear pre-processing) into the product BC, where C has no negative entries (for implementation in linear optics) and B is maximally sparse, i.e., has the fewest possible non-zero entries (for minimal dissipation of electrical power). Our algorithm achieves a sparsification of B, i.e., a number s of non-zero entries in B, of s ≤ 2k, which we prove is optimal for our class of problems.
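As a hedged illustration (this is the elementary positive/negative split, not the paper's algorithm), the worst-case bound s = 2k is already attained by writing A = A⁺ − A⁻: stacking the two nonnegative parts yields an optically realizable C, and B = [I | −I] has exactly 2k nonzero entries:

```python
import numpy as np

def factor_nonneg(A):
    """Factor A = B @ C with C entry-wise nonnegative and B holding 2k nonzeros.

    Illustrative only: this trivial split attains the worst-case s = 2k; the
    paper's non-iterative algorithm treats sparsification in general.
    """
    k = A.shape[0]
    A_pos = np.maximum(A, 0.0)    # entry-wise positive part of A
    A_neg = np.maximum(-A, 0.0)   # entry-wise negative part of A
    C = np.vstack([A_pos, A_neg])             # (2k x k), all entries >= 0
    B = np.hstack([np.eye(k), -np.eye(k)])    # (k x 2k), exactly 2k nonzeros
    return B, C
```

Applying C first (optics) and then the sparse B (digital) reproduces A exactly.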
Lensless imaging systems have the potential to provide new capabilities at lower size and weight than traditional imaging systems. Lensless imagers frequently employ computational imaging techniques, which move the complexity of the system away from the optical subcomponents and into a calibration process in which the measurement matrix is estimated.
We report on the design, simulation, and prototyping of a lensless imaging system that uses a 3D-printed, optically transparent random scattering element. We present end-to-end system simulations, including simulation of the calibration process and of the data-processing algorithm used to generate an image from the raw data. These simulations use GPU-based ray-tracing software and parallelized minimization algorithms to bring complete system-simulation times down to the order of seconds.
Hardware prototype results are presented, and practical lessons such as the effect of sensor noise on reconstructed image quality are discussed. System performance metrics are proposed and evaluated to discuss image quality in a manner that is relatable to traditional image quality metrics. Various hardware instantiations are discussed.
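A minimal toy of this calibration-based pipeline can be sketched as follows. Here a random matrix stands in for the scattering element's measurement matrix (in practice each column would be measured during calibration, e.g. by scanning a point source), and reconstruction is plain least squares; real systems use regularized solvers. All sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in measurement matrix: 100 sensor pixels observing 16 scene elements.
# In a real lensless system, each column is estimated during calibration.
Phi = rng.random((100, 16))

scene = rng.random(16)
raw = Phi @ scene                # raw sensor data, uninterpretable by eye

# Least-squares inversion of the calibrated measurement matrix.
recon, *_ = np.linalg.lstsq(Phi, raw, rcond=None)
```

Because the system is overdetermined and noiseless here, the least-squares solve recovers the scene exactly; sensor noise (as discussed above) is what degrades reconstructed image quality in hardware.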
In this study, a method for reducing the atmospheric effects on SAR interferometric products is proposed. The method exploits MODIS data, as well as the Saastamoinen model for the estimation of the atmospheric component and the generation of spatially continuous data for this component. Then it recovers the interferometric signal from delays caused by the atmospheric component, through the appropriate modelling of the interferometric phase.
Performance of the method depends on MODIS data resolution; however, it always improves results. Experiments showed that the accuracy of DEMs that are produced by interferometry is improved when the proposed method is applied.
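The zenith delay of the Saastamoinen model used above can be sketched as follows. This is the standard zenith form with illustrative input values, not the paper's full processing chain (which feeds MODIS-derived atmospheric data into the estimate):

```python
def saastamoinen_ztd(P_hPa, T_K, e_hPa):
    """Zenith tropospheric delay in metres (standard Saastamoinen zenith form).

    P_hPa: total surface pressure [hPa]; T_K: temperature [K];
    e_hPa: water-vapour partial pressure [hPa]. Illustrative sketch only.
    """
    return 0.002277 * (P_hPa + (1255.0 / T_K + 0.05) * e_hPa)

# Typical mid-latitude surface conditions give a delay of roughly 2.4 m:
delay_m = saastamoinen_ztd(P_hPa=1013.25, T_K=288.15, e_hPa=11.0)
```

The hydrostatic part dominates; the water-vapour term is the spatially variable component that MODIS data helps to map.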
We propose a method for partially blind deconvolution with prior information on the lens characteristics. There is a permanent demand for higher resolution in applications such as tracking, recognition, and identification. The limitations of available methods in practical systems generally stem from computational cost and power; a computationally efficient blind-deconvolution method is therefore desirable. The total-variation (TV) minimization method proposed by Vogel and Oman recovers the image from noisy data and eliminates some of the blur. Another approach, the split augmented Lagrangian shrinkage algorithm, uses the alternating direction method of multipliers (ADMM) to solve an unconstrained optimization problem comprising an ℓ1 data-fidelity term and a non-smooth regularization term. Although successful, the excessive computational requirements of these methods present a challenge for practical use. Here, we propose a parametric blind-deconvolution method with prior knowledge of the point spread function (PSF) of the camera lens. We model the PSF of circular optics as a Jinc-squared function and determine the best PSF by solving an optimization problem containing TV-norm and wavelet-sparsity objectives with an ADMM-based algorithm. We use a convolutional model and work in the Fourier domain for efficient implementation, avoiding circular-convolution artifacts by extending the unknown image region. First, we show on experimental data that the PSF of the lenses can be modeled with a Jinc-squared function. Next, we show that our algorithm improves image resolution compared to classical blind-deconvolution methods while remaining feasible in terms of computation time.
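The Jinc-squared PSF model for circular optics can be sketched numerically. This is a generic, dependency-free sketch (Bessel J1 via its integral representation), not the paper's fitted parameterization; the `cutoff` scaling is an assumed normalization:

```python
import math

def bessel_j1(x, n=2000):
    """J1(x) = (1/pi) * integral_0^pi cos(theta - x*sin(theta)) dtheta.

    Composite trapezoid rule; accurate to roughly 1e-5 for moderate x,
    which is plenty for sketching or fitting a PSF model.
    """
    h = math.pi / n
    total = 0.5 * (math.cos(0.0) + math.cos(math.pi))   # endpoint terms
    for i in range(1, n):
        t = i * h
        total += math.cos(t - x * math.sin(t))
    return total * h / math.pi

def jinc_sq_psf(r, cutoff=1.0):
    """Jinc-squared (Airy-pattern) PSF of circular optics, peak-normalized."""
    x = math.pi * cutoff * r
    if x == 0.0:
        return 1.0
    return (2.0 * bessel_j1(x) / x) ** 2
```

The first dark ring falls where J1 has its first zero (x ≈ 3.8317), which is what ties the fitted PSF width to the lens aperture.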
The values of the unwrapped phase produced by interferometric pairs can be parameterized into phase components, such as the height, atmospheric path delay, and deformation terms, and estimated through DInSAR techniques. In this study, a method is proposed that estimates the atmospheric path delay using a single interferometric pair and an atmospheric path delay estimator. The estimator relies on the minimization of the outage probability, i.e., the probability that the mean square error (MSE) of the estimated atmospheric component exceeds a desired MSE value. Outage minimization is equivalent to minimizing the MSE of the atmospheric component for a fixed outage probability. The minimization of the MSE of the atmospheric component is determined by the second-order statistics of the topography and atmospheric components. For a specific SAR image geometry, the second-order statistics of the topography component are satisfactorily approximated by the mean squared height errors of a high-quality InSAR DEM for various height and slope classes, whereas the second-order statistics of the atmospheric component are approximated by the inverse coherence value of the dataset that provides the high-quality InSAR DEM. The proposed approach is validated on real satellite images and meteorological measurements.
Direct image formation in synthetic aperture radar (SAR) involves processing data modeled as Fourier coefficients along a polar grid. Often in such data acquisition processes, imperfections in the data cannot simply be modeled as additive or even multiplicative noise. In the case of SAR, errors can arise from imprecise estimation of the round-trip wave propagation time, which manifests as phase errors in the Fourier domain. To correct these errors, we propose a phase-correction scheme that relies both on the smoothness characteristics of the image and on the phase corrections associated with neighboring pulses, which are often highly correlated due to the nature of the data offsetting. Our model exploits these correlations and smoothness characteristics simultaneously in a new autofocusing approach, and our algorithm alternates between approximate minimizers of the image features and of the phase corrections.
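The corruption model can be illustrated in one dimension. This hedged sketch only shows how per-sample (per-pulse) phase errors in the Fourier domain defocus an image and how removing them restores it; an oracle correction stands in for the paper's alternating minimization, and the sparse "scene" is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse 1D "scene" and its Fourier data (a stand-in for SAR phase history).
scene = np.zeros(64)
scene[[10, 30, 45]] = [1.0, 0.5, 0.8]
data = np.fft.fft(scene)

# Unknown per-sample phase errors (in SAR, per-pulse timing errors act so).
phi = rng.uniform(-np.pi, np.pi, size=64)
data_err = data * np.exp(1j * phi)

defocused = np.fft.ifft(data_err)                      # badly corrupted image
refocused = np.fft.ifft(data_err * np.exp(-1j * phi))  # oracle phase removal
```

An autofocus algorithm must estimate `phi` from `data_err` alone, which is where the image-smoothness prior and pulse-to-pulse correlation enter.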
Digital in-line holography serves as a useful encoder of spatial information, allowing three-dimensional reconstruction from a two-dimensional image. This is applicable to tasks such as fast motion capture and particle tracking. Sampling high-resolution holograms imposes a spatiotemporal tradeoff, so we spatially subsample holograms to increase temporal resolution. We demonstrate this idea with two subsampling techniques: periodic and uniformly random sampling. The implementation includes an on-chip setup for periodic subsampling and a DMD (digital micromirror device)-based setup for pixel-wise random subsampling. The on-chip setup enables a direct increase of up to 20× in camera frame rate. Alternatively, the DMD-based setup encodes temporal information as high-speed mask patterns projected within a single exposure (coded exposure); in this way, the frame rate is improved to the level of the DMD, with a temporal gain of 10×. The reconstruction of subsampled data from both setups is achieved in two ways: we examine and compare two iterative reconstruction methods, one an error-reduction phase retrieval and the other a sparsity-based compressed sensing algorithm. Both methods show a strong capability of reconstructing complex object fields. We present both simulations and real experiments: in the lab, we image and reconstruct the structure and movement of static polystyrene microspheres, microscopic moving peranema, and macroscopic fast-moving fur and glitter.
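The numerical kernel behind in-line hologram reconstruction is free-space propagation of the recorded field. Below is a hedged sketch of the standard angular-spectrum method (not code from the paper; grid size, pixel pitch, and wavelength in the test are illustrative):

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a sampled complex field by distance z (angular spectrum).

    Evanescent components are truncated; the propagating part is exactly
    invertible (propagate by -z), which iterative hologram reconstruction
    schemes such as error-reduction phase retrieval rely on.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * z * kz))
```

Back-propagating a recorded hologram by the object distance yields the object field; subsampled holograms feed the same operator inside an iterative loop that enforces the measured pixels as constraints.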
Snapshot compressive imaging aims to capture high-resolution images using low-resolution detectors. The challenge is the generation of simultaneous optical projections that fulfill the compressed sensing reconstruction requirements. We propose the use of controlled aberrations, introduced through wavefront coding, to produce point spread functions that can simultaneously code and multiplex the scene in a variety of ways. Besides being light efficient, the approach allows the system matrix response to be characterized analytically. We explore combinations of Zernike modes and analyze the corresponding coherence parameter. Simulation results on natively sparse and natural scenes demonstrate the feasibility of using controlled aberrations for compressive imaging.
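The coherence parameter referred to above is the standard mutual coherence of the (discretized) system matrix; lower values favor compressed sensing recovery. A minimal sketch, assuming the aberrated system response has been assembled into a matrix `Phi` with one column per scene element:

```python
import numpy as np

def mutual_coherence(Phi):
    """Largest magnitude of the normalized inner product between distinct
    columns of Phi: the standard compressed sensing coherence parameter."""
    G = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)  # unit-norm columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)   # ignore trivial self-correlations
    return gram.max()
```

Evaluating this over candidate Zernike-mode combinations gives a concrete figure of merit for comparing engineered PSFs.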
In this paper, we propose a unified optimization framework for L2, L1, and/or L0 constrained image reconstruction. First, we generalize cost functions for image reconstruction, which consist of a fidelity term with L2 norm and constraint terms with L2, L1, and/or L0 norms. This generalized cost function covers many types of existing cost functions for image reconstruction. Then, we show that this generalized cost function can be optimized by the alternating direction method of multipliers (ADMM). The ADMM is a well-known iterative optimization approach for convex problems. Experimental results demonstrate that the proposed unified optimization framework is applicable to a wide range of applications.
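A minimal, hedged instance of such a framework (standard ADMM for an L2 fidelity term with an L1 constraint, i.e. the lasso; not code from the paper) shows how each norm enters through its own sub-step:

```python
import numpy as np

def admm_l1(A, b, lam=1e-3, rho=1.0, iters=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by ADMM with the split x = z.

    The L2 fidelity term yields a linear solve and the L1 term a soft
    threshold; an L0 constraint would swap in hard thresholding at the
    same step, which is what makes the framework unify the three norms.
    """
    n = A.shape[1]
    z = np.zeros(n)
    u = np.zeros(n)
    solve = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = solve @ (Atb + rho * (z - u))                          # L2 step
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)    # L1 prox
        u += x - z                                                 # dual step
    return z
```

For a noiseless sparse signal this recovers the ground truth to high accuracy; swapping the proximal step changes the constraint without touching the rest of the loop.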
A stick-shaped TOMBO (thin observation module by bound optics) was developed for intra-oral diagnosis. The module consists of 3×3 imaging units designed to capture different optical signals. The embedded functions were stereo 3D monitoring, depth estimation, and tissue assessment. Illumination equipment and a pattern projector were integrated into the module. The teeth and gingiva of several subjects were observed, and the 3D shape of the gingiva was retrieved from a pair of unit images. The boundary between the attached gingiva and the alveolar mucosa, as well as the spatial distribution of melanin, were estimated using multiple linear regression analysis. The observed signals were confirmed to be useful for odontotherapy.
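The multiple-linear-regression step can be illustrated as per-pixel spectral unmixing. This is a hedged toy: the chromophore names and basis spectra are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

def unmix(spectra, basis):
    """Per-pixel multiple linear regression of measured signals on basis spectra.

    spectra: (pixels, bands) observed multi-band signals.
    basis:   (bands, components) hypothetical component spectra,
             e.g. columns for melanin and background absorbers.
    Returns (pixels, components) regression coefficients.
    """
    coeffs, *_ = np.linalg.lstsq(basis, spectra.T, rcond=None)
    return coeffs.T
```

Mapping the melanin coefficient back onto the image grid gives a spatial distribution like the one estimated for the gingiva.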
A one-shot, multi-directional, ultra-small-angle X-ray scattering imaging method successfully resolves the fiber orientation of a wood sample. The 2D structured illumination enables the retrieval of scattering signals in multiple directions simultaneously.
A current focus of art conservation research is to accurately identify the materials, such as oil paints or pigments, used in a work of art. Since many of these materials are fluorescent, measuring the fluorescence lifetime following an excitation pulse is a useful non-contact, quantitative way to identify pigments. In this project, we propose a simple method using a dynamic vision sensor to efficiently characterize the fluorescence lifetime of a common pigment, Egyptian Blue, with results consistent with x-ray techniques. We believe our fast, compact, and cost-effective method for fluorescence lifetime analysis is useful in art conservation research and potentially in a broader range of applications in chemistry and materials science.
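The underlying lifetime estimation can be sketched as a mono-exponential fit. This is a generic log-linear least-squares sketch with a synthetic decay constant, not the event-based processing a dynamic vision sensor actually requires:

```python
import math

def fit_lifetime(times_s, counts):
    """Estimate tau from I(t) = A * exp(-t / tau) by least squares on log I.

    Assumes noiseless, strictly positive intensities; real decay data would
    need weighting or a nonlinear fit.
    """
    logs = [math.log(c) for c in counts]
    n = len(times_s)
    mt = sum(times_s) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times_s, logs))
             / sum((t - mt) ** 2 for t in times_s))
    return -1.0 / slope
```

The fitted time constant is the quantity compared across pigments for identification.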
Propagation-based phase retrieval using the contrast transfer function (CTF) allows images at any propagation distance to be used when recovering the phase of slowly-varying objects. The CTF suffers from artifacts due to nulls in the transfer function at low spatial frequency and at higher, propagation-distance-dependent frequencies, though the latter can be alleviated by combining measurements at multiple distances. We demonstrate that the use of extended sources can improve low frequency performance. In addition, this method offers source shape as a parameter that can be used when optimizing combinations of measurements to produce robust phase reconstructions.
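The null structure described above follows from the phase CTF, which is proportional to sin(πλz f²) for a weak phase object; a hedged numeric sketch (illustrative wavelength and distances) shows the distance-independent null at DC and the movable higher nulls:

```python
import math

def ctf_nulls(wavelength, z, f_max):
    """Spatial frequencies beyond DC where sin(pi*wavelength*z*f^2) vanishes.

    The null at f = 0 is distance-independent (the low-frequency problem);
    the others scale as sqrt(m / (wavelength * z)), so measurements at a
    second distance z can cover them.
    """
    nulls = []
    m = 1
    while True:
        f = math.sqrt(m / (wavelength * z))
        if f > f_max:
            return nulls
        nulls.append(f)
        m += 1
```

Because the DC null does not move with z, multi-distance combinations alone cannot fix it, which is the gap the extended-source approach targets.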