Polarimetric LIDAR is a significant tool for current remote sensing applications. In addition, measurement of the full waveform of the LIDAR echo provides improved ranging and target discrimination, although data storage volume can be problematic in this approach. In the work presented here, we investigated the practical issues related to the implementation of a full waveform LIDAR system to identify polarization characteristics of multiple targets within the footprint of the illumination beam. This work was carried out on a laboratory LIDAR testbed that features a flexible arrangement of targets and the ability to change the target polarization characteristics. Targets with different retardance characteristics were illuminated with a linearly polarized laser beam, and the return pulse intensities were analyzed by rotating a linear polarization analyzer in front of a high-speed detector. Additionally, we explored the applicability and the limitations of applying a sparse sampling approach based on Finite Rate of Innovation (FRI) to compress and recover the characteristic parameters of the pulses reflected from the targets. The pulse parameter values extracted by the FRI analysis were accurate, and we successfully distinguished the polarimetric characteristics and the range of multiple targets at different depths within the same beam footprint. We also demonstrated the recovery of an unknown target retardance value from the echoes by applying a Mueller matrix system model.
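The analyzer-rotation measurement described above can be sketched numerically. The following is a minimal illustration, not the testbed's actual processing chain: it assumes an ideal linear retarder target (fast axis horizontal), an input beam linearly polarized at 45 degrees, and noiseless intensities; under those assumptions the detected intensity reduces to I(α) = 0.5·(1 + sin(2α)·cos δ), so projecting the measured curve onto sin(2α) recovers the retardance δ.

```python
import numpy as np

# Mueller matrix of an ideal linear retarder (fast axis horizontal)
def retarder(delta):
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, c, s],
                     [0, 0, -s, c]], dtype=float)

# Mueller matrix of an ideal linear polarizer (the rotating analyzer) at angle alpha
def analyzer(alpha):
    c2, s2 = np.cos(2 * alpha), np.sin(2 * alpha)
    return 0.5 * np.array([[1, c2, s2, 0],
                           [c2, c2 * c2, c2 * s2, 0],
                           [s2, c2 * s2, s2 * s2, 0],
                           [0, 0, 0, 0]])

S_in = np.array([1.0, 0.0, 1.0, 0.0])  # unit-intensity beam, linearly polarized at 45 deg

def intensity(alpha, delta):
    # detected intensity is the first Stokes element after target and analyzer
    return (analyzer(alpha) @ retarder(delta) @ S_in)[0]

# simulate echo intensities while rotating the analyzer, then invert for delta
alphas = np.linspace(0, np.pi, 16, endpoint=False)
true_delta = np.pi / 3
I = np.array([intensity(a, true_delta) for a in alphas])
basis = np.sin(2 * alphas)
# I(alpha) = 0.5 + 0.5*cos(delta)*sin(2*alpha), so the projection carries 0.5*cos(delta)
cos_delta = 2.0 * (I @ basis) / (basis @ basis)
est_delta = np.arccos(np.clip(cos_delta, -1.0, 1.0))
```

Note that arccos resolves δ only up to sign (within [0, π]); disambiguating further would require additional measurements, as in the full Mueller matrix system model.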
In this paper, we introduce a LIDAR return pulse analysis framework based on the concept of finite rate of innovation (FRI). Specifically, the proposed FRI-based model allows us to characterize the temporal return pulse envelopes captured by third-generation LIDAR systems in a low-dimensional space. Furthermore, the extracted model parameters can often be mapped to specific physical features of the scene being captured, aiding in high-level interpretation. After describing the model formulation and extraction process, we illustrate its potential utility in two specific applications: sub-spot-size ranging (super-resolution) and random impulsive scene scanning. In the course of this discussion, we also relate the FRI model to compressive sensing and sparse range-map reconstruction.
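The core idea, that a K-echo return is fully described by roughly 2K pulse parameters, can be illustrated with a toy fit. This sketch uses a plain least-squares fit rather than the annihilating-filter machinery typically used for FRI recovery, and the pulse width, delays, and amplitudes are all hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA = 1.5  # assumed (hypothetical) system pulse width, in sample units

def two_pulse(t, a1, t1, a2, t2):
    """Return envelope modeled as two Gaussian echoes: 4 parameters total."""
    g = lambda tau: np.exp(-0.5 * ((t - tau) / SIGMA) ** 2)
    return a1 * g(t1) + a2 * g(t2)

t = np.linspace(0, 50, 500)
clean = two_pulse(t, 1.0, 18.0, 0.6, 24.0)          # two targets in one footprint
noisy = clean + 0.01 * np.random.default_rng(0).normal(size=t.size)

# recover the four innovation parameters from the sampled waveform
popt, _ = curve_fit(two_pulse, t, noisy, p0=[1.0, 15.0, 1.0, 27.0])
```

The fitted delays correspond to target ranges and the amplitudes to reflectivities; the point is that the whole waveform collapses to a handful of parameters.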
An imaging polarimeter records the polarization state of light reflected by an object that is illuminated with a
polarized source such as a laser. Active polarimetric imagery has been shown to be useful in many remote sensing
applications including shape extraction, material classification and target detection/recognition. In this paper, we
present a method that automatically extracts the angle of incidence, angle of reflection and the relative azimuthal
angle from Mueller matrix imagery. Mueller matrix imagery provides multiple measurements from which we can
construct a nonlinear system of equations. This system is solved using the Levenberg-Marquardt algorithm
which is a standard nonlinear equation solver. We demonstrate via computer simulations that
the parameters can be estimated accurately using our approach.
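The solver step can be sketched as follows. The three-equation system below is a hypothetical stand-in for the measurement equations built from Mueller matrix imagery (the actual equations depend on the pBRDF model); `method='lm'` selects SciPy's Levenberg-Marquardt implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical nonlinear system in the incidence angle, reflection angle,
# and relative azimuthal angle (all in radians); illustrative values only
def residuals(x):
    theta_i, theta_r, phi = x
    return [np.cos(theta_i) - 0.8,
            np.sin(theta_r) + np.cos(phi) - 1.2,
            theta_i + theta_r - 1.5]

# method='lm' runs the Levenberg-Marquardt algorithm from the initial guess
sol = least_squares(residuals, x0=[0.5, 0.5, 0.5], method='lm')
```

With noisy imagery one would minimize the residual norm over many pixels rather than solve an exact system, but the calling pattern is the same.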
A passive imaging polarimeter records the polarization state of light reflected by an object that is illuminated with
an unpolarized and usually uncontrolled source. Passive polarimetric imagery has been shown to be useful in many
remote sensing applications including shape extraction, material classification and target detection/recognition.
In this paper, we present an image segmentation algorithm that automatically extracts an object from multi-look
passive polarimetric imagery. The term multi-look refers to multiple polarization measurements where the
position of the source of illumination (typically the Sun in passive systems) changes between measurements. The
proposed method relies on our previous work on estimating the complex index of refraction and reflection angle
from multi-look passive polarimetric imagery. We experimentally showed that the estimates for the index of
refraction were largely invariant to both the position of the source and the view angle. Consequently, we utilize
the index of refraction as a feature vector to design an illumination invariant image segmentation algorithm.
A clustering approach based on the classic c-means algorithm is used for segmenting objects based on their
index of refraction. The proposed segmentation approach is validated by using data collected under laboratory
conditions. Experimental results indicate that the proposed method is effective for segmenting various targets.
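The clustering step can be sketched with the classic hard c-means (Lloyd) iteration on a per-pixel index-of-refraction feature. The refractive-index values below are synthetic and purely illustrative of a two-material scene:

```python
import numpy as np

# synthetic per-pixel refractive-index estimates for two materials (illustrative)
rng = np.random.default_rng(1)
n_map = np.concatenate([rng.normal(1.5, 0.02, 500),
                        rng.normal(2.4, 0.03, 500)])
X = n_map.reshape(-1, 1)

def cmeans(X, k=2, iters=50):
    """Hard c-means (Lloyd's algorithm) on a per-pixel feature vector."""
    # spread the initial centers across the observed feature range
    centers = np.quantile(X, np.linspace(0.1, 0.9, k)).reshape(k, 1)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.abs(X - centers.T)            # (pixels, k) distance table
        labels = np.argmin(dists, axis=1)        # nearest-center assignment
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = cmeans(X)
```

Because the feature is the estimated index of refraction rather than intensity, the resulting segmentation is largely invariant to illumination position, which is the point of the approach.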
Passive polarimetric imagery conveys information that complements the information contained in intensity and spectral
imagery. Passive polarimetric measurements have been exploited in many remote sensing applications such as shape
extraction, surface inspection and object detection/recognition. In previous work Thilak et al. proposed an algorithm to
estimate the index of refraction and view angle (object surface orientation) from multiple polarization images where the
source position changes between measurements. That work relies on a specular polarimetric bidirectional reflectance
distribution function (pBRDF) developed by Priest and Meier. The pBRDF incorporates a Mueller matrix that
characterizes the polarized reflection properties of a target for any incident Stokes vector. The results in Thilak et al.
assumed that scattering occurs in the plane of incidence, which means that the pBRDF matrix contains many zero
elements. In this paper, we extend this work to an out-of-plane scattering geometry, which implies that the pBRDF
matrix contains more non-zero elements. In the initial work presented here, a nonlinear optimization approach is utilized
to estimate the incident and reflection angles from a single polarization measurement assuming knowledge of the surface
index of refraction and azimuthal angle between source and receiver. The effectiveness of the proposed method is
verified through computer simulation.
A passive polarization-based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. The information conveyed by the polarization state of light has been exploited in applications such as target detection, shape extraction and material classification. In this paper we present a method to jointly estimate the refractive index and the reflected zenith angle from two measurements collected by a passive polarimeter. An expression for the degree of polarization is derived from the microfacet polarimetric bidirectional reflectance model for the case of scattering in the plane of incidence. The parameters of interest are then estimated iteratively from the polarization measurements. Computer simulations are presented to demonstrate the effectiveness of the proposed method.
Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the <i>a posteriori</i> probability. Computing <i>a posteriori</i> probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), which is a nonparametric method that can realize Bayes-optimal decision boundaries, and a K-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
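The density-based decision rule can be sketched compactly: a Gaussian Parzen-window estimate per class (the computation underlying a PNN) with equal priors, applied to hypothetical two-dimensional polarimetric features. All feature values below are illustrative, not laboratory data:

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.1):
    """Assign x to the class with the largest Gaussian Parzen-window
    density estimate (equal priors assumed), i.e. the PNN decision rule."""
    best_class, best_score = None, -np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)              # squared distances to class samples
        score = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))  # kernel density at x
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# hypothetical training features (e.g. degree and angle of polarization)
X_train = np.array([[0.20, 0.10], [0.25, 0.12],   # class 0: background
                    [0.80, 0.50], [0.78, 0.52]])  # class 1: target
y_train = np.array([0, 0, 1, 1])
```

A KNN classifier replaces the kernel average with a vote among the k nearest training samples; both approximate the Bayes decision boundary as the training set grows.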
In this paper we consider a new form of successive coefficient refinement which can be used in conjunction with embedded compression algorithms like Shapiro's EZW (Embedded Zerotree Wavelet) and Said & Pearlman's SPIHT (Set Partitioning in Hierarchical Trees). Using the conventional refinement process, the approximation of a coefficient that was earlier determined to be significant is refined by transmitting one of two symbols--an `up' symbol if the actual coefficient value is in the top half of the current uncertainty interval or a `down' symbol if it is in the bottom half. In the modified scheme developed here, we transmit one of three symbols instead--`up', `down', or `exact'. The new `exact' symbol tells the decoder that its current approximation of a wavelet coefficient is `exact' to the level of precision desired. By applying this scheme in earlier work to lossless embedded compression (also called lossy/lossless compression), we achieved significant reductions in encoder and decoder execution times with no adverse impact on compression efficiency. These excellent results for lossless systems have inspired us to adapt this refinement approach to lossy embedded compression. Unfortunately, the results we have achieved thus far for lossy compression are not as good.
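The three-symbol rule can be written down in a few lines. This is a simplified sketch (function names and the `eps` precision parameter are our own) of one refinement pass for a coefficient known to lie in the half-open interval [low, high):

```python
def refine_symbol(value, low, high, eps=0.0):
    """One refinement step for a coefficient known to lie in [low, high).
    Returns the transmitted symbol and the new uncertainty interval; 'exact'
    tells the decoder its midpoint reconstruction already matches the
    coefficient to within the desired precision eps."""
    mid = (low + high) / 2.0
    if abs(value - mid) <= eps:
        return 'exact', mid, mid
    if value >= mid:
        return 'up', mid, high       # top half of the uncertainty interval
    return 'down', low, mid          # bottom half

# refining the coefficient 13 inside the initial interval [8, 16)
syms, lo, hi = [], 8.0, 16.0
while True:
    s, lo, hi = refine_symbol(13.0, lo, hi)
    syms.append(s)
    if s == 'exact':
        break
```

Once `exact` is sent, the encoder can skip this coefficient in all later refinement passes, which is the source of the execution-time savings.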
In this paper we present a novel synthesis of embedded compression and texture recognition. In order to maximize the information content of a transmitted image, a texture recognition and segmentation algorithm is used to identify potential areas of disinterest, and the compression algorithm regionally compresses the image, allocating fewer bits to textured regions. In this method, higher resolution is achieved in areas that potentially are of more interest to a viewer.
An embedded coding algorithm creates a compressed bit stream which can be truncated to produce reduced resolution versions of the original image. This property allows such algorithms to precisely achieve fixed bit rates, reducing or eliminating the need for rate control in video transmission applications. Furthermore, embedded bit streams have a certain degree of innate error resistance and can also be used to facilitate unequal error protection. Unfortunately, embedded compression algorithms are relatively slow. In this paper, we introduce the concept of adaptive embedding in an effort to address this problem. Such embedding increases the speed of the algorithm by reducing the number of separate resolution layers contained within the bit stream. Since it is often impossible to effectively use the many layers created by existing embedded coders, sacrificing some of them to speed up the processing may be quite acceptable. We show here that adaptive embedding increases execution speed by 28 percent for fixed-rate video compression with only a 2 percent reduction in rate-distortion performance. Finally, we also introduce an alternate form of lossless compression which increases execution speed by another 6-10 percent at the expense of reconstruction quality.
In this work, we present a new family of image compression algorithms derived from Shapiro's embedded zerotree wavelet (EZW) coder. These new algorithms introduce robustness to transmission errors into the bit stream while still preserving its embedded structure. This is done by partitioning the wavelet coefficients into groups, coding each group independently, and interleaving the bit streams for transmission; thus, if one bit is corrupted, only one of these bit streams is truncated in the decoder. If each group of wavelet coefficients uniformly spans the entire image, then the objective and subjective qualities of the reconstructed image are very good. To illustrate the advantages of this new family, we compare it to the conventional EZW coder. For example, one variation has a peak signal-to-noise ratio (PSNR) slightly lower than that of the conventional algorithm when no errors occur, but when a single error occurs at bit 1000, the PSNR of the new coder is well over 5 dB higher for both test images. Finally, we note that the new algorithms do not increase the complexity of the overall system and, in fact, they are far more easily parallelized than the conventional EZW coder.
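The interleave-and-truncate behavior can be illustrated at the bit level. This toy sketch assumes equal-length group streams and that the decoder knows the position of the first corrupted bit (in practice an error-detecting code would locate it); the function names are our own:

```python
def interleave(streams):
    """Round-robin interleave equal-length bit streams (one per coefficient group)."""
    return [s[i] for i in range(len(streams[0])) for s in streams]

def deinterleave(bits, k, first_error=None):
    """Split the received stream back into k group streams; if the position
    of the first corrupted bit is known, truncate only the group it hit --
    every other group still decodes in full."""
    streams = [list(bits[i::k]) for i in range(k)]
    if first_error is not None:
        group, depth = first_error % k, first_error // k
        streams[group] = streams[group][:depth]
    return streams

tx = interleave([[1, 1, 0], [0, 1, 0]])   # two groups, round-robin on the wire
rx = deinterleave(tx, 2, first_error=3)   # bit 3 corrupted: hits group 1 at depth 1
```

Because each group stream is itself embedded, truncating one stream merely lowers the resolution of the coefficients in that group rather than destroying the rest of the image.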
We explore here the implementation of Shapiro's embedded zerotree wavelet (EZW) image coding algorithms on an array of parallel processors. To this end, we first consider the problem of parallelizing the basic wavelet transform, discussing past work in this area and the compatibility of that work with the zerotree coding process. From this discussion, we present a parallel partitioning of the transform which is computationally efficient and which allows the wavelet coefficients to be coded with little or no additional inter-processor communication. The key to achieving low data dependence between the processors is to ensure that each processor contains only entire zerotrees of wavelet coefficients after the decomposition is complete. We next quantify the rate-distortion tradeoffs associated with different levels of parallelization for a few variations of the basic coding algorithm. Studying these results, we conclude that the quality of the coder decreases as the number of parallel processors used to implement it increases. Noting that the performance of the parallel algorithm might be unacceptably poor for large processor arrays, we also develop an alternate algorithm which always achieves the same rate-distortion performance as the original sequential EZW algorithm at the cost of higher complexity and reduced scalability.
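The partitioning rule above, that every coefficient must stay with its zerotree, can be sketched for a toy dyadic decomposition. The index conventions here are simplified and hypothetical (finest-scale coefficients traced back to coarsest-scale roots by repeated halving; roots dealt round-robin to processors):

```python
def zerotree_root(row, col, levels):
    """Trace a finest-scale coefficient back to its coarsest-scale ancestor:
    each parent covers a 2x2 block of children, so halve the indices."""
    for _ in range(levels):
        row //= 2
        col //= 2
    return row, col

def partition(n_rows, n_cols, levels, n_procs):
    """Assign whole zerotrees (grouped by root) round-robin to processors,
    so no tree is split across processor boundaries."""
    roots_per_row = n_cols >> levels
    assign = {}
    for r in range(n_rows):
        for c in range(n_cols):
            rr, rc = zerotree_root(r, c, levels)
            assign[(r, c)] = (rr * roots_per_row + rc) % n_procs
    return assign

assign = partition(8, 8, 2, 4)   # 8x8 finest-scale grid, 2 levels -> 2x2 roots
```

Because all descendants of a root land on the same processor, significance decisions for a zerotree never require inter-processor communication during coding.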