Hybrid passive polarimetric imager and lidar combination for material classification

Abstract. We investigate the augmentation of active imaging with passive polarimetric imaging for material classification. Experiments are conducted to obtain a multimodal dataset of lidar reflectivity and polarimetric thermal self-emission measurements against a diverse set of material types. Using the assumption that active lidar imaging can provide high-resolution three-dimensional spatial information, a known surface orientation is utilized to enable higher fidelity classification. Machine learning is applied to the dataset of monostatic lidar unidirectional reflectivity and passive longwave infrared degree of linear polarization features for material classification. The hybrid sensor technique can classify materials with 91.1% accuracy even with measurement noise resulting in a signal-to-noise ratio of only 6 dB. The proposed technique is applicable to the classification of hidden objects and could assist existing spatial-based object classification.


Introduction
Many applications such as autonomous driving, surveillance, reconnaissance, and target engagement require the capability to accurately classify objects. Both active (e.g., sonar, radar, and lidar systems) and passive imaging (e.g., visible and infrared cameras) are popular solutions for object classification. Active imaging sensors operating in the optical spectrum, such as lidar, actively transmit light and detect backscattered light to characterize material properties, shape, and size. 1 Lidar offers several advantages over other sensing modalities, including ranging (enabling pointcloud rendering), pulse separation (enabling foliage penetration for hidden object classification), directional material reflectance, and invariance to lighting. 2 Similar to lidar, passive infrared sensors also capture material properties such as spectral reflectance as well as spatial information; however, passive sensors rely on external sources to illuminate or emit radiance (e.g., the sun illuminating the material or the material self-emitting due to body temperature). Both lidar and passive infrared imaging have demonstrated excellent performance in object classification, assuming sufficient pixel coverage of the object is obtained via imaging in order to infer spatial information of the object (e.g., template matching). 3,4 However, spatial-based recognition is only successful if a significant portion of the object is visible. For scenarios where only a small fraction of a surface on an object is imaged (e.g., hidden by obscuration), spatial information is of limited utility. In this scenario, only spectral information is available for object classification (this nonspatial classification can be considered material classification).
Polarization-sensitive passive infrared imaging has been employed and demonstrated to improve discrimination between natural and manmade classes. 5,6 This two-class discrimination is typically based on contrast enhancement of a spatial area in the scene with the surrounding background. More sophisticated material classification techniques using passive polarimetric imagers have also been investigated to classify material types. [7][8][9][10][11] In order to obtain successful results, the techniques must make several assumptions such as a known orientation of the material surface and a single illumination source. These assumptions may not be valid in actual remote sensing applications utilizing passive sensors alone. In related work, the authors suggest utilizing lidar to obtain object orientation and measurement geometry; 10 however, no further research has been presented regarding this suggestion.
In recent years, significant advancements have been made in machine learning techniques for classification, specifically in the field of deep learning. [12][13][14][15][16] A recently published survey 12 reviews deep learning-based hyperspectral image classification publications and compares several strategies for this topic. The survey includes networks designed to only use the spectral content of a single pixel, which is ideal for material classification. Unfortunately, material classification in passive imaging is difficult due to significant signal variability from the fluctuation in external sources such as temperature, cloud cover, and diurnal cycle. 17 This issue has been demonstrated with a deep belief network trained using longwave infrared (LWIR) hyperspectral imagery collected over multiple diurnal cycles. 13 Results showed that a multiday augmented deep network had a significant drop in performance when tested on a single day, demonstrating a lack of generalization for the specific dataset utilized. In other work, a deep transfer learning method has been proposed to improve hyperspectral image classification performance in situations with limited training samples. 14 The deep network design consistently demonstrates superior performance over other popular machine learning techniques. However, the design requires spatial features, which may be limited if an object is partially hidden. Similar work utilizes deep learning techniques to combine hyperspectral imagery with visible 15 and lidar 16 modalities. These publications suggest that combining information using machine learning techniques will greatly enhance classification performance.
In this paper, we present a hybrid passive polarimetric LWIR imager and lidar combination for material classification. Lidar is commonly paired with hyperspectral imagery to leverage the height and shape features of lidar with the spectral characterization obtained by passive sensors operating at many wavelengths. [18][19][20] Similarly, polarimetric imagery is also typically fused with hyperspectral imagery. [21][22][23] In contrast to the aforementioned research, which relies on the hyperspectral characterization of materials to distinguish material types, we combine passive polarimetric and active reflectivity features of the dual imaging architecture. The specific imaging capabilities we use include degree of linear polarization (DoLP) from passive polarimetric imaging, monostatic unidirectional reflectance (f_r) from lidar imaging, and viewing orientation (θ, ϕ). Viewing orientation is assumed to be available using lidar three-dimensional (3-D) pointcloud ranging. Very limited research has been published on the combination of lidar with passive polarimetric imaging to improve classification performance, which we believe is an important aspect in machine learning applications for infrared imaging. 24 The innovation of our work includes (1) the architecture of utilizing θ, ϕ, and f_r from lidar in combination with DoLP measured by a passive polarimetric imager, (2) a unique dataset of 34 diverse material types imaged by the hybrid system at eight observation angles, and (3) material classification results from combining the measurements, viewing angle, and training data. Therefore, the emphasis of this paper is the introduction and demonstration of the proposed hybrid sensing technique for material classification. We believe advanced classification methods could be designed for specific applications based on this work.
The remainder of this paper is organized as follows. In Sec. 2, we describe the sensing modalities used in this work, including the sensor data representation. Then, Sec. 3 presents a solution for material classification focused on the joint usage of passive polarimetric and lidar infrared imaging. The proposed multisensor architecture utilizes observation angle as well as multiple measurements taken from each sensor to classify material type. A demonstration of an example application is also presented. In Sec. 4, we demonstrate the feasibility of material classification with the proposed multisensor architecture by training and testing six popular machine learning techniques. The measurement and processing of the dual modality dataset are explained. Classification accuracy of the multisensor architecture is compared to the performance of each sensor operating independently. Finally, we conclude our work and outline future research directions in the last section.

Sensors and Data Representation
The machine learning application presented in this paper utilizes a hybrid imaging architecture consisting of lidar and passive polarimetric sensors to capture f_r and DoLP features, respectively. The independent sensing modalities present distinct characteristics of a material; however, both depend on the same description of the interaction of the electromagnetic field with materials. Consider the scenario of an optical signal with wavelength λ (nm) incident on a surface from the direction described by θ_i and ϕ_i, and reflecting into the direction of θ_r and ϕ_r. The reflected radiance L_r (W m^−2 sr^−1) carries information about polarimetric interactions of the incident irradiance E_i (W m^−2), and is expressed as

L_r(θ_r, ϕ_r, λ) = M_r(θ_i, ϕ_i, θ_r, ϕ_r, λ) E_i(θ_i, ϕ_i, λ),  (1)

where M_r (sr^−1) is the polarimetric bidirectional reflectance distribution function, which is a 4 × 4 Mueller matrix. 25-27 L_r and E_i are 4 × 1 column matrices in Stokes notation, described as

S = [s_0, s_1, s_2, s_3]^T,  (2)

where S represents the polarimetric state of the signals described by Stokes parameters s_0, s_1, s_2, and s_3. Stokes notation allows s_0 to represent total signal intensity, s_1 to represent horizontal and vertical linear polarizations, s_2 to represent linear polarization oriented at 45 deg and 135 deg, and s_3 to represent circular polarization. 26 Equation (1) is the general representation of an optical signal interacting with a material surface. The data representation of signals captured by lidar and passive polarimetric imaging is further discussed in the following sections.
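The Stokes-Mueller interaction of Eqs. (1) and (2) can be sketched numerically. The snippet below is illustrative only: an ideal horizontal linear polarizer stands in for the surface Mueller matrix M_r, since measured matrices for real surfaces are not reproduced here.

```python
import numpy as np

# Stokes vector (Eq. (2)) of unpolarized incident irradiance: [s0, s1, s2, s3]
E_i = np.array([1.0, 0.0, 0.0, 0.0])

# Mueller matrix of an ideal horizontal linear polarizer, used here as an
# illustrative stand-in for the surface's polarimetric BRDF M_r in Eq. (1)
M_r = 0.5 * np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

# Eq. (1): the reflected Stokes radiance is a matrix-vector product
L_r = M_r @ E_i   # -> [0.5, 0.5, 0, 0]: half the power, fully linearly polarized
```

Half of the unpolarized power is transmitted and the result is fully linearly polarized (s_1 = s_0), as expected for an ideal polarizer.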

Lidar
The lidar features utilized in our machine learning technique include unidirectional reflectivity and range. Reflectivity is used to characterize the material, and range is used to estimate the observation angle of the material surface. The direct detection pulsed lidar sensor utilized in this work operates at the 1.55-μm wavelength and uses a linear-mode avalanche photodetector. The system transmits a 5-ns full-width at half-maximum laser pulse which strikes and scatters off opaque surfaces. The intensity of the backscattered laser energy is captured by the photodetector and digitized by a receiver. The time elapsed between the transmitted and reflected pulses is used to calculate range. Multiple range measurements across a fraction of a surface can be used to estimate angle of incidence. The peak of the backscattered pulse is used to estimate reflectivity. As shown in Fig. 1(a), active sensors are typically dominated by unidirectional radiance represented by Eq. (1) with θ_r = θ_i and ϕ_r = ϕ_i, which we denote as θ and ϕ, respectively. However, the receiving detector is polarization insensitive; therefore, only the s_0 component of L_r is measured. Furthermore, we assume nondiagonal elements in the first row of the Mueller matrix for our data to be zero. This assumption is supported by experimental measurements 28,29 of diverse materials, which show that nondiagonal Mueller matrix elements of most opaque surfaces that might be observed in a remote sensing application are approximately zero. Using the stated simplifications, Eq. (1) is approximated for our lidar system as

L_r(θ, ϕ, λ) = f_r(θ, ϕ, λ) E_i(θ, ϕ, λ),  (3)

where L_r and E_i are the scalar s_0 elements of L_r and E_i, and f_r (sr^−1) is the top-left element of M_r, which represents the scalar monostatic bidirectional reflectance distribution function (mBRDF). Due to practical complications in measuring E_i in Eq. (3), f_r is defined in an alternative form as

f_r(θ, ϕ, λ) = P_r / (P_i Ω cos θ),  (4)

which describes the scattered power P_r (W) per unit solid angle Ω (sr) normalized by the incident power P_i (W) and the cosine of the detector zenith angle θ measured relative to the material surface. 30 Theoretically, an active imaging system could be calibrated to have a known P_i by measuring direct output power and estimating range and atmospheric attenuation. Likewise, θ could be estimated by calculating surface orientation using lidar 3-D point-cloud data, and Ω is calculated from range and aperture size. Therefore, f_r could be calculated and utilized for material classification. An alternative method to calculate f_r in experimentation utilizes a reference material with a known directional-hemispherical reflectivity ρ_DHR, such as Spectralon, in addition to P_r and θ. This is a favorable method because P_i can be difficult to calibrate; however, ρ_DHR can be accurately measured using laboratory instruments. Since Spectralon is manufactured to closely approximate an ideal Lambertian diffuse reflector, the Spectralon f_r is assumed to be ρ_DHR(λ)/π, which has been supported by laboratory measurements. Finally, mBRDF is calculated as

f_r(θ, ϕ, λ) = [P_r cos θ_s / (P_r^s cos θ)] · [ρ_DHR(λ)/π],  (5)

where P_r^s is the power measurement (or backscatter pulse peak) of the Spectralon at angle θ_s, P_r is the power measurement of the sample, and the incident power is characterized to be constant for each measurement (Spectralon and sample). 30,31 In this paper, a database of material f_r is collected using the technique described in Eq. (5).
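As a sketch, the Spectralon-referenced mBRDF calculation of Eq. (5) reduces to a few lines; the function name and the numerical values below are hypothetical illustrations, not measurements from the dataset.

```python
import math

def mbrdf(P_r, theta, P_rs, theta_s, rho_dhr):
    """Eq. (5): monostatic BRDF of a sample referenced to Spectralon.

    P_r     : backscatter pulse peak of the sample (W)
    theta   : sample observation zenith angle (rad)
    P_rs    : backscatter pulse peak of the Spectralon panel (W)
    theta_s : Spectralon observation zenith angle (rad)
    rho_dhr : Spectralon directional-hemispherical reflectivity at 1.55 um
    """
    return (P_r * math.cos(theta_s)) / (P_rs * math.cos(theta)) * rho_dhr / math.pi

# Hypothetical case: at the same angle, the sample returns half the Spectralon
# power, so its f_r is half the (near-Lambertian) Spectralon value rho/pi
f_r = mbrdf(P_r=0.5, theta=0.0, P_rs=1.0, theta_s=0.0, rho_dhr=0.99)
```

Because only the power ratio enters, the difficult-to-calibrate incident power P_i cancels, which is the practical advantage noted in the text.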

Passive Polarimeter
The polarimetric feature, DoLP, is captured using a cooled Polaris 640 LWIR Imaging Polarimeter, manufactured by Polaris Sensor Technologies, Inc. 32 The sensor has an operating wavelength of 7.5 to 11.1 μm and up to a 120-Hz frame rate. The Polaris 640 system is equipped with a fixed polarizer and rotating retarder imaging polarimeter, which takes measurements of linear polarization oriented at 0 deg, 45 deg, 90 deg, and 135 deg to obtain the polarimetric Stokes column matrix

L = [s_0, s_1, s_2, 0]^T.  (6)

Since circular polarization emitted from an object is extremely uncommon, most passive polarimeters (including the one utilized in our experiments) do not capture s_3; 27,33 therefore, the s_3 element is set to zero.
A common characterization of polarization in passive polarimetric imaging is DoLP, which is calculated from L as

DoLP = sqrt(s_1^2 + s_2^2) / s_0,  (7)

and describes the fraction of the power that is linearly polarized. Due to the nature of the quantities involved, DoLP ranges from zero to one (i.e., zero indicates no polarization is detected, and one indicates the signal is completely polarized). As depicted in Fig. 1(b), passive sensors capture the sum of specular and diffuse reflected signals as well as self-emitted radiance. 34 Emitted radiance L_e (W m^−2 sr^−1) is described as

L_e(θ, ϕ, λ) = M_e(θ, ϕ, λ) E_b(λ),  (8)

where E_b (W m^−2) is the intensity of radiance derived from the surface body temperature, M_e (sr^−1) is the directional polarimetric emittance, which is a 4 × 4 Mueller matrix, 27 and (θ, ϕ) is the observation angle relative to normal. The specular and diffuse reflected radiance are each described by Eq. (1). We assume the emitted radiance is significantly larger than the diffuse and specular reflectance within the LWIR waveband. Through experimentation, it has been shown that this is a valid assumption when imaging objects heated to ∼100°C with a cold sky. 35 Therefore, the experiments in this paper are conducted on heated samples in a controlled indoor laboratory. Passive polarimetric measurements of an object are taken with the retarder waveplate at angles 0 deg, 45 deg, 90 deg, and 135 deg so that the column matrix in Eq. (6) can be constructed. Finally, DoLP is calculated using Eq. (7). The fundamental properties of polarization suggest that polarimetric measurements could be useful features for material classification, specifically in discriminating rough and smooth surfaces. 9,36 This is typically explained by representing the texture of the surface as multiple microfacets with orientations following a random distribution.
The angle-dependent polarization from each microfacet is incoherently summed when simultaneously observing multiple microfacets of a rough surface, resulting in an unpolarized signal. Conversely, smooth surfaces maintain a consistent orientation across the surface and therefore preserve the polarimetric signal.
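The Stokes reduction and Eq. (7) can be sketched as follows. Note the caveat in the comments: the textbook rotating-analyzer reduction is used here for simplicity, whereas the Polaris 640's fixed-polarizer/rotating-retarder design has its own data-reduction matrix; the resulting Stokes estimates play the same role either way.

```python
import math

def stokes_from_analyzer(I0, I45, I90, I135):
    """Reduce four linear-analyzer intensities to Stokes parameters.

    This is the textbook rotating-analyzer reduction, used here as a simple
    stand-in for the instrument's actual data-reduction matrix.
    """
    s0 = 0.5 * (I0 + I45 + I90 + I135)
    s1 = I0 - I90
    s2 = I45 - I135
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Eq. (7): degree of linear polarization."""
    return math.sqrt(s1 ** 2 + s2 ** 2) / s0

# Fully horizontally polarized signal: DoLP should be exactly 1
s0, s1, s2 = stokes_from_analyzer(I0=1.0, I45=0.5, I90=0.0, I135=0.5)
```

An unpolarized signal (all four intensities equal) yields s_1 = s_2 = 0 and DoLP = 0, matching the rough-surface behavior described above.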

Data Representation
This paper advances material classification by utilizing the feature set consisting of measurements of lidar and passive polarimetric sensors, both characterized over a well-defined set of observation angles. The number of unique observation angles and the specific angles utilized are expected to significantly affect classification performance. For example, from Fresnel reflectance theory, DoLP is known to increase as the observation angle relative to normal increases. 37 Concerning the mBRDF angle dependence, perfectly diffuse Lambertian surfaces have uniform f_r for all angles; however, realistic surfaces typically have specular components with higher values within the normal-incidence specular lobe. 30 We assume the observation angle can be determined by estimating surface orientation relative to normal using lidar 3-D point-cloud imagery. The observation angle is represented as θ and is restricted to be in the monostatic plane of incidence such that ϕ = 0 deg. Furthermore, in many applications, multiple observation angles can be measured on a single material surface, due to a moving platform or moving object. The features are jointly represented by feature vector X as

X(θ_1, θ_2, …, θ_N) = [f_r(θ_1), …, f_r(θ_N), DoLP(θ_1), …, DoLP(θ_N)],  (9)

where N represents the total number of observation angles at which the measurements are taken.
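Assembling the feature vector of Eq. (9) is a simple concatenation of the two per-angle measurement sets; the angle samples below are hypothetical placeholders.

```python
import numpy as np

def feature_vector(f_r, dolp):
    """Build X of Eq. (9): f_r and DoLP stacked over N observation angles.

    f_r, dolp : length-N sequences measured at angles theta_1..theta_N
    """
    f_r = np.asarray(f_r, dtype=float)
    dolp = np.asarray(dolp, dtype=float)
    assert f_r.shape == dolp.shape
    return np.concatenate([f_r, dolp])

# Eight observation angles (0 to 70 deg) give a 16-element feature vector,
# matching the 1 x 16 row vectors used later; values here are hypothetical
X = feature_vector(f_r=np.linspace(0.3, 0.1, 8), dolp=np.linspace(0.0, 0.4, 8))
```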

Hybrid Sensor Architecture for Material Classification
In this section, we establish the first implementation of a hybrid passive polarimetric imager and lidar combination for material classification. While we believe the combination of these modalities offers several benefits, this paper is focused specifically on the classification of material type. Material classification could be extremely useful for detecting partially hidden objects or could assist spatial-based object classification. As discussed in the previous section, the hybrid sensing architecture we propose uses f_r and DoLP features measured by the lidar and the passive polarimeter sensors, respectively, which are simultaneously captured at a colocated observation geometry. The proposed hybrid sensing architecture requires a state-of-the-art linear-mode lidar capable of obtaining high-resolution 3-D point-cloud and reflectivity measurements for each pixel. The point-cloud data are used to estimate surface orientation and thus observation angle θ relative to the surface normal. Both lidar and passive polarimetric infrared intensity values are utilized to calculate f_r and DoLP. The required processing steps are shown in Fig. 2. First, lidar and passive polarimetric measurements are captured to form 3-D point-cloud, intensity, and Stokes data. Measurements could be repeated to capture multiple observation angles. The features are combined to form X from Eq. (9), and material classification is implemented. Details of the classification process are presented next, and the training and parameter optimization of the classifier are discussed in Sec. 4.4. If the proposed architecture is utilized in applications where long ranges or adverse weather conditions are present, the measurements must be corrected to compensate for environmental effects. In Sec.
3.2, a notional hybrid sensing system, representing one of several applications that could benefit from this technology, is presented, and solutions to potential obstacles of utilizing the proposed technology in a tactical environment are discussed.

Material Classification
Since both f_r and DoLP are expected to have consistent and repeatable measurements in most situations, a supervised learning algorithm is considered for the hybrid sensor material classification in this paper. In supervised machine learning, labeled sample data are used offline to model the mapping between input examples and the known output classes. 38 We utilize features measured against laboratory data of a diverse material dataset to train the supervised classifier in identifying material type. The key idea of supervised learning is to estimate a decision boundary, which separates each class from one another based on the training data. We propose using a support vector machine (SVM) for classifying material type due to the proven success of this classifier in similar applications such as hyperspectral imaging for land cover classification and target detection. 39,40 However, we believe advanced classifiers could be designed, based on the proposed technique (i.e., hybrid sensing with known viewing orientation), that optimize performance for a specific application. The SVM presented in this paper demonstrates the general application of material classification. The SVM classifier tries to find the optimal separating hyperplane that maximizes the margin between the closest training samples of each class. The hyperplanes are typically formed in high-dimensional space using kernel transformation functions, 41 and boundary pixels (i.e., support vectors) are utilized to create a decision surface. 42 Therefore, SVM classifiers are inherently binary classifiers designed to solve two-class problems. A collection of SVM classifiers must be implemented to separate multiple classes. Multiclass designs include one-versus-all (one SVM classifier for each class) and one-versus-one (one SVM classifier for each pair of classes).
The SVM classifier is a particularly popular solution for machine learning when there are a limited number of training samples available, 43 which is typically the case in nonconventional imaging, such as hyperspectral, polarimetric, and lidar. The implementation, parameter selection, and classification accuracy of material classification for our proposed hybrid sensing system are presented in Sec. 4.
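The one-versus-one construction can be sketched independently of the base learner. In the snippet below, a nearest-centroid rule stands in for each binary SVM (an assumption made for brevity; the experiments in Sec. 4 use actual SVMs), and the pairwise votes are tallied to select a class. For k classes this requires k(k-1)/2 binary classifiers.

```python
import itertools
import numpy as np

def fit_ovo_centroids(X, y):
    """Train one binary 'classifier' per class pair (one-versus-one).

    Each pairwise classifier is a nearest-centroid rule standing in for a
    binary SVM; a k-class problem needs k*(k-1)/2 of them.
    """
    classes = np.unique(y)
    pairs = list(itertools.combinations(classes, 2))
    centroids = {c: X[y == c].mean(axis=0) for c in classes}
    return pairs, centroids

def predict_ovo(x, pairs, centroids):
    """Tally the pairwise votes and return the winning class."""
    votes = {}
    for a, b in pairs:
        winner = a if (np.linalg.norm(x - centroids[a])
                       <= np.linalg.norm(x - centroids[b])) else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

# Three well-separated 2-D classes -> 3 pairwise classifiers
X = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1]], dtype=float)
y = np.array([0, 0, 1, 1, 2, 2])
pairs, centroids = fit_ovo_centroids(X, y)
```

Replacing the centroid rule with a trained binary SVM per pair recovers the standard one-versus-one multiclass SVM.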

Notional System
The proposed hybrid sensing architecture is beneficial to a multitude of machine learning applications, such as automatic target detection, land cover classification, autonomous driving, and machine vision in manufacturing. The actual system parameters of the lidar and passive polarimetric sensors should be carefully selected to optimize the performance for the specific application. For example, commercially available lidar systems designed for autonomous driving currently utilize high scanning rates and a large field-of-view, requiring high repetition rate lasers with moderate power and ∼200-m maximum distance. [44][45][46] In contrast, scanning linear-mode lidar in 3-D mapping remote sensing applications typically requires a higher power laser and operates at an altitude of ∼1000 to 5000 ft, 46 with operating ranges of ∼1 km or greater. In this section, we demonstrate the feasibility of the proposed architecture by presenting a notional implementation for a remote sensing application.
To support our notion of hybrid sensing, a tactical demonstrator is fully assembled using the commercially available passive polarimetric imager manufactured by Polaris Sensor Technologies, Inc. as described in Sec. 2.2, and a custom lidar system owned and operated by the Air Force Research Laboratory (AFRL) at Eglin Air Force Base. Parameters for the demonstration are shown in Table 1. The system is operated to capture imagery at ∼1.5 km from a 25-m tower. A flat white painted aluminum 1.22 m × 1.52 m panel is placed in a predominantly natural scene at a 1.469-km slant range and 40-deg observation angle. Example imagery from the demonstration is shown in Fig. 3. At this range, there are ∼88 and 12 pixels on the panel with the lidar and passive systems, respectively. For this application, the passive system is designed to have a larger field-of-view to locate possible objects-of-interest, and the lidar is cued to image specific areas with high resolution. The presented notional hybrid system demonstrates the feasibility of capturing imagery using a tactical system in a relevant application. If the proposed architecture is utilized in applications where long ranges or adverse weather conditions are present, the lidar measurements must be corrected to compensate for atmospheric attenuation and signal loss using a radiometric model. The first technique to mitigate this issue is choosing an operating wavelength of the laser to be within a high transmission window. In addition, we suggest utilizing a popular radiometric model, such as MODTRAN 47 or LEEDR, 48 as well as current meteorological data, to correct for atmospheric effects. The passive polarimetric signal DoLP is not altered due to signal attenuation, but sources of noise such as diffuse reflected LWIR radiance could affect the polarimetric signal. In this paper, we do not attempt to correct measurements taken in adverse conditions and long ranges.
Instead, we limit our measurements in this paper to close range under ideal conditions and then introduce a generic error source into the test database when evaluating the classification accuracy (discussed in Sec. 4.4). The error term represents effects of long range imaging and atmospheric conditions (or possible errors resulting from the correction of those effects). Adding error to our data alters the signal-to-noise ratio (SNR), which is varied to represent multiple degrees of accuracy that may be expected. In general, longer ranges and more difficult imaging environments are expected to reduce the SNR, and we evaluate performance against varying levels of SNR.
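The generic error source can be sketched as additive white Gaussian noise scaled to hit a target SNR; the constant test signal below is a hypothetical placeholder for a measurement vector, not data from the experiment.

```python
import numpy as np

def add_noise_at_snr(x, snr_db, rng):
    """Add zero-mean Gaussian noise so the result has the requested SNR.

    Signal power is taken as the mean square of x; the noise variance is
    then P_signal / 10^(SNR_dB / 10). This is a generic error model, as in
    the text, not a physical range/atmosphere model.
    """
    x = np.asarray(x, dtype=float)
    p_signal = np.mean(x ** 2)
    sigma = np.sqrt(p_signal / 10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, sigma, size=x.shape)

rng = np.random.default_rng(0)
clean = np.full(100_000, 2.0)          # hypothetical constant signal
noisy = add_noise_at_snr(clean, snr_db=6.0, rng=rng)
```

At 6 dB the noise power is about one quarter of the signal power, which is the worst case evaluated in the abstract.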

Experiment Results
In this section, the proposed architecture is evaluated for material classification. We present a unique common dataset for polarimetric LWIR and lidar measurements against a diverse set of materials. Next, the dataset is analyzed and trends from each class are discussed. Then, the implementation of supervised learning is fully described. Finally, a comprehensive evaluation of material classification performance for the machine learning algorithms is presented.

Dataset
To our knowledge, there are no lidar datasets with LWIR passive polarimetric imagery available to evaluate the performance of material classification algorithms. Therefore, an experiment is conducted to obtain a unique characterization of a diverse set of materials with both active and passive polarimetric infrared imaging systems. The experiment is conducted to collect f_r and DoLP of 34 materials imaged at eight observation angles. The sample materials consist of painted aluminum panels (of various colors and gloss), painted tile thinset (of various colors and textures), naturally occurring objects (e.g., leaves, pine needles, and bark), asphalt, concrete, brick, rubber, metal, roof shingle, plywood, plexiglass, and cardboard, as shown in Fig. 4. Each sample is placed on a rotation stage controlled by an articulating tripod, which has the ability to pan and tilt via computer-controlled instruction. Samples are imaged at angles from 0 deg to 70 deg in 10-deg increments, where 0 deg is normal incidence (as determined by a mirror) and ϕ is held constant at 0 deg. The entire scene remains static for each iteration of imaging. The scanning lidar system captures pulse intensity at each pixel of the image by measuring the peak power of the backscattered pulse. A region of interest (ROI) is manually selected in the lidar imagery to represent approximately the same portion of the material for all angles, as shown in Fig. 5. The ROI is selected to include all of the sample surface except for areas near the edge. The ROI consists of at least 1800 pixels at normal incidence and 250 pixels at 70 deg. The measurements are taken in a controlled laboratory setting at a distance of ∼9 m. Measurements are also taken against calibrated Spectralon panels with ρ_DHR accurately measured at the 1.55-μm wavelength. Using the mean power measurements of the materials and Spectralon panels, f_r is calculated using Eq. (5).
The entire experiment process is repeated using an LWIR polarimeter in place of the lidar system. In order to capture the emissive properties of the material, a heating element is utilized to maintain an ∼100°C surface temperature. The passive polarimeter measures the Stokes column matrix, as described in Eq. (6) (example imagery is shown in Fig. 5). ROIs are manually selected and consist of at least 3000 pixels at normal observation angle and 650 pixels at 70 deg. Finally, DoLP is calculated using Eq. (7). More details of the experiment setup and methodology have been recently published. 24 The sample mean X̄ and standard deviation σ_SV of the pixel values within each ROI are calculated to statistically represent the experiment measurements as random variables. For simplicity, both f_r and DoLP are approximated as Gaussian distributions. The feature set of Eq. (9) is formed using the experiment measurements of each material, described as

X = X̄ + η_SV,  (10)

where η_SV is zero-mean Gaussian noise with standard deviation σ_SV.

Data Analysis
Next, we analyze the dataset obtained with the hybrid sensor experiment. Inspection of f_r in Fig. 6 shows that materials with semigloss or glossy paint have an extremely large sample-mean f_r near normal incidence (due to the specular lobe of the lidar geometry) and a low diffuse f_r at other angles.
The f_r of all other materials tends to vary slowly with observation angle because the backscattered energy is mostly diffuse reflectance. Dark paint colors (i.e., green, black, and camouflage) have much lower f_r than light colors (i.e., tan, white, and gray) because the darker colors absorb some of the laser energy. Additional groups of materials with considerably low reflectance include asphalt, rubber, and rusted steel. Materials painted light colors and brick have the overall highest f_r. The natural materials, roof, concrete, cement block, cardboard, plywood, and plexiglass have similar f_r that is typically more than the dark paints but less than the light paints. According to Fresnel polarization theory, 26 the magnitude of linear polarization is zero at normal observation angle and increases as a function of angle and refractive index of the material. For rough surfaces, the polarization is degraded as the signal from each microfacet is incoherently summed. 34 In our dataset, DoLP is approximately zero near normal observation angle and increases with angle for almost all materials (resulting in a negative s_1 and a positive DoLP). The only exception is plywood, which has a reflected component that is prevalent at observation angles less than 20 deg (positive s_1 and positive DoLP). Aluminum with light or dark paint color has the highest DoLP due to its very smooth surfaces. Natural materials have the lowest DoLP due to rough surfaces. Likewise, the smooth, medium, and rough textured thinset has DoLP inversely proportional to the surface roughness. Many of the measurements within a class maintain very similar signatures. For example, all materials of the semigloss light painted aluminum class (class e) have approximately the same polarimetric signal for all angles [as shown in Fig. 7(b)]. However, in comparison to the f_r measurements, DoLP appears to be less diverse between classes. For example, class e is very similar to classes d and f. Therefore, classification may be more difficult with DoLP.
Overall, the combined dataset is seen to agree with reflectance and polarization theory.
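The Fresnel trend noted above can be reproduced numerically for the reflection case. The sketch below evaluates specular reflection of unpolarized light from a smooth dielectric; the refractive index n = 1.5 is a hypothetical value chosen for illustration. DoLP is zero at normal incidence and rises with angle toward the Brewster angle, while emission polarization follows a complementary angular behavior.

```python
import math

def fresnel_reflection_dolp(theta_deg, n):
    """DoLP of unpolarized light specularly reflected from a smooth
    dielectric (air -> index n): DoLP = (Rs - Rp) / (Rs + Rp)."""
    ti = math.radians(theta_deg)
    tt = math.asin(math.sin(ti) / n)          # Snell's law
    rs = ((math.cos(ti) - n * math.cos(tt)) /
          (math.cos(ti) + n * math.cos(tt))) ** 2   # s-polarized reflectance
    rp = ((n * math.cos(ti) - math.cos(tt)) /
          (n * math.cos(ti) + math.cos(tt))) ** 2   # p-polarized reflectance
    return (rs - rp) / (rs + rp)

# Zero at normal incidence, monotonically increasing toward Brewster's angle
d0, d30, d60 = (fresnel_reflection_dolp(a, n=1.5) for a in (0, 30, 60))
```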
As previously discussed, the standard deviation represents material variation due to surface texture. In lidar imagery, standard deviation is relatively small compared to the mean, with the exception of the glossy and the camouflage painted aluminum panels. The glossy paints have a nonuniform specular spot at the center of the material near normal observation angles. The camouflage sample has three different paint colors within the ROI which causes a high standard deviation. As anticipated, the standard deviation of DoLP is highly correlated with the surface roughness (i.e., rough and smooth surfaces have high and low standard deviation, respectively) 34 and mixed material types. For example, thinset with rough texture has higher standard deviation than the smooth thinset. Similarly, oak leaves and rusted steel have significantly higher variance due to the diverse materials within the ROI (i.e., colors of leaves, rust deposits on steel, etc.).

Implementation of Supervised Learning
The complete dataset, composed of the sample means and standard deviations presented in Figs. 6 and 7, is utilized to generate a database for supervised machine learning and classification performance evaluation. The initial database contains 34 row vectors, where each 1 × 16 row vector contains f_r and DoLP measured at eight observation angles, as described in Eq. (9). For each of the 34 material samples, 100 observation vectors are generated as the sum of the sample mean X and randomly distributed Gaussian noise η_SV characterized by the material's variance σ_SV, as described in Eq. (10). The entire database is organized as a 3400 × 16 matrix to represent an ensemble of measurements of the materials. Class labels are assigned to each observation following the class groupings a through s indicated in Fig. 4. The generated database represents intrinsic variation due to surface texture and inconsistent material properties across the sample surface (e.g., rust, discoloration, grain, nonuniform mixtures, etc.) with no measurement noise added. To address measurement noise, we introduce a separate noise component, which is described in Sec. 4.4.2.
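The database construction of Eq. (10) can be sketched as follows. This is a minimal NumPy version; the measured means and standard deviations are replaced by hypothetical placeholder arrays (the actual values come from Figs. 6 and 7, and the paper's implementation used MATLAB).

```python
import numpy as np

rng = np.random.default_rng(0)

n_materials, n_angles = 34, 8
# Hypothetical placeholders for the measured sample means and standard
# deviations: f_r and DoLP at eight angles -> 16 features per material.
X_bar = rng.uniform(0.0, 1.0, size=(n_materials, 2 * n_angles))
sigma_sv = rng.uniform(0.01, 0.1, size=(n_materials, 2 * n_angles))

n_obs = 100  # observation vectors generated per material sample
rows, labels = [], []
for m in range(n_materials):
    # Eq. (10): observation = sample mean + Gaussian noise with the
    # material's own variance (surface-texture variation, no sensor noise)
    eta_sv = rng.normal(0.0, sigma_sv[m], size=(n_obs, 2 * n_angles))
    rows.append(X_bar[m] + eta_sv)
    labels.extend([m] * n_obs)

database = np.vstack(rows)   # 3400 x 16 matrix
labels = np.array(labels)    # one material index per row
```

Class labels a through s would then be assigned by mapping the 34 material indices to the 19 class groupings of Fig. 4.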
We propose the use of SVM to implement material classification, as discussed in Sec. 3.1; however, we emphasize that the hybrid sensing architecture outperforms single-modality sensing regardless of which supervised machine learning technique is used. Therefore, in addition to SVM we also implement decision tree, 38 discriminant, 49 Naïve Bayes, 50 k-nearest neighbors (kNN), 51 and neural network 52 classifiers to demonstrate the benefit of hybrid sensing. All classifiers are implemented using either the Statistics and Machine Learning or Deep Learning toolboxes from MATLAB. 53 First, the database is loaded into the Classification Learner tool in MATLAB and the option to partition into five disjoint folds is selected; four folds are used for training and one fold for testing. To reduce classification variability, five rounds of cross-validation are performed using different partitions, and the validation results are averaged to obtain the final classification accuracy. Next, each of the six classifier techniques is individually selected within the tool. Parameters of each classification method are iteratively adjusted as shown in Table 2. All combinations of the parameters are exhaustively exercised, and the optimal result is utilized in the final accuracy metric for each implementation. Note that the best-performing parameters within the listed parameter space change depending on the number of viewing angles (i.e., features), SNR, and classes of the dataset. Furthermore, future implementations could utilize automatic selection of the parameters via optimization tools provided by MATLAB to optimize the classifier for specific applications. Finally, the Classification Learner tool allows the user to select a subset of the features in the database to utilize in training and testing. In the following section, we present results from the experiment using various combinations of viewing angle measurements.
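The five-fold cross-validation procedure can be sketched as follows. This is a minimal NumPy stand-in for the MATLAB Classification Learner workflow, illustrated with a simple kNN classifier (one of the six methods evaluated); the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=5):
    """Minimal k-nearest-neighbors classifier (majority vote of k neighbors)."""
    # Pairwise Euclidean distances: (n_test, n_train)
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

def cross_validate(X, y, k_folds=5, seed=0):
    """Partition into five disjoint folds: four train, one test; average."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k_folds)
    accs = []
    for f in range(k_folds):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(k_folds) if g != f])
        pred = knn_predict(X[train], y[train], X[test])
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))
```

Each of the six classifier types would slot into the same loop in place of `knn_predict`, with its parameter grid (Table 2) swept exhaustively.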

Performance Evaluation
To fully demonstrate the added benefit of multisensor material classification, supervised techniques are utilized with features of individual sensors as well as the proposed hybrid system. We also experiment with multiple combinations of observation angles. First, results of a single observation angle using f r only, DoLP only, and hybrid features are evaluated without measurement noise added. Then, performance using a single observation angle is evaluated with varying levels of noise added. Finally, results using multiple observation angles with noise are presented.

Single observation angle without measurement noise
Measurements at a single observation angle, f_r(θ_1) and DoLP(θ_1), are utilized, and θ_1 is varied from 0 deg to 70 deg. Total classification accuracy, calculated as the number of observations correctly classified out of the total number of observations, is determined for each angle. As shown in Fig. 8(a), classification with f_r has consistent performance at all angles, while DoLP improves as θ_1 increases. This result matches expected performance based on Fresnel reflectance, where DoLP increases with angle and material classes become more distinct as the observation angle increases. The highest classification accuracy obtained in this experiment is 83.6%, which occurs at θ_1 = 70 deg. By utilizing multiple features, classification accuracy is increased by 44.5% compared to lidar only and 32.3% compared to passive polarimetric only; however, since a standalone passive polarimeter cannot determine observation angle without lidar point-cloud information, the DoLP-only classifier is still dependent on the lidar ranging information in a dual-sensor architecture. For evaluation purposes, we assume perfect knowledge of θ in this paper.

Single observation angle with measurement noise
Next, to comprehensively demonstrate the effectiveness of the hybrid architecture, classification performance is evaluated with measurement noise added to the generated database, where η_MN represents a vector containing Gaussian random numbers with zero mean and standard deviation σ_MN. We analyze classification accuracy versus SNR (dB), which we define as

SNR = 10 log[(X + η_SV)∕σ_MN], (12)

where σ_MN is the standard deviation of the generated noise. Therefore, Eq. (12) is solved for σ_MN and then evaluated with SNR varied from 3 to 10 dB for each observation X. The X′(θ_1 = 70 deg) database, which includes sample variance, measurement noise, and measurement mean at a single observation angle of 70 deg, is utilized with the SVM classifier for the f_r only, DoLP only, and hybrid (i.e., f_r and DoLP) architectures. As shown in Fig. 8(b), classification accuracy of all three architectures improves as SNR increases. The highest classification accuracy is 73.3% at SNR = 10 dB. At SNR = 6 dB, where the signal is only about four times greater than the standard deviation of the noise, classification accuracy is 56.6% using hybrid sensing.
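Solving Eq. (12) for σ_MN can be sketched as follows (a minimal Python version with illustrative names). Note that 10^(6/10) ≈ 3.98, which is why at SNR = 6 dB the signal is roughly four times the noise standard deviation.

```python
import numpy as np

def sigma_from_snr(signal, snr_db):
    """sigma_MN implied by solving Eq. (12): sigma_MN = signal / 10^(SNR/10)."""
    return signal / (10.0 ** (snr_db / 10.0))

def add_measurement_noise(obs, snr_db, rng):
    """Add zero-mean Gaussian measurement noise eta_MN to an observation
    vector obs = X + eta_SV, at the requested SNR in dB."""
    sigma_mn = sigma_from_snr(np.abs(obs), snr_db)  # per-element noise std
    return obs + rng.normal(0.0, sigma_mn)
```

Applying `add_measurement_noise` to every row of the generated database, with SNR swept from 3 to 10 dB, produces the noisy database X′ evaluated in this section.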

Multiple observation angles with measurement noise
Finally, the classification accuracy for combinations of observation angles is examined, representing a scenario of a passively augmented lidar architecture imaging an object from multiple viewpoints (e.g., θ_1, θ_2, …, θ_N). In this experiment, the database X′ from Eq. (11) is utilized with SNRs of 6 and 9 dB. In Table 3, the classification accuracy using imagery captured at all eight viewing angles (0 deg, 10 deg, 20 deg, 30 deg, 40 deg, 50 deg, 60 deg, and 70 deg) is presented. Results of the SVM, decision tree, discriminant, Naïve Bayes, kNN, and neural network classifiers using the parameters listed in Table 2 are shown. Parameters of the individual classifiers are optimized for each scenario. Results show that all classifiers follow the same trend versus SNR (higher SNR increases accuracy).
Classification accuracy when using all eight viewing angles from 0 deg to 70 deg is very impressive. However, in many scenarios obtaining such a diverse set of angles is impractical. Therefore, we present additional experimentation utilizing combinations of only two to seven viewing angles. Obtaining multiple viewpoints is most likely to occur as consecutive angles (e.g., a moving platform may have a clear view of an object's surface over 30 deg to 50 deg observation angles before losing sight of it due to obscuration). We therefore examine combinations of consecutive observation angles. As shown in Table 4, utilizing additional observation angles generally improves performance. For example, the accuracy of X(50 deg, 60 deg, 70 deg) is 70.8%, a 5.4% increase over X(60 deg, 70 deg).
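The consecutive-angle combinations of Table 4 can be enumerated as follows; a minimal sketch with illustrative names, simply sliding a window of two to seven angles across the eight measured viewpoints.

```python
# The eight measured viewing angles (degrees from the surface normal)
angles = [0, 10, 20, 30, 40, 50, 60, 70]

def consecutive_windows(angles, width):
    """All runs of `width` consecutive viewing angles."""
    return [tuple(angles[i:i + width]) for i in range(len(angles) - width + 1)]

# Window widths of 2 through 7, as examined in Table 4
windows = {w: consecutive_windows(angles, w) for w in range(2, 8)}
```

For instance, `windows[3]` contains the (50 deg, 60 deg, 70 deg) combination quoted above, and `windows[2]` contains the (60 deg, 70 deg) pair it is compared against.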

Discussion of Results
The classification accuracy of all scenarios evaluated on our dataset is greater than 20%. With the 19 classes considered, a completely random guess would result in only an ∼5.3% (1 in 19) chance of correct classification. The performance is enabled by having a known observation angle. When considering a single known observation angle, the results of f_r and DoLP are very similar [Fig. 8(b)]. However, when combinations of angles are considered (Tables 3 and 4), f_r consistently outperforms DoLP. In fact, the more angles utilized, the better the performance. This is because the actual measurements (shown in Figs. 6 and 7) of most materials have signatures that vary with observation angle. In all scenarios, combining the features in a hybrid architecture significantly improves performance. As previously mentioned, a standalone passive polarimeter is not capable of obtaining the observation angle without lidar point-cloud information; therefore, utilizing only the DoLP feature would still require a lidar system. We believe 6 dB is a reasonable evaluation point for SNR, based on our experience with lidar and infrared imaging systems. At 6 dB, the proposed technique achieves 91.1% material classification accuracy using SVM. When comparing classifier techniques (Table 3), SVM obtains the best results. This could be due to the limited parameter space we explored with each classifier (shown in Table 2); optimizing these parameters for the specific dataset could improve the classification accuracy of each method. We also note that some SVM classifiers require on the order of 10 times longer to train than other classifier types (performance metrics on training time are not presented because they are highly dependent on computational hardware).
We recommend that the type of classifier utilized in future work should be carefully selected for each individual application (by considering the amount of training data, dimensionality of the data, training time, number of the features, number of classes, and class separation).

Conclusion
The combination of lidar and passive polarimetric sensors in a hybrid imaging architecture is demonstrated to obtain 91.1% material classification accuracy. A unique dataset consisting of f_r and DoLP measurements versus θ is presented for a diverse set of 34 material types, each imaged at eight observation angles. Material classification is implemented using six machine learning classifiers with multiple feature sets to clearly show the benefit of the hybrid infrared imaging technique. Imaging an object from multiple viewpoints is shown to increase classification accuracy by ∼31.5% compared to classification at 70 deg alone when SNR = 6 dB is considered. The presented technique relies on lidar 3-D point-cloud imagery to estimate surface orientation and is designed to classify materials based on surface properties: f_r measured with lidar and DoLP measured with a passive polarimetric infrared sensor. This work lays the foundation for follow-on efforts to design advanced classifiers optimized for specific applications. Future work can also combine this technology with object classification based on spatial features. For example, spatial features such as shape, height, length, and intensity contrast are typically obtained from the imagery of the sensors in the proposed hybrid sensing architecture. By combining the material classification of this work with spatial features captured by the same sensors, we expect the classification accuracy to further improve.