Multi-sensor anomalous change detection in remote sensing imagery

Abstract. Combining multiple satellite remote sensing sources provides a far richer, more frequent view of the earth than that of any single source; the challenge is in distilling these petabytes of heterogeneous sensor imagery into meaningful characterizations of the imaged areas. Meeting this challenge requires effective algorithms for combining multi-modal imagery over time to identify subtle but real changes among the intrinsic data variation. Here, we implement a joint-distribution framework for multi-sensor anomalous change detection (MSACD) that can effectively account for these differences in modality, and does not require any signal resampling of the pixel measurements. This flexibility enables the use of satellite imagery from different sensor platforms and modalities. We use multi-year construction of the SoFi Stadium in California as our testbed, and exploit synthetic aperture radar imagery from Sentinel-1 and multispectral imagery from both Sentinel-2 and Landsat 8. We show results for MSACD using real imagery with implanted, measurable changes, as well as real imagery with real, observable changes, including scaling our analysis over multiple years.


Introduction
The material discrimination enabled by remote sensing imagery naturally lends itself to the following questions: if I have two or more images of a given scene, what has changed? Furthermore, where are the changes that are interesting? 1 One analyst might, for example, be interested in seasonal variations from summer to autumn that result in drier vegetation. Another analyst might not care about broader seasonal changes, but might be interested in a building that is constructed during that time. [2][3][4][5][6][7] Both of these circumstances will result in true signal changes, and so the challenge then becomes this: how do we translate the arguably subjective and certainly application-dependent concept of an "interesting change" to an objective mathematical framework that can be used to exploit remote sensing images? An initial step is to make the distinction between image-wide pervasive differences and rare anomalous changes. The paradigm of anomalous change detection (ACD), [8][9][10][11][12][13][14][15] which is grounded in concepts from anomaly detection, [16][17][18][19][20] seeks to identify changes that are different from how everything else might have changed. This borrows from the classic anomaly detection framework, which attempts to characterize that which is "typical" and then uses that to identify deviations from what is expected or common.
ACD approaches have historically (and sensibly) assumed that the images come from the same sensor, and as a result have the same channels. This is particularly important for frameworks that use a difference image, as that requires the two images to be in the same domain. With airborne and spaceborne remote sensing becoming increasingly accessible, and with the variety of sensor designs and modalities growing dramatically, 21 the ability to identify changes across different sensor images has been recognized as a growing area of need. 22 The few approaches to date have either focused on a two-step process to (1) use deep learning or classification to transform disparate images to a common feature space and then (2) look for changes within that feature space; [23][24][25][26] or else have imposed signal interpolation to align channels (which is only possible between optical channels, and not extendable to cross-modality data). 27 These approaches do begin to address this challenge, but they require significant training imagery and/or a priori knowledge, which makes them very challenging to scale.
This research presents a flexible and sensor-agnostic change detection approach: multi-sensor anomalous change detection (MSACD). We have extended our prior work in same-sensor change detection, 28 and have developed a new processing chain to enable application to multi-sensor imagery. Given the lack of ground-truthed community datasets for multi-sensor change detection over time, this also included designing new experiments to evaluate the extension. MSACD is a notable advancement over other techniques as it does not require resampling in the signal domain, nor does it require training data. 29 The development of such an approach has the potential to be significant, as it enables higher-cadence surveillance by allowing for continuous exploitation of any imagery over a particular area, without having to wait for the same sensor to revisit that area. The flexibility afforded by MSACD is especially valuable when, for example, an unexpected event occurs; because we cannot go back in time and re-task the past, we have to make use of the imagery that is available, and the imagery closest in time to the event will almost certainly come from different sensors.
Following the additional background information in Secs. 1.1 and 1.2, the remaining sections of the paper are structured as follows: Sec. 2 provides an overview of the methodology; Sec. 3 details the multi-sensor change detection experiments; Sec. 4 presents an exploration of the experimental results; and Sec. 5 discusses conclusions and future work.

Remote Sensing Imaging Modalities
In spectral remote sensing, a scene is imaged at multiple wavelengths that can span ultraviolet through infrared. Multispectral imagery (MSI) typically contains tens of discrete wavelength bands, while hyperspectral imagery (HSI) may have hundreds of narrow, contiguous bands, depending on the designs of the sensors. By imaging a scene at non-visible wavelengths, materials that appear visually similar (e.g., a green car in a forest) may in fact be spectrally distinct. This allows for remote material discrimination for a variety of analyses. 30,31 Single-band panchromatic images can exhibit higher signal-to-noise ratio and/or higher spatial resolution. Synthetic aperture radar (SAR) provides fundamentally different information about a scene: specifically, detailed structural information about the surface. The sensors send pulses of radio waves to "illuminate" a scene, are not impeded by clouds, and do not require ambient light. Two SAR images enable interferometry, where phase changes are used to identify surface deformation that has occurred between the sensor passes. This is valuable in, for example, subsidence detection.

Anomalous Change Detection
The obvious way to find differences in a pair of images is to literally take the difference: subtract the images, and look for pixels where that difference is large or in some other way interesting or unusual. Indeed, traditional change detection algorithms (such as Change Vector Analysis [32][33][34][35]) are based on analyzing the vector-valued differences at each pixel. One of the difficulties with subtraction-based approaches is that true differences may be dominated by environmental factors (atmospheric transmission, aerosol scattering, solar illumination, view angle, sensor characteristics, etc.) that vary from acquisition to acquisition. The difficulties become truly problematic when the images are acquired by different sensors, possibly over a different range of wavelengths, or even different phenomenologies. When the number of bands is different, then it's not just a bad idea to subtract them, it's a mathematical impossibility.
The idea behind transformation-based ACD is to in some way equalize the images so that pervasive differences are suppressed and, at the same time, salient changes are maintained or enhanced; this can be done in a physics-driven way 36,37 or a data-driven way. [8][9][10][11] The chronochrome algorithm of Schaum and Stocker 8 is a simple and effective anomalous change detector, providing a good illustration of data-driven equalization. To that end, consider a pair of spatially corresponding pixels from two images of a scene taken at different times, and let x and y represent their spectra. A matrix L is derived so that ŷ = Lx is a good approximation to y for most of the pixels in the scene (L is sometimes called a "predictor"). In particular, L is chosen to minimize the average of ||y − Lx||^2 over the whole image. Now, the vector quantity y − Lx will be relatively small over most of the image (after all, L was chosen to minimize its average magnitude), but for anomalously-changed pixels, that vector difference will be of larger magnitude. In the algorithm, the scalar anomalousness score is given by the Mahalanobis magnitude of the vector-valued difference. Note that the effect of L is to provide a data-driven equalization in making Lx as much like y as possible. There is no need for L to be square, so this works even when the number of bands in the two images is different. Various chronochrome extensions have been made, including adapting it to a target detection framework where changes are used to characterize the target-free background at any given pixel. [38][39][40]

Joint-Distribution ACD Methodology

Theory
In the distribution-based approach, introduced in Ref. 28, there is no subtraction at all. In this formulation, we begin with a joint distribution P(x, y) that describes a typical pair of coregistered pixels x and y taken from the two images of interest. Following the usual prescription from anomaly detection, we could take our anomalies to be points for which P(x, y) is small, but this would go after anomalous pairs of pixels and in doing so would find straight anomalies in addition to anomalous changes. To focus in on just the unusual changes, we define a nonuniform "background distribution" Q(x, y) = P(x)P(y), where P(x) and P(y) are the marginal distributions of P(x, y). Here, Q(x, y) describes "normal" points exhibiting anomalous changes. By contrast, P(x, y) describes normal points exhibiting normal changes. That is because P(x, y) describes the data, and the statistics of our data are what we use to define normal. The key notion behind this distribution-based approach is to consider the ratio of these two likelihoods:

P(normal points exhibiting anomalous changes) / P(normal points exhibiting normal changes) = P(x)P(y) / P(x, y).

As a practical matter it is useful to take the logarithm (which is merely a monotonic rescaling of the values), so we can define the anomalousness

A(x, y) = log [ P(x)P(y) / P(x, y) ].

The functional form of this expression allows us to interpret the anomalousness as a measure of the mutual information between the pixels x and y. We remark that when the distribution is Gaussian, then the contours of constant anomalousness (that is, the boundaries between what is normal and what is anomalous) are hyperbolic. This motivates the name hyperbolic anomalous change detection (HACD).
For the stack of images z = [x^T, y^T]^T (corresponding to the joint distribution P(x, y)), we can simplify the HACD expression to represent this anomalousness measure as

A(x, y) = ξ_z − ξ_x − ξ_y.

We define ξ_z using the squared Mahalanobis distance

ξ_z = (z − μ_z)^T R_z^{-1} (z − μ_z),

where μ_z is the mean of z and R_z is the corresponding covariance matrix; the distances ξ_x and ξ_y are defined similarly.
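For the Gaussian case, the whole computation reduces to estimating means and covariances of x, y, and the stacked z. The following is a minimal numpy sketch of this idea, not the released code; `hacd_scores` and its conventions are our own illustration:

```python
import numpy as np

def hacd_scores(img_x, img_y):
    """HACD anomalousness A(x, y) = xi_z - xi_x - xi_y under a Gaussian model.

    img_x: (H, W, Bx) "before" image; img_y: (H, W, By) "after" image.
    The band counts Bx and By need not match -- only the spatial grid does.
    """
    H, W = img_x.shape[:2]
    X = img_x.reshape(H * W, -1)
    Y = img_y.reshape(H * W, -1)
    Z = np.hstack([X, Y])  # stacked pixel z = [x^T, y^T]^T

    def xi(D):
        """Squared Mahalanobis distance of each row from the sample mean."""
        C = D - D.mean(axis=0)
        R = np.cov(D, rowvar=False)
        Rinv = np.linalg.inv(R + 1e-9 * np.eye(R.shape[0]))  # ridge for stability
        return np.einsum('ij,jk,ik->i', C, Rinv, C)

    return (xi(Z) - xi(X) - xi(Y)).reshape(H, W)
```

Note that the two images enter only through their pixel distributions, so the band counts (here Bx = 3 and By = 2, say) are free to differ.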
We provide more details on the implementation in Sec. 2.2. We focus here on HACD, but we note that there are other variants such as EC-HACD, where the distributions are elliptically contoured. 41 There has also been research into using parametric distributions, 42 kernelized distributions, 43 spatio-spectral distributions, 44 and sequences of images. 45,46

Implementation
The implementation of HACD used here employs local co-registration adjustment (LCRA) for increased robustness to potential misregistration issues, introduced in Ref. 47. We provide detailed pseudocode of these steps in Algorithm 1.
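Algorithm 1 is not reproduced here, but the core LCRA idea can be sketched as follows, assuming a generic per-pixel scoring function (a loose illustration; the actual implementation in Ref. 47 differs in its details):

```python
import numpy as np

def lcra_min_scores(score_fn, img_x, img_y, radius=1):
    """Local co-registration adjustment, sketched loosely after Ref. 47.

    Each pixel is re-scored against small spatial shifts of the "before"
    image, and the minimum anomalousness is kept, so that residual
    misregistration is not mistaken for anomalous change.
    score_fn(img_x, img_y) must return an (H, W) anomalousness map.
    """
    H, W = img_y.shape[:2]
    best = np.full((H, W), np.inf)
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            shifted = np.roll(img_x, shift=(dr, dc), axis=(0, 1))
            best = np.minimum(best, score_fn(shifted, img_y))
    return best
```

The design choice here is conservative: a pixel is only flagged if it looks anomalous against every plausible small shift, which trades a little sensitivity for robustness to registration error.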
Because HACD does not use a channel-by-channel comparison between pixels, but rather compares pixel distributions, the approach is sensor-agnostic; it only requires that the two images be sampled to the same spatial domain, leading to a natural extension to multi-sensor imagery. We refer to our multi-sensor extension of joint-distribution-based ACD as Multi-Sensor Anomalous Change Detection, or MSACD. A conceptual overview of the MSACD methodology is shown in Fig. 1.
The code upon which these experiments are based has been made open source and is available on GitHub at https://github.com/jt-lanl/acd. This provides the base algorithms for HACD and LCRA, enabling implementation of MSACD for multi-sensor imagery.

MSACD Experiments
To investigate the feasibility of anomalous change detection for multi-sensor scenarios, we applied MSACD to remote sensing images taken from a variety of satellite-based sensors: Sentinel-1 (SAR), Sentinel-2 (MSI), and Landsat 8 (MSI). Because ground-truthed data unfortunately do not yet exist for multi-temporal, multi-sensor change detection experiments, we performed our analysis on real data with real, observable changes as well as real data with synthetic, measurable changes, including scaling our analysis over multiple years. And because no other methods exist against which we can compare this approach, we performed baseline experiments to benchmark the multi-sensor results.

Testbed and Satellite Sensors
In 2016, construction began on a 298-acre open-air stadium and entertainment complex, located at 33.95345°N, 118.3392°W, in Inglewood, California. Now known as SoFi Stadium, it is the home arena for both the Los Angeles (formerly San Diego) Chargers and the Los Angeles (formerly St. Louis, formerly Los Angeles) Rams of the National Football League (see Fig. 2). In mid-2020, construction was completed, and on September 8th of that year, at the first NFL game played there, the Rams defeated the Dallas Cowboys 20-17. The timeframe and spatial scale of this construction site make it a useful testbed for developing, implementing, and exploring the results of MSACD across multiple sensors and temporal scales. There are no explicit ground truth masks for the actual changes during the construction of the stadium, so we used visual inspection (and occasionally, as noted below, archived news stories and historical Google Earth imagery) to qualitatively assess the identified changes. Additionally, we performed experiments with implanted, controlled changes in the imagery, enabling quantitative assessment of performance. We used public domain satellite imagery for this study, and all sensors were global-coverage systems (as opposed to tasking-based); this access to reliable and repeatable historical imagery collections is crucial for looking at changes over time. In particular, we used imagery from the European Space Agency (ESA) Copernicus program, 48 namely SAR imagery collected by Sentinel-1 (S1) and MSI collected by Sentinel-2 (S2). We also used MSI from Landsat 8 (L8), the joint United States Geological Survey (USGS) and NASA satellite. 49 Specific details about the satellites and their sensors are provided in Fig. 3. Of note is that Sentinel-1 is a first-of-its-kind system as a global-coverage public domain SAR satellite, providing a number of new research opportunities, e.g., SAR-to-SAR change detection.

MSI to MSI: Sentinel-2 and Landsat 8
For our first qualitative experiment, we looked at changes in images from different sensors within the same modality (multispectral). Using Sentinel-2 and Landsat 8 images, we analyzed a 1500 m × 1500 m area including the stadium and surrounding region, and broke it into nine tiles (see Fig. 4). We pulled all Sentinel-2 and Landsat 8 images over a 1.5-year period, kept the relatively cloud-free images, and derived ACD maps for every consecutive pair of images (effectively using a sliding temporal window); we then computed summary statistics (maximum ACD score) on the results within each tile. In total, 45 images were analyzed in this experiment. Nearest neighbor resampling was used to spatially sample the images so they were all aligned at 10 m resolution. As was noted in Sec. 2, no spectral resampling was needed even though the number of bands and bandwidths varied between the two sensors.
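The per-tile summary statistic can be sketched as follows (an illustrative helper, not the code used for the experiments):

```python
import numpy as np

def tile_max_scores(acd_map, n_tiles=3):
    """Max ACD score within each tile of an n_tiles x n_tiles grid.

    For the 1500 m x 1500 m study area at 10 m resolution, n_tiles=3
    gives the nine tiles tracked over time in the experiment.
    """
    H, W = acd_map.shape
    th, tw = H // n_tiles, W // n_tiles
    out = np.zeros((n_tiles, n_tiles))
    for i in range(n_tiles):
        for j in range(n_tiles):
            out[i, j] = acd_map[i * th:(i + 1) * th, j * tw:(j + 1) * tw].max()
    return out
```

Applying this to the ACD map of each consecutive image pair yields one nine-element summary per pair, which is what the time series in the results tracks.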

SAR to MSI: Sentinel-1 and Sentinel-2
For our second qualitative experiment, we analyzed an image pair from different sensors across different modalities. We looked at a larger 2500 m × 2500 m area including and surrounding the stadium site, and purposely picked images that were far apart in time (with the "before" image occurring prior to construction) so that there would be a large-magnitude change between them. As a baseline, we looked at changes between a Sentinel-2 image from December 2015 and a Sentinel-2 image from March 2019. Then, we looked at changes between a Sentinel-1 SAR image from December 2015 and the same Sentinel-2 "after" image from March 2019. These images are shown in Figs. 5(a) and 5(b), and the workflow for this analysis is provided in Fig. 5(c).

Quantitative Analysis of Implanted Changes
To enable quantitative evaluation of detection performance through implanted changes, we first identified three images, one from each of the three satellites, that were captured very closely in time, minimizing the chance of naturally-occurring (or anthropogenic) changes in the scene. The Sentinel-1 image was captured on February 22, 2019, the Landsat 8 image was captured on February 23, 2019, and the Sentinel-2 image was captured on February 23, 2019. We treated the Sentinel-1 and Landsat 8 images as the "before" images and the Sentinel-2 image as the "after" image, giving us two primary image pairs for these experiments: L8 → implanted-S2 (MSI to MSI) and S1 → implanted-S2 (SAR to MSI), as shown in Fig. 6(a). We used the same "after" image for both pairs so that we could apply various treatments to the Sentinel-2 image for implanting changes and then compare the effect on detection performance for the two multi-sensor pairs. For completeness, we also performed ACD for S2 → implanted-S2, where high performance is expected. And before implanting any changes, we performed MSACD for the original L8 → S2 and S1 → S2 image pairs to obtain baseline detection results for any "real" changes in the scene.

Fig. 5 (a) Color composites of the Sentinel-2 images used for baseline ACD results. The Sentinel-2 MSI has 12 bands and the color images shown here are constructed from the red, green, and blue bands. This same-sensor pair is used to create baseline results against which to compare the SAR to MSI results. (b) Color composites of the Sentinel-1 and Sentinel-2 images used for cross-modality ACD analysis. Note that each channel of the Sentinel-1 image is grayscale, though it is displayed here with a color scale that spans blue to green to yellow, with lighter color corresponding to higher SAR amplitude. (c) Workflow for this analysis, including the normalized difference of the two output detection maps.

Implanted Changes in S2: Randomly
For our first quantitative experiment, we randomly implanted changes into the Sentinel-2 image. To do this, we randomly selected and shuffled 500 of the pixels to induce controlled, likely anomalous changes. They are only likely to be changes because we cannot guarantee that the shuffled pixels are going to new locations where they are in fact anomalous (e.g., a rooftop pixel may get shuffled to a different location that is also a rooftop). The workflow for this is shown in Fig. 6(b), where we use Ŝ2 to indicate the implanted-S2 image.
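A sketch of this randomized implantation, with an assumed helper name (the actual experiment code may differ):

```python
import numpy as np

def implant_random_changes(img, n=500, seed=0):
    """Shuffle n randomly chosen pixels among themselves, returning the
    implanted image and the corresponding truth mask."""
    rng = np.random.default_rng(seed)
    H, W, B = img.shape
    flat = img.reshape(H * W, B).copy()
    idx = rng.choice(H * W, size=n, replace=False)  # pixels to move
    flat[idx] = flat[rng.permutation(idx)]          # shuffle them in place
    mask = np.zeros(H * W, dtype=bool)
    mask[idx] = True  # "likely" changes: a pixel may land on similar material
    return flat.reshape(H, W, B), mask.reshape(H, W)
```

Because the pixels are only permuted, the scene's overall pixel statistics are preserved; only the spatial arrangement (and hence the joint before/after relationship) is disturbed.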

Implanted Changes in S2: Across Distinct Material Classes
For our second quantitative experiment, we did more targeted implantation of changes across distinct material classes. To enforce more likely material changes, we first k-means clustered the S2 image into four classes: bare_earth, urban_bright, vegetation, and urban_dark [see Fig. 6(a)]. Then for all combinations of to/from directional class pairs (16 in total, including self-pairs), we generated a corresponding implanted Sentinel-2 image by randomly taking 500 pixels from the first class and implanting them into 500 pixel locations randomly selected from the second class. For example, for the bare_earth to vegetation pair, we randomly selected 500 pixels from the bare_earth class and replaced 500 randomly selected pixels in the vegetation class. The workflow for this is shown in Fig. 6(c), where we use Ŝ2_{i,j} to indicate the implanted-S2 image created by implanting pixels from class i into pixel locations from class j. For each of these 16 implanted Sentinel-2 "after" images, we performed MSACD between L8 → implanted-S2 and S1 → implanted-S2 and then evaluated the associated detection results and ROC curves. For completeness in evaluating the ROC curves, we also performed ACD between S2 → implanted-S2.

Fig. 6 Image data used in the quantitative analysis and the associated workflows. (a) The S1 and L8 "before" images and the S2 "after" image, captured closely in time to minimize the potential for real changes in the scene. We also show the k-means cluster map (k = 4) for the after image, used to guide the experiments for class-based implantation of changes. (b) Workflow for randomly-generated changes, where 500 pixels are shuffled in the after image. Note: this does not guarantee 500 changes, as a pixel may move to the same class. (c) Workflow for across-class changes, where 500 pixels from one class are swapped into 500 pixels from another. The four classes resulted in the generation of 16 different class-to-class implanted S2 "after" images.
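The class-based implantation step can be sketched as follows, taking the k-means cluster map as input (an illustrative helper; names and conventions are ours):

```python
import numpy as np

def implant_across_classes(img, labels, src, dst, n=500, seed=0):
    """Implant n pixels drawn from class `src` into n locations in class `dst`.

    labels: (H, W) integer cluster map (e.g., from k-means on the "after"
    image). Returns the implanted image and the truth mask of changed pixels.
    """
    rng = np.random.default_rng(seed)
    out = img.copy()
    src_rc = np.argwhere(labels == src)  # (row, col) coords of source class
    dst_rc = np.argwhere(labels == dst)  # (row, col) coords of destination class
    take = src_rc[rng.choice(len(src_rc), size=n, replace=False)]
    put = dst_rc[rng.choice(len(dst_rc), size=n, replace=False)]
    out[put[:, 0], put[:, 1]] = img[take[:, 0], take[:, 1]]
    mask = np.zeros(labels.shape, dtype=bool)
    mask[put[:, 0], put[:, 1]] = True
    return out, mask
```

Looping `src` and `dst` over the four class labels yields the 16 directional class-pair images described above, including the self-pairs.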

Qualitative Exploration of Real Changes
We review the results of our two qualitative experiments: the first looks at changes between Sentinel-2 and Landsat 8 MSI throughout a 1.5-year period, and the second looks at changes between Sentinel-1 SAR imagery and Sentinel-2 MSI.

MSI to MSI: Sentinel-2 and Landsat 8
The results of our first qualitative experiment, where we continuously performed MSACD across Sentinel-2 and Landsat 8 images collected throughout a 1.5-year period, are shown in Fig. 7. The plot at the top shows the maximum ACD value within each tile over time, and for exploration we focus in on three of the higher ACD scores (labeled A, B, and C). Every point in the plot is derived from some consecutive pair of Sentinel-2 and Landsat 8 images (not necessarily in that order). The detection maps are scaled such that darker pixels correspond to higher ACD scores.
The images associated with anomalous change A are a Sentinel-2 image from August 17, 2017 and a Landsat 8 image from August 28, 2017. The highlighted region (red box) in Tile 1 is where the high-valued anomalous change occurred. The white circular region directly to the left of the anomalous change is The Forum, an indoor arena that often holds large events. The anomalous change appears to be mobile, so we looked into events at The Forum around these dates. It turned out that on August 27, 2017, The Forum hosted the MTV Video Music Awards; the anomalous change likely corresponds to trucks or caravans associated with the awards show. 50

The images associated with anomalous change B are a Landsat 8 image from June 28, 2018 and a Sentinel-2 image from July 13, 2018. The highlighted region in Tile 1 is where the high-valued anomalous change occurred. This region is over the main part of the construction site (surrounding the actual stadium), and corresponds to ongoing construction changes.
The images associated with anomalous change C are a Landsat 8 image from September 16, 2018 and a Sentinel-2 image also from September 16, 2018. This change appears in the RGB composite image as an elongated red and green and blue streak; we suspect that it is an airplane that flew through the scene as the satellite was taking the image. Because of the pushbroom nature of the Sentinel-2 sensor, the red and green and blue components for a given position on the ground are imaged at slightly different times; combined with the movement of an airplane, this results in the streaky red-green-blue pattern in the composite image.

SAR to MSI: Sentinel-1 and Sentinel-2
The results of our second qualitative experiment, where we compared changes from December 2015 to March 2019 for the baseline MSI to MSI image pair and SAR to MSI image pair in Fig. 5, are shown on the left side of Fig. 8. The top map corresponds to the same-sensor MSI to MSI anomalous changes, and the second map corresponds to the multi-sensor SAR to MSI anomalous changes; for each, black indicates an anomalous change while gray indicates no change and white corresponds to pixels that are persistent anomalies. The black box outlines the area surrounding the construction site.
On the right side of Fig. 8, we show the normalized difference of the two maps to quantitatively compare their results. This difference map is scaled such that white indicates agreement between the maps on the left, blue indicates that the MSI → MSI change was greater than the SAR → MSI change, and red indicates that the SAR → MSI change was greater than the MSI → MSI change. We generally see agreement over the construction site, and also see a number of areas whose change is more anomalous in one image pair over the other. This is not surprising, as the construction site likely has surface changes that are not material changes (e.g., growing piles of dirt), or material changes that are not significant surface changes (e.g., laying out tarps). However, one area that stood out more prominently is the L-shaped region in blue at the bottom of the black annotated box. This is a sharp feature that is a much larger anomalous change in the MSI → MSI change map.

Fig. 7 Tracked statistics for the nine tiles over time between Landsat 8 and Sentinel-2. Three high-value anomalous changes are highlighted (anomalous changes A, B, and C). The corresponding image pairs and ACD maps are provided, and the detection maps are scaled such that darker pixels correspond to higher ACD scores.
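One way to sketch the normalized difference of two detection maps (an assumed min-max normalization; the paper's exact normalization may differ):

```python
import numpy as np

def normalized_difference_map(map_a, map_b, eps=1e-12):
    """Normalize each detection map to [0, 1], then difference them.

    Positive values mean map_a flagged the pixel more strongly than map_b;
    negative values mean the reverse; values near zero mean agreement.
    """
    def norm01(m):
        return (m - m.min()) / (m.max() - m.min() + eps)
    return norm01(map_a) - norm01(map_b)
```

In the Fig. 8 convention, map_a would be the MSI → MSI map and map_b the SAR → MSI map, with the result displayed on a blue-white-red color scale.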
In Fig. 9, we show further exploration of the L-shaped anomalous change that was more evident in the MSI → MSI detection map. To gain a better (and higher resolution) understanding of the nature of this change, we used Google Earth to look at historical imagery around the construction site before and during the image time frame. Because the analyzed image dates are from December 2015 to March 2019, we looked to prior imagery to see what was already present in that area. We can see that in April 2014, there was a building already present and that it had a brown roof. Further exploration of additional historical layers shows us that the roof was unchanged as of February 2016, was undergoing replacement to a white roof in October 2016 and April 2017, and was a fully white roof by October 2017. This roof change occurs squarely within the time frame of the analyzed Sentinel-1 and Sentinel-2 imagery. What we can see from this higher-resolution historical imagery is that the building undergoes a material change but not a dramatic surface change. In other words, this is a change that would indeed present itself in the MSI → MSI analysis, but not as strongly in the SAR → MSI data.

Quantitative Analysis of Implanted Changes
The baseline MSACD results for the S1 → S2 and L8 → S2 image pairs from Fig. 6(a) are shown in Fig. 10. These establish the typical change maps for the scene prior to implantation of changes into the S2 "after" image.

Fig. 8 For the change maps on the left, black indicates anomalous change, while gray to white indicates no change (white corresponds to pixels which are anomalous but not anomalously changed). The normalized difference map on the right is scaled such that blue indicates that the MSI → MSI change was greater than SAR → MSI, red indicates that the SAR → MSI change was more prominent, and white indicates that the same change (or lack of change) was observed in both scenarios.

Implanted Changes in S2: Randomly
For our first quantitative experiment, we randomly shuffled 500 of the pixels in the S2 image, inducing likely changes (although not guaranteed). This produced an associated truth mask that we could use to generate receiver operating characteristic (ROC) curves on the detection results, as shown in Fig. 11. These compare the true positive rate (TPR) to the false positive rate (FPR). For all of the ROC curves herein, we also compute the area under the curve (AUC) for a maximum FPR = 0.01; this more restrictive FPR threshold more accurately represents the lower tolerance for false alarms that would be encountered in an operational scenario. The ROC curves are also all plotted on a log-linear scale to emphasize the low FPR regime. As expected, the detection map generated by comparing the original S2 image to the implanted-S2 image had a thresholded-AUC of 1.0, while it was 0.71 for L8 → implanted-S2 and barely above chance for S1 → implanted-S2, indicating that the randomly-shuffled pixel changes were difficult to detect with the "before" SAR imagery.

Fig. 9 Exploration of the anomalous change corresponding to the L-shaped building. This change presents more strongly in the MSI → MSI data because it is a significant material change, but not a strong surface change. Google Earth historical imagery shows that the roof was replaced within the time frame of the "before" and "after" images used in this analysis. (The Google Earth imagery was collected by Maxar Technologies and ESA.)
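The thresholded AUC can be sketched as follows (the renormalization by the maximum FPR is our assumption; the paper's exact convention may differ):

```python
import numpy as np

def roc_and_thresholded_auc(scores, truth, max_fpr=0.01):
    """ROC curve from an anomalousness map and its truth mask, with the AUC
    restricted to FPR <= max_fpr and renormalized by max_fpr so that a
    perfect detector scores 1.0."""
    s = np.asarray(scores).ravel()
    t = np.asarray(truth).ravel().astype(bool)
    order = np.argsort(-s)  # rank pixels by decreasing anomalousness
    tpr = np.concatenate([[0.0], np.cumsum(t[order]) / max(t.sum(), 1)])
    fpr = np.concatenate([[0.0], np.cumsum(~t[order]) / max((~t).sum(), 1)])
    keep = fpr <= max_fpr  # restrict to the low-FPR operating regime
    f, tp = fpr[keep], tpr[keep]
    auc = np.sum(np.diff(f) * (tp[1:] + tp[:-1]) / 2.0) / max_fpr  # trapezoid rule
    return fpr, tpr, auc
```

Restricting the integral to FPR ≤ 0.01 rewards detectors that concentrate true detections at the very top of the score ranking, which matches the operational low-false-alarm emphasis described above.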

Implanted Changes in S2: Across Distinct Material Classes
Our second quantitative experiment used the four S2 k-means classes shown in Fig. 6 to generate 16 different implanted-S2 images. The resulting ROC curves are shown in Fig. 12, and the corresponding matrices of thresholded-AUC values are shown in Fig. 13. As expected, the S2 → implanted-S2 results are the strongest, followed by L8 → implanted-S2, and generally trailed by S1 → implanted-S2. These ROC curves also provide insight into which material classes of changes might be easiest to detect in the multi-sensor scenario. For example, the bare_earth pixels implanted into the vegetation pixel locations had strong L8 results, but the urban_dark pixels implanted into the same class did not result in as strong of a detection performance. Overall, the L8 → implanted-S2 pairs had higher thresholded-AUC detection scores than the S1 → implanted-S2 counterparts. The S1 pairs had the strongest performance from implanting urban_bright pixels into vegetation pixels, likely due to the sharp building structures having strong backscatter responses in the SAR imagery. In general, these results demonstrate the potential value for MSACD, especially within the same modality (i.e., MSI to MSI).

Fig. 11 ROC curves for the changes generated by randomly swapping 500 pixels in the S2 "after" image.

Fig. 12 The 16 sets of ROC curves resulting from the 16 implanted-S2 images generated from class-based pixel implantation. The rows correspond to the classes from which the implanted pixels were drawn, and the columns correspond to the classes in which the pixels were implanted.

Conclusions and Future Work
Because distribution-based anomalous change detectors do not involve image subtraction, and therefore do not require that the images be aligned in the signal domain, we can use this class of change detector on image pairs from disparate MSI sensors as well as sensors that are based on different phenomenologies, such as MSI and SAR. We extended HACD for implementation in multi-sensor anomalous change detection, and applied it to satellite images taken from Landsat 8, Sentinel-2, and Sentinel-1. Landsat 8 and Sentinel-2 are both electro-optical multispectral imagers, though they have different numbers of bands and different bandwidths. Sentinel-1 is a SAR imager, and those SAR images are phenomenologically quite different from the multispectral imagery in the optical domain.
We performed both qualitative and quantitative experiments to assess the efficacy of ACD in this multi-sensor domain. We did find that multi-sensor (and multi-modal) ACD was able to identify interesting changes, particularly in the qualitative results. The multi-sensor same-modality experiments between Landsat 8 and Sentinel-2 (both qualitative and quantitative) were even able to find changes on short timescales, while the multi-sensor multi-modality experiments between Sentinel-1 and Sentinel-2 generally required more significant changes (e.g., new buildings) in order for those changes to be detected. For the experimental data in this paper, such changes typically corresponded to longer timescales, although similar changes (e.g., temporary encampments) may appear on shorter timescales in other scenarios. MSACD has the potential to be most valuable in identifying rapid or unexpected rare changes, because it enables higher-cadence surveillance by allowing for continuous exploitation of any imagery over a particular area. This affords a kind of opportunistic change detection where foresight is not required, and the best image pairs for capturing the change can be readily analyzed even if those images come from different sensors.

Fig. 13 Matrices of the thresholded-AUC values (at max FPR = 0.01) from the ROC curves in Fig. 12. (a) The results corresponding to the Sentinel-1 "before" image. (b) The results corresponding to the Landsat 8 "before" image.
Although the results are promising, there are limitations of the study that should be noted. One limitation is the lack of ground-truthed community datasets for multi-sensor change detection over time; the scarcity of ground truth is a common challenge in change detection studies, and even more so in the multi-sensor scenario. To address this, we identified a testbed with changes that were large-scale enough to see from spaceborne sensors, and that also spanned several years. We focused on global-coverage satellites to ensure regular imagery coverage over time. Still, in the absence of ground truth (and truth masks), our assessment of real changes could at best be qualitative, requiring anecdotal evidence (e.g., from news stories). We complemented this with more rigorous quantitative analysis, where we synthetically injected changes (from real measurements) into known locations under a variety of scenarios. A related limitation is the scope of the study: although we used a relatively large number of images by remote sensing standards (over 50 in total), all were of the same location. Focusing on the same location was useful in that, in the absence of a ground-truthed dataset, we generally understood the types of changes occurring and the content of the scene, making the results more interpretable. There is no mathematical limitation on the extension to other land covers and sensor types; while preliminary experiments have shown similar performance for other land covers (e.g., farmland, forested regions) and for other sensors with different spatial and spectral resolutions (e.g., WorldView-3, Hyperion), more dedicated experiments are needed to quantitatively evaluate those applications and extensions.
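The synthetic-change injection can be illustrated as follows: copying a patch of real measured pixel values from one location of the "after" image into another yields a change whose location, and hence truth mask, is known exactly. This is a simplified sketch with our own names and parameters; the exact injection protocol used in our experiments may differ:

```python
import numpy as np

def implant_change(after_img, src_rc, dst_rc, size=3):
    """Implant a synthetic-but-real change into a known location.

    A size-by-size patch of measured pixel values is copied from src_rc
    to dst_rc in the 'after' image, so the truth mask is known exactly.

    after_img: (rows, cols, bands) image array.
    src_rc, dst_rc: (row, col) upper-left corners of source and destination.
    Returns the modified image and the boolean truth mask.
    """
    img = after_img.copy()
    mask = np.zeros(img.shape[:2], bool)
    r0, c0 = src_rc
    r1, c1 = dst_rc
    img[r1:r1 + size, c1:c1 + size] = after_img[r0:r0 + size, c0:c0 + size]
    mask[r1:r1 + size, c1:c1 + size] = True
    return img, mask
```

Because the implanted values are real measurements from the same image, they preserve sensor noise and radiometry while still producing a change at a known location for ROC-style scoring.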
Further research is also needed to evaluate where the "break point" might be for image pairs with very different spatial resolutions, i.e., where the spatial resampling scheme can no longer overcome the resolution differences well enough to effectively find changes; this will almost certainly also depend on the type and scale of the change. Additionally, the design of the methodology is both an advantage and, depending on the application, potentially a limitation. Because MSACD is agnostic not only to sensor type but also to change type, no knowledge of the type of change is needed a priori. As a result, however, it may not be as sensitive as one would like to specific types of changes. For example, if one is looking for new roads, MSACD might not do as well as a large neural network that is specifically trained to find new roads. Lastly, a comparatively minor limitation of the methodology concerns atmospheric conditions: our use of a simple parametric model distribution (Gaussian or elliptically contoured) makes the approach quite robust to atmospheric variability. Still, bi-modal effects (such as shadows) may be somewhat harder to capture; additional studies are needed to explore this.
Future research will address these limitations, as well as pursue new research thrusts. The changes investigated here were primarily taken from temporally adjacent pairs of images; since we in fact have a long time series of images, we will adapt our future analysis to take advantage of that opportunity. In particular, we will seek to classify changes by their temporal as well as their spectral signatures using multi-dimensional joint distributions, distinguishing, for instance, among fleeting, recurring, and persistent changes. Additionally, we will explore augmenting the SAR image data with feature layers (e.g., local texture measures used as additional channels), as we suspect the limited number of channels (i.e., two) may have contributed to the performance results for pairs with Sentinel-1 imagery.
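As a sketch of the texture-augmentation idea (our own illustration, not a settled design), a local-variance layer could be appended to each SAR channel before forming the joint distribution:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def add_texture_channels(sar, win=5):
    """Append a local-variance texture channel for each SAR channel.

    sar: (rows, cols, channels) array (e.g., Sentinel-1 VV/VH).
    Returns an array of shape (rows-win+1, cols-win+1, 2*channels):
    the original channels cropped to the valid window region, plus
    the local variance of each channel over a win-by-win window.
    """
    pad = win // 2
    # Windowed views: shape (rows', cols', channels, win, win)
    views = sliding_window_view(sar, (win, win), axis=(0, 1))
    var = views.var(axis=(-1, -2))
    core = sar[pad:sar.shape[0] - pad, pad:sar.shape[1] - pad, :]
    return np.concatenate([core, var], axis=-1)
```

The augmented stack simply feeds into the same joint-distribution machinery as before; whether local variance (versus, say, other texture statistics) is the most informative layer is exactly the kind of question the future work would address.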
Amanda Ziemann is a remote sensing scientist at Los Alamos National Laboratory. She holds a BS in applied mathematics (2010), an MS in applied and computational mathematics (2011), and a PhD in imaging science (2015), all from Rochester Institute of Technology (RIT). She is an author of one book chapter and more than 40 articles and proceedings, serves on the program committee for two SPIE conferences, and is an associate editor for IEEE Geoscience and Remote Sensing Letters. Her research interests include remote sensing, signal detection, image analysis, and machine learning. She is a member of SPIE.
Christopher X. Ren is a scientist at Los Alamos National Laboratory. He holds an MPhys in physics (2011) from the University of Manchester, and both an MRes in photonic systems development (2012) and PhD in materials science (2016) from the University of Cambridge. He is an author of more than 30 articles and proceedings. His current research interests include remote sensing, change detection, multi-sensor data fusion, and machine learning.
James Theiler is a physicist and laboratory fellow at Los Alamos National Laboratory. He holds SB degrees in physics and mathematics (1981) from the Massachusetts Institute of Technology (MIT), and a PhD in physics from Caltech (1987). He is an author of two book chapters and more than 250 articles and proceedings, serves on the program committee for an SPIE conference, and is a senior area editor for IEEE Transactions on Computational Imaging. His research interests include remote sensing, statistical modeling, machine learning, and image processing.