This PDF file contains the front matter associated with SPIE Proceedings Volume 11728, including the Title Page, Copyright information, and Table of Contents.
Deep learning algorithms for object detection in radiant energy imagery require large training sets of site-specific images, often from locations that are difficult to access, while also remaining diverse enough to encourage a robust model. Of particular interest is the detection of buried and partially buried objects, whose behavior profiles vary widely with factors such as depth, soil composition, time of day, moisture level, and target composition. The variability of these factors increases the difficulty of acquiring an adequately diverse data set. Synthetic imagery offers a potential solution to limited data accessibility, as images can be created on demand with diversity limited only by the parameters of the simulation. The goal of this study is to create custom models using SSD (Single Shot MultiBox Detector), YOLOv3 (You Only Look Once), and Faster R-CNN (Region-based Convolutional Neural Networks) to detect buried objects in real images by leveraging synthetic radiant energy imagery. Custom training is done on a synthetic data set (made in-house) using pre-trained models from TensorFlow's model zoo and ImageAI's YOLOv3 pre-trained model. Model training leverages high performance computing (HPC) resources and utilizes GPUs to optimize training speed. Proof-of-concept models for SSD, YOLOv3, and Faster R-CNN have been trained on preliminary synthetic imagery and analyzed. Preliminary results for these models will be discussed.
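For context, the sketch below shows what custom YOLOv3 training with ImageAI's DetectionModelTrainer typically looks like; the data directory layout, class name, and hyperparameters are illustrative assumptions, not the study's actual configuration.

```python
# A minimal sketch of custom YOLOv3 training with ImageAI, assuming the
# synthetic images and Pascal-VOC annotations live under data/train and
# data/validation (hypothetical paths and class label).
from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory="data")
trainer.setTrainConfig(
    object_names_array=["buried_object"],   # hypothetical class label
    batch_size=4,
    num_experiments=100,
    train_from_pretrained_model="pretrained-yolov3.h5",
)
trainer.trainModel()
```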
Machine learning systems are known to require large amounts of data to generalize effectively. When this data is not available, synthetically generated data is often used in its place. With synthetic aperture radar (SAR) imagery, the domain shift required to transfer knowledge effectively from simulated to measured imagery is non-trivial. We propose pairing convolutional neural networks (CNNs) with generative adversarial networks (GANs) to learn an effective mapping between the two domains. Classification networks are trained individually on measured and synthetic data, then a mapping between layers of the two CNNs is learned using a GAN.
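One plausible realization of that layer-to-layer mapping is sketched below in PyTorch: a generator maps synthetic-domain activations into the measured-domain feature space, while a discriminator scores whether a feature vector came from the measured network. The feature dimension, architectures, and training loop are assumptions for illustration, not the authors' design.

```python
# A minimal adversarial feature-mapping sketch, assuming FEAT_DIM-sized
# activations extracted from matching layers of the two CNNs.
import torch
import torch.nn as nn

FEAT_DIM = 256  # assumed size of the chosen CNN layer

generator = nn.Sequential(
    nn.Linear(FEAT_DIM, 512), nn.ReLU(),
    nn.Linear(512, FEAT_DIM),
)
discriminator = nn.Sequential(
    nn.Linear(FEAT_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(synth_feats, measured_feats):
    """One adversarial update on batches of layer activations."""
    # Discriminator: real = measured features, fake = mapped synthetic ones.
    d_opt.zero_grad()
    fake = generator(synth_feats).detach()
    d_loss = bce(discriminator(measured_feats),
                 torch.ones(len(measured_feats), 1)) + \
             bce(discriminator(fake), torch.zeros(len(fake), 1))
    d_loss.backward()
    d_opt.step()
    # Generator: fool the discriminator into scoring mapped features as real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(generator(synth_feats)),
                 torch.ones(len(synth_feats), 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```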
Global climate warming is rapidly reducing Arctic sea ice volume and extent. The associated loss of perennial sea ice has economic and global security implications for Arctic Ocean navigability, since sea ice cover dictates whether an Arctic route is open to shipping. Understanding changes in sea ice thickness, concentration, and drift is therefore essential for operational planning and routing. However, changes in sea ice cover on scales of a few days and kilometers are challenging to detect and forecast; current sea ice models may not capture the quickly changing conditions on the short timescales needed for navigation. These predictive models require the assimilation of frequent, high-resolution morphological information about the ice pack, which is operationally difficult to obtain. We suggest an approach to mitigate this challenge by using machine learning (ML) to interpret satellite-based synthetic aperture radar (SAR) imagery. In this study, we derive ML models for the analysis of SAR data to improve short-term local sea ice monitoring at high spatial resolutions, enabling more accurate analysis of Arctic navigability. We develop a classifier that can analyze Sentinel-1 SAR imagery, with the potential to inform operational sea ice forecasting models. We focus on detecting two sea ice features of interest to Arctic navigability: ridges and leads (fractures in the ice cover). These can be considered local extremes of ice thickness, a crucial parameter for navigation. We build models to detect these ice features using machine learning techniques. Both our ridge and lead detection models perform as well as, if not better than, state-of-the-art methods. These models demonstrate Sentinel-1's ability to capture sea ice conditions, suggesting the potential for Sentinel-1 global coverage imagery to inform sea ice forecasting models.
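As a sketch of the detection setup, the model below is a minimal patch-level classifier of the kind that could label Sentinel-1 backscatter patches as background, lead, or ridge; the 64x64 patch size, channel counts, and three-class scheme are illustrative assumptions, not the study's architecture.

```python
# A minimal patch classifier sketch for single-channel SAR backscatter,
# assuming pre-extracted 64x64 patches with labels
# {0: background, 1: lead, 2: ridge}.
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 3),  # background / lead / ridge logits
)
```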
Phenomenology-Informed Machine Learning (PI-ML) is introduced to address the unique challenges of applying modern machine-learning object recognition techniques to the SAR domain. PI-ML comprises a collection of data normalization and augmentation techniques, inspired by successful SAR ATR algorithms, designed to bridge the gap between simulated and real-world SAR data when training Convolutional Neural Networks (CNNs), which were developed for the low-noise, feature-dense space of camera-based imagery. The efficacy of PI-ML will be evaluated on ResNet, EfficientNet, and other networks, using both traditional training techniques and all-SAR transfer learning.
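As one concrete example of the kind of phenomenology-informed normalization involved, the sketch below converts SAR magnitude data to decibels and clips it to a fixed dynamic range so simulated and measured chips share a comparable amplitude distribution; the 50 dB window is an assumption, not the paper's setting.

```python
# A minimal SAR amplitude-normalization sketch: log-scale, clip, rescale.
import numpy as np

def normalize_sar(magnitude, dyn_range_db=50.0):
    """Map SAR magnitude to [0, 1] via dB scaling and dynamic-range clipping."""
    db = 20.0 * np.log10(np.abs(magnitude) + 1e-12)
    db = np.clip(db, db.max() - dyn_range_db, db.max())
    return (db - db.min()) / (db.max() - db.min() + 1e-12)
```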
Within the realm of automatic target recognition (ATR) using synthetic aperture radar (SAR), significant research has been performed on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset. Classification accuracies reported on the uncorrupted MSTAR images are typically well above 90%, often approaching 99%. However, in support of operational missions, there is a need to assess the various approaches against a baseline that includes less ideal operating conditions, such as foliage penetration (FOPEN). This paper therefore uses a specialized algorithm, proven effective in other settings, to assess the effect of a range of increasingly densely spaced pixel amplitude distortions. The results show that once approximately 50% or more of the pixels within the target and shadow region are degraded, the ability to classify the correct target and pose is greatly reduced. Also, as speculated by others, leaving a border of the original clutter appears to yield artificially good classification results in the 50% to 90% degraded range before performance also rolls off. Finally, when there is no masking, the results are rather sensitive to the chosen confidence level, which reinforces the supposition that matches are occurring due to clutter and not just the target.
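The degradation experiment can be pictured with the sketch below, which corrupts a chosen fraction of pixels inside a target/shadow mask with random amplitudes while leaving the surrounding clutter untouched; the uniform amplitude draw is an illustrative assumption, not the paper's exact distortion model.

```python
# A minimal sketch of masked pixel-amplitude degradation for a SAR chip.
import numpy as np

def degrade(chip, mask, fraction, rng=np.random.default_rng()):
    """Randomly distort `fraction` of the masked pixels in a SAR chip."""
    out = chip.copy()
    idx = np.flatnonzero(mask)                      # target/shadow pixels
    hit = rng.choice(idx, size=int(fraction * idx.size), replace=False)
    out.flat[hit] = rng.uniform(chip.min(), chip.max(), size=hit.size)
    return out
```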
Synthetic Aperture Radar (SAR) point scatterers are exploited in SAR imaging to detect known locations and objects, and to estimate and identify their specific parameters. The absolute, relative, and differential location accuracy provides the basis for the performance of these functions. Absolute localization can fix the location of an image. Relative localization can estimate object dimensions for identification. And the (time) differential can identify subsidence (sinking) or vibration. Major variations in point scatterers stem from differences in imaging techniques and the geometric (flight) imaging scenario. This paper looks at the gamut of imaging scenarios and the accuracy of parameters estimated from them. The basic location of a point scatterer in an image depends both on the image resolution fixed by the imaging scenario parameters and on the sharpness of the point scatterer (impulse response, or point spread function). In some cases sub-pixel accuracy is achievable via low-rank image scatterer localization algorithms. The achievable accuracy of both is bounded by the imaging scenario, and the uniformity of this accuracy across an image is governed by the imaging technique. Performance varies with waveform parameters, chirp rate, image size, and imaging range. Additionally, performance varies with the different sampling rates that can be used to attain the same image resolution. The performance of scatterer localization techniques across imaging scenarios and example uses of point scatterer localization are presented.
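For intuition, the sketch below shows one standard way to refine a scatterer location to sub-pixel accuracy: fit a parabola through the peak sample and its two neighbors in each dimension. This is a generic refinement, not the specific low-rank method referenced above.

```python
# A minimal sub-pixel peak localization sketch via quadratic interpolation.
import numpy as np

def subpixel_peak(img):
    """Return (row, col) of the brightest scatterer to sub-pixel accuracy."""
    a = np.abs(img)
    r, c = np.unravel_index(np.argmax(a), a.shape)

    def refine(m1, m0, p1):
        # Vertex offset of the parabola through three samples.
        denom = m1 - 2.0 * m0 + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

    dr = refine(a[r - 1, c], a[r, c], a[r + 1, c]) if 0 < r < a.shape[0] - 1 else 0.0
    dc = refine(a[r, c - 1], a[r, c], a[r, c + 1]) if 0 < c < a.shape[1] - 1 else 0.0
    return r + dr, c + dc
```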
This work builds on another effort described in "Application of Jupyter Notebook interfaces and iLauncher to deep learning workflows on HPC systems" [22]. We describe a complex workflow application that generates millions of images in parallel on an HPC system via web interfaces built with ipywidgets in Jupyter Notebooks and the Interface Launcher (iLauncher). Some computations are so complicated, consuming many millions of HPC hours, that only a few subject matter experts are able to generate information efficiently. We present our custom application, which walks the user through a workflow that includes target selection, target configuration, radar phase history simulation, and finally SAR image generation. The interface asks the user to enter a minimal set of parameters, generates the other variables essential to the computations on the fly, and provides status updates on workflow computations. Additionally, the ability to download any data component or view images interactively is provided. The application can be disconnected from the HPC system and reconnected at any time without slowing down the computations of the submitted workflow. Although a maximum run time must typically be specified when submitting a job to the queuing interface on an HPC system, this application uses the HPC-GPS tool to allow users to extend run times even after the initial request is submitted. Our new application helps reduce the barrier to entry both for complicated physics-based simulations and for using HPC systems.
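The flavor of the parameter-entry interface can be sketched with ipywidgets as below; the parameter names and options are hypothetical, and the real application would submit the simulation job rather than echo the inputs.

```python
# A minimal ipywidgets parameter-form sketch for a simulation workflow.
import ipywidgets as widgets
from IPython.display import display

target = widgets.Dropdown(options=["sphere", "plate", "vehicle"],
                          description="Target:")       # hypothetical options
freq_ghz = widgets.FloatSlider(value=10.0, min=1.0, max=18.0,
                               description="Freq (GHz):")
run = widgets.Button(description="Submit to HPC")
status = widgets.Output()

def on_run(_):
    with status:
        # The real workflow would submit the phase-history simulation here;
        # this sketch only echoes the chosen parameters.
        print(f"Submitting {target.value} at {freq_ghz.value} GHz ...")

run.on_click(on_run)
display(target, freq_ghz, run, status)
```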
Radar scattering behavior within the angular aperture of a typical SAR image has been well studied through a variety of computational and analytic methods. This has allowed parsimonious scattering center models to be posed that effectively represent a large percentage of the scattered energy from common targets of interest. Wide-angle synthetic aperture radar seeks to form and exploit images using an angular aperture that supports cross-range resolution that is many times finer than that supported by the range resolution bandwidth. Over the course of wide synthetic apertures, a much more complex scattering behavior is observed on targets of interest, which thus far escapes concise characterization. In this paper, we study wide-angle SAR scattering behavior through episodic processing of pixel values across many standard-resolution SAR apertures. We characterize scattering behavior based on existing understanding and past observations of wide angle SAR imagery. The proposed categories of scattering analysis provide insightful decomposition of target responses that are demonstrated on simulated and measured imagery.
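The episodic analysis can be pictured with the sketch below: given a stack of standard-resolution subaperture images spanning a wide angular aperture, extract a single pixel's amplitude as a function of aspect to study its scattering persistence. The array shape is an assumption for illustration.

```python
# A minimal sketch of per-pixel amplitude-vs-aspect extraction from a
# stack of subaperture images.
import numpy as np

def aspect_profile(subap_stack, row, col):
    """Amplitude vs. subaperture index for one pixel.

    subap_stack: complex array of shape (n_subapertures, rows, cols).
    """
    return np.abs(subap_stack[:, row, col])

# A persistent scatterer shows a broad profile; a specular flash from a
# flat plate shows a narrow spike over only a few subapertures.
```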
A container packages up an application's code and dependencies so it runs quickly and reliably across computing environments. With Singularity, a text-based recipe file is written to construct a container. However, all shell commands needed to install the desired applications and dependencies must be known, a daunting task for a novice builder. We solve this with a Python script backed by a database of installation commands for a wide range of applications, which are written automatically into a Singularity recipe file. Future work includes expanding the library of available apps, automating the database upkeep, and integrating the script with HPC systems.
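The recipe-generation idea can be sketched as below: look up install commands for the requested applications in a small database and write them into a Singularity definition file. The database entries and base image shown are illustrative assumptions, not the script's actual catalog.

```python
# A minimal sketch of database-driven Singularity recipe generation.
INSTALL_DB = {
    "python3": "apt-get install -y python3 python3-pip",
    "numpy": "pip3 install numpy",
}

def write_recipe(apps, path="container.def"):
    """Emit a Singularity recipe that installs the requested apps."""
    lines = ["Bootstrap: docker", "From: ubuntu:20.04", "", "%post",
             "    apt-get update"]
    for app in apps:
        lines.append("    " + INSTALL_DB[app])
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_recipe(["python3", "numpy"])
```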
We present information on what users need to support complex deep learning workflows on DoD HPC systems with web interfaces built from ipywidgets in Jupyter Notebooks and the Interface Launcher (iLauncher), a tool that automates the submission of HPC jobs and provides a mechanism for rapidly prototyping web interfaces from the user's desktop to powerful capabilities running on the HPC nodes. We detail a representative use case for a PyTorch deep learning workflow and show how to include the underlying software, along with all dependencies, in an all-inclusive software packaging technology called a Singularity container. We then show how to use ipywidgets in a Jupyter Notebook and convert the notebook to a full-fledged web interface using the Voila server. Finally, we outline how to create the iLauncher plugin that runs the web interface on DoD HPC system nodes, providing a complete user interface workflow solution that does not require special privileges to create, deploy, or use. Together, Jupyter Notebooks, ipywidgets, iLauncher, and Singularity containers open up a wealth of accessible capabilities that were previously impractical in the restrictive DoD environment.
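The notebook-to-web-app step can be sketched as below, launching the Voila server on a widget notebook from Python; the notebook name and port are hypothetical.

```python
# A minimal sketch of serving a widget notebook as a standalone web app
# with Voila (hypothetical notebook name and port).
import subprocess

subprocess.run(["voila", "workflow_interface.ipynb",
                "--port=8866", "--no-browser"])
```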
We present an analysis of image reconstruction quality that includes both traditional and deep-learning quality metrics for sparse reconstructions of three-dimensionally (3D) focused synthetic aperture radar (SAR) data. A major goal of our analysis is to assess the utility of various metrics in 3D-focused scenarios. We make use of synthetic prediction to help fully span the large parameter space of a two-dimensional cross-range aperture. The analysis, including the synthetic prediction, will help guide future measurements of scale models in our compact radar range [1].
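As a sketch of what scoring a reconstruction against a reference can look like, the snippet below computes one traditional metric (normalized RMSE) alongside SSIM; these stand in for the broader metric suite discussed above, and the function names are illustrative.

```python
# A minimal reconstruction-quality scoring sketch for a 2D slice.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def score(reference, reconstruction):
    """Return (normalized RMSE, SSIM) for a reconstruction vs. a reference."""
    err = np.linalg.norm(reconstruction - reference) / np.linalg.norm(reference)
    s = ssim(reference, reconstruction,
             data_range=reference.max() - reference.min())
    return err, s
```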
We present a simple, practical Fast Backprojection algorithm for Video-SAR. We employ the overlap-and-save algorithm to create an efficient filter bank for downsampling a continuous stream of pulses in the azimuth dimension. We discuss GPU and subaperture processing details along with some important design considerations and tradeoffs.
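For reference, a minimal NumPy sketch of overlap-and-save filtering of a continuous sample stream is given below; the block and filter lengths are illustrative, and the full Video-SAR filter bank would apply this per channel with decimation in azimuth.

```python
# A minimal overlap-and-save FFT filtering sketch for a long stream.
import numpy as np

def overlap_save(x, h, nfft=1024):
    """Linear convolution of a long stream x with filter h via overlap-save."""
    m = len(h)
    step = nfft - m + 1                      # valid samples per block
    H = np.fft.fft(h, nfft)
    x = np.concatenate([np.zeros(m - 1, dtype=complex), x])
    out = []
    for start in range(0, len(x) - m + 1, step):
        block = x[start:start + nfft]
        if len(block) < nfft:
            block = np.pad(block, (0, nfft - len(block)))
        y = np.fft.ifft(np.fft.fft(block) * H)
        out.append(y[m - 1:])                # discard circularly wrapped part
    return np.concatenate(out)[:len(x) - (m - 1)]
```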
As humans, we perceive the world in three dimensions. However, many militarily relevant sensing capabilities display only two-dimensional information to users in the form of imagery. In this work we develop and analyze a technique for reconstructing objects in three dimensions from sparse synthetic aperture radar (SAR) data. We analyze the required sampling rates of the proposed techniques and conduct a thorough analysis of the accuracy of our methods.