<p>Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers. However, machine learning classification algorithms do not require the same data representation used by humans. We investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum from an optimized compressive sensing matrix, and an architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and trade-offs of these systems built for compressed classification of the Modified National Institute of Standards and Technology dataset. Both architectures achieve classification accuracies within 3% of the optimized sensing matrix for compression ranging from 98.85% to 99.87%. The performance of the systems with 98.85% compression was between that of an <italic>F</italic>/2 and an <italic>F</italic>/4 imaging system in the presence of noise.</p>
Recent advances in deep learning have shown promising results for anomaly detection that can be applied to the problem of defect detection in electronic parts. In this work, we train a deep learning model with Generative Adversarial Networks (GANs) to detect anomalies in images of X-ray CT scans. The GAN's detections can then be reviewed by an analyst to confirm the presence or absence of a defect in a scan, significantly reducing the amount of time required to analyze X-ray CT scans. We employ a trained GAN via a system referred to in the literature as an AnoGAN. We train the AnoGAN on images of X-ray CT scans from normal, non-defective components until it is capable of generating images that are indistinguishable from genuine part scans. Once trained, we query the AnoGAN with an image of an X-ray CT scan that is known to contain a defect, such as a crack or a void. By sampling the GAN's latent space, we generate an image that is as visually close to the query image as possible. Because the AnoGAN has learned a distribution over non-defective parts, it can only produce images without defects. By taking the difference between the query image and the generated image, we are able to highlight anomalous areas in the defective part. We hypothesize that this work can be used to improve speed and accuracy for quality assurance of manufactured parts by applying machine learning to non-destructive imaging.
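A minimal numpy sketch of the latent-space search and residual map described above. A fixed linear decoder stands in for the trained GAN generator, and the image size, latent dimension, and defect location are all illustrative assumptions, not details from the work itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": a fixed linear decoder from a 4-D latent code to a
# 16-pixel image. A real AnoGAN uses a trained GAN generator; this toy only
# illustrates the latent-space search and the residual (difference) map.
W = rng.normal(size=(16, 4))

def generate(z):
    return W @ z

# Query image: something the generator can explain, plus a localized defect
# (e.g. a crack or void) that it cannot reproduce.
z_true = rng.normal(size=4)
query = generate(z_true)
query[5] += 8.0                       # injected defect at pixel 5

# Latent-space search: gradient descent on the squared reconstruction error,
# analogous to AnoGAN's mapping of a query image into the latent space.
z = np.zeros(4)
for _ in range(500):
    z -= 0.01 * (2 * W.T @ (generate(z) - query))

# Residual map: large values mark regions the generator could not explain,
# i.e. the candidate defect locations an analyst would review.
anomaly_map = np.abs(query - generate(z))
print(int(anomaly_map.argmax()))
```

Because the generator can only produce defect-free images, the reconstruction absorbs the normal content and the residual concentrates at the injected defect.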
We investigate the feasibility of additively manufacturing optical components to accomplish task-specific classification in a computational imaging device. We report on the design, fabrication, and characterization of a non-traditional optical element that physically realizes an extremely compressed, optimized sensing matrix. The compression is achieved by designing an optical element that only samples the regions of object space most relevant to the classification algorithms, as determined by machine learning algorithms. The design process for the proposed optical element converts the optimal sensing matrix to a refractive surface composed of a minimized set of non-repeating, unique prisms. The optical elements are 3D printed using a Nanoscribe, which uses two-photon polymerization for high-precision printing. We describe the design of several computational imaging prototype elements. We characterize these components, including surface topography, surface roughness, and angle of prism facets of the as-fabricated elements.
Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers; however, machine learning classification algorithms do not require the same data representation used by humans. In this work we investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum from an optimized compressive sensing matrix, and an architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and trade-offs of these compressive imaging systems built for compressed classification of the MNIST dataset. To evaluate the trade-offs of the two architectures, we present radiometric and raytrace models for each system. Additionally, we investigate the impact of system aberrations on classification accuracy. We compare the performance of these systems over a range of compression. Classification performance, radiometric throughput, and optical design manufacturability are discussed.
Advancements in machine learning (ML) and deep learning (DL) have enabled imaging systems to perform complex classification tasks, opening numerous problem domains to solutions driven by high-quality imagers coupled with algorithmic elements. However, current ML and DL methods for target classification typically rely upon algorithms applied to data measured by traditional imagers. This design paradigm fails to enable the ML and DL algorithms to influence the sensing device itself, and treats the optimization of the sensor and algorithm as separate sequential elements. Additionally, this current paradigm narrowly investigates traditional images, and therefore traditional imaging hardware, as the primary means of data collection. We investigate alternative architectures for computational imaging systems optimized for specific classification tasks, such as digit classification. This involves a holistic approach to the design of the system, from the imaging hardware to the algorithms. Techniques to find optimal compressive representations of training data are discussed, and the most useful object-space information is evaluated. Methods to translate task-specific compressed data representations into non-traditional computational imaging hardware are described, followed by simulations of such imaging devices coupled with algorithmic classification using ML and DL techniques. Our approach allows for inexpensive, efficient sensing systems. Reduced storage and bandwidth requirements are also achievable, since the data representations are compressed measurements, which is especially important for high-data-volume systems.
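The core idea of classifying directly from a handful of compressive measurements can be sketched in a few lines of numpy. Here a random sensing matrix and a nearest-centroid rule stand in for the optimized sensing matrix and the ML/DL classifiers of the text, and the two-class synthetic data is an illustrative stand-in for digit images:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the digit-classification task: two classes of 64-pixel
# "images" whose means differ in a small region. (The text uses real digit
# data; this synthetic set only illustrates the pipeline.)
n, d, m = 400, 64, 4                   # samples per class, pixels, measurements
mu1 = np.zeros(d)
mu1[:8] = 3.0
X0 = rng.normal(size=(n, d))           # class 0: zero-mean background
X1 = mu1 + rng.normal(size=(n, d))     # class 1: background + signal

# Compressive measurement: each row of Phi is one sensing pattern the
# hardware would realize. Phi is random here; the text optimizes it for
# the task instead.
Phi = rng.normal(size=(m, d)) / np.sqrt(d)
Y0, Y1 = X0 @ Phi.T, X1 @ Phi.T        # m measurements instead of d pixels

# Classify in the compressed domain with a simple nearest-centroid rule.
c0, c1 = Y0[:200].mean(0), Y1[:200].mean(0)
test = np.vstack([Y0[200:], Y1[200:]])
labels = np.r_[np.zeros(200), np.ones(200)]
pred = (np.linalg.norm(test - c1, axis=1) <
        np.linalg.norm(test - c0, axis=1)).astype(float)
acc = (pred == labels).mean()
print(f"accuracy with {1 - m / d:.1%} compression: {acc:.2f}")
```

Even this unoptimized sketch classifies well above chance from 4 measurements instead of 64 pixels, which is the storage and bandwidth advantage the abstract refers to; optimizing Phi for the task is what recovers the remaining accuracy.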
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple and fast models to label track pixels. These models, however, are unable to capture natural track features, such as continuity and parallelism. More powerful but computationally expensive models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3×3 convolutions to segment vehicle tracks. The design of our networks considers the fact that remote sensing applications tend to operate on low-power platforms with limited training data. As a result, we aim for small and efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our six-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
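The appeal of dilated 3×3 convolutions for a small network is that the receptive field grows with the dilation rate while the parameter count stays fixed. A short sketch makes this concrete; the six-layer dilation schedule below is illustrative (the text does not state the rates used), and the 1-D impulse test verifies the closed-form receptive-field count:

```python
import numpy as np

def receptive_field(dilations):
    # Each 3-tap conv with dilation d widens the receptive field by 2*d.
    r = 1
    for d in dilations:
        r += 2 * d
    return r

def dilated_conv1d(x, d):
    # 'Same'-padded 3-tap averaging convolution with dilation d.
    y = np.zeros_like(x)
    for i in range(len(x)):
        for k in (-d, 0, d):
            if 0 <= i + k < len(x):
                y[i] += x[i + k] / 3
    return y

# An assumed six-layer schedule: dilations grow, then reset to smooth.
schedule = [1, 1, 2, 4, 8, 1]

# Push a unit impulse through the stack; the support of the output is the
# receptive field of one output pixel.
x = np.zeros(101)
x[50] = 1.0
for d in schedule:
    x = dilated_conv1d(x, d)

print(receptive_field(schedule), int((x > 0).sum()))  # both 35
```

Six layers of 3×3 kernels thus see a 35-pixel-wide context with only 6 × 9 weights per channel pair, which is why such networks stay small enough for low-power deployment.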
Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily
their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be
improved if more estimators are used. This paper presents an approach based on a Markov random field to estimate
the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics
of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive
against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and
group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy
is further improved.
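The pairwise-constraint idea can be illustrated with a tiny chain MRF whose unary choice per pixel is "keep or flip the stego LSB" and whose pairwise terms reward smoothness between neighbors. Everything here is a toy assumption: a flat cover region, iterated conditional modes (ICM) as one simple inference choice, and LSB replacement as the embedding:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy flat cover region; LSB replacement is applied to 300 random pixels,
# so roughly half of those pixels actually change value.
cover = np.full(1000, 100)
stego = cover.copy()
used = rng.choice(1000, 300, replace=False)
stego[used] = (stego[used] & ~1) | rng.integers(0, 2, 300)

# Chain MRF over cover candidates: each pixel may keep or flip its stego
# LSB, and pairwise terms |c_i - c_{i-1}| + |c_i - c_{i+1}| encode the
# smoothness statistics of natural covers. ICM greedily lowers the energy.
est = stego.copy()
for _ in range(5):                        # a few ICM sweeps
    for i in range(1, len(est) - 1):
        for c in (est[i] & ~1, est[i] | 1):
            if (abs(c - est[i - 1]) + abs(c - est[i + 1]) <
                    abs(est[i] - est[i - 1]) + abs(est[i] - est[i + 1])):
                est[i] = c

print((est == cover).mean(), (stego == cover).mean())
```

Isolated flipped pixels are corrected because reverting them lowers the pairwise energy, while runs of adjacent flips survive as ties; handling those is exactly where the "more sophisticated models" the abstract mentions come in.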
Payload location using residuals is a successful approach to identify load-carrying pixels provided a large number
of stego images are available. Furthermore, each image must have the payload embedded at the same locations.
The success of payload location is therefore limited if different keys are used or an adaptive embedding algorithm
is used. Given these limitations, the focus of this paper is to locate modified pixels in a single stego image.
Given a sufficiently large set of independent binary decision functions, each of which determines whether a pixel
has been modified better than random guessing, we show that it is possible to locate modified pixels in a single stego image with
low error rate. We construct these functions using existing cover estimators and provide experimental results to
support our analysis.
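The claim above is essentially a weak-learner aggregation argument, and a small simulation shows the mechanism. The decision functions here are simulated coin flips that are correct with probability 0.6 (in the text they are built from cover estimators); combining them by majority vote drives the per-pixel error down as their number grows:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground truth for one stego image: 200 of 1000 pixels were modified.
n_pix = 1000
modified = np.zeros(n_pix, dtype=bool)
modified[rng.choice(n_pix, 200, replace=False)] = True

def majority_vote(K, p=0.6):
    # K independent decision functions, each correct with probability p,
    # only slightly better than guessing. A pixel is declared modified
    # when more than half of the functions say so.
    correct = rng.random((K, n_pix)) < p
    votes = np.where(correct, modified, ~modified).sum(axis=0)
    return votes > K / 2

errors = [(majority_vote(K) != modified).mean() for K in (1, 15, 101)]
print(errors)
```

With independence, the majority is wrong only when more than half of the functions err at the same pixel, a probability that shrinks exponentially in K, which is why a "sufficiently large set" yields a low error rate even from barely-better-than-chance detectors.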
Locating steganographic payload using Weighted Stego-image (WS) residuals has been proven successful provided
a large number of stego images are available. In this paper, we revisit this topic with two goals. First, we
argue that it is a promising approach for payload location by showing that, in the ideal scenario where the cover
images are available, the expected number of stego images needed to perfectly locate all load-carrying pixels is
logarithmic in the payload size. Second, we generalize cover estimation to a maximum-likelihood decoding
problem and demonstrate that a second-order statistical cover model can be used to compute residuals to locate
payload embedded by both LSB replacement and LSB matching steganography.
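The logarithmic claim for the ideal scenario is easy to check numerically. Under LSB replacement each load-carrying pixel differs from its (known) cover value with probability 1/2 per stego image, so a pixel is revealed the first time any copy differs, and locating all of them is a max-of-geometrics problem; the payload size below is an arbitrary choice for the simulation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ideal scenario from the abstract: covers are known and every stego image
# embeds at the same locations.
payload = 1024                         # number of load-carrying pixels

def images_until_all_located():
    # Draw stego images until every load-carrying pixel has differed from
    # its cover value at least once (probability 1/2 per image per pixel).
    found = np.zeros(payload, dtype=bool)
    n = 0
    while not found.all():
        n += 1
        found |= rng.random(payload) < 0.5
    return n

trials = [images_until_all_located() for _ in range(50)]
print(np.mean(trials), np.log2(payload))   # mean grows like log2(payload)
```

The average sits just above log2 of the payload size (the expected maximum of geometric variables), matching the logarithmic scaling stated above.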
One of the biggest challenges in universal steganalysis is the identification of reliable features that can be used
to detect stego images. In this paper, we present a steganalysis method using features calculated from a measure
that is invariant for cover images and is altered for stego images. We derive this measure, which is the ratio
of any two Fourier coefficients of the distribution of the DCT coefficients, by modeling the distribution of the
DCT coefficients as a Laplacian. We evaluate our steganalysis detector against three different pixel-domain
embedding algorithms.