In this paper, we describe a phenomenon, which we named “super-convergence”, where neural networks can be trained an order of magnitude faster than with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elements of super-convergence is training with one learning rate cycle and a large maximum learning rate. An insight that allows super-convergence training is that large learning rates regularize the training, hence requiring a reduction of all other forms of regularization in order to preserve an optimal regularization balance. We also derive a simplification of the Hessian-free optimization method to compute an estimate of the optimal learning rate. Experiments demonstrate super-convergence for the Cifar-10/100, MNIST, and Imagenet datasets, and for resnet, wide-resnet, densenet, and inception architectures. In addition, we show that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited. The architectures and code to replicate the figures in this paper are available at github.com/lnsmith54/super-convergence.
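The one-cycle schedule referred to above can be sketched as a simple ramp up to a large maximum learning rate followed by a symmetric ramp down. This is a minimal illustrative helper, not the paper's implementation; the function name and the default rates are our own choices.

```python
def one_cycle_lr(step, total_steps, base_lr=0.1, max_lr=3.0):
    """Return the learning rate for a given step of a one-cycle schedule.

    The rate rises linearly from base_lr to max_lr over the first half
    of training, then falls linearly back to base_lr.
    """
    half = total_steps / 2.0
    if step <= half:
        frac = step / half                    # ramp up
    else:
        frac = (total_steps - step) / half    # ramp down
    return base_lr + frac * (max_lr - base_lr)
```

In practice such a schedule is queried once per optimizer step, e.g. `one_cycle_lr(step, total_steps)` inside the training loop; the large mid-cycle rate is what provides the regularizing effect the abstract describes.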
Deep learning has proven to be an effective method for making highly accurate predictions from complex data sources. Convolutional neural networks continue to dominate image classification problems, and recurrent neural networks have proven their utility in caption generation and language translation. While these approaches are powerful, they offer no explanation of how the output is generated. Without understanding how deep learning arrives at a solution, there is no guarantee that these networks will transition from controlled laboratory environments to fieldable systems. This paper presents an approach for incorporating rule-based methodology into neural networks by embedding fuzzy inference systems into deep learning networks.
The potential of compressive sensing (CS) has spurred great interest in the research community, and CS is a fast-growing area of research. However, research translating CS theory into practical hardware, and demonstrating clear and significant benefits of this hardware over current, conventional imaging techniques, has been limited. This article helps researchers find those niche applications where the CS approach provides substantial gain over conventional approaches by articulating guidelines for identifying such applications. Furthermore, in this paper we apply these guidelines to find one such new application for CS: sea-skimming missile detection. As a proof of concept, it is demonstrated that a simplified CS missile detection architecture and algorithm provides results comparable to the conventional imaging approach while using a smaller focal plane array (FPA).
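The core CS idea behind such architectures can be illustrated with a toy recovery problem: a sparse signal is reconstructed from far fewer random measurements than its length. This sketch uses orthogonal matching pursuit (OMP) as the recovery algorithm; it is a generic demonstration of CS recovery, not the detection architecture or algorithm of the paper, and all dimensions and names are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: pick k atoms of A to explain y."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 3                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = [1.5, -2.0, 1.0]
y = A @ x                                     # m compressive measurements, m << n
x_hat = omp(A, y, k)                          # recovered sparse signal
```

With only 64 measurements of a length-256 signal, the 3-sparse signal is recovered essentially exactly; fewer measurements is precisely what permits the smaller FPA discussed above.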
We present several improvements to published algorithms for sparse image modeling, with the goal of improving the processing of imagery of small watercraft in littoral environments. The first improvement is to the K-SVD algorithm for training the over-complete dictionaries used in sparse representations. It is shown that training converges significantly faster when multiple dictionary (i.e., codebook) update stages are incorporated into each training iteration. The paper also provides several useful and practical lessons learned from our experience with sparse representations. Results of three applications of sparse representation are presented and compared to state-of-the-art methods: image compression, image denoising, and super-resolution.
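The dictionary update stage that the abstract proposes repeating can be sketched as follows. In K-SVD, each atom is refreshed by taking the rank-1 SVD of the residual restricted to the signals that use that atom; this is a simplified single-atom update in our own notation, not the paper's code.

```python
import numpy as np

def ksvd_atom_update(D, X, Y, j):
    """Update column j of dictionary D (and row j of codes X) in place.

    Y: data matrix (signals in columns), D: dictionary, X: sparse codes
    such that Y is approximately D @ X.
    """
    users = np.nonzero(X[j, :])[0]        # signals whose code uses atom j
    if users.size == 0:
        return
    # residual of those signals with atom j's contribution removed
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                     # best rank-1 atom (unit norm)
    X[j, users] = s[0] * Vt[0, :]         # matching coefficients

# Tiny demonstration on random data (sizes are arbitrary):
rng = np.random.default_rng(1)
Y = rng.standard_normal((8, 20))
D = rng.standard_normal((8, 5))
D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((5, 20)) * (rng.random((5, 20)) < 0.4)
err_before = np.linalg.norm(Y - D @ X)
ksvd_atom_update(D, X, Y, 2)
err_after = np.linalg.norm(Y - D @ X)
```

Because the rank-1 SVD is the optimal rank-1 replacement for the removed contribution, the reconstruction error never increases; running this update over all atoms multiple times per iteration is the kind of repeated codebook-update stage the abstract reports as accelerating convergence.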
The ability to image underwater is highly desired for scientific and military applications, including optical communications, submarine awareness, diver visibility, and mine detection. Underwater imaging is severely impaired by scattering and by optical turbulence associated with refractive index fluctuations. This work introduces a novel approach to the restoration of degraded underwater imagery based on a multi-frame correction technique developed for atmospheric distortions. The method is a synthesis of "lucky-region" fusion with nonlinear gain and optical flow-based image warping. The developed multi-frame image restoration algorithm is tested on underwater imagery collected in a laboratory tank and in a field exercise. The restoration's reliance on the accuracy of the optical flow algorithm is revealed. The developed algorithm demonstrates significant resolution improvement in the restored image compared to any single frame or to the mean of the underwater image sequence.
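The "lucky" selection idea underlying such fusion can be illustrated with a whole-frame simplification: rank frames of a turbulent sequence by a sharpness metric and keep the sharpest as a reference. The full method fuses local regions with nonlinear gain and optical-flow warping; this sketch, with names of our own choosing, shows only the sharpness-ranking step.

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy sharpness metric: higher means sharper."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def luckiest_frame(frames):
    """Return the index of the frame with the highest gradient energy."""
    return int(np.argmax([sharpness(f) for f in frames]))

# Demo: a flat (maximally blurred) frame vs. a frame with structure.
frames = [np.zeros((8, 8)), np.tile(np.arange(8.0), (8, 1))]
best = luckiest_frame(frames)
```

In a real pipeline the same metric would be evaluated per image patch, so that the sharpest (least turbulence-degraded) region of each frame contributes to the fused result.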
Modern IR cameras are increasingly equipped with built-in advanced (often non-linear) image and signal processing algorithms (e.g., fusion, super-resolution, and dynamic range compression) that can strongly influence
performance characteristics. Traditional approaches to range performance modeling are of limited use for these types of
equipment. Several groups have tried to overcome this problem by producing a variety of imagery to assess the impact of
advanced signal and image processing. In most cases, these data were taken from classified targets and/or with classified imagers and are thus not suitable for comparison studies between different groups from government, industry, and universities. To
ameliorate this situation, NATO SET-140 has undertaken a systematic measurement campaign at the DGA technical
proving ground in Angers, France, to produce an openly distributable data set suitable for the assessment of fusion,
super-resolution, local contrast enhancement, dynamic range compression, and image-based non-uniformity correction (NUC) algorithm
performance. The imagery was recorded for different target/background settings, camera and/or object movements, and
temperature contrasts. MWIR, LWIR and Dual-band cameras were used for recording and were also thoroughly
characterized in the lab. We present a selection of the data set together with examples of their use in the assessment of
super-resolution and contrast enhancement algorithms.
This paper presents a simple, fast, and robust method to estimate the blur kernel model, its support size, and its
parameters directly from a blurry image. The edge profile method eliminates the need to search the parameter
space. In addition, this edge profile method is highly local and can provide a measure of asymmetry and spatial
variation, which allows one to make an informed decision on whether to use a symmetric or asymmetric, spatially
varying or non-varying blur kernel over an image. Furthermore, the edge profile method is relatively robust to
image noise. We show how to apply concepts from statistical distribution fitting to analytically
obtain an estimate of the blur kernel that incorporates blur from all sources, including factors
inherent in the imaging system. Comparisons are presented of the deblurring results from this method to current
common practices for real-world (VNIR, SWIR, MWIR, and active IR) imagery. The effect of image noise on
this method is compared to the effect of noise on other methods.
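The basic principle of reading blur off an edge profile can be illustrated in one dimension: differentiating the profile across a step edge yields the line-spread function, whose width gives the blur scale. This is our simplified stand-in for the paper's method, with illustrative names; for Gaussian blur the recovered width corresponds to the Gaussian's sigma (up to a small discretization bias).

```python
import math
import numpy as np

def blur_sigma_from_edge(profile):
    """Estimate Gaussian blur sigma from a 1-D edge-spread profile."""
    lsf = np.abs(np.diff(profile.astype(float)))   # line-spread function
    xpos = np.arange(lsf.size) + 0.5               # midpoint sample positions
    w = lsf / lsf.sum()
    mu = float(np.sum(w * xpos))                   # centroid of the LSF
    return float(np.sqrt(np.sum(w * (xpos - mu) ** 2)))  # its std. dev.

# Demo: a synthetic step edge blurred by a Gaussian of known sigma
# (the edge-spread function of Gaussian blur is an error function).
sigma_true = 2.0
ts = np.arange(-30.0, 31.0)
esf = np.array([0.5 * (1 + math.erf(t / (sigma_true * math.sqrt(2)))) for t in ts])
sigma_est = blur_sigma_from_edge(esf)
```

Because the estimate is computed from a short 1-D profile, it can be evaluated locally at many edges across an image, which is what enables the asymmetry and spatial-variation measures described above.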
In the image formation and recording process, many factors affect sensor performance and image quality and result in a loss of high-frequency information. Two common factors are undersampled sensors and the sensor's blurring function. Two image processing algorithms, super-resolution image reconstruction and deblur filtering, have been developed based on characterizing the sources of image degradation in the image formation and recording process. In this paper, we discuss the applications of these two algorithms to three practical thermal imaging systems. First, super-resolution and deblurring are
applied to a longwave uncooled sensor in a missile seeker. Target resolution is improved in the flight phase
of the seeker operation. Second, these two algorithms are applied to a midwave target acquisition sensor for
use in long-range target identification. Third, the two algorithms are applied to a naval midwave distributed
aperture sensor (DAS) for an infrared search and track (IRST) system that is dual-use for missile detection and
force protection/anti-terrorism applications. In this case, super-resolution and deblurring are used to
improve the resolution of on-deck activity discrimination.