An important enabler for low-cost airborne systems is the ability to exploit low-cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyroscope and accelerometer biases of low-cost inertial sensors cause errors that compound in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic-flow-aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using a Parrot AR.Drone quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly growing position error (the data represent ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
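The quadratic growth of unaided position error can be seen from a toy dead-reckoning calculation (a hypothetical illustration, not the paper's INS model; the bias value is chosen only to match the order of magnitude quoted above):

```python
# Hypothetical illustration: a constant accelerometer bias, double-integrated
# by dead reckoning, yields a position error e(t) ~ 0.5 * b * t^2.
def dead_reckoning_error(bias, dt, steps):
    """Position error from integrating a constant accelerometer bias twice."""
    v_err = p_err = 0.0
    for _ in range(steps):
        v_err += bias * dt   # velocity error grows linearly with time
        p_err += v_err * dt  # position error grows quadratically
    return p_err

# An illustrative 0.06 m/s^2 bias over 40 s gives ~0.5 * 0.06 * 40^2 = 48 m,
# the same order as the ~50 m drift quoted for the unaided INS.
err = dead_reckoning_error(bias=0.06, dt=0.01, steps=4000)
```

An optic flow measurement constrains camera velocity, so it bounds the linearly growing velocity error and thereby checks the quadratic position drift.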
We describe progress in the third year of the EMRS DTC TEP theme project entitled "Temporal Resolution
Enhancement from Motion". The aim is to develop algorithms that combine evidence over time from a sequence of
images in order to improve spatial resolution and reduce unwanted artefacts. Years one and two of this project
developed and demonstrated an efficient algorithm that provided good resolution enhancement of a scene viewed in the
far field (approximately flat). This paper reports a new algorithm that is applicable to a three-dimensional scene
where substantial depth variation causes parallax within the imagery. The new algorithm is demonstrated using airborne infra-red imagery.
We describe progress in the second year of the EMRS DTC TEP theme project entitled "Temporal Resolution Enhancement from Motion". The aim is to develop algorithms that combine evidence over time from a sequence of images in order to improve spatial resolution and reduce unwanted artefacts. A C++ implementation of an algorithm was developed in year one [1]. Work in year two has improved the efficiency and extended the applicability of the algorithm. New schemes for information update and scene matching have substantially reduced the processing time and enabled application of the technique to imagery with more complicated viewing geometries. The new technique is demonstrated using airborne infra-red imagery datasets from a Wescam MX series turret on a helicopter.
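As a rough illustration of the multi-frame principle underlying both years' work (not the project's algorithm), a one-dimensional shift-and-add sketch, assuming known sub-pixel offsets and an idealised blur-free sensor:

```python
import numpy as np

# Minimal shift-and-add sketch: low-resolution frames taken at known
# sub-pixel offsets are interleaved onto a finer grid, so evidence from
# several frames yields a higher-resolution estimate.
def shift_and_add(frames, offsets, factor):
    """frames: 1-D low-res signals; offsets: shift of each frame in
    high-res pixels; factor: upsampling factor."""
    n_hi = len(frames[0]) * factor
    acc = np.zeros(n_hi)
    cnt = np.zeros(n_hi)
    for frame, off in zip(frames, offsets):
        idx = np.arange(len(frame)) * factor + off
        acc[idx] += frame
        cnt[idx] += 1
    cnt[cnt == 0] = 1  # avoid dividing empty bins
    return acc / cnt

# Two frames offset by half a low-res pixel recover the fine signal exactly
# under these idealised assumptions.
hi = np.arange(8, dtype=float)
lo0, lo1 = hi[0::2], hi[1::2]
rec = shift_and_add([lo0, lo1], offsets=[0, 1], factor=2)
```

Real imagery adds optical blur, noise, and (as in the year-three work) parallax, which is why registration and information update dominate the practical algorithm.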
This paper discusses the use of constraints when super-resolving passive millimeter wave (PMMW) images. A PMMW imager has good all-weather imaging capability but requires a large collection aperture to obtain adequate spatial resolution due to the diffraction limit and the long wavelengths involved. A typical aperture size for a system operating at 94 GHz would be 1 m in diameter. This size may be reduced if image restoration techniques are employed. A factor of two in recognition range may be achieved using a linear technique such as a Wiener filter, while a factor of four is available using non-linear techniques. These non-linear restoration methods generate the missing high-frequency information above the passband in band-limited images. For this bandwidth extension to generate genuine high frequencies, it is necessary to restore the image subject to constraints. These constraints should be applied directly to the scene content rather than to any noise that might also be present. The merits of the available super-resolution techniques are discussed with particular reference to the Lorentzian method. Attempts are made to explain why the distribution of gradients within an image is Lorentzian by assuming that an image has randomly distributed gradients of random size. Any increase in sharpness of an image frequently results in an increase in the noise present. The effect of noise and image sharpness on the ability of a human observer to recognise an object in the scene is discussed with reference to a recent model of human perception.
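A minimal sketch of the linear (Wiener) restoration mentioned above, in one dimension with an assumed known blur kernel and scalar noise-to-signal ratio (both illustrative):

```python
import numpy as np

# Frequency-domain Wiener deconvolution: a regularised inverse filter that
# restores within the passband but cannot invent energy where H(f) = 0,
# which is why non-linear, constrained methods are needed for true
# bandwidth extension.
def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    H = np.fft.fft(kernel, n=len(blurred))
    G = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter
    return np.real(np.fft.ifft(G * W))

# Blur an impulse with a 3-tap kernel, then restore it.
x = np.zeros(32)
x[10] = 1.0
k = np.array([0.25, 0.5, 0.25])
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, 32)))
restored = wiener_deconvolve(blurred, k, nsr=1e-4)
```

The kernel's spectral zero at Nyquist is never recovered by the linear filter; the non-linear techniques discussed above extrapolate such missing frequencies by constraining the scene estimate instead.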
For many dynamic estimation problems involving nonlinear and/or non-Gaussian models, particle filtering offers improved performance at the expense of computational effort. This paper describes a scheme for efficiently tracking multiple targets using particle filters. The tracking of the individual targets is made efficient through the use of Rao-Blackwellisation. The tracking of multiple targets is made practicable using Quasi-Monte Carlo integration. The efficiency of the approach is illustrated on synthetic data.
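For orientation, a plain bootstrap particle filter on a scalar nonlinear model (omitting the Rao-Blackwellisation and quasi-Monte Carlo refinements the paper describes; the model and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(ys, n=500, q=0.5, r=0.5):
    """Bootstrap filter for x[k+1] = sin(x[k]) + N(0, q), y[k] = x[k] + N(0, r)."""
    particles = rng.normal(0.0, 1.0, n)
    estimates = []
    for y in ys:
        # propagate each particle through the nonlinear transition
        particles = np.sin(particles) + rng.normal(0.0, q, n)
        # weight by the Gaussian measurement likelihood
        w = np.exp(-0.5 * ((y - particles) / r) ** 2)
        w /= w.sum()
        estimates.append(float(np.dot(w, particles)))
        # multinomial resampling combats weight degeneracy
        particles = rng.choice(particles, size=n, p=w)
    return estimates

# Simulate a trajectory, observe it in noise, and track it.
xs, x = [], 0.5
for _ in range(50):
    x = np.sin(x) + rng.normal(0.0, 0.5)
    xs.append(x)
ys = [xi + rng.normal(0.0, 0.5) for xi in xs]
est = particle_filter(ys)
```

Rao-Blackwellisation would replace the sampled treatment of any conditionally linear-Gaussian states with exact Kalman updates, reducing the variance per particle.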
This paper discusses methods of improving the quality and resolution of passive mm-wave images, particularly those obtained using the DERA MITRE imager and the more recent MERIT imager. This latter real-time imager consists of some novel optics followed by a conical scanner in the form of a disk rotating about an axis through its center and tilted with respect to its normal. A horizontal array of receivers is scanned such that each receiver performs a conical scan pattern in the scene. The resulting image, which has a 40 degree by 20 degree field of view, consists of a series of circles whose centers are uniformly displaced horizontally. Each receiver is calibrated initially using a two-point correction but then drifts in time, so a scene-based correction is applied. Following this pre-processing, the images are super-resolved using nonlinear restoration techniques. These various processes are described and images presented.
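The two-point correction amounts to solving a gain and offset per receiver from two reference views of known temperature (all values below are illustrative, not measured MERIT figures):

```python
# Two-point (gain/offset) calibration sketch: a receiver views a cold and a
# hot reference of known temperature; its raw counts at those references
# determine a linear map from counts to scene temperature.
def two_point_calibration(counts_cold, counts_hot, t_cold, t_hot):
    gain = (t_hot - t_cold) / (counts_hot - counts_cold)
    offset = t_cold - gain * counts_cold
    return gain, offset

# Illustrative counts/temperatures for one receiver.
gain, offset = two_point_calibration(1200.0, 2400.0, t_cold=80.0, t_hot=300.0)
temp = gain * 1800.0 + offset  # a mid-range raw reading maps to 190 K
```

Because the gain and offset drift over time, the initial two-point solution is later refined by the scene-based correction described above.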
A new super-resolution algorithm is proposed which can provide bandwidth extension of noisy images in a small number of iterations and is potentially capable of real-time operation. Attempts to restore band-limited images frequently introduce ringing artifacts. Methods designed to reduce this ringing often suppress sharp features in the scene. In images with a well-defined background intensity, super-resolution techniques involving a positivity constraint are effective in suppressing this ringing and providing a high degree of bandwidth extension. Problems arise, however, in a general image where no such well-defined background exists. The first stage of the algorithm reported here addresses these problems by computing an effective background. Features that need to be enhanced then exist as blurred deviations from this background. The background is computed from the first and second differentials of the image with respect to further blurring. It has been possible to suppress ringing artifacts, resulting in bandwidth extension, by comparing the calculated background with the known original blurred image. An iterative procedure based on Gerchberg's error-energy reduction technique has produced good results. Computer calculations applied to both synthetic images and real millimeter wave images show that the algorithm is effective, efficient and largely immune to noise.
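A one-dimensional sketch of Gerchberg-style error-energy reduction with a positivity constraint (the test scene and band limit are illustrative; this omits the background-computation stage that the reported algorithm adds for general images):

```python
import numpy as np

# Gerchberg-style iteration: re-impose the measured low-frequency band in
# the Fourier domain, then impose positivity in the image domain. Energy
# violating the constraints shrinks each pass, extrapolating high
# frequencies beyond the band limit.
def gerchberg(blurred, band, iters=200):
    n = len(blurred)
    known = np.fft.fft(blurred)
    passband = np.zeros(n, dtype=bool)
    passband[:band] = True
    passband[-band + 1:] = True
    est = blurred.copy()
    for _ in range(iters):
        spec = np.fft.fft(est)
        spec[passband] = known[passband]  # keep the measured low band
        est = np.real(np.fft.ifft(spec))
        est = np.clip(est, 0.0, None)     # positivity constraint on the scene
    return est

# Low-pass filter a sparse positive scene, then extrapolate the lost band.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
band = 8
mask = np.zeros(64, dtype=bool)
mask[:band] = True
mask[-band + 1:] = True
blurred = np.real(np.fft.ifft(np.where(mask, np.fft.fft(x), 0)))
restored = gerchberg(blurred, band)
```

The positivity step is what suppresses the negative ringing lobes; in a general image without a well-defined background, that constraint is only meaningful relative to the computed effective background described above.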