Optical flow fields can be used to recover components of camera ego-motion, such as translational and angular velocity. In this paper, we discuss the use of optical flow fields to estimate the relative orientation of two imagers with non-overlapping fields of view. The proposed algorithms are based on a spherical alignment technique closely related to the rapid transfer alignment methods used to align aircraft inertial navigation systems. Of particular importance is the relationship between the accuracy of the optical flow field (which depends on the complexity of the scene and the resolution of the cameras) and the accuracy of the resultant alignment process.
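As a concrete illustration of the first point, the angular-velocity component of ego-motion can be recovered from a flow field by a linear least-squares fit to the standard rotational flow model (Longuet-Higgins/Prazdny). The sketch below is not the paper's algorithm: it assumes a pure camera rotation, normalised image coordinates, and noiseless synthetic flow, and all function names and data are illustrative.

```python
import numpy as np

def rotational_flow_matrix(x, y):
    """Per-point 2x3 matrix mapping angular velocity (wx, wy, wz) to
    image flow (u, v) at normalised image coordinates (x, y),
    under pure camera rotation (no translation, depth drops out)."""
    return np.array([
        [x * y,     -(1 + x**2),  y],
        [1 + y**2,  -x * y,      -x],
    ])

def estimate_angular_velocity(points, flows):
    """Stack the per-point linear equations and solve for the
    angular velocity in a least-squares sense."""
    A = np.vstack([rotational_flow_matrix(x, y) for x, y in points])
    b = np.concatenate(flows)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

# Synthetic check (illustrative data): simulate the flow produced by a
# known rotation, then recover that rotation from the flow field alone.
rng = np.random.default_rng(0)
points = rng.uniform(-0.5, 0.5, size=(100, 2))   # normalised image coords
omega_true = np.array([0.01, -0.02, 0.005])      # assumed rotation, rad/frame
flows = [rotational_flow_matrix(x, y) @ omega_true for x, y in points]
omega_est = estimate_angular_velocity(points, flows)
print(omega_est)
```

With noiseless flow the fit is exact up to numerical precision; in practice the residual of this fit grows with flow-field error, which is one way the scene-complexity dependence noted above enters the alignment accuracy.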
Multiple camera systems have been considered for a number of applications, including infrared (IR) missile detection on modern fast jet aircraft and soldier-aiding data fusion systems. This paper details experimental work undertaken to test the image-processing and harmonisation techniques developed to align multiple camera systems. It considers systems in which the camera properties differ significantly and the camera fields of view do not necessarily overlap, in contrast to stereo calibration techniques, which rely on similar resolutions, similar fields of view and overlapping imagery. Testing involved two visible-band cameras and attempted to harmonise a narrow field of view camera with a wide field of view camera. Consideration is also given to the applicability of the algorithms to both visible-band and IR camera systems, the use of supplementary motion information from inertial measurement systems, and the consequent system limitations.
Most modern fast jet aircraft carry at least one infrared camera, typically a Forward-Looking Infrared (FLIR) imager. Future aircraft are likely to carry several infrared cameras, and systems are already being considered that use multiple imagers in a distributed architecture. Such systems could provide the functionality of several existing systems: a pilot flying aid, a modern laser designator/targeting system and a missile approach warning system. This paper considers image-processing techniques that could be used in a distributed aperture vision system, concentrating on the harmonisation of high-resolution, narrow field of view cameras with low-resolution, wide field of view cameras. Consideration is given to the accuracy of the registration and harmonisation processes in situations where scene complexity varies over different terrain types, and to the possible use of supplementary motion information from inertial measurement systems.