We discuss the applicability of multi-layer Neural Networks (NN) to automatic target classification in the challenging environment of Acquisition-Tracking-Pointing (ATP) systems. The merits of NN are discussed in the context of a limited training database, which is characteristic of practical ATP situations. A computational method is reviewed which allows the quantification of large-scale NN performance for the important small-sample case. In addition, Perceptron learning for the classification of target stochastic images governed by the Poisson distribution is examined with the aid of computer simulations.
A mathematical model is used to show that illuminating a target with beams of varying nonuniformity across the target can not only facilitate unambiguous image reconstruction but can be used for tracking provided that pointing jitter is sufficiently small. An image reconstruction technique for this purpose is introduced which does not depend on any properties of analytic functions which would allow recovery of an image from its autocorrelation function. The practical implementation of this algebraic technique is addressed.
This study considers a satellite-mounted sensor in a 1000-km circular orbit. The sensor is initially placed north, south, east, or west of a ballistic target, at a variety of initial ranges from 500 km to over 3000 km. The initial angle between the sensor and target velocity vectors is varied, from near zero degrees to roughly 180 deg, in steps of 30 deg. The tracking algorithm used is a standard Kalman filter. The track errors as a function of track time for several track data rates are examined. The error is defined to be the maximum eigenvalue of the covariance matrix. Both the current covariance matrix and the matrix propagated to impact are studied. The study is done for a variety of angular measurement errors, from one microradian to over 100 microradians. The best tracking performance seldom occurs when the target and sensor velocity vectors are crossing, as might be intuitively expected. The track error is very nearly linear with angular error. While increasing the data rate improves tracking performance, doubling the data rate does not improve performance nearly as much as doubling the total track time. Once a good track is obtained, further updates to the track can be very infrequent and the track will still improve steadily with time.
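The error metric used in the study, the maximum eigenvalue of the covariance matrix, is simple to state in code. A minimal sketch follows; the function names and the use of a generic transition matrix `F` and process noise `Q` for propagation are illustrative assumptions, not details taken from the study:

```python
import numpy as np

def track_error(P):
    """Track error defined as the maximum eigenvalue of the covariance matrix P."""
    # eigvalsh is the appropriate routine for a symmetric covariance matrix
    return np.max(np.linalg.eigvalsh(P))

def propagate_covariance(P, F, Q):
    """Propagate the covariance to a later time (e.g., predicted impact)
    with state-transition matrix F and process noise Q."""
    return F @ P @ F.T + Q
```

Both the current covariance and the covariance propagated to impact can then be scored with the same `track_error` call.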
A procedure for tracking multiple targets in clutter is presented. The procedure is similar in form to those that utilize probabilistic data association, but a fuzzy logic filter is utilized to provide weights and weighted innovations in a bank of Kalman filters adapted to the multiple-target-in-clutter environment. Specifically, fuzzy logic is employed to evaluate returns for processing by a Kalman filter modified to treat target source uncertainty. The method is developed and demonstrated with a noise-driven simulation and in actual forward-looking infrared (FLIR) sequences.
This paper deals with the analytical work performed by the Ballistic Research Laboratory in evaluating the feasibility and performance enhancements of incorporating forward-looking infrared (FLIR) autotrack technology into armored combat tanks. Both current and next generation armored vehicle technologies were considered in the evaluation. In addition to autotrack, higher order target state estimation and prediction algorithms were also evaluated as part of the study. Both stationary firer/maneuvering target and fire-on-the-move/stationary target analyses were performed and will be presented in this paper.
In tracking applications, one needs to know if the characteristics of the signal at the location to be tracked are sufficient for tracking. Specific characteristics for this purpose depend on the particular tracking algorithm, but typical characteristics are signal strength compared to the noise, edge strength, and signal shape. This article considers the signal strength characteristic. A signal strength test based on robust statistical methods is shown. The test makes a histogram of the area to be tracked and tests the hypothesis that the histogram is due to noise only. This test is very practical for digitized signals processed by a real-time processor. The robust method makes the test insensitive to many characteristics of sensor noise. The test is described, and a detection/false alarm performance analysis for the test on certain signals is given. The performance analysis uses joint distributions of order statistics applied to mixture density functions. The result of the analysis is the test threshold as a function of rms sensor noise and required signal-to-noise ratio.
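The paper's analysis rests on joint distributions of order statistics applied to mixture densities; the sketch below shows only the general shape of such a robust order-statistic threshold test. The percentile choice `k` and the threshold rule are illustrative assumptions, not the paper's derived test:

```python
import numpy as np

def signal_present(window, sigma, snr_required, k=0.9):
    """Robust signal-strength test sketch: compare an upper order statistic
    of the samples in the tracked area against a threshold set by the rms
    sensor noise and the required SNR.  Returns True when the noise-only
    hypothesis is rejected."""
    samples = np.sort(np.ravel(window))
    # e.g. the 90th-percentile order statistic; robust to outliers and
    # to many characteristics of the sensor noise distribution
    stat = samples[int(k * (samples.size - 1))]
    threshold = snr_required * sigma
    return stat > threshold
```

As in the paper, the threshold is a function of the rms sensor noise and the required signal-to-noise ratio.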
The interacting multiple model (IMM) algorithm is an effective technique for tracking maneuvering targets. The IMM algorithm uses multiple models that interact through state mixing to track a target maneuvering through an arbitrary trajectory. The state estimates are mixed according to their model probabilities and the model switching probabilities that are governed by an underlying Markov chain. In the IMM algorithm, the probability pij of switching from model i to model j is often assumed to be uniform between each measurement update. However, for multiple sensors operating asynchronously, or for a sensor with a probability of detection less than one, the data will be aperiodic. To overcome this limitation, the model switching probabilities are modeled as time-dependent. IMM algorithms with constant and time-dependent model switching probabilities are evaluated for the cases of a two-sensor tracking system and a sensor with a probability of detection less than one.
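One common way to make the switching probabilities time-dependent, sketched here as an assumption rather than as the paper's specific construction, is to derive them from a continuous-time Markov chain, so that the transition matrix becomes a function of the elapsed time between updates:

```python
import numpy as np
from scipy.linalg import expm

def switching_probabilities(Q, dt):
    """Time-dependent model switching matrix for an aperiodic update
    interval dt, derived from a continuous-time Markov chain with
    generator Q (rows sum to zero; off-diagonals are switching rates)."""
    return expm(Q * dt)

# Hypothetical two-model generator: leave model 1 at rate 0.05/s,
# leave model 2 at rate 0.10/s (illustrative numbers)
Q = np.array([[-0.05,  0.05],
              [ 0.10, -0.10]])

P_short = switching_probabilities(Q, 0.1)   # nearly identity for short gaps
P_long  = switching_probabilities(Q, 10.0)  # nearer steady state for long gaps
```

Short measurement gaps then yield a near-identity switching matrix, while long gaps between detections allow substantial probability mass to migrate between models.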
This paper presents a pure-Cartesian formulation for angle-only and angle-plus-range tracking filters. Every aspect of the filter--from the expression of the sensor measurements to the filter's state vector--is defined in Cartesian frames. The key to the filter is its measurement expression. In conventional angle-only filters, the sensor measurements are expressed in terms of the elevation and bearing to the target--i.e., in a spherical coordinate system. The proposed tracking filter uses a different approach. At each time step, the filter constructs a Cartesian frame, referred to as the target line-of-sight frame, with origin at the sensor and one axis along the instantaneous expected line of sight to the target. The sensor measurements are then transformed into this frame, and the components of this vector, not the original sensor measurements, are used as inputs to the tracking filter. The technique results in a statistically uncorrelated Cartesian expression of the sensor measurements. The filter is efficient--it does not require trigonometric functions--and it performs equally well for any line-of-sight geometry. Unlike conventional angle-only filters, its performance does not degrade as target elevation approaches +/- 90 degrees.
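The construction of such a line-of-sight frame can be sketched with cross products alone, consistent with the claim that no trigonometric functions are required. The seed-vector choice below is an illustrative assumption, not the paper's construction:

```python
import numpy as np

def los_frame(r_expected):
    """Build an orthonormal target line-of-sight frame: the first axis points
    along the expected line of sight; the other two complete the triad."""
    u = r_expected / np.linalg.norm(r_expected)
    # pick any vector not parallel to u to seed the cross products
    seed = np.array([0.0, 0.0, 1.0]) if abs(u[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    v = np.cross(u, seed)
    v /= np.linalg.norm(v)
    w = np.cross(u, v)
    return np.vstack([u, v, w])   # rows are the frame axes

def measurement_in_los_frame(T, r_measured):
    """Express a sensor-relative Cartesian measurement in the LOS frame."""
    return T @ r_measured
```

A measurement exactly along the expected line of sight maps to a vector whose only nonzero component is the range along the first axis; the two transverse components carry the angle information.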
An aimpoint selection method based on image shape analysis is presented in this paper. The method is especially suitable for asymmetric and structure-branching targets, for which traditional centroid-selection methods may place the resulting aimpoint near the edge of the target or even outside it. In our method, a graph representation of the image shape is constructed and the key target part is selected.
The fusion of asynchronous data is usually achieved by sequentially processing the data as it arrives at the central processor. However, if the data rate is too high and the data from different sensors are taken at times that are arbitrarily close together, some other technique is necessary. One way of overcoming this problem is to take the data within a specified time interval and compress it. While there are many ways of compressing data, the method chosen must retain all of the data's salient features. This paper discusses methods that utilize least-squares techniques for compressing data from one or more sensors and a central filter for processing the compressed data. Simulated tracking results from a central filter using the compressed data are compared to the results from an optimal central filter that processes the data sequentially. The error covariance associated with the optimal filtering approach is compared with that of the central filter processing the compressed data. Also, some generalized theorems concerning the fusion of synchronized state and measurement vectors of different dimensions are given.
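A generic weighted least-squares compression of a window of measurements into one equivalent measurement might look as follows. This is a sketch under standard linear-measurement assumptions, not the paper's exact formulation:

```python
import numpy as np

def compress_measurements(H_list, z_list, R_list):
    """Compress measurements z_i = H_i x + v_i, cov(v_i) = R_i, taken within
    one time window, into a single equivalent estimate and covariance via
    weighted least squares (information-form combination)."""
    info = sum(H.T @ np.linalg.inv(R) @ H for H, R in zip(H_list, R_list))
    vec = sum(H.T @ np.linalg.inv(R) @ z
              for H, z, R in zip(H_list, z_list, R_list))
    P = np.linalg.inv(info)      # covariance of the compressed measurement
    return P @ vec, P            # compressed "measurement" and its covariance
```

The compressed pair can then be passed to the central filter as a single pseudo-measurement, retaining the window's information content while reducing the update rate.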
Integration of sensors into a multi-sensor system requires accurate alignment of all of the sensors in the system. This is an important issue because all of the sensors must be aligned to effectively share data. The presence of alignment errors will degrade the overall system performance. The alignment problem for two sensors, where one of the sensors is a 3-D radar (i.e., it measures the range, azimuth, and elevation) and the other one is a 2-D sensor that measures azimuth and elevation (e.g., an optical sensor), is examined in this paper. It will be assumed that the sensors are relatively close to each other (e.g., the sensors are located on the same platform). The 3-D radar will serve as the master sensor; that is, all of the multi-sensor data will be expressed in the 3-D radar's reference frame and the 2-D sensor will be aligned relative to the 3-D radar. A mathematical model will be developed for this alignment problem, and Kalman filtering techniques will be used to develop the alignment algorithm.
A common task for a human being is to detect the movement of an object against a stationary background and then to lock on to and trace its motion. This natural process becomes very tedious in industrial or military environments where the database of images to be searched is huge or where the task must be repeated continuously. Automation can therefore assist people carrying out such tasks, as in security systems, military reconnaissance, military targeting, aircraft tracking, assembly line manufacturing systems, and quality control. We present a hybrid system for such tasks. The technique has been simulated on a computer using numerical algorithms and is successful in many situations. For implementation, an ideal system using optical components is presented. This hybrid system employs three main subsystems which are combined in such a way as to compensate for each other's drawbacks yet enhance each other's virtues. The first is a velocity correlation subsystem which correlates two adjacent frames in a sequence of image frames. The resultant velocity correlations are searched to find the potential velocity profiles at which an object may be moving. These velocity profiles are then processed by the multi-frame mean subsystem, which performs a geometric (or arithmetic) mean operation on the image frames. The frames are displaced according to the selected velocity profiles, thereby aligning the object across the given frames for detection. Algorithms have been developed and tested to perform this technique on selected databases. Algorithms to synthesize test images have also been developed, and the results are presented.
Many target tracking methods rely on the use of centroids as feature points. In several applications, such as the tracking of vehicles on roadways, the appropriate target paths are known a priori. Based on this observation, an efficient centroid computation method is presented and analyzed in this paper.
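For reference, the basic intensity-weighted centroid that such methods build on can be written as follows; this is an illustrative sketch, not the paper's optimized method:

```python
import numpy as np

def intensity_centroid(image):
    """Intensity-weighted centroid (row, col) of a gray-level or binary
    target image; the standard feature point used by centroid trackers."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total
```

Exploiting a priori path knowledge, as the paper proposes, amounts to restricting where this computation is evaluated rather than changing its definition.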
A new method for target tracking is presented. The centroid of moving target images from a FLIR imaging sensor is tracked. The position of the centroid and the offset of the centroid between two consecutive images are taken as the measured variables. Their measurement noise statistics are analyzed and can be expressed in terms of video noise characteristics. The offset measurement noise is shown to be autocorrelated and is accounted for in the new tracking model. Experimental results show that subpixel tracking accuracy is achieved.
A laser vision sensor has been developed to enable range measurement and identification of targets through flames, smoke, and fog, in which they are invisible to the human eye. This vision sensor employs a 10.6-micrometer-wavelength carbon dioxide laser, chosen for its long wavelength. The target is scanned two-dimensionally by the laser beam, directed by a pair of galvanometer mirrors, to produce the target image and measure the range of the target. The laser beam, amplitude-modulated at 5 MHz with an electro-optic modulator, is projected onto a target, and the reflected beam is detected by a cadmium mercury telluride detector. The phase difference between the projected and reflected light signals is used to provide range data up to 30 m. An indoor test was carried out with a 1-cubic-meter box in which flames, smoke, and fog can be generated. The laser beam is projected through this box, and targets behind the box are detected. The reproduced image is sufficient for identification through flames, smoke, and fog.
An IR spatial filter is proposed for use in a focal plane. This filter locally screens the IR array wherever the IR signal is accompanied by a signal in the visible region (only noisy signals or jamming interferences are assumed to be accompanied by visible light). The filter discussed here is based on a photosensitive spatial light (IR) modulator that abruptly diminishes its IR transmittance when (and where) the visible light intensity exceeds the set threshold. In the experiments, the attenuation of interference was about one order of magnitude.
The problem addressed in this paper is that of pointing a transmitted laser beam at a point offset from a beacon on the target. The optical tracking system views a beacon on the receiving vehicle. Turbulence-induced angle-of-arrival fluctuations and beam wander are important in an atmospheric tracking system because of the resultant power fading and degradation of vehicle control accuracy. In this paper, we present preliminary experimental results on the effects of beam wander and angle-of-arrival fluctuations and on the efficiency of their cancellation, using a dissector-based tracking system and reflectors of different construction on the target.
Based on an analysis of the mechanism and features (i.e., parallelism, fuzziness, and association) of human pattern recognition, as well as various simulation approaches, this paper proposes the scheme of an optic-electric hybrid parallel recognition system (OEHPPRS) for simulating the mechanism of human pattern recognition. The OEHPPRS is composed of six parts: an optical analogue processor, a digital image processor, a recognizer, a knowledge base, an optical matcher, and a main control program.
Area correlation tracking (ACT) is an effective solution for tracking targets that have neither prominent features nor high contrast with the background. The essential step in area correlation tracking is to find the position of best fit between the reference image of the terrain surrounding the desired target and the real-time scene acquired by an imaging sensor. The image matching is accomplished by the basic mathematical correlation coefficient algorithm (CCA), the mean absolute difference (MAD) algorithm, or other algorithms derived from these. The reference image has to be updated very frequently to circumvent the problems associated with image growth, aspect changes, rotation, etc. Otherwise the image tracking algorithm may diverge with time, thereby losing the track. This paper addresses the real-time implementation of area correlation algorithms for an on-board application. The implementation poses challenges because of the very large number of integer multiplications required by the correlation coefficient algorithm. In view of the space and power constraints in on-board applications, the realization becomes all the more complex. To realize the system, a parallel pipelined architecture is adopted. Very-high-speed arithmetic devices are used for computations, programmable logic devices for the high-speed control, and a dual-microprocessor-based system for overall control. This hardware has been evaluated by integrating it with a full-fledged imaging seeker in captive flight trials. The results are presented.
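The MAD criterion mentioned above is straightforward to state. The brute-force reference sketch below searches the whole scene; real-time implementations restrict the search window and use pipelined hardware of the kind the paper describes:

```python
import numpy as np

def mad_match(scene, ref):
    """Find the best-fit position of the reference image in the real-time
    scene using the mean absolute difference (MAD) criterion."""
    sh, sw = scene.shape
    rh, rw = ref.shape
    best, best_pos = np.inf, (0, 0)
    for i in range(sh - rh + 1):
        for j in range(sw - rw + 1):
            mad = np.mean(np.abs(scene[i:i + rh, j:j + rw] - ref))
            if mad < best:
                best, best_pos = mad, (i, j)
    return best_pos, best
```

Unlike the correlation coefficient algorithm, MAD needs no multiplications in its inner loop, which is one reason it is attractive under on-board space and power constraints.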
The three Fine Guidance Sensors (FGSs) on board the Hubble Space Telescope have been operated extensively since the observatory was launched in April 1990. The FGSs, each capable of measuring angles as small as 0.003 arc-seconds (15 nanoradians), provide the fine pointing information required by the Space Telescope's pointing control system, and are intended to serve as astrometry instruments. On-orbit data have shown that the acquisition, pointing, and tracking performance of the FGSs in most cases meets, and in some cases exceeds, requirements. The versatility of the FGS digital control electronics in adapting to the unexpected conditions imposed on the sensors by the telescope's spherical aberration and by solar panel jitter will be discussed. Both on-orbit tests and analytical studies give encouragement that the FGSs can accommodate the current telescope characteristics. Improvements to guide star acquisitions within the FGSs and to target acquisitions within the science instruments have been accomplished through internal distortion calibration of each FGS and alignment calibration between sensors. Techniques used in the calibration process and the resulting improvements in acquisitions will be presented.
The Integrated Beam Control Demonstration (IBCD) brassboard for the Advanced Beam Control System (ABCS) rapidly directs a laser beam over a wide field of view with high precision. The pointing control system consists of four fast steering mirrors and a visible-wavelength alignment sensor, coordinated by a central computer. This paper describes the IBCD pointing control system requirements and its design, and presents the results of recent performance tests.
This paper describes the design, analysis, and simulation studies of the pointing control system of a gimballed large-diameter (3.5 m) ground-based telescope, designed to meet stringent requirements. The gimbal forms Alt-Alt, Alt-Alt-Azimuth, and Alt-Azimuth are compared in the concept selection. Structural mode shapes derived from Nastran models are included in the structural analysis and performance predictions. Pointing jitter prediction models include seismic and coolant-flow power spectra. Worst-case tracking scenarios are discussed, with suitable feedforward arrangements. Active secondary mirror mount options, as well as beam alignment options, are considered.
Recent advances in control systems and sensors allow construction of an inexpensive yet high-performance orbiting observatory to collect data at ultraviolet wavelengths between 1150 angstroms and 3000 angstroms. The Deep Ultraviolet Explorer satellite (DUVE) will obtain all-sky imagery at various broad-band wavelengths, high-resolution images, and spectra across the UV region. The DUVE program offers substantial performance advantages over current space-based observatories.
A sensor mount, used in conjunction with a wide-coverage lens, is described that uses translational motion to 'point' a sensor. This new 'X-Y translation gimbal' replaces more traditional rotational gimbals or pan-and-tilt mounts in air-launched weapons and air vehicles as well as in land applications. In a typical implementation, a 1/2-inch or 1/3-inch CCD TV camera is placed at the focal plane of a lens with large field coverage. As the TV camera is moved across this focal plane by means of optical-bench-type translation rails (linear positioners), the camera images different small portions (the field of view, FOV) of the larger total scene (the field of regard, FOR) imaged by the lens.
This paper describes the simulation methods and results used in the design of the control system for a lightweight spacecraft with multi-sensor suites. The design example uses several highly agile gimballed sensor payloads which can be used for the acquisition and tracking of ballistic missiles, both below and above the horizon. Using a control-system-based simulation software package interfaced with a multi-body dynamics program, a tool was built to aid in the design of multi-body spacecraft. This tool allows the controls engineer to rapidly input the dynamic model of a multi-body system by simply entering the mass, geometry, and connection properties. The dynamic/structural and disturbance models can be updated continually as the hardware matures. One of the ways the simulation model is being used is in determining the effects of structural bending and induced disturbances on the sensor focal planes. Simulation results can be used to aid the designer in determining the following key design requirements: actuators, fast steering mirror (if any), command generator, lowest frequency mode, attitude knowledge, etc. In addition, a star map simulation program was developed that can be used to determine star tracker capabilities and requirements for a given orbital inclination and altitude. These simulation packages provide the design engineer with high-fidelity tools and extremely rapid design turnaround capability.
TASAT is a complete end-to-end system simulation of tracking and pointing systems. It can currently model ground-based (GB), space-based (SB), and kinetic energy weapon (KEW) systems at a very high level of fidelity to assess system performance and design tradeoffs. It is primarily a time-domain analysis tool, but it can also perform frequency-domain analysis for performance and stability analysis. TASAT was built as a modular set of interacting routines that permit much more than end-to-end analysis. Specifically, subsystem and even component level analyses are available. The code treats all aspects of tracking and pointing systems using realistic, anchored imagery in a multiwavelength simulation. Some of the functions modeled include orbit propagation or launch trajectories, image rendering with high fidelity scattering calculations, atmospheric or optical blur point-spread functions (PSFs), image formation via convolution, realistic focal plane sensors including dead bands, sensor noise, and analog-to-digital conversion, and control system response. For GB applications, the atmospheric model is a novel treatment of the time-averaged PSF after application of an adaptive optics system. Also, atmospheric tilt is modeled exactly. The code has applications beyond GB, SB, and KEW systems. It will treat imaging systems, tactical and strategic surveillance systems, and radar range gating. The paper provides an overview of the simulation architecture and presents results from analyses of each of the principal systems modeled in TASAT.
The design of an imaging electro-optical seeker for a hit-to-kill missile using autonomous on-board guidance is discussed. A high-fidelity electro-optical seeker simulation developed at Ball Aerospace has been used to study the performance of various track algorithms as a function of seeker and sensor design parameters. Methods of accurately simulating seeker performance without sacrificing computational efficiency are shown. Optimum point spread function and track algorithm selection as a function of signal-to-noise ratio are derived. The effect of sensor performance parameters such as gain nonuniformity and operability is quantified.
Intercepting an incoming ballistic missile with another missile is a well-known control problem. Making such an intercept in a head-on configuration is, however, a more difficult task with traditional techniques such as proportional navigation. A head-on intercept is desirable under some conditions since it maximizes the probability of destroying the incoming missile warhead. This paper presents a novel approach to this problem using fuzzy logic control methods. The approach uses two inputs and a fuzzy logic inference scheme to generate the control output. The fuzzy inputs are the respective angles of the interceptor and target velocity vectors relative to the line of sight between the two missiles. The output is the change in the interceptor acceleration direction to accomplish the desired intercept. For a realistic simulation of missile kinematics and radar measurement errors, this fuzzy logic system guided the interceptor to accurate head-on missile intercepts. Over a wide range of incoming missile parameters, it achieved intercepts to within one meter, deviating from head-on by less than one degree. Details of the approach used and typical performance results of the model will be presented.
A number of tracking and pointing applications require extremely precise referencing of the optical line of sight (LOS) relative to some small portion of the vehicle to be tracked. Since the referencing must be performed in a vehicle fixed coordinate system and the optical image is degraded due to disturbances such as atmospheric blurring, sensor noise, and diffraction, the referencing becomes quite difficult. These degradations, along with potentially coarse spatial quantization of the optical image [large pixel size driven by signal-to-noise ratio (SNR) considerations], also limit the ability of a human operator to interact with the optical system in real time to control the LOS. Several concepts to determine the real-time LOS control (vehicle intensity moments, neural networks, optical correlators, etc.) have been suggested in past studies, but generally have proven insufficiently sensitive or too complex to implement in a real-time system. The preliminary concept presented here centers on using a correlation tracker combined with a precomputed image sequence as a straightforward means to maintain a precision LOS. The concept employs the high SNR image within the correlation tracker reference map to make the relatively low bandwidth LOS corrections required. The corrections are determined by correlation of the tracker reference map imagery with a precomputed image sequence and thus provide the accuracy associated with the high SNR map image and high SNR precomputed image sequences. [The satellite tracking problem provides the tracker with viewing/aspect angle geometry, which is generally deterministic, within the uncertainty bounds of ephemeris and satellite attitude information, and thus stimulates the use of precomputed (simulated) imagery.] The precomputed image sequence provides the LOS control through registration of the desired vehicle fixed coordinate on the image with a fiducial point on the image array. 
We will discuss the theoretical basis, potential advantages, implementation, and performance [as determined by Time-Domain Analysis Simulation for Advanced Tracking (TASAT)] of the concept.
The Naval Research Laboratory is studying the performance versus design tradeoffs for infrared seekers used for the acquisition of sea surface targets. The wide variation in background level, target contrast, target size, and background clutter requires an optimized sensor design and adaptive real-time signal processing techniques to detect and acquire the target. This paper describes generic sea surface target infrared signature characteristics, and the signal processing techniques used to acquire these targets.
British Aerospace (Systems & Equipment) Ltd (BASE) has been working in the field of automatic electro-optical tracking (Autotrack) systems for more than 12 years. BASE Autotrack systems carry out the automatic detection, tracking and classification of missiles and targets using image processing techniques operating on data received from electro-optical sensors. Typical systems also produce control data to move the sensor platform, enabling moving targets to be tracked accurately over a wide range of conditions. BASE Autotrack systems have been well proven in land, sea and air applications. This paper discusses the relevance of Autotrack systems to modern high-technology warfare and charts the progress of their development with BASE, both with respect to current products and active research programs. Two third generation BASE Autotrack systems are described, one of which provided a sophisticated air-to-ground tracking capability in the recent Gulf War. The latest Autotrack product is also described; this uses ASIC and Transputer technology to provide a high-performance, compact, missile and target tracker. Reference is also made to BASE's research work. Topics include an ASIC correlator, point target detection and, in particular, the use of neural networks for real-time target classification.
In optical systems, in addition to their imaging function, some kinds of prisms (e.g., cube prisms) can be used for target tracking or image stabilization. Prism systems have therefore been widely applied in various optical systems. To simplify the optical system, a cube prism assembly can act simultaneously as a tracking and stabilizing element. This article deals with a two-cube prism assembly in collimated light, which provides LOS stabilization and target tracking when a finite angular perturbation is exerted on the optical device. Because there is no simple relation between the change of the image space and the rotations of the prisms in the case of a finite angular perturbation, a microprocessor is used to control the rotations of the prisms. Such a microprocessor-controlled LOS stabilizing and tracking experimental device has been manufactured, consisting of a telescopic optical unit, a two-cube prism assembly, a microprocessor (Z80) data processing and control unit, gyro attitude sensing and data sampling units, and a stepping motor actuating unit. The experimental results show that the two-cube prism assembly can simultaneously meet the demands of target tracking and LOS stabilization in the case of a finite angular perturbation. On the other hand, to achieve real-time LOS stabilization, a higher-speed microprocessor and data sampling units are needed.
Plane mirrors are commonly used to steer, rotate, and stabilize the image and line-of-sight (LOS) of optical systems. Plane-mirror optical kinematics is the study of fixed, flexured, and gimballed mirrors in this application. LOS pointing, stabilization, image mapping, image derotation, boresight coefficient determination, and mechanical tolerance analysis are all areas of plane-mirror optical kinematics. Specific problems in these areas have been addressed in the literature by a wide variety of analytical techniques. None of these techniques, however, has been generalized for application to a wide class of problems. A unified analytical framework for plane-mirror optical kinematics is presented in this paper. This methodology is based on a new optical kinematic construction, the line-of-sight reference frame. An LOS reference frame is a unit vector triad that defines the LOS and the associated image plane. The use of optical and vector basis transformations is central to LOS reference frame analysis. These transformations often look similar, but are conceptually unrelated; a thorough understanding of each is required. Both are discussed in detail, and a direct comparison is made. The use of LOS reference frames as a general optical kinematics tool is outlined. The pertinent LOS reference frames of an aerial photography system are constructed as an example.
One subsystem critical to the performance of a precision electro-optical line-of-sight (LOS) pointing system is a wide-band inertial stabilization reference. This paper compares, in terms of relative performance of LOS stabilization in the presence of vehicle jitter, three inertial LOS stabilization reference mechanizations for space-based optical systems. The three mechanizations are: an inertially stabilized platform, the Inertial Pseudo-Star Reference Unit (IPSRU) under development at Draper Laboratory; a device called the Optical Reference Gyro (ORG), also developed at Draper Laboratory; and a strapdown wide-band inertial sensor assembly. Each of the three stabilization reference mechanizations generates a collimated alignment beam that is injected into the entrance aperture of the optical system. In the stabilized platform mechanization, the alignment beam emanates from a platform inertially stabilized from vehicle jitter in two axes, and thus the alignment beam becomes a jitter-stabilized pseudo-star. An alignment loop closed around the pseudo-star image and a steering mirror in the optical path stabilizes the LOS against vehicle jitter. The ORG alignment beam projects from the gyro rotor, which is decoupled from case motion and is effectively inertially stabilized. The ORG spin-speed noise is compensated with phase-lock technology. In the strapdown mechanization, the alignment beam source is hard-mounted to the vehicle. Inertial measurement of the local vehicle motion is fed forward, open loop, to a steering mirror in the optical path to compensate for alignment beam jitter.
Stabilization systems use gyroscopes, typically mounted alongside the sensing or imaging devices being isolated from external rotations in inertial space. The gyroscopes measure the residual angular motion of the stabilized member of the platform. The better the stabilization, the smaller the sensed residual motion. Conceivably, the residual motion (angular jitter) could be as small as the noise in the gyroscope outputs, which represents a low signal-to-noise-ratio case. Under such conditions, lower-noise gyroscopes would be required to further reduce angular jitter, at considerably higher expense. Alternatively, the performance of the stabilization system can be significantly enhanced through noise-reducing optimal filtering techniques. The latter approach stabilizes the platform using angular velocity estimates that are more accurate than the raw measurements, without resorting to more precise and costly lower-noise gyroscopes. This paper presents a new Kalman filtering technique that reduces the mean-square error (MSE) between actual and estimated angular velocity values by an order of magnitude (compared to the MSE of the direct measurements), even under extremely low signal-to-noise-ratio conditions. The electronically improved angular motion measurements can be fed into the platform stabilization control system (instead of the raw measurements), considerably reducing stabilization jitter.
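The principle of filtering noisy gyro outputs can be sketched with a minimal scalar Kalman filter (an illustrative simplification assuming a random-walk rate model, not the paper's filter; all parameter values are ours):

```python
import random

# Illustrative sketch: a scalar Kalman filter smoothing noisy rate-gyro
# readings of a slowly varying angular velocity, modeled as a random walk
# with process noise q, observed with sensor noise variance r.

def kalman_smooth(measurements, q=1e-6, r=1.0):
    """Return filtered estimates for a random-walk state observed in noise."""
    x, p = measurements[0], r    # initialize from the first measurement
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                # predict: state is a random walk
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the new gyro reading
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Example: a constant true rate buried in unit-variance noise. After the
# filter converges, its mean-square error falls far below that of the raw
# measurements, mimicking the order-of-magnitude MSE reduction described.
random.seed(0)
true_rate = 0.5
zs = [true_rate + random.gauss(0.0, 1.0) for _ in range(2000)]
xs = kalman_smooth(zs)
mse_raw  = sum((z - true_rate) ** 2 for z in zs) / len(zs)
mse_filt = sum((x - true_rate) ** 2 for x in xs[500:]) / len(xs[500:])
```

The small process-noise variance q encodes the assumption that the residual platform motion varies slowly relative to the gyro noise, which is exactly the low signal-to-noise regime the abstract describes.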
This paper describes an algorithm for determining the attitude (yaw, pitch, and roll) of an imaging sensor relative to its attitude at a reference time. The inputs are the images at the two times and the relative sensor position; we assume the latter is provided by an inertial navigation system (INS) or other source. The algorithm was developed to meet the needs of passive ranging, where accurate estimates of attitude are crucial to performance and are difficult to obtain, particularly when the sensor and INS are not co-located. On the other hand, it is not difficult to obtain estimates of relative motion to the required accuracy.
Positional control of components and steering of beams to high degrees of accuracy is becoming increasingly important and tightly specified, particularly with the advent of optical intersatellite links. Queensgate have developed piezo-electric actuators which are qualifiable for use in space and can comply with the stringent constraints on mass, power, and volume; they provide travel of up to 100 µm, repeatable to less than a nanometre. Drifts have been measured in the region of 5 nm/°C, and non-linearities are consistently measured in the region of 0.3 to 0.4%. The capacitance sensing technology used for such stable positioning can also be deployed separately for monitoring the positions of assemblies. Novel methods for the sensors and the processing circuitry have been developed extensively over the past year to produce capacitance sensors which will sense or measure gaps of up to 500 µm with a resolution of less than 1 nm.
Typically, beam steering mirrors have been custom designed to meet the specific performance and form-factor requirements of each new program. In many cases, the requirements are stressing in just one or two areas, such as bandwidth or travel. Hughes has developed and tested a universal beam steering mirror design, called the Hughes Beam Steering Mirror (HBSM), that simultaneously provides large angular mechanical travel in two axes (+/- 3.5 degrees), high bandwidth (1 kHz), high acceleration (1200 rad/sec^2), and low noise (< 100 nrad), with a 5-inch clear aperture and a reactionless mechanism in a compact envelope (< 5 in. x 5 in. x 7 in.). These performance parameters combined meet the demanding agile steering mirror requirements of many SDI programs, while satisfying the requirements of systems that may need only a subset of these parameters. One of the two key distinctions of the HBSM is the cross-blade flexure, which provides low stiffness in the two desired tilt degrees of freedom while providing very high stiffness in all other degrees of freedom. This allows a simple mirror control law that does not require active control over the other degrees of freedom, and provides infinite cycle life over a +/- 3.5-degree travel range. The second unique design distinction is the slotted-bobbin voice coil actuator. This design was developed to counter the effects of a metallic bobbin when high bandwidth, displacement, and acceleration are simultaneously required. This paper presents the HBSM design, the performance analysis for the cross-blade flexure, the design theory for the slotted bobbin, and performance data on the HBSM mechanism.
A lightweight, stable optical assembly, which combines the functions of precision mirror pointing and occulting of stray light into a single package has been developed. This assembly, known as the Mirror and Occulter Mechanism (MOM), is a critical component of the Ultraviolet Coronagraph Spectrometer (UVCS), to be flown on the European SOHO spacecraft in 1995. This paper describes the two-part mechanism and control system design and discusses development testing and life testing activities.
The Relay Mirror Experiment (RME) required continuous illumination of its satellite by ground-based lasers. To maintain tracking, the system relied on an array of six-inch diameter hollow retroreflectors, mounted on the lower deck of the spacecraft. The retroreflectors are tailored to the mission, including compensation for their orbital velocity. The array has a lidar cross section of up to 3,000 square kilometers. This paper describes the design, construction, testing, and use of this retroreflector array.
This paper presents improvements to the accuracy of a laboratory model, introduced in a previous work, for simulating the attitude control and pointing of an optical instrument mounted on a platform and connected to the Space Station, or another space facility, via a tether 2 to 10 km long. The two-dimensional model of this system was realized using a small platform equipped with a DC servo-motor and a screw bearing, floating at a small inclination angle on an air table and connected, like a pendulum, through a tether to a second servo-motor on the wall. In the previous work, the attitude control was based on tracking two points fixed on the platform model with one CCD camera and moving the attachment point on it. A new experimental apparatus, based on two CCD cameras, an optical system of mirrors, and a He-Ne laser beam, has been assembled in order to better simulate the control system for a telescope mounted on the platform. Tracking is realized via a computer-based vision system which acquires and locks onto a laser spot projected onto a screen representing the field of view of the telescope. The control loop has been optimized taking into account the disturbances produced by simulating the tether dynamics, using a second motor which moves the wall tether end according to an appropriate law, and by reproducing the effect of the telescope's slewing manoeuvre on the dynamics of the system.
The motion of ground vehicle targets after a ballistic round is launched can be a major source of inaccuracy for small (handheld) anti-armour weapon systems. A method of automatically measuring the crossing component to compensate the fire control solution has been devised and tested against various targets in a range of environments. A photodetector array aligned with the sight's horizontal reticle obtains scene features, which are digitized and processed to separate target from sight motion. Relative motion of the target against the background is briefly monitored to deduce angular crossing rate and a compensating lead angle is introduced into the aim point. Research to gather quantitative data and optimize algorithm performance is described, and some results from field testing are presented.
A method has been developed for numerically calculating the optical transfer function appropriate to any type of image motion and vibration, including random motion. Here the numerical calculation is compared to experimental measurement, and the close agreement justifies its implementation in restoring images blurred by any type of image motion. In addition, statistics are described for limiting resolution as a function of relative exposure time for low-frequency vibrations involving random blur. These statistics can be incorporated into target acquisition probability calculations.
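The numerical idea can be sketched as follows (an assumed formulation based on the standard motion-blur OTF, not the authors' code): for image motion x(t) during the exposure, the OTF at spatial frequency f is the time average of exp(-i 2π f x(t)), which for uniform linear motion should reproduce the classical sinc-shaped MTF.

```python
import cmath
import math

# Illustrative sketch: OTF for arbitrary sampled image motion xs during
# the exposure, as the time average of exp(-i*2*pi*f*x(t)).

def motion_otf(xs, f):
    """OTF at spatial frequency f for sampled image motion xs (same units)."""
    return sum(cmath.exp(-2j * math.pi * f * x) for x in xs) / len(xs)

# Check against the analytic result for uniform linear motion with total
# smear d: MTF(f) = |sin(pi*f*d) / (pi*f*d)|.
d = 0.01                                  # total smear on the focal plane
xs = [d * k / 9999 for k in range(10000)]  # uniformly sampled linear motion
f = 50.0                                  # spatial frequency (cycles/unit)
mtf_numeric  = abs(motion_otf(xs, f))
mtf_analytic = abs(math.sin(math.pi * f * d) / (math.pi * f * d))
```

Because the time average accepts any sampled trajectory, the same routine handles sinusoidal or random vibration records, which is what makes the approach applicable to arbitrary motion types.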
On the basis of the theory of conjugation for reflecting prisms, this paper analyzes various prisms used to stabilize the line-of-sight of an optical system and presents a method for line-of-sight stabilization with a mirror assembly in convergent light. Line-of-sight stabilization formulae for this mirror assembly are derived. The mirror assembly can provide line-of-sight stabilization for optical systems operating over the entire wavelength range.
Image stabilization techniques have been widely applied in many weapon systems, but complex structures make it difficult to obtain a stabilized image in small systems. This article presents an image stabilization method that gimbals a single prism to achieve a stabilized image in two degrees of freedom in convergent light.
Real-time hardware developed for image processing applications such as image enhancement, segmentation, image registration, and pattern recognition cannot be thoroughly debugged or analyzed for performance using conventional test equipment. For example, image registration hardware may fail intermittently, or the registration point may drift even when the scene and the sensor are static, with no way of knowing whether the malfunction is due to the input data or to noise glitches. Therefore, evaluating real-time image processing hardware requires acquiring image sequence data in real time and also capturing the status of the tracker in real time. Test equipment such as logic analyzers and microprocessor development systems does not have the capability to acquire and store image sequences, and general-purpose data acquisition systems on IBM PC compatible computers are not suitable because the required data rate is about 500 kbytes/sec. The design of an interface card achieving this data rate using an IEEE-488 (GPIB) interface is described. Results on the evaluation of image processing hardware using this card are also presented.
Autonomous fire-and-forget weapons have gained importance for achieving an accurate first-pass kill by hitting the target at an appropriate aim point. The centroid of the image presented by a target in the field of view (FOV) of a sensor is generally accepted as the aimpoint for these weapons. Centroid trackers are applicable only when the target image is of significant size in the sensor's FOV but does not overflow it. As the range between the sensor and the target decreases, however, the image of the target grows and finally overflows the FOV at close ranges, so that the centroid point on the target keeps changing, which is undesirable. Moreover, the centroid need not be the most desired or vulnerable point on the target. For hardened targets like tanks, proper aimpoint selection and guidance down to almost zero range are essential to achieve maximum kill probability. This paper presents a centroid tracker realization. Because the centroid offers a stable tracking point, it can be used as a reference for selecting the proper aimpoint. The centroid and the desired aimpoint are tracked simultaneously to avoid jamming by flares and to handle the problems arising from image overflow. Thresholding the gray-level image to a binary image is a crucial step in a centroid tracker; different thresholding algorithms are discussed and a suitable algorithm is chosen. The real-time hardware implementation of the centroid tracker with a suitable thresholding technique is presented, including the interfacing to a multimode tracker for autonomous target tracking and aimpoint selection. The hardware uses very high speed arithmetic and programmable logic devices to meet the speed requirement and a microprocessor-based subsystem for system control. The tracker has been evaluated in a field environment.
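The threshold-then-centroid step can be sketched in a few lines (our own simplification for illustration, not the paper's hardware realization or its chosen thresholding algorithm; a simple global-mean threshold stands in here):

```python
# Illustrative sketch: threshold a gray-level image to binary, then take
# the centroid of the above-threshold pixels as the tracking point.

def mean_threshold(image):
    """A simple global threshold: the mean gray level of the frame."""
    pixels = [px for line in image for px in line]
    return sum(pixels) / len(pixels)

def binary_centroid(image, threshold):
    """Centroid (row, col) of pixels at or above the threshold."""
    rows = cols = count = 0
    for r, line in enumerate(image):
        for c, px in enumerate(line):
            if px >= threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None              # no target pixels: declare loss of track
    return rows / count, cols / count

# 5x5 frame with a bright 2x2 "target" on a dark background.
frame = [[10] * 5 for _ in range(5)]
frame[1][2] = frame[1][3] = frame[2][2] = frame[2][3] = 200
t = mean_threshold(frame)
centroid = binary_centroid(frame, t)   # near the center of the bright patch
```

In practice the threshold choice dominates robustness, which is why the paper compares several algorithms before selecting one for the hardware.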
A tracking system is presented which uses two FM frequency-versus-radius reticles, allowing the tracking of multiple targets or large targets. The results of computer simulations of both single- and double-reticle tracking systems are presented, and the performance of the two systems is compared.