Determining the position, attitude, and aimpoint of a small smart weapon as a function of time is extremely difficult and expensive. This paper describes an accurate, low-cost, portable method for collecting flight imagery and extracting position, attitude, and aimpoint information. The system consists of an onboard CCD camera, a transmitter, and associated ground-based equipment. A fire pulse from the weapon computer asynchronously triggers a snapshot of the aimpoint. The ground-based equipment comprises video cassette recorders, a real-time digital disk recorder, and GPS-derived IRIG-B timecode encoders. The 60 Hz imagery is encoded with IRIG-B timecode as it is received and then recorded. The stored imagery is transferred to a computer workstation for digital image processing, which includes individual field and line correction to remove electronic distortions and timing ambiguities. Optical distortions are corrected pixel by pixel, using pixel maps either to 'rubber-sheet' the imagery or to build camera models. Images, grouped into several blocks, may be imported directly into Orthomax® for further processing, in which fiducials and ground control points are tagged manually to link image space to object space and a 'least squares bundle adjustment' is applied. An alternative method uses the City University Bundle Adjustment program to perform the necessary computations on all images as one large data set. A third, automated method, the Video Motion Modeling System, significantly reduces the labor and time involved in data processing. Results from all three methods are presented and compared in the following sections of this paper.
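The pixel-map 'rubber-sheeting' step described above can be sketched as a per-pixel lookup: each output pixel is filled from the source location given by a precomputed map. This is a minimal illustration only; the paper does not specify the map format, and the names `map_x` and `map_y` are hypothetical.

```python
import numpy as np

def rubber_sheet(image, map_x, map_y):
    """Warp an image with a per-pixel map (nearest-neighbor sampling).

    output[r, c] = image[map_y[r, c], map_x[r, c]]

    map_x and map_y are hypothetical names for the precomputed pixel maps
    mentioned in the text; real systems would typically interpolate rather
    than round to the nearest source pixel.
    """
    rows = np.clip(np.round(map_y).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(map_x).astype(int), 0, image.shape[1] - 1)
    return image[rows, cols]

# Identity map leaves the image unchanged; offsetting the map shifts it.
img = np.arange(16, dtype=float).reshape(4, 4)
mx, my = np.meshgrid(np.arange(4), np.arange(4))
warped = rubber_sheet(img, mx + 1, my)  # sample one column to the right
```

In practice the maps would be built once per camera from calibration imagery and applied to every field, which is what makes the pixel-by-pixel correction tractable at 60 Hz field rates.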
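The 'least squares bundle adjustment' mentioned above minimizes reprojection residuals between tagged image points and projected ground control points. As a hedged toy sketch of that idea (not the Orthomax or City University implementations), the example below recovers only a camera translation for a single image with a simplified pinhole model; the focal length `F`, the control points, and all parameter names are invented for illustration. A real bundle adjustment refines rotations, translations, and point coordinates jointly across many images.

```python
import numpy as np
from scipy.optimize import least_squares

F = 1000.0  # assumed focal length in pixels (hypothetical value)

def project(points_3d, t):
    """Pinhole projection of Nx3 points after translating the camera by t."""
    p = points_3d + t
    return F * p[:, :2] / p[:, 2:3]

def residuals(t, points_3d, observed_2d):
    """Reprojection error vector, flattened for the least-squares solver."""
    return (project(points_3d, t) - observed_2d).ravel()

# Synthetic ground control points and their images under a "true" translation.
pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 12.0],
                [0.0, 1.0, 11.0], [1.0, 1.0, 9.0]])
t_true = np.array([0.5, -0.3, 2.0])
obs = project(pts, t_true)

# Minimize the reprojection residuals starting from a zero guess.
fit = least_squares(residuals, x0=np.zeros(3), args=(pts, obs))
```

The same residual-minimization structure, extended to all images and all unknowns in one normal-equation system, is what allows the alternative method to process the entire data set at once.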