Real-time motion analysis would be very useful for autonomous undersea vehicle (AUV) navigation, target tracking, homing, and obstacle avoidance. The perception of motion is well developed in animals from insects to man, providing solutions to similar problems. We have therefore applied a model of the motion analysis subnetwork in the vertebrate retina to visual navigation in the AUV. The model is currently implemented in the C programming language as a discrete-time serial approximation of a continuous-time parallel process. Running on an IBM-PC/AT with digitized video camera images, the system can detect and describe motion in a 16 by 16 receptor field at the rate of 4 updates per second. The system responds accurately with direction and speed information to images moving across the visual field at velocities less than 8 degrees of visual angle per second at signal-to-noise ratios greater than 3. The architecture is parallel and its sparse connections do not require long-term modifications. The model is thus appropriate for implementation in VLSI optoelectronics.