The automatic detection of runway hazards from a moving platform under poor visibility conditions is a multifaceted problem. Our general approach analyzes several frames of video imagery to determine the presence of objects. Since the platform is in motion during the acquisition of these frames, the first step in the process is to correct for platform motion. The next step is to extract the scene structure from the frames. To rectify the imagery, enhance its details, and suppress fog, we apply multiscale retinex followed by edge detection. In this paper, we concentrate on the automatic determination of runway boundaries from the rectified, enhanced, and edge-detected imagery. We examine the performance of edge-detection algorithms on images with poor contrast, and quantify their efficacy as runway edge detectors. Additionally, we define qualitative criteria for selecting the best edge output image. Finally, we identify an optimizing parameter for the detector that helps automate the detection of objects on the runway, and thus the entire hazard-detection process.
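The enhancement-then-edge-detection stage described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a grayscale frame, uses the standard multiscale retinex formulation (an average over several Gaussian surround scales of log(image) minus log(blurred image)) and a simple Sobel gradient magnitude as the edge detector; the surround scales and function names are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """Multiscale retinex: average over surround scales of
    log(image) - log(Gaussian-blurred image). Enhances local detail
    and suppresses the low-frequency veil typical of fog/haze.
    The sigmas here are illustrative, not the paper's values."""
    img = img.astype(np.float64) + 1.0  # offset to avoid log(0)
    out = np.zeros_like(img)
    for sigma in sigmas:
        surround = gaussian_filter(img, sigma)
        out += np.log(img) - np.log(surround)
    return out / len(sigmas)

def edge_magnitude(img):
    """Sobel gradient magnitude as a simple stand-in edge detector."""
    gx = sobel(img, axis=1)  # horizontal derivative
    gy = sobel(img, axis=0)  # vertical derivative
    return np.hypot(gx, gy)

# Synthetic low-contrast frame: a faint vertical boundary at column 32,
# standing in for a runway edge under poor visibility.
frame = np.full((64, 64), 100.0)
frame[:, 32:] += 5.0

edges = edge_magnitude(multiscale_retinex(frame))
```

In this sketch, the retinex step converts the weak 5-gray-level step into a contrast-independent log-domain step, so the subsequent edge detector responds to the boundary even though the raw contrast is poor.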