We have investigated the use of forward-looking infrared (FLIR) sensors to verify aircraft navigation information during approach and landing. Our research includes the development of an experimental primary flight display (PFD) integrated with a synthetic vision system (SVS). The effectiveness of a traditional SV display is limited by position and orientation errors in the navigation equipment, database limitations, and lack of knowledge of temporary obstacles. However, integrating information from the navigation system with an external FLIR sensor has the potential to increase the information available to the pilot, improving flight safety. In prior work, we developed software to correct aircraft orientation inaccuracies. Our algorithm locates the runway in a long-wave infrared (LWIR) image and uses the extracted runway location to validate and correct the SV system's estimate of aircraft orientation. Evaluations demonstrated that this orientation correction worked well when there were no position errors. However, uncorrected position inaccuracies introduce errors into the pitch and heading correction estimates as the aircraft approaches the runway. To address this problem, we have developed a new algorithm that separates the image effects of orientation errors from those of position errors, allowing our system to correct for both. We evaluated our system using LWIR video and navigation data recorded by test aircraft during runway approaches. Our results show significant improvements in correction accuracy using joint orientation and position estimation compared to orientation correction alone.
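The coupling between position and orientation errors described above can be made concrete with a small geometric sketch (ours, not from the paper): in the image, an uncorrected altitude error of Δh at range d from the runway threshold is indistinguishable from a pitch error of roughly atan(Δh/d), so the apparent pitch error grows as the aircraft closes on the runway.

```python
import math

def apparent_pitch_error_deg(altitude_error_m: float, range_m: float) -> float:
    """Angular offset (degrees) that an uncorrected altitude error induces
    in the perceived elevation of the runway threshold when the aircraft
    is range_m from it. Illustrative only; function name is hypothetical."""
    return math.degrees(math.atan2(altitude_error_m, range_m))

# A fixed 10 m altitude error masquerades as a growing pitch error on approach:
for range_m in (8000.0, 4000.0, 1000.0):
    err = apparent_pitch_error_deg(10.0, range_m)
    print(f"range {range_m:6.0f} m -> apparent pitch error {err:.3f} deg")
```

This range dependence is why an orientation-only correction degrades late in the approach, and why the abstract's joint estimation of position and orientation is needed to disentangle the two error sources.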