This paper introduces the concept of using simulation for both plan tracking and for state estimation and prediction. Given a set of objectives, a military commander must devise a sequence of actions that transforms the current state into the desired one. The need to do this faster than real time, so that many courses of action can be considered, motivates us to investigate modeling techniques that explicitly produce such courses of action. This class of problem can be modeled as a Markov decision process (MDP), whose principal solution method is stochastic dynamic programming. In this paper we consider the extension of an MDP model of air operations to the partially observed case.
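To make the MDP framing concrete, the following is a minimal sketch of stochastic dynamic programming in the form of value iteration over a toy MDP. The states, actions, transition probabilities, and rewards here are illustrative placeholders chosen for this sketch, not quantities taken from the air operations model in the paper.

```python
# Toy MDP: transitions[s][a] is a list of (probability, next_state, reward).
# The three states loosely stand for stages of progress toward an objective;
# state 2 is an absorbing goal state. All numbers are hypothetical.
transitions = {
    0: {"advance": [(0.8, 1, 5.0), (0.2, 0, -1.0)],
        "hold":    [(1.0, 0, 0.0)]},
    1: {"advance": [(0.9, 2, 10.0), (0.1, 0, -2.0)],
        "hold":    [(1.0, 1, 0.0)]},
    2: {"hold":    [(1.0, 2, 0.0)]},
}

def value_iteration(transitions, gamma=0.95, tol=1e-6):
    """Stochastic dynamic programming: iterate the Bellman optimality
    backup until the value function converges, then extract a greedy
    policy (a course of action for every state)."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Bellman backup: best expected reward-to-go over actions.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {
        s: max(actions, key=lambda a: sum(
            p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
        for s, actions in transitions.items()
    }
    return V, policy

V, policy = value_iteration(transitions)
```

The resulting policy maps each state to an action, which is exactly the "course of action" object the paper is after; the partially observed extension replaces the known state with a belief distribution over states, at which point this fully observed backup no longer applies directly.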