DARPA and Lockheed's Pilot's Associate (PA) represents one of the largest and most complex artificially intelligent systems constructed to date. Its architecture of five modular, cooperative expert systems poses a knowledge engineering problem unique in its scope, though not in its basic nature. The knowledge bases for each of PA's modules will be very large, constantly changing (in response to new tactics and new technological capabilities), and highly specialized for the task of the specific module. For efficiency, each module must contain only the knowledge necessary for its task, yet for cooperation, each system's knowledge must be consistent with the others'. Machine learning approaches hold the promise of greatly reducing knowledge acquisition and knowledge engineering time and of making the entire PA system more flexible, more accurate, and more consistent. We present the results of a three-year program investigating an Explanation-Based Learning approach to acquiring new plans from a simulator-based learning scenario and then propagating this knowledge to two of the five PA modules: as a tactical plan keyed to changing world states for the Tactics Planner module, and as a list of pilot information needs for the dynamic display configuration algorithm used in the Pilot-Vehicle Interface module.