At GTE Laboratories, we are advancing the theory of connectionist learning architectures for real-time control while exploring their relationships to animal learning models, applications in manufacturing quality control, and VLSI implementations. We seek connectionist-network architectures with improved convergence rates and scaling properties, as assessed on simulated and actual control problems. Our primary focus is on extensions to reinforcement learning, including adaptive critics, feature/representation adaptation in multilayer networks, hybrid connectionist/conventional controllers, and modular networks for hierarchical control. We are also extending methods for system identification, or model learning, to include internal models learned using temporal-difference methods. We propose integrating reinforcement learning and model learning on the basis of their shared relationship to dynamic programming. Finally, we are working to resolve whether connectionist systems are best deployed as a complete control system or as components within a larger architecture.
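The temporal-difference methods referred to above can be illustrated with a minimal tabular TD(0) value-estimation sketch. This is a generic textbook formulation, not our architecture; the state names, reward values, and episode data below are purely illustrative:

```python
def td0_value_estimates(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0): after each transition (s, r, s'), move V(s)
    toward the bootstrapped target r + gamma * V(s')."""
    V = {}  # state -> estimated value, default 0.0
    for episode in episodes:
        for s, r, s_next in episode:
            # Terminal states (None) contribute zero future value.
            v_next = 0.0 if s_next is None else V.get(s_next, 0.0)
            td_error = r + gamma * v_next - V.get(s, 0.0)
            V[s] = V.get(s, 0.0) + alpha * td_error
    return V

# Toy two-state chain: A --(r=0)--> B --(r=1)--> terminal.
episodes = [[("A", 0.0, "B"), ("B", 1.0, None)]] * 100
V = td0_value_estimates(episodes)
```

On this chain the estimate for B approaches 1, and the estimate for A approaches gamma times the value of B, showing how TD updates propagate predicted outcomes backward through time without waiting for a final result.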