The development of autonomous systems presents many challenges, including safety, repeatability of results, and the high cost of system testing. These challenges are compounded by the nature of autonomous systems: they are composed of numerous interconnected components spanning a broad range of disciplines, including mechanical engineering, electrical engineering, computer vision, computer science, networking, and communications. By necessity, each component is often developed concurrently with one or more others. Because of the close coupling of components, a change in one component can have wide-ranging systemic effects. In this paper we describe a set of processes and tools that allow a distributed team to coordinate parallel development of software for autonomous systems without introducing system instability or performance regression. These include processes and tools for sharing software configurations, deploying software to autonomous system computers that lack internet access, building software in parallel across multiple machines, verifying software configurations, and automatically enforcing a specified level of quality using static analysis, unit tests, and regression tests. We build on existing best practices to ensure a stable baseline software configuration, which is necessary for quantitatively assessing software changes without impractically time-consuming data collection and testing. We incorporate best practices from the software development and robotics communities, as well as from military requirements for fielding systems. These practices and tools increase confidence that the results of each test case capture the desired data because the software is properly configured, and that a configuration can be merged successfully into the baseline system if warranted by the results.
The processes described allow parallel development paths to proceed simultaneously while minimizing unforeseen and difficult-to-resolve conflicts.