The Large Synoptic Survey Telescope (LSST) is planned to start construction in early 2009 and achieve first light in late 2012. The LSST Data Management System (DMS) has the responsibility to:
1) process the stream of raw images (15 TB/night) generated during observing to create and archive the nightly data products;
2) reprocess archived data products to incorporate pipeline improvements and generate longer-term data products;
3) provide a public interface that makes available all generated data products.
The DMS must perform these duties throughout the multi-decade lifetime of the survey and its data products. Computing hardware undergoes generational changes every 3 to 5 years, software engineering paradigms shift every decade, and astronomy data reduction and analysis algorithms evolve continuously. Thus, if the useful life of the LSST Data Products is even two decades, the raw data will be completely reprocessed at least 20 times with improved algorithms, the computing hardware on which this processing executes will be completely replaced at least 4 times, and the software engineering paradigm and software architecture will change completely at least once. Managing this evolution in the DMS will require strategies in all areas of LSST Data Management, including:
1) a layered system architecture;
2) stable interfaces preserving backward compatibility;
3) plug-and-play components for pipeline construction;
4) extendable data and metadata types for catalog construction;
5) open interfaces for resource registration and access;
6) provenance and preservation mechanisms.
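To make the third strategy concrete, a plug-and-play pipeline can be sketched as a stable stage interface plus a registry, so pipelines are assembled from configuration rather than hard-wired code. This is an illustrative sketch only; the names (`Stage`, `register_stage`, `build_pipeline`) and the dictionary-based data payload are assumptions, not the actual LSST DMS API.

```python
from abc import ABC, abstractmethod

# Registry mapping stable stage names to implementations.
# Swapping an implementation behind a registered name leaves
# existing pipeline configurations untouched.
_STAGE_REGISTRY = {}

def register_stage(name):
    """Register a Stage subclass under a stable, config-visible name."""
    def decorator(cls):
        _STAGE_REGISTRY[name] = cls
        return cls
    return decorator

class Stage(ABC):
    """Stable interface that every pipeline stage implements."""
    @abstractmethod
    def process(self, data):
        """Transform the data payload and return it."""

@register_stage("isr")
class InstrumentSignatureRemoval(Stage):
    # Hypothetical stage standing in for detrending/calibration.
    def process(self, data):
        data["isr_done"] = True
        return data

@register_stage("detect")
class SourceDetection(Stage):
    # Hypothetical stage standing in for source detection.
    def process(self, data):
        data["sources"] = []
        return data

def build_pipeline(stage_names):
    """Assemble a pipeline from registered stage names (e.g. from config)."""
    return [_STAGE_REGISTRY[name]() for name in stage_names]

def run(pipeline, data):
    """Run each stage in order, threading the data payload through."""
    for stage in pipeline:
        data = stage.process(data)
    return data
```

Because stages are looked up by name, an improved algorithm can be dropped in by registering a new class under the same name, which is how repeated reprocessing with evolving algorithms can coexist with stable pipeline definitions.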
This paper describes how we plan to employ these strategies and the expected benefits.