LSST will have a Science Data Quality Assessment (SDQA) subsystem to assess the data products produced over the
course of its 10-year survey. The LSST will produce unprecedented volumes of astronomical data as
it surveys the accessible sky every few nights. The SDQA subsystem will enable comparisons of the science data with
expectations from prior experience and models, and with established requirements for the survey. While analogous
systems have been built for previous large astronomical surveys, SDQA for LSST must meet a unique combination of
challenges. Chief among them will be the extraordinary data rate and volume, which restricts the bulk of the quality
computations to the automated processing stages, as revisiting the pixels for a post-facto evaluation is prohibitively
expensive. The identification of appropriate scientific metrics is driven by the breadth of the expected science, the scope
of the time-domain survey, the need to tap the widest possible pool of scientific expertise, and the historical tendency of
new quality metrics to be crafted and refined as experience grows. Prior experience suggests that contemplative, off-line
quality analyses are essential to distilling new automated quality metrics, so the SDQA architecture must support
integrability with a variety of custom and community-based tools, and be flexible to embrace evolving QA demands.
Finally, the time-domain nature of LSST means every exposure may be useful for some scientific purpose, so the model
of quality thresholds must be sufficiently rich to reflect the quality demands of diverse science aims.
The Large Synoptic Survey Telescope (LSST) is an 8.4 m (6.5 m effective), wide-field (9.6 deg²), ground-based
telescope with a 3.2 GPixel camera. It will survey over 20,000 deg² with 1,000 revisits over 10 years in six visible
bands, and is scheduled to begin full scientific operations in 2016. The Data Management System will acquire and
process the images, issue transient alerts, and catalog the world's largest database of optical astronomical data. Every 24
hours, 15 terabytes of raw data will be transferred via redundant 10 Gbps fiber optics down from the mountain summit at
Cerro Pachon, Chile to the Base Facility in La Serena for transient alert processing. Simultaneously, the data will be
transferred at 2.5 Gbps over fiber optics to the Archive Center in Champaign, Illinois for archiving and further scientific
processing and creation of scientific data catalogs. Finally, the Archive Center will distribute the processed data and
catalogs at 10 Gbps to a number of Data Access Centers for scientific, educational, and public access. Redundant storage
and network bandwidth are built into the design of the system. The current networking acquisition strategy involves
leveraging existing dark fiber for links within Chile, between Chile and the U.S., and within the U.S. A significant number
of carriers and networks are involved in coordinating the acquisition, deployment, and operation of this capability.
Advanced protocols are being investigated during our Research and Development phase to address anticipated
challenges in effective utilization. We describe the data communications requirements, architecture, and acquisition
strategy in this paper.
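A back-of-the-envelope calculation (not part of the original abstract, but derived directly from the 15 terabytes/night and 10 Gbps figures quoted above) illustrates why the link sizing is workable:

```python
# Rough sanity check of the LSST nightly transfer budget,
# using only the rates quoted in the abstract above.
RAW_BYTES_PER_NIGHT = 15e12   # 15 terabytes of raw data every 24 hours
LINK_BPS = 10e9               # 10 Gbps summit-to-base fiber link
SECONDS_PER_DAY = 86400

# Time to move one night's raw data if the link ran at full line rate
transfer_seconds = RAW_BYTES_PER_NIGHT * 8 / LINK_BPS
print(f"Full-rate transfer time: {transfer_seconds / 3600:.1f} hours")

# Average utilization if the same volume is streamed evenly over 24 hours
avg_utilization = (RAW_BYTES_PER_NIGHT * 8 / SECONDS_PER_DAY) / LINK_BPS
print(f"Average link utilization: {avg_utilization:.1%}")
```

Even ignoring protocol overhead and retransmissions, a single 10 Gbps link needs only a modest fraction of the day on average, which leaves headroom for the bursty, latency-sensitive transfers that transient-alert processing demands.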
We review the Spitzer Space Telescope Science Center operations teams and processes and their interfaces with other
Project elements -- what we planned early in the development of the science center, what we had at launch, and what
we have now, and why. We also explore the checks and balances behind building an organizational structure that
supports constructive airing of conflicts and a timely resolution that balances the inputs and provides for very efficient
on-orbit operations. For example: which organizational roles are involved in reviewing observing schedules, what
constituencies do they represent, and who has authority to approve or disapprove the schedule?
The Spitzer Space Telescope was successfully launched on August 25, 2003. After a 98-day In-Orbit Checkout and
Science Verification period, Spitzer began its five-and-one-half-year mission of science observations at wavelengths
ranging from 3.6 to 160 microns. Results from the first two years of operations show the observatory performing
exceedingly well, meeting or surpassing performance requirements in all areas. The California Institute of Technology
is the home of the Spitzer Science Center (SSC). The SSC is responsible for selecting observing proposals, providing
technical support to the science community, performing mission planning and science observation scheduling,
calibrating the instruments and monitoring their performance during operations, and producing archival-quality data products.
This paper will provide an overview of the Science Operations System at the SSC focusing on lessons learned during
the first two years of science operations and the changes made in the system as a result. This work was performed at the
California Institute of Technology under contract to the National Aeronautics and Space Administration.
The Spitzer Space Telescope was launched on August 25, 2003, and has been operating virtually flawlessly for over two years. The projected cryogenic lifetime for Spitzer is currently 5.5 years, substantially exceeding the required lifetime of 2.5 years and the pre-launch prediction of 5 years. The Spitzer Project has made a singular effort to extend Spitzer's lifetime through operational changes to conserve helium. Additionally, many updates to calibration and scheduling activities have been made in order to maximize the scientific return from Spitzer. Spitzer has met its level-one science time requirement of 90%, and routinely exceeds it today. All this has been achieved with an operating budget that is substantially smaller than that of NASA's other Great Observatories.
This paper will describe the overall performance of the Spitzer Space Telescope Science Operations System and detail the modifications made to increase both the helium lifetime and the science data return. It will also discuss trades made between performance improvements and cost. Lessons learned that can be applied to future observatory operations will be included in the paper. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.