Going from the astronomer submitting an observing proposal to the
reception of the data of the corresponding observations requires quite
a number of steps: TAC/OPC selection, phase II, scheduling, observations, quality control, data pre-reduction and data delivery. In this contribution we present the architecture of ESO's data flow and, in particular, the evolution of the concept, role and even the definition of the archive. Originally sitting at the tail end of the ESO data flow, where it played a mostly technical, operational role, the archive is now the central repository of information about observations.
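The sequence of steps listed above can be sketched as a simple ordered pipeline. The stage names below are illustrative labels taken from the text, not actual ESO software identifiers:

```python
from enum import IntEnum
from typing import Optional

class DataFlowStage(IntEnum):
    """Stages of the end-to-end data flow as enumerated in the text
    (illustrative names, not real ESO system identifiers)."""
    PROPOSAL = 1          # astronomer submits an observing proposal
    TAC_SELECTION = 2     # TAC/OPC review and selection
    PHASE_II = 3          # detailed specification of the observations
    SCHEDULING = 4
    OBSERVATION = 5
    QUALITY_CONTROL = 6
    PRE_REDUCTION = 7
    DELIVERY = 8          # data delivered to the user and archived

def next_stage(stage: DataFlowStage) -> Optional[DataFlowStage]:
    """Return the stage that follows, or None once data is delivered."""
    if stage < DataFlowStage.DELIVERY:
        return DataFlowStage(stage + 1)
    return None
```

The linear ordering is of course a simplification; in practice the archive cuts across all of these stages, which is precisely the evolution the contribution describes.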
The European Southern Observatory (ESO) manages numerous telescopes
which use various types of instruments and readout detectors. The data
flow process at ESO's observatories involves several steps: telescope
setup, data acquisition (science, calibration and test), pipeline
processing, quality control, archiving, and distribution of data to
the users. Well defined interfaces are vital for the smooth operation
of such complex structures. Moreover, the future expansion of ESO operations, such as the development of new observatories (e.g. ALMA) and support for the Virtual Observatory (VO), will make the maintenance of data interfaces even more critical. In this paper we present an overview of the current status of the Data Interface Control process at ESO and discuss future expansion plans.
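One way to picture interface control between data-flow subsystems is as validation of data products against an agreed keyword dictionary. The sketch below is a hypothetical, much-reduced illustration; the keyword set and type rules are assumptions for the example, and real ESO interface dictionaries define far more keywords, formats and value ranges:

```python
# Minimal sketch of interface control as keyword-dictionary checking.
# The required-keyword set below is illustrative only.
REQUIRED_KEYWORDS = {
    "TELESCOP": str,   # telescope name
    "INSTRUME": str,   # instrument name
    "DATE-OBS": str,   # observation timestamp
    "EXPTIME": float,  # exposure time in seconds
}

def validate_header(header: dict) -> list:
    """Return a list of problems found in a FITS-like header mapping."""
    problems = []
    for key, expected_type in REQUIRED_KEYWORDS.items():
        if key not in header:
            problems.append(f"missing keyword {key}")
        elif not isinstance(header[key], expected_type):
            problems.append(f"{key} has type {type(header[key]).__name__}, "
                            f"expected {expected_type.__name__}")
    return problems
```

A shared, versioned dictionary of this kind is what lets independently developed subsystems (acquisition, pipeline, archive) evolve without breaking each other.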
The end-to-end operations of the ESO VLT have now seen three full years of service to the ESO community. During that time its capabilities have grown to four 8.2m unit telescopes with a complement of four optical and IR multimode instruments, operated in a mixed Service Mode and Visitor Mode environment. The input and output of programs and data to the system are summarized over this period, together with the growth in operations manpower. We review the difficulties of working in a mixed operations and development environment and the ways in which the success of the end-to-end approach may be measured. Finally, we summarize the operational lessons learned and the challenges posed by future developments of VLT instruments and facilities such as interferometry and survey telescopes.
In this article we present the Data Flow System (DFS) for the Very Large Telescope Interferometer (VLTI). The Data Flow System is the VLT end-to-end software system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The DFS is now in the process of being installed and adapted for the VLT Interferometer. It was first installed for VLTI first fringes, utilising the siderostats together with the VINCI instrument, and is constantly being upgraded in step with the VLTI commissioning. When completed, the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2m telescopes, as well as from a set of initially three 1.8m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. Observations of objects of scientific interest are already being carried out in the framework of the VLTI commissioning using the siderostats and the VLT Unit Telescopes, making it possible to test tools under realistic conditions. These tools comprise observation preparation, pipeline processing and further analysis systems. Work is in progress for the commissioning of further VLTI science instruments such as MIDI and AMBER, planned for the second half of 2002 and the first half of 2003 respectively. The DFS will be especially useful for service observing, which is expected to be an important mode of observation for the VLTI, since the VLTI must cope with numerous observation constraints and with observations spread over extended periods of time.
The joint archive facility of the European Southern Observatory (ESO) and the Space Telescope - European Coordinating Facility (ST-ECF) has been undertaking particular efforts in the field of associating (grouping) Hubble Space Telescope (HST) observations for a number of years. Users are now given the means to browse associations of HST images, and the same capability will soon be provided for spectra as well. Associations of observations can be defined and driven either by requirements imposed by higher-level algorithms, such as co-adding and drizzling techniques, or by user-defined constraints. In any case, we consider these services an important precursor and testbed for a future virtual observatory. Two components complement an on-line interface (archive.eso.org) to such data products: on the one hand, the selection process can be greatly improved by adding preview capabilities for individual or multiple exposures; on the other hand, a request handling system is required which supports the concept of associations and which can expand a given association and compute and deliver calibrated and combined data products.
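A much-simplified sketch of constraint-driven association might group exposures by shared target and filter. The grouping keys below are assumptions chosen for illustration; the real archive applies far richer criteria (pointing proximity, proposal membership, dither patterns, and so on):

```python
from collections import defaultdict

def group_associations(exposures):
    """Group exposure records into candidate associations.

    Exposures sharing the same target and filter end up in one group,
    mimicking, in a highly simplified way, the constraint-driven
    association of HST observations described in the text. Each
    exposure is a dict with (hypothetical) 'id', 'target' and
    'filter' fields.
    """
    groups = defaultdict(list)
    for exp in exposures:
        groups[(exp["target"], exp["filter"])].append(exp["id"])
    return dict(groups)
```

Expanding an association for a request then amounts to looking up one such group and feeding its members to the calibration and combination step.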
The Data Flow System is the VLT end-to-end system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The VLT Data Flow System has been in place since the opening of the first VLT Unit Telescope in 1998. When completed, the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2m telescopes, as well as from a set of initially three 1.8m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. The Data Flow System is now in the process of being installed and adapted for the VLT Interferometer. Observation preparation for a multi-telescope system and the handling of large data volumes of several tens of gigabytes per night are among the new challenges posed by this system. This introductory paper presents the VLTI Data Flow System installed during the initial phase of VLTI commissioning. Observation preparation, data archival and data pipeline processing are addressed.
The Hubble Space Telescope (HST) endeavour proved once more that arguments such as high costs, extremely long preparation times, inherent risks of total failure, limited lifetimes and high over-subscription rates make each scientific space mission almost always a unique event. These arguments immediately point to the need for storing all the data produced by a spacecraft over its short lifetime so that the scientific community can re-use them in the long term. This calls for the organization of science archives. Together with the Space Telescope Science Institute, the European Coordinating Facility developed an archive system for the HST data. This paper is about the experience gained in setting up and running the European HST Science Data Archive system. Organization, cost versus scientific return and acceptance by the scientists are among the aspects that will be covered. In particular, we will insist on the 'four-pillar' structure principle that all archive centers should follow, namely: a user interface, a catalogue accurately describing the content of the archive, the human scientific expertise and, of course, the data. Long-term prospects and problems due to technology changes will be evaluated and solutions will be proposed. The adaptability of the described system to other scientific space missions or ground-based observatories will be discussed.