We present the data model used to manage the lifecycle of astronomical frames in the ESO Archive.
The principal concept is that complete file metadata are managed separately from the data and
merged only upon delivery of the data to the end user. This concept is now applied to all ESO Archive assets:
raw observation frames originating from ESO telescopes at all Chilean sites, reduced frames generated within ESO
by pipeline processing, as well as processed data generated by the PIs and delivered to the ESO Archive
through the "Phase 3" infrastructure. We present the implementation details of the model and discuss future developments.

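The separation of metadata from data described above can be sketched as follows; this is a minimal illustration, not the actual ESO implementation, and all names in it are hypothetical:

```python
# Sketch of the data model: the archive stores the science payload and its
# header keywords independently, and a delivery step merges the current
# keyword values into the file handed to the user. Names are illustrative.

def merge_on_delivery(payload: bytes, keywords: dict) -> bytes:
    """Prepend an up-to-date FITS-style header to the stored payload."""
    cards = []
    for key, value in keywords.items():
        # FITS cards are fixed-width 80-character records: "KEYWORD = VALUE"
        cards.append(f"{key:<8}= {value!r:>20}".ljust(80))
    cards.append("END".ljust(80))
    header = "".join(cards).encode("ascii")
    return header + payload

# The header reflects whatever the metadata database holds at delivery
# time, so later keyword corrections reach users without rewriting files.
delivered = merge_on_delivery(b"\x00" * 16, {"ORIGIN": "ESO", "INSTRUME": "FORS2"})
```

The key property is that the stored payload is never touched: keyword corrections made in the database are automatically reflected in every subsequent delivery.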
The European Organisation for Astronomical Research in the Southern Hemisphere (ESO), headquartered in Garching,
Germany, operates different state-of-the-art observing sites in Chile. To manage observatory operations and observation
transfer, ESO developed an end-to-end Data Flow System, from Phase I proposal preparation to the final archiving of
quality-controlled science, calibration and engineering data. All information pertinent to the data flow is stored in the
central databases at ESO headquarters and replicated to and from the observatory database servers.
In ESO's data flow model one can distinguish two groups of databases: the front-end databases, which are replicated
from ESO headquarters to the observing sites, and the back-end databases, for which replication is directed from the
observing sites to the headquarters.
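The two replication directions can be captured in a small illustrative model; the database names below are hypothetical placeholders, not actual ESO databases:

```python
# Illustrative classification of databases by replication direction.
# Front-end content flows from headquarters out to the observing sites;
# back-end content flows from the observing sites back to headquarters.
FRONT_END = {"observation_blocks", "scheduling"}    # HQ -> observing sites
BACK_END = {"ambient_conditions", "night_logs"}     # observing sites -> HQ

def replication_direction(db: str) -> str:
    """Return the replication direction for a named database."""
    if db in FRONT_END:
        return "headquarters -> observatory"
    if db in BACK_END:
        return "observatory -> headquarters"
    raise ValueError(f"unknown database: {db}")
```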
A part of the front-end database contains the Observation Blocks (OBs), which are sequences of operations necessary to
perform an observation, such as instrument settings, target, filter and/or grism ID, exposure time, etc. Observatory
operations rely on fast access to the OB database and on quick recovery strategies in case of a database outage.
After several years of operations, those databases have grown considerably. It became necessary to review the
database architecture and find a solution that supports the scalability of the operational databases.
We present the newly developed concept of distributing the OBs between two databases, containing operational and
historical information, and the architectural design in which OBs in the operational databases will be archived
periodically at ESO headquarters. This will remedy the scalability problems and keep the operational
databases small. The historical databases will exist only at the headquarters, for archiving purposes.

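The periodic archiving step can be sketched with two databases, using SQLite as a stand-in; the schema and the OB status values below are illustrative assumptions, not the actual ESO schema:

```python
import sqlite3

# Sketch of periodic OB archiving: Observation Blocks that reached a
# terminal status are copied from the small operational database into the
# historical database kept at headquarters, then removed from operations.
operational = sqlite3.connect(":memory:")
historical = sqlite3.connect(":memory:")
for db in (operational, historical):
    db.execute("CREATE TABLE ob (ob_id INTEGER PRIMARY KEY, status TEXT)")

operational.executemany(
    "INSERT INTO ob VALUES (?, ?)",
    [(1, "Completed"), (2, "Scheduled"), (3, "Terminated")],
)

def archive_finished_obs() -> int:
    """Move terminal-state OBs to the historical DB; return how many."""
    done = operational.execute(
        "SELECT ob_id, status FROM ob"
        " WHERE status IN ('Completed', 'Terminated')"
    ).fetchall()
    historical.executemany("INSERT INTO ob VALUES (?, ?)", done)
    operational.execute(
        "DELETE FROM ob WHERE status IN ('Completed', 'Terminated')"
    )
    return len(done)

archived = archive_finished_obs()
```

Running the step regularly keeps the operational table bounded by the volume of active OBs rather than by the full history of observations.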
We have designed a metadata database containing all the information stored in almost 10 million FITS file headers, using a Sybase IQ server. This repository includes metadata from raw observation frames and from the science and calibration pipeline products produced by the ESO Quality Control group. We present a few illustrative applications using data stored in this database. One application of particular interest to the astronomical community is access, through the ESO Archive interface, to FITS headers carrying up-to-date information coming directly from the database. The keyword repository can also feed local tables and/or views for specific uses, such as instrument-specific tables, which contain parameters particular to given instruments and are used in archive queries. Finally, the ESO observation keyword repository supports Virtual Observatory applications with the metadata needed by visualisation tools such as VirGO or Aladin.

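Feeding an instrument-specific table or view from the generic keyword repository can be illustrated as follows; SQLite is used as a stand-in for the actual Sybase IQ server, and the instrument and keyword names are hypothetical:

```python
import sqlite3

# Sketch of deriving an instrument-specific view from the keyword
# repository: a generic (file, keyword, value) table is pivoted into a
# per-instrument view usable in archive queries. Schema is illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE keywords (file_id TEXT, keyword TEXT, value TEXT)")
db.executemany("INSERT INTO keywords VALUES (?, ?, ?)", [
    ("f1", "INSTRUME", "UVES"),
    ("f1", "WLEN", "580"),
    ("f2", "INSTRUME", "FORS2"),
])
db.execute("""
    CREATE VIEW uves AS
    SELECT k.file_id,
           (SELECT value FROM keywords w
             WHERE w.file_id = k.file_id AND w.keyword = 'WLEN') AS wlen
    FROM keywords k
    WHERE k.keyword = 'INSTRUME' AND k.value = 'UVES'
""")
rows = db.execute("SELECT file_id, wlen FROM uves").fetchall()
```

Because the view is defined over the repository itself, it always reflects the current keyword values without a separate loading step.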
We present the design of the system for handling observation metadata at the Science Archive Facility of the
European Southern Observatory, using Sybase ASE, Replication Server and Sybase IQ. The system has been re-engineered
to enhance the browsing of Archive contents through searches on any observation parameter,
to allow on-line updates of all parameters, and to introduce those updates on the fly into files retrieved from the
Archive. The system also reduces the replication of duplicate information and simplifies database maintenance.

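The on-the-fly introduction of updates into retrieved files can be sketched as a header-rewriting step at retrieval time; the simplified card layout and keyword names below are illustrative assumptions:

```python
# Sketch of the on-the-fly update step: when a file is retrieved, header
# cards are rewritten with the latest parameter values taken from the
# database before the file is delivered to the user.

def apply_updates(header_cards, updates):
    """Replace the value of any card whose keyword appears in `updates`."""
    out = []
    for card in header_cards:
        key = card[:8].strip()  # FITS keyword occupies columns 1-8
        if key in updates:
            card = f"{key:<8}= {updates[key]:>20}".ljust(80)
        out.append(card)
    return out

# E.g. a corrected astrometric solution reaches users without ever
# modifying the archived file itself.
cards = [f"{'RA':<8}= {'10.0':>20}".ljust(80), "END".ljust(80)]
fixed = apply_updates(cards, {"RA": "10.68458"})
```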
The European Southern Observatory (ESO) manages numerous telescopes
which use various types of instruments and readout detectors. The data
flow process at ESO's observatories involves several steps: telescope
setup, data acquisition (science, calibration and test), pipeline
processing, quality control, archiving, and distribution of data to
the users. Well-defined interfaces are vital for the smooth operation
of such complex structures. Moreover, the future expansion of ESO operations, such as the development of new observatories (e.g. ALMA) and support for the Virtual Observatory (VO), will make the maintenance of data interfaces even more critical. In this paper we present an overview of the current status of the Data Interface Control process at ESO and discuss future expansion plans.