The archive of the La Silla Paranal Observatory is a powerful science resource for the ESO astronomical community. It stores both the raw data generated by all ESO instruments and selected processed (science-ready) data. We present the new capabilities and user services that have recently been developed in order to enhance data discovery and usage in the face of the increasing volume and complexity of the archive holdings. Future plans to extend the new services to processed data from the Atacama Large Millimeter/submillimeter Array (ALMA) are also discussed.
The ESO Phase 3 infrastructure provides a channel to submit reduced data products for publication to the astronomical community and for long-term preservation in the ESO Science Archive Facility. To be integrated into Phase 3, data must comply with the ESO Science Data Product Standard regarding both format (one unique standard data format is associated with each type of product, such as image, spectrum, or IFU cube) and required metadata. ESO has developed a Groovy-based tool that performs an automatic validation of the submitted reduced products, triggered when data are uploaded and then submitted. Here we present how the tool is structured and which checks are implemented.
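A minimal sketch of the kind of metadata check such a validation tool performs, written here in Python for illustration: each product type maps to a set of required header keywords, and a submission is flagged if any are missing. The keyword lists below are invented for the example and are much smaller than what the actual ESO Science Data Product Standard requires.

```python
# Hypothetical Phase 3-style metadata validation: each product type maps
# to a set of required header keywords. The lists are illustrative only,
# not the real ESO Science Data Product Standard.
REQUIRED_KEYWORDS = {
    "image":    {"ORIGIN", "TELESCOP", "INSTRUME", "OBJECT", "RA", "DEC"},
    "spectrum": {"ORIGIN", "TELESCOP", "INSTRUME", "OBJECT", "SPEC_RES"},
}

def validate_product(product_type, header):
    """Return a sorted list of missing keywords (empty means valid)."""
    required = REQUIRED_KEYWORDS.get(product_type)
    if required is None:
        raise ValueError(f"unknown product type: {product_type!r}")
    return sorted(required - header.keys())

header = {"ORIGIN": "ESO", "TELESCOP": "VLT", "INSTRUME": "XSHOOTER",
          "OBJECT": "NGC 6397"}
print(validate_product("spectrum", header))  # → ['SPEC_RES']
```

In a real validator this check would run against FITS headers at upload time; here a plain dictionary stands in for the header.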
ESO has a strong mandate to survey the southern sky. In this article, we describe the ESO telescopes and instruments currently used for ESO Public Surveys, and the community's future plans for the new wide-field spectroscopic instruments. We summarize the ESO policies governing the management of these projects on behalf of the community. We discuss the on-going ESO Public Surveys, their science goals and completion status, and the new projects selected during the second ESO VISTA call in 2015/2016. We then present the impact of these projects in terms of the current number of refereed publications and the scientific data products published through the ESO Science Archive Facility by the survey teams, including the independent access to, and scientific use of, the published survey data products by the astronomical community.
Proc. SPIE. 9910, Observatory Operations: Strategies, Processes, and Systems VI
KEYWORDS: Observatories, Telescopes, Calibration, Data storage, Data acquisition, Data archive systems, Spectral calibration, Data processing, Image quality standards, Automatic tracking
Data validation is an essential step of the Phase 3 process at ESO, which defines and provides an infrastructure for the interactions between data producers and the archive. We use a controlled process to systematically review all Phase 3 data submissions, ensuring a homogeneous and consistent science archive with well-traceable and characterised data products, to the benefit of archive users. This presentation describes how the Phase 3 data validation plan is defined and how its results are subsequently managed. For a description of its technical implementation, please refer to the contribution by L. Mascetti.
Phase 3 denotes the process of preparation, submission, validation, and ingestion of science data products for storage in the ESO Science Archive Facility and subsequent publication to the scientific community. In this paper we review more than four years of Phase 3 operations at ESO and discuss the future evolution of the Phase 3 system.
The ESO Phase 3 process allows the upload, validation, storage, and publication of reduced data through the ESO Science Archive Facility. Since its introduction, ~2 million data products have been archived and published; 80% of them are one-dimensional extracted and calibrated spectra. Central to Phase 3 is the ESO science data product standard, which defines the metadata and data format of each type of product. This contribution describes the ESO data standard for 1D spectra, its adoption by the reduction pipelines of selected instrument modes for the in-house generation of reduced spectra, and the resulting enhancement of the archive's legacy value. Archive usage statistics are also provided.
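To illustrate the kind of consistency a 1D-spectrum standard implies, here is a simplified sketch: the data arrays of a spectrum must have equal length and a strictly increasing wavelength axis. The column names WAVE, FLUX, and ERR follow the ESO convention for 1D spectral products; the checks themselves are deliberately reduced to the bare minimum and do not reproduce the full standard.

```python
# Simplified consistency checks suggested by a 1D-spectrum data standard:
# equal-length WAVE/FLUX/ERR arrays and a strictly increasing wavelength
# axis. These checks are illustrative, not the full ESO standard.
def check_spectrum(wave, flux, err):
    """Return a list of human-readable problems (empty means consistent)."""
    problems = []
    if not (len(wave) == len(flux) == len(err)):
        problems.append("array lengths differ")
    if any(b <= a for a, b in zip(wave, wave[1:])):
        problems.append("wavelength axis not strictly increasing")
    return problems

# A well-formed toy spectrum passes both checks.
print(check_spectrum([500.0, 500.1, 500.2], [1.0, 1.1, 0.9], [0.1, 0.1, 0.1]))
# → []
```

In the actual standard these arrays live in a FITS binary table with one row per spectrum; plain Python lists stand in for them here.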
The European Southern Observatory Science Archive Facility is evolving from an archive containing predominantly raw data into a resource that also offers science-grade data products for immediate analysis and prompt interpretation. The new products originate from two different sources. On the one hand, Principal Investigators of Public Surveys and other programmes reduce the raw observational data and return their products via the so-called Phase 3, a process that extends the Data Flow System beyond proposal submission (Phase 1) and the detailed specification of the observations (Phase 2). On the other hand, raw data of selected instruments and modes are uniformly processed in-house, independently of the original science goal. Current data product assets in the ESO Science Archive Facility include calibrated images and spectra, as well as catalogues, for a total volume in excess of 16 TB and increasing. Images alone cover more than 4500 square degrees in the NIR bands and 2400 square degrees in the optical bands; over 85000 individually searchable spectra are already available in the spectroscopic data collection. In this paper we review the evolution of the ESO Science Archive Facility content, illustrate the data access by the community, and give an overview of the implemented processes and the role of the associated data standard.
We are carrying out a comprehensive study of massive star-forming complexes in the Large Magellanic Cloud through the study of ionized regions. Preliminary results for the nebula LHA 120-N 44C are presented here.
We are blending i) the spectral and morphological information contained in images taken through selected filters that probe lines sensitive to factors such as the excitation mechanism or the hardness of the ionizing radiation, with ii) the already existing photometry from the 2MASS near-infrared survey and iii) multi-wavelength archival images retrieved from various locations. The merging of all these sources of information will allow us to establish a close link between massive stars and the surrounding interstellar medium, and should help constrain the local star formation history and the dynamical evolution of these ionized regions in the Large Magellanic Cloud. In this respect, the Astrophysical Virtual Observatory (AVO) prototype tool has proven powerful in speeding up the discovery process.
The European Southern Observatory (ESO) manages numerous telescopes which use various types of instruments and readout detectors. The data flow process at ESO's observatories involves several steps: telescope setup; data acquisition (science, calibration, and test); pipeline processing; quality control; archiving; and distribution of data to the users. Well-defined interfaces are vital for the smooth operation of such complex structures. Moreover, the future expansion of ESO operations, such as the development of new observatories (e.g. ALMA) and support for the Virtual Observatory (VO), will make the maintenance of data interfaces even more critical. In this paper we present an overview of the current status of the Data Interface Control process at ESO and discuss future expansion plans.
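One concrete form interface control can take is a controlled keyword dictionary: each header keyword that crosses a subsystem boundary has a registered expected type, and headers are verified against the dictionary before data move on. The sketch below is hypothetical; the dictionary entries and the check are invented for illustration and do not reproduce the actual ESO dictionaries.

```python
# Hypothetical interface-control check: a controlled keyword dictionary
# records the expected Python type of each header keyword. Entries are
# invented for illustration.
KEYWORD_DICTIONARY = {
    "ORIGIN":  str,
    "MJD-OBS": float,
    "NAXIS":   int,
}

def check_header(header):
    """Return human-readable violations of the keyword dictionary."""
    violations = []
    for key, value in header.items():
        expected = KEYWORD_DICTIONARY.get(key)
        if expected is None:
            violations.append(f"{key}: not in controlled dictionary")
        elif not isinstance(value, expected):
            violations.append(f"{key}: expected {expected.__name__}")
    return violations

print(check_header({"ORIGIN": "ESO", "MJD-OBS": "not-a-number"}))
# → ['MJD-OBS: expected float']
```

A check of this kind can run at every hand-off point in the data flow (acquisition, pipeline, archive), which is what makes a single controlled dictionary valuable.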
The recent publication of the DENIS catalogue towards the Magellanic Clouds (MCs), with more than 1.3 million sources identified in at least two of the three DENIS filters (I, J, Ks), and of the incremental releases of the 2MASS point source catalogues (J, H, Ks) covering the same region of the sky, provides an unprecedented wealth of data on the stellar populations in the MCs. In order to build a reference catalogue of stars towards the Magellanic Clouds, we have performed a cross-identification of these two catalogues. This implied developing new tools for cross-identification and data mining. This study is partly supported by the Astrovirtel programme, which aims at improving access to astronomical archives as virtual telescopes. The main goal of the present study is to validate new cross-matching procedures for very large catalogues, and to derive results concerning the astrometric and photometric accuracy of these catalogues. The cross-matching of large surveys is an essential tool for improving our understanding of their specific contents. This approach can be considered a new step towards a Virtual Observatory.
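The core operation behind such a cross-identification can be sketched in a few lines: for each source in one catalogue, find the nearest source in the other within a fixed angular radius. The toy example below uses a brute-force O(N·M) scan over plain (RA, Dec) tuples; procedures for catalogues of the DENIS/2MASS scale instead rely on spatial indexing (e.g. sky pixelisation or trees), and the coordinates and 2-arcsec radius here are invented for the example.

```python
from math import radians, degrees, sin, cos, asin, sqrt

# Toy positional cross-match between two small catalogues of (RA, Dec)
# tuples in degrees. Real million-source catalogues need spatial indexing;
# this brute-force scan only illustrates the matching criterion.
def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation via the haversine formula, in arcseconds."""
    ra1, dec1, ra2, dec2 = map(radians, (ra1, dec1, ra2, dec2))
    h = sin((dec2 - dec1) / 2) ** 2 \
        + cos(dec1) * cos(dec2) * sin((ra2 - ra1) / 2) ** 2
    return degrees(2 * asin(sqrt(h))) * 3600

def cross_match(cat_a, cat_b, radius_arcsec=2.0):
    """Return (i, j) pairs: b[j] is the nearest source to a[i] within the radius."""
    pairs = []
    for i, (ra_a, dec_a) in enumerate(cat_a):
        best = None
        for j, (ra_b, dec_b) in enumerate(cat_b):
            d = ang_sep_arcsec(ra_a, dec_a, ra_b, dec_b)
            if d <= radius_arcsec and (best is None or d < best[1]):
                best = (j, d)
        if best is not None:
            pairs.append((i, best[0]))
    return pairs

denis = [(80.0000, -69.0000), (81.5000, -70.2000)]    # invented positions
twomass = [(80.0001, -69.0001), (85.0000, -60.0000)]
print(cross_match(denis, twomass))  # → [(0, 0)]
```

Only the first DENIS source has a 2MASS counterpart within 2 arcsec, so a single pair is returned; unmatched sources are simply dropped, which is the behaviour one usually wants when building a merged reference catalogue.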