The European Southern Observatory Science Archive Facility is evolving from an archive containing predominantly raw data into a resource that also offers science-grade data products for immediate analysis and prompt interpretation. New products originate from two different sources. On the one hand, Principal Investigators of Public Surveys and other programmes reduce the raw observational data and return their products through the so-called Phase 3, a process that extends the Data Flow System after proposal submission (Phase 1) and detailed specification of the observations (Phase 2). On the other hand, raw data from selected instruments and modes are uniformly processed in-house, independently of the original science goal. Current data product assets in the ESO Science Archive Facility include calibrated images and spectra, as well as catalogues, for a total volume in excess of 16 TB and increasing. Images alone cover more than 4500 square degrees in the NIR bands and 2400 square degrees in the optical bands; over 85,000 individually searchable spectra are already available in the spectroscopic data collection. In this paper we review the evolution of the ESO Science Archive Facility content, illustrate data access by the community, and give an overview of the implemented processes and of the role of the associated data standard.
Over the last decade of successful science operations with the VLT at Paranal, the instrument pipelines have
played a critical role in ensuring the quality control of the instruments. During the last few years, instrument
pipelines have gradually evolved into a tool suite capable of providing science grade data products for all major
modes available for each instrument. In this paper we present the major enhancements that have been recently
brought into the body of the FORS pipeline. The algorithms applied for wavelength and photometric calibrations
have been thoroughly revised and improved by implementing innovative ideas, and the FORS instrument is now almost
fully supported in all of its modes: spectroscopy, imaging and spectro-polarimetry. Furthermore, the satisfactory
results obtained with the FORS pipeline have prompted synergies with other instrument pipelines. EFOSC2 at
the NTT of the La Silla Observatory already shares with the FORS pipeline the imaging and spectroscopic data
reduction code, and the spectroscopic part of the VIMOS pipeline is being reengineered along the same lines.
ESO aims to support the production of science-grade data products for all of its Paranal instruments. This serves the
dual purpose of facilitating the immediate exploitation of the data by the respective PIs and enabling its longer-term exploitation by
the community at large through the ESO Science Archive Facility. The production of science grade data products
requires an integrated approach to science and calibration observations and the development of software to process and
calibrate the raw data. Here we present ESO's strategy to complement the in-house generation of data products with
contributions returned by our users. The most relevant lessons learned in the process are discussed as well.
The X-shooter data reduction pipeline, as part of the ESO-VLT Data Flow System, provides recipes for Paranal
Science Operations, and for Data Product and Quality Control Operations at Garching headquarters. At Paranal,
it is used for the quick-look data evaluation. The pipeline recipes can be executed either with EsoRex at the
command line level or through the Gasgano graphical user interface. The recipes are implemented with the ESO
Common Pipeline Library (CPL).
X-shooter is the first of the second-generation VLT instruments. It makes it possible to collect in one shot
the full spectrum of the target from 300 to 2500 nm, subdivided into three arms optimised for the UVB, VIS and NIR
ranges, with an efficiency between 15% and 35% including the telescope and the atmosphere, and a spectral
resolution varying between 3000 and 17,000. It allows observations in stare and offset modes, using the slit or an
IFU, and observing sequences nodding the target along the slit.
Data reduction can be performed either with a classical approach, by determining the spectral format via
2D-polynomial transformations, or with the help of a dedicated physical model of the instrument, which provides
insight into the instrument and allows a constrained solution that depends on a few physically meaningful parameters.
In the present paper we describe the data reduction steps necessary to fully reduce science observations in
the different modes, with examples of typical calibration data and observation sequences.
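The classical approach mentioned above can be sketched as a least-squares fit of wavelength as a 2D polynomial in detector position and echelle order. The line list below is invented purely for illustration; the real pipeline identifies many more arc lines per arm and uses its own polynomial degrees.

```python
import numpy as np

# Hypothetical identified arc lines: detector x-position (pixels),
# echelle order number, and laboratory wavelength (nm).
x = np.array([100., 800., 1500., 120., 790., 1480., 140., 810., 1510.])
m = np.array([1., 1., 1., 2., 2., 2., 3., 3., 3.])
lam = np.array([310.0, 315.2, 320.1, 330.4, 335.0, 340.2, 350.1, 355.3, 360.0])

def design_matrix(x, m, deg_x=2, deg_m=1):
    """Columns x^i * m^j for a 2D polynomial lambda(x, m)."""
    return np.vstack([x**i * m**j
                      for i in range(deg_x + 1)
                      for j in range(deg_m + 1)]).T

# Least-squares solution for the polynomial coefficients.
A = design_matrix(x, m)
coeffs, *_ = np.linalg.lstsq(A, lam, rcond=None)

# Evaluate the solution and inspect the fit residuals.
rms = np.sqrt(np.mean((lam - A @ coeffs) ** 2))
print("coefficients:", coeffs.size, "- rms residual (nm): %.3f" % rms)
```

The physical-model alternative replaces the free polynomial coefficients with a handful of physically meaningful parameters (grating angle, camera focal length, etc.), constraining the same mapping.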
CRIRES is a cryogenic, pre-dispersed, infrared echelle spectrograph designed to provide a nominal resolving
power λ/Δλ of 10<sup>5</sup> between 1000 and 5000 nm for a nominal slit width of 0.2". The CRIRES installation at
the Nasmyth focus A of the 8-m VLT UT1 (Antu) marks the completion of the original instrumentation plan
for the VLT. A curvature-sensing adaptive optics system feed is used to minimize slit losses and to provide 0.2"
spatial resolution along the slit. A mosaic of four Aladdin InSb-arrays packaged on custom-fabricated ceramic
boards has been developed. It provides for an effective 4096 × 512 pixel focal plane array to maximize the free
spectral range covered in each exposure. Insertion of gas cells is possible in order to measure radial velocities with
high precision. Measurement of circular and linear polarization in Zeeman sensitive lines for magnetic Doppler
imaging is foreseen but not yet fully implemented. A cryogenic Wollaston prism on a kinematic mount is already
incorporated. The retarder devices will be located close to the Unit Telescope focal plane. Here we briefly recall
the major design features of CRIRES and describe the commissioning of the instrument including a report of
extensive testing and a preview of astronomical results.
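As a hedged illustration (our numbers, not a statement from the paper), the quoted resolving power translates directly into a velocity resolution element via Δv = c/R, which is why R of order 10<sup>5</sup> is the natural regime for precise radial-velocity work with gas cells:

```python
# Velocity resolution element delta_v = c / R for a resolving power R.
C_KM_S = 299792.458  # speed of light, km/s

def velocity_resolution_km_s(R):
    """Width of one spectral resolution element expressed in velocity."""
    return C_KM_S / R

# A nominal R = 1e5 corresponds to about 3 km/s per resolution element;
# centroiding lines against gas-cell references reaches a small
# fraction of this width.
print("%.2f km/s" % velocity_resolution_km_s(1e5))  # prints "3.00 km/s"
```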
The ESO Paranal observatory operates a heterogeneous set of science detectors. The maintenance and
quality control of these detectors is an important routine task for preserving the technical and scientific performance
of the instrumentation. In 2006 a detector monitoring working group was established and entrusted with the following tasks:
inventory of the currently existing detector calibration plans and monitored quality characteristics; completion
and homogenization of the detector calibration plans; and design and implementation of cross-instrument
templates, data reduction pipeline recipes and monitoring tools.
The instrument calibration plans include monthly and daily scheduled detector calibrations. The monthly
calibrations measure linearity, contamination and gain, including the inter-pixel capacitance correction
factor. A reference recipe has been defined that is applicable to all operational VLT instruments and has been
tested on archive calibration frames for optical, near- and mid-infrared science detectors. The daily calibrations
measure the BIAS or DARK level and the read-out noise, but in instrument-specific ways; this has until now prevented
cross-detector comparison of performance values. The upgrade of the daily detector calibration plan consists of
homogenizing the measurement method in the existing pipeline recipes.
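To illustrate what such a homogenized measurement can look like, here is a minimal sketch of the standard photon-transfer method, estimating gain from a flat-field pair and read-out noise from a bias pair, run on simulated frames (this is our textbook illustration, not the reference recipe itself):

```python
import numpy as np

rng = np.random.default_rng(1)
true_gain, true_ron_e = 2.0, 5.0       # e-/ADU and e- (simulation inputs)
shape, bias_level = (256, 256), 200.0  # frame size and bias offset (ADU)

def make_bias():
    """Simulated bias frame: offset plus read noise only."""
    return bias_level + rng.normal(0.0, true_ron_e / true_gain, shape)

def make_flat(mean_e=20000.0):
    """Simulated flat: Poisson photon signal plus read noise, in ADU."""
    return (bias_level
            + rng.poisson(mean_e, shape) / true_gain
            + rng.normal(0.0, true_ron_e / true_gain, shape))

b1, b2 = make_bias(), make_bias()
f1, f2 = make_flat(), make_flat()

# Read-out noise in ADU: std of a bias difference, divided by sqrt(2).
ron_adu = np.std(b1 - b2) / np.sqrt(2.0)

# Photon-transfer gain (e-/ADU): mean signal over shot-noise variance,
# both estimated from frame pairs so fixed-pattern structure cancels.
gain = ((f1.mean() + f2.mean()) - (b1.mean() + b2.mean())) / \
       (np.var(f1 - f2) - np.var(b1 - b2))

print("gain = %.2f e-/ADU, RON = %.2f e-" % (gain, ron_adu * gain))
```

Because both estimators operate on frame differences, the same recipe can be applied unchanged to optical and infrared detectors, which is exactly the cross-detector comparability the homogenization aims for.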
With the completion of the first-generation instrumentation set on the Very Large Telescope, a total of eleven instruments are now provided at the VLT/VLTI for science operations. For each of them, ESO provides automatic data reduction facilities in the form of instrument pipelines developed in collaboration with the instrument consortia. The pipelines are deployed in different environments, at the observatory and at the ESO headquarters, for on-line assessment of observations, instrument and detector monitoring, as well as data quality control and product generation. A number of VLT pipelines are also distributed to the user community together with front-end applications for batch and interactive usage. The main application of the pipelines is to support the Quality Control process. However, ESO also aims to deliver pipelines that can generate science-ready products for a major fraction of the scientific needs of the users. This paper provides an overview of the current developments for the next generation of VLT/VLTI instruments and of the prototyping studies of new tools for science users.
The VLTI has been operating for about 5 years using the VINCI instrument first, and later MIDI. In October 2005
(Period 76) the first Science Operations with the AMBER instrument started, with 14 Open Time proposals in
the observing queues submitted by the astronomical community. AMBER, the near-infrared/red focal instrument
of the VLTI, operates in the bands J, H, and K (i.e. 1.0 to 2.5 micrometers) with three beams, thus enabling the
use of closure-phase techniques. Light was fed from the 8-m Unit Telescopes (UTs). The instrument was offered
with the Low Resolution Mode (JHK) and the Medium Resolution Mode in the K band on the UTs. We present
how AMBER VLTI Science Operations are currently performed and integrated into the general Paranal
Science Operations, drawing on the extensive experience of Service Mode operations at Paranal and, in
particular, applying the know-how gained from two years of MIDI Science Operations. We
also present the operational statistics from these first-ever Open Time observations with AMBER.
PRIMA, the Phase-Referenced Imaging and Micro-arcsecond Astrometry facility for the Very Large Telescope Interferometer, is now nearing the end of its manufacturing phase. An intensive test period of the various sub-systems (star separators, fringe sensor units and incremental metrology) and of their interactions in the global system will start in Garching as soon as they are delivered. The status and performance of the individual sub-systems are presented in this paper, as well as the proposed observation and calibration strategy to reach the challenging goal of high-accuracy differential astrometry at the 10 μas level.
The ESO Data Flow Operations group (also called the Quality Control group) is dedicated to monitoring the performance of the different VLT instruments, verifying the quality of the calibration and scientific data, and controlling and monitoring them on different time scales. At ESO headquarters in Garching, Germany, one QC scientist is dedicated to these tasks for the VLTI instruments: VINCI, MIDI, AMBER, and (eventually) PRIMA.
In this paper we focus on MIDI. We define the tasks of the Quality Control scientist and describe the lessons learned on quality control and instrument trending with the commissioning instrument VINCI. We then illustrate the different aspects of the MIDI Data Flow Operations supported by the QC scientist, such as data management issues (data volume, distribution to the community), processing of the data, and data quality control.
MIDI (MID-infrared Interferometric instrument) produced its first N-band (8 to 13 micron) stellar interference fringes on the VLTI (Very Large Telescope Interferometer) at Cerro Paranal Observatory (Chile) in December 2002. A lot of work had to be done to transform it from a successful physics experiment into a premium science instrument, which has been offered to the worldwide community of astronomers since September 2003. The process of "paranalization", carried out by the European Southern Observatory (ESO) in collaboration with the MIDI consortium, has aimed to make MIDI simpler to use, more reliable, and more efficient. We describe in this paper these different aspects of paranalization (detailing the improvements brought to the observation software) and the lessons we have learned. Some general rules for bringing an interferometric instrument into routine operation at an observatory can be drawn from the experience with MIDI. We also report our experience of the first "service mode" run of an interferometer (VLTI + MIDI), which took place in April 2004.
The ESO Very Large Telescope Interferometer (VLTI) is the first general-user interferometer that offers near- and mid-infrared long-baseline interferometric observations in service mode as well as visitor mode to the whole astronomical community. Regular VLTI observations with the first scientific instrument, the mid-infrared instrument MIDI, have started in ESO observing period P73, for observations between April and September 2004. The efficient use of the VLTI as a general-user facility implies the need for a well-defined operations scheme. The VLTI follows the established general operations scheme of the other VLT instruments. Here we present, from a users' point of view, the VLTI-specific aspects of this scheme, from the preparation of the proposal to the delivery of the data.
The Very Large Telescope Interferometer (VLTI) on Cerro Paranal (2635 m) in Northern Chile reached a major milestone in September 2003 when the mid-infrared instrument MIDI was offered for scientific observations to the community. This came only nine months after MIDI had recorded first fringes. In the meantime, the near-infrared instrument AMBER saw first fringes in March 2004, and it is planned to offer AMBER in September 2004.
The large number of subsystems that have been installed in the last two years - amongst them adaptive optics for the 8-m Unit Telescopes (UT), the first 1.8-m Auxiliary Telescope (AT), the fringe tracker FINITO and three more Delay Lines for a total of six, to name only the major ones - will be described in this article. We will also discuss the next steps for the VLTI, mainly concerning the dual-feed system PRIMA, and we will give an outlook on possible future extensions.
CRIRES is a cryogenic, pre-dispersed, infrared echelle spectrograph designed to provide a resolving power λ/Δλ of 10<sup>5</sup> between 1 and 5 μm at the Nasmyth focus B of the 8-m VLT Unit Telescope 1 (Antu). A curvature-sensing adaptive optics system feed is used to minimize slit losses and to provide diffraction-limited spatial resolution along the slit. A mosaic of four Aladdin III InSb-arrays packaged on custom-fabricated ceramic boards has been developed. This provides for an effective 4096 × 512 pixel focal plane array, to maximize the free spectral range covered in each exposure. Insertion of gas cells to measure high-precision radial velocities is foreseen. For the measurement of circular polarization, a Fresnel rhomb in combination with a Wollaston prism for magnetic Doppler imaging is foreseen. The implementation of full spectropolarimetry is under study; this is one result of a scientific workshop held at ESO in late 2003 to refine the science case of CRIRES. Installation at the VLT is scheduled for the first half of 2005. Here we briefly recall the major design features of CRIRES and describe its current development status, including a report of laboratory testing.
The European Southern Observatory (ESO) develops and maintains a large number of instrument-specific data processing pipelines. These pipelines must produce standard-format output and meet the needs of data archiving and of the computation and logging of quality assurance parameters. As the number, complexity and data output rate of instruments increase, so does the challenge of developing and maintaining the associated processing software. ESO has developed the Common Pipeline Library (CPL) in order to unify the pipeline production effort and to minimise code duplication. The CPL is a self-contained ISO-C library, designed for use in a C/C++ environment. It is designed to work with FITS data, extensions and meta-data, and provides a template for standard algorithms, thus unifying the look and feel of pipelines. It has been written in such a way as to make it extremely robust, fast and generic, in order to cope with the operation-critical online data reduction requirements of modern observatories. The CPL has now been successfully incorporated into several new and existing instrument systems. In order to achieve such success, it is essential to go beyond simply making the code publicly available and also to engage in training, support and promotion. There must be a commitment to maintenance, development, standards compliance, optimisation, consistency and testing. This paper describes in detail the experiences of the CPL in all these areas. It covers the general principles applicable to any such software project and the specific challenges and solutions that make the CPL unique.
Now that the Very Large Telescope Interferometer (VLTI) is producing regular scientific observations, the field of optical interferometry has moved from being a specialist niche area into mainstream astronomy. Making such instruments available to the general community involves difficult challenges in modelling, presentation and automation. The planning of each interferometric observation requires calibrator source selection, visibility prediction, signal-to-noise estimation and exposure time calculation. These planning tools require detailed physical models simulating the complete telescope system - including the observed source, atmosphere, array configuration, optics, detector and data processing. Only then can these software utilities provide accurate predictions about instrument performance, robust noise estimation and reliable metrics indicating the anticipated success of an observation. The information must be presented in a clear, intelligible manner, sufficiently abstract to hide the details of telescope technicalities, but still giving the user a degree of control over the system. The Data Flow System group has addressed the needs of the VLTI and, in doing so, has gained some new insights into the planning of observations, and the modelling and simulation of interferometer performance. This paper reports these new techniques, as well as the successes of the Data Flow System group in this area and a summary of what is now offered as standard to VLTI observers.
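For instance, the visibility prediction at the heart of such planning tools reduces, for a uniform-disk source model, to a single analytic expression: V = |2 J1(x)/x| with x = πθB/λ, where θ is the angular diameter, B the projected baseline, and λ the wavelength. A self-contained sketch with invented source parameters (the real tools wrap this in full instrument and atmosphere models):

```python
import math

def bessel_j1(x, terms=30):
    """Bessel J1 via its power series; adequate for the small arguments here."""
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2.0)**(2 * k + 1) for k in range(terms))

def uniform_disk_visibility(theta_mas, baseline_m, wavelength_m):
    """Fringe visibility of a uniform disk of angular diameter theta_mas."""
    theta_rad = theta_mas * math.pi / (180.0 * 3600.0 * 1000.0)  # mas -> rad
    x = math.pi * theta_rad * baseline_m / wavelength_m
    return 1.0 if x == 0.0 else abs(2.0 * bessel_j1(x) / x)

# An (invented) 3 mas star observed on a 100 m baseline at 2.2 microns:
v = uniform_disk_visibility(3.0, 100.0, 2.2e-6)
print("V = %.3f" % v)
```

An unresolved source gives V = 1; as the disk becomes resolved the visibility drops, and the planning tool compares this prediction with the instrument's sensitivity floor to decide whether the observation is feasible.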
Science interferometry instruments are now available at the Very Large Telescope for observations in service mode: the MID-infrared interferometry instrument, MIDI, was commissioned and opened for observations in 2003, and the AMBER 3-beam instrument will follow in 2004. The Data Flow System is the VLT end-to-end software system for handling astronomical observations from the initial observation proposal phase through to the acquisition, archiving, processing, and control of the astronomical data. In this paper we present the interferometry-specific components of the Data Flow System and the software tools which are used for the VLTI.
The Very Large Telescope (VLT) Observatory on Cerro Paranal (2635 m) in Northern Chile is approaching completion. After the four 8-m Unit Telescopes (UT) individually saw first light in the last years, two of them were combined for the first time on October 30, 2001 to form a stellar interferometer, the VLT Interferometer. The remaining two UTs will be integrated into the interferometric array later this year. In this article, we will describe the subsystems of the VLTI and the planning for the following years.
Installed at the heart of the Very Large Telescope Interferometer (VLTI), VINCI coherently combines the infrared light coming from two telescopes. The first fringes were obtained in March 2001 with the VLTI test siderostats, and in October of the same year with the 8-m Unit Telescopes (UTs). After more than one year of operation, it is now possible to evaluate its behavior and performance over a relatively long timescale. During this period, the technical downtime has been kept to a very low level. The most important parameters of the instrument (interferometric efficiency, mechanical stability, ...) have been followed regularly, leading to a good understanding of its performance and characteristics. In addition to a large number of laboratory measurements, more than 3000 on-sky observations have been recorded, giving a precise knowledge of the behavior of the system under various conditions. We report in this paper the main characteristics of the VINCI instrument hardware and software. The differences between observations with the siderostats and the UTs are also briefly discussed.
On March 17, 2001, the VLT interferometer saw for the first time interferometric fringes on sky with its two test siderostats on a 16m baseline. Seven months later, on October 29, 2001, fringes were found with two of the four 8.2m Unit Telescopes (UTs), named Antu and Melipal, spanning a baseline of 102m. First shared risk science operations with VLTI will start in October 2002. The time between these milestones is used for further integration as well as for commissioning of the interferometer with the goal to understand all its characteristics and to optimize performance and observing procedures. In this article we will describe the various commissioning tasks carried out and present some results of our work.
After having established routine science operations for four 8-m single-dish telescopes and their first set of instruments at the Paranal Observatory, the next big engineering challenge for ESO has been the VLT Interferometer. Following an intense integration period at Paranal, first fringes were obtained in the course of last year, first with two smaller test siderostats and later with two 8-m VLT telescopes. Even though optical interferometry today may be considered more experimental than single-telescope astronomy, we have aimed at developing a system with the same requirements on reliability and operability as for a single VLT telescope. The VLTI control system is responsible for controlling and co-ordinating all devices making up the VLTI, where a telescope is just one out of many subsystems. Thus the sheer size of the complete system increases its complexity and the likelihood of failure. Secondly, some of the new subsystems introduced, in particular the delay lines and the associated fringe-tracking loop, have more demanding requirements in terms of control loop bandwidth, computing power and communication. We have developed an innovative generic multiprocessor controller within the VLT framework to address these requirements. Finally, we have decided to use the VLT science operation model, whereby the observation is driven by observation blocks with minimum human real-time interaction, which implies that the VLTI is seen by the astronomical instrument as one machine and not as a set of telescopes and other subsystems. In this paper we describe the as-built architecture of the VLTI control and data flow system, emphasising how new techniques have been incorporated, while at the same time the investments in technology and know-how obtained during the VLT years have been protected. The result has been a faster development cycle, a robustness approaching that of the VLT single-dish telescopes and a "look and feel" identical to all other ESO observing facilities. 
We present operation, performance and development cost data to confirm this. Finally we discuss the plans for the coming years, when more and more subsystems will be added in order to explore the full potential of the VLTI.
The VLT Data Flow System (DFS) has been developed to maximize the scientific output from the operation of the ESO observatory facilities. From its original conception in the mid-90s to the system now in production at Paranal, at La Silla, at the ESO HQ and externally at the home institutes of astronomers, extensive efforts, iteration and retrofitting have been invested in the DFS to maintain a good level of performance and to keep it up to date. What has been obtained in the end is a robust, efficient and reliable 'science support engine', without which it would be difficult, if not impossible, to operate the VLT in a manner as efficient and with such great success as is the case today. Of course, in the end it is the symbiosis between the VLT Control System (VCS) and the DFS, plus the hard work of dedicated development and operational staff, that has made the success of the VLT possible. Although the basic framework of the DFS can be considered 'completed' and the DFS has by now been in operation for approximately three years, the implementation of improvements and enhancements is an ongoing process, mostly due to the appearance of new requirements. This article describes the origin of such new requirements for the DFS and discusses the challenges that have been faced in adapting the DFS to an ever-changing operational environment. Examples are given of recent new concepts designed and implemented to make the base part of the DFS more generic and flexible. The general adaptation of the DFS at system level to reduce maintenance costs, increase robustness and reliability, and to some extent keep it in conformance with industry standards is also mentioned. Finally, the general infrastructure needed to cope with a changing system is discussed in depth.
In this article we present the Data Flow System (DFS) for the Very Large Telescope Interferometer (VLTI). The Data Flow System is the VLT end-to-end software system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The Data Flow system is now in the process of installation and adaptation for the VLT Interferometer. The DFS was first installed for VLTI first fringes utilising the siderostats together with the VINCI instrument and is constantly being upgraded in phase with the VLTI commissioning. When completed the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2m telescopes as well as from a set of initially three 1.8m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. Observations of objects with some scientific interest are already being carried out in the framework of the VLTI commissioning using siderostats and the VLT Unit Telescopes, making it possible to test tools under realistic conditions. These tools comprise observation preparation, pipeline processing and further analysis systems. Work is in progress for the commissioning of other VLTI science instruments such as MIDI and AMBER. These are planned for the second half of 2002 and first half of 2003 respectively. The DFS will be especially useful for service observing. This is expected to be an important mode of observation for the VLTI, which is required to cope with numerous observation constraints and the need for observations spread over extended periods of time.
The Data Flow System is the VLT end-to-end system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The VLT Data Flow System has been in place since the opening of the first VLT Unit Telescope in 1998. When completed, the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2-m telescopes as well as from a set of initially three 1.8-m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. The Data Flow System is now in the process of installation and adaptation for the VLT Interferometer. Observation preparation for a multi-telescope system and the handling of large data volumes of several tens of gigabytes per night are among the new challenges posed by this system. This introductory paper presents the VLTI Data Flow System installed during the initial phase of VLTI commissioning. Observation preparation, data archival, and data pipeline processing are addressed.
FLAMES is a fiber facility to be installed on the A platform of the VLT Kueyen telescope, which can feed up to three spectrographs with fibers positioned over a corrected 25-arcminute field of view. The initial configuration will include connections to GIRAFFE and to the red arm of the UVES spectrograph; the latter, located on the Nasmyth B platform of the same telescope, is already in operation as a stand-alone long-slit instrument. The 8 fibers to UVES will give R ≈ 45000 and a large spectral coverage, while GIRAFFE will be fed by 132 single fibers, by 15 deployable integral field units, or by one central large integral field unit. GIRAFFE will be equipped with two gratings, giving R = 5000-9000 and R = 15000-25000 respectively. It will be possible to obtain GIRAFFE and UVES observations simultaneously. Special attention is paid to optimizing night operations and to providing appropriate data reduction. The instrument is rather complex and is now in the construction phase; in addition to ESO, its realization has required the collaboration of several institutes grouped in 4 consortia.
The operational applications needed to quantitatively assess VLT calibration and science data are provided by the VLT Quality Control system (QC). In the Data Flow observation life-cycle, QC relates data pipeline processing and observation preparation. It allows the ESO Quality Control Scientists of the Data Flow Operations group to populate and maintain the pipeline calibration database, to measure and verify the quality of observations, and to follow instrument trends. The QC system also includes models allowing users to predict instrument performance, and the Exposure Time Calculators are probably the QC applications most visible to the astronomical community. The Quality Control system is designed to cope with the large data volumes of the VLT, the geographical distribution of data handling, and the parallelism of observations executed on the different unit telescopes and instruments.
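To make the Exposure Time Calculator idea concrete, here is a minimal sketch of the underlying signal-to-noise bookkeeping for a CCD observation, in its generic textbook form with invented rates (not ESO's actual ETC models, which add detailed instrument and atmosphere terms):

```python
import math

def snr(source_rate, sky_rate, dark_rate, ron, npix, t):
    """Point-source CCD signal-to-noise ratio after an exposure of t seconds.

    source_rate: source counts within the aperture (e-/s)
    sky_rate, dark_rate: background rates (e-/s/pixel)
    ron: read-out noise (e-/pixel); npix: pixels in the aperture
    """
    signal = source_rate * t
    variance = signal + npix * ((sky_rate + dark_rate) * t + ron ** 2)
    return signal / math.sqrt(variance)

def exptime_for_snr(target, source_rate, sky_rate, dark_rate, ron, npix):
    """Invert the SNR equation: solve S^2 t^2 = target^2 (S t + npix (B t + ron^2))."""
    S, B = source_rate, sky_rate + dark_rate
    a = S ** 2
    b = -target ** 2 * (S + npix * B)
    c = -target ** 2 * npix * ron ** 2
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Invented instrument and source numbers, purely for illustration:
t = exptime_for_snr(10.0, source_rate=50.0, sky_rate=5.0, dark_rate=0.1,
                    ron=4.0, npix=25)
print("t = %.1f s, check SNR = %.3f" % (t, snr(50.0, 5.0, 0.1, 4.0, 25, t)))
```

The inversion is a simple quadratic in t because both the signal and the background variance scale linearly with exposure time, while the read-noise term does not.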
In order to realize the optimal scientific return from the VLT, ESO has undertaken to develop an end-to-end data flow system from proposal entry to science archive. The VLT Data Flow System (DFS) is being designed and implemented by the ESO Data Management and Operations Division in collaboration with the VLT and Instrumentation Divisions. Tests of the DFS started in October 1996 on ESO's New Technology Telescope. Since then, prototypes of the Phase 2 Proposal Entry System, the VLT Control System Interface, Data Pipelines, the On-line Data Archive, Data Quality Control and the Science Archive System have been tested. Several major DFS components have been run under operational conditions since February 1997. This paper describes the current status of the VLT DFS, the technological and operational challenges of such a system, and the planning for VLT operations beginning in early 1999.
Conducting service observing at large ground-based observatories involves delivering standard products to the user, as well as installing the mechanisms that guarantee the proper execution of the observations and the verification of the resulting data. This article presents the quality control system of the Very Large Telescope. Levels of quality are defined, corresponding to increasingly fundamental levels of verification of the observation process performance. After a presentation of the QC levels and their implementation for the VLT, the paper discusses the usage of instrument models. Indeed, several developments make it more practical today to use models efficiently throughout the observational process. On the one hand, the proposer can prepare observations using exposure time estimators and data simulators. On the other hand, the observatory can control the instrumental configuration, test data analysis procedures, and provide calibration solutions with the help of instrument models. The article closes with a report on the instrument modeling efforts for VLT and HST instruments.
The data flow system (DFS) for the ESO VLT provides a global system approach to the flow of science related data in the VLT environment. It includes components for preparation and scheduling of observations, archiving of data, pipeline data reduction and quality control. Standardized data structures serve as carriers for the exchange of information units between the DFS subsystems and VLT users and operators. Prototypes of the system were installed and tested at the New Technology Telescope. They helped us to clarify the astronomical requirements and check the new concepts introduced to meet the ambitious goals of the VLT. The experience gained from these tests is discussed.