The NOAO Data Lab will allow users to efficiently work with catalogs of billions of objects, augment traditional telescope imaging and spectral data with external archive holdings, publish high-level data products of their research, share custom results with collaborators, and experiment with analysis toolkits. The goal of the Data Lab is to provide a common framework and workspace for science collaborations and individuals to use and disseminate data from large surveys.
In this paper we describe the motivations behind the NOAO Data Lab and present a conceptual overview of the activities we plan to support. Specific science cases will be used to develop a prototype framework and tools, allowing us to work directly with scientists from survey teams to ensure that development remains focused on scientifically productive tasks. This will also build a pool of both scientific and technical experts who can provide ongoing advice and support for community users as the scope and capabilities of the Data Lab expand.
The Thirty Meter Telescope (TMT) will be a ground-based, 30-m optical-IR alt-az telescope with a highly segmented
primary mirror, located at a remote site. Efficient science operations require the asynchronous coordination of many
different sub-systems, including the telescope mount, three independent active optics sub-systems, adaptive optics, laser
guide stars, and the user-configured science instrument. An important high-level requirement is that target acquisition and
observatory system configuration be completed in less than 5 minutes (or 10 minutes when moving to a new
instrument). To meet this coordination challenge and target acquisition time requirement, a distributed software
architecture is envisioned consisting of software components linked by a service-based software communications
backbone. A master sequencer coordinates the activities of mid-layer sequencers for the telescope, adaptive optics, and
the selected instrument. In turn, these mid-layer sequencers coordinate the activities of groups of sub-systems. In this paper,
TMT observatory requirements are presented in more detail, followed by a description of the design reference software
architecture and a discussion of preliminary implementation strategies.
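As an illustration of this layered coordination, the following minimal Python sketch shows a master sequencer driving hypothetical mid-layer sequencers concurrently; the class names, sub-system names, and timings are invented for this example and do not represent the actual TMT interfaces or backbone.

```python
# Hedged sketch of hierarchical sequencing with invented names; the real TMT
# design distributes these components over a service-based software backbone.
import asyncio


class Sequencer:
    """Coordinates a group of lower-level command handlers asynchronously."""

    def __init__(self, name, children):
        self.name = name
        self.children = children        # name -> async configure callable

    async def configure(self, setup):
        # Issue configuration commands to all children concurrently and wait
        # for every one to report completion (asynchronous coordination).
        await asyncio.gather(*(child(setup) for child in self.children.values()))
        print(f"{self.name} sequencer: configuration complete")


def subsystem(name, duration):
    """Stand-in for a real sub-system controller (mount, AO loops, instrument)."""
    async def handler(setup):
        await asyncio.sleep(duration)   # pretend to slew / close loops / configure
        print(f"  {name} ready for '{setup}'")
    return handler


async def main():
    telescope = Sequencer("telescope", {
        "mount": subsystem("telescope mount", 0.3),
        "active_optics": subsystem("active optics", 0.2),
    })
    ao = Sequencer("adaptive optics", {
        "lgs": subsystem("laser guide stars", 0.2),
        "loops": subsystem("AO control loops", 0.1),
    })
    instrument = Sequencer("instrument", {
        "detector": subsystem("science instrument", 0.1),
    })
    # The master sequencer drives the three mid-layer sequencers in parallel,
    # which is how the acquisition can fit inside the 5-minute budget.
    master = Sequencer("master", {
        "telescope": telescope.configure,
        "ao": ao.configure,
        "instrument": instrument.configure,
    })
    await master.configure("new science target")


asyncio.run(main())
```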
The Thirty Meter Telescope (TMT) will be a ground-based, 30-m optical-IR telescope with a highly segmented primary
mirror, located at a remote site. From the start of operations, TMT will provide a rich and diverse mix of seeing-limited
and diffraction-limited instrumentation. Initially, only classical observing will be supported, although remote
observing will follow almost immediately. Queue (or service) observing may be supported at a later date. TMT users
will expect high facility uptime and observing efficiency as well as effective user support for planning and execution of
observations. Those expectations are captured in the high-level Operations Concept Definition (OCD) document. The
services and staffing needed to implement those concepts are described in the TMT Operations Plan. In this paper, high-level
TMT operational concepts are summarized, followed by a description of the current operations plan.
The Thirty Meter Telescope (TMT) project will provide diffraction-limited and seeing-limited capabilities that will be
highly synergistic with JWST and other planned astronomy missions. TMT will thus be poised to tackle many of the key
questions confronting scientists today and over the next several decades. The early light instrumentation will provide NIR
imaging and integral field spectroscopy designed to sample even the tiny 7 mas images delivered at 1.2 microns by a
multi-conjugate laser guide star AO system, near-infrared multi-slit spectroscopy over a 2 arcmin field (fed by the same
AO system, tuned for wide field performance), and wide field multi-object spectroscopy in the 0.3 - 1 micron
wavelength region. TMT is being designed, as a system, to take advantage of the observational opportunities that a
diffraction-limited 30-m telescope will afford. Results of detailed end-to-end modeling demonstrate excellent
performance in both seeing-limited and diffraction-limited modes. TMT is also being designed to operate in a very
efficient manner. We describe how this will be accomplished, the planned instrumentation with a focus on the
early light instruments, the new technologies that will be implemented, and the anticipated observing
programs, including how they will complement observations from other facilities.
The Thirty Meter Telescope (TMT) project is a partnership between the Association of Canadian Universities for Research in Astronomy (ACURA), Associated Universities for Research in Astronomy (AURA), Caltech and the University of California. The complexity of TMT and its diverse suite of instrumentation (many of which will be assisted by adaptive optics front-ends) necessitates the design and implementation of a highly-automated, well-tuned observatory software system. The fundamental system requirements are low operating costs and excellent reliability, both of which necessitate simplicity in software design. This paper will address how these requirements will be achieved as well as how the system will handle observing program execution.
Since the start of observations with the ESO Very Large Telescope (VLT) on April 3, 1999, a significant fraction of the observations has been executed in Service Mode (SM). SM observations require that the Principal Investigator (PI) provide all necessary information before the observation, so that the night astronomers in Chile have precise and complete indications of the execution requirements of every program. The observers also need to be able to determine which observations can be executed during a given night.
The link between these external users and the operations staff at ESO-Chile is the User Support Department (USD), which ensures that this information flow runs smoothly and uniformly. This requires a well-designed network of reports and communication procedures serving purposes such as conveying information from the users to the observatory, allowing the USD support astronomers to review and validate efficiently the material submitted by the users, enabling reliable tracking of program execution, and providing rapid feedback to the users on program progress. These tasks manage a level of information flow that complements that of the VLT Data Flow System.
This article provides an overview of how the exchange of information for SM runs has been optimized over the past seven years, of the lessons learned from interacting with external users and internal operations staff, and of the resulting changes and improvements.
The software for the Thirty Meter Telescope (TMT) is currently in the specification and design phase. A decision was
made early on to develop a common software package that will provide basic infrastructure and services to be used by all
project software packages. A roadmap for defining Common Software was written. The first roadmap step of defining
what should be included in common software was accomplished by analyzing similar projects. The result was the
definition of a reference architecture for end-to-end observatory software systems called the Observatory Software
Domain Architecture. This architecture was then used to define the specifications for the TMT common software. This
paper describes the roadmap, the reference architecture, and the current definition of TMT common software.
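To make the notion of shared infrastructure concrete, the toy sketch below shows the kind of component registration and lookup (location) service that observatory common software typically provides; all names and endpoints here are hypothetical and are not taken from the TMT common software specification.

```python
# Illustrative only: an in-process registry standing in for the kind of
# connection/location service a common software layer might provide.
class LocationService:
    """Lets components register themselves and resolve each other by name."""

    def __init__(self):
        self._registry = {}

    def register(self, name, endpoint):
        self._registry[name] = endpoint

    def resolve(self, name):
        return self._registry[name]


locations = LocationService()
locations.register("tcs.pointing", "tcp://tcs-host:5000")          # invented endpoints
locations.register("instrument.detector", "tcp://inst-host:5001")

# Any package built on the common layer resolves its peers the same way,
# instead of hard-coding host and port details in every subsystem.
print(locations.resolve("tcs.pointing"))
```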
The ESO Very Large Telescope Interferometer (VLTI) is the first general-user interferometer that offers near- and mid-infrared long-baseline interferometric observations in service mode as well as visitor mode to the whole astronomical community. Regular VLTI observations with the first scientific instrument, the mid-infrared instrument MIDI, have started in ESO observing period P73, for observations between April and September 2004. The efficient use of the VLTI as a general-user facility implies the need for a well-defined operations scheme. The VLTI follows the established general operations scheme of the other VLT instruments. Here, we present, from a user's point of view, the VLTI-specific aspects of this scheme, from the preparation of the proposal to the delivery of the data.
Currently four instruments are operational at the four 8.2m telescopes of the European Southern Observatory Very Large Telescope: FORS1, FORS2, UVES, and ISAAC. Their data products are processed by the Data Flow Operations Group (also known as QC Garching) using dedicated pipelines. Calibration data are processed in order to provide instrument health checks, monitor instrument performance, and detect problems in time. The Quality Control (QC) system has been developed over the past three years. It has the following general components: procedures (pipeline and post-pipeline) to measure QC parameters; a database for storage; a calibration archive hosting master calibration data; and web pages and interfaces. This system is part of a larger control system which also has a branch on Paranal, where quick-look data are immediately checked for instrument health. The VLT QC system has a critical impact on instrument performance. Some examples are given where careful quality checks have discovered instrument failures or non-optimal performance. Results and documentation of the VLT QC system are accessible at http://www.eso.org/qc/.
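As an informal illustration of the parameter-checking idea, the sketch below compares newly measured QC parameters against stored reference values and flags out-of-range results; the parameter names, nominal values, and tolerances are invented and do not reflect the actual QC Garching procedures.

```python
# Hedged sketch: compare measured QC parameters with reference values so that
# non-optimal instrument performance can be flagged for follow-up and trending.
REFERENCE = {
    "bias_level_adu": (200.0, 5.0),      # (nominal value, allowed deviation)
    "read_noise_e": (4.5, 0.5),
    "flat_lamp_flux": (30000.0, 3000.0),
}


def check_qc(measured):
    """Return (parameter, value, status) tuples for trending and health checks."""
    report = []
    for name, value in measured.items():
        nominal, tolerance = REFERENCE[name]
        status = "OK" if abs(value - nominal) <= tolerance else "OUT OF RANGE"
        report.append((name, value, status))
    return report


for row in check_qc({"bias_level_adu": 212.3, "read_noise_e": 4.6,
                     "flat_lamp_flux": 29500.0}):
    print("%-16s %10.1f  %s" % row)
```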
The execution of observations in Service Mode is an option at the European Southern Observatory Very Large Telescope. In this operations mode, observations are not scheduled for specific nights; instead, they are scheduled flexibly. Each night, observations are selected from a pool of possible observations based on Observing Programme Committee (OPC) priority and the current observing conditions. Ideally, the pool of possible observations contains a range of observations that exactly matches the real range of conditions and the real number of available hours, so that all observations are completed in a timely manner. Since this ideal case never occurs, constructing the pool of observations must be done carefully, with the goals of maximizing scientific return and operational efficiency. In this paper, basic ESO Service Mode scheduling concepts are presented. A specific VLT focus is maintained for most of this article, but the general principles hold for all ESO facilities executing Service Mode runs.
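As a toy illustration of the pool-construction problem, the sketch below compares the hours requested in each seeing bin against the hours a site might be expected to deliver over a semester; the bins, runs, and numbers are invented for the example.

```python
# Hedged sketch: check whether a Service Mode pool is balanced against the
# conditions the site is expected to deliver (all figures are invented).
expected_hours = {"<=0.6 arcsec": 60, "0.6-1.0 arcsec": 140, ">1.0 arcsec": 100}

requested = [("run A", "<=0.6 arcsec", 30), ("run B", "0.6-1.0 arcsec", 90),
             ("run C", "0.6-1.0 arcsec", 80), ("run D", ">1.0 arcsec", 40)]

requested_hours = {bin_: 0 for bin_ in expected_hours}
for _, bin_, hours in requested:
    requested_hours[bin_] += hours

for bin_, available in expected_hours.items():
    balance = available - requested_hours[bin_]
    flag = "oversubscribed" if balance < 0 else "spare capacity"
    print(f"{bin_:>15}: requested {requested_hours[bin_]:3d} h of {available:3d} h ({flag})")
```

A pool built this way over- or under-fills individual condition bins, which is exactly the imbalance schedulers must manage when the ideal case does not occur.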
End-to-end operations of the ESO VLT have now seen three full years of service to the ESO community. During that time its capabilities have grown to four 8.2m unit telescopes with a complement of four optical and IR multimode instruments being operated in a mixed Service Mode and Visitor Mode environment. The input and output of programs and data to the system are summarized over this period, together with the growth in operations manpower. We review the difficulties of working in a mixed operations and development environment and the ways in which the success of the end-to-end approach may be measured. Finally we summarize the operational lessons learned and the challenges posed by future developments of VLT instruments and facilities such as interferometry and survey telescopes.
On 31 March 2000, the ESO Very Large Telescope (VLT) will complete its first year of science operations. During this first year, Antu (UT1) was operated with two instruments: the Focal Reducer/Low Dispersion Spectrograph (FORS-1) and the Infrared Spectrometer And Array Camera (ISAAC). Both Visitor and Service Mode operations were successfully supported, with roughly equal time spent in each mode. On 1 April 2000, Kueyen (UT2) will begin science operations with two new instruments: the UV-Visible Echelle Spectrograph (UVES) and FORS-2. The VLT science operations concept revolves around a distributed operations model. Front-end (proposal, observation, and scheduling preparation support and management) operations and back-end (quality control, Service Mode data distribution, and archive) operations are executed at ESO headquarters in Garching bei München, Germany. Observation execution and on-line quality control are managed on-site at the Paranal Observatory, Cerro Paranal, Chile. The VLT Data Flow System provides the backbone infrastructure for VLT operations. Here we present an overview of the VLT science operations concept, a summary of the results from Year 1, and a discussion of lessons learned and of where the science operations concept had to be adapted to achieve the current level of operations.
The Observation Handling Subsystem (OHS) of the ESO VLT Data-Flow System was designed to collect, verify, store and distribute observation preparation information. This rather generic definition includes high-level Observing Proposals submitted once per semester to apply for telescope time (typically referred to as 'Phase I' proposals) as well as detailed descriptions of the observations to be performed (often called 'Phase II' data); in the Data-Flow System, such descriptions are defined as Observation Blocks (OBs). Observation queues and long- and short-term schedules are also produced, ranging in scope from an observing semester to a few hours. The OHS is a distributed system composed of a collection of loosely coupled software tools. The tools communicate mostly through a set of relational databases, which are distributed between Garching and the Chilean observatories. A number of communication protocols are also used, from the e-mail-based Receiver process of the Proposal Handling and Reporting System to the proprietary protocol used to serve the telescope and instrument control systems. Data and commands flow through the OHS, supporting the operational procedures of ESO's Observing Programmes Committee and of the different operations teams in Garching and in Chile. This paper presents the overall architecture of the OHS, each module's technical features, and the underlying operational concepts. It also discusses the current implementation choices and development plans.
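As an informal illustration of the kind of record the OHS tools exchange, the sketch below defines a stripped-down Observation Block and serializes it for hand-off between loosely coupled tools; the field names and values are hypothetical, not the actual OHS schema.

```python
# Hedged sketch of an Observation Block record with invented fields; the real
# OHS stores and exchanges OBs through its distributed relational databases.
import json
from dataclasses import dataclass, asdict


@dataclass
class ObservationBlock:
    ob_id: int
    run_id: str               # observing run the OB belongs to
    target: str
    instrument: str
    templates: list           # ordered acquisition/observation templates
    constraints: dict         # seeing, transparency, moon, airmass limits
    status: str = "defined"   # defined -> scheduled -> executed -> completed


ob = ObservationBlock(
    ob_id=101, run_id="63.H-0001(A)", target="NGC 1068", instrument="FORS1",
    templates=["FORS1_img_acq", "FORS1_img_obs_exp"],
    constraints={"seeing": 0.8, "transparency": "clear", "moon": "dark"},
)

# Loosely coupled tools can pass OBs around as simple serialized records.
print(json.dumps(asdict(ob), indent=2))
```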
On 1 April 1999, the first unit telescope (ANTU) of the ESO VLT began science operations. Two new instruments (FORS-1 for optical imaging and spectroscopy and ISAAC for IR imaging and spectroscopy) were offered in a mix of 50% visitor mode and 50% service mode. A Phase-I and Phase-II proposal and observation preparation process was conducted from 1 October 1998 until the middle of March 1999, involving approximately 280 proposals. A total of 1768 Observation Blocks for 83 approved service mode programs were scheduled and executed between 1 April and 1 October 1999. The resultant raw science and calibration data were subjected to quality control in Garching and released to the ESO user community starting from 15 June 1999, along with pipeline-processed data products for a subset of instrument modes. The data flow loop for the first VLT unit telescope is closed. The current operational VLT data flow system and the developments for the remainder of the VLT will be presented in the light of the first year of operational experience.
We have obtained high-speed image motion data from the 3.5-m WIYN telescope at Kitt Peak as part of commissioning and characterization efforts. These data come from a small frame-transfer CCD feeding dedicated centroiding hardware called FastTrack, which provides x and y data pairs at frequencies > Hertz. In this paper we use power spectra from these data to investigate telescope structure resonances and to characterize the combined site and telescope performance, assessing the gains achievable by implementing tip-tilt correction at WIYN. At frequencies below 20 Hz the image motion power spectra are consistent with Kolmogorov turbulence models, but show excess power at higher frequencies. Over the past two years the FastTrack data have consistently shown that image improvements of 0.10-0.20 arcseconds can be obtained with a simple tip-tilt system that fully corrects frequencies of 20 Hz and lower. The FastTrack power spectra have also revealed a complex structure of coherent frequencies between 22 and 28 Hz, similar to what has been seen on other lightweight, stiff telescopes. In other tests, we have used simultaneous star trail data to estimate the isokinetic angle for frequencies below 10 Hz. We have found that the one-dimensional correlation remains above 0.8 within an angular radius of approximately 240 arcseconds. These data show that a high-speed tip-tilt system can yield significant improvements to image quality at WIYN over a relatively large field.
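The flavor of this analysis can be sketched on synthetic centroid data: estimate the image-motion power spectrum and the fraction of the variance below 20 Hz, i.e. the part a modest tip-tilt loop could in principle remove. The sampling rate and signal model below are assumptions chosen for the example, not the FastTrack values.

```python
# Hedged sketch with synthetic data, not FastTrack measurements.
import numpy as np

rate = 100.0                        # assumed centroid sampling rate in Hz
t = np.arange(0, 60, 1 / rate)      # one minute of x-centroid samples
rng = np.random.default_rng(1)
# Synthetic motion: slow wander plus a weak 25 Hz "resonance" plus noise.
x = (0.3 * np.sin(2 * np.pi * 0.5 * t)
     + 0.05 * np.sin(2 * np.pi * 25.0 * t)
     + 0.05 * rng.standard_normal(t.size))

power = np.abs(np.fft.rfft(x - x.mean())) ** 2
freq = np.fft.rfftfreq(x.size, d=1 / rate)

low = power[freq <= 20.0].sum() / power.sum()
print(f"fraction of image-motion power below 20 Hz: {low:.2f}")
```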
Science operations at the ESO Very Large Telescope (VLT) are scheduled to begin in April 1999. ESO is currently finalizing the VLT science operations plan. This plan describes the operations tasks and staffing needed to support both visitor and service mode operations. The Data Flow System (DFS) currently being developed by ESO will provide the infrastructure necessary for VLT science operations. This paper describes the current VLT science operations plan, first by discussing the tasks involved and then by describing the operations teams that have responsibility for those tasks. Prototypes of many of these operational concepts and tools have been in use at the ESO New Technology Telescope (NTT) since February 1997. This paper briefly summarizes the status of these prototypes and then discusses what operational lessons have been learned from the NTT experience and how they can be applied to the VLT.
In order to realize the optimal scientific return from the VLT, ESO has undertaken to develop an end-to-end data flow system from proposal entry to science archive. The VLT Data Flow System (DFS) is being designed and implemented by the ESO Data Management and Operations Division in collaboration with the VLT and Instrumentation Divisions. Tests of the DFS started in October 1996 on ESO's New Technology Telescope. Since then, prototypes of the Phase 2 Proposal Entry System, VLT Control System Interface, Data Pipelines, On-line Data Archive, Data Quality Control and Science Archive System have been tested. Several major DFS components have been run under operational conditions since February 1997. This paper describes the current status of the VLT DFS, the technological and operational challenges of such a system, and the planning for VLT operations beginning in early 1999.
During the past two years NOAO has conducted a queue observing experiment with the 3.5m WIYN telescope on Kitt Peak, Arizona. The WIYN telescope is ideally suited to queue-scheduled operation in terms of its performance and its instrument complement. The queue scheduling experiment on WIYN was designed to test a number of beliefs and hypotheses about gains in efficiency and scientific effectiveness due to queue scheduling. In addition, the experiment was a test of our implementation strategy and management of community expectations. The queue is run according to a set of rules that guide decisions about which observation to do next. In practice, scientific rank, suitability of current conditions, and the desire to complete programs all enter into these decisions. As predicted by Monte Carlo simulations, the queue increases the overall efficiency of the telescope, particularly for observations requiring rare conditions. Together with this improvement for typical programs, the queue enables synoptic, target-of-opportunity, and short programs that could not be scheduled classically. Despite this success, a number of sociological issues determine the community's perception of the WIYN queue.
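A minimal sketch of such a decision rule, with invented weights and fields, scores each program by its scientific rank, how closely the current conditions match its requirements, and its completion fraction, then picks the highest-scoring eligible program:

```python
# Hedged sketch of a queue decision rule; the weights, fields, and programs
# are invented and do not reproduce the actual WIYN queue rules.
def score(program, seeing_now):
    if seeing_now > program["max_seeing"]:
        return None                      # conditions not good enough: skip
    rank_term = 1.0 / program["rank"]    # higher-ranked (rank 1) programs win
    # Prefer programs whose seeing requirement is only just met, so the best
    # conditions are saved for the most demanding programs.
    match_term = seeing_now / program["max_seeing"]
    completion_term = program["fraction_done"]
    return 2.0 * rank_term + 1.0 * match_term + 0.5 * completion_term


queue = [
    {"id": "Q1", "rank": 1, "max_seeing": 0.6, "fraction_done": 0.2},
    {"id": "Q2", "rank": 2, "max_seeing": 1.0, "fraction_done": 0.9},
    {"id": "Q3", "rank": 3, "max_seeing": 1.5, "fraction_done": 0.5},
]

seeing_now = 0.9                         # current seeing in arcseconds
candidates = []
for program in queue:
    s = score(program, seeing_now)
    if s is not None:
        candidates.append((s, program["id"]))

print("next observation:", max(candidates)[1])
```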
One of the most important design goals of the ESO Very Large Telescope (VLT) is operational efficiency, to maximize the scientific productivity of the observatory. 'Service mode' observations will take up a significant fraction of the VLT's time, with the goal of matching the best observing conditions to the most demanding scientific programs. Such an operational scheme requires extensive computer support in the areas of observation preparation and execution. In this paper we present some of the software tools developed at ESO to support VLT observers, both staff and external. Our Phase II proposal preparation system and the operational toolkit are prototype implementations of the final VLT systems and have been in use for over a year, while the scheduling tools to support 'service mode' operations are still under development.
The data flow system (DFS) for the ESO VLT provides a global, system-level approach to the flow of science-related data in the VLT environment. It includes components for the preparation and scheduling of observations, archiving of data, pipeline data reduction, and quality control. Standardized data structures serve as carriers for the exchange of information units between the DFS subsystems and VLT users and operators. Prototypes of the system were installed and tested at the New Technology Telescope. They helped us to clarify the astronomical requirements and to check the new concepts introduced to meet the ambitious goals of the VLT. The experience gained from these tests is discussed.