In order to realize the optimal scientific return from the VLT, ESO has undertaken to develop an end-to-end data flow system from proposal entry to science archive. The VLT Data Flow System (DFS) is being designed and implemented by the ESO Data Management and Operations Division in collaboration with the VLT and Instrumentation Divisions. Tests of the DFS started in October 1996 on ESO's New Technology Telescope. Since then, prototypes of the Phase 2 Proposal Entry System, VLT Control System Interface, Data Pipelines, On-line Data Archive, Data Quality Control and Science Archive System have been tested. Several major DFS components have been run under operational conditions since February 1997. This paper describes the current status of the VLT DFS, the technological and operational challenges of such a system and the planning for VLT operations beginning in early 1999.
Science operations at the ESO Very Large Telescope (VLT) are scheduled to begin in April 1999. ESO is currently finalizing the VLT science operations plan. This plan describes the operations tasks and staffing needed to support both visitor and service mode operations. The Data Flow System (DFS) currently being developed by ESO will provide the infrastructure necessary for VLT science operations. This paper describes the current VLT science operations plan, first by discussing the tasks involved and then by describing the operations teams that have responsibility for those tasks. Prototypes of many of these operational concepts and tools have been in use at the ESO New Technology Telescope (NTT) since February 1997. This paper briefly summarizes the status of these prototypes and then discusses what operational lessons have been learned from the NTT experience and how they can be applied to the VLT.
In order to use the next generation of space- and ground-based observatories for the greatest scientific benefit, the experiences of current missions should be carefully examined to find strategies which have worked well and also to identify areas where new paradigms are needed. With the operation of the Hubble Space Telescope, the Space Telescope Science Institute pioneered the large scale application of non-traditional operations models including observation preparation tools, integrated scheduling for increased scientific return, service observing, and multi-year long-range planning. This paper discusses the key aspects of HST operations, including concepts which worked well and those which did not. We discuss how this experience can be applied to new ground- and space-based missions.
During the past two years NOAO has conducted a queue observing experiment with the 3.5m WIYN telescope on Kitt Peak, Arizona. The WIYN telescope is ideally suited to queue-scheduled operation in terms of its performance and its instrument complement. The queue scheduling experiment on WIYN was designed to test a number of beliefs and hypotheses about gains in efficiency and scientific effectiveness due to queue scheduling. In addition, the experiment was a test of our implementation strategy and management of community expectations. The queue is run according to a set of rules that guide decisions about which observation to do next. In practice, scientific rank, suitability of current conditions, and the desire to complete programs all enter into these decisions. As predicted by Monte Carlo simulations, the queue increases the overall efficiency of the telescope, particularly for observations requiring rare conditions. Together with this improvement for typical programs, the queue enables synoptic, target-of-opportunity, and short programs that could not be scheduled classically. Despite this success, a number of sociological issues determine the community's perception of the WIYN queue.
Experience in bringing into operation the 91-segment primary mirror alignment and control system, the focal plane tracker system, and other critical subsystems of the HET will be described. Particular attention is given to the tracker, which utilizes three linear and three rotational degrees of freedom to follow sidereal targets. Coarse time-dependent functions for each axis are downloaded to autonomous PMAC controllers that provide the precise motion drives to the two linear stages and the hexapod system. Experience gained in aligning the separate mirrors and then maintaining image quality in a variable thermal environment will also be described. Because of the fixed elevation of the primary optical axis, only a limited amount of time is available for observing objects in the 12-degree-wide observing band. With a small core HET team working with McDonald Observatory staff, efficient, reliable, uncomplicated methodologies are required in all aspects of the observing operations.
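The "coarse time-dependent function per axis" idea above can be sketched as follows. This is a hypothetical illustration, not the actual HET/PMAC software: each of the six tracker axes is modeled as a low-order polynomial in time, and the controller would interpolate between the resulting setpoints. All coefficient values are invented for the example.

```python
# Hypothetical sketch of per-axis coarse trajectory functions.
# Each tracker axis gets a low-order polynomial in time; the real
# controllers then provide fine interpolation between setpoints.
# Coefficient values are illustrative, not real HET data.

def make_axis_trajectory(coeffs):
    """Return f(t) = c0 + c1*t + c2*t^2 + ... for one axis."""
    def trajectory(t):
        return sum(c * t**i for i, c in enumerate(coeffs))
    return trajectory

# Six degrees of freedom: three linear stages, three rotations (hexapod).
axes = {
    "x":  make_axis_trajectory([0.0, 1.5e-2]),        # mm, linear drift
    "y":  make_axis_trajectory([0.0, -2.0e-3]),
    "z":  make_axis_trajectory([0.1, 0.0, 1.0e-6]),   # focus term
    "rx": make_axis_trajectory([0.0, 4.0e-5]),        # radians
    "ry": make_axis_trajectory([0.0, -4.0e-5]),
    "rz": make_axis_trajectory([0.0, 0.0]),
}

def setpoints(t):
    """Commanded position of every axis at time t (seconds into the track)."""
    return {name: f(t) for name, f in axes.items()}
```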
The goal of STScI's user support is to provide HST observers with the tools, documentation and assistance they need to maximize the scientific return of their observations. This includes pre-observing support to design feasible observing programs which meet their scientific goals and post-observing support in the calibration, reduction, and analysis of the data. The current model for user support evolved over the first five years of HST operations and culminated in our contact scientist (CS) and program coordinator (PC) team. The CS is a professional astronomer as well as an instrument scientist for one of the HST instruments. The PC provides technical support as an expert in the language and tools of HST observation specification, implementation and scheduling. The underlying philosophy is that (1) the CS/PC team supports the observer from 'cradle to grave' of the observation and (2) the team is a 'single point of contact' for the observer. This means the observer can contact the CS/PC team during any phase in the life cycle of an HST program to receive assistance. It also ensures that the user obtains help from the two people at STScI who are the most familiar with the program, without being shuffled among many different experts. The STScI help desk provides parallel support for requests which do not deal with a given HST program. Requests are received, tracked, and assigned to the appropriate expert for reply. Our holistic approach combines CS/PC support with documentation, software and tools, and the help desk to create an efficient and powerful support structure for observers.
We review the Gemini Observatory science operations plan including the proposal submission, allocation and observation planning processes; the telescope operation model; and the scientific staffing plans and user support. Use of the telescope is shown via a sub-stellar companion search program to illustrate the planning tools and level of integration required between the observatory control, telescope control and data handling software systems.
Presented are some conclusions from the UKIRT reactive scheduling experiment. This shows that, provided adequate back-up time is available, it is possible to manipulate the schedule of a 4m class telescope to increase significantly the likelihood that a few chosen programs can be completed and that such a scheme can double the success rate of those programs. Also, despite the personal inconvenience, observers are willing to be flexible when offered a second chance to take up observing time lost in bad weather.
As institutions and observatories are required to handle more tasks with fewer resources, the need to assist or automate some of the processing becomes crucial. One of the easiest tasks to automate is the front-end process of requesting to use the telescope. Proposing for Hubble Space Telescope (HST) observing time and archival research proceeds in two phases: in Phase I, the scientific merits of the proposal are considered, and only accepted proposals enter Phase II, where the observations are specified in complete detail. The HST Phase I process includes obtaining, completing, and submitting proposal forms. The automation includes making the proposal forms available, and allowing them to be submitted electronically. Because a standard proposal form is provided, the necessary information contained in the proposal can be extracted and processed by software. Tracking and low-level error detection can be handled with software, while more intellectually challenging tasks are handled by people. This paper discusses the current system for Phase I proposers to use the HST, including some of the tools available for automating a proposal submission process. This paper is an update of the system described in the published paper 'Computer-assisted Proposal Submission Systems'. This system has been in use for the past three HST cycles and is being used for the most current call for proposals.
In 1995, the Space Telescope Science Institute (STScI) introduced RPS2 (Remote Proposal Submission 2). RPS2 is used by Hubble Space Telescope (HST) proposers to prepare their detailed observation descriptions. It is a client/server system implemented using Tcl/Tk. The client can transparently access servers on the user's machine, at STScI, or on any other machine on the Internet. The servers combine syntax checking, feasibility analysis and orbit packing, and constraint and schedulability analysis of user-specified proposals as they will be performed aboard HST. Prior to the release of RPS2, observers used a system which provided only syntax checking. RPS2 now provides the observers with some of the more complicated pieces of software that had been used by STScI staff to prepare observations since 1990. The RPS2 system consists of four independent subsystems, controlled by the client/server mechanism. A problem with a system of this size and complexity is that the software components, which continue to grow and change with HST itself, must continually be tested and distributed to those who need them. In the past, it had been acceptable to release the RPS2 software only once per observing cycle, but it became apparent before the 1997 HST Servicing Mission that multiple releases of RPS2 were going to be required to support the new instruments. This paper discusses how RPS2 and its component systems are maintained, updated, tested, and distributed.
One of the most important design goals of the ESO Very Large Telescope (VLT) is efficiency of operations, to maximize the scientific productivity of the observatory. 'Service mode' observations will take up a significant fraction of the VLT's time, with the goal of matching the best observing conditions to the most demanding scientific programs. Such an operational scheme requires extensive computer support in the area of observation preparation and execution. In this paper we present some of the software tools developed at ESO to support VLT observers, both staff and external. Our phase II proposal preparation system and the operational toolkit are prototype implementations of the final VLT systems and have been in use for over a year, while the scheduling tools to support 'service mode' operations are still under development.
Construction of the first Gemini 8-m telescope is well underway. The software that provides the user interface and high-level control of the observatory, the observatory control system (OCS), is also proceeding on track. The OCS provides tools that assist the astronomer from the proposal submission phase through planning, observation execution, and data review. A capable and flexible software infrastructure is required to support this comprehensive approach. New software technologies and industry standards have played a large part in the implementation of this infrastructure. For instance, the use of CORBA has provided many benefits in the software including object distribution, an interface definition language, and implementation language independence. In this paper, we describe the infrastructure of the OCS that supports observation planning and execution. Important software decisions and interfaces that allow Internet access and the ability to substitute alternate implementations easily are discussed as a model for other similar projects.
The advent of SCUBA, and the imminent delivery of a new state-of-the-art heterodyne receiver to operate in the 650 micrometer and 450 micrometer bands, indicate that the JCMT is primarily being driven towards high-frequency submillimeter observations. The number of applications from the community requesting time using SCUBA has already led to a large over-subscription for the high-frequency submillimeter weather. Thus it has become significantly more important, and timely, to experiment with flexible observing in order to maintain the JCMT's status as one of the world's foremost submillimeter telescopes. It has been estimated elsewhere that weather conditions appropriate for efficient operation of these types of instruments in their highest frequency modes occur only about 30 percent of the time over Mauna Kea. Techniques for predicting the water vapor content over the mountain, and hence the sky opacity, would be extremely useful, and studies are in progress towards this goal. A brief analysis of actual sky opacity records indicates that a figure nearer 25 percent may be appropriate over the past 3 years. There is evidence that certain meteorological disturbances, such as the El Nino effect, may result in an enhancement of the percentage of extremely dry weather to around 40 percent. This paper describes early attempts to flexibly schedule high-frequency submillimeter observations on the JCMT. Some of these schemes have met with more success than others. In the light of past experiences, a significantly different flexible queue-driven system was implemented for the first observing semester using SCUBA in an attempt to maximize the scientific return achievable given the 'weather' available. Details are presented of the operation and results obtained from this highly successful scheme. A brief description is also given of the currently running, slightly revised version of the system.
Writing, reviewing, and selecting the proposals which are to define the science program of any state-of-the-art observatory/space mission are all tasks which have grown in complexity, and as a consequence large amounts of time and effort are currently being invested in this process by proposers as well as reviewers. Viewed from the opposite vantage point, the currently used solicitation and selection process is a significant operational expense: mailing paper copies of proposals and gathering reviewers for panel meetings and a 'time allocation committee' involves a large amount of logistical support and time by the observatory staff. Finally, the batching of proposals into yearly cycles increases the time from concept of a scientific idea to receipt of actual data, which decreases the ability to respond to new scientific developments and also increases the general operational overhead of handling a large batch of observations. In this paper we explore two experimental steps towards an optimal proposal selection process: streamlining the current process via 'paperless' and 'groupware' technologies, and use of a 'steady state' process which accepts submission and review of proposals continuously. The pros and cons of each approach are examined and we demonstrate that not only are the enabling technologies available, but when resources are considered in a global manner we can identify both major improvements to the current process and significant reductions in the expenditure of resources.
We describe our scheme for scheduling and observing with the Hobby-Eberly Telescope (HET). The HET will be operated 85 percent of the time in a queue-scheduled, service observing mode. Principal investigators (PIs) use software planning tools to determine how to make their observations with the HET, and submit proposals for telescope time to local Time Allocation Committees (TACs). Once time has been granted, PIs submit detailed observing scripts which instruct HET operations how, when, and under what conditions data are to be taken. These scripts are compiled into a relational database which is used to schedule the telescope. Observations are scheduled using TAC and PI-assigned priorities to rank plans relative to one another. Resident astronomers use these priorities plus a set of simple precedence rules to determine which objects are to be observed each night. The execution of observation scripts is mostly automated, with the software commanding the telescope position and building data acquisition macros for each instrument. Aside from building and running the nightly observing queue, the resident astronomers are responsible for identifying targets, starting exposures, and validating data quality. They may also revise the observing queue in real time as conditions change. We discuss our initial experience working with this system, scheduling and executing observations during the commissioning of the HET.
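The nightly decision rule described above (TAC and PI priorities plus simple precedence checks against current conditions) can be sketched as follows. This is an illustrative toy, not the actual HET software; the field names and the lower-number-is-better priority convention are assumptions.

```python
# Illustrative sketch of queue ranking: keep only scripts whose
# declared condition limits are met right now, then pick the one with
# the best TAC priority, breaking ties by PI priority.

def observable(script, conditions):
    """Precedence check: seeing and transparency must satisfy limits."""
    return (conditions["seeing"] <= script["max_seeing"] and
            conditions["transparency"] >= script["min_transparency"])

def next_target(queue, conditions):
    candidates = [s for s in queue if observable(s, conditions)]
    # Lower priority number = more important (assumed convention).
    candidates.sort(key=lambda s: (s["tac_priority"], s["pi_priority"]))
    return candidates[0] if candidates else None

queue = [
    {"name": "NGC 1068", "tac_priority": 2, "pi_priority": 1,
     "max_seeing": 1.5, "min_transparency": 0.8},
    {"name": "HD 209458", "tac_priority": 1, "pi_priority": 2,
     "max_seeing": 1.0, "min_transparency": 0.9},
]
tonight = {"seeing": 1.2, "transparency": 0.85}
# HD 209458 needs better seeing than tonight offers, so NGC 1068 wins.
```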
The MACHO experiment is searching for dark matter in the halo of the Galaxy by monitoring more than 50 million stars in the LMC, SMC, and Galactic bulge for gravitational microlensing events. The hardware consists of a 50 inch telescope, a two-color 32 megapixel CCD camera and a network of computers. On clear nights the system generates up to 8 GB of raw data and 1 GB of reduced data. The computer system is responsible for all realtime control tasks, for data reduction, and for storing all data associated with each observation in a database. The subject of this paper is the software system that handles these functions. It is an integrated system controlled by Petri nets that consists of multiple processes communicating via mailboxes and a bulletin board. The system is highly automated, readily extensible, and incorporates flexible error recovery capabilities. It is implemented with C++ in a Unix environment.
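The mailbox/bulletin-board pattern mentioned above can be sketched in a few lines. This uses Python threads as stand-ins for the paper's C++ Unix processes, and omits the Petri-net control layer entirely; it shows only the message-passing skeleton.

```python
# Minimal sketch of processes communicating via a mailbox (queue) and
# a shared bulletin board (dict).  Threads stand in for real processes;
# the Petri-net controller of the actual MACHO system is not modeled.
import queue
import threading

bulletin_board = {}                 # shared state visible to all workers
board_lock = threading.Lock()

def reducer(mailbox, done):
    """Consume observation messages, post status to the bulletin board."""
    while True:
        msg = mailbox.get()
        if msg is None:             # sentinel: shut down
            break
        with board_lock:
            bulletin_board[msg["obs_id"]] = "reduced"
    done.set()

mailbox = queue.Queue()
done = threading.Event()
threading.Thread(target=reducer, args=(mailbox, done)).start()

mailbox.put({"obs_id": "obs_001"})
mailbox.put({"obs_id": "obs_002"})
mailbox.put(None)
done.wait()
```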
Images of astronomical objects acquired by ground-based telescopes are blurred by atmospheric turbulence. These blurring effects can be partially overcome by post-detection processing such as speckle imaging (SI). We have developed a parallel implementation of SI to dramatically shorten the time required to reduce imaging data, allowing us to implement a near realtime (NRT) SI image feedback capability. With NRT SI feedback, telescope operators can select observing parameters to optimize data quality while the data is being taken. NRT processing also allows easy selection of the best data from a long observation for later post-detection processing using more sophisticated algorithms. Similar NRT schemes could also be implemented for non-imaging measurements, such as spectroscopy. NRT feedback will yield higher quality data products and better utilization of observatory resources.
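The NRT feedback loop above can be sketched as a parallel per-frame quality scorer. This is a deliberately crude stand-in for real speckle processing: threads replace the parallel workers, and the quality metric (fraction of flux in the brightest pixel) is an invented proxy for sharpness, not an SI reconstruction.

```python
# Sketch of near-real-time feedback: score incoming frames in parallel
# so the operator can keep only the sharpest data for later, more
# sophisticated post-detection processing.
from concurrent.futures import ThreadPoolExecutor

def frame_quality(frame):
    """Higher = sharper: fraction of total flux in the brightest pixel."""
    total = sum(sum(row) for row in frame)
    peak = max(max(row) for row in frame)
    return peak / total

frames = [
    [[1, 1], [1, 9]],    # concentrated flux: sharp
    [[3, 3], [3, 3]],    # flat: blurred
]

with ThreadPoolExecutor() as pool:
    scores = list(pool.map(frame_quality, frames))

best = max(range(len(frames)), key=scores.__getitem__)
```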
We are developing DASH, a data reduction and analysis system for efficient processing of data from the Subaru telescope. We adopted CORBA as a distributed object environment and Java for the user interface in the prototype of DASH. Moreover, we introduced a data reduction procedure, the 'cube', as a kind of visual procedure script.
For the past seven years observing with the major instruments at the United Kingdom IR Telescope (UKIRT) has been semi-automated, using ASCII files to configure the instruments and then sequence a series of exposures and telescope movements to acquire the data. For one instrument automatic data reduction completes the cycle. The emergence of recent software technologies has suggested an evolution of this successful system to provide a friendlier and more powerful interface to observing at UKIRT. The Observatory Reduction and Acquisition Control (ORAC) project is now underway to construct this system. A key aim of ORAC is to allow a more complete description of the observing program, including the target sources and the recipe that will be used to provide on-line data reduction. Remote observation preparation and submission will also be supported. In parallel the observatory control system will be upgraded to use these descriptions for more automatic observing, while retaining the 'classical' interactive observing mode. The final component of the project is an improved automatic data reduction system, allowing on-line reduction of data at the telescope while retaining the flexibility to cope with changing observing techniques and instruments. The user will also automatically be provided with the scripts used for the real-time reduction to help provide post-observing data reduction support. The overall project goal is to improve the scientific productivity of the telescope, but it should also reduce the overall ongoing support requirements, and has the eventual goal of supporting the use of queue-scheduled observing.
The Subaru Observation Control System has selected Ethernet and Fibre Channel as its standard interfaces to instruments; every instrument must connect to at least one of these LANs. Regarding data transfer to the Hilo base facility, the first concern is that no data be lost during the transfer process, whatever failures may occur in the hardware or on the network. In hardware, we provide a RAID and a tape library at the summit and another RAID at the base facility. As a further measure, in software, the Subaru Observation Software System manages the data files, enabling users to track the location of each file. The hardware configuration of the summit simulation system, which is used for instrument testing and related work, is presented. The telescope at the summit of Mauna Kea has been connected to the supercomputer at the base facility via an OC-12 link. This high-speed network is used not only for data transfer and IP communication, but also for multimedia communication such as video and telephone. The multimedia project is introduced.
Conducting service observing in large ground-based observatories involves delivering standard products to the user, as well as installing the mechanisms to guarantee the proper execution of the observations and the verification of the resulting data. This article presents the quality control system of the Very Large Telescope (VLT). Levels of quality are defined, corresponding to increasingly fundamental levels of verification of the observation process performance. After a presentation of the QC levels and their implementation for the VLT, the paper discusses the usage of instrument models. Indeed several developments make it more practical today to efficiently use models in the entire observational process. On the one hand, the proposer can prepare observations using exposure time estimators and data simulators. On the other hand the observatory can control the instrumental configuration, test data analysis procedures, and provide calibration solutions with the help of instrument models. The article closes with a report on the instrument modeling efforts for VLT and HST instruments.
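The exposure-time-estimator idea mentioned above can be sketched with the standard CCD signal-to-noise equation. This is a minimal illustration, not any observatory's actual ETC: it counts only source, sky, and read noise, the solver is a crude doubling search, and all rates in the example are invented.

```python
# Minimal exposure time estimator using the standard CCD S/N equation
# (source + sky + read noise only; example values are hypothetical).
import math

def snr(source_rate, sky_rate, read_noise, npix, t):
    """S/N of a point source: rates in e-/s, read noise in e-/pixel."""
    signal = source_rate * t
    noise = math.sqrt(signal + npix * (sky_rate * t + read_noise**2))
    return signal / noise

def exposure_time_for(target_snr, *args, t=1.0):
    """Crude iterative solve: double t until the target S/N is reached."""
    while snr(*args, t) < target_snr:
        t *= 2.0
    return t

# e.g. a faint source: 5 e-/s, sky 10 e-/s/pix, RN 4 e-, 20 pixels
t_needed = exposure_time_for(10.0, 5.0, 10.0, 4.0, 20)
```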
The Hubble Space Telescope Calibration Database System collects and organizes data used in calibration of the many operating modes of the on-board scientific instruments. During the period from July 1995 to January 1997 the calibration database system underwent a major redesign. The existing system had performed well since 1990 but some shortcomings were becoming apparent. The advent of two new science instruments, one of which has a very large number of operational modes, promised major complications. The new design operates with far fewer database tables yet provides extra functions. The tracking of replacement files has been improved, the maintenance of documentation has been simplified, and the process for installing data has been automated and streamlined. Additionally, various scripts have been written to perform checks on currently installed and historic data. This has resulted in a very efficient and reliable installation process which accommodates the new instruments and supports new data formats. It has also allowed us to detect and correct some discrepancies in the existing data that arose from occasional errors in the earlier manual procedures.
Service mode observing simultaneously provides convenience, observing efficiency, cost-savings, and scheduling flexibility. To effectively optimize these advantages, the observer must exactly specify an observation with no real time interaction with the observatory staff. In this respect, ground-based service-mode observing and HST observing are similar. There are numerous details which, if unspecified, are either ambiguous or are left to chance, sometimes with undesirable results. Minimization of ambiguous/unspecified details is critical to the success of both HST and ground-based service observing. Smart observing proposal development tools which have built-in flexibility are therefore essential for both the proposer and the observatory staff. Calibration of the science observations is also an important facet of service observing. A centralized calibration process, while resource-intensive to install and maintain, is advantageous in several ways: it allows a more efficient overall use of the telescope, guarantees a standard quality of the observations, and makes archival observations more easily usable, greatly increasing the potential scientific return from the observations. In order to maximize the scientific results from an observatory in a service mode operations model, the observatory needs to be committed to performing a standard data quality evaluation on all science observations to assist users in their data evaluation and to provide data quality information to the observatory archive. The data quality control process at STScI adds value to the HST data and associated data products through examination and improvement of data processing, calibration, and archiving functions. This functionality is provided by a scientist who is familiar with the science goals of the proposal and assists its development throughout, from observation specification to the analysis of the processed data.
Finally, archiving is essential to good service observing, because a good archive helps improve observing efficiency by not allowing unnecessary duplication of observations.
The Hubble Data Archive at the Space Telescope Science Institute contains over 4.3 TB of data, primarily from the Hubble Space Telescope, but also from complementary space-based and ground-based facilities. We are in the process of upgrading and generalizing many of the HDA's component systems, developing tools to provide more integrated access to the HDA holdings, and working with other major data providing organizations to implement global data location services for astronomy and other space science disciplines. This paper describes the key elements of our archiving and data distribution systems, including a planned transition to DVD media, data compression, data segregation, on-the-fly calibration, an engineering data warehouse, and distributed search and retrieval facilities.
The ESO Very Large Telescope (VLT) will deliver a science archive of astronomical observations well exceeding the 80-terabyte mark within its first six years of operations. ESO is undertaking the design and development of both on-line and off-line archive facilities. This paper reviews the current planning and development state of the VLT science archive project.
The Subaru telescope is one of the largest ground-based optical/IR telescopes, and it will produce a very large amount of data on the universe. For secure data storage and effective science output, a highly intelligent data archive system is needed: one that keeps observed data secure and provides a user-friendly environment for scientific and engineering research. The most important requirements for a data archive system at a ground-based telescope are that it provide an understandable description of each observation performed and that it offer user-oriented tools for data searching and related tasks. We report the status of our development and the features of the database environment at the Subaru telescope.
The Subaru telescope of the National Astronomical Observatory of Japan is now in its commissioning phase, and seven powerful instruments will be installed, producing several tens of megabytes of data in each second of observation. The total storage necessary to keep those data amounts to about 20 TB per year. Here we introduce the concept of the hierarchical data storage system on the supercomputer system of the Hilo Base Facility of the Subaru telescope. A detailed description of the computer system and its performance features is also presented. The computer system is useful for operation support based on an advanced information management database, called the Subaru Data Base.
The Isaac Newton Group comprises three telescopes: the 4.2m William Herschel Telescope, the 2.5m Isaac Newton Telescope, and the 1.0m Jacobus Kapteyn Telescope. The operational capability of the ING has been increased by integrating the fault reporting system with the archiving of data. All data obtained from the telescopes are automatically archived and stored on-line in a 500-slot CD-ROM jukebox. The Flexible Image Transport System (FITS) headers are stripped, stored in a Sybase database and are available immediately for inspection via a web-based user interface. Users have the option to save files to disc for FTP download and display the data using a standard image tool. After six months the original data are sent from the ING to the RGO Astronomy Data Center in Cambridge. The ING science archive may be interrogated, and the data is available for general download. The ING fault database is also implemented as a Sybase database. In addition to standard features, links can be made to individual data files. These can be subsequently downloaded from the archive on request. This system greatly aids in ensuring the integrity of data obtained across the ING telescopes and helps engineers when analyzing many kinds of faults. Access to data on-line is being exploited in automating the dissemination of data obtained during service observing. Pipeline processed data will also be integrated into the system. In order to handle increased data flows with new larger CCD arrays, a system based on high capacity DVD disks is planned.
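The header-stripping step described above (FITS headers extracted and loaded into a database) can be sketched as follows. This is a toy parser of the 80-character FITS header cards, not a full FITS reader and not the ING's actual software; the resulting key/value pairs are what one would insert into, say, a Sybase table.

```python
# Toy parser: split a FITS header into 80-character cards and extract
# keyword = value pairs ready for database import.  Handles only the
# simplest card forms; COMMENT/HISTORY cards are skipped.

def parse_fits_header(header_bytes):
    cards = [header_bytes[i:i + 80].decode("ascii")
             for i in range(0, len(header_bytes), 80)]
    keywords = {}
    for card in cards:
        if card.startswith("END"):
            break
        if "=" not in card[:10]:
            continue                      # COMMENT/HISTORY etc.
        key = card[:8].strip()
        value = card[10:].split("/")[0].strip().strip("'").strip()
        keywords[key] = value
    return keywords

# A minimal in-memory header for illustration:
header = (
    "SIMPLE  =                    T".ljust(80) +
    "OBJECT  = 'M51     '           / target name".ljust(80) +
    "EXPTIME =                300.0".ljust(80) +
    "END".ljust(80)
).encode("ascii")

keywords = parse_fits_header(header)
```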
Large astronomical catalogues containing from a million up to hundreds of millions of records are currently available, and even larger catalogues will be released in the near future. They will have an important operational role, since they will be used throughout the observing cycle of next generation large telescopes: for proposal and observation preparation, telescope scheduling, selection of guide stars, etc. These large databases pose new problems for fast and general access. Solutions based on custom software or on customized versions of specific catalogues have been proposed, but the problem will benefit from a more general database approach. While traditional database technologies have proven to be inadequate for this task, new technologies are emerging, in particular Object Relational DBMSs, that seem suitable to solve the problem. In this paper we describe our experiences in experimenting with ORDBMSs for the management of large astronomical catalogues. We worked especially on the database query language and on access methods: in the first area, to extend the query language capabilities with astronomical functionality and to support typical astronomical queries; in the second, to speed up the execution of queries containing astronomical predicates.
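One of the "astronomical predicates" discussed above, a cone search, can be sketched outside the DBMS as plain code. This illustrates only the predicate's semantics (keep records within an angular radius of a sky position); the actual work concerns pushing such predicates into the ORDBMS query language and its access methods.

```python
# Cone-search predicate: great-circle separation via the spherical
# law of cosines, then a simple filter over catalogue records.
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (all inputs in degrees)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2) +
               math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_sep))))

def cone_search(catalogue, ra, dec, radius):
    return [row for row in catalogue
            if angular_separation(row["ra"], row["dec"], ra, dec) <= radius]

catalogue = [
    {"id": 1, "ra": 10.00, "dec": 20.00},
    {"id": 2, "ra": 10.05, "dec": 20.02},
    {"id": 3, "ra": 50.00, "dec": -5.00},
]
nearby = cone_search(catalogue, 10.0, 20.0, 0.1)   # 0.1 degree radius
```

Inside an ORDBMS the same predicate would be registered as a user-defined function, ideally backed by a spatial index so the engine need not scan every record.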
This paper presents the greedy search technique used by the Hubble Space Telescope (HST) Science Planning and Scheduling System to automatically schedule HST activities on weekly calendars. Given a set of possible observations to schedule in a week, this technique determines the best time ordering of observations which would maximize the scheduling efficiency or total science time in a calendar. The HST observation constraints that strongly influence the search heuristics and the process that produces HST flight calendars are also described in detail.
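The greedy idea described above can be sketched in its simplest form. This toy omits the real system's constraints and heuristics entirely: it just keeps taking the longest observation that still fits, accumulating science time in the weekly calendar.

```python
# Toy greedy scheduler: repeatedly pick the observation that adds the
# most science time and still fits in the remaining week.

def greedy_schedule(observations, week_length):
    """observations: list of (name, duration); returns (names, science time)."""
    remaining = week_length
    pool = sorted(observations, key=lambda o: o[1], reverse=True)
    calendar = []
    for name, duration in pool:
        if duration <= remaining:
            calendar.append(name)
            remaining -= duration
    return calendar, week_length - remaining

obs = [("A", 40), ("B", 30), ("C", 25), ("D", 10)]
calendar, science_time = greedy_schedule(obs, 70)
```

Greedy choices are locally optimal only; the real planner's value comes from the constraint-aware heuristics layered on top of this skeleton.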
This study has looked at the technical implications of switching the JCMT to a remote-operations mode, taking as a baseline the telescope being operated from Hilo with no staff normally present on the summit during the night. The study has not addressed observing modes, staffing or their implications in terms of costs. There is a potential show-stopper in that, unless a good fraction of the instruments can be made remotely operable, any remote operation would be very inflexible. Modifying the instruments in this way would require input from the same well-found labs that are currently engaged in the instrumentation work for the JCMT, and it is not clear that they could do both things at once. If this problem could be overcome, the bottom-line conclusions are that, at a level of work we considered reasonable, the upfront costs would be some $650k plus 17 direct staff-years spread over 1 to 3 years, some of which would have to be in expert instrumentation groups. There would be some extra lost telescope time due to the delay in getting someone from Hilo to fix a fault, which could roughly double telescope down-time; this could be reduced by extra initial investment. Long-term savings in accommodation, vehicle and staffing costs would be significant. There would also be efficiency gains from the removal of the '14000 ft effect', although these are hard to quantify.
The National Optical Astronomy Observatories (NOAO) has developed a new database system, ALPS, to track proposals for telescope time from original receipt through the review process, scheduling, observing, and final statistical reporting. The database is written in Microsoft Access and is integrated with observatory operations. Proposals arrive in LaTeX format and are parsed into files suitable for import into Access using a Perl script running under Unix. The database system provides tools to support all activities associated with handling proposals, including support for the Telescope Allocation Committee through reviewer assignments, grades imported via the Web, and comments for the principal investigator. The telescope schedules are prepared through a scheduling interface, and the final schedule is posted automatically to the Web. Statistics on telescope usage are collected via the Web and imported into the database as well. The new database has been in operation since March 1997 for proposals submitted for observing time at the Kitt Peak National Observatory, and has been installed at the Cerro Tololo Interamerican Observatory as well. The program is written to be easily adaptable for new facilities which will be available through NOAO, including public access to time at independent observatories and access to the Gemini telescopes.
The Far InfraRed and Submillimeter Telescope (FIRST) is the last of the four Cornerstone Missions in the 'Horizon 2000' long-term science plan of the European Space Agency (ESA), and as an observatory-type mission it will be open to the international astronomical community. Its launch is presently foreseen for the end of 2005. The nominal mission duration will be 4.5 years, and the active archive phase will last 3 years. Taking into account the experience from other ESA missions, and in order to minimize costs, the ground segment for FIRST scientific operations will be structured in a novel 'decentralized' way, creating centers of competence.
SABIO is currently at the beginning of its specification phase at the Instituto de Astrofisica de Canarias (IAC). The system is aimed at providing full control over the complete set of telescope operations, ranging from tools for proposal submission and data entry to the final scheduling during the observations. SABIO will also manage the link between the instrument and the telescope control system to perform the step-by-step observation commands, selected from a list of available templates depending upon the observing mode. It is intended that on-line information about sky quality will also be provided to SABIO in real time, which will then be used to adapt the observing queue to the sky conditions. The project is split into several parts, which will be developed either in parallel or in sequence, depending on the available resources. It is planned that SABIO will begin operation, in a preliminary beta version, by the end of 1999, starting at the 1.5m Telescopio Carlos Sanchez at the Spanish Observatorio del Teide on the Canary Island of Tenerife.
We report initial results from a project to design an interactive on-line data archive for short-period variable stars. Our goal is to provide an easily accessible set of web pages for use by a researcher at the telescope. The first step is to provide the researcher with convenient access to data archives for a variety of short-period variable stars. In addition to the basic data archive, there is a page for each star that contains positional information, the most recent epoch and period data, basic physical parameters, and a set of helpful journal references. We also include a page for each of the program variables with a finder chart and a selection of comparison stars for use in differential photometry. Additionally, one entry point in the system is a phase calculator that will sort through the data and return a list of stars that are observable from various user-input locations during a variety of time periods. The current system has a partial data set in place for over one hundred short-period variable stars. We intend to continue to expand this set to include a large number of complete data files. We are also considering a similar archive of galaxy images for comparison use in student-conducted supernova searches. We find this system improves the scientific return from our two small telescopes at the West Mountain Observatory. We believe this model can also be employed to optimize data management and scientific return for a wide variety of projects from the new generation of large ground-based telescopes.
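The phase calculation underlying such an observability tool is simple: the elapsed time since a reference epoch, divided by the period, modulo one. A minimal Python sketch follows; the ephemeris values are invented for illustration.

```python
def phase(jd, epoch, period):
    """Photometric phase in [0, 1) of a periodic variable at Julian date jd,
    given a reference epoch (JD of maximum light) and the period in days."""
    return ((jd - epoch) / period) % 1.0

# Hypothetical ephemeris for an RR Lyrae-type star (illustrative values only).
epoch, period = 2450000.0, 0.5668
print(round(phase(2451000.25, epoch, period), 3))
```

A full observability calculator would combine this with site coordinates and rise/set times to keep only stars above the horizon at the requested date.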
The Global Network of Astronomical Telescopes (GNAT) is a non-profit research corporation established for the purpose of creating a longitudinally distributed network of identical telescopes and imaging systems which can be dedicated to a variety of astronomical problems that require temporal observations, often at high time frequency. The first telescopes in the network will be imaging photometry systems; later telescopes will be equipped for spectrophotometry. The network will be centrally scheduled, and all of the telescopes in the network will be operated automatically under local host computer control. It is intended that GNAT will serve research, education and community outreach needs. In this paper we report on the current status of development of the GNAT network.
The support costs of a system during the operational phase of its life-cycle represent a substantial part of the total life-cycle cost. Moreover, the support costs depend strongly on the decisions taken during the initial stages of the system design. However, traditionally many major systems have been developed attending only to criteria deriving from the main functions of the system concerned. Questions related to logistic support were taken into account a posteriori, when the system was about to start the operational phase. Hence, the life-cycle costs were much greater than initially foreseen, and at the same time the maintenance and operation of the system suffered from inefficiency. This problem is further exacerbated as the complexity of the system increases. In the light of these considerations, it is clear that the development of the Gran Telescopio Canarias (GTC) has to be considered as an integral concept that takes into account not only the scientific but also the logistic-support requirements. Such an approach requires logistic support to be considered at all phases of the life-cycle of the system, especially in those design stages in which the main decisions affecting the configuration of the system are taken. This paper presents the integrated logistic support (ILS) approach that will be followed in designing the GTC in such a way as to make it effectively and economically supportable, as well as in developing the necessary elements for supporting it. A goal is to reach an optimum compromise between the cost of designing the GTC, including the support elements, and the cost of the support itself. The logistic-support analysis will also lead to the plan for GTC operation and maintenance.
WISP, the Wide-field Imaging Symbiotic Program, is one way of optimizing the scientific return from observations made using the CFHT 3.6-m telescope. Many of the wide-field images, presently acquired with the UH8k CCD mosaic camera, exhibit moving solar-system objects that are not studied by the observers, as they are not part of their scientific programs. The main goal of WISP is to extract from all these observations the position and an estimated magnitude of the moving objects in near real time, for each observing run where the PI does not intend to study solar-system objects, thereby gathering information which would have been lost without WISP. To achieve this goal, real-time data processing has been developed, making the accurate positions of the detected asteroids available, the morning following the observations, to those interested in following them. The data pipeline faces challenges, as the observations are not optimized for this kind of search. But ultimately, the data gathered from these observations will be useful for the study of the asteroid population down to magnitude 23 to 24, and fainter with the next generation of mosaics in development for CFHT, in addition to the potential rapid detection of interesting objects such as near-Earth asteroids or trans-Neptunian objects.
For successful, continued operation of space-based instruments over the lifetime of a satellite, it is common practice to put into place procedures to identify, investigate and monitor long-term trends in the characteristic parameters of an instrument, in order to be able to take action before a failure of the instrument or a subsystem occurs. With the advent of more sophisticated instrumentation and the need for efficient utilization of ground-based telescopes, there is an increasing need to carry out this function as part of the routine operations of ground-based observatories. This paper characterizes the types of trend data that may be obtained during the lifetime of an instrument and presents examples of such data taken from the Long Wavelength Spectrometer (LWS) instrument on board the Infrared Space Observatory (ISO). The resulting actions taken to minimize the possibility of failure of subsystems and to maximize the scientific output from the instrument are discussed.
The high level of automation in the operation of the ESA Infrared Space Observatory (ISO), together with its high observing efficiency, leads to a requirement for a commensurate level of automation in the subsequent processing of the astronomical data. This inevitably means that all data for a given instrument mode have the same calibration applied, regardless of the exact details of the object being observed. Questions then arise about these 'pipeline-processed' data in terms of the calibration accuracy achieved, how to control the quality of data received by the observer, and how much further processing is required, or desirable, by the observer. In this paper we outline the experience of two years of operation of the Long Wavelength Spectrometer on board ISO, detailing the improvements made in the pipeline processing during this time and the difficulties encountered in the automated processing of some instrument modes.
The ISOPHOT Serendipity Survey utilizes the slew time between ISO's pointed observations to make strip-scanning measurements of the sky in the far-IR at 170 µm. The slews contain information about two fundamentally different types of objects, namely unresolved galactic and extragalactic far-IR sources, as well as extended regions of galactic cirrus emission. Since the structure of the obtained data is almost unique, the development of dedicated software to extract astrophysically interesting parameters for the crossed sources is mandatory. Data analysis is currently in its early stages and concentrates on the detection of point sources. First results from an investigation of a high-galactic-latitude field near the North Galactic Pole indicate that the detection completeness with respect to previously known IRAS sources will be almost 100 percent for sources with f(100 µm) > 2 Jy, dropping below about 50 percent for f(100 µm) < 1.5 Jy. Nevertheless, even faint sources down to a level of f(170 µm) ≈ 1 Jy can be detected. Since the majority of the detected point sources are galaxies, the Serendipity Survey will result in a large database of approximately 2000 galaxies.
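Point-source detection on a one-dimensional slew scan can be illustrated with a simple threshold-and-local-maximum test. The Python sketch below is a toy stand-in for the dedicated Serendipity software, which must also cope with detector drifts and cirrus structure; the scan values are synthetic, and a real implementation would use a robust (e.g. median-based) background estimate rather than the plain mean and standard deviation used here.

```python
from statistics import mean, stdev

def detect_point_sources(scan, k):
    """Return indices of local maxima in a 1-D scan that exceed the
    background by k standard deviations (toy detection criterion)."""
    threshold = mean(scan) + k * stdev(scan)
    return [i for i in range(1, len(scan) - 1)
            if scan[i] > threshold
            and scan[i] >= scan[i - 1] and scan[i] >= scan[i + 1]]

# Synthetic scan: flat background around 10 with one strong source at sample 5.
scan = [10.0, 10.1, 9.9, 10.0, 10.2, 25.0, 10.1, 9.8, 10.0, 10.1]
print(detect_point_sources(scan, k=2.0))
```

Note that a bright source inflates the naive standard deviation, which is one reason production pipelines estimate the background robustly.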
The new generation of 21st-century 8m ground-based telescopes requires a new model of proposal submission. The proposal submission tool must be globally accessible and provide an efficient mechanism to create a proposal and submit it for review. Global accessibility depends on network availability, and connection time should be minimized to reduce this dependency. The efficiency of the tool is optimized by implementing checks which ensure that the proposal is complete before it reaches the reviewers. This saves the reviewers from having to contact the astronomer for additional information, and the astronomer is assured that her/his proposal will not be rejected for incompleteness. The Gemini Phase 1 Science Proposal Entry Tool is a platform-independent software program which is downloaded from the web to reside on the astronomer's local machine. During the creation of a science proposal, no network connection is required. Input is entered through a Graphical User Interface (GUI) which consists of a series of pages. The astronomer can, for the most part, page around the GUI, entering the information in any order. However, in some cases, data that determine what is displayed on other pages must be entered before advancing to the next page. Local saves and prints of the proposal can be made at any time. Also, the tool can reload an existing proposal so that the astronomer can work on it over several sittings. Completed pages are indicated on a floating screen separate from the main GUI. When the astronomer is ready to submit the proposal, the file is verified for completeness. If complete, it is submitted to the National Time Allocation Committee via ftp.
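The completeness check described above amounts to verifying that every required section of the proposal is non-empty before submission is allowed. A minimal Python sketch follows; the field names are hypothetical, not the actual Gemini Phase 1 schema.

```python
# Hypothetical required sections; the real Phase 1 schema differs.
REQUIRED_FIELDS = ("title", "abstract", "investigators",
                   "targets", "instrument_config", "time_requested")

def verify_proposal(proposal):
    """Return the list of required sections that are missing or empty,
    so the tool can block submission until the proposal is complete."""
    return [f for f in REQUIRED_FIELDS if not proposal.get(f)]

draft = {"title": "Kinematics of M31 globular clusters",
         "investigators": ["A. Observer"],
         "targets": ["M31"],
         "time_requested": 4.5}
missing = verify_proposal(draft)
print(missing)   # sections the tool would flag before allowing ftp submission
```

Running the same check locally, before any network connection, is what spares both the reviewers and the astronomer a rejection for incompleteness.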
Science workshops were held throughout the Gemini partnership during the second half of 1997 with the aims of identifying and quantifying the supporting capabilities required to enhance the utility and efficiency of the Gemini 8m telescopes. These workshops, held separately in the US, UK, Canada and South America, ensured that representative programs were considered in sufficient detail to understand the requirements for their execution on Gemini as well as for any preparatory observations. The desire for wide-field optical and near-IR imaging was frequently identified, with an average of one-half to one night of these survey observations per night of Gemini follow-up. Two other common themes were high angular resolution imaging and rapid response to target-of-opportunity events.
Proposals for telescope time at facilities available through the National Optical Astronomy Observatories can now be prepared and submitted via the WWW. Investigators submit proposal information through a series of HTML forms to the NOAO server, where the information is processed by Perl CGI scripts. PostScript figures and ASCII files may be attached by investigators for inclusion in their proposals using their browser's upload feature. Proposal information is saved on the server so that investigators can return in later sessions to continue work on a proposal and so that collaborators can participate in writing the proposal if they have access to the proposal account name and password. The system provides on-line verification of LaTeX syntax and a spellchecker, and confirms that all sections of the proposal are filled out. Users can request a LaTeX or PostScript copy of their proposal by e-mail, or view the proposal on-line. The advantages of the Web-based process for our users are convenience, access to on-line documentation, and the simple interface which avoids direct confrontation with LaTeX. From the NOAO point of view, the advantage is the use of standardized formats and syntax, particularly as we begin to receive proposals for the Gemini telescopes and some independent observatories.
Using the WWW, direct query access has been made available to the Hubble Space Telescope Calibration Database. Reference data for all eight science instruments, plus photometric and spectrophotometric standards, can be viewed. By using wildcards, a user may retrieve information on all reference datasets, or, by using various qualifiers, narrow the search down to very particular sets. The information retrieved lists not only the filenames that exist for particular modes of operation, but also critical information about when the reference data were archived and when they were installed for pipeline calibrations.
Hubble Space Telescope (HST) moving-target observations are planned using the 'Percy' interactive computer program. Percy provides ephemeris and geometrical event information about solar-system objects, including the Sun, major planets and their natural satellites, comets, and asteroids. While Percy contains some HST-specific features, it should be useful for almost any ground- or spacecraft-based observing system. Percy was originally developed by JPL, but the Space Telescope Science Institute took over all Percy development in 1992. Since then, extensive modifications and many new features have been added. This paper reflects the current state of Percy.
This paper describes how the OPUS pipeline, currently used for processing science data from the Hubble Space Telescope (HST), was used as the backbone for developing the science data pipeline for a much smaller mission. The Far Ultraviolet Spectroscopic Explorer (FUSE) project selected OPUS as its data processing pipeline platform and selected the OPUS team at STScI to write the FUSE pipeline applications. A total of 105 new modules were developed for the FUSE pipeline. The foundation of over 250 modules in the OPUS libraries allowed development to proceed quickly and with considerable confidence that the underlying functionality was reliable and robust. Each task represented roughly 90 percent reuse, and the project as a whole shows over 70 percent reuse of the existing OPUS system. Building on an existing system that is operational, and that will be maintained for many years to come, was a key decision for the FUSE mission. Adding the extensive experience of the OPUS team to the task resulted in the development of a complete telemetry pipeline system within a matter of months. Reusable software has been the siren song of software engineering and object-oriented design for a decade or more. The development of inexpensive software systems by adapting existing code to new applications has been as attractive as it has been elusive. The OPUS telemetry pipeline for the FUSE mission has proven to be a significant exception to that trend.
An observation data set (OD) plays an important role in the Subaru Observation Software System in connecting the observation control system with the data analysis system. An OD includes abstract commands for acquiring both science object data and the calibration data indispensable for its calibration. The acquisition conditions for each calibration data set are also defined in the OD. In scheduling mode, the observation schedule may be optimized and re-arranged during the observation using the OD. In manual operation mode, the indication of the next observation command may be given through the OD. The OD is used for automated data analysis, such as pipeline processing, in the data analysis system at the base facility in Hilo, Hawaii. Feedback of control parameters and real-time quality assessment of the acquired data to observation scheduling will be achieved using the supercomputer system at Hilo within a few years.
The Subaru telescope observation control system is composed of several subsystems: a telescope control system, an observation supervisor system, a data acquisition system, and a data archival system. Each subsystem consists of several processes that carry out observation operations in cooperation with other processes by passing control messages and by exchanging status data. All acquired data are registered in a database together with related data, such as status and log data of the telescope and instruments. Observers and their observation proposals are registered in the control system as NIS+ users and NIS+ groups. User access to the control system is managed according to the registered operation level. The user interface of the control system is described with some samples of screen displays.
The WIYN 3.5-meter telescope on Kitt Peak, Arizona, is operated by a consortium involving three universities and the National Optical Astronomy Observatories (NOAO), each with its own set of scientific requirements and research objectives. To meet this diversity, a variety of operational modes are being used. It is the purpose of this paper to describe the experience acquired so far with queue scheduling, remote observing, consortium-wide coordinated programs, and student involvement. Observing time is block-scheduled in such a way that each WIYN member receives its equitable distribution with respect to season and lunation. NOAO provides operations support and receives 40 percent of the observing time, which is made available to the general astronomical community through the same mechanism as for other NOAO facilities. The largest fraction of this time, however, is devoted to queue scheduling. The remaining 60 percent of the observing time is divided among the three universities in proportion to their contribution to the capital costs of the observatory. Each university has its own approach to assigning observing time and utilizing its blocks. Among the modes employed are traditional on-site, service, and remote observing. The WIYN telescope supports rapid changing of instrumentation, and it is common to do multiple-instrument observing during the course of a night. This also expedites the sharing of nights by more than one observer. The flexibility also provides the means to respond to targets of opportunity. In this paper we try to evaluate the ways in which this flexibility has enhanced scientific return.
Many scientific observational programs require the field of view (FOV) or aperture to have a specific orientation on the sky. Since orientation requirements have a very strong impact on other aspects of the execution of the observation, an observer must have the ability to visualize the orientation of the science aperture and determine the effect of the orientation on the possible scheduling of the observation. We are prototyping an interactive visual tool, the Visual Target Tuner (VTT), for fine-tuning the target location and orientation. To make efficient use of any instrument, the user needs to understand the various modes of the instrument and then calculate exposure times or signal-to-noise ratios for many different kinds of observations. Thus, the exposure time calculator (ETC) is an essential tool that is used by various users for many different purposes. We are prototyping a more dynamic, graphical ETC in which the user can simulate an observation to some extent and determine the effect of various input parameters. This interactive exposure time calculator will not only be intuitive but will provide different users the level of detailed information they desire. The VTT and ETC are Web-based tools that can be used by themselves or as part of the Scientist's Expert Assistant for the next-generation space telescope proposal management system. Currently, the tools are being developed with the requirements of HST in mind, but they will also be easily adaptable to other observatories. The underlying software for the tools is an object-oriented, Java-based applet. The object-oriented nature of the design is intended to allow the tools to easily expand their features or to be customized. By making the system Java-based, we gain the ability to easily distribute the applet across a wide set of operating systems and users. In addition to executing as a Java applet, the tools can be loaded onto a user's workstation and run as applications independent of a Web browser.
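At the core of any CCD exposure time calculator is the signal-to-noise equation, combining the source counts with sky, dark and read-noise contributions. The Python sketch below shows the standard point-source form; the tools described above are Java applets, and the numbers here are purely illustrative, not HST instrument parameters.

```python
from math import sqrt

def snr(source_rate, sky_rate, dark_rate, read_noise, npix, exptime):
    """CCD signal-to-noise for a point source: source counts over the
    quadrature sum of source, sky and dark shot noise plus read noise
    accumulated over npix pixels in the photometric aperture."""
    signal = source_rate * exptime
    noise = sqrt(signal
                 + (sky_rate + dark_rate) * npix * exptime
                 + npix * read_noise ** 2)
    return signal / noise

# Illustrative numbers only: e-/s rates, e- read noise, a 9-pixel aperture.
print(round(snr(source_rate=50.0, sky_rate=2.0, dark_rate=0.01,
                read_noise=5.0, npix=9, exptime=600.0), 1))
```

An interactive ETC essentially re-evaluates this expression as the user drags input parameters, which is why an immediate graphical display of the trade-offs is feasible.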
One of the manually intensive efforts of HST observing is the specification and validation of the detailed proposals for scientists observing with the telescope. In order to meet the operational cost objectives for the next-generation telescope, this process needs to be dramatically less time consuming and less costly. We are prototyping a new proposal development system, the Scientist's Expert Assistant (SEA), using a combination of artificial intelligence and user interface techniques to reduce the time and effort involved for both scientists and the telescope operations staff. The Advanced Architectures and Automation Branch of Goddard's Information Systems Center is working with the Space Telescope Science Institute to explore SEA alternatives, using an iterative prototype-review-revise cycle. We are testing the usefulness of rule-based expert systems to painlessly guide a scientist to his or her desired observation specification. We are also examining several potential user interface paradigms and exploring data visualization schemes to see which techniques are most intuitive. Our prototypes will be validated using HST's Advanced Camera for Surveys as a live test instrument. Having an operational test-bed will ensure the most realistic feedback possible for the prototyping cycle. In addition, when the instruments for NGST are better defined, the SEA will already be a proven platform that simply needs adapting to NGST-specific instruments.
The 5000m altitude of the potential site for the Millimeter Array (MMA) in northern Chile is so high that high-altitude problems for both staff and equipment must be considered and included in planning for the facility. The very good accessibility of the site, only one hour's drive from the nearest town at an altitude of 2440m, makes it possible for MMA workers to sleep and perform much of their work at low altitude. Workers on the site will have 11 percent less oxygen available than workers at Mauna Kea Observatory. It is expected that the mental abilities and the capacity for hard physical labor of workers on the high site will be reduced by 10 to 30 percent compared to sea level. Indoor working areas on the MMA site will have their atmospheres oxygen-enriched to provide an effective working altitude of 3500m, where the loss of mental ability should be small. Tests of oxygen enrichment at high-altitude Chilean mines and at the University of California White Mountain Research Station show that it is feasible and economical. Problems of equipment operation at 5000m altitude are expected to be manageable.
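The quoted oxygen deficit can be roughly reproduced from the standard-atmosphere barometric formula, since the oxygen fraction of air is constant (about 20.9 percent) and the available oxygen therefore scales with total pressure. The Python sketch below is illustrative only; the 4200m figure assumed for Mauna Kea and the standard-atmosphere constants are approximations, and it yields a deficit of roughly 10 percent, close to the 11 percent quoted above.

```python
def pressure_hpa(alt_m, p0=1013.25, lapse=0.0065, t0=288.15, g_exp=5.255):
    """Standard-atmosphere pressure (hPa) at altitude alt_m (metres),
    using the usual troposphere lapse-rate form."""
    return p0 * (1 - lapse * alt_m / t0) ** g_exp

p_mma = pressure_hpa(5000.0)    # candidate MMA site
p_mk = pressure_hpa(4200.0)     # approximate Mauna Kea summit altitude
print(round(100 * (1 - p_mma / p_mk), 1))  # percent less oxygen at 5000 m
```

The same relation underlies the oxygen-enrichment figure: raising the O2 fraction indoors compensates for the lower total pressure, giving an "effective altitude" where the inspired oxygen partial pressure matches.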
In recent years Kitt Peak National Observatory has undertaken a number of innovative projects to optimize science operations with the suite of telescopes we operate on Kitt Peak, Arizona. Changing scientific requirements and expectations of our users, evolving technology, and declining budgets have motivated the changes. The operations improvements have included telescope performance enhancements, with the focus on the Mayall 4-m; modes of observing and scheduling; telescope control and observing systems; planning and communication; and data archiving.
The "new-technology" 3.5-meter telescope at Apache Point Observatory has been in routine operation since 1994. The telescope was designed to enable nearly full remote operation via the Internet, and remote use comprises two-thirds of all observing. Rapid instrument-change capabilities and flexible scheduling allow for optimized science utilization. Several science programs can share the telescope on a given night, using more than one scientific instrument. Remote users can also collaboratively use the telescope simultaneously from different geographical locations. Synoptic observing programs and rapid-response observations are routinely accommodated. More than two hundred observers have used the telescope remotely, and by the end of 1997 more than 60 scientific publications based on telescope data had appeared in the journals. Several scenarios for operating the telescope have been explored. The current scheme is to schedule the telescope by quarters, based on prioritized proposals submitted by the consortium member institutions. Except for short synoptic observations and targets of opportunity, each night is divided into halves. These half-night blocks provide adequate time on target plus calibration time, and they simplify the scheduling process, which is done manually. Enhancements to telescope performance and efficiency are underway.