On 31 March 2000, the ESO Very Large Telescope (VLT) will complete the first year of science operations. During this first year, Antu (UT1) was operated with two instruments: Focal Reducer/Low Resolution Spectrograph (FORS-1) and Infrared Spectrograph and Array Camera (ISAAC). Both Visitor and Service Mode operations were successfully supported, with roughly equal time spent in each mode. On 1 April 2000, Kueyen (UT2) will begin science operations with two new instruments: UV-Visible Echelle Spectrograph (UVES) and FORS-2. The VLT science operations concept revolves around a distributed operations model. Front-end (proposal, observation, and scheduling preparation support and management) and back-end (quality control, Service Mode data distribution, and archive) operations are executed at ESO headquarters in Garching bei München, Germany. Observation execution and on-line quality control are managed on-site at the Paranal Observatory, Cerro Paranal, Chile. The VLT Data Flow System provides the backbone infrastructure for VLT operations. Here we present an overview of the VLT science operations concept, a summary of the results from Year 1, and a discussion of lessons learned and where the science operations concept had to be adapted to achieve the current level of operations.
I present the software that enables us to achieve observation and observatory management with maximum efficiency and minimum human resource requirements. The presentation focuses on the policies and specifications for the remote observation and proposal management systems that will eventually become part of an 'observatory management database.' Our unified telescope and instrument operation system implies that anyone who can control the instruments can also control the telescope. Since telescope operation is the sole responsibility of the telescope operators, our remote observation system cannot allow remote observers to control the instruments. Our remote observation system will therefore comprise a data browser, an observation procedure editor, and an observation monitor. A proposal management system is an essential tool for an observatory that accepts hundreds of proposals and handles hundreds of programs, for either classical or queue/service observation, in a single term. This system will include online proposal acceptance both for scientific merit and for technical feasibility, an online Observation Data Set builder, and an online tutorial or help desk for applicants.
For the last 3 years, most of NOAO's 40 percent observing share on the WIYN 3.5 m telescope has been used for queued observing, with the goal of facilitating highly ranked science proposals that require rare observing conditions and/or synoptic or 'target of opportunity' observations. The ease of switching between imaging on one Nasmyth focus and multi-object fiber-fed bench spectroscopy on the other Nasmyth port offers the choice of making the best use of the extant observing conditions. We assess the results of this experiment and highlight some of the forefront observing programs that have been executed. We discuss algorithms that facilitate making decisions on both long and short time scales so that we can provide the best match of program requirements and observing conditions. We suggest a way of quantifying the prioritization of programs, beyond simple ranking, that will greatly aid decision making, and of evolving the procedures so that queued observations better serve the emphases placed by the time allocation process without compromising the intent of the scientific investigators.
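The matching of program requirements to current conditions can be illustrated with a minimal sketch. This is a hypothetical selection rule, not the actual WIYN queue algorithm: field names (`rank`, `max_seeing`, `needs_photometric`) and the tie-breaking choice are illustrative assumptions.

```python
# Hypothetical sketch of a queue-selection rule of the kind described:
# pick the highest-ranked program whose required conditions are met by the
# current conditions, tie-breaking toward programs that *need* the rare
# conditions so good seeing is not wasted. Field names are illustrative.

def select_program(queue, seeing_now, clouds_now):
    """queue: list of dicts with 'rank' (1 is best), 'max_seeing' (arcsec),
    and 'needs_photometric' (bool). Returns the chosen program or None."""
    feasible = [p for p in queue
                if seeing_now <= p["max_seeing"]
                and (not p["needs_photometric"] or not clouds_now)]
    if not feasible:
        return None
    # Primary key: scientific rank; tie-break: tightest seeing requirement,
    # so rare good seeing goes to the programs that require it.
    return min(feasible, key=lambda p: (p["rank"], p["max_seeing"]))

queue = [
    {"name": "A", "rank": 1, "max_seeing": 0.8, "needs_photometric": True},
    {"name": "B", "rank": 1, "max_seeing": 2.0, "needs_photometric": False},
    {"name": "C", "rank": 2, "max_seeing": 1.2, "needs_photometric": False},
]
print(select_program(queue, seeing_now=0.7, clouds_now=False)["name"])  # A
```

Under good seeing in photometric conditions the rule picks the demanding top-ranked program A; if clouds roll in, it falls back to B.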
The Hobby-Eberly Telescope (HET) is an innovative, low-cost 9-meter-class telescope that specializes in visible and near-infrared, queue-observing-mode spectroscopy. The operations costs for this telescope follow the capital cost model, being approximately 15-20% those of other 9-meter telescopes. In this contribution we describe the HET operations model and our early operations and scientific experience with this telescope, emphasizing those aspects that most directly impact the scientific productivity of the HET and describing the actions we have taken to optimize the telescope's scientific return.
In an era of increasing pressure to do more with less and make the most of every budget dollar, HST science operations have steadily been able to give their customers more by increasing observatory efficiency. While original mission goals for observatory efficiency were targeted at less than 35%, HST now consistently achieves weekly schedules that are greater than 50% efficient. Furthermore, special concentration on continuous viewing opportunities and science campaigns (e.g., the Hubble Deep Field) has yielded efficiencies exceeding 60%. More than fourteen years of applied operational experience and system analysis by HST ground, flight, instrument, and user support systems personnel have resulted in this success. However, these efficiency levels could be even higher were it not for the variety of constraints and unplanned events which affect how and when the observatory can be used. Certain known spacecraft and instrument constraints impact efficiency with little effect on long-range plan stability, since they can be accounted for in advance. For this class of concerns, planning scenarios can be developed and analyzed to see what efficiencies might be achieved without these constraints. Unpredictable events such as spacecraft safings and anomalies, targets of opportunity, and quick-turnaround director's discretionary science reduce the overall stability of an observatory's planned use as well as its efficiency. In this paper we describe various constraints and unplanned events, show their effects on HST observatory efficiency and stability, and discuss specific efforts of the HST Long Range Planning group to minimize their impact.
In order to maximize the science return while minimizing running costs, the operational and maintenance requirements are being considered during the design of the GTC. Following this integrated logistic support approach, GTC has produced the baselines of the GTC Operation and Maintenance Plans. These plans describe the science operation strategies, in particular those regarding the queue scheduling mode, which will play the main role in the GTC operation scenario. The Operation and Maintenance Plans also describe the maintenance activities and the support elements (staff, handling equipment, facilities, etc.) that are currently being defined for the GTC's future operational phase. This paper briefly summarizes the main items included in the baselines of the GTC Operation and Maintenance Plans.
On 1 April 1999, the first unit telescope (ANTU) of the ESO VLT began science operations. Two new instruments (FORS-1 for optical imaging and spectroscopy and ISAAC for IR imaging and spectroscopy) were offered in a mix of 50% visitor mode and 50% service mode. A Phase-I and Phase-II proposal and observation preparation process was conducted from 1 October 1998 until the middle of March 1999 involving approximately 280 proposals. A total of 1768 Observation Blocks for 83 approved service mode programs were scheduled and executed between 1 April and 1 October 1999. The resultant raw science and calibration data were subjected to quality control in Garching and released to the ESO user community starting from 15 June 1999 along with pipeline-processed data products for a subset of instrument modes. The data flow loop for the first VLT telescope is closed. The current operational VLT data flow system and the developments for the remainder of the VLT will be presented in the light of the first year of operational experience.
The Hobby-Eberly Telescope (HET) is an innovative, low-cost 9-meter-class telescope that specializes in queue mode spectroscopy. To observe astronomical targets, the HET uses a unique focal tracker system that employs complex robotic mechanisms to accurately point and track. In this contribution, we describe the electro-mechanical subsystems that have been designed and installed to monitor and diagnose this unique telescope's operations modes. These subsystems are designed to maximize the fraction of night-time hours that are devoted to science operations, optimizing the telescope's scientific output by quickly detecting problems and minimizing engineering overhead.
Telescope performance can be characterized by a number of metrics, e.g. mirror reflectivity, seeing, readout noise, and observing overheads. In deciding where to invest limited operational resources to improve performance, one needs to predict the impact of a given enhancement on scientific productivity. For example, for the same cost, is it more important to reduce CCD readout noise by a factor of 2, or to improve instrument throughput by 30%? Knowing the mix of programs at a given telescope, and the dependence of signal-to-noise on the various parameters, the percentage gain in scientific productivity can be predicted for a given percentage improvement in any parameter, allowing optimal investment of the operational budget. We describe operational metrics used to monitor the performance of the 4.2-m William Herschel Telescope on La Palma, and give examples of current and planned enhancements which have been prioritized by comparing predicted gains and costs. These enhancements should deliver a total gain of approximately 30% in productivity, equivalent to approximately 100 extra observing nights per year.
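The kind of prediction described can be sketched from the standard CCD signal-to-noise equation. This is a minimal illustration, not the WHT's actual model: the function names, the program mix, and all numbers are invented for the example, and "productivity" is equated with observing speed (1/exposure time) at fixed S/N.

```python
import math

# Hypothetical sketch: exposure time needed to reach a target S/N, and the
# mean fractional gain in observing speed (1/t) over a program mix when an
# instrument parameter (e.g. read noise) is improved.

def exposure_time(snr, rate, sky, read_noise, npix=1):
    """Time to reach `snr` for source rate `rate` (e-/s), sky `sky`
    (e-/s/pix), CCD read noise (e- rms) over `npix` pixels, solving
    snr = rate*t / sqrt(rate*t + npix*(sky*t + read_noise**2))."""
    # Quadratic in t: rate^2 t^2 - snr^2 (rate + npix*sky) t - snr^2 npix RN^2 = 0
    a = rate ** 2
    b = -snr ** 2 * (rate + npix * sky)
    c = -snr ** 2 * npix * read_noise ** 2
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def speed_gain(programs, **improved):
    """Mean fractional gain in observing speed over a program mix.
    `programs` is a list of exposure_time() keyword-argument dicts;
    `improved` overrides parameters, e.g. read noise halved."""
    gains = []
    for p in programs:
        t0 = exposure_time(**p)
        t1 = exposure_time(**{**p, **improved})
        gains.append(t0 / t1 - 1.0)
    return sum(gains) / len(gains)

# Illustrative (made-up) mix: a read-noise-limited faint-object spectroscopy
# program and a sky-limited broad-band imaging program.
mix = [
    dict(snr=10, rate=0.5, sky=0.2, read_noise=5.0, npix=20),
    dict(snr=50, rate=50.0, sky=100.0, read_noise=5.0, npix=10),
]
print("halve read noise:", speed_gain(mix, read_noise=2.5))
```

Running the same function with a throughput improvement (scaling `rate`) instead of a read-noise change lets the two candidate investments in the example question be compared on equal footing.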
The Gemini Observatory HelpDesk was activated early in 2000 to aid in the rapid and accurate resolution of queries concerning the Gemini telescopes and their capabilities. This system coordinates user support amongst staff within the Observatory and at National Offices in each partner country. The HelpDesk is based on a commercial product from Remedy Corporation that logs, tracks, forwards and escalates queries and self-generates a knowledge base of previously asked questions. Timestamping of these events in the life cycle of a request and analysis of associated information provides valuable feedback on the static web content and performance of user support.
The Observation Handling Subsystem (OHS) of the ESO VLT Data-Flow System was designed to collect, verify, store and distribute observation preparation information. This rather generic definition includes high-level Observing Proposals submitted once per semester to apply for telescope time (typically referred to as 'Phase I' proposals) as well as detailed descriptions of the observations to be performed (often called 'Phase II' data); in the Data-Flow System, such descriptions are defined as Observation Blocks (OBs). Observation queues and long- and short-term schedules are also produced, ranging in scope from an observation semester to a few hours. The OHS is a distributed system composed of a collection of loosely coupled software tools. The tools communicate mostly through a set of relational databases, which are distributed between Garching and the Chilean observatories. A number of communication protocols are also used, from the e-mail-based Receiver process of the Proposal Handling and Reporting System to the proprietary protocol used to serve the telescope and instrument control systems. Data and commands flow through the OHS, supporting the operational procedures of ESO's Observing Programmes Committee and of the different operation teams in Garching and in Chile. This paper presents the overall architecture of the OHS, and each module's technical features and underlying operational concepts. It also discusses the current implementation choices and development plans.
In this paper we present a strategy for developing the next generation of proposal preparation tools so that we can continue to optimize scientific returns from the Hubble Space Telescope in an era of constrained budgets. The new proposal preparation tools must be built with two goals: (1) to facilitate scientific investigation for observers, and (2) to decrease the effort spent on routine matters by observatory staff. We have based our conclusions on lessons learned from the Next Generation Space Telescope's Scientist's Expert Assistant experiment. We conclude that: (1) Compared to the existing Hubble Space Telescope Phase II RPS2 software, a modern set of proposal tools and an environment that integrates them will be appreciated by the user community. From the user's perspective, the proposed software must be more intuitive, visual, and responsive. From the observatory's perspective, the tools must be interoperable and extensible to other observatories. (2) To ensure state-of-the-art tools for proposal preparation for the user community, there needs to be a management structure that supports innovation. Further, the development activities need to be divided into innovating and fielding efforts to prevent operational pressures from inhibiting innovation. This will allow use of up-to-date technology so that the system can remain fluid and responsive to changes.
Through Cycle 8, the process of selecting HST proposals was extremely successful, according to hundreds of scientists involved in the proposal review. Yet the system showed signs of strain, as the number of proposals doubled and the panels/TAC grew commensurately. This led to highly specialized panels, each with limited amounts of observing time to award, and an increasingly narrow scientific focus that exacerbated the natural tension between scientific expertise and conflict of interest. The TAC's role of establishing priorities among scientific disciplines became simultaneously more critical and more difficult to carry out. Furthermore, the scientific community advised us strongly, starting in Cycle 7, that more HST observing time should be devoted to larger programs (> 100 orbits). For Cycle 9 we instituted significant changes to address these issues: (1) Fewer, much broader panels, with redundancy to avoid conflicts of interest. (2) TAC role redefined, to focus on awarding a significant fraction (1/4 - 1/3) of the available time to > 100-orbit proposals. (3) Incentives for panels to award time to 'mid-sized' proposals. We outline these changes in greater detail, and describe to what extent their implementation in the Cycle 9 review achieved the goals of more large programs, fewer conflicts of interest for reviewers, and a stronger science program for HST.
During the past two years, the Scientist's Expert Assistant (SEA) team has been prototyping proposal development tools for the Hubble Space Telescope in an effort to demonstrate the role of software in reducing support costs for the Next Generation Space Telescope (NGST). This effort has been a success. The Hubble Space Telescope has adopted two SEA prototype tools, the Exposure Time Calculator and Visual Target Tuner, for operational use. The Space Telescope Science Institute is building a new set of observing tools based on SEA technology. These tools will, we hope, be a foundation that is easily adaptable to other observatories including NGST. The SEA project has aggressively pursued the latest software technologies including Java, distributed computing, XML, Web distribution, and expert systems. Some technology experiments proved to be dead ends, while other technologies were unexpectedly beneficial. We have also worked with other projects to foster collaboration between the various observing tool programs. In two years, we have learned a great deal that will be useful to future software tool efforts. In this presentation, we will discuss the lessons that we've learned during the development and evaluation of the SEA. We will also discuss future directions for the project.
With the successful launch of NASA's third 'Great Observatory,' the Chandra X-ray Observatory (formerly AXAF), we are embarking on a new era of multi-wavelength science campaigns from space. To meet this challenge, the Space Telescope Science Institute (STScI) and the Chandra X-ray Center (CXC) have initiated a test program whereby proposals of a multi-wavelength nature requiring both Hubble Space Telescope (HST) and Chandra data can be submitted to either the HST Review Panel or the Chandra Review Panel. This joint activity enables proposers to avoid the 'double jeopardy' of submitting to two separate reviews. By agreement with the CXC, STScI will award up to 400 kiloseconds of Chandra observing time, and similarly the CXC will award up to 100 orbits of HST time (about one week of observing time for each observatory). The only criterion above and beyond the usual review criteria is that both sets of data are required to meet the scientific goals. We discuss the multi-wavelength allocation concept, how the process worked for HST's Cycle 9 Review and modifications expected for Chandra's Cycle 2 Review. We will also address other missions, such as EUVE, FUSE, NOAO, SIRTF and NGST that might be included in coordinated observation time allocation in the future.
The Subaru Telescope requires a fault tracking system to record the problems and questions that staff experience during their work, and the solutions provided by technical experts to these problems and questions. The system records each fault and routes it to a pre-selected 'solution-provider' for each type of fault. The solution provider analyzes the fault and writes a solution that is routed back to the fault reporter and recorded in a 'knowledge-base' for future reference. The specifications of our fault tracking system were unique. (1) Dual language capacity -- Our staff speak both English and Japanese. Our contractors speak Japanese. (2) Heterogeneous computers -- Our computer workstations are a mixture of SPARCstations, Macintosh and Windows computers. (3) Integration with prime contractors -- Mitsubishi and Fujitsu are primary contractors in the construction of the telescope. In many cases, our 'experts' are our contractors. (4) Operator scheduling -- Our operators spend 50% of their work-month operating the telescope, the other 50% is spent working day shift at the base facility in Hilo, or day shift at the summit. We plan for 8 operators, with a frequent rotation. We need to keep all operators informed on the current status of all faults, no matter the operator's location.
We present a summary of a short experiment in flexible scheduling of the UKIRT telescope in Hawaii. This was accomplished by asking visiting observers to alternate between projects based on specified environmental conditions. This limited experiment delivered more time on target in the required conditions than would have been achieved by classical scheduling. We find there is a need to accurately estimate overheads during observation planning, to deliver unambiguous measurements of observing conditions on which decisions can be made, and to have software to reduce the administrative overhead of accounting for time used and distributing data taken during flexible observing.
We established a quality control sequence to realize efficient observation and the production of data of homogeneous quality. The flow is: observation preparation, execution of the observation procedure, data acquisition, data archiving, data analysis, and feedback to future observation sequences. These stages are closely connected with each other through the idea of the observation data set. A science object frame becomes valid only after various calibrations have been applied to the data. The observation data set rule describes the 'relations' between these data: science frame to science frame, science frame to calibration frame, and calibration frame to calibration frame. 'Relation' here refers mainly to acquisition order and timing. The observation data set is an assembly of the data related by the observation data set rule. In the data analysis stage, the observation data set rule is used to collect the data. Different sets of data can be collected by modifying the observation data set rule. After evaluating the analyzed data, we can find the proper observation data set rule. The new rule is then fed back to the observation preparation system as a template.
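The idea of a rule relating science and calibration frames by acquisition timing can be sketched as follows. This is a hypothetical illustration only: the `Frame` and `DataSetRule` fields and the nearest-in-time matching are invented for the example and are not the actual Subaru schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of an "observation data set rule": for each science
# frame, collect the calibration frames related to it by type and by
# acquisition-time proximity. All field names are illustrative.

@dataclass
class Frame:
    frame_id: str
    frame_type: str      # e.g. "OBJECT", "BIAS", "FLAT"
    obs_time: float      # acquisition time (hours, arbitrary zero point)

@dataclass
class DataSetRule:
    calib_types: list    # calibration types required for each science frame
    max_dt: float        # maximum allowed time separation (hours)

def build_dataset(science, frames, rule):
    """Assemble the observation data set for one science frame: the frame
    itself plus, for each required calibration type, the nearest-in-time
    matching calibration frame within rule.max_dt."""
    dataset = [science]
    for ctype in rule.calib_types:
        candidates = [f for f in frames
                      if f.frame_type == ctype
                      and abs(f.obs_time - science.obs_time) <= rule.max_dt]
        if candidates:
            dataset.append(min(candidates,
                               key=lambda f: abs(f.obs_time - science.obs_time)))
    return dataset

frames = [
    Frame("BIAS001", "BIAS", 0.0),
    Frame("FLAT001", "FLAT", 0.5),
    Frame("OBJ001", "OBJECT", 3.0),
    Frame("FLAT002", "FLAT", 9.0),
]
rule = DataSetRule(calib_types=["BIAS", "FLAT"], max_dt=4.0)
ds = build_dataset(frames[2], frames, rule)
print([f.frame_id for f in ds])  # nearest BIAS and FLAT within 4 h
```

Loosening or tightening `max_dt`, or changing `calib_types`, changes which frames are collected; this mirrors how modifying the rule yields different data sets, with the best-performing rule fed back as a template.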
Operations of the first ESO Very Large Telescope (VLT) unit, the 8-m Antu, started on 1 April 1999, and two instruments, FORS1 (FOcal Reducer/low dispersion Spectrograph) and ISAAC (Infrared Spectrometer And Array Camera), became available to the user community on the same day. ESO's Data Flow Concept embraces the definition of observations (Phase 2 Proposal Preparation, P2PP), their execution (Paranal Science Operations), and the different functions of Data Flow Operations (DFO). As part of DFO, the Quality Control Scientists are responsible for the following tasks: (1) distribution of data, (2) creation of master calibration data, (3) reduction of science data, (4) performance of quality control and trend analysis. In this paper we describe in more detail the work within the Quality Control Group, with particular emphasis on the different methodologies applied to the two instruments, and give an insider's point of view on operations with a large telescope after almost one year of experience.
After first light, the Subaru telescope produced about 86,000 frames, or 400 gigabytes of data, during its test observations by the end of February 2000. STARS (Subaru Telescope ARchive System) contains all of these data and serves them to the observers. STARS also provides several convenient tools and sources of information, such as QLI (Quick Look Image) files produced with the aid of QP (QLI Producer) and QLIS (QLI Server), HDI (HeaDer Information) files, and a machine-readable (on-line) memorandum for observed data, so that users can judge the rough quality of the data at a glance. A QLI file is a FITS file with a FITS BINTABLE extension. Through the combination of QP and QLIS (code-named 'GYOJI'), users obtain data in various sizes (20 to 200 times smaller than the original) according to their needs, along with much extracted information, such as mean, maximum, and minimum count values and profiles of extracted spectra in multi-slit or echelle spectroscopy data, in the original data browser (QLISFITS), written as a Java 2 applet. These functions will also be used for the public data archive system in the future. For the convenience of data analysis, STARS also handles and manages the 'dataset,' which is essential for preparing the necessary data, including object and calibration frames, used in data analysis by DASH (Distributed Analysis System Hierarchy: the platform for analysis of Subaru Telescope data). This 'dataset' is made at the summit system (SOSS: Subaru Observing Software System), which knows everything about the procedure of the observation performed, and is interpreted by the DASH system. In this paper, we describe the functions STARS provides and how STARS, DASH, and SOSS are linked to each other to yield effective scientific and engineering returns.
The Bisei Spaceguard Center is an observatory complex located in Bisei Town, Japan, that comprises a 0.5-meter and a 1.0-meter very wide field automated telescope system. It is a joint project of the Japan Space Forum and the Japan Spaceguard Association. The mission of the Bisei Spaceguard Center is to detect Near Earth Objects (NEOs) and to carry out follow-up observations to help determine the orbits of newly detected and currently known NEOs.
FIMS is a graphical user interface for preparing observations off-line for the two astronomical instruments FORS1 and FORS2 of the ESO VLT at Paranal. FIMS was originally designed to support only the main mode of FORS, multi-object spectroscopy, but now supports all observing modes. FIMS shows the focal field of FORS upon a background sky image. A typical FIMS session consists of a few cursor clicks on the stars or galaxies of the sky image to set a slit. The saved configuration is sent to the Paranal observatory to execute the observations with FORS. The slit positions as specified by FIMS and realized by the alignment methods of the FORS observation software are accurate to sigma = 0.075 CCD pixel (= 0.015 arcsec, 8 micrometers).
The ARC 3.5-m telescope began operations in 1994. Shortly thereafter, the observatory management and staff undertook a program of improvements to the durability, reliability, and maintainability of the telescope and facility in order to reach the planned scientific potential of the telescope. This program is built around the minimal staffing level of the observatory. A maintenance plan was developed with the objectives of reducing down time and providing data consistency. Preventive maintenance was addressed with respect to preventing system failures and performance degradation. An online reporting system was established for staff and observers to report telescope and instrument problems. Two types of improvement plans were devised. The first was for ongoing improvements that could be handled with existing observatory resources. These improvements consisted of new systems, redesigns of current systems, and the support of visiting instruments. Collaborative visiting instruments are brought in to enhance the observatory's instrument complement. These instruments come in for 2- to 3-night observing runs and greatly increase the science benefit of the telescope by using the latest advances in scientific instruments. Finally, an enhancement plan was established that provided additional funding and technical support to design and install new systems for telescope and instrument upgrades.
Over the past two years, the Scientist's Expert Assistant team from NASA's Goddard Space Flight Center and the Space Telescope Science Institute has been prototyping tools to support General Observer proposal development for the Hubble Space Telescope and the Next Generation Space Telescope. One aspect of this effort has been the exploration of the use of expert systems in guiding the user in preparing their observing program. The initial goal was to provide the user with a question-and-answer style of interaction where the software would 'interview' the user for their science needs and recommend instrument settings. This design ultimately failed. The reasons for this failure, and the resulting evolution of our approach, are an interesting case study in the use of expert system technology for observing tools. Although the interview approach failed we felt that expert systems can still be used in the tools environment. This paper describes our current approach to the use of expert systems and how it has evolved over the project's lifetime. We also present suggestions on why expert systems are useful and when they are appropriate.
Once primary science observations have been placed on the timeline for the Hubble Space Telescope (HST), we schedule additional (parallel) science observations that use the other science instruments. This is possible because the instruments use different places in the focal plane, because multiple instruments may be operated at once, and because of improvements in the software that processes the science proposals. The Parallel Observation Matching System (POMS) forms the heart of these improvements. It identifies suitable places on the timeline for the parallel science and crafts science visits to fit them. The new version of POMS, which has been in use for 2 years, was designed to significantly reduce the special processing for parallel proposals. Parallel proposals are now described in a similar fashion to primary science proposals, so the standard software processing can be applied. The result is template visits with some flexible parameters. POMS considers each proposal in a prioritized sequence and uses the current operational version of the proposal transformation program (TRANS) to produce the detailed observation description. Currently the entire process takes under 1.5 hours to craft and schedule seven days' worth of parallel science observations for the two active parallel instruments. The addition of new parallel proposals for future instruments is now a simple procedure.
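The core matching idea, scanning timeline opportunities and filling each with the highest-priority template visit that fits, can be sketched in a few lines. This is a deliberately simplified illustration; the interval and visit fields are invented, and the real POMS data model and constraints (pointings, instruments, flexible parameters) are far richer.

```python
# Hypothetical sketch of the parallel-matching idea: walk the timeline's
# primary-science opportunities in time order and, for each, assign the
# highest-priority parallel template visit whose duration fits.

def match_parallels(opportunities, parallels):
    """opportunities: list of (start, available_duration) on the timeline.
    parallels: list of (priority, duration, name); lower priority number
    is considered first. Returns a list of (start, name) assignments."""
    schedule = []
    pending = sorted(parallels)          # order by priority
    for start, avail in sorted(opportunities):
        for i, (prio, dur, name) in enumerate(pending):
            if dur <= avail:             # template visit fits this opportunity
                schedule.append((start, name))
                pending.pop(i)           # each visit is scheduled once
                break
    return schedule

opps = [(0.0, 30.0), (50.0, 10.0), (80.0, 45.0)]
pars = [(1, 40.0, "P1"), (2, 25.0, "P2"), (3, 8.0, "P3")]
print(match_parallels(opps, pars))
```

Note how the top-priority visit P1 is skipped at the first two opportunities because it does not fit, and lands in the long third gap; greedy priority-order matching trades global optimality for the speed that makes a weekly scheduling run cheap.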
This paper describes the approach and evaluation results of the Next Generation Space Telescope (NGST) Scientist's Expert Assistant (SEA) project. The evaluation plan describes the goals and methodology for the evaluation. The objective of this evaluation is to provide a means for the targeted user community to give feedback to the developers, and to determine whether the advanced technologies investigated as part of SEA have achieved the goals that were set as its success criteria. We can say with confidence that the visual, interactive tools in SEA were found to be highly useful by the users. On a scale of 1-5, where 1 was excellent and 5 was poor, the SEA as a whole was ranked 1.7, i.e., between excellent and above average.
The Hubble Space Telescope (HST), the first of NASA's Great Observatories, was launched on April 24, 1990. The HST was designed for a minimum fifteen-year mission with on-orbit servicing by the Space Shuttle System planned at approximately three-year intervals. Major changes to the HST ground system have been implemented for the third servicing mission in December 1999. The primary objectives of the ground system re-engineering effort, a project called 'Vision 2000 Control Center System (CCS),' are to reduce both development and operating costs significantly for the remaining years of HST's lifetime. Development costs are reduced by providing a more modern hardware and software architecture and utilizing commercial off-the-shelf (COTS) products wherever possible. Part of CCS is a Space Telescope Engineering Data Store, the design of which is based on current Data Warehouse technology. The Data Warehouse (Red Brick), as implemented in the CCS Ground System that operates and monitors the Hubble Space Telescope, represents the first use of a commercial Data Warehouse to manage engineering data. The purpose of this data store is to provide a common data source of telemetry data for all HST subsystems. This data store will become the engineering data archive and will provide a queryable database for the user to analyze HST telemetry. The access to the engineering data in the Data Warehouse is platform-independent from an office environment using commercial standards (Unix, Windows 98/NT). The latest Internet technology is used to reach the HST engineering community. A Web-based user interface allows easy access to the data archives. This paper will provide a CCS system overview and will illustrate some of the CCS telemetry capabilities, in particular the use of the new Telemetry Archiving System. Vision 2000 is an ambitious project, but one that is well under way.
It will allow the HST program to realize reduced operations costs for the Third Servicing Mission and beyond.
In this new era of modern astronomy, observations across multiple wavelengths are often required. This implies understanding many different costly and complex observatories. Yet, the process for translating ideas into proposals is very similar for all of these observatories. If we had a new generation of uniform, common tools, writing proposals for the various observatories would be simpler for the observer because the learning curve would not be as steep. As observatory staffs struggle to meet the demands for higher scientific productivity with fewer resources, it is important to remember that another benefit of having such universal tools is that they enable much greater flexibility within an organization. The shifting manpower needs of multiple-instrument support or multiple-mission operations may be more readily met since the expertise is built into the tools. The flexibility of an organization is critical to its ability to change, to plan ahead, to respond to various new opportunities and operating conditions on shorter time scales, and to achieve the goal of maximizing scientific returns. In this paper we will discuss the role of a new generation of tools with relation to multiple missions and observatories. We will also discuss some of the ways uniform, consistently familiar software tools can enhance the individual's expertise and the organization's flexibility. Finally, we will discuss the relevance of advanced tools to higher education.
The operational applications needed to quantitatively assess VLT calibration and science data are provided by the VLT Quality Control (QC) system. In the Data Flow observation life-cycle, QC links data pipeline processing with observation preparation. It allows the ESO Quality Control Scientists of the Data Flow Operations group to populate and maintain the pipeline calibration database, to measure and verify the quality of observations, and to follow instrument trends. The QC system also includes models that allow users to predict instrument performance; the Exposure Time Calculators are probably the QC applications most visible to the astronomical community. The Quality Control system is designed to cope with the large data volumes of the VLT, the geographical distribution of data handling, and the parallelism of observations executed on the different unit telescopes and instruments.
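To illustrate the kind of prediction an Exposure Time Calculator performs (a minimal sketch only, not the ESO ETC; the signal, sky, and noise figures below are invented), one can invert the standard CCD signal-to-noise relation to find the exposure time needed to reach a target S/N:

```python
import math

def snr(t, rate, sky_rate, npix, read_noise):
    """CCD S/N for exposure time t [s]: object counts over total noise
    (object shot noise + sky shot noise + read noise)."""
    signal = rate * t
    noise = math.sqrt(signal + sky_rate * npix * t + npix * read_noise**2)
    return signal / noise

def exposure_time(target_snr, rate, sky_rate, npix, read_noise):
    """Bisect for the exposure time reaching target_snr
    (S/N grows monotonically with t)."""
    lo, hi = 1e-6, 1e7
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if snr(mid, rate, sky_rate, npix, read_noise) < target_snr:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical example: 5 e-/s from the object, 2 e-/s/pixel sky,
# 20 pixels in the aperture, 4 e- read noise, target S/N = 10.
t = exposure_time(10.0, rate=5.0, sky_rate=2.0, npix=20, read_noise=4.0)
print(round(t, 1), round(snr(t, 5.0, 2.0, 20, 4.0), 2))
```

Real ETCs fold in instrument throughput models, atmospheric conditions, and detector characteristics, which is why the abstract groups them with the QC system's instrument-performance models.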
As the Hubble Space Telescope (HST) moves into its second decade of observations, we are bringing our Phase I submission system into the 21st century as well. Proposing for HST observing time and archival research proceeds in two phases. In Phase I, the scientific merits of the proposal are considered. Only accepted proposals enter Phase II, where the observations are specified in complete detail. The advent of state-of-the-art technology, together with the excellent prototyping work that has brought the Astronomer's Proposal Tool (APT), formerly the Scientist's Expert Assistant (SEA), from a concept three years ago to an approved project for implementation at the Space Telescope Science Institute (STScI), now makes a new approach possible. We plan to make HST's Phase I submission system an integral part of the APT. We have always tried to keep the Phase I interface simple, with a minimal learning curve, and this strategy will be maintained in the APT framework as well. In this paper we present our concept for the science definition and for the Phase I proposal development and submission tools. We also discuss how we are transforming our current Call for Proposals (CP) document into a smaller and more concise electronic document that addresses our policies and submission process. This document will be built and maintained using innovative tools and XML. We will provide links to existing documentation, and all of the relevant information will be available via the tool as on-line 'context-sensitive' help.
The Hobby-Eberly Telescope (HET) is an innovative, low-cost 9-meter telescope that specializes in queue mode spectroscopic observing. Because of the HET's unique design, careful daytime and nighttime thermal conditioning of the interior dome environment is essential to optimizing the telescope's performance on the sky during astronomical research operations. In this contribution, we describe the past and present thermal conditioning techniques that have been developed and employed at HET to optimize the telescope's scientific performance.
There are now a large number of space-based observatories as well as several queue-scheduled ground-based observatories. As each new telescope is brought on line, astronomers find more ways to increase their scientific return through multi-wavelength campaigns between the available telescopes. Observers can and should be involved in the coordination process from the beginning. They need to be informed about the issues, understand their true requirements, and stay in touch with the involved observatories, but this is not always sufficient. Starting in 1995, the schedulers for five telescopes began contacting each other directly to plan campaigns in a way that truly met the goals of the observers. This was very beneficial because observatories have different scheduling constraints, sometimes different names for the same constraints, and different proposal cycles. Because the number of tightly coupled observations is increasing, it would make sense to investigate automating the comparison of viewing opportunities. Innovations in observatory coordination include trading telescope time (as Chandra and HST have) so that one observatory can award coordinated time between two telescopes. The process of coordinating observations will be discussed, along with feedback from successful observers and advice to the potential observer.
In this paper we introduce the OCS (Observatory Control System) of LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope), which will survey more than ten million galaxies and stars to obtain their spectra after 2004. The OCS will operate the TCS (Telescope Control System) and the ICS (Instrument Control System) in real time to carry out spectroscopic observations. Each observation can obtain spectra of about 4000 objects simultaneously, and the amount of raw data per night is 2 - 3 gigabytes. The OCS will also handle the observational schedules and the data processing, the latter through the DHS (Data Handling System). The purpose of the OCS is to automate the whole observation activity (including object selection, observational scheduling, observation at the telescope, data processing, data archiving, and so on) and to gain scientific return more efficiently. The OCS is a software system connected to the TCS, ICS, and DHS by computer networks, and it involves many advanced information technologies, such as networking, communication, databases, the Web, and GPS.
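The coordination role described above can be sketched very schematically (this is our own illustration with invented interfaces, not LAMOST code): the OCS drives the TCS and ICS in sequence for each scheduled field, then hands the resulting multi-object frame to the DHS for archiving.

```python
class TCS:
    """Stand-in telescope control: points and tracks a field."""
    def point(self, ra, dec):
        return f"tracking {ra:.4f} {dec:.4f}"

class ICS:
    """Stand-in instrument control: one exposure yields ~4000 spectra."""
    def expose(self, seconds):
        return {"n_spectra": 4000, "exptime": seconds}

class DHS:
    """Stand-in data handling: archives raw frames."""
    def __init__(self):
        self.archive = []
    def ingest(self, frame):
        self.archive.append(frame)

class OCS:
    """Coordinates one observation: point, expose, archive."""
    def __init__(self):
        self.tcs, self.ics, self.dhs = TCS(), ICS(), DHS()
    def observe(self, ra, dec, exptime):
        status = self.tcs.point(ra, dec)
        frame = self.ics.expose(exptime)
        self.dhs.ingest(frame)
        return status, frame

ocs = OCS()
status, frame = ocs.observe(150.0, 20.0, 1800)
print(status, frame["n_spectra"])
```

In the real system the three subsystems run on separate machines and communicate over the network, so each of these direct method calls would be a network message, but the control flow is the same.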
The cost effectiveness of modern telescope operations depends upon appropriate telescope system design. We explore telescope operating models and define the efficiency of telescope usage, using recent operational data. We investigate the efficiency of telescope use with respect to several generic types of operational programme. We derive a model of telescope operating efficiency and explore the operational implications of several telescope design factors and configurations.
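One simple efficiency metric of the kind such a model might build on (our own illustration; the abstract's model is more general, and the numbers below are invented) is the open-shutter fraction: science integration time over available night time, after per-observation overheads such as slewing, acquisition, and readout.

```python
def open_shutter_efficiency(n_obs, exp_time_s, overhead_s, night_length_h):
    """Fraction of the night spent integrating, given n_obs observations
    of exp_time_s seconds each, with overhead_s seconds of slew,
    acquisition, and readout per observation."""
    night_s = night_length_h * 3600.0
    used = n_obs * (exp_time_s + overhead_s)
    if used > night_s:
        raise ValueError("schedule exceeds the available night")
    return n_obs * exp_time_s / night_s

# E.g. 20 observations of 1200 s with 300 s overhead in a 10-hour night:
print(open_shutter_efficiency(20, 1200, 300, 10))  # 0.666...
```

This already shows the leverage of the design factors the paper explores: halving the per-observation overhead frees time for more observations without any change to the telescope's optical performance.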
The Subaru Telescope of the National Astronomical Observatory of Japan is now finishing the commissioning of the telescope and instruments at the summit of Mauna Kea, Hawaii, and an announcement of open use will be made in the near future. The proposal management system of the Subaru Telescope (PMSS), which accepts and retrieves proposals for open use of the Subaru Telescope, has been constructed on the Subaru Telescope Network, the supercomputer system of the Subaru Telescope. The PMSS is developed on an object-oriented data model and a Use Case model, and a prototype has been completed.