Spitzer Space Telescope, the fourth and final of NASA's Great Observatories and the cornerstone of NASA's Origins Program, launched on 25 August 2003 into an Earth-trailing solar orbit to acquire infrared observations from space. Spitzer has an 85 cm diameter beryllium telescope, which operates near absolute zero, cooled by a liquid helium cryostat. Although the cryostat was designed for a 2.5-year lifetime, creative usage has extended its expected lifetime to 5.5 years. Spitzer has completed its in-orbit checkout/science verification phases and the first two years of nominal operations, becoming the first mission to execute astronomical observations from a solar orbit. Spitzer was designed to probe and explore the universe in the infrared utilizing three state-of-the-art detector arrays providing imaging, photometry, and spectroscopy over the 3-160 micron wavelength range. Spitzer is achieving major advances in the study of astrophysical phenomena across the expanses of our universe. Many technology areas critical to future infrared missions have been successfully demonstrated by Spitzer. These demonstrated technologies include lightweight cryogenic optics, sensitive detector arrays, and a high-performance thermal system combining passive radiative cooling with active cryogenic cooling of the telescope in space following its warm launch. This paper provides an overview of the Spitzer mission, telescope, cryostat, instruments, spacecraft, orbit, operations, project management approach, and related lessons learned.
The Spitzer Space Telescope was launched on August 25th, 2003, and has been operating virtually flawlessly for over two years. The projected cryogenic lifetime for Spitzer is currently 5.5 years, substantially exceeding the required lifetime of 2.5 years and the pre-launch prediction of 5 years. The Spitzer Project has made a singular effort to extend Spitzer's lifetime through operational changes to conserve helium. Additionally, many updates to calibration and scheduling activities have been made in order to maximize the scientific return from Spitzer. Spitzer has met its level-one science time requirement of 90%, and routinely exceeds it today. All this has been achieved with an operating budget that is substantially smaller than that of NASA's other Great Observatories.
This paper will describe the overall performance of the Spitzer Space Telescope Science Operations System and detail the modifications made to increase both the helium lifetime and the science data return. It will also discuss trades made between performance improvements and cost. Lessons learned which can be applied to future observatory operations will be included in the paper. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
Mission Assurance's independent assessments started during the Spitzer development cycle and continued through post-launch operations. During the operations phase, the health and safety of the observatory is of utmost importance. Therefore, Mission Assurance must ensure requirements compliance and focus on the process improvements required across the operational systems, including new/modified products, tools, and procedures. To avoid problem recurrences, an interactive model involving three areas was deployed: Team Member Interaction, Root Cause Analysis Practices, and Risk Assessment. In applying this model, a metric-based measurement process was found to have the most significant benefit. Combining root cause analysis and risk approaches gives project engineers the ability to prioritize and quantify their corrective actions based on a well-defined set of root cause definitions (i.e., closure criteria for problem reports), success criteria, and risk rating definitions.
The Galaxy Evolution Explorer, a NASA small explorer mission, is
performing the first all-sky, deep imaging and spectroscopic surveys
in the space ultraviolet. The prime goal of GALEX is to study star
formation in galaxies and its evolution with time. Now in its fourth year of operations, the emphasis of the mission is changing from completing the primary science goals set at launch to serving the astronomical community with a guest investigator program that uses 50% or more of the available observing time. We outline here mission operations, describe some of the challenges the GALEX team has surmounted, and summarize some of the changes needed to accomplish the goals of the extended mission.
The Chandra X-ray Observatory, which was launched in 1999, has to date completed almost seven years of successful
science and mission operations. The Observatory, which is the third of NASA's Great Observatories, is the most
sophisticated X-ray Observatory yet built. Chandra is designed to observe X-rays from high-energy regions of the
universe, such as the remnants of exploded stars, environs near black holes, and the hot tenuous gas filling the void
between the galaxies bound in clusters. The Chandra X-ray Center (CXC) is the focal point of scientific and mission
operations for the Observatory, and provides support to the scientific community in its use of Chandra. We describe the
CXC's organization, functions and principal processes, with emphasis on changes through different phases of the
mission from pre-launch to long-term operations, and we discuss lessons we have learned in developing and operating a
joint science and mission operations center.
Hubble Space Telescope has been operating from low Earth orbit for 15 years. In the case of HST, the orbit was determined by launch and servicing considerations, not observing efficiency. Higher orbits have been chosen for recent missions, for various reasons. This paper will discuss the evolution of HST operations in low Earth orbit, comparing the results with early expectations, identifying the changes and improvements made, and identifying the most serious constraints to observing efficiency found in practice. We will generalize this experience to possible future use of low Earth orbit as an observing site, identifying mission design parameters that could improve efficiency, or otherwise make low Earth orbit more attractive as an observing site.
The Magellan Observatory consists of two 6.5 m telescopes located at the Las Campanas Observatory in Chile. The Magellan partner institutions are the Carnegie Institution of Washington, Harvard University, the University of Arizona, Massachusetts Institute of Technology, and the University of Michigan. The telescopes are owned and operated by Carnegie for the benefit of the consortium members. This paper provides an overview of the scientific, technical, and administrative structure of the observatory operations. A technical staff of ~23 FTEs provides on-site support of the telescopes. This group is augmented by ~3 FTEs at the Carnegie Observatories headquarters in Pasadena who concentrate mostly on upgrades or modifications to the telescopes. The observatory is operated in the "classical" mode, wherein the visiting observer is a key member of the operations team. Instrumentation is supplied entirely by the consortium members, who continue to provide significant support after instrument commissioning. An analysis of the success of this model over the first five years of operation is presented.
The European Southern Observatory (ESO) operates its Very Large Telescope (VLT) on Cerro Paranal (Chile) with, to date, 11 scientific instruments, including two interferometric instruments and their numerous auxiliary systems, at 4 Unit Telescopes (UTs) and 3 Auxiliary Telescopes (ATs). The rigorous application of preventive and corrective maintenance procedures and close monitoring of the instruments' engineering data streams are the key ingredients for minimizing the technical downtime of the instruments. The extensive use of standardized hardware and software components and their strict configuration control is considered crucial to efficiently manage the large number of systems with the limited human and technical resources available. A close collaboration between the instrument engineers and the instrument scientists in instrument operation teams (IOTs) has proven vital to maintaining and improving the performance of the instrumentation suite. In this paper, the necessary tools, workflows, and organizational structures to achieve these objectives are presented.
Subaru Telescope has been in operation for open use for six years. As all of the first-generation instruments became operational and minimal engineering time was spent on commissioning the second-generation instrument, science time has accounted for over 80% of the total telescope time since 2002. Downtime has been kept close to a minimum thanks to the stability of the telescope and the instruments and to the dedication of the support staff. Owing to a severe deficit in the national budget of Japan, Subaru Telescope faces more serious budget cuts than expected. This paper presents how the observatory is dealing, and will deal, with the reduced budget with minimum impact on operations, so that observers face as few restrictions as possible in using the telescope.
The Large Binocular Telescope Observatory is expecting to support its first routine observing runs for partner astronomers by early 2007. When fully operational, the variety of observing modes will require a combination of skilled staff and prepared observers for full scientific productivity. The pair of 8.4-meter primaries can be operated as parallel channels to feed permanently mounted, paired wide-field direct imaging cameras, and optical and near-IR spectrographs. The two pairs of spectrographs support custom-drilled multi-object masks, with particular care required for the vacuum exchange in the near-IR system. Instruments with initially restricted user groups include a high-dispersion, stable fiber-fed echelle spectrograph and two beam-combining interferometers. The near-IR spectrograph and beam-combining instruments will depend on routine and reliable high performance from the adaptive optics system, based on the two 0.9-m adaptive secondary mirrors. We present preliminary plans for specialist staffing and scheduling modes for support of the science and deployment of instrumental modes by the partners.
Laser Guide Star Adaptive Optics (LGS AO) has been offered to Keck II visiting astronomers since November 2004. From the few nights of shared-risk science offered at that time, the LGS AO operation effort has grown to supporting over fifty nights of LGS AO per semester. In this paper we describe the new technology required to support LGS AO, give an overview of the operational model, report observing efficiency and discuss the support load required to operate LGS AO. We conclude the paper by sharing lessons learned and the challenges yet to be faced.
The W. M. Keck Observatory is in the process of formalizing its engineering process across the observatory. Over the years we have developed separate systems in separate departments; we are now creating a unified workflow and documentation system. We'll discuss the context, and propose a process for implementation. We'll describe resources, functions, and tools required for effective development and maintenance of any engineering process. The astronomy community is different from a typical manufacturing environment, and implications for off-the-shelf solutions as well as custom developments will be presented in the context of being an appropriate solution to our type of organization.
Specific focus will be placed on the role of documentation as part of this process. We'll discuss different types of documentation and implications for long-term maintenance. We'll describe collaboration tools for both internal and external development and maintenance. We'll discuss paper documentation and current parametric CAD models, and how to maintain those for the life of the observatory. We'll present tools for fast search and retrieval of this information. Finally we'll present lessons learned that may be applied to any such process.
Hundreds of mirror segments, thousands of high-precision actuators, highly complex mechanical, hydraulic, electrical and other technology subsystems, and highly sophisticated control systems: an ELT consists of millions of individual parts and components, any of which may fail and lead to a partial or complete system breakdown. Traditional maintenance concepts, characterized by predefined preventive maintenance activities and rigid schedules, are not suitable for handling this large number of potential failures and malfunctions and the extreme maintenance workload. New maintenance strategies must be found that increase reliability while reducing the cost of needless maintenance services. The Reliability Centred Maintenance (RCM) methodology is already used extensively by airlines, in industrial and marine facilities, and even by scientific institutions such as NASA. Its application increases operational reliability while reducing the cost of unnecessary maintenance activities, and it is certainly also a solution for current and future ELT facilities. RCM develops a maintenance scheme based on the reliability of the various components of a system by using "feedback loops between instrument / system performance monitoring and preventive/corrective maintenance cycles." Ideally, RCM is designed into a system from the start, during the requirement definition and the preliminary and final design phases of new equipment and complex systems. However, under certain conditions, RCM can also be implemented in the maintenance management strategy of existing astronomical infrastructure facilities. This presentation outlines the principles of the RCM methodology, explains its advantages, and highlights necessary changes in observatory development, operation and maintenance philosophies. Now is the right time to implement RCM in current and future ELT projects and to save up to 50% of maintenance and operations costs.
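As a rough, hypothetical illustration of the feedback loop quoted above, the Python sketch below (not taken from the presentation) estimates per-component failure rates from monitored operating data and orders preventive maintenance by an RCM-style risk score; all component names, counts, and thresholds are invented.

    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str
        operating_hours: float  # hours since last overhaul
        failures: int           # failures observed over that period
        consequence: int        # 1 (cosmetic) .. 5 (observatory down)

    def failure_rate(c: Component) -> float:
        """Crude point estimate of failures per 1000 operating hours."""
        return 1000.0 * c.failures / max(c.operating_hours, 1.0)

    def risk_score(c: Component) -> float:
        """RCM-style priority: likelihood of failure times its consequence."""
        return failure_rate(c) * c.consequence

    inventory = [
        Component("segment-actuator-0417", 8000.0, 3, 2),
        Component("hydraulic-pump-A", 8000.0, 1, 5),
        Component("dome-shutter-drive", 8000.0, 0, 4),
    ]

    # Service the riskiest items first instead of following a rigid calendar.
    for c in sorted(inventory, key=risk_score, reverse=True):
        print(f"{c.name:24s} rate={failure_rate(c):.2f}/kh risk={risk_score(c):.1f}")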
In the last few years the ubiquitous availability of high-bandwidth networks has changed the way both robotic and non-robotic telescopes operate, with single isolated telescopes being integrated into expanding "smart" telescope networks that can span continents and respond to transient events in seconds. The Heterogeneous Telescope Networks (HTN) Consortium represents a number of major research groups in the field of robotic telescopes, and together we are proposing a standards-based approach to providing interoperability between the existing proprietary telescope networks. We further propose standards for interoperability with, and integration into, the emerging Virtual Observatory.
We present the results of the first interoperability meeting held last year and discuss the protocol and transport standards agreed at the meeting, which deal with the complex issue of how to optimally schedule observations on geographically distributed resources. We discuss a free-market approach to this scheduling problem, which must initially be based on ad-hoc agreements between the participants in the network, but which may eventually expand into an electronic market for the exchange of telescope time.
The European Southern Observatory (ESO) is in the process of creating a central access point for all services offered to its user community via the Web. That gateway, called the User Portal, will provide registered users with a personalized set of service access points, the actual set depending on each user's privileges.
Correspondence between users and ESO will take place by way of "profiles", that is, contact information. Each user may have several active profiles, so that an investigator may choose, for instance, whether their data should be delivered to their own address or to a collaborator.
To application developers, the portal will offer authentication and authorization services, either via database queries or an LDAP server.
The User Portal is being developed as a Web application using Java-based technology, including servlets and JSPs.
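The authentication path described above can be pictured with a brief sketch. The portal itself is Java-based; purely for illustration, the following Python fragment shows the LDAP-bind pattern the abstract names, using the third-party ldap3 package, with an invented host and directory layout.

    from ldap3 import Server, Connection, ALL

    def authenticate(user_id: str, password: str) -> bool:
        """Bind as the user; a successful bind means the credentials are valid."""
        server = Server("ldap.example.org", get_info=ALL)       # invented host
        user_dn = f"uid={user_id},ou=people,dc=example,dc=org"  # invented layout
        conn = Connection(server, user=user_dn, password=password)
        return conn.bind()

    if authenticate("jdoe", "s3cret"):
        print("user authenticated; load profiles and privileges next")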
VOEvent is the emerging virtual observatory standard for representing and reporting celestial transient events. Detailed semantics and short latency are required to support immediate, often robotic, follow-up observations. A flexible schema supports a wide selection of instrumentation and observing scenarios. We discuss the use of VOEvent to motivate time domain astronomy in the NOAO Science Archive.
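To make the packet structure concrete, here is a minimal, hypothetical sketch of composing a VOEvent-style alert with Python's standard library; the identifier and parameter values are placeholders, and a real packet must validate against the published VOEvent schema.

    import xml.etree.ElementTree as ET

    event = ET.Element("VOEvent", {
        "version": "2.0",
        "role": "observation",
        "ivorn": "ivo://example.org/transients#0001",  # placeholder identifier
    })
    who = ET.SubElement(event, "Who")
    ET.SubElement(who, "AuthorIVORN").text = "ivo://example.org"
    what = ET.SubElement(event, "What")
    ET.SubElement(what, "Param", name="mag", value="17.3", ucd="phot.mag")
    ET.SubElement(event, "WhereWhen")  # position and time of the event go here

    print(ET.tostring(event, encoding="unicode"))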
The Virtual Observatory (VO) movement is driven by the desire to deliver more powerful tools into the hands of
research astronomers. The expectations of the scientists who use observing facilities and online services evolve year-to-year.
A decade ago the observatory's data management responsibilities ended when the data were read out of the
instrument and written to a disk at the telescope. Observers were satisfied to bring their own tape to the telescope, write the night's data onto the tape, and take it home. Only a few major telescopes now follow that procedure. It used to be standard practice for scientists to write major pieces of software to reduce and analyze data. That situation was revolutionized by the IRAF data reduction software (Tody 1986), which was produced at an observatory and was tightly integrated with the instruments and observing procedures used at that observatory at that time. Most
observatories today have at least a primitive means of saving and protecting their data in an archive. In summary,
science data management has evolved a great deal in the past decade. The VO movement is a reflection of this ongoing
evolution.
Remote Telescope Markup Language (RTML) is an XML-based protocol for the transport of the high-level description of a set of observations to be carried out on a remote, robotic or service telescope. We describe how RTML is being used in a wide variety of contexts: the transport of service and robotic observing requests in the Hands-On Universe™, ACP, eSTAR, and MONET networks; how RTML is easily combined with other XML protocols for more localized control of telescopes; RTML as a secondary observation report format for the IVOA's VOEvent protocol; the input format for a general-purpose observation simulator; and the observatory-independent means for carrying out request transactions for the international Heterogeneous Telescope Network (HTN).
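On the receiving side, an RTML document is ordinary XML. The following hypothetical Python sketch extracts a target from an RTML-like request; the element layout shown is illustrative only and is not guaranteed to match the RTML schema.

    import xml.etree.ElementTree as ET

    request = """
    <RTML version="3.1" mode="request">
      <Target name="SN 2006X">
        <Coordinates>
          <RightAscension>12:22:53.9</RightAscension>
          <Declination>+15:48:33</Declination>
        </Coordinates>
      </Target>
    </RTML>
    """

    root = ET.fromstring(request)
    for target in root.iter("Target"):
        ra = target.findtext("Coordinates/RightAscension")
        dec = target.findtext("Coordinates/Declination")
        print(target.get("name"), ra, dec)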
Distributed, heterogeneous networks of telescopes will require a very different approach to scheduling than classically operated single-site instruments. We have previously discussed the advantages of an economic (free-market) approach to this problem. In this paper we describe a test implementation of the technologies using a generic toolkit designed to make negotiable and chargeable web services.
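The free-market idea can be summarized in a few lines: competing requests bid for observing slots, and each slot goes to the highest bidder. The sketch below is conceptual, with invented nodes, slots, and prices; it is not the toolkit described in the paper.

    from collections import defaultdict

    bids = [
        # (requesting node, slot identifier, offered price in credits)
        ("node-A", "2006-07-01T03:00", 40),
        ("node-B", "2006-07-01T03:00", 55),
        ("agent-C", "2006-07-01T04:00", 30),
    ]

    by_slot = defaultdict(list)
    for node, slot, price in bids:
        by_slot[slot].append((price, node))

    # Award each slot to the highest bidder; unsold slots are re-advertised.
    for slot, offers in sorted(by_slot.items()):
        price, winner = max(offers)
        print(f"{slot}: awarded to {winner} at {price} credits")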
Janet D. Evans, Mark Cresitello-Dittmar, Stephen Doe, Ian Evans, Giuseppina Fabbiano, Gregg Germain, Kenny Glotfelty, Diane Hall, David Plummer, et al.
The Chandra X-ray Center Data System provides end-to-end scientific software support for Chandra X-ray Observatory mission operations. The data system includes the following components: (1) observers' science proposal planning tools; (2) science mission planning tools; (3) science data processing, monitoring, and trending pipelines and tools; and (4) data archive and database management. A subset of the science data processing component is ported to multiple platforms and distributed to end-users as a portable data analysis package. Web-based user tools are also available for data archive search and retrieval. We describe the overall architecture of the data system and its component pieces, and consider the design choices and their impacts on maintainability.
We discuss the many challenges involved in maintaining a large, mission-critical software system with limited resources. These challenges include managing continually changing software requirements and ensuring the integrity of the data system and resulting data products while being highly responsive to the needs of the project. We describe our use of COTS and OTS software at the subsystem and component levels, our methods for managing multiple release builds, and our adaptation of a large code base to new hardware and software platforms. We review our experiences during the life of the mission so far, and our approaches for keeping a small, but highly talented, development team engaged during the maintenance phase of a mission.
We present a discussion of the lessons learned from establishing and operating the Chandra Data Archive (CDA). We offer an overview of the archive, what preparations were done before launch, the transition to operations, actual operations, and some of the unexpected developments that had to be addressed in running the archive.
From this experience we highlight some of the important issues that need to be addressed in the creation and running of an archive for a major project. Among these are the importance of data format standards; the integration of the archive with the rest of the mission; requirements throughout all phases of the mission; operational requirements; what to expect at launch; the user interfaces; how to anticipate new tasks; and the overall importance of team management and organization.
Six years into the mission, Chandra data processing operations has reached a stage of maturity that allows nearly complete automation as well as dynamic flexibility to accommodate future changes in mission and instrument status and constraints. We present a summary of the procedural and technical solutions that have been developed since the launch of Chandra to meet unanticipated challenges in the area of data processing. Lessons learned concerning data processing are discussed, including an explanation of the source of each problem and the Chandra team's response to the problem. Potential pitfalls that might affect future projects are also included. The user interface, data quality screening, and quicklook software developed specifically to address issues identified after launch have proved valuable in meeting the goals of low-cost, efficient, and flexible mission operations for the Chandra mission and can provide insight for future mission designs.
Gaia is an approved ESA cornerstone project, currently scheduled for launch in late 2011. Gaia will provide photometric, positional, spectroscopic and radial velocity measurements with the accuracies needed to produce a stereoscopic and kinematic census of about one billion stars in our Galaxy and throughout the Local Group, addressing its core science goals to quantify the formation and assembly history of a large spiral galaxy, the Milky Way. Gaia will achieve this by obtaining a six-dimensional (spatial & kinematic) phase-space map of the Galaxy, complemented by an optimised high-spatial-resolution multi-colour photometric survey, and the largest stellar spectroscopic and radial velocity surveys ever made. The Gaia data set will be constructed from 2 × 10¹² observations (image CCD transits), whose analysis is a very complex task, involving both real-time (this proposal) and end-of-mission data products. This paper describes the UK Gaia Data Flow System activities as part of the emerging European-wide Gaia Data Processing system. We describe the data processing challenges that need to be overcome to meet the heavy demands placed by Gaia. We note the construction processes required to handle the photometric reduction of the data from Gaia's 100+ focal plane CCDs, the pipeline needed to support the 'science alerts' and epoch photometry handling, and the spectroscopic processing system. We note the system software and hardware architecture, and how the data products will be generated to ensure compliance with emerging VO standards.
The 3.2 giga-pixel LSST camera will produce approximately half a petabyte of archive images every month. These data need to be reduced in under a minute to produce real-time transient alerts, and then added to the cumulative catalog for further analysis. The catalog is expected to grow by about three hundred terabytes per year. The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, results to date from evaluating available database technologies against LSST requirements, and the proposed database architecture to meet the data challenges.
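A back-of-envelope check of the quoted volumes, with an assumed (not LSST-official) pixel depth and cadence:

    PIXELS = 3.2e9            # 3.2 gigapixel focal plane (from the abstract)
    BYTES_PER_PIXEL = 2       # assumed 16-bit raw pixels
    MONTHLY_ARCHIVE = 0.5e15  # ~half a petabyte per month (from the abstract)

    image_bytes = PIXELS * BYTES_PER_PIXEL            # ~6.4 GB per raw image
    images_per_month = MONTHLY_ARCHIVE / image_bytes  # ~78,000 images
    print(f"per image: {image_bytes / 1e9:.1f} GB")
    print(f"implied exposures per night: {images_per_month / 30:.0f}")
    # ~2,600 exposures per night, consistent with a rapid-cadence survey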
Data from two IR survey cameras, UKIRT's WFCAM and ESO's VISTA, can arrive at rates approaching 1.4 TB/night for of order 10 years. Handling that rate, and the volume of survey data accumulated over time, are both challenges. The UK's VISTA Data Flow System (for WFCAM & VISTA near-IR survey data) removes instrumental artefacts, astrometrically and photometrically calibrates, extracts catalogues, puts the products in a curated archive, facilitates production of user-specified data products, and is designed in the context of the Virtual Observatory. The VDFS design concept is outlined, and experience in handling the first year of WFCAM data is described. This work will minimize risk in meeting the more taxing requirements of VISTA, which will be commissioned in 2007. Tools for preparing survey observations with VISTA are outlined.
With the completion of the first-generation instrumentation set on the Very Large Telescope, a total of eleven instruments is now provided at the VLT/VLTI for science operations. For each of them, ESO provides automatic data reduction facilities in the form of instrument pipelines developed in collaboration with the instrument consortia. The pipelines are deployed in different environments, at the observatory and at the ESO headquarters, for on-line assessment of observations, instrument and detector monitoring, and data quality control and product generation. A number of VLT pipelines are also distributed to the user community together with front-end applications for batch and interactive usage. The main application of the pipelines is to support the Quality Control process. However, ESO also aims to deliver pipelines that can generate science-ready products for a major fraction of the scientific needs of the users. This paper provides an overview of the current developments for the next generation of VLT/VLTI instruments and of the prototyping studies of new tools for science users.
The emergence of the Virtual Observatory as a new model for doing science means that the value of a facility instrument is no longer limited to its own lifetime. Instead, value becomes the net effect of optimizing observational throughput, the quality and quantity of data in the archive, and the applicability of the data that can survive and be used for future research even after a telescope ceases operations. Valuation aims to answer two questions which are especially important to funding agencies: what am I getting for my investment, and why should I care? Policy establishes guidelines for achieving the goals that lead to increased value. The relative roles of valuation and policy in inventory control and archiving strategies, adoption of standards, and developing maintainable software systems to meet these future goals are examined.
We review the first five years of science operations at Gemini Observatory with particular emphasis on the evolution of the queue observing mode through its challenges, false assumptions and successes. The telescopes now acquire high-quality data very efficiently, completing the most meritorious programs and delivering data to the community in a timely manner. The telescopes routinely operate in a multi-instrument queue mode where several facility instruments are used each night.
Gamma-ray bursts and other targets of opportunity require a quick response by observers to maximize the significance of observations. Because of this need for speed, such observations are often carried out at smaller facilities where observers and institutions have more freedom to respond to serendipitous events. The two Gemini 8-m telescopes have a well-developed workflow for queue observing that allows investigators to be involved in their science program throughout its lifecycle. To coincide with the startup of the Swift Gamma Ray Burst Explorer orbiting observatory in late 2004, the Gemini observing policies, workflow, and observing tools were enhanced to allow investigators to participate in target-of-opportunity programs. This paper describes how target-of-opportunity observing has been integrated into Gemini operations to allow investigators to trigger observations at the Gemini telescopes within minutes of an event.
The ESO Very Large Telescope (VLT) started operations on Cerro Paranal (Chile) in April 1999 with one Unit Telescope and two science instruments. Seven years later it is still a growing facility, consisting of four 8.2-m telescopes, three auxiliary telescopes for interferometry, and 11 science instruments. In addition, two dedicated survey telescopes with wide-field cameras, VST and VISTA, a fourth auxiliary telescope, and several new instruments will become available in the coming months. Since the very beginning, VLT operations were planned to contain a substantial component of Service Mode observing, amounting to approximately 50% of the available time. The success of the full-scale implementation of Service Mode operations is reflected nowadays by the steady increase in its demand by the community, both in absolute terms and relative to the demand in Visitor Mode, by the highly positive feedback received from the users, and by the increasing flow of scientific results produced by programs that have exploited the unique advantages of flexible short-term scheduling. It is also fulfilling the requirement of creating a science archive and populating it with a data stream that has passed through a quality control process. Here we review the current status of Service Mode observing at the VLT and the VLT Interferometer (VLTI), the challenges posed by its implementation on a wide variety of instrument modes, and its strong requirement of an integrated, end-to-end approach to operations planning with adequate tools and carefully defined policies and procedures. The experience of these seven years of VLT operations has led to a thorough exploration of operations paradigms that will be essential to the scientific success of ALMA and the extremely large optical telescopes in the coming decades.
In order to maximize the useful life of the remaining operational gyroscopes and extend the life of the mission, the Hubble Space Telescope began 2-Gyro observations in the summer of 2005. The operational switch had implications for the planning and scheduling of the telescope; those issues are discussed here. In addition, we present an analysis of the resulting scheduling rates and telescope efficiency.
Launched as the Space Infrared Telescope Facility (SIRTF) in August 2003 and renamed in early 2004,
the Spitzer Space Telescope is performing an extended series of science observations at wavelengths ranging from 3
to 180 microns. The California Institute of Technology is the home of the Spitzer Science Center (SSC) and
operates the Science Operations System (SOS), which supports science operations of the Observatory. A key
function supported by the SOS is the long-range planning and short-term scheduling of the Observatory.
This paper describes the role and function of the SSC Observatory Planning and Scheduling Team (OPST), its
operational interfaces, processes, and tools.
LINC-NIRVANA is a Fizeau interferometer for the LBT. The instrument combines the beams of the two 8.4 m telescopes in a single image plane. The fixed geometry of the telescope and the adaptive optics of the instrument place constraints on the observation schedule, and environmental changes influence the execution of observations.
We present a robust and reactive scheduling strategy to achieve high observation efficiency and scientific results with our instrument.
LINC-NIRVANA is a Fizeau interferometer using the two 8.4 m mirrors of the LBT in the combined focus. Images can be obtained in the J, H, and K bands over a 10" × 10" field of view by means of Multi-Conjugate Adaptive Optics (MCAO) and a Fringe and Flexure Tracker System (FFTS). In interferometry, the planning of observations is much more tightly connected to the reduction of data than in traditional astronomy. Such observations need to be carefully prepared, taking into account the constraints imposed by scientific objectives as well as features of the instrument. The Observation Preparation Software (OPS), currently under development at MPIA, is a tool to support an astronomer (observer) in the complex process of preparing observations for LINC-NIRVANA. The main goal of this tool is to give the observer an idea of what he or she can do, and what to expect, under given conditions.
The Hobby-Eberly Telescope has identified a number of problems with the current observatory control interface used by operators to run the telescope and associated equipment. We consider the applicability of a purely graphical interface to replace the existing one and explore the pros and cons of such an interface. The design decisions for a new interface are discussed, and the process by which we plan to create it is described. An initial prototype has been developed and reviewed with the telescope operators; the prototype and the reasoning behind some of the decisions are presented.
Spitzer Space Telescope was launched on 25 August 2003 into an Earth-trailing solar orbit to acquire infrared observations from space. Development of the Mission Operations System (MOS) prior to launch was very different from that for planetary missions, in that the MOS teams and Ground Data System had to be ready to support all aspects of the mission at launch (i.e., no cruise period for finalizing the implementation). For Spitzer, all mission-critical events post launch happen in hours or days rather than months or years, as is traditional with deep space missions.
At the end of 2000 the Project was dealt a major blow when the MOS had an unsuccessful Critical Design Review (CDR). The project made major changes at the beginning of 2001 in an effort to get the MOS (and Project) back on track. The result for the Spitzer Space Telescope was a successful launch of the observatory followed by an extremely successful In Orbit Checkout (IOC) and operations phase. This paper describes how the project was able to recover the MOS to a successful delta CDR by mid 2001, and the changes in philosophy, experiences, and lessons learned that followed. It describes how projects must invest early, or else invest heavily later in the development phase, to achieve a successful operations phase.
The Spitzer Space Telescope was successfully launched on August 25th, 2003. After a 98-day In Orbit Checkout and Science Verification period, Spitzer began its five-and-one-half-year mission of science observations at wavelengths ranging from 3.6 to 160 microns. Results from the first two years of operations show the observatory performing
exceedingly well, meeting or surpassing performance requirements in all areas. The California Institute of Technology
is the home for the Spitzer Science Center (SSC). The SSC is responsible for selecting observing proposals, providing
technical support to the science community, performing mission planning and science observation scheduling,
instrument calibration and performance monitoring during operations, and production of archival quality data products.
This paper will provide an overview of the Science Operations System at the SSC focusing on lessons learned during
the first two years of science operations and the changes made in the system as a result. This work was performed at the
California Institute of Technology under contract to the National Aeronautics and Space Administration.
System management for any organization can be a challenge, but satellite projects present their own issues. I will present the network and system architecture chosen to support the scientists in the Chandra X-ray Center. My group provides the infrastructure for science data processing, mission planning, user support, archive support and software development. Our challenge is to create a stable environment with enough flexibility to roll with the changes during the mission. I'll discuss system and network choices, web service, backups, security and systems monitoring. I'll also cover how to build flexible infrastructure, how to support a large group of scientists with a relatively small staff, what challenges we faced (anticipated and unanticipated), and what lessons we learned over the past 6 years since the launch of Chandra. Finally I'll outline our plans for the future, including Beowulf cluster support, an improved helpdesk system, and methods for dealing with the explosive amount of data that needs to be managed.
After over 6 highly successful years on orbit, the Chandra X-ray Observatory continues to deliver world-class science to members of the X-ray community. Much of this success can be attributed to an excellent space vehicle; however, the creation of several unique software tools has allowed for extremely efficient and smooth-running operations. The Chandra Flight Operations Team, staffed by members of Northrop Grumman Space Technology, has created a suite of software tools designed to help optimize on-console operations, mission planning and scheduling, and spacecraft engineering and trending. Many of these tools leverage COTS products and Web-based technologies. We describe the original mission concepts, the need for supplemental software tools, their development and implementation, the use of these tools in the current operations scenario, and the efficiency improvements due to their use.
The Chandra X-Ray Observatory was launched in July 1999 and has yielded extraordinary scientific results. Behind the scenes, our Monitoring and Trends Analysis (MTA) approach has proven to be a valuable resource in providing telescope diagnostic information and analysis of scientific data to assess Observatory performance. We have created and maintain real-time monitoring and long-term trending tools. This paper updates our 2002 SPIE paper on the design of the system and discusses lessons learned.
SPIRE is one of three instruments on board ESA's Herschel space observatory, due for launch in 2008. The instrument comprises both a photometer and a Fourier transform spectrometer. The Herschel mission has a limited operational lifetime of 3.5 years and, as with all space-based facilities, has very high development and operational costs. As a result, observing time is a valuable and limited resource, making efficiency of crucial importance. In this paper we present recent results derived from the SPIRE photometer simulator, detailing the optimum observing-mode parameters to be employed by the Herschel/SPIRE system. We also outline the efficiency of various modes, leading to the conclusion that scan mapping is the optimal mode for the mapping of regions greater than ~4' × 4'.
This work presents the use of system modeling with the aim of maintaining and improving instrument performance. The complexity and cryogenic nature of infrared systems prevent continuous hardware upgrades, so the advantages of modeling are of high value. Two applications of modeling and basic control theory are shown. The first example concerns the performance monitoring of the ISAAC cryogenic system. The measured and simulated cold structure temperatures are compared in real time, allowing possible failures of the cooling system to be anticipated and operational downtime to be reduced. The second example concerns the position control of the duo-echelle grating of the VISIR spectrometer. The controlled system was identified and simulated to select controller parameters that improve image stability. Preliminary results show that better compensation of the disturbances induced by telescope movements is possible, leading to enhanced data quality.
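The ISAAC-style comparison of measured and simulated temperatures can be caricatured in a few lines of Python. The first-order cool-down model and all constants below are invented for illustration; the real monitoring uses the instrument's actual thermal model.

    import math

    def model_temperature(t: float, t_ambient: float = 290.0,
                          t_cold: float = 65.0, tau: float = 3600.0) -> float:
        """Invented first-order model: exponential approach to the set point."""
        return t_cold + (t_ambient - t_cold) * math.exp(-t / tau)

    def check(t: float, measured: float, tolerance: float = 2.0) -> bool:
        """Return True while the cryostat tracks the model within tolerance (K)."""
        residual = abs(measured - model_temperature(t))
        if residual > tolerance:
            print(f"WARNING t={t:.0f} s: residual {residual:.1f} K; "
                  "inspect the cooling system before downtime results")
            return False
        return True

    check(7200.0, 96.0)   # healthy: close to the modelled ~95.4 K
    check(7200.0, 105.0)  # drifting: triggers the warning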
This paper explores the hows and whys of the Spitzer Mission Operations System's (MOS) success, efficiency, and affordability in comparison to other observatory-class missions. MOS exploits today's flight, ground, and operations capabilities, embraces automation, and balances both risk and cost. With operational efficiency as the primary goal, MOS maintains a strong control process by translating lessons learned into efficiency improvements, enabling the MOS processes, teams, and procedures to evolve rapidly from concept, through thorough validation, into in-flight implementation. Operational teaming, planning, and execution are designed to enable re-use. Mission changes, unforeseen events, and continuous improvement have often forced us to learn to fly anew. Collaborative spacecraft operations and remote science and instrument teams have become well integrated, and work together to improve and optimize each human, machine, and software-system element. Adaptation to tighter spacecraft margins has driven continuous operational improvements via automated and autonomous software coupled with improved human analysis. Based upon what we now know and what we need to improve, adapt, or fix, the projected mission lifetime continues to grow, as does the opportunity for numerous scientific discoveries.
The goal of the Wide-field Infrared Survey Explorer (WISE) mission is to perform a highly sensitive all-sky survey in 4 wavebands from 3 to 25 μm. WISE will be launched on a Delta II rocket into a 500 km Sun-synchronous orbit in June 2009; during its 7 months of operations it will acquire about 50 GBytes of raw science data every day, which will be down-linked via the TDRSS relay satellite system and processed into an astronomical catalogue and image atlas.
The WISE mission operations system is being implemented collaboratively by UCLA, JPL and IPAC (Caltech). In this paper we describe the challenges of managing a high-data-rate, cryogenic, low-Earth-orbit mission: maintaining safe on-orbit operations, recovering quickly from anomalies (mandated by the need to provide complete sky coverage in a limited lifetime), and producing and disseminating high-quality science products, all within the constraints imposed by the funding profiles of small space missions.
We have developed an operations simulator for the Large Synoptic Survey Telescope (LSST), implemented in Python using the SimPy package, with a modular and object-oriented design. The main components include a telescope model, a sky model, a weather database for 3 sites, a scheduler and multiple observing proposals. All the proposals derive from a parent class which is fully configurable through about 75 parameters to implement a specific science survey. These parameters control the target-selection region, the composition of the sequence of observations for each field, the timing restrictions and filter-selection criteria of each observation, the lunation handling, seeing limits, etc. The currently implemented proposals include Weak Lensing, Near Earth Asteroids, Supernova and Kuiper Belt Objects.
The telescope model computes the slew-time delay from the current position to any given target position, using a complete kinematic model for the mount, dome and rotator, as well as optics alignment corrections; it is fully configurable through about 50 parameters. The scheduler module combines the information received from the proposals and the telescope model to select the best target at each moment, promoting targets that fulfill multiple surveys and storing all simulator activities in a MySQL database for further analysis of the run. The scheduler is also configurable, for example in the weight given to the slew-time delay when selecting the next field to observe.
This simulator has been very useful in clarifying some of the technical and scientific capabilities of the LSST design, and gives a good baseline for a future observation scheduler.
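The abstract does not give the simulator's internals, but a per-axis kinematic slew estimate of the kind described can be sketched as below: each axis follows a trapezoidal (or, for short moves, triangular) velocity profile, and the slowest axis sets the delay. The speeds and accelerations here are placeholders, not LSST values.

    # Simplified per-axis slew-time estimate in the spirit of the
    # simulator's kinematic model. All rates are illustrative.
    def axis_slew_time(delta_deg, vmax, accel):
        """Trapezoidal (or triangular) velocity profile for one axis."""
        d = abs(delta_deg)
        d_ramp = vmax ** 2 / accel        # distance to accelerate + decelerate
        if d <= d_ramp:                   # never reaches vmax: triangular
            return 2.0 * (d / accel) ** 0.5
        return 2.0 * vmax / accel + (d - d_ramp) / vmax

    def slew_time(d_az, d_alt, d_rot):
        # Mount, dome and rotator move concurrently; the slowest axis wins.
        return max(axis_slew_time(d_az, vmax=7.0, accel=3.5),
                   axis_slew_time(d_alt, vmax=3.5, accel=3.5),
                   axis_slew_time(d_rot, vmax=3.5, accel=1.0))

    print(f"{slew_time(30.0, 10.0, 45.0):.1f} s")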
The Thirty Meter Telescope (TMT) project is a partnership between the Association of Canadian Universities for Research in Astronomy (ACURA), the Associated Universities for Research in Astronomy (AURA), Caltech and the University of California. The complexity of TMT and its diverse suite of instrumentation (many of the instruments will be assisted by adaptive optics front-ends) necessitates the design and implementation of a highly automated, well-tuned observatory software system. The fundamental system requirements are low operating costs and excellent reliability, both of which demand simplicity in software design. This paper addresses how these requirements will be achieved, as well as how the system will handle observing-program execution.
The Georgia Tech Research Institute and the University of New Mexico are developing a compact, rugged, eye-safe lidar (laser radar) to be used specifically for measuring atmospheric extinction in support of the second generation of the CCD/Transit Instrument (CTI-II). The CTI-II is a 1.8 meter telescope that will be used to accomplish a precise time-domain imaging, photometric, and astrometric survey at the McDonald Observatory in West Texas. The supporting lidar will enable more precise photometry by providing real-time measurements of the amount of atmospheric extinction as well as its cause, i.e. low-lying aerosols, dust or smoke in the free troposphere, or high cirrus. The goal of this project is to develop reliable, cost-effective lidar technology for any observatory. The lidar data can be used to allocate observatory time efficiently and to provide greater integrity for ground-based data. The design is described in this paper along with estimates of the lidar's performance.
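For orientation, the standard relation between a lidar-derived extinction profile and a photometric correction can be sketched as follows; the profile values and bin size are invented for the example.

    # Illustrative conversion from an extinction-coefficient profile to a
    # photometric correction in magnitudes.
    import math

    def optical_depth(alphas_per_km, dz_km=0.1):
        """Integrate the extinction coefficient profile (per km) over height."""
        return sum(a * dz_km for a in alphas_per_km)

    def extinction_mag(tau, airmass=1.0):
        # Delta-m = 2.5 log10(e) * tau * X  ~=  1.086 * tau * X magnitudes
        return 2.5 * math.log10(math.e) * tau * airmass

    profile = [0.12] * 10 + [0.02] * 40   # aerosol layer below 1 km, clearer above
    tau = optical_depth(profile)
    print(f"tau = {tau:.3f}, extinction = {extinction_mag(tau, 1.2):.3f} mag")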
The gain of the Advanced CCD Imaging Spectrometer (ACIS) instrument has changed over time since the Chandra X-ray Observatory launch in July 1999. The calibration and data analysis teams have addressed this issue, but only recently has the operations team examined how the commanding of the instrument could be altered to partially compensate for these changes. This paper addresses the changes in the gain, their impact on science data, and the commanding adjustments the operations team has considered to improve the science return. We also discuss the restrictions imposed by the commanding software on our response.
Instrument response uncertainties are almost universally ignored in current astrophysical X-ray data analyses. Yet modern X-ray observatories, such as Chandra and XMM-Newton, frequently acquire data for which photon-counting statistics are not the dominant source of error. Making allowance for performance uncertainties is, however, technically challenging, both in understanding and specifying the uncertainties themselves and in employing them in data analysis. Here we describe Monte Carlo methods developed to include instrument performance uncertainties in typical model parameter estimation studies. These methods are used to estimate the limiting accuracy of Chandra for understanding typical X-ray source model parameters. The present study indicates that, for ACIS-S3 observations, the limiting accuracy is reached at ~10⁴ counts.
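The Monte Carlo idea can be illustrated schematically: draw random realisations of the instrument response, refit the source model under each, and compare the resulting parameter scatter with the counting-statistics error. The toy effective-area curve, the 3% perturbation, and the power-law source below are assumptions for the sketch, not Chandra's actual calibration products.

    # Schematic Monte Carlo treatment of calibration uncertainty: refit a
    # toy power-law normalisation under randomly perturbed effective-area
    # curves. With ample counts, the calibration wobble dominates.
    import numpy as np

    rng = np.random.default_rng(0)
    energies = np.linspace(1.0, 8.0, 50)            # keV, toy grid
    true_area = 600.0 * np.exp(-energies / 10.0)    # cm^2, toy effective area
    model_rate = 1.0 * energies ** -1.7             # photons/cm^2/s/bin, toy source

    norms = []
    for _ in range(1000):
        area = true_area * (1.0 + 0.03 * rng.standard_normal())  # 3% calib. wobble
        counts = rng.poisson(model_rate * true_area * 1e4)       # 10 ks exposure
        # Fit only the normalisation, assuming the (wrong) perturbed area:
        norms.append(np.sum(counts) / np.sum(energies ** -1.7 * area * 1e4))
    norms = np.array(norms)
    print(f"fractional scatter in fitted norm: {norms.std() / norms.mean():.3f}")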
Now in operation for over 6 years, the Chandra X-ray Observatory (CXO) has sampled a variety of space environments. Its highly elliptical orbit, with a 63.5 hr period, regularly takes the spacecraft through the Earth's radiation belts, the magnetosphere, the magnetosheath and into the solar wind. Additionally, the CXO has weathered several severe solar storms during its time in orbit. Given the vulnerability of Chandra's Charge Coupled Devices (CCDs) to radiation damage from low-energy protons, proper radiation management has been a prime concern of the Chandra team. A comprehensive approach utilizing scheduled radiation safing, in addition to both on-board autonomous radiation monitoring and manual intervention, has proved successful at limiting further radiation damage. However, the future of autonomous radiation monitoring on board the CXO faces a new challenge as the multi-layer insulation (MLI) on its radiation monitor, the Electron, Proton, Helium Instrument (EPHIN), continues to degrade, leading to elevated temperatures. At higher operating temperatures, the data from some EPHIN channels can become noisy and unreliable for radiation monitoring. This paper explores the full implications of the loss of EPHIN for CXO radiation monitoring by evaluating the fluences the CXO experienced during 40 autonomous radiation-safing events from 2000 through 2005 under various hypothetical scenarios in which EPHIN serves in a limited capacity, or not at all, as a radiation monitor. We also consider the possibility of replacing EPHIN with Chandra's High Resolution Camera (HRC) for radiation monitoring.
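A toy version of threshold-based autonomous safing, of the general kind evaluated in the paper, is sketched below; the sample rates, cadence, and trip level are purely illustrative.

    # Integrate particle fluence from a monitor time series and trip a
    # safing action when the rate crosses a limit. Numbers are invented.
    def run_monitor(rates_per_s, dt_s=300.0, trip_level=50.0):
        """rates_per_s: particle-rate samples from the on-board monitor."""
        fluence, safed = 0.0, False
        for rate in rates_per_s:
            fluence += rate * dt_s
            if not safed and rate > trip_level:
                safed = True      # e.g. translate the CCDs out of the focal plane
        return fluence, safed

    fluence, safed = run_monitor([5, 8, 20, 120, 300, 40])
    print(f"accumulated fluence = {fluence:.0f} counts, safing tripped = {safed}")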
In mid-2004, the Hubble Space Telescope (HST) began experiencing occasional losses of lock during Fine Guidance Sensor (FGS) guide star acquisitions, threatening a potential loss of science. These failures were associated with an increasing disparity between the FGS-derived estimates of gyro bias calculated in orbit day and those calculated in orbit night. Early efforts to mitigate the operational effects of this Attitude Observer Anomaly (AOA) succeeded; however, the magnitude of the anomaly continued to increase at a linear rate and operational problems resumed in mid-2005. Continued analysis led to an additional on-orbit mitigation strategy that succeeded in reducing the AOA signature. Before the investigation could be completed, HST began operations under the life-extending Two Gyro Science mode. This eliminated both the operational effects of and the visibility into the AOA phenomenon.
Possible causes of the anomaly at the vehicle system level included component hardware failures, flight software errors in control law processing, distortion of the telescope optical path, and deformation of vehicle structure. Although the mechanism of the AOA was not definitively identified, the Anomaly Review Board (ARB) chartered to investigate the anomaly concluded that the most likely root cause lies within one of HST's 6 rate-integrating gyroscopes.
This paper provides a summary of the initial paths of investigation, the analysis and testing performed to attempt to isolate the source, and a review of the findings of the ARB. The possibility of future operational impacts and available methods of on-orbit mitigation are also addressed.
Though the Hubble Space Telescope (HST) has proven to be a stable platform for astronomical observations when compared with ground-based imaging, it features its own characteristic optical variations that have required post-observation processing to support the full range of science investigations that should be possible with HST. While the overall system focus has been monitored and adjusted throughout the life of the Observatory, the recent installation of the Advanced Camera for Surveys and its High Resolution Camera has allowed us to use phase retrieval techniques to accurately measure changes in coma and astigmatism as well. The aim of the current work is to relate these measurements of wavefront error back to characterizations more common in science data analysis (e.g., FWHM and ellipticity). We show how variations in these quantities over the timescales observed may impact the photometric and astrometric precision required by many current HST programs, as well as the characterization of barely resolved objects. We discuss how improved characterization and modeling of the point spread function (PSF) variations may help HST observers achieve the full science capabilities of the Observatory.
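One common way to reduce a PSF image to the catalogue-style quantities mentioned (FWHM and ellipticity) is via second image moments, as in the generic estimator below; this is a standard technique, not the authors' own code.

    # Estimate FWHM and ellipticity of a PSF image from second moments.
    import numpy as np

    def moments_shape(img):
        y, x = np.indices(img.shape)
        w = img / img.sum()
        xc, yc = (w * x).sum(), (w * y).sum()
        qxx = (w * (x - xc) ** 2).sum()
        qyy = (w * (y - yc) ** 2).sum()
        qxy = (w * (x - xc) * (y - yc)).sum()
        sigma = (qxx * qyy - qxy ** 2) ** 0.25        # geometric-mean width
        fwhm = 2.3548 * sigma                          # exact for a Gaussian
        ellip = np.hypot(qxx - qyy, 2 * qxy) / (qxx + qyy)
        return fwhm, ellip

    yy, xx = np.mgrid[0:31, 0:31]
    psf = np.exp(-((xx - 15) ** 2 / 8.0 + (yy - 15) ** 2 / 4.5))  # toy PSF
    print("FWHM = %.2f px, e = %.3f" % moments_shape(psf))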
It has been approximately 6.4 years since the gyroscopes on HST were last replaced. During this time two gyroscopes have failed and two have developed problems, though operational work-arounds are available. Further gyroscope replacement will not occur until the anticipated Shuttle servicing mission scheduled for November 2007. To extend the science mission life of HST by up to an additional 15 months, the control system has been modified from a three/four-gyro to a two-gyro control law. This paper describes the new two-gyro guide star acquisition strategy to enable science observations, including the integration of astrometry commanding with the acquisition logic.
The focus of the Hubble Space Telescope (HST) has been monitored throughout the life of the observatory primarily by phase retrieval techniques. This method generates and fits model Point Spread Functions (PSFs) to nearly in-focus stellar images to solve for coefficients of the Zernike polynomials describing focus, and often coma and astigmatism. Here, we discuss what these data from the ongoing monitoring strategies and special observations tell us about the modes and timescales observed in HST optical variations.
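A heavily simplified, illustrative analogue of such a focus fit is sketched below: model PSFs are generated from a circular pupil with a Zernike defocus term, and the coefficient is grid-searched against an "observed" image. A real phase-retrieval fit solves for many Zernike terms with proper sampling and noise handling.

    import numpy as np

    n = 64
    y, x = (np.indices((n, n)) - n // 2) / (n // 4)   # pupil coords; unit-radius disk
    rho2 = x ** 2 + y ** 2
    pupil = (rho2 <= 1.0).astype(float)
    z4 = np.sqrt(3.0) * (2.0 * rho2 - 1.0)            # Zernike defocus (Noll j = 4)

    def model_psf(a4_waves):
        field = pupil * np.exp(2j * np.pi * a4_waves * z4)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
        return psf / psf.sum()

    observed = model_psf(0.05)                        # stand-in for a stellar image
    coeffs = np.linspace(-0.2, 0.2, 81)
    best = min(coeffs, key=lambda a: ((model_psf(a) - observed) ** 2).sum())
    # For a symmetric pupil the PSF is blind to the sign of an even
    # aberration, so only the magnitude of the defocus is recovered here.
    print(f"recovered |defocus| = {abs(best):.3f} waves")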
The first of two 1.2m MONET robotic telescopes became operational at McDonald Observatory in Texas in spring 2006, the second one will be erected at the South African Astronomical Observatory's Sutherland Station. About 60% of the observing time is dedicated to scientific use by the consortium (Univ. Göttingen, McDonald Obs. and the South African Astron. Obs.) and 40% is for public and school outreach. The alt-az-mounted f/7 RC imaging telescopes are optimized for fast operations, with slewing speeds up to 10°/sec in all axes, making them some of the fastest of their class in the world. The unusual clam-shell enclosures provide the telescopes with nearly unobstructed views of the sky. The new observatory control system fully utilizes the hardware capabilities and permits local, remote, and robotic operations and scheduling, including the monitoring of the weather, electric power, the building, current seeing, all software processes, and the archiving of new data.
We summarize the on-orbit performance of the CCD detectors in the Suzaku X-ray Imaging Spectrometer during the first eight months of the mission. Gradual changes in energy scale, spectral resolution and other performance characteristics, mainly due to radiation exposure, are presented and compared with pre-launch expectations.
The performance of all scientific instruments of the Very Large Telescope (VLT) is monitored by the Quality Control (QC) Group of the European Southern Observatory. The basic goals are to detect instrumental failures on short timescales and to evaluate and detect long-term trends. The QC process mainly involves pipeline-produced calibration products and is set up on a file-by-file basis, which implies that currently each detector or channel of an instrument is checked separately. All operational VLT instruments have a small number of detectors, but with the advent of multi-detector instruments like OmegaCAM and VISTA, which have up to 32 individual detectors, this approach becomes unfeasible. In this paper, we present solutions to this problem for the VLT instrument VIMOS. With four detectors operating simultaneously, VIMOS can be regarded as a test bed for studying new QC concepts which can then be implemented for other instruments with higher multiplicity.
CRIRES, a first-generation VLT instrument, is a cryogenic high-resolution (R~100,000) IR spectrograph operating in the range 1-5 μm. Here we present a model-based wavelength calibration for CRIRES. The procedure uses a streamlined model of the CRIRES optical path that enables the location of the illumination in the detector focal plane to be calculated to sub-pixel accuracy for a given wavelength and instrumental configuration. The instrumental configuration is described in terms of the tips and tilts of optical surfaces, their optical properties and environmental conditions. These parameters are derived through the application of a minimisation algorithm that uses multiple realisations of the model to find the configuration giving the optimal match between simulated wavelength data and dedicated calibration exposures. Once the configuration is accurately determined, the model can be used to provide the dispersion solution for science exposures or to produce two-dimensional simulated data for a given spectral source. In addition, we describe comparisons to early laboratory data and the optimisation strategy adopted.
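The optimisation step can be illustrated with a deliberately simple stand-in: a parametric model predicts detector positions for lines of known wavelength, and a minimiser tunes the parameters against measured centroids. The polynomial dispersion model and all numbers below are placeholders for the full CRIRES optical-path model.

    # Toy analogue of model-based wavelength calibration.
    import numpy as np
    from scipy.optimize import minimize

    lam = np.array([3.245, 3.270, 3.301, 3.333, 3.358])      # known lines, microns
    x_obs = np.array([112.3, 310.1, 556.7, 811.9, 1011.2])   # measured centroids, px

    def x_model(p, lam):
        x0, disp, curv = p                # offset, linear dispersion, curvature
        return x0 + disp * (lam - lam[0]) + curv * (lam - lam[0]) ** 2

    def cost(p):
        return np.sum((x_model(p, lam) - x_obs) ** 2)

    fit = minimize(cost, x0=[100.0, 8000.0, 0.0], method="Nelder-Mead")
    print("best-fit parameters:", fit.x)
    print("rms residual: %.3f px" % np.sqrt(cost(fit.x) / lam.size))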
Raw data from the Chandra X-ray Observatory are processed by a set of standard data processing pipelines to create scientifically useful data products appropriate for further analysis by end users. Fully automated pipelines read the dumped raw telemetry byte stream from the spacecraft and perform the common reductions and calibrations necessary to remove spacecraft and instrumental signatures and convert the data into physically meaningful quantities that can be further analyzed by observers. The resulting data products are subject to automated validation to ensure correct pipeline processing and to verify that the spacecraft configuration and scheduling matched the observer's request and any constraints. In addition, pipeline processing monitors science and engineering data for anomalous indications and trending, and triggers alerts if appropriate. Data products are ingested and stored in the Chandra Data Archive, where they are made available for downloading by users.
In this paper, we describe the architecture of the data processing system, including the scientific algorithms that are applied to the data, and interfaces to other subsystems. We place particular emphasis on the impacts of design choices on system integrity and maintainability. We review areas where algorithmic improvements or changes in instrument characteristics have required significant enhancements, and the mechanisms used to effect these changes while assuring continued scientific integrity and robustness. We discuss major enhancements to the data processing system that are currently being developed to automate production of the Chandra Source Catalog.
The CIAO (Chandra Interactive Analysis of Observations) software package was first released in 1999 following the launch of the Chandra X-ray Observatory and is used by astronomers across the world to analyze Chandra data as well as data from other telescopes. From the earliest design discussions, CIAO was planned as a general-purpose scientific data analysis system optimized for X-ray astronomy, and consists mainly of command-line tools (allowing easy pipelining and scripting) with a parameter-based interface layered on a flexible data-manipulation I/O library. The same code is used for the standard Chandra archive pipeline, allowing users to recalibrate their data in a consistent way. We discuss the lessons learned from the first six years of the software's evolution. Our initial approach to documentation evolved to concentrate on recipe-based "threads" which have proved very successful. A multi-dimensional abstract approach to data analysis has allowed new capabilities to be added while retaining existing interfaces. A key requirement for our community was interoperability with other data analysis systems, leading us to adopt standard file formats and an architecture as robust as possible to the input of foreign data files, as well as to re-use a number of external libraries. We support users who are comfortable with coding themselves via a flexible user scripting paradigm, while the availability of tightly constrained pipeline programs is of benefit to less computationally advanced users. As with other analysis systems, we have found that infrastructure maintenance and re-engineering is a necessary and significant ongoing effort that needs to be planned into any long-lived astronomy software project.
Chandra standard data processing involves hundreds of different types of data products and pipelines. Pipelines are initiated by different types of events or notifications and may depend upon many other pipelines for input data. The Chandra automated processing system (AP) was designed to handle the various notifications and orchestrate the pipeline processing. Certain data sets may require "special" handling that deviates slightly from the standard processing thread. Also, bulk reprocessing of data often involves new processing requirements. Most recently, a new type of processing to produce source catalogs has introduced requirements not anticipated by the original AP design. Managing these complex dependencies and evolving processing requirements in an efficient, flexible, and automated fashion presents many challenges. This paper describes the most significant of these challenges, the AP design changes required to address them, and the lessons learned along the way.
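At its core, this kind of orchestration is a dependency-ordering problem, as the minimal sketch below shows; the pipeline names are invented, and the real AP layers notifications, retries, and "special" threads on top of the basic ordering.

    # Run each pipeline only after all of its input pipelines have completed.
    from graphlib import TopologicalSorter   # Python 3.9+ standard library

    deps = {
        "level1_events": {"telemetry_decom"},
        "aspect_solution": {"telemetry_decom"},
        "level2_products": {"level1_events", "aspect_solution"},
        "source_catalog": {"level2_products"},
    }

    for pipeline in TopologicalSorter(deps).static_order():
        print("running:", pipeline)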
The calibration database implemented for the Chandra X-ray Observatory is the most detailed and extensive CalDB of its kind to date. Built according to the NASA High Energy Astrophysics Science Archive Research Center (HEASARC) CalDB prescription, the Chandra CalDB provides indexed, selectable calibration data for detector responses, mirror effective areas, grating efficiencies, instrument geometries, default source aim points, CCD characteristics, and quantum efficiencies, among many others. The combined index comprises approximately 500 entries. A standard FTOOLS parametric interface allows users and tools to access the index. Unique dataset selection requires certain input calibration parameters such as mission, instrument, detector, UTC date and time, and certain ranged parameter values. The goals of the HEASARC CalDB design are (1) to separate software upgrades from calibration upgrades, (2) to allow multi-mission use of analysis software (for missions with a compliant CalDB) and (3) to facilitate the use of multiple software packages for the same data. While we have been able to meet the multivariate needs of Chandra with the current CalDB implementation from HEASARC, certain requirements and desirable enhancements have been identified that raise the prospect of a developmental rewrite of the CalDB system. The explicit goal is to meet Chandra's specific needs better, but such upgrades may also provide significant advantages to CalDB planning for future missions. In particular we believe we will introduce important features aiding in the development of mission-independent analysis software. We report our current plans and progress.
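The selection logic described (exact keys plus a validity date) can be sketched generically as below; the index records and field names are invented for illustration and do not reproduce the HEASARC CalDB format.

    # Schematic CalDB-style lookup: filter on exact keys and pick the
    # entry valid at the observation date.
    from datetime import date

    index = [
        {"instrument": "ACIS", "detector": "S3", "product": "gain",
         "valid_from": date(1999, 8, 1), "file": "acis_s3_gain_v1.fits"},
        {"instrument": "ACIS", "detector": "S3", "product": "gain",
         "valid_from": date(2003, 2, 1), "file": "acis_s3_gain_v2.fits"},
    ]

    def lookup(instrument, detector, product, obs_date):
        candidates = [e for e in index
                      if (e["instrument"], e["detector"], e["product"]) ==
                         (instrument, detector, product)
                      and e["valid_from"] <= obs_date]
        if not candidates:
            raise LookupError("no applicable calibration file")
        return max(candidates, key=lambda e: e["valid_from"])["file"]

    print(lookup("ACIS", "S3", "gain", date(2004, 6, 15)))  # -> ..._v2.fits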
The Spitzer Science Center (SSC) provides a set of user tools to support search and retrieval of Spitzer Science Archive (SSA) data via the Internet. This paper presents the system architecture and design principles that support the Archive Interface subsystem of the SSA. The Archive Interface is an extension of the core components of the Uplink subsystem and provides a set of Web services that allow open access to the SSA data set. Web services technology provides a basis for searching the archive and retrieving data products. The Archive Interface provides three modes of access: a rich client, a Web browser, and scripts (via Web services). The rich client allows the user to perform complex queries and submit requests for data that are asynchronously downloaded to the local workstation. Asynchronous download is a critical feature given the large volume of a typical data set (on the order of 40 Gigabytes). For basic queries and retrieval of data, the Web browser interface is provided. Advanced users can use scripting languages with Web services capabilities (e.g., Perl) to query and download data from the SSA. The Archive Interface subsystem is the primary means of searching and retrieving data from the SSA and is critical to the success of the Spitzer Space Telescope.
Data Quality Analysis (DQA) for astronomical infrared maps and spectra acquired by NASA's Spitzer Space Telescope is one of the important functions performed in routine science operations at the Spitzer Science Center of the California Institute of Technology. A DQA software system has been implemented to display, analyze and grade Spitzer science data. This supports the project requirement that the science data be verified after calibration and before archiving and subsequent release to the astronomical community. The software has an interface for browsing the mission data and for visualizing images and spectra. It accesses supporting data in the operations database and updates the database with DQA grading information. The system has worked very well since the beginning of the Spitzer observatory's routine phase of operations, and can be regarded as a model for DQA operations in future space science missions.
We have developed a post-Basic Calibrated Data pipeline processing software suite called "IRACproc". This package facilitates the co-addition of dithered or mapped Spitzer/IRAC data to make them ready for further analysis with application to a wide variety of IRAC observing programs. In acting as a wrapper for the Spitzer Science Center's MOPEX software, IRACproc improves the rejection of cosmic rays and other transients in the co-added data. In addition, IRACproc performs (optional) Point Spread Function (PSF) fitting, subtraction, and masking of saturated stars.
The under/critically sampled IRAC PSFs are subject to large variations in shape between successive frames as a result of sub-pixel shifts from dithering or telescope jitter. IRACproc improves cosmic ray and other transient rejection by using spatial derivative images to map the locations and structure of astronomical sources. By protecting sources with a metric that accounts for these large variations in the PSFs, our technique maintains the structure and photometric reliability of the PSF, while at the same time removing transients at the lowest level.
High dynamic range PSFs for each IRAC band were obtained by combining an unsaturated core, derived from stars in the IRAC PSF calibration project, with the wings of a number of bright stars. These PSFs have dynamic ranges of ~10⁷ and cover the entire IRAC field of view. PSF subtraction can drastically reduce the light from a bright star outside the saturated region. For a bright star near the array center it is possible to detect faint sources as close as ~15-20" that would otherwise be lost in the glare. In addition, PSF fitting has been shown to provide photometry accurate to 1-2% for over-exposed stars.
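The derivative-based protection idea can be sketched as follows: pixels with large local gradients in the stack median are treated as belonging to real (undersampled) sources and shielded from sigma-clipping, while isolated outliers elsewhere are rejected as transients. The thresholds are illustrative, not IRACproc's actual values.

    # Sketch of derivative-protected transient rejection on a frame stack.
    import numpy as np

    def transient_mask(stack, grad_thresh=5.0, clip_sigma=4.0):
        """stack: (n_frames, ny, nx) array of registered frames.
        Returns True where a pixel should be rejected as a transient."""
        med = np.median(stack, axis=0)
        gy, gx = np.gradient(med)
        protected = np.hypot(gx, gy) > grad_thresh * np.median(np.abs(med))
        resid = stack - med
        sigma = 1.4826 * np.median(np.abs(resid), axis=0) + 1e-9  # MAD estimate
        outlier = np.abs(resid) > clip_sigma * sigma
        return outlier & ~protected

    stack = np.random.default_rng(1).normal(100.0, 3.0, (5, 64, 64))
    stack[2, 30, 30] += 500.0            # inject a cosmic-ray hit
    print("rejected pixels:", transient_mask(stack).sum())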
The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) will measure baryonic acoustic oscillations, first discovered in the Cosmic Microwave Background (CMB), to constrain the nature of dark energy by performing a blind search for Ly-α emitting galaxies within a 200 deg² field and a redshift bin of 1.8 < z < 3.7. This will be achieved with VIRUS, a wide-field, low-resolution spectrograph comprising 145 IFUs. The data reduction pipeline will have to extract ≈35,000 spectra per exposure (≈5 million per night, i.e. 500 million in total), perform astrometric, photometric, and wavelength calibration, and find and classify objects in the spectra fully automatically. We describe our ideas for how to achieve this goal.
The LAMOST telescope is expected to have first light in late 2007. Its 4-meter aperture and 4000-fiber feed will make it a powerful spectroscopic sky-survey instrument, and will also pose a challenge for data processing and analysis. So far, several statistical methods, mainly based on PCA, have been developed by the LAMOST team for automatic spectral classification and redshift measurement. Statistical methods based on Hidden Markov Modelling have become popular in many areas since the 1990s; they are rich in mathematical structure and can form the theoretical basis for a wide range of applications, e.g. speech recognition and pattern recognition. They are therefore promising tools for automatic spectral processing and analysis. In this paper, I briefly introduce the theoretical aspects of this type of statistical modelling and show possible applications to automatic spectral data processing and analysis.
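As a minimal demonstration of the machinery involved, the scaled forward algorithm below scores a sequence of discretised spectral features against a two-state HMM; the states, symbols, and probabilities are invented for illustration.

    # Scaled forward algorithm for a small discrete-emission HMM.
    import numpy as np

    start = np.array([0.6, 0.4])              # P(state at first pixel)
    trans = np.array([[0.9, 0.1],             # state-to-state transitions
                      [0.2, 0.8]])
    emit = np.array([[0.7, 0.2, 0.1],         # P(symbol | state); symbols might be
                     [0.1, 0.3, 0.6]])        # "continuum", "weak line", "strong line"

    def log_likelihood(obs):
        """Returns log P(obs | model), rescaling to avoid underflow."""
        alpha = start * emit[:, obs[0]]
        log_l = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
            log_l += np.log(alpha.sum())
            alpha = alpha / alpha.sum()
        return log_l

    print("log-likelihood:", log_likelihood([0, 0, 1, 2, 2, 0]))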
The Dark Energy Survey (DES; operations 2009-2015) will address the nature of dark energy using four independent and complementary techniques: (1) a galaxy cluster survey over 4000 deg² in collaboration with the South Pole Telescope Sunyaev-Zel'dovich effect mapping experiment, (2) a cosmic shear measurement over 5000 deg², (3) a galaxy angular clustering measurement within redshift shells to redshift = 1.35, and (4) distance measurements to 1900 Type Ia supernovae. The DES will produce 200 TB of raw data in four bands. These data will be processed into science-ready images and catalogs and co-added into deeper, higher-quality images and catalogs. In total, the DES dataset will exceed 1 PB, including a 100 TB catalog database that will serve as a key science analysis tool for the astronomy/cosmology community. The data rate, volume, and duration of the survey require a new type of data management (DM) system that (1) offers a high degree of automation and robustness and (2) leverages existing high-performance computing infrastructure to meet the project's DM targets. The DES DM system consists of (1) grid-enabled, flexible and scalable middleware developed at NCSA for the broader scientific community, (2) astronomy modules that build upon community software, and (3) a DES archive to support automated processing and to serve DES catalogs and images to the collaboration and the public. In the recent DES Data Challenge 1 we deployed and tested the first version of the DES DM system, successfully reducing 700 GB of raw simulated images into 5 TB of reduced data products and cataloguing 50 million objects with calibrated astrometry and photometry.
We describe a proposed architecture for the Large Synoptic Survey Telescope (LSST) moving object processing pipeline, based on a similar system under development for the Pan-STARRS project. This pipeline is responsible for identifying and discovering fast-moving objects such as asteroids, updating information about them, generating appropriate alerts, and supporting queries about moving objects. Of particular interest are potentially hazardous asteroids (PHAs).
We consider the system as being composed of two interacting components. First, candidate linkages corresponding to moving objects are found by tracking detections ("tracklets"). To achieve this in reasonable time we have developed specialized data structures and algorithms that efficiently evaluate the possibilities using quadratic fits of the detections on a modest time scale.
For the second component we take a Bayesian approach to validating, refining, and merging linkages over time. Thus new detections increase our belief that an orbit is correct and contribute to better orbital parameters. Conversely, missed expected detections reduce the probability that the orbit exists. Finally, new candidate linkages are confirmed or refuted based on previous images.
In order to assign new detections to existing orbits, we propose bipartite graph matching to find a maximum-likelihood assignment subject to the constraint that detections match at most one orbit and vice versa. We describe how to construct this matching process to deal properly with false detections and missed detections.
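The assignment step maps naturally onto the classical linear assignment problem, as the sketch below shows: each (orbit, detection) pair is scored by a positional log-likelihood, and the Hungarian algorithm picks the best one-to-one matching. Dummy rows and columns, omitted here for brevity, would absorb false and missed detections; the coordinates and error are made up.

    # Detection-to-orbit assignment via the Hungarian algorithm.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    pred = np.array([[10.0, 5.0], [42.0, 7.0], [3.0, 30.0]])   # orbits' predictions
    det = np.array([[41.5, 7.2], [10.3, 4.8], [3.1, 29.7]])    # new detections
    sigma = 0.5                                                # astrometric error

    # Negative log-likelihood of each detection belonging to each orbit:
    cost = ((pred[:, None, :] - det[None, :, :]) ** 2).sum(axis=2) / (2 * sigma ** 2)
    orbit_idx, det_idx = linear_sum_assignment(cost)
    for o, d in zip(orbit_idx, det_idx):
        print(f"orbit {o} <- detection {d} (cost {cost[o, d]:.2f})")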
VISIR is the new ESO VLT instrument mounted at the Cassegrain focus of the Melipal (UT3) telescope. At Paranal it is the very first instrument capable of high-sensitivity imaging in the N-band and Q-band mid-infrared atmospheric windows. In addition, it features a long-slit spectrometer with a range of spectral resolutions between 150 and 30000. VISIR had been included in the standard VLT data flow operation even before regular observing started in March/April 2005. Data products are pipeline-processed and quality-checked by the Data Flow Operations Group in Garching. The calibration data are processed to create calibration products and to extract Quality Control parameters. These parameters provide health checks and monitor the instrument's performance; they are stored in a database, compared to earlier data, trended over time and made available on the VISIR Quality Control web pages, which are updated daily. We present the parameters that were designed to assess the quality of the data and to monitor the performance of this MIR instrument. We also discuss the general process of data flow and data inspection.
We present the design of the system for handling observations metadata at the Science Archive Facility of the European Southern Observatory using Sybase ASE, Replication Server and Sybase IQ. The system has been re-engineered to enhance the browsing capabilities of Archive contents through searches on any observation parameter, to support on-line updates of all parameters, and to allow the on-the-fly introduction of those updates into files retrieved from the Archive. The system also reduces the replication of duplicate information and simplifies database maintenance.
VLTI Data Flow Operations consist of monitoring the performance of the different VLTI instruments offered to the community and of verifying the quality of the calibration and scientific data and their associated products. Since the beginning of MIDI (April 2004) and AMBER (October 2005) Service Mode operations, scientific as well as calibration data have been accumulated to monitor the instruments and the quality of the observations on different time scales and under different conditions and system configurations. In this presentation, we describe the Quality Control procedures and give statistics and results for the different parameters used for instrument monitoring, on time scales from hours to years, in the case of MIDI. We show that these include parameters extracted directly from the instruments (instrumental transfer function, flux stability, image quality, detector stability...) and parameters extracted from some of the sub-systems associated with the instruments (adaptive optics, telescopes used...). We also discuss how instrument monitoring will develop once more instrument modes or sub-systems, such as PRIMA, are offered to the community.
Observation Planning and Scheduling Poster Session
This paper discusses the Spitzer Space Telescope General Observer proposal process. Proposals, consisting of the scientific justification, basic contact information for the observer, and observation requests, are submitted electronically using a client-server Java package called Spot. The Spitzer Science Center (SSC) uses a one-phase proposal submission process, meaning that fully planned observations are submitted for most proposals at the time of submission, not months after acceptance. Ample documentation and tools, including an email-based Helpdesk, are available to observers on the SSC web pages to support the preparation of proposals. Upon submission, proposals are immediately ingested into a database which can be queried at the SSC for program information, statistics, etc. at any time. Large proposals are checked for technical feasibility, and all proposals are checked for duplication against already approved observations. Output from these tasks is made available to the Time Allocation Committee (TAC) members. At the review meeting, web-based software is used to record reviewer comments and keep track of the voted scores. After the meeting, another Java-based web tool, Griffin, is used to track the approved programs as they go through technical reviews, duplication checks and minor modifications before the observations are released for scheduling. In addition to detailing the proposal process, lessons learned from the first two General Observer proposal calls are discussed.
Planning an observation schedule for ground-based and space-based telescopes alike requires careful constraint management and implementation. Scientific constraints, which meet an observer's desire to maximize science returns, must be weighed against the physical constraints and capabilities of the telescope. Since its launch in 1999, the Chandra X-Ray Observatory (CXO) has provided excellent science in spite of evolving constraints, including the proliferation of constraint types and varying degrees of restriction. The CXO observation schedule is generated on a weekly basis, yet the mission planning process maintains the flexibility to turn around a target-of-opportunity (TOO) request within 24 hours. This flexibility is only possible when all personnel responsible for schedule generation - flight operations engineers, science operations personnel, and program office support - are actively involved in constraint management. A proper balance of software tools, guideline documentation, and adequate subjective judgment is required for proper constraint implementation. The decision-making process employed by mission planning personnel requires accurate, complete, and current constraint information.
In some eyes, the Phase I proposal selection process is the most important activity handled by the Space Telescope Science Institute (STScI). Proposing for HST and other missions consists of requesting observing time and/or archival research funding. This step is called Phase I, in which the scientific merit of a proposal is considered by a community-based peer-review process. Accepted proposals then proceed through Phase II, where the observations are specified in sufficient detail to enable scheduling on the telescope.
Each cycle, the Hubble Space Telescope (HST) Telescope Allocation Committee (TAC) reviews proposals and awards observing time that is valued at $0.5B when the total expenditures for HST over its lifetime are figured on an annual basis. This is a very important endeavor that we continue to fine-tune. The process is open to the science community, and we constantly receive comments and praise for it. In the last year we have had to deal with the loss of the Space Telescope Imaging Spectrograph (STIS) and the move from 3-gyro operations to 2-gyro operations.
This paper outlines how operational issues impact the HST science peer-review process. We discuss the process that was used to recover from the loss of the STIS instrument and how we dealt with the loss of 1/3 of the current science observations. We also discuss the issues relating to 3-gyro vs. 2-gyro operations and how those changes impacted proposers, our in-house processing, and the TAC.
Since the start of observations with the ESO Very Large Telescope (VLT) on April 3, 1999, a significant fraction of the observations has been executed in Service Mode (SM). SM observations require that the Principal Investigator (PI) provide all necessary information before the observation, so that the night astronomers in Chile have precise and complete indications of the execution requirements of every program. The observers also need to be able to know which observations can possibly be executed during a given night.
The link between these external users and the operations staff at ESO-Chile is the User Support Department (USD), which ensures that this information flow runs smoothly and in a uniform way. This requires a well-designed network of reports and communication procedures serving purposes such as conveying information from the users to the observatory, allowing the USD support astronomers to review and validate efficiently the material submitted by the users, enabling reliable program-execution tracking, and providing rapid feedback to the users on program progress. These tasks manage a level of information flow that complements that of the VLT Data Flow System.
This article provides an overview of how the exchange of information for SM runs was optimized over the past 7 years, of the lessons learned by interacting with external users and internal operations staff, and of the resulting changes and improvements.