The Atacama Large Millimeter Array (ALMA) is a joint project involving astronomical organizations in Europe and North America. ALMA will consist of at least 64 12-meter antennas operating at millimeter and sub-millimeter wavelengths, located at an altitude of about 5000 m in the Chilean Atacama desert. The main challenge for the development of the ALMA software, which will support the whole end-to-end operation, is the fact that the computing group is extremely distributed. Groups at different institutes have started the design of all subsystems based on the ALMA Common Software (ACS) framework, which provides the necessary standardization.
The operation of ALMA by a community of astronomers distributed over various continents will need an adequate network infrastructure. The operation centers in Chile are split between an ALMA high altitude site, a lower altitude control centre, and a support centre in Santiago. These centers will be complemented by ALMA Regional Centers (ARCs) in Europe, North America, and Japan.
All this will require computing and communications equipment at more than 5000m in a radio-quiet area. This equipment must be connected to high bandwidth and reliable links providing access to the ARCs. The design of a global computing and communication infrastructure is on-going and aims at providing an integrated system addressing both the operational computing needs and normal IT support. The particular requirements and solutions foreseen for ALMA in terms of computing and communication systems will be explained.
The Atacama Large Millimeter Array (ALMA) will, when it is completed
in 2012, be the world's largest millimeter & sub-millimeter radio
telescope. It will consist of 64 antennas, each one 12 meters in
diameter, connected as an interferometer.
The ALMA Test Interferometer Control System (TICS) was developed as a
prototype for the ALMA control system. Its initial task was to provide
sufficient functionality for the evaluation of the prototype
antennas. The main antenna evaluation tasks include surface
measurements via holography and pointing accuracy, measured at both
optical and millimeter wavelengths.
In this paper we will present the design of TICS, which is a
distributed computing environment. In the test facility there are four
computers: three real-time computers running VxWorks (one on each
antenna and a central one) and a master computer running Linux. These
computers communicate via Ethernet, and each of the real-time
computers is connected to the hardware devices via an extension of the
CAN bus.
We will also discuss our experience with this system and outline
changes we are making in light of our experiences.
In the last two years the Very Large Telescope Interferometer (VLTI) has, on the one hand, grown with the addition of new subsystems and, on the other hand, matured with experience from commissioning and operation. Two adaptive optics systems for the 8-m unit telescopes have been fully integrated into the VLTI infrastructure. The first scientific instrument, MIDI, has been commissioned and is now being offered to the community. A second scientific instrument, AMBER, is currently being commissioned. The performance of the interferometer is being enhanced by the installation of a dedicated fringe sensor, FINITO, a tip-tilt sensor in the interferometric laboratory, IRIS, and the associated control loops. Four relocatable 1.8-m auxiliary telescopes and three additional delay lines are being added to the infrastructure. At the same time the design and development of the dual-feed PRIMA facility, which will have a major impact on the existing control system, is in full swing. In this paper we review the current status of the VLTI control system and assess the impact on complexity and reliability caused by this explosion in size. We describe the methods and technologies applied to maximize performance and reliability in order to keep VLTI and its control system a competitive, reliable and productive facility.
The 4.1-meter Southern Astrophysical Research (SOAR) Telescope is now entering the operations phase, after a period of construction and system commissioning. The SOAR TCS, implemented in the LabVIEW software package, has kept pace throughout development with the installation of the other telescope subsystems, and has proven to be a key component for the successful deployment of SOAR. In this third article of the SOAR TCS series, we present the results achieved when operating the SOAR telescope under control of the SOAR TCS software. A review is made of the design considerations and the implementation details, followed by a presentation of the software extensions that allow seamless integration of instruments into the system, as well as the programming techniques that permit the execution of remote observing procedures.
Proc. SPIE 5496, Development of a state machine sequencer for the Keck Interferometer: evolution, development, and lessons learned using a CASE tool approach, 0000 (15 September 2004); doi: 10.1117/12.548830
This paper presents a discussion of the evolution of a sequencer from a simple Experimental Physics and Industrial Control System (EPICS) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a Computer Aided Software Engineering (CASE) tool approach. The main purpose of the Interferometer Sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii.
The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation.
The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented, covering the difficulty of integrating CASE-tool-generated C++ code into a large control system consisting of multiple infrastructures.
This paper presents a discussion of the architectural issues resulting when software systems need to cancel operations once they have been initiated. This may seem a minor issue, but our experience is that this requirement can have a huge effect on the design of instrumental software environments. A number of major constraints on the structure of command-based environments such as the AAO's DRAMA system can be traced to the perceived need to be able to cancel any operation cleanly. This becomes particularly difficult to implement if these operations involve significant amounts of time or even potentially indefinite amounts of time, such as operations involving blocking I/O. In general, the cleanest results come from having a process or thread cancel itself, rather than relying on the ability to cancel it externally, but this turns the problem into one of finding mechanisms whereby processes can discover, reliably, that they need to cancel themselves. As system architectures are considered for the next generation of telescopes, it seems timely to consider these design problems and even to what extent the ideal requirement of cleanly cancellable operations may have been reduced by the move towards queue-scheduled operations and away from traditional interactive observing.
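To make the self-cancellation pattern concrete, here is a minimal Python sketch (purely illustrative, not DRAMA code): a long-running operation periodically checks a cancellation flag at safe points and winds itself down cleanly, instead of being killed externally.

    # Minimal sketch of cooperative cancellation (illustrative, not DRAMA code):
    # the operation checks a flag at safe points and cancels itself cleanly.
    import threading
    import time

    cancel_requested = threading.Event()

    def long_operation():
        for step in range(1000):
            if cancel_requested.is_set():
                print("operation cancelled cleanly at step", step)
                return
            time.sleep(0.01)          # stands in for one unit of real work
        print("operation completed")

    worker = threading.Thread(target=long_operation)
    worker.start()
    time.sleep(0.2)
    cancel_requested.set()            # request cancellation; the worker decides when
    worker.join()

The hard part, as the paper notes, is arranging for operations blocked in I/O to notice such a flag at all; the sketch only covers the easy, compute-bound case.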
Altair, ALTitude-conjugated Adaptive optics for InfraRed at Gemini North, was commissioned last October and is one of Canada's major contributions to the Gemini Project, a seven-nation consortium that built identical 8-m telescopes in Hawaii (Gemini North) and Chile (Gemini South). Altair coordinates and transfers data and status to both local and external subsystems at very high speeds. External Gemini subsystems include the Telescope Control System (TCS), Acquisition and Guiding (A&G), Observatory Control System (OCS), Gemini Interlock System (GIS), Time Server, Data Handling System (DHS), and Status and Alarm Database. This paper focuses on a few select sequences, such as closing the control loop and delivering a corrected image, collecting statistics, and displaying data, to highlight the complexity of the interactions within Altair.
Traditionally, telescope main axes controllers use a cascaded PI structure. We investigate the benefits and limitations of this and ask whether better performance can be achieved with modern control techniques. Our interest is mainly in improving disturbance rejection, since the tracking performance is normally easy to achieve. A comparison is made with more advanced controller structures using H-infinity design. This type of controller is more complex and needs a mathematical model of the telescope dynamics. We discuss how to obtain this model and also how to reduce it to a more manageable size using state-of-the-art model reduction techniques. As a design example the VLT altitude axis is chosen.
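For orientation only, the cascaded PI structure mentioned above is, in its generic textbook form (not the specific VLT design or its parameter values), an outer position loop whose output is the velocity reference tracked by an inner velocity loop, each loop using a proportional-integral controller:

    C_v(s) = K_{p,v} \left( 1 + \frac{1}{T_{i,v}\, s} \right), \qquad
    C_p(s) = K_{p,p} \left( 1 + \frac{1}{T_{i,p}\, s} \right)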
LINC-NIRVANA is a Fizeau interferometer for the Large Binocular Telescope (LBT) performing imaging in the near infrared (J, H, and K bands). Multi-conjugate adaptive optics is used to increase sky coverage and to obtain diffraction-limited images over a 2 arcminute field of view. The control system consists of five independent loops, which are mediated through a master control. Due to its configuration, LINC-NIRVANA has no delay line like other interferometers. To remove residual atmospheric piston, the system must control both the primary and secondary mirrors, in addition to a third, dedicated piston mirror. This leads to a complex and interlocked control scheme and software. We will present parts of the instrument software design, which was developed in an object-oriented manner using UML. Several diagram types were used to structure the overall system and to evaluate the needs of each sub-system and the interfaces between sub-systems.
VISTA is a wide-field survey telescope with a 1.6° field of view, sampled with a camera containing a 4 x 4 array of 2K x 2K pixel infrared detectors. The detectors are spaced so an image of the sky can be constructed without gaps by combining 6 overlapping observations, each part of the sky being covered at least twice, except at the tile edges. Unlike a typical ESO-VLT instrument, the camera also has a set of on-board wavefront sensors. The camera has a filter wheel, a collection of pressure and temperature sensors, and a thermal control system for the detectors and the cryostat window, but the most challenging aspect of the camera design is the need to maintain a sustained data rate of 26.8 Mb/second from the infrared detectors. The camera software needs to meet the requirements for VISTA, to fit into the ESO-VLT software architecture, and to interface with an upgraded IRACE system being developed by ESO-VLT. This paper describes the design for the VISTA camera software and discusses the software development process. It describes the solutions we have adopted to achieve the desired data rate, maximise survey speed, meet ESO-VLT standards, interface to the IRACE software and interface the on-board wavefront sensors to the VISTA telescope software.
OmegaCAM is the wide field optical imager for the VLT Survey Telescope (VST), part of the VLT Observatory operated by the European Southern Observatory (ESO). The camera consists of a mosaic of 32 4k x 2k CCDs, whose 16k x 16k pixel array almost completely fills its 1 square degree field of view. The instrument will start scientific operations in the first quarter of 2005. In this paper, after a brief review of the instrument software design, we describe the functionality of each major software subsystem: ICS (Instrument Control Software), which is in charge of the control of the opto-mechanics, in particular of the filter system; AG, which takes care of autoguiding; IA (Image Analysis), in charge of measuring aberrations using a curvature-like wavefront sensor; and OS (Observation Software), which coordinates all instrument subsystems in the execution of scientific observations and creates data files for the archive. Finally we report on the activities for the integration of the software with the opto-mechanics and the instrument electronics.
FLAMES is a complex observational facility for multi-object spectroscopy installed at the ESO VLT UT2 telescope at Paranal. It consists of a Fibre Positioner that feeds GIRAFFE, a medium-high resolution spectrograph, and UVES, a high resolution stand-alone spectrograph operational in slit mode since 1999. The Positioner is the core component of FLAMES. It is a rather large and complex system comprising two spherical focal plates of approx. 90 cm in diameter, an exchanger mechanism, R-θ robot motions and a pneumatic gripper mechanism with a built-in miniature CCD camera. The main task of the Positioner is to place a fibre (button) at a given focal plate position with an accuracy better than 40 microns. The fibre positioning process is performed on the plate attached to the robot while an observation is being performed on the plate attached to the telescope rotator. The whole instrument is driven by software designed in accordance with the VLT Common Software standards, allowing the complete integration of the instrument in the VLT environment. The paper mainly focuses on two areas: the low-level control and the performance of the Fibre Positioner; and the high-level coordinating software architecture that provides for parallel operation of multiple instruments.
The Canada-France-Hawaii Telescope is now operating a wide-field visible camera with a one-degree field of view. We have developed a guiding and auto-focus system that uses two stage-mounted CCD cameras fed by Shack-Hartmann optics providing position and focus error signals to the telescope guiding and focus control systems. The two camera stages patrol guide fields separated by more than a degree, one to the north and one to the south of the main camera field. Guiding generates a 50 Hz correction signal applied to a tip-tilt plate in the light path and a low frequency correction signal sent to control telescope position. During guiding a focus error signal is used to adjust telescope focus. Calibration issues include guide camera focusing, image distortion produced by the wide field corrector, guide stage positioning, and determining ideal guide star positions on the cameras. This paper describes the resulting system, including preselected guide star acquisition, guiding, telescope focus control, and calibration.
The VISTA wide field survey telescope will use the ESO Telescope Control System as used on the VLT and NTT. However the sensors for both auto-guiding and active optics are quite different and so the ESO TCS will require some significant modifications. VISTA will use large format CCDs at fixed locations in the focal plane for auto-guiding and a pair of curvature sensors, also fixed in the focal plane, for wave-front sensing. As a consequence, three reference stars are required for each science observation in contrast to the VLT which uses a single star for both auto-guiding and active optics. This paper will outline the reasons for adopting this design, review how it differs from the VLT/NTT and describe the modifications that are being made to the ESO TCS to enable it to be used for VISTA. It will describe the software that implements auto-guiding and active optics in the VLT TCS and how the design has been adapted to the different requirements of VISTA. This will show how the modular and distributed design of the ESO TCS has enabled it to be adapted to a new telescope with radically different design choices whilst maintaining the existing architecture and the bulk of the existing implementation.
Proc. SPIE 5496, Real-time operation without a real-time operating system for instrument control and data acquisition, 0000 (15 September 2004); doi: 10.1117/12.551111
We are building the Field-Imaging Far-Infrared Line Spectrometer (FIFI LS) for the US-German airborne observatory SOFIA. The detector read-out system is driven by a clock signal at a certain frequency. This signal has to be provided, and all other sub-systems have to work synchronously with this clock. The data generated by the instrument have to be received by a computer in a timely manner. Usually these requirements are met with a real-time operating system (RTOS).
In this presentation we show how we meet these demands differently, avoiding the stiffness of an RTOS. Digital I/O cards with large buffers separate the asynchronously working computers from the synchronously working instrument. The advantage is that the data processing computers do not need to process the data in real time; it is sufficient that they can process the incoming data stream on average. But since the data are read in synchronously, the problem of relating commands to responses (data) has to be solved: the data arrive at a fixed rate. The receiving I/O card buffers the data until the computer can access it. To relate the data to commands sent previously, the data are tagged by counters in the read-out electronics. These counters count the system's heartbeat and signals derived from it. The heartbeat, and control signals synchronous with the heartbeat, are sent by an I/O card working as a pattern generator. Its buffer is continuously programmed with a pattern which is clocked out on the control lines. A counter in the I/O card keeps track of the number of pattern words clocked out. By reading this counter, the computer knows the state of the instrument and the meaning of the data that will arrive with a given time tag.
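The counter-tagging idea described above can be sketched as follows; this is a hypothetical illustration (function and variable names invented, not the FIFI LS code) of how a heartbeat count lets the asynchronous computer relate incoming data frames to commands issued earlier.

    # Hypothetical sketch: data frames carry the heartbeat count at which they
    # were produced, so they can be matched to previously issued commands.
    from collections import deque

    pending_commands = {}      # heartbeat count at which a command takes effect
    data_queue = deque()       # (heartbeat_tag, payload) read from the I/O card buffer

    def send_command(heartbeat_now, command):
        effective = heartbeat_now + 2          # command takes effect two beats later
        pending_commands[effective] = command
        return effective

    def process_incoming():
        while data_queue:
            tag, _payload = data_queue.popleft()
            # the frame was taken under the latest command effective at or before 'tag'
            active = [cmd for beat, cmd in sorted(pending_commands.items()) if beat <= tag]
            meaning = active[-1] if active else "default configuration"
            print(f"frame tagged {tag}: interpreted under '{meaning}'")

    send_command(heartbeat_now=100, command="chopper phase A")
    data_queue.extend([(101, b"..."), (102, b"..."), (103, b"...")])
    process_incoming()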
We present a design for the computer systems which control, configure,
and monitor the Atacama Large Millimeter Array (ALMA) correlator and
process its output. Two distinct computer systems implement this
functionality: a rack-mounted PC controls and monitors the
correlator, and a cluster of 17 PCs processes the correlator output into
raw spectral results. The correlator computer systems interface to
other ALMA computers via gigabit Ethernet networks utilizing CORBA and
raw socket connections. ALMA Common Software provides the software
infrastructure for this distributed computer environment. The control
computer interfaces to the correlator via multiple CAN busses and the
data processing computer cluster interfaces to the correlator via
sixteen dedicated high speed data ports. An independent array-wide
hardware timing bus connects to the computer systems and the
correlator hardware ensuring synchronous behavior and imposing hard
deadlines on the control and data processor computers. An aggregate
correlator output of 1 gigabyte per second with 16 millisecond periods
and computational data rates of approximately 1 billion floating point
operations per second define other hard deadlines for the data
processing computer cluster.
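The figures quoted above translate into the following per-period budget; the even split across the sixteen data ports is an assumption made here for illustration.

    # Back-of-envelope budget derived from the figures in the abstract; the even
    # split across the sixteen data ports is an assumption for illustration.
    output_rate = 1e9        # aggregate correlator output, bytes per second
    period      = 16e-3      # seconds per output period
    ports       = 16
    flops       = 1e9        # approx. floating point operations per second

    bytes_per_period = output_rate * period       # 16 MB delivered every 16 ms
    bytes_per_port   = bytes_per_period / ports   # ~1 MB per port per period
    flop_per_period  = flops * period             # ~16 Mflop of processing per period

    print(f"{bytes_per_period/1e6:.0f} MB per period, "
          f"{bytes_per_port/1e6:.0f} MB per port, "
          f"{flop_per_period/1e6:.0f} Mflop per period")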
The increasing number of digital control applications in the context of the VLT, and particularly the VLT Interferometer, brought the need to find a common solution to address the problems of performance and maintainability. Tools for Advanced Control (TAC) aims at helping both control and software engineers in the design and prototyping of real-time control applications by providing them with a set of standard functions and an easy way to combine them to create complex control algorithms. In this paper we describe the software architecture and design of TAC, the VLT standard for digital control applications. Algorithms are described at schematic level and take the form of a set of interconnected function blocks. Periodical execution of the algorithm as well as features like runtime modification of parameters and probing of internal data are also managed by TAC, allowing the application designers to avoid spending time writing low value software code and therefore focus on application-specific concerns. We also summarize the results achieved on the first actual applications using TAC, to manage real-time control or digital signal processing algorithms, currently deployed and being commissioned at Paranal Observatory.
Instruments and telescopes being planned for the US community include a wide assortment of facilities. These will require a consistent interface. Existing controllers use a variety of interfaces that will make using multiple controller types difficult. A new architecture that takes maximum advantage of code and hardware re-use, maintainability and extensibility is being developed at NOAO. The MONSOON Image acquisition/Detector controller system makes maximum use of COTS hardware and Open-Source development and can support OUV and IR detectors, singly or in very large mosaics. A basic requirement of the project was the ability to seamlessly handle even massive focal planes like LSST and ODI.
Software plays a vital role in the flexibility of the MONSOON system. The authors have built on their experience with previous systems (e.g. GNAAC, wildfire, ALICE, SDSU, etc.) to develop a command interface, based on a dictionary of commands, that can be applied to any detector controller project. The Generic Pixel Server, or GPX, concept consists of a dictionary that not only supports the needs of projects that use MONSOON controllers, but whose command set can also be used as the interface to any detector controller with only modest additional effort. This generic command set (the GPX dictionary) is defined here as an introduction to the GPX concept.
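A command-dictionary interface of the kind described might look like the following minimal sketch; the command names and handlers here are hypothetical and are not the actual GPX dictionary.

    # Minimal sketch of a dictionary-of-commands interface to a detector
    # controller; command names and handlers are hypothetical, not the GPX set.
    def set_exposure_time(controller, seconds):
        controller["exposure_time"] = float(seconds)

    def start_exposure(controller):
        controller["state"] = "exposing"

    def read_state(controller):
        return controller["state"]

    COMMAND_DICTIONARY = {
        "setExpTime": set_exposure_time,
        "startExp":   start_exposure,
        "getState":   read_state,
    }

    def dispatch(controller, verb, *args):
        if verb not in COMMAND_DICTIONARY:
            raise ValueError(f"unknown command: {verb}")
        return COMMAND_DICTIONARY[verb](controller, *args)

    controller = {"state": "idle"}
    dispatch(controller, "setExpTime", 30.0)
    dispatch(controller, "startExp")
    print(dispatch(controller, "getState"))    # -> exposing

Because instruments and observatory software only ever speak to the dictionary, swapping the controller behind it requires no change on the calling side, which is the point of the GPX concept.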
The San Diego State University Generation 2 CCD controller (SDSU-2)
architecture is widely used in both optical and infrared astronomical
instruments. This architecture was employed in the CCD controllers
for the DEIMOS instrument commissioned on Keck-II in June 2002.
In 2004, the CCD dewar in the HIRES instrument on Keck-I will be
upgraded to a 3 x 1 mosaic of MIT/LL 2K x 4K CCDs controlled by an
SDSU-2 CCD controller.
For each of these SDSU-2 CCD controllers, customized versions of PAL
chips were developed to extend the capabilities of this controller
architecture. For both mosaics, a custom timing board PAL enables rapid, software-selectable switching between dual- and single-amplifier-per-CCD readout modes while reducing excess utilization of fiber optic bandwidth for the latter. For the HIRES CCD mosaic, a custom PAL for the clock generation boards provides software selection of different clock waveforms that can address the CCDs of the mosaic either individually or globally, without any need to reset the address jumpers on these boards.
The custom PAL for the clock generation boards enables a method for
providing differing exposure times on each CCD of the mosaic. These
distinct exposure times can be implemented in terms of a series of
sub-exposures within a single, global mosaic observation. This allows for more effective observing of sources that have flux gradients across the spectral dimension of the CCD mosaic because those CCDs located near the higher end of the flux gradient can be read out more frequently, thus reducing the number of cosmic rays in each individual sub-exposure from those CCDs.
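As a worked example of the sub-exposure scheme, with invented numbers (the actual exposure times depend on the source and are not taken from the paper):

    # Invented numbers illustrating the sub-exposure scheme described above:
    # CCDs at the bright end of the flux gradient are read out more often.
    global_exposure = 1800.0                   # seconds, one global mosaic observation
    sub_exposure = {"CCD at faint end": 1800.0,
                    "CCD in the middle": 900.0,
                    "CCD at bright end": 450.0}

    for ccd, t_sub in sub_exposure.items():
        n_sub = int(global_exposure // t_sub)
        print(f"{ccd}: {n_sub} sub-exposure(s) of {t_sub:.0f} s "
              f"within one {global_exposure:.0f} s observation")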
The software for the Atacama Large Millimeter Array (ALMA) is being developed by many institutes on two continents. The software itself
will function in a distributed environment, from the 0.5-14 km baselines that separate antennas to the larger distances that
separate the array site at the Llano de Chajnantor in Chile from the operations and user support facilities in Chile, North America
and Europe. Distributed development demands 1) interfaces that allow separated groups to work with minimal dependence on their counterparts
at other locations; and 2) a common architecture to minimize duplication and ensure that developers can always perform similar
tasks in a similar way. The Container/Component model provides a blueprint for the separation of functional from technical concerns:
application developers concentrate on implementing functionality in Components, which depend on Containers to provide them with
services such as access to remote resources, transparent serialization of entity objects to XML, logging, error handling and security. Early system integrations have verified that this architecture is sound and that developers can successfully exploit its features. The Containers and their services are provided by a system-oriented development team as part of the ALMA Common Software (ACS), middleware that is based on CORBA.
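A toy illustration of this separation of concerns is sketched below; the real ACS container interfaces and services are considerably richer, so the class and method names here are invented.

    # Toy sketch of the container/component separation of concerns; the real
    # ACS interfaces differ, and the names used here are invented.
    import logging
    logging.basicConfig(level=logging.INFO)

    class Container:
        """Provides technical services so components need not implement them."""
        def __init__(self, name):
            self._logger = logging.getLogger(name)
            self._components = {}

        def get_logger(self):
            return self._logger

        def activate(self, name, component_class):
            component = component_class(self)       # hand the container to the component
            self._components[name] = component
            self._logger.info("activated component %s", name)
            return component

    class AntennaComponent:
        """Functional code only: technical services come from the container."""
        def __init__(self, container):
            self._log = container.get_logger()

        def point(self, az, el):
            self._log.info("pointing to az=%.2f el=%.2f", az, el)

    container = Container("toy-container")
    antenna = container.activate("ANTENNA01", AntennaComponent)
    antenna.point(180.0, 45.0)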
The ALMA Common Software (ACS) is a set of application frameworks built on top of CORBA. It provides a common software infrastructure to all partners in the ALMA collaboration. The usage of ACS extends from high-level applications such as the Observation Preparation Tool [7] that will run on the desk of astronomers, down to the Control Software [6] domain. The purpose of ACS is twofold: from a system perspective, it provides the implementation of a coherent set of design patterns and services that will make the whole ALMA software [1] uniform and maintainable; from the perspective of an ALMA developer, it provides a friendly programming environment in which the complexity of the CORBA middleware and other libraries is hidden and coding is drastically reduced. The evolution of ACS is driven by a long-term development plan; however, on each 6-month release cycle the plan is adjusted based on incoming requests from the ALMA subsystem development teams. ACS was presented at SPIE 2002 [2]. In the two years since then, the core services provided by ACS have been extended, while the coverage of the application framework has been increased to satisfy the needs of high-level and data flow applications. ACS is available under the LGPL public license. The patterns implemented and the services provided can also be of use outside the astronomical community; several projects have already shown their interest in ACS. This paper presents the status of ACS and the progress over the last two years. Emphasis is placed on showing how requests from ACS users have driven the selection of new features.
ALMA software, from high-level data flow applications down to instrument control, is built using the ACS framework. To meet the challenges of developing distributed software in distributed teams, ACS offers a container/component model that integrates the use of XML transfer objects. ACS containers are built on top of CORBA and are available for C++, Java, and Python, so that ALMA software can be written as components in any of these languages. The containers perform technical aspects of the software system, while components can focus on the implementation of functional requirements.
Like Web services, components can use XML to exchange structured data by value. For Java components, the container seamlessly integrates the use of XML binding classes, which are Java classes that encapsulate access to XML data through type-safe methods. Binding classes are generated from XML schemas, allowing the Java compiler to enforce compliance of application code with the XML schemas.
This presentation will explain the capabilities of the ACS container/component model, and how it relates to other middleware technologies that are popular in industry.
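The binding-class idea can be illustrated with a hand-written Python sketch: typed accessors wrap the XML so that application code never touches raw elements. (ALMA generates such classes for Java from the XML schemas; the element names below are invented for illustration.)

    # Hand-written sketch of the binding-class idea; ALMA generates equivalent
    # Java classes from XML schemas, and the element names here are invented.
    import xml.etree.ElementTree as ET

    class SchedBlockBinding:
        def __init__(self, xml_text):
            self._root = ET.fromstring(xml_text)

        @property
        def name(self) -> str:
            return self._root.findtext("name")

        @property
        def exposure_seconds(self) -> float:
            return float(self._root.findtext("exposureSeconds"))

    doc = ("<SchedBlock><name>M83 mosaic</name>"
           "<exposureSeconds>120</exposureSeconds></SchedBlock>")
    sb = SchedBlockBinding(doc)
    print(sb.name, sb.exposure_seconds)     # typed access, no raw XML in sight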
The enterprise architecture presents a view of how software utilities and applications are related to one another under unifying rules and principles of development. By constructing an enterprise architecture, an organization will be able to manage the components of its systems within a solid conceptual framework. This largely prevents duplication of effort, focuses the organization on its core technical competencies, and ultimately makes software more maintainable. In the beginning of 2003, several prominent challenges faced software development at the GBT. The telescope was not easily configurable, and observing often presented a challenge, particularly to new users. High priority projects required new experimental developments on short time scales. Migration paths were required for applications which had proven difficult to maintain. In order to solve these challenges, an enterprise architecture was created, consisting of five layers: 1) the telescope control system, and the raw data produced during an observation, 2) Low-level Application Programming Interfaces (APIs) in C++, for managing interactions with the telescope control system and its data, 3) High-Level APIs in Python, which can be used by astronomers or software developers to create custom applications, 4) Application Components in Python, which can be either standalone applications or plug-in modules to applications, and 5) Application Management Systems in Python, which package application components for use by a particular user group (astronomers, engineers or operators) in terms of resource configurations. This presentation describes how these layers combine to make the GBT easier to use, while concurrently making the software easier to develop and maintain.
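To picture the layering, a layer 3 call of the kind described would hide the layer 1 and 2 interactions behind one high-level method; the sketch below is hypothetical and its names are not the actual GBT APIs.

    # Hypothetical sketch of the layering: one high-level call (layer 3) hides
    # the lower-level control-system interactions (layers 1 and 2).
    # All names are invented; they are not the actual GBT APIs.
    class TelescopeSession:
        def __init__(self, low_level_api):
            self._api = low_level_api                 # layer 2 wrapper around layer 1

        def configure_continuum(self, receiver, frequency_ghz):
            self._api.select_receiver(receiver)
            self._api.tune(frequency_ghz * 1e9)
            self._api.set_backend("continuum")

        def observe(self, source, minutes):
            self._api.slew_to(source)
            self._api.track(minutes * 60)

    class _StubLowLevelAPI:
        """Stands in for the layer 2 API so the sketch runs on its own."""
        def __getattr__(self, name):
            return lambda *args: print(f"[layer 2] {name}{args}")

    session = TelescopeSession(_StubLowLevelAPI())
    session.configure_continuum("Ku-band", 13.7)
    session.observe("3C286", minutes=5)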
The Large Millimeter Telescope monitor and control system is
automatically generated from a set of XML configuration files. This
ensures that all inter-system communications and user interfaces
adhere to a common standard. The system was originally designed to
control the electro-mechanical components of the telescope but it maps
well to the control of instruments. Properties of the instruments are
defined in XML and subsequent control and communication code and user
interfaces are generated. This approach works well in theory; however,
when it comes to installing the system on the actual instruments,
several issues arise, concerning the goals of instrument developers, the software support available to them, hardware compatibility, and the choice of computer architecture and development environment.
In this paper, we present a discussion of the above issues and suggest solutions that we have tried.
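To make the XML-driven generation concrete, the sketch below shows a hypothetical property definition and the kind of accessor metadata that could be generated from it; the element names are invented and do not reproduce the LMT schema.

    # Hypothetical sketch of generating device metadata from an XML property
    # definition; the element names are invented, not the LMT schema.
    import xml.etree.ElementTree as ET

    definition = """
    <device name="Dewar">
      <property name="temperature" type="float" units="K" writable="false"/>
      <property name="heaterPower" type="float" units="W" writable="true"/>
    </device>
    """

    def generate_device_class(xml_text):
        root = ET.fromstring(xml_text)
        properties = {}
        for prop in root.findall("property"):
            properties[prop.get("name")] = {
                "type": prop.get("type"),
                "units": prop.get("units"),
                "writable": prop.get("writable") == "true",
            }
        # real generated code would also emit typed accessors and GUI widgets
        return type(root.get("name"), (object,), {"properties": properties})

    Dewar = generate_device_class(definition)
    print(Dewar.__name__, Dewar.properties["temperature"]["units"])   # Dewar K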
Proc. SPIE 5496, James Webb Space Telescope: supporting multiple ground system transitions in one year, 0000 (15 September 2004); doi: 10.1117/12.550475
Ideas, requirements, and concepts developed during the very early phases of a mission design often conflict with the reality of the situation once the prime contracts are awarded. This happened for the James Webb Space Telescope (JWST) as well. The high-level requirement of a common real-time ground system for both the Integration and Test (I&T) and Operations phases of the mission is meant to reduce the cost and time needed later in mission development for recertification of databases, command and control systems, scripts, display pages, etc. In the case of JWST, the early Phase A flight software development needed a real-time ground system and database prior to the spacecraft prime contractor being selected. To compound the situation, the very low-level requirements for the real-time ground system were not well defined. These two situations caused the initial real-time ground system to be switched out for a system that had previously been used by the flight software development team. To meet the high-level requirement, a third ground system was selected based on the prime spacecraft contractor's needs and JWST Project decisions. The JWST ground system team has responded to each of these changes successfully. The lessons learned from each transition have not only made each subsequent transition smoother, but have also resolved issues earlier in the mission development than would normally occur.
The successful development of any complex control system requires a blend of good software management, an appropriate computer architecture and good software engineering. Due to the large number of controlled parts, high performance goals and required operational efficiency, the control systems for large telescopes are particularly challenging to develop and maintain.
In this paper the authors highlight some of the specific challenges that need to be met by control system developers to meet the requirements within a limited budget and schedule. They share some of the practices applied during the development of the Southern African Large Telescope (SALT) and describe specific aspects of the design that contribute to meeting these challenges. The topics discussed include: development methodology, defining the level of system integration, computer architecture, interface management, software standards, language selection, user interface design and personnel selection.
Time will reveal the full truth, but the authors believe that the significant progress achieved in commissioning SALT (now 6 months from telescope completion) can largely be attributed to the combined application of these practices and design concepts.
The VISTA wide field survey telescope will be operated and maintained from 2006 by ESO at their Cerro Paranal Observatory. To minimise both development costs and operational costs, the telescope's software will reuse software from the VLT wherever feasible. Some software modules will be reused without modification, others will include modifications or enhancements and yet others will be complete rewrites or completely new. This paper examines the methods used in the software development process to integrate existing and new software in a transparent and maintainable manner. On the basis of the work so far performed, some lessons are presented for the reuse of VLT software for a new telescope by an organisation without previous knowledge of VLT software.
The Virgo Gravitational Wave Detector has recently entered its commissioning phase. An important element in this phase is the application of Software Engineering (SE) practices to the Control and Data Analysis Software. This article focuses on the experience of applying those SE practices as a simple but effective set of standards and tools. The main areas covered are software configuration management, problem reporting, integration planning, software testing and system performance monitoring.
Key elements of Software Configuration Management (SCM) are source code control, allowing check-in/check-out of sources from a software archive, combined with a backup plan. This is supported by SCVS, a tool developed on top of CVS to provide an easier and more structured mode of use.
Tracking bugs and modifications is a necessary complement to SCM. A central database with email and web interfaces to submit, query and modify Software Problem Reports (SPRs) has been implemented on top of the WREQ tool.
Integrating software components that were not designed with integration in mind is one of the major problems in software development. An explicit Integration Plan is therefore absolutely essential. We are currently implementing the management of Common Software Releases on a slow upgrade cycle as a structured integration plan.
Software Testing must be closely integrated with development and, to the greatest feasible extent, automated. With the automated test tool tat, the developer can incrementally build a unit/regression test suite that helps measure progress, spot unintended side effects, and focus the development effort.
One of the characteristics of large and complex projects like Virgo is the difficulty of understanding how well the different subsystems are performing and then planning for changes. To support System Performance Monitoring, the tool Big Brother has been adopted to make it possible to trace the reliability of the different subsystems and thus provide essential information for software improvements.
Remote Telescope Markup Language (RTML) is an XML-based interface/document format designed to facilitate the exchange of astronomical observing requests and results between investigators and observatories as well as within networks of observatories. While originally created to support simple imaging telescope requests (Versions 1.0-2.1), RTML Version 3.0 now supports a wide range of applications, from request preparation, exposure calculation, spectroscopy, and observation reports to remote telescope scheduling, target-of-opportunity observations and telescope network administration. The elegance of RTML is that all of this is made possible using a public XML Schema which provides a general-purpose, easily parsed, and syntax-checked medium for the exchange of astronomical and user information while not restricting or otherwise constraining the use of the information at either end. Thus, RTML can be used to connect heterogeneous systems and their users without requiring major changes in existing local resources and procedures. Projects as very different as a number of advanced amateur observatories, the global Hands-On Universe project, the MONET network (robotic imaging), the STELLA consortium (robotic spectroscopy), and the 11-m Southern African Large Telescope are now using or intending to use RTML in various forms and for various purposes.
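Schematically, an observing request of this kind is just an XML document that any party can generate and parse; the sketch below conveys the flavour only, with simplified element names that do not follow the actual RTML 3.0 schema.

    # Flavour-only sketch of building an XML observing request; the element
    # names are simplified and do not follow the actual RTML 3.0 schema.
    import xml.etree.ElementTree as ET

    request = ET.Element("RTML", version="3.0", mode="request")
    target = ET.SubElement(request, "Target", name="NGC 1068")
    ET.SubElement(target, "RightAscension").text = "02:42:40.7"
    ET.SubElement(target, "Declination").text = "-00:00:48"
    schedule = ET.SubElement(request, "Schedule")
    ET.SubElement(schedule, "Exposure", unit="seconds").text = "300"
    ET.SubElement(schedule, "Filter").text = "R"

    print(ET.tostring(request, encoding="unicode"))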
The internet has brought about great change in the astronomical community, but this interconnectivity is just starting to be exploited for use in instrumentation. Utilizing the internet for communication between distributed astronomical systems is still in its infancy, but it already shows great potential. Here we present an example of a distributed network of telescopes that performs more efficiently in synchronous operation than as individual instruments. RAPid Telescopes for Optical Response (RAPTOR) is a system of telescopes at LANL that has intelligent intercommunication, combined with wide-field optics, temporal monitoring software, and deep-field follow-up capability, all working in closed-loop real-time operation. The Telescope ALert Operations Network (TALON) is a network server that allows intercommunication of alert triggers from external and internal resources and controls the distribution of these to each of the telescopes on the network. TALON is designed to grow, allowing any number of telescopes to be linked together and communicate. Coupled with an intelligent alert client at each telescope, it can analyze and respond to each distributed TALON alert based on the telescope's needs and schedule.
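The client-side decision described in the last sentence might, in spirit, look like the following sketch; the field names and thresholds are invented and are not TALON code.

    # Invented sketch of an alert client's decision logic; not TALON code.
    def should_respond(alert, telescope):
        if telescope["busy_with_higher_priority"]:
            return False
        if alert["priority"] < telescope["minimum_priority"]:
            return False
        # only respond if the target is above the local horizon limit
        return alert["altitude_deg"] > telescope["horizon_limit_deg"]

    alert = {"id": "ALERT-0042", "priority": 9, "altitude_deg": 52.0}
    telescope = {"busy_with_higher_priority": False,
                 "minimum_priority": 5,
                 "horizon_limit_deg": 20.0}

    if should_respond(alert, telescope):
        print(f"responding to {alert['id']}: slewing now")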
The eSTAR Project uses intelligent agent technologies to carry out resource discovery, submit observation requests and analyze the reduced data returned from a network of robotic telescopes in an observational grid. The agents are capable of data mining and cross-correlation tasks using on-line catalogues and databases and, if necessary, requesting additional data and follow-up observations from the telescopes on the network. We discuss how the maturing agent technologies can be used both to provide rapid followup to time critical events, and for long term monitoring of known sources, utilising the available resources in an intelligent manner.
Remote observing is the dominant mode of operation for both Keck
Telescopes and their associated instruments. Over 90% of all Keck
observations are carried out remotely from the Keck Headquarters in
Waimea, Hawaii (located 40 kilometers from the telescopes on the summit of Mauna Kea). In addition, an increasing number of observations are now conducted by geographically-dispersed observing teams, with some team members working from Waimea while others collaborate from Keck remote observing facilities located in California. Such facilities are now operational on the Santa Cruz and San Diego campuses of the University of California, and at the California Institute of Technology in Pasadena.
This report describes our use of the X and VNC protocols for providing
remote and shared graphical displays to distributed teams of observers
and observing assistants located at multiple sites. We describe the
results of tests involving both protocols, and explore the limitations
and performance of each under different regimes of network bandwidth
and latency. We also examine other constraints imposed by differences
in the processing performance and bit depth of the various frame buffers used to generate these graphical displays.
Other topics covered include the use of ssh tunnels for securely encapsulating both X and VNC protocol streams and the results of tests of ssh compression to improve performance under conditions of limited network bandwidth. We also examine trade-offs between different topologies for locating VNC servers and clients when sharing
displays between multiple sites.
A newly developed method and technology for determining the spatial position of the feeds of FAST are introduced in this paper. Based on measurements of the position and orientation of the cabin in which the feeds are mounted, a feedback control loop enables the feeds to be driven accurately along the desired tracks. The key technique of this implementation is the precise measurement, at a high sampling rate, of the six-degree-of-freedom coordinates of the cabin suspended in the air. An innovative approach for this purpose, combining data from different types of sensors, is put forward and tested. The measurement errors and their influence on the control accuracy are analyzed theoretically and checked by model tests. The experiments show the feasibility and effectiveness of the measurement and control scheme for the telescope.
The Expanded Very Large Array (EVLA) uses fiber optic technologies for intermediate frequency (IF) digital data transmission and for local oscillator (LO) and reference distribution. These signals are sent on separate fibers to each of the 27 EVLA antennas. The data transmission system transmits the four digitized IF signals from the antennas to the central electronics building. A sustained data rate of 10.24 Gbit/s per channel, or 122.88 Gbit/s of formatted data per antenna, is achieved. Each IF signal uses a set of three channels, twelve channels in total, and is wavelength division multiplexed onto a single fiber. The IF system configuration includes an EML CW laser, an erbium doped fiber amplifier (EDFA), passive optical multiplexers, up to 22 km of standard single mode fiber, and an APD optical receiver.
The LO system uses two fibers to provide a round-trip phase measurement at 1310 nm. The LO system requires that a phase stability of less than 2.8 picoseconds per hour at 40 GHz be maintained across the entire array. To accomplish this, a near real-time continuous measurement is made of the phase delay of the amplitude-modulated 512 MHz signals that are distributed to each antenna. This information is used by the correlator to set the delay on each of the baselines in the array. This paper presents a complete description of the two EVLA fiber systems, LO and IF, including specific component specifications.
The Advanced Technology Solar Telescope (ATST) is intended to be the premier facility for experimental solar physics. A premium has been placed on operating ATST as a laboratory-style observatory to maximize the flexibility available to solar physicists. In particular, the main observation platform is a rotating coude platform supporting eight optical benches on which instruments may be assembled from available components. The Virtual Instrument Model has been developed to formalize the operation of a facility where instruments may exist for a single experiment before components are reassembled into a new instrument. The model allows the laboratory-style operation to fit easily within a typical modern telescope control system. This paper presents one possible implementation of the Virtual Instrument Model, based on the container/component model becoming increasingly popular in software development.
MONSOON is the next generation OUV-IR controller development project being conducted at NOAO. MONSOON was designed from the start as an "architecture" that provides the flexibility to handle multiple detector types, rather than as a set of specific hardware to control a particular detector. The hardware design was done with maintainability and scalability as key factors. We have, wherever possible, chosen commercial off-the-shelf components rather than in-house or proprietary systems.
From first principles, the software design had to be configurable in order to handle many detector types and focal plane configurations. The MONSOON software is multi-layered with simulation of the hardware built in. By keeping the details of hardware interfaces confined to only two libraries and by strict conformance to a set of interface control documents the MONSOON software is usable with other hardware systems with minimal change. In addition, the design provides that focal plane specific details are confined to routines that are selected at load time.
At the top level, the MONSOON Supervisor Level (MSL), we use the GPX dictionary, a defined interface to the software system that instruments and high-level software can use to control and query the system. Below this are PAN-DHE pairs that interface directly with portions of the focal plane. The number of PAN-DHE pairs can be scaled up to increase channel counts and processing speed or to handle larger focal planes. The range of detector applications supported goes from single-detector lab systems and four-detector IR systems like NEWFIRM up to 500-CCD focal planes like LSST. In this paper we discuss the design of the PAN software and its interaction with the detector head electronics.
MONSOON is NOAO's diverse, future-proof, array controller project that holds the promise of a common hardware and software platform for the whole of US astronomy. As such it is an implementation of the Generic Pixel Server which is a new concept that serves OUV-IR pixel data. The fundamental element of the server is the GPX dictionary which is the only entry point into the system from instrumentation or observatory level software. In the MONSOON implementation, which uses mostly commercial off-the-shelf hardware and software components, the MONSOON supervisor layer (MSL) is the highest level layer and this communicates with multiple Pixel-Acquisition-Node / Detector-Head-Electronics (PAN-DHE) pairs to co-ordinate the acquisition of the celestial data. The MSL is the MONSOON implementation of the GPX and this paper discusses the design requirements and the techniques used to meet them.
MONSOON is the next generation OUV-IR controller project being developed at NOAO. The design is flexible, emphasizing code re-use, maintainability and scalability as key factors. The software needs to support widely divergent detector systems ranging from
multi-chip mosaics (for LSST, QUOTA, ODI and NEWFIRM) down to large single or multi-detector laboratory development systems. In order for this flexibility to be effective and safe, the software must be able to configure itself to the requirements of the attached detector system at startup. The basic building block of all MONSOON systems is the PAN-DHE pair which make up a single data acquisition node. In this paper we discuss the software solutions used in the automatic PAN configuration system.
Embedded microcontroller modules offer many advantages over the standard PC such as low cost, small size, low power consumption, direct access to hardware, and if available, access to an efficient preemptive real-time multitasking kernel. Typical difficulties associated with an embedded solution include long development times, limited memory resources, and restricted memory management capabilities. This paper presents a case study on the successes and challenges in developing a control system for a remotely controlled, Alt-Az steerable, water vapour detector using the Rabbit 2000 family of 8-bit microcontroller modules in conjunction with the MicroC/OS-II multitasking real-time kernel.
By means of the ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property at a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets at a lower, user-defined frequency to keep the network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues, we present the performance of the sampling system evaluated on two different platforms: a VME-based system using the VxWorks RTOS (currently adopted by ALMA) and a PC/104+ embedded platform using the Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low-cost PC-compatible hardware environment with a free and open operating system.
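The buffering strategy described (sample at a high sustained rate, ship packets at a lower user-defined rate) can be sketched as follows; this is illustrative only and not the ACS implementation.

    # Illustrative sketch of the buffering strategy: acquire at a high rate but
    # flush packets at a lower, user-defined rate to limit network load.
    # Not the ACS implementation; the names are invented.
    import random
    import time

    SAMPLE_PERIOD = 0.001        # 1 kHz acquisition, limited only by the hardware
    FLUSH_PERIOD  = 0.1          # ship a packet every 100 ms

    def read_property():
        return random.random()   # stands in for the sampled Property value

    def send_packet(samples):
        print(f"sending packet with {len(samples)} samples")

    buffer = []
    last_flush = time.monotonic()
    for _ in range(500):         # about half a second of simulated sampling
        buffer.append((time.monotonic(), read_property()))
        if time.monotonic() - last_flush >= FLUSH_PERIOD:
            send_packet(buffer)
            buffer.clear()
            last_flush = time.monotonic()
        time.sleep(SAMPLE_PERIOD)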
The QNX-based real-time database is one of the main features of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) control system; it serves as a storage platform for the data flow, recording and updating in a timely manner the various statuses of the moving components in the telescope structure as well as the environmental parameters around it. The database fits harmoniously into the administration of the Telescope Control System (TCS). The paper presents the methodology and technical tips used in designing the EMPRESS database GUI software package, such as the dynamic creation of control widgets, dynamic queries and shared memory. A seamless connection between EMPRESS and QNX's graphical development tool, the Photon Application Builder (PhAB), has been realized, providing a Windows look and feel under a Unix-like operating system. In particular, the real-time features of the database are analyzed to show that they satisfy the needs of the control system.
Proc. SPIE 5496, Design of PCI-based data acquisition, antenna control, and real-time web-based database for a solar radio observational system, 0000 (15 September 2004); doi: 10.1117/12.551056
Solar activity is one of the main sources of space disturbances, which are primarily responsible for disastrous space weather. Solar activity follows an 11-year cycle and has many manifestations, such as changes in the sunspot number and in the solar radio flux at 10.7 cm wavelength. The 1.0-2.0 GHz, 2.6-3.8 GHz, and 5.2-7.6 GHz solar radio spectrometers and the 2840 MHz solar radio telescope of the National Astronomical Observatory at the Huairou Solar Observing Station have acquired considerable radio flux data since 1999. In order to carry out further research on solar activity and to develop space weather forecasting, the real-time observed data should be well utilized. We therefore designed the data acquisition, antenna control and real-time web-based database system for the 2840 MHz solar radio telescope. The paper introduces the overall design of a PCI-based data acquisition, antenna control and real-time web-based database system for solar radio observation at Huairou in China. The popular PCI controller, the PCI9052, is used to implement the interface between the PCI bus and peripheral devices. A PLD chip is applied for data transfer and antenna control. The Windows device driver is developed based on DriverWorks and the Windows DDK. The real-time database is based on MySQL and Apache.
Proc. SPIE 5496, Control software for OSIRIS: an infrared integral-field spectrograph for the Keck adaptive optics system, 0000 (15 September 2004); doi: 10.1117/12.551824
OSIRIS is an infrared integral-field spectrograph built for the Keck AO system. Integral-field spectrographs produce very complicated raw data products, and OSIRIS is no exception. OSIRIS produces frames that contain up to 4096 interleaved spectra. In addition to the IFU capabilities of OSIRIS, the instrument is equipped with a parallel-field imager to monitor current AO conditions by imaging an off-axis star and evaluating its PSF. The design of the OSIRIS software was driven by the complexity of the instrument, switching the focus from simply controlling the instrument components to targeting the acquisition of usable scientific data.
OSIRIS software integrates the planning, execution, and reduction of observations. An innovation in the OSIRIS control software is the formulation of observations into 'datasets' rather than individual frames. Datasets are functional groups of frames organized by the needs and capabilities of the data reduction software (DRS). A typical OSIRIS dataset consists of dithered spectral observations, coupled with the associated imaging data from the parallel-field AO PSF imager. A Java-based planning tool enables 'sequences' of datasets to be planned and saved both prior to and during observing sessions. An execution client interprets these XML-based files, configures the hardware servers for both OSIRIS and AO, and executes the observations. The DRS, working on one dataset of raw data at a time, produces science-quality data that is ready for analysis. This methodology should lead to superior observational efficiency, decreased likelihood of observer error, minimized reduction time, and therefore, faster scientific discovery.
OSIRIS (Optical System for Imaging and low/intermediate-Resolution Integrated Spectroscopy) and EMIR (InfraRed MultiObject Spectrograph) are instruments designed to obtain images and low resolution spectra of astronomical objects in the optical and infrared domains. They will be installed on Day One and Day Two, respectively, in the Nasmyth focus of the 10-meter Spanish GTC Telescope. This paper describes the architecture of the Data Acquisition System (DAS), emphasizing the functional and quality attributes. The DAS is a component oriented, concurrent, distributed and real time system which coordinates several activities: acquisition of images coming from the detectors controller, tagging, and data communication with the required telescope system resources. This architecture will minimize efforts in the development of future DAS. Common aspects, such as the data process flow, concurrency, asynchronous/synchronous communication, memory management, and exception handling, among others, are managed by the proposed architecture. This system also allows a straightforward inclusion of variable parts, such as dedicated hardware and different acquisition modes. The DAS has been developed using an object oriented approach and uses the Adaptive Communication Environment (ACE) to be operating system independent.
The Lowell Observatory Instrumentation System (LOIS) is an instrument control software system with a common interface that can control a variety of instruments. Its user interface includes GUI-based, scripted, and remote program control interfaces, and supports operational paradigms ranging from traditional direct observer interaction to fully automated operation. Currently LOIS controls a total of ten instruments built at Lowell Observatory (including one for SOFIA), NASA Ames Research Center, MIT (for Magellan), and Boston University. Together, these instruments include optical and near-IR imaging, spectroscopic, and polarimetric capability. This paper reviews the actual design of LOIS in comparison to its original design requirements and implementation approaches, and evaluates its strengths and weaknesses relative to operational performance, user interaction and feedback, and extensibility to new instruments.
The Goodman spectrograph is an all-refracting articulated-camera high-throughput imaging spectrograph for the SOuthern Astrophysical Research telescope (SOAR). It is designed to take advantage of Volume Phase Holographic (VPH) gratings. Due to the high level of mechanical complexity, a fully graphical control system with parallel motor control was developed. We have developed a software solution in LabVIEW that functions as a control system, component management tool, and engineering platform. A modular software design allows other instrument projects to easily adopt our approach. Distinguishing features of the control system include automated configuration changes, remote capability, and PDA control for component swaps.
The AAO's new AAO2 detector controllers can handle both infra-red detectors and optical CCDs. IR detectors in particular place considerable demands on a data handling system, which has to get the data from the controllers into the data processing chain as efficiently as possible, usually with significant constraints imposed by the need to read out the detector in as smooth a manner as possible. The AAO2 controller makes use of a VME chassis that contains both a real-time VxWorks system and a UNIX system. These share access to common VME memory, the VxWorks system reading from the controller into the shared memory and the UNIX system reading it from the shared memory and processing it. Modifications to the DRAMA data acquisition environment's bulk-data sub-system hide this use of VME shared memory in the normal DRAMA bulk-data API. This means that the code involved can be tested under UNIX, using standard UNIX shared memory mechanisms, and then deployed on the VxWorks/UNIX VME system without any code changes being needed. When deployed, the data transfer from the controller via VxWorks into the UNIX-based data processing chain is handled by consecutive DMA transfers into and out of VME memory, easily achieving the required throughput. We discuss aspects of this system, including a number of the less obvious problems that were encountered.
In this paper we present the software development process and history of the LUCIFER (LBT NIR spectroscopic Utility with Camera and Integral-Field Unit for Extragalactic Research) multi-mode near-infrared instrument, which is one of the first-light instruments of the LBT on Mt. Graham, Arizona. The software is realised as a distributed system in Java using its remote method invocation service (RMI). We describe the current status of the software and give an overview of the planned computer hardware architecture.
The LBT double prime focus camera (LBC) is composed of twin CCD mosaic imagers. The instrument is designed to match the double channel structure of the LBT telescope and to exploit the parallel observing mode by optimizing one camera for the blue and the other for the red side of the visible spectrum. Because of this, LBC activity will likely consist of simultaneous multi-wavelength observations of specific targets, with both channels working at the same time to acquire and download images at different rates. The LBC Control Software is responsible for coordinating these activities by managing the scientific sensors and all the ancillary devices such as rotators, filter wheels, focusing of the optical correctors, housekeeping information, tracking and Active Optics wavefront sensors. This is achieved using four dedicated PCs to control the four CCD controllers and one dual-processor PC to manage all the other aspects, including the instrument operator interface. The general architecture of the LBC Control Software is described as well as solutions and details about its implementation.
Proc. SPIE 5496, Data acquirement and process system based on ethernet for multichannel solar telescope, 0000 (15 September 2004); doi: 10.1117/12.553156
For astronomical observations, there are many kinds of CCD cameras for different scientific purposes; sometimes this is the case even within a single telescope. Traditionally, each CCD camera has an individual image grabber, data processing unit, and corresponding control computer. Consequently, this brings some inconvenience and problems, not only for system management but also for upgrading the system. This paper presents a solution to this problem for the Multi-Channel Solar Telescope (MCST). All CCD cameras are connected to an Ethernet network through an Ethernet interface. A server sends commands to all cameras and transfers data via TCP/IP. Each CCD camera has an embedded system to control the camera, receive commands from the server and signals from the camera, and process and store the data. This paper describes the design of an Ethernet-controlled camera. The camera is a PULNIX TM1010, controlled by an Altera embedded system based on a Cyclone EP1C20F400C7 FPGA with an embedded Nios processor.
The architecture of the software which controls the LAMOST fiber positioning sub-system is described. The software is composed of two parts: a main control program running on a computer and a unit controller program stored in the ROM of an MCS51 single-chip microcontroller. The functions of the software include client/server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which the different parts of the software communicate. Software techniques for multi-threading, socket programming, Microsoft Windows message handling, and serial communication are also discussed.
Proc. SPIE 5496, Slitmasks from observer to telescope: astrometric slitmask manufacturing and control for Keck spectrographs, 0000 (15 September 2004); doi: 10.1117/12.552300
This paper documents the astrometric slitmask design, submission,
fabrication, control and configuration tools used for two large
spectrographs at W. M. Keck Observatory on Mauna Kea, Hawai'i.
For supplemental illustrations and documents, including an online
version of the poster and interactive demos, we refer the reader to
http://spg.ucolick.org/Docs/SPIE/2004 .
The Advanced Technology Solar Telescope (ATST) is intended to be the premier solar observatory for experimental solar physics. The ATST telescope control software is designed to operate similarly to that of current nighttime telescopes, but will contain added functionality required for solar observations. These additions include the use of solar coordinate systems, non-sidereal track rates, solar rotation models, alternate guide signal sources, the control of thermal loads on the telescope, unusual observation and calibration motions, and serendipitous acquisition of transient objects.
These requirements have resulted in a design for the ATST telescope control system (TCS) that is flexible and well-adapted for solar physics experiments. This report discusses both the classical design of the ATST TCS and the modifications required to observe in a solar physics environment. The control and servo loops required to operate both the pointing and wavefront correction systems are explained.
The LBT-AdOpt subsystem is a complex machine which includes several software-controlled parts. It is essentially divided into two parts: a real-time loop which implements the actual adaptive optics control loop, from the wavefront sensor to the deformable secondary mirror, and a supervisor which performs a number of coordination and diagnostics tasks. The coordination and diagnostics tasks are essential for the proper operation of the system, both as an aid for the preparation of observations and because only continuous monitoring of dynamic system parameters can guarantee optimal performance and system safety during operation. In the paper we describe the overall software architecture of the LBT-AdOpt supervisor and discuss the functionality required for proper operation.