The Atacama Large Millimeter/submillimeter Array (ALMA) has been in its operations phase since 2013. The transition to operations changed the priorities within the observatory: most of the available time is now dedicated to science observations, at the expense of the technical time required for testing newer versions of the ALMA software. A process to design and implement a new simulation environment, one comparable to, or at least representative of, the production environment, was therefore started in 2017. The concepts of model-in-the-loop and hardware-in-the-loop were explored. In this paper we review and present the experiences gained and lessons learned during the design and implementation of the new simulation environment.
Proc. SPIE. 9913, Software and Cyberinfrastructure for Astronomy IV
KEYWORDS: Observatories, Digital signal processing, Computing systems, Data processing, Signal processing, Antennas, Operating systems, Optical correlators, Visibility, Current controlled current source
The ALMA correlator back-end consists of a cluster of 16 computing nodes and a master collector/packager node. The mission of the cluster is to process time-domain lags into auto-correlations and complex visibilities, integrate them for a configurable amount of time, and package them into a workable data product. Computers in the cluster are organized such that individual workloads per node are kept within achievable levels for the different observing modes and antennas in the array. Over the course of an observation the master node transmits enough state information to each involved computing node to specify exactly how to process each set of lags received from the correlator. For that distributed mechanism to work, it is necessary to unequivocally identify each individual lag set arriving at each computing node. The original approach was based on a custom hardware interface to each node in the cluster plus a real-time version of the Linux operating system. A modification recently introduced in the ALMA correlator consists of tagging each lag set with a time stamp before delivering it to the cluster. The time stamp identifies the precise 16-millisecond window during which that specific data set was streamed to the computing cluster. From the time stamp value a node is able to identify a centroid (in absolute time units), baselines, and the correlator mode during that hardware integration; that is, enough information to let the digital signal processing pipeline in each node process time-domain lags into frequency-domain auto-correlations per antenna and visibilities per baseline. The scheme also means that a good degree of concurrency can be achieved in each node by having individual CPU cores process individual lag sets at the same time, thus rendering enough processing power to cope with a maximum 1 GiB/sec output from the correlator.
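The mapping from a time stamp to an absolute-time centroid can be sketched as follows. This is a minimal illustration, assuming the stamp is an integer count of 16 ms hardware-integration windows since some reference epoch; the actual encoding is internal to the correlator hardware and may differ.

```python
# Hypothetical sketch: recovering the absolute-time centroid of a lag set
# from its time stamp. Assumption (for illustration only): the stamp counts
# 16 ms hardware-integration windows elapsed since a reference epoch.

WINDOW_MS = 16  # duration of one hardware integration window, in ms

def centroid_ms(time_stamp: int, epoch_ms: int = 0) -> float:
    """Absolute-time centroid (ms) of the 16 ms window a lag set belongs to."""
    window_start = epoch_ms + time_stamp * WINDOW_MS
    return window_start + WINDOW_MS / 2.0

# A lag set stamped 3 windows after epoch 0 has its centroid at
# 3 * 16 + 8 = 56 ms.
```

Because each stamp resolves to a disjoint window, any CPU core can process any lag set independently, which is what enables the per-core concurrency described above.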
The present paper describes how we time-stamp lag sets within the correlator hardware, the implications for their on-line processing in software, and the benefits that this extension has brought in terms of software maintainability and overall system simplification.
The ALMA correlator processes the digitized signals from 64 individual antennas to produce a grand total of 2016 correlated baselines, with runtime-selectable lag resolution and integration time. The on-line software system can process a maximum of 125M visibilities per second, producing an archiving data rate close to one sixteenth of that (7.8M visibilities per second, with a network transfer limit of 60 MB/sec). Mechanisms in the correlator hardware design make it possible to split the total number of antennas in the array into smaller subsets, or sub-arrays, such that they can share correlator resources while executing independent observations. The software part of the sub-system is responsible for configuring and scheduling correlator resources in such a way that observations among independent sub-arrays occur simultaneously while internally sharing correlator resources under a cooperative arrangement. Configuration of correlator modes through the CAN-bus interface and periodic geometric delay updates are the most relevant activities to schedule concurrently while observations happen at the same time among a number of sub-arrays. For that to work correctly, the software interface to sub-arrays schedules shared correlator resources sequentially before observations actually start on each sub-array. Start times for specific observations are optimized and reported back to the higher-level observing software. After that initial sequential phase has taken place, simultaneous executions and recording of correlated data across different sub-arrays move forward concurrently, sharing the local network to broadcast results to other software sub-systems.
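The figure of 2016 baselines quoted above follows directly from pairing each of the 64 antennas with every other, i.e. n(n-1)/2 unique antenna pairs:

```python
# Number of unique cross-correlation baselines for an array of n antennas:
# every unordered pair of distinct antennas forms one baseline.

def baseline_count(n_antennas: int) -> int:
    """Unique antenna pairs: n * (n - 1) / 2."""
    return n_antennas * (n_antennas - 1) // 2

# baseline_count(64) -> 2016, matching the full 64-antenna array.
```

The same formula gives the baseline count of any sub-array, which is why splitting the array reduces the correlator load roughly quadratically rather than linearly.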
This paper presents an overview of the different hardware and software actors within the correlator sub-system that implement the degree of concurrency and synchronization needed for seamless and simultaneous operation of multiple sub-arrays, the limitations stemming from the resource-sharing nature of the correlator, the limitations intrinsic to the digital technology available in the correlator hardware, and the milestones so far reached by this new ALMA feature.
The Atacama Large Millimeter Array (ALMA) is an international telescope project currently under construction in the Atacama desert of Chile. It has provision for 64 antennas of 12 m each, arranged over a geographical area of a few square kilometers. Antenna control and correlated-data acquisition are implemented by means of a distributed set of real-time Linux computers, each one hosting ALMA Common Software (ACS) based applications and connected to a common time base distributed by the ALMA Master Clock as a 48 ms electronic pulse signal (time event). All these computers must be time-synchronized to achieve coordination between commands and data acquisition. For this purpose, the ArrayTime system presented here implements a real-time software facility that makes it possible to unambiguously time-stamp each time event arriving at each computer node (distributed clock), relative to an external 1 Hz time source in phase with the TAI second. Array time is the absolute time of each time event, and synchronization of the distributed clocks is resolved by communicating the array time, via ACS services, for the next time-event interrupt at least once during the operational cycle of the distributed clock. Thereafter, it is possible to schedule application tasks within a latency range of 100 µs by extrapolating from the last interrupt, based on the current CPU Time Stamp Counter (TSC) and the estimated frequency of the CPU clock. In the following, we present a description of the elements that constitute the ArrayTime facility.
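The extrapolation step described above can be sketched as follows. This is a simplified illustration, assuming the node recorded both the TSC value and the array time at the last 48 ms time-event interrupt; the names and the calling convention are illustrative, not the actual ArrayTime API.

```python
# Hypothetical sketch of TSC-based time extrapolation between time events.
# Assumption: at the last interrupt the node latched (array time, TSC value);
# the current array time is then estimated from the elapsed TSC ticks.

def extrapolate_array_time(last_event_time_s: float,
                           last_event_tsc: int,
                           current_tsc: int,
                           cpu_hz: float) -> float:
    """Estimate the current array time (s) from elapsed TSC ticks."""
    elapsed_s = (current_tsc - last_event_tsc) / cpu_hz
    return last_event_time_s + elapsed_s

# With a 2 GHz clock, 1_000_000 ticks after an event at t = 100.0 s
# yields an estimate of 100.0005 s.
```

The achievable scheduling accuracy hinges on how well `cpu_hz` tracks the true oscillator frequency; a small estimation error accumulates linearly over the 48 ms between interrupts, which is consistent with the ~100 µs latency range quoted above.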
Proc. SPIE. 5496, Advanced Software, Control, and Communication Systems for Astronomy
KEYWORDS: Human-machine interfaces, Computing systems, Adaptive optics, Control systems, Data archive systems, Data processing, Antennas, Optical correlators, Binary data, Current controlled current source
We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack-mounted PC controls and monitors the correlator, and a cluster of 17 PCs processes the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses, and the data processing computer cluster interfaces to the correlator via sixteen dedicated high-speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware, ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.
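The per-period deadline implied by these figures can be checked with back-of-the-envelope arithmetic. A sketch under the assumption that the 1 gigabyte per second aggregate (taken here as 2^30 bytes, matching the GiB/sec figure used elsewhere) is spread evenly over the sixteen data ports:

```python
# Back-of-the-envelope check of the hard deadlines quoted above.
# Assumption for illustration: aggregate output is uniform across ports.

AGGREGATE_BPS = 1024 ** 3  # 1 GiB/sec aggregate correlator output
PERIOD_S = 0.016           # 16 millisecond hardware period
N_PORTS = 16               # dedicated high-speed data ports

bytes_per_period = AGGREGATE_BPS * PERIOD_S          # data per 16 ms burst
bytes_per_port_period = bytes_per_period / N_PORTS   # per port, per burst

# Roughly 16 MiB arrive each period, about 1 MiB per port, all of which
# must be consumed before the next burst to meet the hard deadline.
```

In other words, each receiving computer has a 16 ms budget to ingest and hand off on the order of a megabyte before the next burst arrives, which is what makes the deadline "hard".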
The Very Large Telescope (VLT) Observatory on Cerro Paranal (2635 m) in Northern Chile is approaching completion. After the four 8-m Unit Telescopes (UTs) individually saw first light in recent years, two of them were combined for the first time on October 30, 2001, to form a stellar interferometer, the VLT Interferometer. The remaining two UTs will be integrated into the interferometric array later this year. In this article, we describe the subsystems of the VLTI and the planning for the coming years.
Two siderostats with 40 cm input pupils have been developed for the early commissioning phase of the VLT Interferometer at Paranal. The performance, design, and development of the system are briefly introduced, and first results obtained in Europe are discussed.
As major observatories plan automatic and optimized scheduling of large astronomical facilities, reliable and accurate monitoring of observing conditions is a prerequisite. For this purpose, the concept of an Astronomical Site Monitor has been developed for the VLT as an integrated sub-system of the observatory.