In radio astronomy interferometers with a large number of stations (in the ALMA case, 66 antennas with 8 digitizers deployed in each antenna), tuning the digitizer parameters (thresholds and bias) is a process that must be repeated several times; finding an algorithm that speeds up this process is therefore a critical task. Keeping the digitizers properly adjusted is important for reaching the maximum efficiency of the correlator, especially in a regime of coarse quantization (88% for 2 bits, 96% for 3 bits), and is also critical for avoiding signal artifacts that can degrade the collected data (DC bias or harmonics). This work presents a set of approaches for automatically tuning the digitizers: a PID (proportional/integral/derivative) controller, treating the coupled MIMO system as a set of uncoupled SISO systems; Fuzzy Logic, which takes extensive advantage of expert operator knowledge; and finally a hybrid scheme combining PID and Fuzzy Logic for a rapid and accurate tuning process. We evaluate the performance of each tuning method using metrics such as required tuning time, stability, and robustness under different extreme boundary conditions. In addition, we suggest means for collecting the needed information in a typical interferometer architecture, and we provide an automated approach for finding the best sampler clock timing profile. The aim of this work is to provide a guideline for implementing an algorithm that tunes a large set of digitizers under different conditions in a fast, precise, and automated process. The resulting report should prove useful for integration into interferometer projects comprising a large number of individual stations (ALMA, SKA, VLA, CHIME, MeerKAT).
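As an illustration of the PID approach described above, the sketch below closes a discrete proportional/integral/derivative loop around a single digitizer threshold, treating it as one uncoupled SISO channel. The Gaussian occupancy model, the gains, and the function names are illustrative assumptions, not the production algorithm:

```python
import math

def occupancy(threshold, sigma=1.0):
    """Fraction of a zero-mean Gaussian signal above `threshold`
    (a stand-in for the counter statistics read back from a digitizer)."""
    return 0.5 * (1.0 - math.erf(threshold / (sigma * math.sqrt(2.0))))

def tune_threshold(target, kp=1.5, ki=0.2, kd=0.05, steps=200):
    """Discrete PID loop driving one threshold toward a target occupancy.

    Since occupancy decreases as the threshold rises, a positive error
    (too many counts above the level) pushes the threshold up.
    """
    threshold, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = occupancy(threshold) - target
        integral += err
        deriv = err - prev_err
        threshold += kp * err + ki * integral + kd * deriv
        prev_err = err
    return threshold
```

For example, a target occupancy of 0.1587 (the upper tail beyond one sigma) should drive the threshold to about 1.0 sigma; in the multi-digitizer case the same loop would run per threshold, under the uncoupling assumption stated in the abstract.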
The Atacama Large Millimeter/submillimeter Array (ALMA) has been in its operations phase since 2013. The transition to operations changed the priorities within the observatory: most of the available time is now dedicated to science observations at the expense of the technical time required for testing newer versions of the ALMA software. Therefore, a process to design and implement a new simulation environment, which must be comparable to, or at least representative of, the production environment, was started in 2017. Concepts of model-in-the-loop and hardware-in-the-loop were explored. In this paper we review and present the experiences gained and lessons learned during the design and implementation of the new simulation environment.
Proc. SPIE 9913, Software and Cyberinfrastructure for Astronomy IV
KEYWORDS: Observatories, Digital signal processing, Computing systems, Data processing, Signal processing, Antennas, Operating systems, Optical correlators, Visibility, Current controlled current source
The ALMA correlator back-end consists of a cluster of 16 computing nodes and a master collector/packager node. The mission of the cluster is to process time-domain lags into auto-correlations and complex visibilities, integrate them for a configurable amount of time, and package them into a workable data product. Computers in the cluster are organized so that individual workloads per node are kept within achievable levels for the different observing modes and numbers of antennas in the array. Over the course of an observation the master node transmits enough state information to each involved computing node to specify exactly how to process each set of lags received from the correlator. For that distributed mechanism to work, it is necessary to unequivocally identify each individual lag set arriving at each computing node. The original approach was based on a custom hardware interface to each node in the cluster plus a real-time version of the Linux operating system. A modification recently introduced in the ALMA correlator consists of tagging each lag set with a time stamp before delivering it to the cluster. The time stamp identifies the precise 16-millisecond window during which that specific data set was streamed to the computing cluster. From the time stamp value a node is able to identify a centroid (in absolute time units), the baselines, and the correlator mode during that hardware integration; that is, enough information to let the digital signal processing pipeline in each node process time-domain lags into frequency-domain auto-correlations per antenna and visibilities per baseline. The scheme also means that a good degree of concurrency can be achieved in each node by having individual CPU cores process individual lag sets at the same time, rendering enough processing power to cope with a maximum 1 GiB/sec output from the correlator.
The present paper describes how we time stamp lag sets within the correlator hardware, the implications for their on-line processing in software, and the benefits that this extension has brought in terms of software maintainability and overall system simplification.
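The window arithmetic implied by the time-stamping scheme can be sketched as follows. The function names, and the assumption that time stamps are expressed in absolute milliseconds, are illustrative:

```python
WINDOW_MS = 16  # each lag set covers one 16-millisecond streaming window

def window_index(timestamp_ms):
    """Index of the 16 ms window a time stamp falls in."""
    return timestamp_ms // WINDOW_MS

def window_centroid_ms(timestamp_ms):
    """Centroid of that window, in the same absolute time units,
    which is the value a node would attach to the processed lag set."""
    return window_index(timestamp_ms) * WINDOW_MS + WINDOW_MS / 2
```

A time stamp anywhere inside a window maps to the same index and centroid, which is what lets independent CPU cores process lag sets out of order yet label them consistently.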
The ALMA correlator processes the digitized signals from 64 individual antennas to produce a grand total of 2016 correlated baselines, with runtime-selectable lag resolution and integration time. The on-line software system can process a maximum of 125M visibilities per second, producing an archiving data rate close to one sixteenth of that (7.8M visibilities per second, with a network transfer limit of 60 MB/sec). Mechanisms in the correlator hardware design make it possible to split the total number of antennas in the array into smaller subsets, or sub-arrays, so that they can share correlator resources while executing independent observations. The software part of the sub-system is responsible for configuring and scheduling correlator resources in such a way that observations among independent sub-arrays occur simultaneously while internally sharing correlator resources under a cooperative arrangement. Configuration of correlator modes through the CAN-bus interface and periodic geometric delay updates are the most relevant activities to schedule concurrently while observations proceed on a number of sub-arrays. For that to work correctly, the software interface to sub-arrays schedules shared correlator resources sequentially before observations actually start on each sub-array. Start times for specific observations are optimized and reported back to the higher-level observing software. After that initial sequential phase has taken place, executions and recording of correlated data across different sub-arrays move forward concurrently, sharing the local network to broadcast results to other software sub-systems.
This paper presents an overview of the different hardware and software actors within the correlator sub-system that implement the concurrency and synchronization needed for seamless, simultaneous operation of multiple sub-arrays; the limitations stemming from the resource-sharing nature of the correlator and from the digital technology available in the correlator hardware; and the milestones so far reached by this new ALMA feature.
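The resource-sharing arithmetic behind sub-arrays follows directly from the pairwise nature of correlation: splitting the array reduces the total number of baselines to compute, since sub-arrays are not cross-correlated. A minimal sketch (the partition shown is hypothetical):

```python
def baselines(n_antennas):
    """Number of correlated antenna pairs: n choose 2."""
    return n_antennas * (n_antennas - 1) // 2

def subarray_baselines(sizes):
    """Baselines produced when the array is split into independent
    sub-arrays (no cross-correlation between sub-arrays)."""
    return [baselines(n) for n in sizes]
```

With the full 64-antenna array this gives the 2016 baselines quoted above, while a hypothetical split into sub-arrays of 32, 16, and 16 antennas yields only 496 + 120 + 120 = 736 baselines sharing the same correlator hardware.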
In a large radio astronomy interferometer such as the ALMA1 telescope, the number of stations (50 in the case of ALMA's main array, which can extend to 64 antennas, i.e. up to 2016 baselines) produces an enormous amount of data in a short period of time: visibilities can be produced every 16 msec and total-power information every 1 msec. As a consequence, it is becoming more difficult to detect problems in the signal produced by each antenna in a timely manner (one antenna produces 4 spectral windows of 2 GHz in 2 polarizations, i.e. a 16 GHz bandwidth signal, which is later digitized using 3-bit samplers).
This work presents an approach based on machine learning algorithms for detecting problems in the already digitized signal produced by the active antennas (the set of antennas being used in an observation). The aim of this work is to detect unsuitable, or totally corrupted, signals. In addition, this development provides a near-real-time warning that helps operators stop and investigate the problem, avoiding the collection of useless data.
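The abstract does not detail the learning algorithm used; as a hedged sketch of the kind of near-real-time check involved, the snippet below flags antennas whose band power is a robust outlier relative to the rest of the active array. The antenna names and the median/MAD rule are illustrative stand-ins for a trained model:

```python
from statistics import median

def flag_antennas(powers, k=5.0):
    """Flag antennas whose band power deviates strongly from the
    array median, using a median/MAD outlier test as a simple proxy
    for the production machine-learning classifier.

    `powers` maps antenna name -> measured band power.
    """
    med = median(powers.values())
    # Median absolute deviation; guard against a degenerate zero spread.
    mad = median(abs(p - med) for p in powers.values()) or 1e-12
    return [ant for ant, p in powers.items() if abs(p - med) > k * mad]
```

Such a check is cheap enough to run every integration, which is what enables the near-real-time warning mentioned above.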
The latest discoveries in astronomy have been associated with the development of extremely sophisticated instruments. In radio astronomy, instrumentation has evolved toward higher data processing rates and continuous performance improvements, in both the analog and digital domains. Developing, maintaining, and using such instruments, especially in radio astronomy, requires understanding complex processes that involve plenty of subtle details. This has inspired the engineering and astronomical communities to design low-cost instruments that can be easily replicated by non-specialists who possess a basic technical background. The final goal of this work is to provide the means to build an affordable tool for teaching radiometry. As a step in this direction, we introduce the design of a basic two-element interferometer, intended as a handy tool for learning the basic principles behind the interferometry technique and radiometry. One of the pedagogical experiences using this tool is the measurement of the Sun's angular diameter. Using two Ku-band receivers, we aim to capture the solar radiation in the 11-12 GHz frequency range; as the Earth spins, and with the receivers properly phase-locked, the cross-correlated power oscillates, and from these fringes we can obtain an approximation of the Sun's angular diameter. The variables of interest in this calculation are the declination of the Sun (which depends on the observation date and location) and the ratio between the maximum and minimum power within a fringe cycle.
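The calculation outlined above can be sketched as follows, assuming a simple one-dimensional uniform-source model in which the fringe visibility V = (Pmax - Pmin)/(Pmax + Pmin) follows |sinc(B*theta/lambda)|; a uniform disk would use 2*J1(x)/x instead. The baseline and wavelength values in the example are illustrative, not the instrument's actual parameters:

```python
import math

def fringe_visibility(theta, baseline, wavelength):
    """Visibility of a uniform 1-D source of angular size `theta` (rad):
    V = |sinc(B*theta/lambda)|, with sinc(x) = sin(pi*x)/(pi*x)."""
    x = baseline * theta / wavelength
    if x == 0.0:
        return 1.0  # point source: full fringe contrast
    return abs(math.sin(math.pi * x) / (math.pi * x))

def angular_size(v_measured, baseline, wavelength, iters=60):
    """Invert the first lobe of the sinc by bisection.

    V is monotonically decreasing from 1 (point source) to 0 at the
    first null theta = lambda/B, so bisection on that interval works.
    """
    lo, hi = 0.0, wavelength / baseline
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fringe_visibility(mid, baseline, wavelength) > v_measured:
            lo = mid  # source can be larger and still match V
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, at 12 GHz (wavelength about 2.5 cm) on a hypothetical 2 m baseline, a solar angular diameter near 0.53 degrees (about 0.0093 rad) predicts a fringe visibility of roughly 0.3, and inverting that measured ratio recovers the diameter.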
The ALMA telescope will be composed of 66 high-precision antennas, each antenna producing 8 signals of 2 GHz bandwidth (4 pairs of orthogonal linear polarization signals). Detecting the root cause of a loss of coherence between pairs of antennas can take valuable time that could otherwise be used for science. This work presents an approach for quickly determining, in a systematic fashion, the source of this kind of issue. In a complex instrument such as the ALMA telescope, finding the cause of a loss of coherence can be a cumbersome task due to the several sub-systems involved in the signal processing (frequency down-converter, analog and digital filters, instrumental delay); the interdependencies between sub-systems can make this task even harder. A method based on the information provided by the TelCal1 sub-system (specifically the delay measurements) is used to identify either the faulty unit or the wrong configuration causing the loss of coherence; the method exploits the granularity of this information to narrow down the cause of the problem.
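The actual method relies on TelCal delay measurements; as a simplified illustration of how per-baseline granularity narrows a fault down to a single unit, the snippet below intersects the affected baselines to recover one faulty antenna (antenna names and the single-fault assumption are illustrative):

```python
def localize(antennas, bad_baselines):
    """If a single antenna is at fault, every baseline that contains it
    shows the coherence loss; intersecting the affected antenna pairs
    therefore recovers the common element.

    `bad_baselines` is a list of (antenna, antenna) pairs flagged bad.
    """
    suspects = set(antennas)
    for pair in bad_baselines:
        suspects &= set(pair)  # keep only antennas present in every bad pair
    return suspects
```

The same intersection idea extends to finer granularity (per polarization or per baseband) to separate, say, a digitizer fault from a down-converter fault.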
The ALMA telescope is composed of 66 high-precision antennas, each antenna having 8 high-bandwidth digitizers (4 Gsamples/second). It is a critical task to verify that those digitizers work correctly before starting a round of observations. Since observation time is a valuable resource, a tool is needed that can provide a quick and reliable answer regarding digitizer status. Currently, the digitizer output statistics are measured using comparators and counters. This method introduces uncertainties because of the short integration involved, and stepping through all the possible states of every available digitizer takes the antennas a considerable amount of time. To avoid these problems, a new method based on correlator resources is presented here.
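The statistics being checked can be sketched as follows: for a zero-mean Gaussian input, the expected occupancy of each quantizer level follows from the Gaussian CDF evaluated between consecutive thresholds, and measured counter (or correlator-derived) ratios can be compared against these fractions. The threshold values in the example are illustrative, not ALMA's actual settings:

```python
import math

def expected_fractions(thresholds, sigma=1.0):
    """Expected occupancy of each quantizer level for a zero-mean
    Gaussian input: the CDF mass between consecutive thresholds.

    `thresholds` are the sorted comparator levels; n thresholds
    define n + 1 output states (e.g. 7 thresholds for a 3-bit sampler).
    """
    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

    edges = [-math.inf] + list(thresholds) + [math.inf]
    return [cdf(b) - cdf(a) for a, b in zip(edges, edges[1:])]
```

A healthy digitizer with symmetric thresholds should show symmetric measured fractions; a DC bias or a stuck comparator shows up as a systematic departure from this expected profile.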
The ALMA Test Interferometer appeared as an infrastructure solution to increase both ALMA time availability for science activities and time availability for software testing and engineering activities, at a reduced cost (<30000K USD) and a setup time of less than 1 hour. The Test Interferometer can include up to 16 antennas when used with AOS resources only, and a maximum of 4 antennas when configured using correlator resources at the OSF. A joint effort between ADC and ADE-IG took on the challenge of creating the Test Interferometer from a design already defined for operations, which imposed many complex restrictions on the implementation. Through intensive design and evaluation work it was determined that an initial implementation using the ACA Correlator is possible, and the feasibility of implementing the Test Interferometer by connecting the Test Array at the AOS with correlator equipment installed at the OSF, separated by approximately 30 km, is now also being tested. Lastly, efforts will be made to achieve interferometry between AOS and OSF antennas with a baseline of approximately 24 km.
Two large correlators have been constructed to combine the signals captured by the ALMA antennas deployed in the Atacama Desert in Chile at an elevation of 5050 meters. The Baseline correlator was fabricated by an NRAO/European team to process up to 64 antennas with 16 GHz bandwidth in two polarizations, and another correlator, the Atacama Compact Array (ACA) correlator, was fabricated by a Japanese team to process up to 16 antennas. Both correlators meet
the same specifications except for the number of processed antennas. The main architectural differences between these
two large machines will be underlined. Selected features of the Baseline and ACA correlators as well as the main
technical challenges met by the designers will be briefly discussed. The Baseline correlator is the largest correlator ever
built for radio astronomy. Its digital hybrid architecture provides a wide variety of observing modes including the ability
to divide each input baseband into 32 frequency-mobile sub-bands for high spectral resolution and to be operated as a
conventional 'lag' correlator for high time resolution. The various observing modes offered by the ALMA correlators to
the science community for 'Early Science' are presented, as well as future observing modes. Coherently phasing the
array to provide VLBI maps of extremely compact sources is another feature of the ALMA correlators. Finally, the status
and availability of these large machines will be presented.