The Atacama Large Millimeter/submillimeter Array (ALMA) has been in its operations phase since 2013. This transition changed the priorities within the observatory: most of the available time is now dedicated to science observations, at the expense of the technical time required for testing newer versions of the ALMA software. Therefore, a process to design and implement a new simulation environment, which must be comparable to, or at least representative of, the production environment, was started in 2017. Concepts of model-in-the-loop and hardware-in-the-loop were explored. In this paper we review and present the experiences gained and lessons learned during the design and implementation of the new simulation environment.
The ALMA telescope will be composed of 66 high-precision antennas, each antenna producing 8 signals of 2 GHz bandwidth (4 pairs of orthogonal linear polarization signals). Detecting the root cause of a loss-of-coherence issue between pairs of antennas can take valuable time which could otherwise be used for scientific purposes. This work presents an approach for quickly determining, in a systematic fashion, the source of this kind of issue. The faulty sub-system can be detected using the telescope calibration software together with granularity information. In a complex instrument such as the ALMA telescope, finding the cause of a loss-of-coherence issue can be a cumbersome task due to the several sub-systems involved in the signal processing (frequency down-converter, analog and digital filters, instrumental delay); the interdependencies between sub-systems can make this task even harder. A method based on the information provided by the TelCal sub-system (specifically the delay measurements) is used to help identify either the faulty unit or the wrong configuration causing the loss of coherence, exploiting granularity information to narrow down the cause of the problem.
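The key property behind this kind of localization is that a delay error is antenna-based: if one antenna's hardware or configuration is wrong, every baseline containing that antenna shows an anomalous residual delay. The sketch below illustrates the idea with entirely hypothetical antenna names, residual values and tolerance; it is not actual TelCal output or logic, only a minimal illustration of intersecting the out-of-tolerance baselines.

```python
# Hypothetical per-baseline residual delays (ns); names and values are
# illustrative only, not real TelCal delay-measurement data.
residual_delay = {
    ("DA41", "DV01"): 0.02,
    ("DA41", "DV02"): 0.01,
    ("DA42", "DV01"): 5.12,  # every baseline to DA42 shows a large residual
    ("DA42", "DV02"): 5.09,
    ("DA41", "DA42"): 5.10,
    ("DV01", "DV02"): 0.01,
}

TOLERANCE_NS = 0.5  # assumed acceptance threshold, for illustration


def suspect_antennas(delays, tol):
    """Antennas common to every out-of-tolerance baseline.

    Because a residual delay error is antenna-based, the faulty unit
    appears in all affected baselines, so it can be recovered as the
    intersection of the bad baselines.
    """
    bad = [set(baseline) for baseline, d in delays.items() if abs(d) > tol]
    if not bad:
        return set()
    return set.intersection(*bad)


print(suspect_antennas(residual_delay, TOLERANCE_NS))  # -> {'DA42'}
```

In practice the same intersection idea extends hierarchically: once an antenna is singled out, finer-grained delay components (per sub-system, per polarization) can point to the specific unit or configuration error.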
The ALMA telescope is composed of 66 high-precision antennas, each antenna carrying 8 high-bandwidth digitizers (4 Gsamples/second). Verifying that these digitizers are functioning correctly before starting a round of observations is a critical task. Since observation time is a valuable resource, a tool is needed that can provide a quick and reliable answer regarding digitizer status. Currently the digitizer output statistics are measured using comparators and counters. This method introduces uncertainties due to the short integration time and, because every possible state must be cycled through for every available digitizer, ties up the antennas for a considerable amount of time. To avoid these problems, a new method based on correlator resources is presented here.
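The statistics in question are state-occupancy statistics: for Gaussian noise at the digitizer input, each quantizer output state should occur with a fraction predictable from the threshold levels, and a deviation flags a faulty digitizer. The sketch below illustrates this check for a generic 3-bit (8-state) quantizer with hypothetical, evenly spaced thresholds; the actual ALMA digitizer thresholds and test criteria differ.

```python
import math
import random
from collections import Counter

# Hypothetical 3-bit quantizer thresholds in units of the RMS input level;
# the real digitizer's levels differ -- this only sketches the idea.
THRESHOLDS = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]


def quantize(x):
    """Map an analog sample to one of 8 output states (0..7)."""
    return sum(1 for t in THRESHOLDS if x > t)


def expected_probs():
    """State occupancy expected for zero-mean, unit-RMS Gaussian noise."""
    cdf = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    edges = [-math.inf] + THRESHOLDS + [math.inf]
    return [cdf(edges[i + 1]) - cdf(edges[i]) for i in range(8)]


def state_statistics(samples):
    """Measured state-occupancy fractions from a block of samples."""
    counts = Counter(quantize(x) for x in samples)
    n = len(samples)
    return [counts.get(s, 0) / n for s in range(8)]


random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
measured = state_statistics(samples)
expected = expected_probs()

# A healthy digitizer should show occupancies close to the Gaussian prediction.
max_dev = max(abs(m - e) for m, e in zip(measured, expected))
print(max_dev)
```

Using correlator resources amounts to accumulating these state counts over far more samples than the comparator-and-counter approach allows, shrinking the statistical uncertainty of each measured fraction.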
After the inauguration of the Atacama Large Millimeter/submillimeter Array (ALMA), the Software Operations Group in Chile has refocused its objectives on: (1) providing software support for tasks related to system integration, scientific commissioning and verification, as well as Early Science observations; (2) testing the remaining software features, still under development by the Integrated Computing Team across the world; and (3) designing and developing processes to optimize and increase the level of automation of operational tasks. Due to their different stakeholders, these tasks vary widely in importance, lifespan and complexity. Aiming to provide the proper priority and traceability for every task without overloading our engineers, we introduced the Kanban methodology into our processes in order to balance the demand on the team against the throughput of the delivered work.
The aim of this paper is to share the experiences gained during the implementation of Kanban in our processes, describing the difficulties we found and the solutions and adaptations that led to our current, still evolving implementation, which has greatly improved our throughput, prioritization and problem traceability.
The ALMA Test Interferometer emerged as an infrastructure solution to increase both ALMA time availability for science activities and time availability for software testing and engineering activities, at a reduced cost (<30000K USD) and a short setup time of less than one hour. The Test Interferometer can include up to 16 antennas when used with only AOS resources, and a maximum of 4 antennas when configured using correlator resources at the OSF. A joint effort between ADC and ADE-IG took on the challenge of creating the Test Interferometer from a design already defined for operations, which imposed many complex restrictions on its implementation. Through intensive design and evaluation work it was determined that an initial implementation using the ACA Correlator is possible, and the feasibility of implementing the Test Interferometer by connecting the Test Array at the AOS with correlator equipment installed at the OSF, approximately 30 km away, is now also being tested. Lastly, efforts will be made to achieve interferometry between AOS and OSF antennas with a baseline of approximately 24 km.
Starting 2009, the ALMA project initiated one of its most exciting phases within construction: the first antenna
from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and
the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself in the front
line of the project's software deployment and integration effort. Among the group's main responsibilities are the
deployment, configuration and support of the observation systems, in addition to infrastructure administration,
all of which needs to be done in close coordination with the development groups in Europe, North America
and Japan. Software support has been the primary point of interaction with the current users (mainly scientists,
operators and hardware engineers), as the software is normally the most visible part of the system.
During this first year of work with the production hardware, three consecutive software releases have been
deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at
5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the
experience of this 15-person group as part of the construction team at the ALMA site, working together
with the Computing IPT, covering the achievements and problems overcome during this period. It explores the excellent
results of teamwork, and also some of the troubles that such a complex and geographically distributed project
can run into. Finally, it addresses the challenges still to come with the transition to the ALMA operations phase.
The Atacama Large Millimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North
America, and Japan. ALMA will consist of at least 50 twelve meter antennas operating in the millimeter and submillimeter
wavelength range. It will be located at an altitude above 5000m in the Chilean Atacama desert. The ALMA
Test Facility (ATF), located in New Mexico, USA, is a proving ground for the development and testing of hardware
and software, and of commissioning and operational procedures.
At the ATF emphasis has shifted from hardware testing to software and operational functionality. The support of the
varied goals of the ATF requires stable control software and at the same time flexibility for integrating newly developed
features. For this purpose regression testing has been introduced in the form of a semi-automated procedure. This
supplements the established offline testing and focuses on operational functionality as well as verifying that previously
fixed faults did not re-emerge.
The regression tests are carried out on a weekly basis as a compromise between the developers' response time and the
available technical time. The frequent feedback allows the validation of submitted fixes and the prompt detection of side-effects
and reappearing issues. Results from nine months are presented that show the evolution of test outcomes, supporting
the conclusion that the regression testing helped to improve the speed of convergence towards stable releases at the ATF.
They also provided an opportunity to validate newly developed or re-factored software at an early stage at the test
facility, supporting its eventual integration. Hopefully this regression test procedure will be adapted to commissioning
operations in Chile.
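One concrete part of such a procedure is flagging reappearing issues: a test that passed in an earlier weekly run but fails in the latest one indicates a previously fixed fault that has re-emerged. The sketch below shows one simple way this bookkeeping could be done; the test names, run labels and result format are illustrative assumptions, not the ATF's actual harness.

```python
def reappearing_issues(history):
    """Tests that passed in some earlier run but fail in the latest one.

    `history` maps run labels, in chronological order, to dictionaries
    of {test_name: passed} results for that run.
    """
    runs = list(history.values())
    latest = runs[-1]
    flagged = set()
    for test, passed in latest.items():
        # Failing now, but known to have passed before -> a regression.
        if not passed and any(run.get(test) for run in runs[:-1]):
            flagged.add(test)
    return flagged


# Illustrative weekly results (hypothetical test names and outcomes).
history = {
    "week-01": {"pointing": True, "tracking": False, "delay-calc": True},
    "week-02": {"pointing": True, "tracking": True,  "delay-calc": True},
    "week-03": {"pointing": True, "tracking": False, "delay-calc": False},
}

print(sorted(reappearing_issues(history)))  # -> ['delay-calc', 'tracking']
```

Run weekly, a report built from this kind of comparison gives developers the prompt feedback the abstract describes, distinguishing new failures from regressions of previously fixed faults.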
APEX, the Atacama Pathfinder Experiment, has been successfully commissioned and is now in operation. This novel submillimeter telescope is located at 5107 m altitude on Llano de Chajnantor in the Chilean High Andes, on what is considered one of the world's outstanding sites for submillimeter astronomy. The primary reflector, 12 m in diameter, has been carefully adjusted by means of holography. Its surface accuracy of 17-18 μm makes APEX suitable for observations up to 200 μm, through all atmospheric submm windows accessible from the ground.