A future large-area radio array optimized to perform imaging of thermal emission down to milliarcsecond scales is currently under consideration in North America. This `Next Generation Very Large Array' (ngVLA) will have ten times the effective collecting area of the JVLA and baselines ten times longer (300 km). The large number of antennas and their large geographical distribution pose significant challenges to ngVLA operations and maintenance. We draw on experience from operating the JVLA, VLBA, and ALMA to highlight notable operational issues and outline a preliminary operations concept for the ngVLA.

The software for the Atacama Large Millimeter/submillimeter Array (ALMA) that has been developed in a collaboration of ESO, NRAO, NAOJ and the Joint ALMA Observatory for well over a decade is an integrated end-to-end software system of about six million lines of source code. As we enter the third cycle of science observations, we reflect on some of the decisions taken and call out ten topics where we could have taken a different approach at the time, or would take a different approach in today’s environment. We believe that these lessons learned should be helpful as the next generation of large telescope projects move into their construction phases.

At the end of 2012, ALMA software development will be completed. While new releases are still being prepared
following an incremental development process, the ALMA software has been in daily use since 2008. Last year it was
successfully used for the first science observations proposed by and released to the ALMA scientific community. This
included the whole project life cycle from proposal preparation to data delivery, taking advantage of the software being
designed as an end-to-end system. This presentation will report on software management aspects that became relevant in
the last couple of years. These include a new feature driven development cycle, an improved software verification
process, and a more realistic test environment at the observatory. It will also present a forward look at the planned
transition to full operations, given that upgrades, optimizations and maintenance will continue for a long time.

Starting in 2009, the ALMA project initiated one of its most exciting phases within construction: the first antenna
from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and
the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself in the front
line of the project's software deployment and integration effort. Among the group's main responsibilities are the
deployment, configuration and support of the observation systems, in addition to infrastructure administration,
all of which needs to be done in close coordination with the development groups in Europe, North America
and Japan. Software support has been the primary point of interaction with the current users (mainly scientists,
operators and hardware engineers), as the software is normally the most visible part of the system.
During this first year of work with the production hardware, three consecutive software releases have been
deployed and commissioned. Also, the first three antennas have been moved to the Array Operations Site, at
5,000 meters elevation, and the complete end-to-end system has been successfully tested. This paper shares the
experience of this 15-person group as part of the construction team at the ALMA site, working together
with the Computing IPT, covering the achievements and problems overcome during this period. It explores the excellent
results of teamwork, and also some of the troubles that such a complex and geographically distributed project
can run into. Finally, it addresses the challenges still to come with the transition to ALMA operations.

The Atacama Large Millimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North
America, and Japan. ALMA will consist of at least 50 twelve-meter antennas operating in the millimeter and submillimeter
wavelength range. It will be located at an altitude above 5000 m in the Chilean Atacama desert. The ALMA
Test Facility (ATF), located in New Mexico, USA, is a proving ground for the development and testing of hardware,
software, and commissioning and operational procedures.
At the ATF emphasis has shifted from hardware testing to software and operational functionality. The support of the
varied goals of the ATF requires stable control software and at the same time flexibility for integrating newly developed
features. For this purpose regression testing has been introduced in the form of a semi-automated procedure. This
supplements the established offline testing and focuses on operational functionality as well as verifying that previously
fixed faults did not re-emerge.
The regression tests are carried out on a weekly basis as a compromise between the developers' response time and the
available technical time. The frequent feedback allows the validation of submitted fixes and the prompt detection of side effects
and reappearing issues. Results from nine months are presented that show the evolution of test outcomes, supporting
the conclusion that the regression testing helped to improve the speed of convergence towards stable releases at the ATF.
The tests also provided an opportunity to validate newly developed or re-factored software at an early stage at the test
facility, supporting its eventual integration. We expect this regression-test procedure to be adapted to commissioning
operations in Chile.

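
The weekly procedure described above can be sketched in miniature: run a fixed battery of operational tests, compare against the previous run, and flag regressions (tests that passed before but fail now). The test names, harness functions, and result format below are illustrative assumptions, not the actual ATF tooling.

```python
# Minimal sketch of a semi-automated regression harness: each test is a
# callable; a test "passes" if it raises no exception. All names are
# invented for illustration.

def run_tests(tests):
    """Run each test callable; record True on success, False on failure."""
    results = {}
    for name, test in tests.items():
        try:
            test()
            results[name] = True
        except Exception:
            results[name] = False
    return results

def find_regressions(previous, current):
    """Tests that passed in the previous run but fail in the current one."""
    return sorted(
        name for name, ok in current.items()
        if not ok and previous.get(name, False)
    )

if __name__ == "__main__":
    tests = {
        "antenna_pointing": lambda: None,       # passes
        "correlator_config": lambda: 1 / 0,     # fails this week
    }
    previous = {"antenna_pointing": True, "correlator_config": True}
    current = run_tests(tests)
    print(find_regressions(previous, current))  # ['correlator_config']
```

Persisting each week's results and diffing against them is what turns an ordinary test run into the regression check described in the abstract: the interesting signal is not the absolute pass rate but the set of previously fixed faults that have re-emerged.
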
The control subsystem for the Atacama Large Millimeter Array (ALMA) must fulfill a number of roles. Principal
among these are the ability to conduct observations and the ability to monitor and maintain the health of the hardware.
These two roles impose different requirements on the control subsystem. The ALMA control subsystem uses a design
which explicitly recognizes these different roles and provides capabilities that are targeted at the astronomers, engineers
and other users of the ALMA control subsystem. In this paper we will describe this aspect of the design of the ALMA
control subsystem with emphasis on how the various components of the software interact to meet the requirements of
these different users and produce a coherent control subsystem that can transition from a high-level, astronomical
perspective of the array to a detailed low-level perspective focused on a particular piece of hardware.

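
The two-role design described above can be illustrated with a minimal sketch: the same hardware proxies are reachable both through a high-level array interface used for observing and a low-level device interface used for health monitoring. The class names, monitor points, and methods below are invented for illustration and are not the ALMA control subsystem's actual API.

```python
# Illustrative sketch (not ALMA code) of one control layer serving two
# user perspectives: astronomers operate the array as a whole, while
# engineers drill down to individual monitor points on one device.

class Device:
    """Low-level proxy for one piece of hardware (engineer's view)."""
    def __init__(self, name):
        self.name = name
        self.monitor_points = {"temperature_C": 25.0, "power_ok": True}

    def read(self, point):
        return self.monitor_points[point]

class Array:
    """High-level astronomical view: operate all devices as one instrument."""
    def __init__(self, devices):
        self.devices = {d.name: d for d in devices}

    def healthy(self):
        return all(d.read("power_ok") for d in self.devices.values())

    def observe(self, target):
        if not self.healthy():
            raise RuntimeError("array not healthy")
        return f"observing {target} with {len(self.devices)} antennas"

array = Array([Device("DV01"), Device("DV02")])
print(array.observe("Orion"))                        # astronomer's view
print(array.devices["DV01"].read("temperature_C"))   # engineer's view
```

The key design point is that both perspectives share the same underlying device proxies, so an engineer inspecting a single antenna and an astronomer scheduling an observation are looking at one consistent state.
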
The Atacama Large Millimeter Array (ALMA) will, when it is completed
in 2012, be the world's largest millimeter and sub-millimeter radio
telescope. It will consist of 64 antennas, each one 12 meters in
diameter, connected as an interferometer.
The ALMA Test Interferometer Control System (TICS) was developed as a
prototype for the ALMA control system. Its initial task was to provide
sufficient functionality for the evaluation of the prototype
antennas. The main antenna evaluation tasks include surface
measurements via holography and pointing accuracy, measured at both
optical and millimeter wavelengths.
In this paper we will present the design of TICS, which is a
distributed computing environment. In the test facility there are four
computers: three real-time computers running VxWorks (one on each
antenna and a central one) and a master computer running Linux. These
computers communicate via Ethernet, and each of the real-time
computers is connected to the hardware devices via an extension of the CAN bus.
We will also discuss our experience with this system and outline
changes we are making in light of our experiences.
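
The master/real-time split described above can be illustrated with a toy message exchange: a "master" process issues a text command over a TCP connection to a per-antenna controller, which replies with a status line. The port handling, command syntax, and reply format are assumptions for illustration; TICS's actual protocol is not reproduced here.

```python
# Toy sketch (not TICS itself) of a master computer commanding one
# antenna's real-time computer over an Ethernet-style TCP link.
import socket
import threading

def antenna_controller(server_sock):
    """Stand-in for the real-time computer on one antenna."""
    conn, _ = server_sock.accept()
    with conn:
        command = conn.recv(1024).decode().strip()
        if command == "POINT AZ=180 EL=45":
            conn.sendall(b"OK pointing\n")
        else:
            conn.sendall(b"ERROR unknown command\n")

# "Antenna" side: listen on an ephemeral local port in a thread.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=antenna_controller, args=(server,), daemon=True).start()

# "Master" side: open a connection and issue one pointing command.
with socket.create_connection(("127.0.0.1", port)) as master:
    master.sendall(b"POINT AZ=180 EL=45\n")
    reply = master.recv(1024).decode().strip()

server.close()
print(reply)  # OK pointing
```

A request/reply exchange like this is the simplest shape such a distributed control system can take; the real system adds device addressing, monitoring streams, and timing that a sketch this size cannot capture.
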