The AO188 single-conjugate facility adaptive optics (AO) system at Subaru Telescope delivers diffraction-limited images in the near-infrared in both natural and laser guide star modes. We have recently started a major upgrade of AO188 to meet the high performance requirements of its downstream instruments, including the Subaru Coronagraphic Extreme Adaptive Optics instrument (SCExAO). The first phase of this upgrade started in 2017 with the integration of a new real-time computer (RTC) and real-time system (RTS) based on CACAO (https://github.com/CACAO-org/CACAO), an open-source real-time software package for adaptive optics developed collaboratively and used extensively by the SCExAO instrument. This major upgrade will enable loop optimization and predictive control and will include diagnostic tools, thereby improving the performance and stability of AO188 and its downstream instruments. This paper introduces the architecture of the new RTS and describes the steps we followed to adapt CACAO to our AO interfaces and aging hardware, in preparation for our first on-sky engineering tests, successfully completed on July 23, 2018.
Subaru Telescope, an 8-meter-class optical telescope located in Hawaii, has been using a high-availability commodity cluster as the platform for our Observation Control System (OCS). Until recently, we followed a tried-and-tested practice of running the system under a native (Linux) OS installation with dedicated attached RAID systems, and of following a strict cluster deployment model to facilitate failover handling of hardware problems.1,2 Following the apparent benefits of virtualizing (i.e., running in Virtual Machines (VMs)) many of the non-observation-critical systems at the base facility, we recently began to explore the idea of migrating other parts of the observatory's computing infrastructure to virtualized systems, including the summit OCS, the data analysis systems, and even the front ends of various Instrument Control Systems. In this paper we describe our experience with the initial migration of the Observation Control System to virtual machines running on the cluster, using a new-generation tool, Ansible, to automate installation and deployment. This change has significant impacts on ease of cluster maintenance, upgrades, snapshots/backups, risk management, availability, performance, cost savings, and energy use. We discuss some of the trade-offs involved in this virtualization and some of the impacts on the above-mentioned areas, as well as the specific techniques we are using to accomplish the changeover, simplify installation, and reduce management complexity.
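The automated installation and deployment described above can be sketched as a minimal Ansible playbook. This is an illustrative sketch only: the host group, package names, service name, and file paths are assumptions for the example, not the observatory's actual configuration.

```yaml
# Minimal sketch of an Ansible playbook for provisioning an OCS VM.
# All names (ocs_vms, ocs.conf, the "ocs" service) are hypothetical.
- name: Configure an Observation Control System VM
  hosts: ocs_vms
  become: true
  tasks:
    - name: Install runtime dependencies
      ansible.builtin.package:
        name:
          - python3
          - chrony
        state: present

    - name: Deploy the OCS configuration file from a template
      ansible.builtin.template:
        src: ocs.conf.j2
        dest: /etc/ocs/ocs.conf
        mode: "0644"

    - name: Ensure the OCS service is enabled and running
      ansible.builtin.systemd:
        name: ocs
        state: started
        enabled: true
```

Because the playbook is declarative and idempotent, the same run can be applied repeatedly to rebuild or verify a VM, which is what makes snapshots, redeployment after failover, and routine upgrades far simpler than with hand-installed native hosts.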
Subaru Telescope has recently replaced most of the equipment of the Subaru Telescope Network II (STN-II) with new equipment, which includes a 124 TB RAID system for the data archive. Switching the data storage from tape to RAID enables users to access the data faster. STN-III dropped some important components of STN-II, such as the supercomputers, the development and testing subsystem for the Subaru Observation Control System, and the data processing subsystem. On the other hand, we allocated more computers to the remote operation system. Thanks to IT innovations, our LAN as well as the network between Hilo and the summit were upgraded to gigabit networks at a similar or even lower cost than the previous system. As a result of redesigning the computer system with a sharper focus on observatory operations, we greatly reduced the total cost of computer rental, purchase, and maintenance.