The Gemini Observatory operates two 8-meter IR/optical telescopes: one in Hawaii and the other in Chile. High-speed network connections among all of the mountain telescopes, their sea-level bases, and a support facility in Tucson are essential to their operation, providing video and audio communications, administrative computing systems, remote telescope operation, scientific data management, and many other applications. All the sites have recently been connected via the Abilene network, through collaborations with more than a dozen astronomy facilities near the various Gemini sites, with Florida International University's AMPATH program, with various providers, and with grant support from the National Science Foundation. While the required bandwidth levels will change over time, Gemini's current objective is a minimum 10 Mbps presence on Internet2 to and from its principal sites. The Gemini North, Gemini South, and Tucson sites are at this level or better. Gemini North has been upgraded to a burst capability of up to 155 Mbps to the US mainland Internet2. Gemini South has burst capability to 10 Mbps, with 6 Mbps guaranteed to the US mainland Internet2.
To support effective operations and research based on data taken by the Subaru Telescope, we installed a satellite storage and analysis system at the Mitaka headquarters of NAOJ in March 2002. Data taken by instruments on the Subaru Telescope, located at the summit of Mauna Kea, are transferred to the STN-II system at the Hilo base, and to the satellite system at Mitaka through a dedicated OC3 network link between Hilo and Mitaka. In Japan, an academic research backbone, SuperSINET, spans various universities and institutes with bandwidths of up to 10 Gbps, making it easy for astronomers in Japan to access Subaru data through this high-speed backbone network. The databases at each site, Hilo and Mitaka, are maintained independently; however, all records and update histories are exchanged frequently enough to allow recovery in case of any discrepancy between the databases. Since the round-trip time of a light signal between Hawaii and Japan cannot be reduced below 45 msec, special tuning is needed not only for data transfer between the two nodes, but also for the remote control sequence.
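As a back-of-the-envelope illustration of why this tuning is needed, the bandwidth-delay product of the Hilo-Mitaka link can be sketched as follows (the OC3 line rate and the 45 ms round-trip time are taken from the text; the 64 KiB default TCP window is an assumption for illustration):

```python
# Bandwidth-delay product for the Hilo-Mitaka link. The OC3 rate and
# 45 ms RTT are from the text; the 64 KiB untuned TCP window is an
# assumption for illustration.
OC3_BPS = 155.52e6   # OC3 line rate, bits/s
RTT_S = 0.045        # round-trip time between Hawaii and Japan

bdp_bytes = OC3_BPS * RTT_S / 8            # bytes in flight to fill the pipe
print(f"window needed to fill the pipe: {bdp_bytes / 1024:.0f} KiB")   # ~854 KiB

default_window = 64 * 1024                 # bytes, a common untuned TCP window
throughput_bps = default_window * 8 / RTT_S
print(f"throughput with a 64 KiB window: {throughput_bps / 1e6:.1f} Mbps")  # ~11.7 Mbps
```

An untuned connection thus uses well under a tenth of the OC3 capacity, which is why window scaling or parallel streams must be configured on such a long-haul path.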
The e-STAR (e-Science Telescopes for Astronomical Research) project uses GRID techniques to develop the software infrastructure for a global network of robotic telescopes. The basic architecture is built around Intelligent Agents, which request data from Discovery Nodes that may be telescopes or databases. Communication is based on a development of the XML-based RTML language, secured using the Globus I/O library, with status serving provided via LDAP. We describe the system architecture and protocols devised to give a distributed approach to telescope scheduling, and give details of the implementation of prototype Intelligent Agent and Discovery Node systems.
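A minimal sketch of the kind of XML request an Intelligent Agent might send to a Discovery Node is given below; the element and attribute names are purely illustrative and do not reproduce the actual RTML schema:

```python
import xml.etree.ElementTree as ET

# Build a toy observation request of the kind an Intelligent Agent might
# send to a Discovery Node. Element and attribute names here are
# illustrative only, not the real RTML schema.
req = ET.Element("ObservationRequest", attrib={"agent": "ia-01"})
target = ET.SubElement(req, "Target", attrib={"name": "GRB030329"})
ET.SubElement(target, "RA").text = "10:44:50"
ET.SubElement(target, "Dec").text = "+21:31:18"
ET.SubElement(req, "Exposure", attrib={"seconds": "300", "filter": "R"})

message = ET.tostring(req, encoding="unicode")
print(message)
```

In the real system such a message would be carried over a Globus I/O channel rather than sent in the clear, with the node's status advertised via LDAP.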
The Green Bank facility of the National Radio Astronomy Observatory is spread out over 2,700 acres in the Allegheny Mountains of West Virginia. Good communication has always been needed between the radio telescopes and the control buildings. The National Radio Quiet Zone helps protect the Green Bank site from radio transmissions that interfere with the astronomical signals. Due to stringent Radio Frequency Interference (RFI) requirements, a fiber optic communication system was used for Ethernet transmissions on the site and coaxial cable within the buildings. With the need for higher speed communications, the entire network has been upgraded to use optical fiber with modern Ethernet switches. As with most modern equipment, the implementation of the control of the newly deployed Green Bank Telescope (GBT) depends heavily on TCP/IP. In order to protect the GBT from the commodity Internet, the GBT uses a non-routable network. Communication between the control building Local Area Network (LAN) and the GBT is implemented using a Virtual LAN (VLAN). This configuration will be extended to achieve isolation between trusted local user systems, the GBT, and other Internet users. Legitimate access to the site, for example by remote observers, is likely to be implemented using a virtual private network (VPN).
At present, the signals received by the 10 antennas of the Very Long Baseline Array (VLBA) are recorded on instrumentation tapes. These tapes are then shipped from the antenna locations - distributed across the mainland USA, the US Virgin Islands, and Hawaii - to the processing center in Socorro, New Mexico. The Array operates today at a mean sustained data rate of 128 Mbps per antenna, but peak rates of 256 Mbps and 512 Mbps are also used. Transported tapes provide the cheapest method of attaining these bit rates. The present tape system derives from wideband recording techniques dating back to the late 1960s, and has been in use since the commissioning of the VLBA in 1993. It is in need of replacement on a time scale of a few years. Further, plans are being developed which would increase the required data rates to 1 Gbps in 5 years and 100 Gbps in 10 years. With the advent of higher performance networks, it should be possible to transmit the data directly to the processing center. However, achieving this connectivity is complicated by the remoteness of the antennas -
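The quoted recording rates imply the following aggregate network requirements, sketched as simple arithmetic:

```python
# Aggregate data rates for the VLBA, using the figures quoted in the text.
N_ANT = 10
rates_mbps = {"mean sustained": 128, "peak": 256, "max peak": 512}

for label, r in rates_mbps.items():
    aggregate = N_ANT * r                      # all antennas recording at once, Mbps
    tb_per_day = r * 1e6 * 86400 / 8 / 1e12    # one antenna, terabytes per day
    print(f"{label}: {aggregate} Mbps aggregate, "
          f"{tb_per_day:.2f} TB/day per antenna")
```

Even the mean sustained mode implies 1.28 Gbps into the correlator if all ten stations stream simultaneously, which makes the planned 1 Gbps and 100 Gbps per-antenna rates a substantial networking challenge.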
Since 1995, Nippon Telegraph and Telephone Corporation (NTT) has been conducting experiments on real-time VLBI (very long baseline interferometry) using a large-scale network testbed with a maximum speed of 2.4 Gb/s. With real-time data transmission over a high-speed communications network, the bottleneck resulting from the limited data rates of the conventional magnetic-tape-based VLBI system can be removed. Two applications of VLBI, geodesy and radio astronomy, are being pursued in our trials, and extensive research into real-time VLBI technology is being conducted. So far, experiments using the developed real-time VLBI system have achieved great improvements in observation performance. We are now concentrating on developing an economical VLBI data transfer system using advanced IP (Internet Protocol) technologies to achieve greater connectivity to other research organizations.
The Expanded Very Large Array (EVLA) uses fiber optic technologies for the Digital Data Transmission, the Local Oscillator and Reference distribution, and all Monitor/Control functions. These signals are sent on separate fibers to each of the twenty-seven EVLA antennas. The Data Transmission System (DTS) is used to transmit the four digitized IF signals from the antennas to the Central Electronics Building. A sustained data rate of 10.24 Gbits/s per channel, or 122.88 Gbits/s formatted per antenna, is supported. Each IF signal uses a parallel interface of three synchronized single-bit high-speed serial optical fiber transmission channels. Each set of three channels, twelve channels in total, is wavelength division multiplexed onto a single fiber. The formatted data are received and de-formatted before being sent to the correlator. The system configuration includes a CW laser, an Erbium-Doped Fiber Amplifier, passive optical multiplexers, up to 22 km of standard single-mode fiber, and an APD optical receiver. This paper presents a complete description of the EVLA fiber system, including specific component specifications. The calculated performance of the IF system is compared to the actual performance, and the resulting "lessons learned" are presented.
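The per-antenna figure follows directly from the channel counts quoted above:

```python
# Per-antenna formatted data rate of the EVLA DTS, from the quoted figures.
IFS_PER_ANTENNA = 4        # digitized IF signals per antenna
CHANNELS_PER_IF = 3        # synchronized serial optical channels per IF
RATE_PER_CHANNEL = 10.24   # Gbits/s, sustained, per channel

channels = IFS_PER_ANTENNA * CHANNELS_PER_IF   # 12 channels, WDM onto one fiber
per_antenna = channels * RATE_PER_CHANNEL      # formatted Gbits/s per antenna
print(f"{channels} channels x {RATE_PER_CHANNEL} Gb/s "
      f"= {per_antenna:.2f} Gb/s per antenna")
```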
The goal of the Expanded Very Large Array (EVLA) project is to upgrade a world-class astronomical instrument in the meter-to-millimeter wavelength bands. The project combines modern technologies with the sound design of the existing Very Large Array (VLA) to increase by an order of magnitude the sensitivity, resolution, and frequency coverage of the existing instrument. This paper discusses the techniques used to maintain phase coherence of the EVLA system. The enhancements to the VLA system include improved feeds and receivers, new Local Oscillator (LO) and Intermediate Frequency (IF) systems, a fiber optic LO distribution system, high-speed digitizers, 10 Gbps digital links, a dense wavelength division multiplexed fiber transmission system, and a new high-speed correlator. The LO system requires that a phase stability of 2.8 picoseconds per hour at 40 GHz be maintained across the entire array. To accomplish this, a near-real-time continuous measurement will be made of the phase delay in the fiber optic cable distributing the LO reference signals to each antenna. This information will be used by the correlator to set the delay on each of the baselines in the array.
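As a simple unit-conversion sketch, the 2.8 ps requirement corresponds to the following phase angle at the 40 GHz observing frequency:

```python
# Convert the quoted LO delay-stability requirement into a phase angle
# at the observing frequency. Pure unit conversion on the text's figures.
DELAY_S = 2.8e-12   # allowed LO delay drift per hour
FREQ_HZ = 40e9      # highest EVLA observing frequency

cycles = DELAY_S * FREQ_HZ      # fraction of an RF cycle drifted per hour
degrees = cycles * 360.0
print(f"{DELAY_S * 1e12:.1f} ps at {FREQ_HZ / 1e9:.0f} GHz "
      f"= {cycles:.3f} cycles = {degrees:.1f} degrees of phase per hour")
```

This is why a continuous round-trip measurement of the fiber delay is needed: uncorrected drifts of even a few picoseconds are a significant fraction of an RF cycle at 40 GHz.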
The SOAR telescope will begin science operations in 3Q 2003. From the outset, astronomers at all U.S. research universities will be able to use it remotely, avoiding 24+ hours of travel and allowing half-nights to be scheduled to enhance scientific return. Most SOAR telescope systems, detector array controllers, and instruments will operate under LabVIEW control. LabVIEW enables efficient intercommunication between modules executing on dispersed computers and is operating-system independent. We have developed LabVIEW modules for remote observing that minimize bandwidth demands on the shared LAN atop Cerro Pachon. These include control of a Polycom videoconferencing unit, export of instrument control GUIs and telescope telemetry to tactical displays, and a browser that first compresses an image in Chile by a factor of 256:1 from FITS to JPEG2000 and then sends it to the remote astronomer. Wherever the user settles the cursor, a region-of-interest window of losslessly compressed data is downloaded for full fidelity. As an example of a dedicated facility, we show the layout and hardware costs of the Remote Observing Center at UNC, where instruments on SOAR, SALT, and other telescopes available to UNC-CH astronomers will be operated.
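A transfer-time sketch illustrates why the 256:1 preview matters; the frame size and link share below are assumptions for illustration, not SOAR specifications:

```python
# Why the 256:1 preview matters: transfer-time sketch. The 2048x2048
# 16-bit frame and the 10 Mbps share of the summit LAN are assumptions
# for illustration, not SOAR specifications.
raw_bytes = 2048 * 2048 * 2            # one 16-bit CCD frame, ~8 MB
preview_bytes = raw_bytes / 256        # after FITS -> JPEG2000 compression
link_bps = 10e6                        # assumed share of the shared LAN

for label, nbytes in [("raw FITS", raw_bytes), ("256:1 preview", preview_bytes)]:
    seconds = nbytes * 8 / link_bps
    print(f"{label}: {nbytes / 1e6:.2f} MB, {seconds:.2f} s at 10 Mbps")
```

The compressed preview arrives in a fraction of a second, while the small lossless region-of-interest windows keep full fidelity available on demand without ever shipping the whole raw frame.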
Remote observing is now the dominant mode of operation for both Keck telescopes and their associated instruments. Over 90% of all Keck observations are carried out remotely from the Keck Headquarters in Waimea, Hawaii. The majority of Keck observers, however, are affiliated with research institutions located on the U.S. mainland, primarily in California. To observe with the Keck telescopes, most of these astronomers currently travel several thousand kilometers in order to sit in a Keck remote control room located tens of kilometers from the telescopes. Given recent improvements in network infrastructure and facilities, many of these observations can now be conducted directly from California.
This report describes the operation of a Keck telescope remote observing facility located at the UCO/Lick Observatory headquarters on the Santa Cruz campus of the University of California (UCSC). This facility currently enables remote operation and engineering of Keck optical instruments via Internet-2. The facility was initially located in temporary quarters and became operational on a trial basis in September 2001. In June 2002, the facility moved to permanent quarters and became fully operational in July 2002.
We examine in detail issues of Internet network bandwidth and reliability, and describe the design, routing implementation, and operation of an automated fall-back network utilizing dialed ISDN telephone circuits and routers. This report also briefly describes the status of efforts to establish Keck remote observing facilities at other California sites, and how the fall-back network design could be expanded to support multiple sites.
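A minimal sketch of the fall-back decision logic, assuming a periodic health check on each path; the route names and cost metric are hypothetical, not the facility's actual routing configuration:

```python
# Minimal sketch of automated fall-back: prefer the Internet path and
# switch to the dialed ISDN circuit only when the primary fails its
# health check. Route names and costs are hypothetical.
ROUTES = [
    {"name": "internet2", "cost": 1},   # preferred primary path
    {"name": "isdn",      "cost": 10},  # dialed backup circuit
]

def select_route(link_up):
    """Pick the cheapest route whose link currently passes its health check."""
    candidates = [r for r in ROUTES if link_up.get(r["name"], False)]
    return min(candidates, key=lambda r: r["cost"])["name"] if candidates else None

print(select_route({"internet2": True, "isdn": True}))    # primary healthy
print(select_route({"internet2": False, "isdn": True}))   # fall back to ISDN
```

In a real deployment this preference would be expressed through router metrics (e.g., floating static routes or dial-on-demand interfaces) rather than application code.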
The NASA Infrared Telescope Facility (IRTF) on Mauna Kea now offers observers the opportunity to carry out their observations remotely. They can choose to work from the mid-level station at Hale Pohaku, from a dedicated remote observing room at the Institute for Astronomy in Hilo, or from their home institution. As a test of our remote capabilities, observations have been successfully obtained by observers from an office at the Observatoire de Paris in Meudon, France. Their observing program utilized SpeX, the IRTF's low- to medium-resolution near-IR spectrograph and imager, to measure the 0.8-2.5 micron reflectance spectra of fast-moving, near-Earth asteroids. All target acquisition, guiding, and instrument control were commanded from Meudon. We describe this observing campaign, and provide details about the techniques we have developed for remote observing.
The National Radio Astronomy Observatory (NRAO) has four major locations distributed across the continental USA. The observatory has long used audio conferencing for its internal meetings and working groups, but we began using video conferencing in 2000 both to enhance the quality of human communication and to allow sharing of visual aids and graphical presentations during inter-site meetings. The video conferencing equipment operates over our existing frame-relay network connections, so the only operations cost has been its coexistence with other internal network traffic. In order to provide the necessary Quality of Service (QoS), the video conferencing equipment was placed on individual Local Area Network (LAN) segments on the site routers. A video hub (multi-conferencing unit) has allowed routine four-way conferencing between the main NRAO sites. Conferences with domestic and international colleagues over the commodity Internet and via Integrated Services Digital Network (ISDN) connections are also routinely supported. Using the existing equipment, we have also been successful in sharing auditorium presentations, such as workshops, tutorials, colloquia, and other special events, with all major NRAO locations. The success and user acceptance have been such that we have recently expanded from four video installations to ten, allowing several simultaneous conferences.
Developing state-of-the-art instrumentation for astronomy is often best done by geographically disparate teams that span several institutions. These efforts necessarily require costly face-to-face meetings and site visits. The benefits of the World Wide Web, video conferencing, and modular design techniques, however, have recently increased the efficiency and lowered the costs of these efforts. In this paper we discuss how these methods were applied during the development of an emerging collaboration to produce common detector systems.
Keywords, a concept that uses named parameters to access information from devices and instruments, originated in the early 1990s and is the foundation of the Keck Task Library, KTL. KTL uses different underlying communication schemes to provide a consistent API to a diversity of client applications. Increasing instrument complexity, the need to integrate multiple subsystems into a unified whole, and the demand for greater flexibility and productivity in the software development process have prompted us to review the concept of keywords and its implementation. In this paper, we discuss the application of modern software methodology and communication protocols to enhance the Keck Task Library.
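The keyword idea, a uniform read/write/monitor API over named parameters regardless of the transport underneath, can be illustrated with a toy sketch; the class and method names below are hypothetical and do not reproduce the actual KTL interface:

```python
# Toy illustration of the keyword concept: named parameters with one
# read/write/monitor API regardless of the underlying transport.
# Class and method names are illustrative, not the actual KTL API.
class KeywordService:
    def __init__(self, name):
        self.name = name
        self._values = {}
        self._watchers = {}

    def write(self, keyword, value):
        self._values[keyword] = value
        for cb in self._watchers.get(keyword, []):
            cb(keyword, value)                  # notify monitors on each change

    def read(self, keyword):
        return self._values[keyword]

    def monitor(self, keyword, callback):
        self._watchers.setdefault(keyword, []).append(callback)

dcs = KeywordService("dcs")                     # e.g. a telescope-control service
dcs.monitor("AZ", lambda k, v: print(f"{k} -> {v}"))
dcs.write("AZ", 182.5)
print(dcs.read("AZ"))
```

The point of the design is that clients see only this keyword interface, while the service can route reads and writes over whichever communication scheme a given subsystem uses.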
To monitor atmospheric conditions during radio astronomical observations, we have developed a new type of radio seeing monitor, which enables us to measure atmospheric turbulence in real time and over a wide range of directions in the celestial hemisphere. The basis of the measurement system is a radio interferometer in which the beacon waves of low earth orbit (LEO) satellites for mobile communication systems are received as reference signals. Time variations of the differences in arrival time between the element antennas of the interferometer are measured; these are given as phase variations of the cross-power spectra of the signals received by the antennas. We have made test observations of atmospheric disturbances and obtained a typical profile in which the magnitude of the phase variations tends to increase with decreasing elevation angle of the reference source, i.e., the LEO satellite. In addition, we found that the magnitude of the phase variations is locally enhanced in some directions.
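The measurement principle, an arrival-time difference appearing as a linear phase slope in the cross-power spectrum, can be sketched on synthetic data; this is an illustration of the technique, not the instrument's actual processing:

```python
import numpy as np

# Sketch of the measurement principle: a delay between two antennas shows
# up as a linear phase slope in the cross-power spectrum of their signals.
# Synthetic data; sample rate and delay are illustrative.
rng = np.random.default_rng(0)
fs = 1e6                         # sample rate, Hz
n = 4096
delay_samples = 3                # true arrival-time difference, in samples

s = rng.standard_normal(n)
x1 = s                           # antenna 1
x2 = np.roll(s, delay_samples)   # antenna 2: delayed copy (circular, for an exact demo)

cross = np.fft.rfft(x1) * np.conj(np.fft.rfft(x2))   # cross-power spectrum
phase = np.unwrap(np.angle(cross))
freqs = np.fft.rfftfreq(n, d=1 / fs)

# phase = 2*pi*f*tau, so the slope of a linear fit recovers the delay
tau = np.polyfit(freqs, phase, 1)[0] / (2 * np.pi)
print(f"recovered delay: {tau * 1e6:.3f} us "
      f"(true: {delay_samples / fs * 1e6:.3f} us)")
```

Tracking this recovered delay over time for satellites at different elevations gives exactly the phase-variation statistics the monitor measures.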
The Rapid Telescopes for Optical Response (RAPTOR) experiment is a spatially distributed system of autonomous robotic telescopes that is designed to monitor the sky for optical transients. The core of the system is composed of two telescope arrays, separated by 38 kilometers, that stereoscopically view the same 1500 square-degree field with a wide-field imaging array and a central 4 square-degree field with a more sensitive narrow-field "fovea" imager. Coupled to each telescope array is a real-time data analysis pipeline that is designed to identify interesting transients on timescales of seconds and, when a celestial transient is identified, to command the rapidly slewing robotic mounts to point the narrow-field "fovea" imagers at the transient. The two narrow-field telescopes then image the transient with higher spatial resolution and at a faster cadence to gather light curve information. Each "fovea" camera also images the transient through a different filter to provide color information. This stereoscopic monitoring array is supplemented by a rapidly slewing telescope with a low-resolution spectrograph for follow-up observations of transients and a sky patrol telescope that nightly monitors about 10,000 square-degrees for variations, with timescales of a day or longer, to a depth about 100 times fainter. In addition to searching for fast transients, we will use the data stream from RAPTOR as a real-time sentinel for recognizing important variations in known sources. All of the data will be publicly released through a virtual observatory called SkyDOT (Sky Database for Objects in the Time Domain) that we are developing for studying variability of the optical sky. Altogether, the RAPTOR project aims to construct a new type of system for discovery in optical astronomy: one that explores the time domain by "mining the sky in real time".
ESO's Science Archive is distributed across four different sites on two continents. With the huge amount of data produced by the various instruments, this poses special requirements on the way data are transferred between the sites and distributed to the various subscribers. ESO's latest development, the Next Generation Archive System (NGAS), is based on cheap ATA disks connected to custom PCs running HTTP-based servers that control the archiving process, support retrieval, and check the health status of the disks and the data itself. The current deployment of this system covers just a single 8k x 8k pixel wide-field imager, which produces about 30 GB of raw data per night. The next generation of wide-field telescopes/instruments, VISTA/VISTACam and VST/OmegaCam, will produce data volumes well exceeding 500 GB and 125 GB during a single typical night, respectively. The total data rate of all ESO telescopes/instruments will grow to about 0.75 TB/night once VISTA is operational. The archiving of these data is essential; the next important step is to support not only retrieval, but also flexible processing of the data directly within the archive cluster.
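A data-volume sketch using the figures quoted above; the 80 GB per-disk capacity is a period-typical assumption for illustration, not an NGAS figure:

```python
# Archive growth at the quoted rate. The 80 GB ATA disk capacity is an
# assumption for illustration (typical of the era), not an NGAS figure.
PER_NIGHT_TB = 0.75          # all ESO telescopes, once VISTA is operational
DISK_GB = 80.0

per_year_tb = PER_NIGHT_TB * 365
disks_per_night = PER_NIGHT_TB * 1000 / DISK_GB
print(f"{per_year_tb:.0f} TB/year; about {disks_per_night:.1f} "
      f"x {DISK_GB:.0f} GB disks filled per night")
```

Roughly nine to ten commodity disks per night, every night, is what makes automated health checks and in-archive processing, rather than retrieval alone, the pressing design issues.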
The Rapid Telescope for Optical Response (RAPTOR) program consists of a network of robotic telescopes dedicated to the search for fast optical transients. The pilot project is composed of three observatories separated by approximately 38 kilometers located near Los Alamos, New Mexico. Each of these observatories is composed of a telescope, mount, enclosure, and weather station, all operating robotically to perform individual or coordinated transient searches. The telescopes employ rapidly slewing mounts capable of slewing a 250 pound load 180 degrees in under 2 seconds with arcsecond precision. Each telescope consists of wide-field cameras for transient detection and a narrow-field camera with greater resolution and sensitivity. The telescopes work together by employing a closed-loop system for transient detection and follow-up. Using the combined data from simultaneous observations, transient alerts are generated and distributed via the Internet. Each RAPTOR telescope also has the capability of rapidly responding to external transient alerts received over the Internet from a variety of ground-based and satellite sources. Each observatory may be controlled directly, remotely, or robotically while providing state-of-health and observational results to the client and the other RAPTOR observatories. We discuss the design and implementation of the spatially distributed RAPTOR system.
As technology advances, remote operation of telescopes has paved the way for joint observational projects between astronomy clubs. Equipped with a small telescope, a standard CCD, and a networked computer, an observatory can be set up to carry out several photometric studies. However, most club members lack the basic training and background required for such tasks. A collaborative network between professionals and amateurs is proposed to utilize professional know-how and amateurs' readiness for continuous observations. Working as a team, various long-term observational projects can be carried out using small telescopes. Professionals can play an important role in raising the standards of astronomy clubs via specialized training programs for members on how to use the available technology to search for and observe certain events (e.g. supernovae, comets, etc.). Professionals in return can accumulate a research-relevant database and can set up an early notification scheme based on comparative analyses of the recently-added images in an online archive. Here we present a framework for the above collaborative teamwork that uses web-based communication tools to establish remote/robotic operation of the telescope, and an online archive and discussion forum, to maximize the interactions between professionals and amateurs and to boost the productivity of small telescope observatories.