iTVP is a system built for IP-based delivery of live TV programming, video-on-demand, and audio-on-demand
with interactive access over IP networks. It has country-wide coverage and is designed to serve a large
number of concurrent users. The iTVP prototype comprises the backbone of a two-level hierarchical system designed
for distributing multimedia content from a content provider to end users. In this paper we present experience
gained during a few months of prototype operation. We analyze the efficiency of the iTVP content distribution
system and resource usage at various levels of the hierarchy. We also characterize content access patterns
and their influence on system performance, as well as the quality experienced by users and user behavior.
Scalability is one of the most important aspects of our performance evaluation. Although
the scope of the prototype operation is limited in terms of the number of users and the size of the content repository,
we believe that data collected from such a large-scale operational system provides valuable insight
into the efficiency of a CDN-type solution for large-scale streaming services. We find that the system exhibits
good performance and low resource usage.
In this paper, we propose a novel <i>loopback</i> approach in a two-level streaming architecture that exploits collaborative client/proxy buffers to improve the quality and efficiency of large-scale
streaming applications. At the upper level, an overlay delivers video from a central server to proxy servers; at the lower level, a proxy server delivers video with the help of collaborative client caches. In particular, a proxy server and its clients in a local domain cache different portions of a video and form delivery loops. In each loop, a single video stream originates at the proxy, passes through a number of clients, and is passed back to the proxy. As a
result, with limited bandwidth and storage space contributed by collaborative clients, we can significantly reduce the network bandwidth, I/O bandwidth, and cache space required at a proxy. Furthermore, we develop local repair schemes that address client failures, enhancing service quality and eliminating most of the repair load at servers. For popular videos, our local repair schemes handle most single-client failures without service disruption or retransmission from the central server. Our analysis and simulations demonstrate the efficacy of loopback in various settings.
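To make the loop structure concrete, the following is a minimal sketch of how a proxy and its clients might partition a video into cached portions and reassemble it along the delivery loop. The segment-partitioning policy, node names, and data structures here are illustrative assumptions, not the paper's actual protocol.

```python
# Illustrative sketch of a "loopback" delivery loop: the proxy caches the
# first portion of a video, each collaborative client caches a subsequent
# portion, and a single stream flows proxy -> client1 -> ... -> clientN
# -> proxy, with each node contributing its cached segments in order.
# (Hypothetical partitioning policy; not the paper's exact scheme.)

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    cached: list  # indices of video segments this node holds


def build_loop(num_segments: int, num_clients: int):
    """Assign segment 0 to the proxy and split the remaining segments
    evenly across the clients in playback order (illustrative only)."""
    proxy = Node("proxy", [0])
    clients = []
    per_client = (num_segments - 1 + num_clients - 1) // num_clients
    seg = 1
    for i in range(num_clients):
        take = list(range(seg, min(seg + per_client, num_segments)))
        clients.append(Node(f"client{i + 1}", take))
        seg += per_client
    return proxy, clients


def stream(proxy: Node, clients: list) -> list:
    """Traverse the loop once: the stream originates at the proxy,
    passes through each client, and returns to the proxy, yielding
    the full video in playback order."""
    order = []
    for node in [proxy] + clients:
        order.extend(node.cached)
    return order


if __name__ == "__main__":
    proxy, clients = build_loop(num_segments=7, num_clients=3)
    print(stream(proxy, clients))  # full video reassembled from the loop
```

The point of the sketch is the resource argument in the text: the proxy stores and serves only one portion of the video, while the rest of the stream is supplied by client caches along the loop, reducing proxy I/O, bandwidth, and storage.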