Content distribution networks (CDNs) are a popular service for the
dissemination of multimedia content over wide areas. The existence of a centralized administrative structure makes them attractive for the
commercial distribution of high quality content. By sharing resources,
service providers can implement their services more efficiently than a
single content provider that establishes a distribution structure itself. Efficient operation requires cost estimates that allow service providers to dimension their infrastructure and to place content in the system. In the case of video streaming, distribution mechanisms that exploit multicast, segmented delivery, and out-of-order delivery can be applied to merge streams and reduce resource consumption. Several applicable stream merging mechanisms exist in the literature. We examine three such mechanisms, namely patching, gleaning, and prefix
caching, in a hierarchically organized CDN. We show that co-optimizing movie placement and the stream merging mechanism has an undesirable effect on quality: highly popular movies are delivered over longer distances than less popular ones. We explore and compare two approaches for overcoming this problem by qualifying the placement optimization with additional conditions. We find that in this case, straightforward sorting is a good solution.
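The patching idea underlying one of the examined stream merging mechanisms can be sketched as a small cost model. The function below is our illustration, not the paper's model: under threshold-based patching, a late arrival within the threshold joins the ongoing multicast and receives only a short unicast patch for the prefix it missed.

```python
def patching_cost(arrivals, movie_len, threshold):
    """Total server transmission time under threshold-based patching.

    A full multicast stream starts for the first client; arrivals within
    `threshold` seconds of the last full stream join it and receive only
    a patch covering the missed prefix. Later arrivals trigger a new
    full stream. (Hypothetical simplified model, times in seconds.)
    """
    total = 0.0
    last_full = None
    for t in sorted(arrivals):
        if last_full is None or t - last_full > threshold:
            last_full = t
            total += movie_len          # new full multicast stream
        else:
            total += t - last_full      # unicast patch for the missed prefix
    return total
```

With arrivals at 0, 10, and 20 seconds and a threshold of 30, one full stream plus two short patches suffice, instead of three full streams.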
Peer-to-peer system performance depends on the routing efficiency of the underlying overlay network. Overlay congestion, which may occur during the forwarding process on peers, can have an adverse impact on the performance of an overlay network. Although previous work has been aware of its existence and has taken measures against it, overlay congestion itself has received little study. As a result, little is known about its impact on system performance and how well it is currently handled. In this work, we initiate an investigation of congestion control and message loss in a specific peer-to-peer system, the Gnutella network. Our work starts with an analysis of Gnutella servent implementations, which reveals considerable heterogeneity in how congestion control is implemented. We next undertake a measurement study to characterize message loss behavior in a Gnutella network and how it relates to the congestion control mechanisms implemented by the servents. We then use packet-level simulations to understand the global performance of
certain Gnutella congestion control mechanisms.
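One family of congestion control policies seen in servents can be sketched as a prioritized output queue: when the per-connection queue fills, a broadcast message (Query or Ping) is dropped in favor of a routed reply (QueryHit or Pong). The function below is an illustrative sketch of that policy, not any particular servent's code; messages are (type, payload) pairs.

```python
def enqueue(queue, msg, capacity):
    """Prioritized drop policy sketch: when the output queue is full,
    replace a lower-priority broadcast message (Query/Ping) with a
    routed reply (QueryHit/Pong). Returns True if `msg` was queued,
    False if it was dropped. `queue` is a mutable list of (type, payload)."""
    LOW = {"Query", "Ping"}
    if len(queue) < capacity:
        queue.append(msg)
        return True
    if msg[0] not in LOW:
        for i, (mtype, _) in enumerate(queue):
            if mtype in LOW:
                queue[i] = msg      # evict a droppable broadcast message
                return True
    return False                    # queue full, message dropped
```

The heterogeneity the analysis reveals lies precisely in such choices: which message types are droppable, and in what order.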
Multicasting is a natural paradigm for streaming live multimedia to
multiple end receivers. Since IP multicast is not widely deployed, many application-layer multicast protocols have been proposed. However, all of these schemes focus on the construction of multicast trees, where a relatively small number of links carries the multicast streaming load while the capacity of most other links in the overlay network remains unused. In this paper, we propose CodedStream, a high-bandwidth live media distribution system based on end-system overlay multicast. In CodedStream, we construct a k-redundant multicast graph (a directed acyclic graph) as the multicast topology, on which network coding is applied to work around bottlenecks. Simulation results show that the combination of a k-redundant multicast graph and network coding may indeed bring significant benefits in terms of the quality of live media at the end receivers.
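The simplest instance of the network coding applied at intermediate nodes is a bitwise XOR of two packets: a node on a shared edge forwards a+b instead of choosing one packet, and any receiver already holding one original can recover the other. This is a generic sketch of the technique, not CodedStream's actual coding scheme (which operates on the k-redundant graph).

```python
def xor_encode(a, b):
    """Combine two equal-length packets with bitwise XOR, the simplest
    form of network coding. Decoding is the same operation: XOR the
    coded packet with the known original to recover the other."""
    assert len(a) == len(b), "packets must be padded to equal length"
    return bytes(x ^ y for x, y in zip(a, b))
```

On the classic butterfly topology, this lets a bottleneck edge serve two receivers at once, which is exactly the effect exploited to work around overlay bottlenecks.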
In this work, we explore network traffic shaping mechanisms that deliver packets at pre-determined intervals, allowing the network interface to transition to a lower-power sleep state. We focus our efforts on commodity devices, IEEE 802.11b ad hoc mode, and popular streaming formats. We argue that factors such as the lack of scheduling-clock phase synchronization among the participants and scheduling delays introduced by background tasks reduce the potential energy savings. Increasing the periodic transmission delays to transmit data less frequently can offset some of these effects, at the expense of occupying the wireless channel for longer periods of time and potentially increasing the time needed to acquire the channel for non-multimedia traffic. Buffering mechanisms built into media browsers can prevent these added delays from being misinterpreted as network congestion. We show that practical implementations of such traffic shaping mechanisms can offer significant energy savings.
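The core shaping operation can be sketched as mapping each packet's original send time to the next periodic burst boundary, so that the radio is busy only in short bursts and can sleep in between. This is our simplified illustration of the mechanism, not the paper's implementation.

```python
def burst_schedule(packet_times, interval, start=0.0):
    """Reshape per-packet send times into periodic bursts: every packet
    due within an interval is released at that interval's closing
    boundary (delay only, never early). `packet_times` are the original
    send times in seconds; returns the shaped send times."""
    return [start + ((t - start) // interval + 1) * interval
            for t in packet_times]
```

Larger intervals let the interface sleep longer between bursts, which is precisely the trade-off against channel occupancy and buffering discussed above.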
Our previous MMCN03 paper reported a cross-layer adaptation framework, GRACE-1, that coordinates the adaptation of CPU frequency/voltage, CPU scheduling, and application quality. GRACE-1 assumes that all application processes (or threads) are independent of each other and adapt individually. This assumption, however, does not hold for multi-threaded applications that include dependent, cooperating processes. To support the joint performance requirements
of such dependent processes, this paper extends GRACE-1 with a process group management mechanism. The enhanced framework, called GRACE-grp, introduces a new OS abstraction, group control block, to provide the OS-level recognition and support of process groups. Through a group control block, dependent processes can explicitly set up a group and specify their dependency in the OS kernel. Consequently, GRACE-grp schedules and adapts them in a synchronized and consistent manner, thereby delivering joint performance guarantees. We have implemented and evaluated the GRACE-grp framework. Our experimental results show that compared to GRACE-1, GRACE-grp provides better support for the joint quality of dependent processes and reduces CPU energy consumption by 6.2% to 8.7% for each process group.
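The group control block abstraction can be sketched as follows; the class and method names here are our illustration, not the GRACE-grp kernel interface.

```python
class GroupControlBlock:
    """Sketch of the group-control-block idea: dependent processes
    register under one block so the scheduler can recognize the group
    and adapt all members consistently."""

    def __init__(self, group_id):
        self.group_id = group_id
        self.members = {}            # pid -> pid it depends on (or None)

    def join(self, pid, depends_on=None):
        """A process explicitly joins the group, declaring its dependency."""
        self.members[pid] = depends_on

    def set_quality(self, level):
        """Apply one quality level to every member so that adaptation
        stays synchronized across the group; returns pid -> level."""
        return {pid: level for pid in self.members}
```

Because adaptation decisions go through the block rather than per-process, no member is degraded while its dependents run at full quality, which is the consistency property the framework guarantees.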
Recent research efforts have demonstrated the promising potential of building cost-effective media streaming systems on top of peer-to-peer (P2P) networks. A P2P media streaming architecture can reach a scale and streaming capacity that are difficult to achieve in conventional server-based streaming services. Hybrid streaming systems that combine dedicated streaming servers with P2P networks have been proposed to build on the advantages of both paradigms. However, the dynamics of such systems and the impact of various factors on system behavior are not fully understood. In this paper, we present an analytical framework to quantitatively study the features of a hybrid media streaming model. Based on this framework, we derive an equation to describe the capacity growth of a single-file streaming system. We then extend the analysis to multi-file scenarios
by solving an optimization problem. We also show that the system model achieves an optimal allocation of server bandwidth among different media objects. The unpredictable departure or failure of peers is a critical factor affecting the performance of P2P systems. To model peer failures in our system, we propose the concept of peer lifespan. The original equation is enhanced with coefficients derived from the distribution of peer lifespans. Results from large-scale simulations support our analysis.
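The qualitative shape of the capacity growth equation can be sketched with a toy recurrence: peers that have received the stream contribute upload bandwidth back, compounding the system's serving capacity. The `amplify` factor here is our assumption standing in for the paper's coefficients (which also account for peer lifespans).

```python
def capacity_growth(server_capacity, amplify, rounds):
    """Toy model of hybrid P2P streaming capacity: each round, peers
    re-serve a fraction `amplify` of the bandwidth they received, so
    capacity compounds geometrically. Returns the capacity per round."""
    caps = [server_capacity]
    for _ in range(rounds):
        caps.append(caps[-1] * (1 + amplify))
    return caps
```

Peer failures would shrink `amplify`; modeling that shrinkage via the lifespan distribution is what the enhanced equation captures.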
Enhanced DCF (EDCF) is currently under review as the new standard for quality of service in IEEE 802.11 wireless LANs. In EDCF, per-flow differentiation is achieved by maintaining separate queues for different traffic categories (TCs). However, due to its static QoS parameter setting, EDCF does not perform adequately under high traffic load. We present an extended performance model of EDCF and identify conditions under which the network becomes overloaded. With this extended model, we show that the overall throughput of a network can be improved by changing the distribution of the number of active stations over a set of TCs. Hence, we propose to dynamically re-allocate flow priorities evenly in order to maintain high system performance while providing QoS guarantees for individual real-time flows. Our scheme has several interesting features: (1) the performance of EDCF is improved; (2) low-priority flows are not starved under high traffic load; (3) misuse of priorities can easily be handled. Simulations are conducted for both infrastructure-based and ad hoc models. Results show that dynamic priority re-allocation does not decrease the throughput of real-time flows under low to medium loads, while considerable improvement over EDCF is obtained even under high loads, making it easier to support multimedia applications.
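The "evenly" in the re-allocation rule can be sketched as the target distribution the scheme steers toward; the function below is our static illustration of that target, whereas the paper's scheme re-allocates dynamically as stations arrive and leave.

```python
def even_allocation(n_stations, n_tcs):
    """Target distribution for even priority re-allocation: spread
    n_stations over n_tcs traffic categories as evenly as possible,
    since overall throughput degrades when one TC is crowded. Returns
    the station count per TC."""
    base, extra = divmod(n_stations, n_tcs)
    return [base + (1 if i < extra else 0) for i in range(n_tcs)]
```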
The scarcity and large fluctuations of link bandwidth in wireless networks have motivated the development of adaptive multimedia services in mobile communication networks, where it is possible to increase or decrease the bandwidth of individual ongoing flows. This paper studies the issues of quality of service (QoS) provisioning in such systems. In particular, call admission control and bandwidth adaptation are formulated as a constrained Markov decision problem. The rapid growth in the number of states and the difficulty in estimating state transition probabilities in practical systems make it very difficult to employ classical methods to find the optimal policy. We present a novel approach that uses a form of reinforcement learning known as Q-learning to solve QoS provisioning for wireless adaptive multimedia. Q-learning does not require the explicit state transition model to solve the Markov decision problem; therefore more general and realistic assumptions can be applied to the underlying system model for this approach than in previous schemes. Moreover, the proposed scheme can efficiently handle the large state space and action set of the wireless adaptive multimedia QoS provisioning problem. Handoff dropping probability and average allocated bandwidth are considered as QoS constraints in our model and can be guaranteed simultaneously. Simulation results demonstrate the effectiveness of the proposed scheme in adaptive multimedia mobile communication networks.
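The Q-learning update at the heart of this approach needs no transition model, only observed (state, action, reward, next state) samples. Below is a minimal generic sketch of that update with a hypothetical admission action set (accept/reject/degrade); the paper's state encoding, rewards, and constraint handling are richer.

```python
ACTIONS = ("accept", "reject", "degrade")   # hypothetical action set

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One model-free Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Q is a dict mapping (state, action) -> value; unseen entries are 0."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]
```

Because only samples are needed, the transition probabilities of the underlying Markov decision problem never have to be estimated, which is exactly why the method scales to realistic system models.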
For bursty traffic with a large peak-to-average ratio and a stochastic channel, is it possible to minimize the response time of every flow while maximizing the effective channel utilization and maintaining fairness? This is the question we address in this paper. In wireless networks with a single shared channel, channel arbitration is a core issue for flows with throughput and timeliness requirements on uplink and peer-to-peer links where the instantaneous demand is not known. This paper presents a link-layer frame scheduling algorithm for delay-sensitive variable bit rate traffic, such as high-rate multimedia (MPEG-4), over a wireless channel. We evaluate our scheduling algorithm over two Medium Access Control (MAC) architectures and compare it to four scheduling strategies that cover a range of classes: TDMA, proportional-share algorithms, real-time scheduling algorithms, and size-based scheduling algorithms. Detailed simulation results, with full-length MPEG-4 movie traces over a fading wireless channel, show that Fair-Shortest Remaining Processing Time (Fair-SRPT) outperforms the other algorithms in terms of QoS performance, channel utilization efficiency, and response time under all utilization levels and channel error rates. Our Fair-SRPT scheme avoids the classical SRPT problem of favoring small jobs by normalizing remaining work to mean reservations. An attractive feature of the proposed approach is that it can be implemented with no modifications to the IEEE 802.11e and IEEE 802.15.3 high-rate personal area network standards.
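The normalization idea can be sketched in a few lines: instead of picking the flow with the smallest absolute remaining work (plain SRPT, which starves large flows), pick the one with the smallest remaining work relative to its mean reservation. This is our simplified reading of the rule; the paper's exact normalization may differ.

```python
def fair_srpt_pick(flows):
    """Select the next flow to serve under a normalized-SRPT rule.
    `flows` maps flow id -> (remaining_bits, mean_reservation_bits);
    dividing by the reservation keeps large, high-rate flows from
    being perpetually preempted by small ones."""
    return min(flows, key=lambda f: flows[f][0] / flows[f][1])
```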
It is common to run multimedia and other periodic, soft real-time applications on general-purpose computer systems. These systems use best-effort scheduling algorithms that cannot guarantee applications will receive responsive scheduling to meet deadline or timing requirements. We present a simple mechanism called Missed Deadline Notification (MDN) that allows applications to notify the system when they do not receive their desired level of responsiveness. Consisting of a single system call with no arguments, this simple interface allows the operating system to provide better support for soft real-time applications without any a priori information about their timing or resource needs. We implemented MDN in three different schedulers: Linux, BEST, and BeRate. We describe these implementations and their performance when running real-time applications and discuss policies to prevent applications from abusing MDN to gain extra resources.
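The MDN interface is deliberately minimal: a no-argument call that tells the scheduler "I just missed a deadline." A policy sketch in user-level Python follows; the boost mechanism and names are our illustration, not the Linux/BEST/BeRate implementations, which realize this inside the kernel schedulers.

```python
import time

class MDNScheduler:
    """Sketch of Missed Deadline Notification: a task that misses a
    deadline calls missed_deadline(), and the scheduler temporarily
    boosts it. Guarding against abuse (tasks calling it to grab extra
    resources) is the policy question the paper discusses."""

    def __init__(self):
        self.boost = {}                      # pid -> boost expiry time

    def missed_deadline(self, pid, now=None, boost_for=0.5):
        now = time.monotonic() if now is None else now
        self.boost[pid] = now + boost_for    # boost for a short window

    def is_boosted(self, pid, now=None):
        now = time.monotonic() if now is None else now
        return self.boost.get(pid, 0.0) > now
```

Because the call carries no arguments, the system needs no a priori timing or resource information, matching the abstract's design goal.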
Media flows have been classified into streaming flows and elastic flows. Traditionally, admission control schemes, in middleware systems and otherwise, have dealt with streaming flows. In the Internet context, elastic TCP flows have been considered for admission control more recently -- the aim being to ensure that they actually complete. In this paper, we present a simple reservation-based abstraction for middleware systems: an elastic flow with deadlines, that includes streaming flows as a special case. We use this specification in a simple model of a link. We then present a novel way to view the problem, as a maximum network flow problem. We show that this formulation also provides an admissible schedule for the flows. We then study the incremental version of the admission control problem, and present some heuristics. We finally briefly explore potential applications of this abstraction.
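On a single link, the max-flow feasibility question reduces to a deadline-ordered check: for every deadline, the total work due by then must fit within the link's capacity up to that time. The sketch below assumes all flows are released at time zero; the paper's network-flow formulation generalizes this and also yields the schedule.

```python
def admissible(flows, link_rate):
    """Single-link admission check for elastic flows with deadlines.
    `flows` is a list of (size_bits, deadline_s) pairs, all released at
    time 0; the set is admissible iff cumulative work in deadline order
    never exceeds link_rate * deadline. A streaming flow is the special
    case size = rate * deadline."""
    done = 0.0
    for size, deadline in sorted(flows, key=lambda f: f[1]):
        done += size
        if done > link_rate * deadline:
            return False
    return True
```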
In general, existing segment-based caching strategies target one of two performance objectives: (1) reducing client startup delay by giving high priority to caching the beginning segments of media objects, or (2) reducing server traffic by caching the popular segments of media objects. Our previous study has shown that the approach targeting the second objective has several advantages over the first. However, we have also observed that improving server traffic reduction can increase client startup delay, which may offset the overall performance gain. Little work so far has considered these two objectives in concert. In this paper, we first build an analytical model of these two types of segment-based caching approaches. Analysis of the model reveals the nature of the trade-off between the two performance objectives, and bounds on each are derived under certain circumstances. To provide a feasible way to evaluate different strategies, we propose a new comprehensive performance metric based on this analysis. Guided by the trade-off, we restructure the adaptive-lazy segmentation strategy with a heuristic replacement policy to improve overall performance. The evaluation results confirm our analysis and show the effectiveness of the proposed metric.
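The two objectives can be folded into one scoring function for a cached segment, as a toy illustration of the kind of combined metric involved. This scoring is hypothetical and not the paper's metric: it weights an object's popularity (server-traffic side) by a bonus for early segments (startup-delay side).

```python
def cache_value(seg_index, popularity, startup_weight):
    """Hypothetical combined utility of caching segment `seg_index`
    (0-based) of an object: popularity drives server-traffic savings,
    while the 1/(seg_index+1) term favors beginning segments to cut
    startup delay. `startup_weight` trades the two objectives off."""
    return popularity * (1.0 + startup_weight / (seg_index + 1))
```

Setting `startup_weight` to zero recovers a purely popularity-driven policy; increasing it shifts the cache toward objective (1).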
We present a data storage, retrieval, and communication system capable of supporting high-resolution image browsing on an inexpensive PC-cluster-based Image-Wall system. The data is first partitioned and then strategically laid out both within a single hard disk and across multiple hard disks. After presenting the data allocation scheme, we present schemes for retrieving the data from the hard disks and from neighboring renderers. The optimality of the storage and retrieval mechanisms is proved, and analytical results are presented for an initial implementation.
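The cross-disk part of such a layout can be sketched as round-robin striping of image tiles, so that any browse path touches the disks evenly. This is a simplified stand-in for the paper's allocation scheme, not the scheme itself.

```python
def stripe(tiles, n_disks):
    """Round-robin placement of partitioned image tiles across disks:
    tile i goes to disk i mod n_disks, balancing retrieval load when a
    browsing client sweeps across neighboring tiles."""
    disks = [[] for _ in range(n_disks)]
    for i, tile in enumerate(tiles):
        disks[i % n_disks].append(tile)
    return disks
```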
In this paper, we propose a new time-reduction method for video skimming in which the focus is on the overall playback time. While fast-forwarding is a natural way to check whether or not items are of interest, the sound is not synchronized with the images, and the lack of comprehensible audio means that the viewer must work from the images alone. The focus in video summarization has so far been solely on video segmentation, i.e., building a structure that represents the parts and flow of meaning in the video. In our system, the user simply specifies the running time required for the summarized video. We describe the current state of our prototype system and present test results that show how well it works.
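Fitting a summary to a user-specified running time can be sketched as a greedy selection over scored segments. This is a generic illustration of the time-budget step only; how segments are scored (the summarization itself) is where the system's actual work lies, and its method may differ.

```python
def skim(segments, target_time):
    """Greedy time-budgeted selection: keep the highest-scoring segments
    that fit within the requested running time. `segments` is a list of
    (score, duration_s) pairs; returns the kept segments."""
    kept, used = [], 0.0
    for score, dur in sorted(segments, reverse=True):
        if used + dur <= target_time:
            kept.append((score, dur))
            used += dur
    return kept
```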
In this paper we present polishing, a novel technique to maximize the playback utility of a streamed layer-encoded video. Polishing reduces the amount of layer variations in a cached layer-encoded video before streaming it to a client if this increases the quality of the video. Polishing can also be used as a cache replacement strategy for removing the parts of layer-encoded videos on a cache that harm the quality least. This paper presents optimal algorithms for both applications and simulation results.
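A toy version of the polishing decision can be written as a search over constant layer levels: capping (i.e., dropping) layers above a level removes variations, and the best level balances delivered layer-segments against the variations that remain. The `penalty` weight and this scoring are our simplified stand-in for the paper's optimal algorithm.

```python
def polish(layer_counts, penalty=2):
    """Pick the constant cap L (layers above L are dropped, never added)
    that maximizes delivered layer-segments minus `penalty` per segment
    still deviating from L, i.e. per remaining quality variation.
    `layer_counts` is the per-segment layer count of the cached video."""
    best = None
    for L in range(min(layer_counts), max(layer_counts) + 1):
        capped = [min(c, L) for c in layer_counts]
        score = sum(capped) - penalty * sum(1 for c in capped if c != L)
        if best is None or score > best[0]:
            best = (score, capped)
    return best[1]
```

With a high variation penalty the jagged profile [3, 1, 3] is flattened to [1, 1, 1]: fewer layers overall, but no variation, which is the quality trade polishing exploits.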
We describe the architecture and implementation of our comprehensive
multi-platform collaboration framework known as Columbia InterNet
Extensible Multimedia Architecture (CINEMA). It provides a
distributed architecture for collaboration using synchronous
communications like multimedia conferencing, instant messaging,
shared web-browsing, and asynchronous communications like discussion
forums, shared files, and voice and video mail. It allows seamless
integration with various communication means like telephones,
IP phones, web and electronic mail. In addition, it provides
value-added services such as call handling based on location
information and presence status. The paper discusses the media services needed for a collaborative environment, the components
provided by CINEMA and the interaction among those components.
This paper presents a unified set of abstractions and operations for
hardware devices, software processes, and media data in a distributed audio and video environment. These abstractions, which are provided through a middleware layer called Indiva, use a file system metaphor to access resources and high-level commands to simplify the development of Internet webcast and distributed collaboration control applications. The design and implementation of Indiva are described and examples are presented to illustrate the usefulness of the abstractions.
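The file system metaphor can be sketched as a hierarchical namespace in which devices and processes are mounted at paths and addressed uniformly. The paths and method names below are our illustration of the idea, not Indiva's actual API.

```python
class FsMetaphorNamespace:
    """Sketch of a file-system metaphor for A/V resources: cameras,
    microphones, and processes are mounted at hierarchical paths, so a
    control application can list and operate on them with ls-like
    commands instead of device-specific protocols."""

    def __init__(self):
        self.tree = {}                  # path -> resource handle

    def mount(self, path, resource):
        self.tree[path] = resource

    def ls(self, prefix):
        """List mounted resources under a path prefix, sorted."""
        return sorted(p for p in self.tree if p.startswith(prefix))
```

A webcast controller can then, for example, enumerate every camera in a room with one `ls` call, which is the kind of simplification the abstractions aim for.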
As video conferencing and e-meeting systems are used more and more on the Internet and in businesses, it becomes increasingly important to be able to participate from any computer at any location. Often this is impossible, since these systems frequently require special software that is not available everywhere or cannot be installed for administrative reasons. Many locations also lack the necessary network infrastructure, such as IP multicast. This paper presents a WWW gateway system that enables users to participate using only a standard web browser. The design and architecture of the system are described, and performance tests that show the scalability of the system are presented.