Mobile multimedia systems must provide application quality of service (QoS) in the presence of dynamically varying and multiple resource constraints (e.g., variations in available CPU time, energy, and bandwidth). Researchers have therefore proposed adaptive systems that can respond to changing resource availability and application demands. All system layers can benefit from adaptation, but fully exploiting these benefits requires a new cross-layer adaptation framework to coordinate the adaptations in the different layers. This paper presents such a framework and its first prototype, called GRACE-1. The framework supports application QoS under CPU and energy constraints via coordinated adaptation in the hardware, OS, and application layers. Specifically, GRACE-1 uses global adaptation to handle large and long-term variations, setting application QoS, CPU allocation, and CPU frequency/voltage to qualitatively new levels. In response to small and temporary variations, it uses local adaptation within each layer. We have implemented the GRACE-1 prototype on an HP laptop with an adaptive processor. Our experimental results show that, compared to previous approaches that exploit adaptation in only some of the layers or in an uncoordinated way, GRACE-1 can provide higher overall system utility in several cases.
We present a flexible framework for enabling mobile access to Internet content. It utilizes the Internet Content Adaptation Protocol (ICAP) as a common communication platform for improving performance and reducing latency. ICAP is an open protocol that extends a caching server to provide value-added services at the edge of the network. Our adaptation framework provides higher scalability, reliability, and performance for content adaptation and enables efficient caching of transcoded content. A comparison of the adaptation results generated by our system is presented.
One of the key issues in multi-user interactive applications is providing a consistent, shared view among participants while maintaining interactive performance. A buffered synchronization mechanism can support a consistent view by holding events in a buffer until a fixed amount of time, called the playout delay, expires and then executing them at the same time at all participants. However, if events are lost or delayed, consistency may not be maintained among participants, which incurs overhead to recover the lost information. Conversely, events that arrive earlier than the playout delay are executed later than they could be. In this paper, we propose an efficient event synchronization scheme that adapts dynamically to the network state, ensuring a consistent view among participants while maintaining interactive performance. Each participant determines the playout delay dynamically based on the estimated network state: the playout delay decreases when the network is unloaded and increases when it is loaded. Simulation results show that the proposed scheme provides a more consistent view among participants than a static scheme when the network remains loaded, and improves interactive performance when the network is unloaded.
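The dynamic playout-delay idea can be sketched as follows. The EWMA-based estimator, class name, and parameter values below are illustrative assumptions, not the paper's exact algorithm:

```python
# Hypothetical sketch of adaptive playout-delay estimation. The EWMA
# estimator and all parameter values are illustrative assumptions.

class PlayoutEstimator:
    """Tracks network delay with an EWMA and derives a playout delay."""

    def __init__(self, alpha=0.125, beta=0.25, k=4.0, initial_delay=100.0):
        self.alpha = alpha          # gain for the mean-delay EWMA
        self.beta = beta            # gain for the delay-variation EWMA
        self.k = k                  # safety margin in deviation units
        self.mean = initial_delay   # estimated mean network delay (ms)
        self.dev = 0.0              # estimated delay variation (ms)

    def observe(self, delay_ms):
        """Update estimates from one event's measured network delay."""
        self.dev = (1 - self.beta) * self.dev + self.beta * abs(delay_ms - self.mean)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * delay_ms
        return self.playout_delay()

    def playout_delay(self):
        """Shrinks when the network is unloaded, grows when it is loaded."""
        return self.mean + self.k * self.dev


est = PlayoutEstimator()
for d in [100, 100, 100, 100]:      # unloaded: low, stable delays
    low = est.observe(d)
for d in [250, 300, 280, 320]:      # loaded: higher, jittery delays
    high = est.observe(d)
assert high > low                   # the playout delay adapts upward under load
```

Events buffered for `playout_delay()` milliseconds then arrive in time with high probability when the network is unloaded, while the larger margin under load reduces late (consistency-breaking) events.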
Large-scale continuous media (CM) system implementations require scalable servers most likely built from clusters of storage nodes. Across such nodes random data placement is an attractive alternative to the traditional round-robin striping. One benefit of random placement is that additional nodes can be added with low data-redistribution overhead such that the system remains load balanced. One of the challenges in this environment is the implementation of a retransmission-based error control (RBEC) technique. Because data is randomly placed, a client may not know which server node to ask for a lost packet retransmission.
We have designed and implemented an RBEC technique that utilizes the benefits of random data placement in a cluster server environment while allowing a client to efficiently identify the correct server node for lost-packet requests. We have implemented and evaluated our technique with one-, two-, and four-way server clusters, across both local and wide-area networks. Our results show the feasibility and effectiveness of our approach in a real-world environment.
Multimedia transmission over wide-area networks currently considers only server and network resource constraints and client device capabilities. It is also essential to consider the accessibility of the multimedia content for users with diverse capabilities and disabilities. In this paper, we develop a transcoding technique that presents multimedia content to suit diverse disabled user groups by using an ability-based classification approach. Using ability-based prioritization, appropriate alternate modalities and quality levels are chosen to replace inaccessible modalities. The transcoding process allows for refinements to cater to specific types and degrees of impairment. Our performance results illustrate the benefits of the ability-based transcoding approach.
Internet webcasting is gaining attention as the cost of communications and computer processing declines. The UC Berkeley Multimedia, Interfaces, and Graphics Seminar is webcast worldwide on the Internet using IETF standards on the Multicast Backbone (Mbone). Many potential viewers are unable to receive the webcast due to network problems and/or the lack of software supporting the Mbone protocols and codecs. The Transcoding Gateway, called tgw, simplifies the management of simulcasting several transmissions that use different transport protocols, media formats, and network bandwidth. Tgw incorporates transmission bandwidth allocation, transcoding to alternative formats, and redirection of streams to alternate addresses to manage transmission of a webcast.
Unlike traditional data traffic, multimedia traffic has stringent quality of service (QoS) requirements. For multimedia networking, a prime concern is to ensure that there are adequate network resources to meet the QoS requirements of multimedia traffic. To enable effective traffic control, congestion control, and network management, in-service QoS monitoring and estimation (ISME) must be employed, as opposed to conventional out-of-service monitoring and testing techniques. In this paper, an ISME scheme is proposed. The scheme employs virtual buffer techniques to reduce the monitoring time required to make a valid observation of QoS. Simulations using both a voice traffic source model and real variable-bit-rate video traffic are performed. The results show that the proposed scheme achieves better accuracy, and requires less monitoring time, than schemes in the literature.
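The intuition behind virtual buffers can be sketched as follows: copies of the arrival stream are fed into simulated queues smaller than the real buffer, so (virtual) overflows occur often enough to yield usable QoS observations quickly. The function names, queue model, and parameters are assumptions for illustration, not the paper's scheme:

```python
# Illustrative sketch of the virtual-buffer idea; all names and
# parameters here are assumptions, not the proposed ISME scheme.

import random

def virtual_buffer_loss(arrivals, service_rate, capacity):
    """Simulate a FIFO queue over per-slot arrival counts; return loss ratio."""
    backlog, lost, total = 0.0, 0, 0
    for a in arrivals:
        total += a
        backlog += a
        if backlog > capacity:                       # overflow: excess is lost
            lost += backlog - capacity
            backlog = capacity
        backlog = max(0.0, backlog - service_rate)   # serve this slot
    return lost / total if total else 0.0

random.seed(1)
trace = [random.randint(0, 4) for _ in range(10_000)]   # synthetic arrivals

real = virtual_buffer_loss(trace, service_rate=3, capacity=50)
# Virtual buffers with shrinking capacities see progressively more loss,
# giving valid observations long before the real buffer ever overflows.
virtual = [virtual_buffer_loss(trace, 3, c) for c in (20, 10, 5)]
assert virtual[0] <= virtual[1] <= virtual[2]
```

The monitor can then extrapolate the real buffer's (rare) loss behavior from the frequent virtual-buffer observations, shortening the monitoring time.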
A video server may need to reserve resources in order to meet the Quality of Service (QoS) requirements of the various clients it serves. We present an admission control strategy that provides deterministic admission to the minimum, or necessary, layers and statistical admission to the higher layers of a client video stream accessing variable-bit-rate multi-resolution video. An efficient data placement scheme called Partial-Bundled is proposed and compared with other data placement strategies. In the event of a node failure, the load on the failed node is distributed across the functional nodes; the load on each node therefore increases by a fraction that depends on the data replication scheme used, and it may be necessary to degrade the resolutions of the streams. Two heuristic algorithms that provide graceful degradation in the presence of disk failure are described. They attempt to maximize the rewards (related to the average quality of a stream) while reducing the resolutions of client streams.
To distribute video and audio data in real-time streaming mode, both Content Distribution Network (CDN) based and peer-to-peer (P2P) based architectures have been proposed. However, each architecture has its limitations. CDN servers are expensive to deploy and maintain. The storage space and out-bound bandwidth allocated to each media file are limited and incur a cost, and current solutions to lowering such cost usually compromise the media quality delivered. On the other hand, a P2P architecture needs a sufficient number of 'seed' supplying peers to 'jumpstart' the system. Compared with a CDN server, a peer offers very low out-bound bandwidth. Furthermore, it is not clear how to fairly determine the contribution of each supplying peer. In this paper, we propose a novel hybrid architecture that integrates CDN- and P2P-based streaming media distribution. The architecture is highly cost-effective: it significantly lowers the cost of CDN server resources without compromising the delivered media quality. Furthermore, we propose a limited contribution policy for the supplying peers, so that the streaming capacity of supplying peers is exploited on a limited and fair basis. We present an in-depth quantitative analysis of the hybrid system, which is well supported by our extensive simulation results.
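A hybrid serving decision with a limited-contribution cap might look like the following toy sketch. The function name, peer-state representation, and cap value are illustrative assumptions, not the paper's policy:

```python
# Toy sketch of a hybrid CDN/P2P serving decision: a request is served
# from supplying peers when their remaining (capped) out-bound bandwidth
# covers the stream rate, and falls back to the CDN server otherwise.
# The limited-contribution cap and all names here are illustrative.

def assign_stream(rate_kbps, peers, cap_kbps):
    """peers: {peer_id: used_kbps}. Returns [(peer, share), ...] or None."""
    plan, remaining = [], rate_kbps
    for pid, used in peers.items():
        spare = max(0, cap_kbps - used)   # limited-contribution policy
        if spare and remaining > 0:
            share = min(spare, remaining)
            plan.append((pid, share))
            remaining -= share
    return plan if remaining == 0 else None   # None -> serve from the CDN

peers = {"p1": 60, "p2": 0, "p3": 100}
assert assign_stream(120, peers, cap_kbps=100) == [("p1", 40), ("p2", 80)]
assert assign_stream(300, peers, cap_kbps=100) is None   # peers exhausted
```

Capping each peer's contribution keeps the load fair across supplying peers, while the CDN fallback preserves delivered media quality when peer capacity runs out.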
This paper investigates an architecture and implementation for the use of a TCP-friendly protocol in a scalable video distribution system for hierarchically encoded layered video. The design supports a variety of heterogeneous clients, because recent developments have shown that access network and client capabilities differ widely in today's Internet. The distribution system presented here consists of video servers, proxy caches, and clients that use TCP-friendly rate control (TFRC) to perform congestion-controlled streaming of layer-encoded video. The data transfer protocol of the system is RTP compliant, yet it integrates protocol elements for congestion control with protocol elements for retransmission, which is necessary for lossless transfer of content into proxy caches. The control protocol RTSP is used to negotiate capabilities, such as support for congestion control or retransmission.
Through tests performed with our experimental platform, both in the lab and over the Internet, we show that congestion-controlled streaming of layer-encoded video through proxy caches is a valid means of supporting heterogeneous clients. We show that filtering layers according to a TFRC-controlled permissible bandwidth allows preferred delivery of the most relevant layers to end systems while additional layers can be delivered to the cache server. We also experiment with uncontrolled delivery from the proxy cache to the client, which may result in random loss and wasted bandwidth but also higher goodput, and compare the two approaches.
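The layer-filtering decision reduces to choosing the largest prefix of layers whose cumulative rate fits the TFRC budget. The helper name and layer bitrates below are illustrative assumptions; the actual system filters RTP packet streams rather than abstract rate lists:

```python
# Minimal sketch of layer filtering against a congestion-controlled rate
# budget. Layer bitrates and the helper name are illustrative assumptions.

def layers_to_forward(layer_rates_kbps, permissible_kbps):
    """Return how many layers fit; layers are cumulative, base layer first."""
    total, count = 0, 0
    for rate in layer_rates_kbps:
        if total + rate > permissible_kbps:
            break
        total += rate
        count += 1
    return count

# Base layer at 200 kbps plus three enhancement layers.
rates = [200, 150, 150, 300]
assert layers_to_forward(rates, 250) == 1    # only the base layer fits
assert layers_to_forward(rates, 520) == 3    # two enhancement layers also fit
assert layers_to_forward(rates, 1000) == 4   # everything fits
```

A cache-filling transfer can use a larger budget than the client's TFRC rate, which is how the most relevant layers reach the end system while additional layers still flow into the proxy cache.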
The temporal ordering and the spatial viewpoints of video frames in conventional digital video content are completely determined at authoring time. Because the content lacks runtime navigation flexibility, traditional video-on-demand systems have very limited navigation controls, such as fast-forward and rewind. In contrast, we have developed a new form of interactive video content called active video, which supports hyper-linking among related video sequences for temporal navigation, and interpolation among stored video sequences that simultaneously capture a dynamic scene for spatial navigation. Active video thus enables the end user to choose the temporal frame sequencing and the viewing angle (even virtual ones) during playback. This additional navigation flexibility cannot be supported by traditional video distribution systems. The storage and playback of active video pose unique design challenges. Active video delivery requires computation support on the data path between the storage and the playback application to interpolate new views from stored views. Since active video is only one instance of a broad class of interactive media that require computation support on the server, the new distribution system designed for active video should also be programmable and extensible, so that it can store and perform runtime processing of other forms of interactive media. A software architecture for the shared programmable computation framework also needs to address performance and data isolation issues. We have designed and implemented a comprehensive active video authoring, compression, storage, and playback system called Memphis. In this paper, we describe the design and implementation of the storage and playback components, which address the above design issues.
Widely deployed Database Management Systems (DBMSs) cannot meet the requirements of multimedia in querying, indexing, and content modeling. Therefore, extenders for multimedia data types have been proposed. These extensions, however, offer only limited semantic modeling and rely on basic index structures that do not address the full nature of multimedia data, for instance in nearest-neighbor search. In this context, the paper presents a methodology for enhancing extensible ORDBMSs for multimedia data. In particular, we introduce an MPEG-7 Multimedia Data Cartridge that includes a semantically rich metadata model for multimedia content based on the MPEG-7 standard. Furthermore, to meet the needs of efficient multimedia query processing, we created within this Cartridge a new indexing and query framework for various types of retrieval operations.
Decentralized peer-to-peer (P2P) file sharing systems, in which peers query each other for content, are the most widely used in today's Internet. In unstructured decentralized P2P systems, there is no direct connection between content location and system topology. Searches in such systems are typically broadcast within a limited region of the network and thus may not receive a response if the content lies outside that region. Structured decentralized P2P systems do tie content location to the system topology, so queries can be directed to a peer that can respond definitively. Research in distributed computing has examined the problem of matching a client process to a desired server process; one approach to this distributed match-making problem is to have the server "post", or replicate, information to other nodes in the system. We adapt this approach to the decentralized P2P file sharing environment. In this paper, we propose a "posting" protocol to improve the success of searches in decentralized P2P systems: by having peers replicate keyword information to other peers, the search success rate can be increased. We evaluate different posting policies and compare the results for searching with and without posting.
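A toy model of posting illustrates why replication raises the search success rate. The class, posting fanout, and probe counts are illustrative assumptions, not the evaluated policies:

```python
# Toy sketch of keyword posting in an unstructured P2P overlay: each
# content owner replicates a (keyword -> owner) record to a random
# subset of peers, and a search succeeds if any probed peer holds a
# posting. All names and parameters are illustrative assumptions.

import random

class Peer:
    def __init__(self, pid):
        self.pid = pid
        self.postings = {}                 # keyword -> set of owner ids

    def accept_posting(self, keyword, owner):
        self.postings.setdefault(keyword, set()).add(owner)

def post(peers, owner, keyword, fanout):
    """Owner replicates its keyword record to `fanout` random peers."""
    for target in random.sample(peers, fanout):
        target.accept_posting(keyword, owner.pid)

def search(peers, keyword, probes):
    """Probe a few random peers; return any owner ids found."""
    found = set()
    for peer in random.sample(peers, probes):
        found |= peer.postings.get(keyword, set())
    return found

random.seed(7)
peers = [Peer(i) for i in range(200)]
post(peers, peers[0], "vacation.mpg", fanout=30)

# With postings on 15% of peers, a modest number of probes usually hits one.
hits = sum(bool(search(peers, "vacation.mpg", probes=20)) for _ in range(100))
assert hits > 50
```

Without posting, the same probes would have to land on the single owner to succeed; replication turns a near-certain miss into a near-certain hit at the cost of storing small keyword records.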
Recent years have seen tremendous growth of interest in streaming continuous media (CM), such as video, over the Internet. This growth creates an enormous increase in demand on server and networking resources. To minimize service delays and to reduce the load placed on these resources, we propose an Overlay Caching Scheme (OCS) for overlay networks. OCS utilizes virtual cache structures to coordinate distributed overlay caching nodes along the delivery path between the server and the clients, and it establishes and adapts these structures dynamically according to clients' locations and request patterns. Compared with existing video caching techniques, OCS offers better performance in terms of average service delay, server load, and network load in most cases in our study.
To achieve scalable and efficient on-demand media distribution, existing solutions mainly use multicast as the underlying data delivery mechanism. However, due to the intrinsic conflict between synchronous multicast transmission and the asynchronous nature of on-demand media delivery, these solutions either suffer from large playback delay or require clients to be capable of receiving multiple streams simultaneously and buffering large amounts of data. Moreover, the limited and slow deployment of IP multicast hinders their application on the Internet.
To address these problems, we propose asynchronous multicast, which directly supports on-demand data delivery. Asynchronous multicast is an application-level solution; when deployed on a proxy network, it achieves stable and scalable media distribution. In this paper, we focus on the problem of efficient media distribution. We first propose a temporal dependency model to formalize the temporal relations among asynchronous media requests. Based on this model, we introduce the Media Distribution Graph (MDG), which represents the dependencies among all asynchronous requests in the proxy network. We then formulate the problem of efficient media distribution as finding the Media Distribution Tree (MDT), a minimum spanning tree on the MDG. Finally, we present our algorithm for MDT construction and maintenance. Through theoretical analysis and experimental study, we show that our solution meets the goals of scalability, efficiency, and low access latency at the same time.
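The tree-finding step can be illustrated with a standard minimum-spanning-tree computation. The edge-weight model below (temporal gap plus a cross-proxy penalty) is an assumption for illustration, not the paper's temporal dependency model:

```python
# Illustrative sketch of the distribution-tree idea: requests are nodes
# of a weighted graph (an MDG analogue), edge weights stand for the cost
# of feeding one request from another, and the delivery tree is a
# minimum spanning tree. The weight model is a hypothetical assumption.

import heapq

def minimum_spanning_tree(nodes, weight):
    """Prim's algorithm; returns tree edges as (parent, child) pairs."""
    root = nodes[0]
    in_tree = {root}
    frontier = [(weight(root, v), root, v) for v in nodes[1:]]
    heapq.heapify(frontier)
    edges = []
    while frontier and len(in_tree) < len(nodes):
        w, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue
        in_tree.add(v)
        edges.append((u, v))
        for x in nodes:
            if x not in in_tree:
                heapq.heappush(frontier, (weight(v, x), v, x))
    return edges

# Requests as (proxy id, arrival time in seconds); node 0 is the server.
requests = [(0, 0), (1, 5), (1, 9), (2, 30), (2, 33)]

def weight(a, b):
    # Hypothetical cost: temporal gap plus a penalty for crossing proxies.
    return abs(a[1] - b[1]) + (10 if a[0] != b[0] else 0)

tree = minimum_spanning_tree(requests, weight)
assert len(tree) == len(requests) - 1    # the tree spans every request
```

Under this cost model, nearby requests at the same proxy feed each other cheaply, so only a few edges cross back toward the server, which is exactly the sharing the MDT is meant to capture.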
Since multimedia applications are known to be resource-hungry and mobile devices are resource-poor, we propose in this paper techniques to reduce the energy consumption of streaming media applications running on mobile hosts. Our techniques are proxy-based and involve power-friendly transformations on the requested streams to limit the energy required for receiving and decoding the data. Additionally, our proxy employs intelligent network transmission techniques to reduce the energy needed for network reception of streaming data. We implement our techniques in a prototype proxy and client and demonstrate their efficacy via an experimental evaluation. Our results show that the power-friendly transformations are effective over a range of bit rates and stream resolutions, while the intelligent transmission techniques can reduce potential energy wastage during network reception by 65-98%.
Widespread availability of high-speed networks and fast, cheap computation have rendered high-quality Media-on-Demand (MoD) feasible. Research on scalable MoD has resulted in many efficient schemes that involve segmentation and asynchronous broadcast of media data, requiring clients to buffer and reorder out-of-order segments efficiently for serial playout.
In such schemes, buffer space requirements run to several hundred megabytes and hence require efficient buffer management techniques involving both primary memory and secondary storage: while disk sizes have increased exponentially, access speeds have not kept pace at all. The conversion of out-of-order arrival to in-order playout suggests the use of external memory priority queues, but their content-agnostic nature prevents them from performing well under MoD loads. In this paper, we propose and evaluate a series of simple heuristic schemes which, in simulation studies and in combination with our scalable MoD scheme, achieve significant improvements in storage performance over existing schemes.
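One plausible heuristic of this kind, sketched below as an assumption rather than the paper's scheme, keeps segments needed soon in RAM and spills segments far from their playout time to disk, converting out-of-order arrival into cheap in-order playout:

```python
# A minimal sketch of one plausible buffer-management heuristic (an
# assumption, not the evaluated schemes): segments arriving far ahead of
# their playout time are spilled to disk, while segments needed soon
# stay in RAM.

import heapq

class SegmentBuffer:
    def __init__(self, ram_slots):
        self.ram_slots = ram_slots
        self.ram = []      # min-heap of (playout_index, segment)
        self.disk = []     # min-heap for spilled segments

    def arrive(self, index, segment):
        heapq.heappush(self.ram, (index, segment))
        if len(self.ram) > self.ram_slots:
            # Spill the segment farthest from playout (largest index).
            far = max(self.ram)
            self.ram.remove(far)
            heapq.heapify(self.ram)
            heapq.heappush(self.disk, far)

    def next_for_playout(self):
        """Pop the next segment in playout order, from RAM or disk."""
        if self.ram and (not self.disk or self.ram[0] <= self.disk[0]):
            return heapq.heappop(self.ram)
        return heapq.heappop(self.disk)

buf = SegmentBuffer(ram_slots=3)
for idx in [5, 0, 7, 2, 1, 6, 3, 4]:        # out-of-order arrivals
    buf.arrive(idx, f"seg{idx}")
order = [buf.next_for_playout()[0] for _ in range(8)]
assert order == list(range(8))              # in-order playout
```

Because disk reads during playout become sequential in playout order, this kind of content-aware policy avoids the random-access pattern that makes content-agnostic external-memory priority queues perform poorly under MoD loads.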
To address the scalability issue in video-on-demand systems, many broadcasting schemes have been proposed to date. The major performance parameters of such a broadcasting scheme are the server broadcast bandwidth, the user bandwidth, and the user's initial waiting time. The currently known broadcasting schemes with the lowest server bandwidth requirement demand the same bandwidth on the user side as on the server side. We propose a new broadcasting scheme, named Generalized Fibonacci Broadcasting (GFB), to limit the user-side bandwidth requirement. For any given combination of server and user bandwidths, GFB achieves the least user waiting time among all currently known broadcasting schemes. Furthermore, GFB is very easy to implement.
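The following is a hypothetical illustration of the general principle behind Fibonacci-style periodic broadcasting, not the GFB algorithm itself: when segment sizes grow along a Fibonacci-like sequence, the first (smallest) segment, and hence the worst-case startup wait, shrinks rapidly as server channels are added:

```python
# Hypothetical illustration of Fibonacci-style segmentation; this is
# NOT the GFB algorithm, only a sketch of the underlying principle.

def fibonacci_segments(channels):
    """Relative segment sizes 1, 1, 2, 3, 5, ... one per server channel."""
    sizes = []
    a, b = 1, 1
    for _ in range(channels):
        sizes.append(a)
        a, b = b, a + b
    return sizes

def worst_case_wait(video_minutes, channels):
    """A client waits at most one broadcast period of the first segment."""
    sizes = fibonacci_segments(channels)
    return video_minutes * sizes[0] / sum(sizes)

# For a two-hour video, each added channel shrinks the worst-case wait.
waits = [worst_case_wait(120, c) for c in (4, 6, 8)]
assert waits[0] > waits[1] > waits[2]
```

The trade-off GFB navigates is visible even in this sketch: larger later segments demand more simultaneous reception on the user side, which is exactly the user-bandwidth requirement GFB limits.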
We present the first broadcasting protocol that can alter the number of channels allocated to a given video without inconveniencing the viewer and without causing any temporary bandwidth surge. Our variable bandwidth broadcasting (VBB) protocol assigns to each video a minimum number of channels whose bandwidths are all equal to the video consumption rate. Additional channels can be assigned to the video at any time to reduce customer waiting time, or reclaimed to free server bandwidth. The cost of this additional flexibility is quite reasonable, as the bandwidth requirements of our VBB protocol fall between those of the fast broadcasting protocol and the new pagoda broadcasting protocol.
Video-on-Demand is undoubtedly a promising technology for many important applications. Several periodic broadcast techniques have been proposed for the cost-effective implementation of such systems. However, the once-and-for-all implementation strategies of these broadcast schemes imply a common bandwidth requirement for all the clients. Multiresolution techniques address this issue by sacrificing video quality. We present an alternative approach which does not have this drawback. Our protocol, the HEterogeneous Receiver-Oriented (HeRO) Broadcasting, allows receivers of various communication capabilities to share the same periodic broadcast, and therefore enjoy the same video quality while requiring very little buffer space. This is achieved using a new data segmentation scheme with a surprising property. We present the broadcast technique, and compare its performance with that of existing methods.