Upgrading strategies are investigated for application in future super-broadband subscriber loops, where technologies such as super-multi-channel distribution, super-high-definition pictures, super-large-capacity storage, and all-optical transport are expected to become available. Gradual upgrading is considered so that future systems can maintain compatibility with existing systems. First, the time frame and strategies for subscriber loop upgrades are reviewed, and assumptions about the evolution of broadband multimedia distribution systems are discussed. Next, upgrading strategies for broadband multimedia distribution are surveyed in order to discuss new schemes for representing super-high-definition (SHD) video channels and to show that a considerably larger channel count can be required when upgradability and compatibility are taken into consideration. Finally, optimization of the above-mentioned upgradable and compatible super-multimedia distribution schemes is investigated.
A new scheme called recursive patching is proposed in this research to reduce the service bandwidth consumption of a video-on-demand (VOD) system by allowing later clients to merge their services with that of a preceding client that requests the same media. A series of practical on-line algorithms is presented to reduce the high complexity of optimal recursive patching. A control window is introduced to regulate the degree of service merging. A simple greedy method without control does not work well, while a cost-aware method with a carefully chosen control window provides a simple and robust solution. Furthermore, two starting rules are considered. Our results indicate that the starting rule should be chosen carefully to match different recursive patching schemes. It is demonstrated that a cost-aware recursive patching scheme with a proper starting rule can successfully adapt to various incoming arrival distributions.
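The bandwidth saving behind patching can be illustrated with a one-level (non-recursive) sketch; the function, its time units, and the fixed patching window are illustrative assumptions, not the paper's recursive algorithm or its cost-aware control-window policy:

```python
def server_cost(arrivals, media_len, window):
    """Total server transmission (in time units of media) with simple
    one-level patching: a client arriving within `window` of the last
    full multicast stream joins it and only receives a unicast patch
    for the prefix it missed; otherwise a new full stream is started."""
    cost = 0.0
    last_full = None
    for t in sorted(arrivals):
        if last_full is None or t - last_full > window:
            last_full = t          # start a new complete multicast stream
            cost += media_len
        else:
            cost += t - last_full  # unicast patch for the missed prefix
    return cost
```

With arrivals at t = 0, 2, 5, 30, a 60-unit media, and a 10-unit window, the server sends two full streams plus patches of length 2 and 5, instead of four full streams without patching.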
Experience in the use of the Internet as a delivery medium for multimedia-based applications has revealed serious deficiencies in its ability to provide the QoS required by multimedia applications. We propose an extension to TCP that addresses the QoS requirements of applications with soft real-time constraints. Although TCP has been found unsuitable for real-time applications, minor modifications can adjust it to better meet the QoS needs of applications with soft real-time requirements. Enhancing TCP with support for this group of applications is important, since the congestion control mechanism of TCP assures the stability of the Internet. In contrast, specialized multimedia protocols that lack appropriate congestion control can never be deployed on a large scale. Two factors of great importance for applications with soft real-time constraints are jitter and throughput. By relaxing the reliability offered by TCP, the extension gives better jitter characteristics and improved throughput. The extension only needs to be implemented at the receiving side. The reliability provided is controlled by the receiving application, thereby allowing a flexible tradeoff between different QoS parameters. In this paper, our TCP extension is presented and analyzed. The analysis investigates how the different application-controlled parameters influence performance. Our analysis is supported by a simulation study that investigates the tradeoff between interarrival jitter, throughput, and reliability. The simulation results also confirm that the extended version of TCP still behaves in a TCP-friendly manner.
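The core receiver-side idea of relaxed reliability can be sketched as a deadline test; the function and parameter names are illustrative, not the paper's actual mechanism:

```python
def should_skip(now, playout_deadline, srtt):
    """Receiver-side partial reliability sketch: stop waiting for a lost
    segment once a retransmission (at least one smoothed RTT away) could
    no longer arrive before the segment's playout deadline. Skipping the
    gap trades reliability for lower delivery jitter."""
    return now + srtt >= playout_deadline
```

A receiver would apply this per hole in the sequence space, delivering subsequent data to the application instead of stalling.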
Packet delay and loss are two essential problems for the transmission of real-time voice over the best-effort Internet. Much effort has been devoted to packet-level error control and delay jitter concealment. In our previous work, a per-packet adaptive playout algorithm based on time-scale modification of the audio signal was proposed to minimize packet dropping at the receiver end due to delay jitter. This work further extends the applicability of the packet-based Synchronized OverLap-and-Add (SOLA) algorithm to integrated delay/error concealment. Considering the timing relationship with the Forward Error Correction (FEC) parity packet for loss recovery, the proposed adaptive playout algorithm estimates and adapts to packet loss as well as delay jitter. To enhance playout quality, the per-packet time-stretching factor is bounded by a content classification module, which classifies the audio signal into different categories. We also investigate the impact of the stretching-ratio transition strategy on the perceived quality. To demonstrate the proposed adaptive playout, both analysis and a performance evaluation are provided by comparing it to silence-based adaptive playout.
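As background, the classic autoregressive playout-delay estimator (the usual baseline that adaptive playout schemes refine) can be sketched as follows; the smoothing constants are conventional values, and this is not the paper's per-packet SOLA algorithm itself:

```python
def playout_delay(delays, alpha=0.998, beta=4.0):
    """Classic adaptive playout estimate: an exponentially smoothed network
    delay plus a safety margin proportional to smoothed jitter. Packets
    arriving later than the returned value would be dropped (or, in
    time-scale-modification schemes, absorbed by stretching audio)."""
    d_hat = v_hat = None
    for d in delays:
        if d_hat is None:
            d_hat, v_hat = d, 0.0
        else:
            d_hat = alpha * d_hat + (1 - alpha) * d
            v_hat = alpha * v_hat + (1 - alpha) * abs(d - d_hat)
    return d_hat + beta * v_hat
```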
Internet multimedia applications have timing constraints that are often not met by TCP, the de facto Internet transport protocol; hence, most multimedia applications use UDP. Since UDP does not guarantee data arrival, UDP flows often have high data loss rates. Network data loss can be ameliorated by the use of Forward Error Correction (FEC), where a server adds redundant data to the flow to help the client repair lost data. However, the effectiveness of FEC depends upon the network burst loss rate, and current FEC approaches are non-adaptive or adapt without effectively monitoring this rate. We propose a Forward Error Correction protocol that explicitly adapts the redundancy to the measured network burst loss rate. Through evaluation under a variety of network conditions, we find our adaptive FEC approach achieves minimal end-to-end delay and low loss rates after repair.
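The adaptation step can be illustrated by choosing the smallest parity count that meets a residual-loss target. This sketch assumes independent packet losses (a simplification; the paper targets burst losses), and the function names are illustrative:

```python
from math import comb

def residual_loss(n, k, p):
    """P(block unrecoverable) for n data + k parity packets under an ideal
    erasure code and independent loss probability p: the block fails when
    more than k of the n + k packets are lost."""
    total = n + k
    return sum(comb(total, i) * p**i * (1 - p)**(total - i)
               for i in range(k + 1, total + 1))

def choose_parity(n, p, target, k_max=16):
    """Smallest parity count whose residual loss meets the target."""
    for k in range(k_max + 1):
        if residual_loss(n, k, p) <= target:
            return k
    return k_max
```

A sender would re-run `choose_parity` whenever the measured loss rate changes, adding redundancy only when conditions demand it.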
Proxy servers play an important role between servers and clients in various multimedia systems on the Internet. Since proxy servers do not have an infinite-capacity cache for keeping all continuous media data, the challenge for the replacement policy is to determine which streams should be cached on or removed from the proxy server. In this paper, a new proxy replacement algorithm, named the Least Popular Used (LPU) caching algorithm, is proposed for layered encoded multimedia streams on the Internet. The LPU method takes both the short-term and long-term popularity of a video into account in determining the replacement policy. Simulation evaluation shows that our proposed scheme achieves better results than some existing methods in terms of cache efficiency and replacement frequency under both static and dynamic access environments.
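The victim-selection idea of combining short- and long-term popularity can be sketched as a weighted score; the weight and the rate representation are illustrative assumptions, not the paper's actual LPU formula:

```python
def lpu_score(short_rate, long_rate, w=0.6):
    """Combined popularity: w weights recent (short-term) demand against
    sustained (long-term) demand. The weight value is illustrative."""
    return w * short_rate + (1 - w) * long_rate

def lpu_victim(stats, w=0.6):
    """stats: {stream_id: (short_term_rate, long_term_rate)}. The cached
    stream with the lowest combined popularity is the replacement victim."""
    return min(stats, key=lambda s: lpu_score(*stats[s], w))
```

Mixing both rates keeps a stream with steady long-term demand cached through a short lull, while still evicting streams that were never popular.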
In ATM networks, real-time VBR traffic, which is inherently bursty in nature, requires dynamic bandwidth allocation. Algorithms have been proposed in the literature where bandwidth is allocated in linear proportion to the requirements, but these algorithms lead to unequal growth in queue sizes and reduced network performance. The Minmax algorithm is one such scheme that maintains a fair distribution of buffer lengths across the sources of a class and performs better for homogeneous sources than other dynamic allocation schemes. Nevertheless, it can be applied only when the sources demand the same QoS parameters. To overcome this inadequacy, we propose a novel method of dynamic bandwidth allocation, viz., the Modified Minmax Algorithm (MMA), for multiple-QoS sources. Unlike other models, MMA takes the cell loss ratio (CLR) into account in addition to buffer occupancies and arrival rates. The performance of MMA has been evaluated using multiple-QoS sources simulated for this purpose. The results are encouraging, as it gives better QoS performance (by an order of one) for heterogeneous sources with different QoS values. Results evaluating delay and jitter performance for various buffer sizes are also presented in this paper.
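To illustrate how a CLR target can enter an allocation rule at all, here is a toy proportional scheme in which each source's demand (arrival rate plus backlog) is weighted by the stringency of its CLR target; this is only a sketch of the idea and not the paper's MMA:

```python
from math import log10

def allocate(capacity, sources):
    """sources: list of (arrival_rate, backlog, clr_target). Each source's
    demand is weighted by -log10(clr_target), so a 1e-6 target counts twice
    as heavily as 1e-3; capacity is then shared proportionally.
    Weighting choice is illustrative."""
    weights = [(rate + backlog) * -log10(clr) for rate, backlog, clr in sources]
    total = sum(weights)
    return [capacity * w / total for w in weights]
```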
The soon-to-be-released MPEG-7 standard provides a Multimedia Content Description Interface. In other words, it provides a rich set of tools to describe content with a view to facilitating applications such as content-based querying, browsing, and searching of multimedia content. In this paper, we describe practical applications of MPEG-7 tools. We use descriptors of features such as color, shape, and motion to both index and analyze content. The aforementioned descriptors stem from our previous work and are currently in the draft international MPEG-7 standard. In our previous work, we have shown the efficacy of each of the descriptors individually. In this paper, we show how we combine color and motion to effectively browse video in our first application. In our second application, we show how we can combine shape and color to recognize objects in real time. We will present a demonstration of our system at the conference. We have already successfully demonstrated it to the Japanese press.
With the growing ubiquity and portability of multimedia-enabled devices, Universal Multimedia Access is emerging as one of the important applications for the next generation of multimedia systems. The basic concept underlying Universal Multimedia Access is the adaptation, summarization, and personalization of multimedia content according to the user environment. The different dimensions for adaptation include rate reduction, adaptive spatial and temporal sampling, quality reduction, summarization, personalization, and re-editing of the multimedia content according to the user environment. The relevant dimensions of the user environment include device capabilities, bandwidth, user preferences, usage context, and spatial and temporal awareness. The emerging MPEG-7 and MPEG-21 standards address Universal Multimedia Access in a number of ways. MPEG-7 provides tools for describing hints, content variations, space and frequency views, and summaries of multimedia content. MPEG-7 also provides tools for describing user preferences and usage history. MPEG-21 is addressing the description of the user environment, which includes terminals and networks. In this paper we present an application-based perspective on Universal Multimedia Access using MPEG-7. We describe the different tools, methods, and systems that use MPEG-7 for enabling Universal Multimedia Access, and describe a system for annotating and transcoding images using MPEG-7.
In a unique study carried out at the University of California at Berkeley (UCB) entitled "How Much Information", the yearly production of new information generated throughout the world has been estimated at between 1 and 2 exabytes, or about 250 MB for each man, woman, and child on Earth. Of this information, about 93% is in digital format. We are increasingly in danger of being swamped by this information growth, and now more than ever we need sophisticated tools to control, manage, and index this content. MPEG-7, the standard for descriptions of multimedia content, is intended to be the solution for managing this explosion of content. This paper begins with some findings from the UCB study, gives a brief introduction to what MPEG-7 is, and outlines typical MPEG-7 applications ranging from content browsers to multimedia authoring tools, showing how MPEG-7 can be used to manage digital content.
Among the various metadata standards, MPEG-7 is the leading standard providing a standardized way of describing multimedia content. In the field of consumer electronics, providing an easy-to-use user interface for the majority of novice users is one of the most important issues in the development of MPEG-7 applications. The proposed news browser provides an intuitive interface for conventional VCR users, while providing a powerful way of browsing news articles of interest. The news browser is based on four images automatically extracted from each article of the selected news program. The anchor shot image is selected mainly for the purpose of separating the articles. The icon image extracted from the anchor shot image provides a single-image summarization of the given article. The synthesized key text image helps users roughly understand the content by providing selected text information. The episode shot image provides a supplementary visual aid by showing a single key frame from the episode shot. By glancing at the selected four images, users can get a rough idea of each article and select only the articles of interest to be played. The extracted description, including the key images for the browser and the structure information, is generated based on the MPEG-7 standard to provide interoperability between standard-based applications.
Current advanced television concepts envision data broadcasting along with the video stream, to be used by interactive applications at the client end. However, these applications do not proactively personalize the experience and may not allow user requests for additional information. We propose content enhancement using automatic retrieval of additional information based on video content and user interests. Our paper describes Video Retriever Genie, a system that enhances content with additional information based on metadata that provides semantics for the content. The system is based on a digital TV (Philips TriMedia) platform. We enhance content through user queries that define information extraction tasks retrieving information from the Web. We present several examples of content enhancement, such as additional movie character/actor information, financial information, and weather alerts. Our system builds a bridge between traditional TV viewing and the domain of personal computing and the Internet. The boundaries between these domains are dissolving, and this system demonstrates one effective approach for content enhancement. In addition, we illustrate our discussion with examples from two existing standards, MPEG-7 and TV-Anytime.
In this paper, we propose a service architecture that enables end users to establish multipoint multimedia communications without being conscious of one another's networking environment. The proposed architecture is composed of three conceptual models based on RM-ODP (Reference Model of Open Distributed Processing), concerned with the enterprise, information, and computational viewpoints, respectively. From the enterprise viewpoint, the Session Coordinator Role is introduced, which manages connections among end users in consideration of their terminal devices' capabilities. Furthermore, the Content Provider and Content Consumer Roles, which play the roles of providing and consuming multimedia content, respectively, are also introduced to represent these capabilities. These roles are embodied as service components from the computational viewpoint. From the information viewpoint, two concepts, session and content flow, are introduced to facilitate unified management of various multimedia communications. The concept of continuous session mobility is also a key idea of the architecture: it enables end users to move around while participating in a multimedia conference, independently of their networking environments. We show a concrete model realizing this by implementing a prototype system based on the architecture.
In this paper, a delay equalization approach is proposed for cohesive conference presentation with minimal screen-freezing effect. The screen-freezing effect is due to varying delays in the communication channels involved in broadcasting, which threatens the goal of a uniform delay distribution among data packets for synchronized presentation broadcast. We wish to achieve uniformity among packet arrival times at the recipients. This objective is achieved by transforming a given input delay distribution D into the desired output density through a delay equalization process. For the general case of packet and frame delay distributions of a given input, the desired output is obtained through this equalization process. In the case of a normal delay distribution, a memoryless system g(D) is determined as an approximation to the delay equalizer such that the equalizer output O(t) = g[D(t)] is uniformly distributed within the allowable delay limits (a, b), assuring the specified quality-of-service (QoS) parameters. The corresponding delay is added at each recipient workstation as the wait period required before it begins its designated presentation. For a zero-mean normal delay density, the equalizer transfer function can be given in closed form as O(t) = g[D(t)] = (b - a) * (1/2)[1 + erf(D / sqrt(2R(0)))] + a, where erf(x) is the standard error function and R(0) is the delay variance. For general cases, a histogram approximation is adopted as an asymptotic delay equalization means. This technique provides a means of modifying the dynamic range of acquired data by transforming it into a desired distribution. In experiments, equalizer characteristic functions are derived for a set of selected input delays to obtain the desired output. The delay equalizer system developed here is suited for deployment in a distributed hierarchical conferencing environment.
To accommodate broadcast continuity, the multimedia presentation is provided with a shared workspace among the web servers acting as administrators in the network. The arrival times and the presentation durations of the data packets at the recipient workstations are recorded in this shared workspace. For a case study, a Poisson-type delay density is selected as the input to the delay equalizer, with the desired output uniformly distributed. The transfer function and the input and output distributions are derived and depicted in parametric form for different mean values in the simulation results to illustrate the effectiveness of the equalization process. We have demonstrated that this method is appropriate for equalizing lumped communication channel delays. Further work is in progress on distributed channel delay equalization.
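The closed-form equalizer for the zero-mean normal case is simply the normal CDF transform; a direct sketch, writing sigma = sqrt(R(0)) and using illustrative delay limits:

```python
from math import erf, sqrt

def equalize(d, sigma, a, b):
    """CDF transform for a zero-mean normal delay D ~ N(0, sigma^2):
    O = (b - a) * Phi(D) + a is uniformly distributed on (a, b),
    where Phi is the normal CDF, (1/2)[1 + erf(D / (sqrt(2) * sigma))]."""
    phi = 0.5 * (1.0 + erf(d / (sqrt(2.0) * sigma)))
    return (b - a) * phi + a
```

The returned value is the equalized delay: each recipient waits until O(t) before starting its designated presentation, so early packets absorb the most wait.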
In order to cater to the needs of users in the communications field for high speed, high bandwidth, and error-free reception, researchers have focused their attention on B-ISDN/ATM in recent years. Since the efficiency of ATM networks depends largely on ATM switching, this paper focuses on achieving high throughput in an ATM switch architecture. The Banyan switch architecture is taken as our case study. We have simulated an 8x8 Banyan architecture with the proposed scheme using a Reed-Solomon (RS) coder and decoder. The architecture has been studied for its performance under self-similar traffic. The performance of the proposed architecture in terms of cell loss and average delay versus cell arrival rate is presented.
This paper presents a new scheme for compact shape coding that reduces the bandwidth needed for low-bit-rate MPEG-4 applications. Our scheme is based on a coarse representation of the alpha plane with a block-size resolution of 8x8 pixels. This arrangement saves bandwidth and reduces algorithm complexity (number of computations) compared to the Context-based Arithmetic Encoding (CAE) algorithm. In our algorithm, we encode the alpha plane of a macroblock with only 4 bits, and the number of encoding bits can be further reduced by using Huffman coding. Only contour macroblocks are encoded: transparent macroblocks are treated as background macroblocks, while opaque macroblocks are treated as object macroblocks. We show that the bandwidth saving in representing the alpha plane can reach a factor of 9.5. Such a scheme is appropriate for mobile applications, where both bandwidth and processing power are scarce. We also expect our scheme to be compatible with the MPEG-4 standard.
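A 4-bit coarse code per 16x16 macroblock can be pictured as one bit per 8x8 sub-block; the threshold and bit ordering here are illustrative assumptions, not the paper's exact mapping:

```python
def code_macroblock(alpha, thresh=0.5):
    """alpha: 16x16 grid of 0/1 alpha-plane pixels. Each of the four 8x8
    sub-blocks contributes one bit (1 if its fraction of object pixels
    exceeds thresh), yielding a 4-bit coarse shape code per macroblock."""
    bits = 0
    for by in (0, 8):
        for bx in (0, 8):
            ones = sum(alpha[y][x]
                       for y in range(by, by + 8)
                       for x in range(bx, bx + 8))
            bits = (bits << 1) | (1 if ones / 64.0 > thresh else 0)
    return bits
```

Compared with losslessly coding 256 alpha pixels, 4 bits per contour macroblock is where the large bandwidth saving comes from, at the cost of an 8x8 boundary resolution.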
A view-dependent progressive mesh (VDPM) coding algorithm is proposed in this research to facilitate interactive 3D graphics streaming and browsing. The proposed algorithm splits a 3D graphics model into several partitions, progressively compresses each partition, and reorganizes topological and geometrical data to enable the transmission of visible parts with a higher priority. With the Real-Time Streaming Protocol (RTSP), the server is informed of the viewing parameters before transmission. The server can then adaptively transmit visible parts in detail while cutting off invisible parts. Experimental results demonstrate that the proposed algorithm reduces the required transmission bandwidth and exhibits acceptable visual quality even at low bit rates.
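The prioritization step can be caricatured as a visibility split over the partitions; this crude back-face test (each partition summarized by one representative outward normal) stands in for the paper's view-dependent selection and is purely illustrative:

```python
def partition_priority(partitions, view_dir):
    """partitions: list of dicts with 'id' and a representative outward
    'normal'. Partitions facing away from the camera (normal . view >= 0)
    are culled; the rest are scheduled for transmission first."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    visible = [p['id'] for p in partitions if dot(p['normal'], view_dir) < 0]
    culled = [p['id'] for p in partitions if dot(p['normal'], view_dir) >= 0]
    return visible, culled
```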
With the proliferation of digital media such as images, audio, and video, robust digital watermarking and data hiding techniques are needed for copyright protection, copy control, annotation, and authentication. While many techniques have been proposed for digital color and grayscale images, not all of them can be directly applied to binary document images. The difficulty lies in the fact that changing pixel values in a binary document could introduce irregularities that are very visually noticeable. Over the last few years, we have seen a growing but limited number of papers proposing new techniques and ideas for document image watermarking and data hiding. In this paper, we present an overview and summary of recent developments on this important topic, and discuss important issues such as robustness and data hiding capacity of the different techniques.
Semi-fragile watermarking techniques aim to prevent tampering and fraudulent use of modified images. A semi-fragile watermark monitors the integrity of the content of the image, but not its exact representation. The watermark is thus designed so that, if the content of the image has not been tampered with, integrity is proven as long as the correct key is known and the image has sufficiently high quality. However, if some parts of the image are replaced by someone who does not possess the key, the watermark information will not be reliably detected, which can be taken as evidence of forgery. In this paper we compare the performance of nine semi-fragile watermarking algorithms in terms of their miss probability under forgery attacks, and in terms of their false alarm probability under mild, hence non-malicious, signal processing operations that preserve the content and quality of the image. We propose desiderata for semi-fragile watermarking algorithms and indicate the promising algorithms among existing ones.
Robust identification of audio, still images, and video is currently almost always associated with watermarking. Although watermarking is a powerful tool, there are some relevant issues with its use. In this paper we review these issues and, at the same time, propose to reconsider the older technique of robust feature recognition as a serious alternative. Moreover, we argue that a benefit is to be expected from the combination of robust feature recognition and digital watermarking, not only in the context of content recognition but also in other applications.
Protecting the media of the future - securing the future of the media - is an essential task for our new century. Security is defined by security measures, e.g. confidentiality, integrity, authenticity, and non-repudiation. Most of these measures rely on watermarking techniques and cryptographic mechanisms such as cipher systems, digital signature schemes, and authentication protocols. The security of these mechanisms is mainly based on the authenticity of specific data such as keys and attributes; both must be bound to their owner in an authentic manner. Otherwise, the authenticity of data and of owners cannot be guaranteed and, subsequently, security cannot be assured. Therefore, in this paper we focus on data and entity (owner) authentication. We introduce a general framework to protect media data by combining different existing techniques: cryptographic, watermarking, and biometric approaches. As an example, we describe general concepts for a content-fragile watermarking approach for digital images and a generic approach for biometric authentication.
The effect of a partially known watermarking channel, additive noise, and multiple watermarks on the watermarking capacity region is studied. A channel can be partially known for the following reasons: (a) randomly time-varying channel characteristics, (b) unknown attacks by an adversary, and (c) uncertainty due to estimation errors, such as in oblivious watermarking techniques. A mathematical model for this scenario is introduced. No assumptions are made regarding the probability distribution of the channel. Lower and upper bounds on the feasible watermarking rate region are derived. It is shown that, in terms of watermarking capacity, it is better to cancel the effect of an interfering watermark than to treat it as noise. As a special case, it is also observed that capacity estimates based on the popular additive Gaussian noise model tend to either overestimate or underestimate the capacity of a single watermark channel. Numerical results are also presented. Finally, we observe that the proposed mathematical model applies to real-life applications such as image/video watermarking: image processing operations such as scaling and geometrical transformations, which distort the image (not just add noise), fall under the proposed model.
This paper presents a review of some influential work in the area of digital watermarking using communications and information-theoretic analysis. After a brief introduction, some popular approaches are classified into different groups and an overview of various algorithms and analysis is provided. Insights and potential future trends in the area of watermarking theory are discussed.
Many multimedia data hiding systems demand either multiple bits or multiple sets of data to be embedded. This paper examines the modulation and multiplexing techniques for accomplishing the task of extending the basic single-bit embedding to multiple-bit embedding. Amplitude modulo modulation, orthogonal/bi-orthogonal modulation, and TDMA and CDMA type modulation/multiplexing are discussed and compared. Several examples are included to demonstrate the use of such techniques in practical designs.
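Orthogonal modulation, one of the techniques surveyed, can be sketched concretely with Walsh-Hadamard sequences: log2(M) bits select one of M orthogonal sequences added to the host, and the detector picks the sequence with the highest correlation. The embedding strength and host values below are illustrative:

```python
def hadamard(n):
    """Walsh-Hadamard matrix of order n (n a power of two): mutually
    orthogonal +/-1 rows used as the M signaling sequences."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def embed(host, symbol, H, strength=2.0):
    """Add the chosen sequence to the host samples."""
    return [h + strength * w for h, w in zip(host, H[symbol])]

def detect(signal, H):
    """Maximum-correlation detector: recover the embedded symbol index."""
    corr = [sum(s * w for s, w in zip(signal, row)) for row in H]
    return max(range(len(H)), key=lambda i: corr[i])
```

With 8 sequences, each embedding carries 3 bits; orthogonality keeps the cross-correlations at zero so only host interference limits detection.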
The development and spread of multimedia services require authentication techniques to prove the originality and integrity of multimedia data and/or to localize the alterations made to the media. A wide variety of authentication techniques have been proposed in the literature, but most studies have focused primarily on still images. In this paper, we mainly address video authentication. We first summarize a classification of video tampering methods. Based on this proposed classification, the quality of existing authentication techniques can be evaluated. We then propose our own authentication system to combat those tampering methods. A comparison of the two basic authentication categories, fragile watermarks and digital signatures, is made, and the need for combining them is discussed. Finally, we address some issues in authenticating video in the broad sense: a mixture of visual, audio, and text data.
Digital video broadcasting is increasingly being adopted all over the world. Video broadcasters require that the viewable content of pay channels be protected from unauthorized copying and distribution by subscribers; this is copyright protection. Subscribers require assurance that they will not be wrongfully implicated by the broadcasters; this is customers' rights protection. In this paper we present an integrated solution addressing both copyright protection and customers' rights protection in a video broadcasting environment. Copyright protection is addressed using a mask-based watermarking technique, and customers' rights protection is obtained through the use of an interactive watermarking protocol.
We present standards-compliant visible watermarking schemes for digital images and video in DCT-based compressed formats. The watermarked data is in the same compressed format as the original and can be viewed with standard tools and applications. Moreover, for most of the schemes presented, the watermarked data has exactly the same compressed size as the original. The watermark can be inserted and removed using a key for applications requiring content protection. The watermark application and removal algorithms are very efficient and exploit some features of compressed data formats (such as JPEG and MPEG) which allow most of the work to be done in the compressed domain.
Handling packet loss or delay in mobile and/or Internet environments is a challenging problem for multimedia transmission. Using a connection-oriented protocol such as TCP may introduce intolerable retransmission delay. Using a datagram-oriented protocol such as UDP may result in partial presentation in case of packet loss. In this paper, we propose a new method that uses our self-authentication-and-recovery images (SARI) to perform error detection and concealment in the UDP environment. The lost information in a SARI image can be approximately recovered from the embedded watermark, which includes content-based authentication information and recovery information. Images or video frames are watermarked a priori, so that no additional mechanism is needed in the networking or encoding process. Because the recovery is not based on adjacent blocks, the proposed method can recover corrupted areas even when information loss occurs in large or highly variant areas. Our experiments show the advantages of this technique in both transmission-time savings and its broad application potential.
In this paper, we introduce a new forensic tool that can reliably detect modifications, such as distortion due to steganography and watermarking, in digital images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression using a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms, with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
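The compatibility idea can be sketched with a first-pass heuristic: re-quantize the block's DCT with the given matrix and check whether decompressing those coefficients reproduces the block exactly. This simplified round-trip test stands in for the paper's full compatibility analysis; the coarse uniform quantization matrix in the test is illustrative:

```python
from math import cos, pi, sqrt

N = 8

def _alpha(u):
    return sqrt(1.0 / N) if u == 0 else sqrt(2.0 / N)

def dct2(block):
    """Orthonormal 8x8 2-D DCT-II (direct, unoptimized)."""
    return [[_alpha(u) * _alpha(v) * sum(
                block[x][y] * cos((2 * x + 1) * u * pi / (2 * N))
                            * cos((2 * y + 1) * v * pi / (2 * N))
                for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def idct2(coef):
    """Inverse of dct2."""
    return [[sum(_alpha(u) * _alpha(v) * coef[u][v]
                 * cos((2 * x + 1) * u * pi / (2 * N))
                 * cos((2 * y + 1) * v * pi / (2 * N))
                 for u in range(N) for v in range(N))
             for y in range(N)] for x in range(N)]

def jpeg_compatible(block, Q):
    """Heuristic: a genuine JPEG-decompressed block should survive
    re-quantization with Q and decompress back to itself exactly;
    a spatial-domain modification generally breaks this round trip."""
    C = dct2(block)
    D = [[round(C[u][v] / Q[u][v]) * Q[u][v] for v in range(N)]
         for u in range(N)]
    rec = [[int(round(p)) for p in row] for row in idct2(D)]
    return rec == block
```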
In this paper, we describe an integrated network management system for ATM over ADSL service provisioning. There are two distinct networks: the ATM network and the Internet. Most routers in the Internet are connected by WDM. The Network Access Server (NAS) in the Internet provides Internet access service for ATM over ADSL subscribers. The ATM network takes the roles of backbone network for the pure ATM PVC and SVC services and of access network for the ATM over ADSL service. In order to define a generic network model commonly applicable to both the backbone network for the pure ATM service and the access network for the ATM over ADSL service, while taking scalability into account, we suggest two fragments, a topological fragment and a connectivity fragment, to maximize scalability in accordance with the ITU-T G.805 layering and partitioning concepts and the RM-ODP information viewpoint. In addition, we propose a distributed computational model of the ATM over ADSL network management system using the RM-ODP computational viewpoint and the TMN functional decomposition of FCAPS, taking into account functional distribution and modularity. Lastly, we describe a scenario for providing the integrated ADSL service.
This paper first describes the different modes of distance teaching. An example of a distance teaching system built on a networked environment is given. The system's components, functions, and work processes are then described. Finally, the paper discusses some key technologies for implementing a distance teaching system.
A new technique for recovering information loss in a block-based image coding system is developed in this paper. Disk arrays organize multiple independent disks into a single large, high-performance logical disk. However, with more devices, reliability drops: a single disk failure in RAID level 5 increases the load on each surviving disk by 100% due to data reconstruction. Each disk is therefore loaded to less than 50% of its capacity in the fault-free state, so that the surviving disks will not saturate when a failure occurs. Through image partitioning, the decoder reconstructs DCT blocks in each sub-image without imposing any significant load on the disk array. Our approach improves the quality of the compressed image by 2 dB to 10 dB and above, and reduces the code overhead of the compressed image data by 2 to 10 percent and above for different images, owing to the recovery of the main AC coefficients.
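The 100% load increase cited above follows from a back-of-envelope argument under uniform read traffic, which can be written out as follows (the function is illustrative, not from the paper):

```python
def per_disk_load(total_rate, n_disks, failed=False):
    """Uniform read traffic over a RAID-5 array. When one disk fails, each
    read aimed at it must be reconstructed from all n-1 survivors, so the
    failed disk's share (total_rate / n) lands as one extra own-sized share
    on every survivor: load goes from R/n to 2R/n, a 100% increase."""
    own = total_rate / n_disks
    if not failed:
        return own
    extra = total_rate / n_disks   # reconstruction reads per survivor
    return own + extra
```

This is why the fault-free utilization must stay below 50%: doubling it on failure must not exceed full capacity.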
Development towards high-bandwidth wireless devices capable of processing complex, streaming multimedia is enabling a new breed of network-based media services. Coping with the diversity of network and device capabilities requires services to be flexible and able to adapt to the needs and limitations of the environment at hand. Before efficient deployment, multi-platform services require additional issues to be considered, e.g. content handling, digital rights management, adaptability of content, user profiling, provisioning, and the available access methods. The key issue is how the content and the service are modelled and stored for inauguration. We propose a new service content model, based on persistent media objects, able to store and manage XHTML-based multimedia services. In our approach, media, content summaries, and other meta-information are stored within media objects that can be queried from the object database. The content of the media objects can also specify queries to the database and links to other media objects. The final presentation is created dynamically according to the service request and user profiles. Our approach allows for dynamic updating of the service database together with user group management, and provides a method for notifying registered users by different smart messaging methods, e.g. via e-mail or an SMS message. The model is demonstrated with an 'ice-hockey service' running on our platform, called Princess. The service also utilizes SMIL and key frame techniques for video representation.
In this paper, we describe new content authoring issues related to mobility and cross-platform multimedia systems. We present a novel architecture for the Content Provider Interface (CPI), which provides the necessary tools for the creation and delivery of mobile multimedia services across heterogeneous environments. CPI is a fully implemented system and set of tools for creating and updating new multimedia services delivered via a mobile adaptive multimedia service platform called Princess. CPI is based on an object-oriented database for editing and storing media objects, which are interpreted into service catalogs and presentations. Additionally, CPI can issue rich notifications to the system to dynamically report content updates, and allows users to create and use templates to establish new media objects, combine databases, and preview presentations outside the service platform. The proposed approach addresses the requirements and difficulties of cross-platform multimedia systems by integrating data management, service logic, and modular service/presentation design in a unique way, with dynamic content updates and notification features as well as user group management.