This work explored mechanisms to asynchronously distribute video objects to intranet users. The primary application
driver was to disseminate lecture videos created by the instructor as well as annotated videos from students. The storage
requirements made remote storage mechanisms as well as local infrastructure storage impractical. Hence, we investigated
the feasibility of distributing video contents from user devices. Based on the recent trend of devices going wireless, we
analyzed the viability of using laptop devices. We envision a variant of the RSS feed mechanism that searches for lectures
among the currently available replicas. The effectiveness of this distribution mechanism depended on the total number of
voluntary replicas and availability patterns of wireless devices. Using extensive analysis of the observed node behavior,
we showed that though laptop users were online for shorter durations, their temporal consistency can provide reasonable
availability, especially at the times of the day when students were typically active.
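The availability analysis above can be illustrated with a small sketch (the data and function names are ours, not the study's): given observed online intervals for voluntary laptop replicas, estimate the fraction of the day during which at least one replica is available to serve the feed.

```python
# Sketch with hypothetical uptime data: an hour of the day is "covered" if at
# least one voluntary laptop replica was online during it.

def replica_availability(sessions, hours=24):
    """sessions: list of (start_hour, end_hour) online intervals, one per device session."""
    online = [0] * hours
    for start, end in sessions:
        for h in range(start, end):
            online[h % hours] += 1
    return sum(1 for count in online if count > 0) / hours

# Laptops tend to be online for short daytime sessions; their overlap
# (temporal consistency) still covers the hours when students are active.
sessions = [(9, 12), (10, 14), (13, 17), (16, 19)]
print(replica_availability(sessions))  # fraction of the day covered
```

Even though no single session is long, the overlapping sessions cover the 9:00–19:00 window where lecture requests would concentrate.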
A number of prior efforts analyzed the behavior of popular peer-to-peer (P2P) systems and proposed ways for maintaining
the overlays as well as methods for searching for contents using these overlays. However, little was known about how
successful users could be in locating the shared objects in these systems. There might be a mismatch between the way
content creators named objects and the way such objects were queried by the consumers. Our aim was to examine the
terms used in the queries and shared object names in the Gnutella file-sharing system. We analyzed the object names of
over 20 million objects collected from 40,000 peers as well as terms from over 230,000 queries. We observed that almost
half (44.4%) of the queries had no matching objects in the system regardless of the overlay or search mechanism used to
locate the objects. We also evaluated the query success rates against random peer groups of various sizes (200, 1K, 2K, 3K,
4K, 5K, 10K and 20K peers sampled from the full 40,000 peers). We showed that the success rates increased rapidly from
200 to 5,000 peers, but only exhibited modest improvements when increasing the number of peers beyond 5,000. Finally,
we observed Zipf-like distribution for query terms and the object names. However, the relative popularity of a term in the
object names did not correlate with the term's popularity in the query workload. This observation limits the ability of
hybrid P2P systems to guide searches by creating a synopsis of the peer object names. A synopsis created by using the
distribution of terms in the object names need not represent relevant terms for the query. Our results can be used to guide
the design of future P2P systems that are optimized for the observed object names and user query behavior.
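The core success-rate measurement can be sketched as follows (toy peers and queries, ours): a query succeeds against a peer group if at least one shared object name contains every query term.

```python
# Sketch with toy data: the measured quantity is the fraction of queries for
# which at least one object name in the peer group matches all query terms.

def query_success_rate(peers, queries):
    """peers: list of peers, each a list of object names (strings).
    queries: list of query strings. A query succeeds if some object
    name contains every query term as a word."""
    def matches(query, name):
        terms = query.lower().split()
        words = name.lower().split()
        return all(t in words for t in terms)

    hits = 0
    for q in queries:
        if any(matches(q, name) for peer in peers for name in peer):
            hits += 1
    return hits / len(queries)

peers = [["abbey road remaster", "kind of blue"], ["dark side of the moon"]]
queries = ["abbey road", "blue", "thriller"]
print(query_success_rate(peers, queries))  # 2/3: "thriller" has no match
```

Repeating this computation over random peer samples of increasing size yields the success-rate curves discussed above; queries like "thriller" here stand in for the 44.4% that can never succeed regardless of group size.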
Peer-to-peer (P2P) systems are traditionally designed to scale to a large number of nodes. However, we focus on scenarios where sharing occurs only among neighbors. Localized sharing is particularly attractive where wide-area network connectivity is undesirable, expensive or unavailable. On the other hand, local neighbors may not offer the wide variety of objects possible in a much larger system. The goal of this paper is to investigate a P2P system that shares content only with its neighbors. We analyze the sharing behavior of Apple iTunes users in a university setting. iTunes restricts the sharing of audio and video objects to peers within the same LAN sub-network. We show that users are already making a significant amount of content available for local sharing. We also show that these systems are not appropriate for applications that require access to a specific object; we argue that mechanisms that allow the user to specify classes of interesting objects are better suited for these systems. Mechanisms such as Bloom filters can allow each peer to summarize the contents available in the neighborhood, reducing network search overhead. This research can form the basis for future storage systems that utilize the shared storage available in neighbors and build a probabilistic storage for local consumption.
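A minimal Bloom filter illustrates the summarization idea (an illustrative sketch, not the paper's implementation): each peer publishes a compact bit vector of its object names, so neighbors can skip peers that definitely lack an object class before issuing any network search.

```python
import hashlib

# Minimal Bloom filter sketch: membership tests can yield false positives
# but never false negatives, which is acceptable for pruning search targets.
class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k   # m bits, k hash positions per item
        self.bits = 0

    def _positions(self, item):
        # Derive k positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

summary = BloomFilter()
for name in ["jazz", "lecture-3.mp4", "podcast"]:
    summary.add(name)
print(summary.might_contain("jazz"))   # True
print(summary.might_contain("metal"))  # probably False (false positives possible)
```

A peer asking its neighborhood for "jazz" content would query only peers whose published filter reports a possible match, trading a small false-positive rate for greatly reduced search traffic.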
In this work, we explore network traffic shaping mechanisms that deliver packets at pre-determined intervals, allowing the network interface to transition to a lower-power sleep state. We focus our efforts on commodity devices, IEEE 802.11b ad hoc mode and popular streaming formats. We argue that factors such as the lack of scheduling clock phase synchronization among the participants and scheduling delays introduced by background tasks limit the potential energy savings. Increasing the transmission interval so that data is sent in larger, less frequent bursts can offset some of these effects, at the expense of occupying the wireless channel for longer periods and potentially increasing the time needed to acquire the channel for non-multimedia traffic. Buffering mechanisms built into media players can prevent these added delays from being misinterpreted as network congestion. We show that practical implementations of such traffic shaping mechanisms can offer significant energy savings.
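The energy trade-off can be seen in a back-of-the-envelope model (the numbers and function are our assumptions, not measurements): if all data for one interval is sent as a single burst, the interface can sleep for the remainder of the interval, minus a wake-up overhead.

```python
# Assumed-parameter model: fraction of time the WNIC can sleep when a stream
# is shaped into one burst per interval. Larger intervals improve sleep time
# but occupy the channel for longer per burst.
def sleep_fraction(stream_kbps, link_mbps, interval_s, wake_overhead_s=0.01):
    bits_per_interval = stream_kbps * 1000 * interval_s
    burst_s = bits_per_interval / (link_mbps * 1_000_000)  # time on air
    awake = burst_s + wake_overhead_s
    return max(0.0, (interval_s - awake) / interval_s)

# A 128 kbps stream over a ~5 Mbps effective 802.11b channel,
# shaped into one burst per second:
print(round(sleep_fraction(128, 5, 1.0), 3))
```

The model ignores the clock-phase and scheduling-delay effects discussed above, which is precisely why measured savings fall short of this idealized fraction.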
With the proliferation of mobile streaming multimedia, available battery capacity constrains the end-user experience. Since streaming applications tend to be long running, the wireless network interface card's (WNIC) energy consumption is a particularly acute problem. In this work, we explore the WNIC energy consumption implications of popular multimedia streaming formats from Microsoft (Windows Media), Real (RealMedia) and Apple (QuickTime). We investigate the energy consumption under varying stream bandwidths and network loss rates. We also explore history-based client-side strategies that reduce the energy consumed by transitioning the WNIC to a lower-power sleep state. We show that Windows Media tends to transmit packets at regular intervals; streams optimized for 28.8 Kbps can save over 80% in energy consumption with 2% data loss, and a high-bandwidth stream (768 Kbps) can still save 57% in energy consumption with less than 0.3% data loss. For high-bandwidth streams, Windows Media exploits network-level packet fragmentation, which can lead to excessive packet loss (and wasted energy) in a lossy network. RealMedia packets tend to be sent closer together, especially at higher bandwidths. QuickTime packets sometimes arrive in quick succession, most likely due to an application-level fragmentation mechanism; such packets are harder to predict at the network level without understanding the packet semantics.
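A history-based client policy of the kind explored here can be sketched as follows (our assumed formulation, not the paper's exact algorithm): predict the next packet arrival from recent inter-arrival times and sleep until just before it, staying awake when the history is too irregular to trust.

```python
# Sketch of a history-based sleep policy. Regularly spaced streams predict
# well; bursty application-level fragments (as with QuickTime) do not.
def plan_sleep(arrival_times, history=8, guard=0.005):
    """Return how long to sleep after the latest packet (seconds), or 0.0
    if the history is too short or too irregular to predict safely."""
    if len(arrival_times) < history + 1:
        return 0.0
    recent = arrival_times[-(history + 1):]
    gaps = [b - a for a, b in zip(recent, recent[1:])]
    mean = sum(gaps) / len(gaps)
    if max(gaps) - min(gaps) > 0.5 * mean:
        return 0.0                 # irregular stream: stay awake
    return max(0.0, mean - guard)  # wake a guard interval early

arrivals = [0.1 * i for i in range(10)]  # packets every 100 ms
print(round(plan_sleep(arrivals), 3))    # ~0.095 s of sleep per packet
```

For the regularly paced Windows Media streams, nearly the full inter-packet gap becomes sleep time; for closely spaced RealMedia or fragmented QuickTime packets, the irregularity check keeps the interface awake, matching the lower savings observed.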
Transcoding is a technique employed by network proxies to dynamically customize multimedia objects for prevailing network conditions and individual client characteristics. Transcoding can be performed along a number of different axes, and the specific transcoding technique used depends on the type of multimedia object. Our goal in this paper is to understand the nature of typical Internet images and their transcoding characteristics. We focus our attention on transcodings intended to customize an image for file-size savings. Our results allow the developers of a transcoding proxy server to choose the appropriate transcoding techniques for the important classes of Internet images. We analyze the characteristics of images available on the Web through a representative trace. We show that most GIF images accessed on the Internet are small; about 80% of the GIF images are smaller than 6 KB. JPEG images are larger than GIF images; about 40% of the JPEG images are larger than 6 KB. We also establish the characteristics of popular image transcoding operations. We show that for JPEG images, reducing the JPEG compression quality and reducing the spatial geometry are productive transcoding operations (saving at least 50% of the file size for 50% of the images). Our systematic study of image characteristics leads to some surprising results. For example, a naive spatial geometry reduction of GIF images by a factor of 2 along each axis actually causes an increase in the file size compared to the original image for 40% of the images. Thus, it is important to understand the characteristics of individual images before choosing the proper transcoding operation.
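A proxy's decision logic might be distilled as follows (the rule and operation names are our illustrative sketch; only the thresholds come from the measurements above): pick a transcoding likely to shrink the file, and avoid the naive GIF downscaling that often grows it.

```python
# Illustrative decision sketch for a transcoding proxy.
def choose_transcoding(image_format, size_bytes):
    if size_bytes < 6 * 1024:
        # ~80% of GIFs fall here; little file-size savings to gain.
        return "none"
    if image_format == "JPEG":
        # Quality reduction and spatial scaling saved >=50% of the
        # file size for half of the JPEG images studied.
        return "reduce-quality-or-scale"
    if image_format == "GIF":
        # A naive 2x spatial reduction *increased* file size for 40%
        # of GIFs, so large GIFs need per-image evaluation.
        return "evaluate-per-image"
    return "none"

print(choose_transcoding("JPEG", 20_000))  # reduce-quality-or-scale
print(choose_transcoding("GIF", 3_000))    # none
```

The per-image branch for large GIFs reflects the paper's central caution: a transcoding that is productive on average can still backfire on individual images.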