A study of the traffic characteristics of the Internet is essential to the design of the Internet infrastructure. In this paper, we first characterize WWW (World Wide Web) traffic based on access log data obtained at four different servers. We find that the document size, the request inter-arrival time, and the access frequency of WWW traffic exhibit heavy-tailed distributions. Specifically, the document size and the request inter-arrival time follow log-normal distributions, and the access frequency follows a Pareto distribution. For the request inter-arrival time, however, an exponential distribution becomes adequate if we are concerned only with the busiest hours. Based on our analytic results, we next build an M/G/1/PS queueing model to discuss a design methodology for the Internet access network. The accuracy of our model is validated by comparison with a trace-driven simulation. We then show that the M/G/1/PS model can be used to estimate the access line capacity required for high-quality document transfer.
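One reason the M/G/1/PS model is attractive for this kind of sizing is its insensitivity property: the mean transfer time depends on the document-size distribution only through its mean. A minimal sketch of the resulting capacity calculation (function and parameter names are ours, not the paper's):

```python
def mgps_mean_response(mean_doc_bits, line_rate_bps, arrival_rate):
    """Mean document transfer time (seconds) in an M/G/1/PS model of an
    access line.  By PS insensitivity, only the mean service time matters."""
    mean_service = mean_doc_bits / line_rate_bps   # E[S], seconds per document
    rho = arrival_rate * mean_service              # offered load
    if rho >= 1.0:
        raise ValueError("line is overloaded (rho >= 1)")
    return mean_service / (1.0 - rho)

# e.g. 12 kB documents on a 1.5 Mb/s line at 10 requests/s
delay = mgps_mean_response(96_000, 1.5e6, 10)      # about 0.18 s
```

To size the access line for a target mean delay, one would increase `line_rate_bps` until the returned value drops below the target.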
The World Wide Web has gained tremendously in popularity over the last several years. One solution to the problem of overloaded WWW servers is to use multicast for the delivery of pages. In this work we explore the use of UDP-based, best-effort multicast as a delivery option. Reliability is achieved through repetitive, cyclic transmission of a requested page. We describe the cyclic multicast technique and consider the various procedures needed for its successful operation. We characterize the performance gains achieved by our proposal through an extensive performance analysis and reference our ongoing work in simulating and implementing a cyclic multicast server.
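The cyclic ("carousel") transmission above can be sketched as a sender loop; the header layout, names, and parameters below are our own illustrative assumptions, not the paper's protocol:

```python
import struct

def serve_page_cyclically(sock, group_addr, page_id, packets, cycles=3):
    """Carousel sender: repeatedly cycle through a page's packets on a
    best-effort multicast group.  A receiver that misses packet i in one
    cycle simply waits for it to come around again, so no per-receiver
    ACKs or retransmission state are needed at the server."""
    for cycle in range(cycles):
        for seq, payload in enumerate(packets):
            # header: page id, cycle number, sequence number, packet count
            header = struct.pack("!IIII", page_id, cycle, seq, len(packets))
            sock.sendto(header + payload, group_addr)
```

In practice the server would keep cycling while at least one receiver remains joined to the group; the fixed `cycles` count here is only for illustration.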
We have developed a networked virtual reality system that supports live video with highly interactive usage. Because a client in our system continuously requires both live video and computer graphics objects from a server while rendering the current frame of the virtual world, the presentation quality of the virtual space depends on the system resources currently available on the client as well as on network resources. In this paper, we discuss the notion of QoS for a networked VR system as an application-level QoS, based on the importance of the presence of objects in a virtual space, calculated from the distance between the current user position and the object and the angle between the user's viewing direction and the object. We then show a heuristic mechanism for mapping the application-level QoS to a network-level QoS, which determines the bandwidth allocated to videos and CG objects so as to maximize the presentation quality. We adopt bandwidth as an example of a network-level QoS. We are developing our prototype system on the SGI Indigo using the IRIS Performer with the Virtual Reality Modeling Language 2.0.
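The distance-and-angle notion of importance might be scored as follows; this is a hypothetical 2-D weighting of our own, shown only to make the idea concrete (the paper's exact formula is not reproduced here):

```python
import math

def object_importance(user_pos, user_dir, obj_pos, max_dist=100.0):
    """Hypothetical importance-of-presence score in [0, 1]: nearer objects
    and objects closer to the center of the user's view rank higher."""
    dx, dy = obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= max_dist:
        return 0.0
    # angle between the viewing direction and the direction to the object
    ang = abs(math.atan2(dy, dx) - math.atan2(user_dir[1], user_dir[0]))
    ang = min(ang, 2 * math.pi - ang)          # wrap to [0, pi]
    distance_term = 1.0 - dist / max_dist
    angle_term = 1.0 - ang / math.pi
    return distance_term * angle_term
```

Bandwidth could then be divided among video and CG streams in proportion to these scores, which is one simple way to realize the mapping to a network-level QoS.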
The call establishment process in ATM networks can be a source of overload in the switches of the network. We simulate the call set-up phase of an ATM connection, including path selection, bandwidth reservation, and call rejection, and model the flow of call establishment messages to estimate, both by simulation and analytically, the queue lengths of call establishment processing at the input, output, and intermediate switches. A simplified analytical model is derived to estimate the queue length of call establishment jobs at each node of the network when advance reservation with perfect information is implemented. The analytical model provides a lower bound on queue lengths and remains within order-of-magnitude accuracy when the call request traffic is high.
Future switching architectures lead to the distribution of call processing and fabric control across separate processing entities. Messages are exchanged among fabric controllers and call processors in order to establish a communication path in the network. As new switching fabrics and network applications are introduced, the volume and types of call requests at a distributed switch are also expected to increase. Even more than in the past, overload control is becoming critical and requires special attention. In this paper, after describing a distributed switching architecture, we consider the problem of multivariate overload control by formulating it as a Semi-Markov Decision Process. The model observes the system state at each state change; at each state, an action appropriate for overload control in the switch entity is selected. The objective is to maximize the average system throughput.
Modern out-of-band telephone switching systems often have a distributed architecture, in which a number of processors with various functions are connected to each other via a local area network, and the system is connected to the signaling networks by a number of signaling processors, each terminating one high-speed link. The capacity of a signaling processor is determined not only by its capability to process the steady-state load, but also by its ability to meet the performance requirements in the event of link failure and changeover. A more complicated but not negligible scenario is the double link failure, which is often the result of a facility failure. In this paper we present a framework that analyzes the performance during a double changeover and link retrieval from the point of view of feasibility. The fundamental question we try to answer is whether the capacity of the signaling processors is sufficient to meet the message delay requirements during a double changeover.
`All optical networks' are future networks in which the path between end nodes remains entirely optical. This new kind of network is well suited to meeting the growing demand for bandwidth. One popular method of operating all-optical networks is the `deflection' principle, which differs from established methods such as diffusion and virtual circuits, used in the Internet and in ATM networks respectively. In this paper, we focus on the performance of all-optical deflection networks in a metropolitan area of the `Manhattan Street' type. Our investigation is conducted using both analytical and simulation methods.
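At its core, deflection routing resolves output-port contention by misrouting rather than buffering. A toy 2x2 node (names and the tie-breaking rule are ours) makes the principle concrete:

```python
def route_2x2(packets):
    """Deflection at a bufferless 2x2 node: each packet names its preferred
    output port (0 or 1).  If the two packets prefer different ports, both
    are routed as requested; if they contend for the same port, one wins
    and the other is deflected to the remaining port instead of being
    buffered or dropped."""
    assert len(packets) == 2
    a, b = packets                    # each packet is (payload, preferred_port)
    out = [None, None]
    if a[1] != b[1]:
        out[a[1]], out[b[1]] = a, b   # no contention
    else:
        out[a[1]] = a                 # a wins the contention
        out[1 - a[1]] = b             # b is deflected
    return out
```

The deflected packet takes a longer path through the mesh but is never stored, which is precisely what makes the principle attractive for bufferless optical nodes.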
We discuss an architecture and the underlying technologies of a terabit-per-second network with spectral-domain modulation. The baseline architecture is TDM, which exploits our techniques for imposing >1000-pixel phase and amplitude modulation on the spectrum of a 100 fs laser pulse, using microsecond radio-frequency pulse trains, for all-optical demultiplexing to efficiently select 100 fs - 100 ps time slices. The network architecture we will initially explore is a single-hop star network that uses an active star coupler as a shared medium. We will also explore a WDM/TDM network with a waveguide grating router as its hub.
Reliable multicasting encompasses a spectrum of applications with varying requirements. There also appears to be a consensus among researchers that a single protocol cannot serve the entire application space. There is therefore a need for a common framework that can support protocols specifically suited to various applications. This paper begins with a categorization of reliable multicasting applications, followed by a description of the key problems in each category. In addition, some partial solutions to these problems are provided using a tree-based hierarchical framework. It is argued that a tree-based hierarchy can potentially provide a scalable framework for reliable multicasting in general.
In this paper, a simple hierarchical approach to inter/intra-domain multicasting is introduced. The proposed Hierarchical Tree Multicast Protocol (HTMP) supports sender-initiated multicasting, i.e., it requires an application starting a multicast session to provide an initial list of potential receivers. Based on this information, HTMP utilizes the unicast routing information currently available in Internet routers and builds a hierarchical shared-tree architecture that consists of separate intra-domain core-based trees interconnected by a core-based inter-domain tree that may span the entire Internet. Although HTMP requires an initial group of potential receivers, it allows new hosts to join by supporting three modes of session participation: (a) a restricted mode, which allows access only to a specified list of group members; (b) a semi-restricted mode, which allows new members to join, but only after authorization from the session manager; and (c) an open mode, which does not require any authorization. HTMP supports a distribution (one-to-many) mode of multicasting, in which the hierarchical tree is transformed into a source-based tree with the session initiator as its core, as well as an interactive (many-to-many) mode of multicasting. Finally, HTMP provides a receiver-oriented resource reservation mechanism that allows for heterogeneity of traffic streams.
Widespread acceptance and use of the Internet requires more efficient utilization of the available bandwidth. Multi-party interactions are increasing, with the typical application, such as video conferencing, being high bandwidth with heavy user interaction. The first generation of multicast protocols is based on the formation of one or more routing delivery trees for a specified number of participants identified by a single group address. This address space, known as class `D' addresses, is a flat, global address space with no initial scope or scaling boundaries. Recent efforts have introduced administrative scoping and hierarchies to address these problems; however, the logical inherent hierarchy used by the unicast routing protocols to assist in route aggregation (a scaling issue) does not apply to multicast. Aggregation of multiple unicast entities has already occurred through the use of the multicast address, and aggregation of multiple multicast addresses implies a priori knowledge of the locations of the members across groups, resulting in an undesirable implementation. We propose a solution to the scaling and scope control problems using a cluster-based hierarchical multicast architecture that presupposes no underlying unicast protocol. We provide solutions to the delivery tree construction, scalability, and scope control problems that are not overly complex. We characterize the path-length performance of this solution and back up the analysis through simulations.
In recent years, there has been much interest in providing real-time multimedia services such as digital audio and video over packet-switched networks such as the Internet and ATM. These services require a certain quality of service (QoS) from the network. The routing algorithm should take an application's QoS requirements into account while selecting the most suitable route for the application. In this paper, we introduce a new routing metric and use it with two different heuristics to compute the multicast tree for guaranteed-QoS applications that need a firm end-to-end delay bound. We then compare the performance of our algorithms with other proposed QoS-based routing algorithms. Simulations were run over a number of random networks to measure the performance of the different algorithms. We studied the routing algorithms along with resource reservation and admission control to measure the call throughput over a number of random networks. Simulation results show that our algorithms give much better performance in terms of call throughput than other proposed schemes such as QOSPF.
Broadband networks will increasingly carry prerecorded traffic, such as high-fidelity audio, short multimedia clips, and full-length movies. We study how to manage the transmission and transport of traffic from prerecorded VBR sources so that network resources are efficiently utilized and end users receive satisfactory service. Specifically, we study Piecewise Constant-Rate Transmission and Transport (PCRTT), whereby the server transmits and the network transports each connection's packets at different constant rates over a small number of intervals. We show how dynamic programming can be applied to find optimal PCRTT rates and intervals for a wide variety of optimization criteria, including criteria that explicitly account for delays to user interaction. We also introduce two admission policies for PCRTT: peak-rate admission and packing admission. Using a public domain MPEG trace, we present several numerical examples that illustrate the traffic management schemes and the DP methodology.
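As a concrete (and deliberately simplified) illustration of piecewise constant-rate smoothing, the following greedy routine computes, for *fixed* interval boundaries, the smallest constant rate per interval that keeps transmission ahead of playback; the paper's dynamic program additionally optimizes the boundaries themselves. The names and the one-frame-per-time-unit convention are ours:

```python
def pcrtt_rates(frame_sizes, boundaries):
    """Greedy minimal piecewise-constant rates for a stored VBR trace.
    `boundaries` are frame indices splitting the trace into intervals and
    must end at len(frame_sizes).  Each frame occupies one time unit;
    frame k must be fully delivered by time k + 1."""
    cum, total = [], 0
    for s in frame_sizes:
        total += s
        cum.append(total)              # bits consumed by frame k's deadline
    rates, sent, start = [], 0.0, 0
    for end in boundaries:
        # smallest r with sent + r * (k + 1 - start) >= cum[k] for all k
        r = max(0.0, max((cum[k] - sent) / (k + 1 - start)
                         for k in range(start, end)))
        rates.append(r)
        sent += r * (end - start)
        start = end
    return rates
```

A large frame late in the trace yields a low rate early and a high rate later, which is the smoothing behavior that makes PCRTT schedules easier for the network to carry than the raw VBR trace.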
Network support for variable bit-rate video needs to consider (1) the properties of the induced workload (e.g., significant autocorrelations into far lags and heterogeneous marginal distributions), and (2) application-specific bounds on delay jitter and statistical cell-loss probabilities. The objective of this paper is to present a quality-of-service solution for such traffic at each multiplexing point in a network. Heterogeneity in both the offered workload and the quality-of-service requirements is addressed. A per-virtual-circuit framing structure and a pseudo earliest-due-date cell dispatcher are introduced to provide guaranteed delay-jitter bounds. Heterogeneous jitter bounds are supported through software-controlled frame sizes, which may be set independently for each virtual circuit. The framing structure is a generalization of the per-link framing introduced by Golestani. The proposed framing structure eliminates the correction for phase mismatches between incoming and outgoing frames that is necessary in per-link framing. This results in a reduced end-to-end delay bound, lower buffer requirements, and a simpler implementation. The strong autocorrelations typically seen in video traffic make equivalent-bandwidth computations for heterogeneous cell-loss bounds intractable. To address this, the framing strategy is combined with an active cell-discard mechanism with prioritized cell dropping, the latter utilizing the history of dropped cells and the target cell-loss bound for each virtual circuit. Upper bounds on the equivalent bandwidth needed to support a given workload with a target quality of service are also given. These are validated through numerical and simulation results from variable bit-rate MPEG-I video traces.
This paper investigates the sensitivity of MPEG-2 transport stream packets to communication network impairments such as bit errors and data loss. In the case of ATM, techniques for using the CRC and length fields in the AAL5 trailer for error detection are assessed, and their effect on received video quality is observed. A new cell-loss padding technique that utilizes the AAL5 length field to conceal the impact of cell losses is proposed. The essence of the proposed technique is to pass the incomplete PDU (a PDU with lost cells) and the length field to the decoder's Transport Stream layer, or to an AAL-SSCS, where parsing and padding are performed based on the available information, such as the type of the incomplete TS packet. For AAL1, the sequence number is used to detect lost cells, and padding is performed according to the position of the lost cells. Preliminary experimental results on the effect of the background load, and of the number of background traffic sources, on the cell transfer delay variance and the CDV of a CBR reference traffic source are also presented.
The currently specified variable bit-rate (VBR) service class for real-time traffic on broadband networks severely sacrifices network utilization to provide QoS support for multimedia traffic. A new service class, called VBR+, has been proposed to balance the goals of providing acceptable QoS and achieving high network utilization. VBR+ is a flexible service class that extends the traditional VBR service with bandwidth renegotiation. Bandwidth renegotiation is well suited to the dynamic traffic profiles of multimedia applications. Renegotiation allows more efficient allocation of network capacity and potentially allows the network to operate in more aggressive statistical multiplexing regimes while maintaining acceptable QoS. Some quality degradation is caused by source rate control when the network is congested and renegotiation requests cannot be fully satisfied. This paper quantifies the trade-off between video quality and network utilization for VBR+ transport. The multiplexing performance of VBR+ traffic is obtained via simulation using MPEG-2 video traces obtained with NEC's VisuaLink codec. Results show that VBR+ transport can maintain acceptable video quality at 70-80% link utilization. This represents a 20-30% improvement in utilization over the currently specified VBR service for comparable video quality.
In this paper we present the behavior of the Sunshine ATM switch for variable bit-rate MPEG video sources. The Sunshine switch is the main distribution element of our near video-on-demand (NVOD) system. We are adding a copy network to the basic Sunshine architecture in order to provide it with a multicasting capability. Basically, our system consists of a cascade of an NVOD server, a copy network, a Sunshine switch, and the users. Requests for new movies can be served every 5 minutes. We investigate, through extensive simulations, the complexity of the switch for light, medium, and heavy MPEG traffic. Important dimensioning parameters investigated are the number of recirculators and the number of banyan networks in the switch for a cell blocking probability of 10^-6. Our video traffic simulator generates MPEG video traffic for videoconference-type sources with a few different scenes and Gamma-distributed holding times.
Many emerging continuous-media applications can tolerate some degree of both packet loss and delay. Retransmission-based packet loss recovery protocols, which reduce packet loss at the expense of playout delay, appear promising for this class of applications. We describe a point-to-point implementation of a retransmission-based error recovery protocol. The implementation helps identify a number of problems that arise when extending the approach to multicast distribution. To study these problems, we introduce and analyze a model of multicast network packet loss. We show that one possible solution to the realization of bandwidth-efficient error recovery protocols lies with emerging active network technology.
Asynchronous Transfer Mode (ATM) networks provide guarantees on quality of service by first determining the availability of the required network resources and then reserving these resources for the duration of the application call. The Call Admission Control component of the network is responsible for these functions. This paper is concerned with the calculation of bandwidth requirements for traffic traces obtained from measurements taken on a wide-area ATM testbed network. Two kinds of traces are considered: IP application traffic and jittered constant-bit-rate traffic streams that have traversed multiple switching hops of the network. The actual bandwidth requirements are computed using simulation and then compared with the equivalent-bandwidth theory for on-off sources.
ATM is one of the leading high-speed networking technologies and has found wide acceptance in the last few years. To allow conventional protocols such as IP to run over ATM, various standards have been defined by the ATM Forum. These standards allow traditional applications, such as those using the TCP/IP or UDP/IP protocols and those running over broadcast networks, to run unchanged over ATM networks. In this paper, we compare the performance of two common IP implementations over ATM, namely Classical IP and LANE, using local-area ATM network testbeds consisting of DEC Alpha workstations and Intel Pentium machines. All hosts run Windows NT 4.0 and are connected to a DEC GIGAswitch/ATM. We use metrics such as application throughput, latency, and CPU usage for comparing performance, since these have a direct impact on the quality of service delivered to end-user applications. In addition, we also explore experimentally the effects of ATM adapters, protocols (TCP or UDP), host architectures, and processor speeds on application performance when using Classical IP and LANE over ATM.
ATM (Asynchronous Transfer Mode) defines several Quality of Service (QoS) levels, such as CBR (Constant Bit Rate) and VBR (Variable Bit Rate). Each service has parameters associated with its performance characteristics. The parameters used for specifying CBR service are the Peak Cell Rate (PCR) and the Cell Delay Variation Tolerance (CDVTOL). The parameters used for specifying VBR service are the PCR, the SCR (Sustainable Cell Rate), and the BT (Burst Tolerance). The network takes responsibility for monitoring and policing the traffic of each VCC/VPC at the ingress of the network. The CPE needs to shape its traffic at the ingress of the network in order to meet the contracted QoS requirements. Measurement procedures and tools are required to provide a means of tuning the CPE and the network service to meet the customer's end-to-end QoS expectations. This paper describes how the Generic Cell Rate Algorithm variables can be adjusted to provide acceptable levels of QoS in a mixed VBR and CBR environment. In addition, the paper examines a methodology for measuring the performance of VBR and CBR services. The authors use this information to build a QoS measurement tool.
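For reference, the Generic Cell Rate Algorithm itself is standardized (ATM Forum TM 4.0 / ITU-T I.371) in a virtual-scheduling form; a minimal version might look like this, with the function name and return convention being ours:

```python
def gcra_conforming(arrival_times, T, tau):
    """GCRA(T, tau) virtual-scheduling algorithm: T is the rate increment
    (1/PCR or 1/SCR) and tau the tolerance (CDVT or BT).  Returns a
    conformance flag for each cell arrival time."""
    tat = None                       # theoretical arrival time
    flags = []
    for t in arrival_times:
        if tat is None or t >= tat - tau:
            flags.append(True)       # conforming; push TAT forward
            tat = t + T if tat is None else max(t, tat) + T
        else:
            flags.append(False)      # non-conforming; TAT unchanged
    return flags
```

Policing a CBR connection uses GCRA(1/PCR, CDVTOL); a VBR connection is additionally checked against GCRA(1/SCR, BT), which is how the variables discussed in the paper map onto the contracted parameters.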
The design and analysis of ATM networks in which sources are represented as fluid on/off processes are addressed. It is critical that the cell loss rate, an important QoS measure, be related to call blocking, an important network design parameter, and that this relation be used in link capacity sizing, call admission, and routing design. Recently, some research on the relation between cell loss and call blocking has appeared; however, it ignores buffering in the multiplexer. In this paper, it is shown that call blocking in a buffered ATM multiplexer behaves as an Erlang-B loss system. For analytic purposes, the system buffer is assumed infinite and the transmission rate of the link (C) is assumed finite. Using past results on the probability of cell loss and properties of Nearly Completely Decomposable Markov Chains, the aggregate cell loss when the system is in different states is determined. It is shown that the call blocking performance for a given link capacity is independent of the individual source parameters. An extended set of applications of the results and an inverse problem are stated.
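The Erlang-B blocking probability invoked above is computed with the standard numerically stable recursion B(0) = 1, B(n) = aB(n-1)/(n + aB(n-1)) for offered load a; this is a well-known formula rather than anything specific to the paper:

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability for `servers` circuits and offered
    load in erlangs, via the standard stable recursion (avoids the
    overflow-prone factorial form)."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

In the paper's setting the "servers" correspond to the number of calls the buffered multiplexer can admit at the target cell loss rate, which is what makes the blocking behavior independent of the individual source parameters.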
In this paper we present a new rate-based flow control scheme for ATM ABR services and analyze its performance. The proposed algorithm, which we refer to as First-order Rate-based Flow Control (FRFC), is the simplest form of queue-length-based flow control. The asymptotic stability, the steady-state throughput, queue length, and fairness, and the transient behavior are analyzed for the case of multiple connections with diverse round-trip delays. We also consider a novel approach that dynamically adjusts the queue threshold in the FRFC according to changes in the available bandwidth and the arrival and departure of connections. Simulations show that the simple FRFC with a dynamic queue threshold effectively maintains high throughput, low loss, and the desired fairness in these dynamic environments, and is a promising solution for ABR flow control in ATM networks.
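A first-order, queue-length-based update amounts to a single proportional control step. The sketch below (the gain, clipping, and names are illustrative assumptions, not the paper's exact rule) conveys the form:

```python
def frfc_step(rate, queue_len, threshold, gain, min_rate=0.0, max_rate=None):
    """One control step of a first-order, queue-length-based rate update:
    move the allowed cell rate opposite to the queue error, driving the
    queue toward its threshold.  Rates are clipped to [min_rate, max_rate]."""
    new_rate = rate - gain * (queue_len - threshold)
    new_rate = max(min_rate, new_rate)
    if max_rate is not None:
        new_rate = min(max_rate, new_rate)
    return new_rate
```

The `threshold` argument is the natural hook for the dynamic-threshold extension described above: it can be raised or lowered as the available bandwidth and the set of active connections change.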
In communication networks with large delay-bandwidth product, congestion could happen over shorter time scales than those at which end-to-end protocols such as congestion control schemes typically operate. In such cases, the congestion can dissipate rapidly before congestion feedback information returns to the source. Network designers therefore face a challenge. The bursty and cyclic nature of Variable Bit Rate (VBR) traffic creates another issue for transmission in ATM networks. To reach the dual goals of keeping cell loss rate low and network utilization high, we propose an adaptive rate-based flow control scheme for real-time VBR traffic in ATM networks. The goal of the scheme is to minimize the impact of traffic overload in order to limit the cell loss rate to an acceptable range and also increase the network utilization. The proposed flow control scheme is based on predicting the evolution of buffer occupancy over time, using a Proportional-plus-Integral-plus-Derivative controller and a linear predictor to adaptively update the optimum data emission rate at the transmitter. The adaptive policy attempts to keep the buffer occupancy for each virtual channel at a steady level, and the simulation results show that the proposed scheme works effectively against network congestion. Along with the design of the new flow control scheme, we also develop a hierarchically structured testbed to measure network performance and explore various flow control schemes in ATM networks with diverse classes of incoming traffic.
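A minimal sketch of the control idea (the gains below are hypothetical, and the paper's linear predictor is omitted): a discrete PID controller regulates buffer occupancy toward a setpoint by adjusting the emission rate around the drain rate.

```python
class PID:
    """Textbook discrete PID: output = kp*e + ki*sum(e) + kd*delta(e)."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, measured):
        err = self.setpoint - measured
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def regulate_buffer(drain, setpoint, n_steps):
    """Drive buffer occupancy to `setpoint` by adjusting the send rate."""
    pid = PID(kp=0.5, ki=0.02, kd=0.1, setpoint=setpoint)
    buf = 0.0
    for _ in range(n_steps):
        rate = max(0.0, drain + pid.update(buf))  # emission rate at the source
        buf = max(0.0, buf + rate - drain)        # occupancy = arrivals - service
    return buf
```

With these mild gains the occupancy converges to the setpoint; the paper's predictor would additionally anticipate changes in the drain rate before they are observed.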
Data teletraffic is characterized by bursty arrival processes. Performance models are characterized by a desire to know under what circumstances the probability that an arrival finds a full input buffer is very small. In this paper I examine how four models proposed in the literature perform on two data sets of local-area-network traffic. Among my conclusions are that (1) the protocol governing the data transmission may have a substantial effect on the statistical properties of the packet stream, (2) the probability that a finite buffer of size b overflows may not be adequately approximated by the probability that an infinite buffer has at least b packets in it, and (3) a data-based estimate of the large-deviation rate function does the best job of estimating packet loss on these data sets. This method may still overestimate the loss rate by several orders of magnitude, so there is room for further refinement.
Recent measurements of network traffic have shown that self-similarity is a ubiquitous phenomenon, present in both local-area and wide-area traffic traces. In previous work, we have shown a simple, robust application-layer causal mechanism of traffic self-similarity, namely, the transfer of files in a network system where the file size distributions are heavy-tailed. In this paper, we study the effect of scale-invariant burstiness on network performance when the functionality of the transport layer and the interaction of traffic sources sharing bounded network resources are incorporated. First, we show that transport layer mechanisms are important factors in translating the application-layer causality into link traffic self-similarity. Network performance as captured by throughput, packet loss rate, and packet retransmission rate degrades gradually with increased heavy-tailedness, while queueing delay, response time, and fairness deteriorate more drastically. The degree to which heavy-tailedness affects self-similarity is determined by how well congestion control is able to shape source traffic into an on-average constant output stream while conserving information. Second, we show that increasing network resources such as link bandwidth and buffer capacity results in a superlinear improvement in performance. When large file transfers occur with nonnegligible probability, the incremental improvement in throughput achieved for large buffer sizes is accompanied by long queueing delays vis-a-vis the case when the file size distribution is not heavy-tailed. Buffer utilization continues to remain at a high level, implying that further improvement in throughput is only achieved at the expense of a disproportionate increase in queueing delay. A similar trade-off relationship exists between queueing delay and packet loss rate, the curvature of the performance curve being highly sensitive to the degree of self-similarity.
Third, we investigate the effect of congestion control on network performance when subject to highly self-similar traffic conditions. We implement an open-loop congestion control using unreliable transport on top of UDP, where the data stream is throttled at the source to achieve a fixed arrival rate. Decreasing the arrival rate results in a decline in packet loss rate, whereas link utilization increases. In the context of reliable communication, we compare the performance of three versions of TCP--Reno, Tahoe, and Vegas--and we find that sophistication of control leads to improved performance that is preserved even under highly self-similar traffic conditions. The performance gain from Tahoe to Reno is relatively minor, while the performance jump from TCP Reno to Vegas is more pronounced, consistent with quantitative results reported elsewhere.
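The heavy-tailed file-size mechanism is easy to reproduce in a workload generator. A sketch (parameter values hypothetical): inverse-CDF sampling from a Pareto distribution, whose tail index alpha in (1, 2) gives a finite mean but infinite variance; for on/off sources with such heavy-tailed periods, the aggregate traffic is known to be asymptotically self-similar with Hurst parameter H = (3 - alpha)/2.

```python
import random

def pareto_file_size(alpha, x_min, u=None):
    """Inverse-CDF Pareto sample: P[X > x] = (x_min / x)**alpha for x >= x_min.
    `u` is a uniform(0,1) variate; pass it explicitly for reproducibility."""
    if u is None:
        u = random.random()
    return x_min / (1.0 - u) ** (1.0 / alpha)
```

For example, alpha = 1.5 with x_min = 1000 bytes produces a file-size stream whose largest transfers dominate total volume, which is exactly the regime the buffer-delay trade-offs above concern.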
The quality of real-time traffic depends strongly on its loss rate and delay time. In integrated service networks, real-time traffic and non-real-time traffic share the network resources, so that each can affect the quality of the other. In this context, it is crucial to develop a mechanism that guarantees the quality of service required by the real-time traffic. In this paper, we analyze the delay time of CBR packets from real-time sources when CBR packets have priority over UBR packets, in the sense that UBR packets are serviced only if no CBR packets are waiting in the buffer. Furthermore, we obtain various numerical results on the statistical bound on delay time, such as the 99.9th-percentile delay, and compare it with the deterministic bound. By this comparison, we show that CAC (Call Admission Control) based upon the statistical bound is very effective in using the network resources efficiently when CBR packets can tolerate some loss due to late arrival.
Consider a set of users sharing an ATM node that offers two types of resources, bandwidth and buffer, in two flavors: guaranteed (fixed) resources and variable resources. The amounts of variable resources available to the users fluctuate randomly. Each user is allocated certain amounts of fixed and variable bandwidth and buffer. Periodically the users renegotiate their allocations to adapt to changes in network conditions or in their own utility functions. The network sets prices for these resources, and the users choose allocations that maximize their own benefit in that period: utility minus the resource cost. In this paper we present a simple model to study this resource renegotiation problem. We exhibit some interesting properties of the equilibrium prices and allocations that result from the interaction of these users. Under the assumption that the users' utility depends on the amounts of available resources only through their mean and variance, every user will hold strictly positive amounts of variable bandwidth and buffer in equilibrium. We discuss how to exploit these properties to design resource renegotiation strategies.
To efficiently utilize resources in broadband networks, we propose a control method for balancing work- and buffer-conserving service disciplines that is based on the use of prediction. Three key concepts are discussed in this study. First, it is noted that feedback control with a predictor can minimize buffer backlog. Second, it is proven that with a dynamic balance, the globally optimal bandwidth assignments for all virtual circuits can be determined by minimizing an appropriate cost function. Lastly, via trace-driven simulation, it is shown that assigning bandwidth based on this hybrid balance of work- and buffer-conserving service results in a reduction in cell loss when compared to a traditional work-conserving algorithm. The simulation traces were generated from an MPEG-encoded video.
The burstiness of compressed video complicates the provisioning of network resources for emerging multimedia services. For stored video applications, the server can smooth the variable-bit-rate stream by prefetching frames into the client playback buffer in advance of each burst. Drawing on a priori knowledge of the frame lengths and client buffer size, such bandwidth smoothing techniques can minimize the peak and variability of the rate requirements while avoiding underflow and overflow of the playback buffer. However, in an internetworking environment, a single service provider typically does not control the entire path from the stored-video server to the client buffer. To develop efficient techniques for transmitting variable-bit-rate video across a portion of the route, we investigate bandwidth smoothing across a tandem of nodes, which may or may not include the server and client sites. We show that it is possible to compute an optimal transmission schedule for the tandem system by solving a collection of independent single-link problems. To develop efficient techniques for minimizing the network bandwidth requirements, we characterize how the peak rate varies as a function of the buffer allocation and the playback delay. Simulation experiments illustrate the subtle interplay between buffer space, playback delay, and bandwidth requirements for a collection of full-length video traces. These analytic and empirical results suggest effective guidelines for provisioning network services for the transmission of compressed video.
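The interplay between buffer size and peak rate can be sketched for the single-link, constant-rate case (a simplification of the tandem problem; the frame sizes and buffer values below are hypothetical): given a frame-size sequence and a client buffer, bisection finds the smallest constant rate that never underflows the playback buffer.

```python
def feasible_rate(frames, rate, buffer_size):
    """Check whether sending at constant `rate` (bytes per frame slot)
    keeps cumulative delivery between the underflow curve (cumulative
    playback) and the overflow curve (playback + buffer_size)."""
    sent = 0.0
    consumed = 0.0
    for size in frames:
        sent = min(sent + rate, consumed + buffer_size)  # never overflow the client
        consumed += size
        if sent < consumed:          # underflow: frame not delivered in time
            return False
    return True

def min_peak_rate(frames, buffer_size):
    """Smallest feasible constant rate, found by bisection; a rate equal to
    the largest frame suffices whenever the buffer holds at least one frame."""
    lo, hi = 0.0, max(frames)
    for _ in range(60):
        mid = (lo + hi) / 2
        if feasible_rate(frames, mid, buffer_size):
            hi = mid
        else:
            lo = mid
    return hi
```

Sweeping `buffer_size` (and, with a start-up offset, the playback delay) against `min_peak_rate` reproduces the kind of trade-off curves the simulation experiments examine.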
As multimedia communication services have been spreading widely, video traffic is rapidly increasing in B-ISDN based on ATM technology. In this paper, we propose an analytical model relating image quality to cell losses for MPEG-1-encoded video traffic and verify this analysis with simulations. Simulations reveal that the amount of practically measured image quality degradation is almost the same as that obtained by our analytical model. When the CLR is lower than 10^-3, the amount of image quality degradation is about 1-5%. Our cross-layer analysis can be extended and applied to `Call Admission Control' for MPEG video services.
The end-to-end quality of service (QoS) of a variable bit rate (VBR) MPEG-2 compressed video transported over an interconnected ATM satellite-cable-terrestrial network incorporates the impact of the characteristics of these three media as well as the specifics of the video encoding/decoding processes. In this paper, we present the impact of a selected set of QoS parameters on a number of VBR MPEG-2 encoded video clips on each segment of such an internetwork. The QoS parameters used in our simulation include cell transmission delay, cell delay variation, cell misinsertion rate, and cell error rate. Also, we discuss and compare the quantified impact of the bit error characteristics of satellite and fiber-optic ATM systems. Objective and subjective results of our simulation show that the impact of the satellite segment on the MPEG-2 encoded video quality is comparable to that of the fiber-optic system and to that generated by the VBR MPEG-2 video encoding/decoding process.
This paper presents a linear-time method to optimize stored VBR video transmission over a CBR channel. In earlier work, we presented a linear-time method that, given the transmission bandwidth, minimizes both the client buffer size and the playback work-ahead. In this paper, the service connection time is minimized to maximize the network utilization for a given transmission bandwidth and client buffer. The required work-ahead is also minimized to obtain the minimum response time. The proposed scheme can easily be extended to transmit VBR video with minimum rate variability. Experiments on many well-known benchmark videos indicate that the proposed method obtains better (or at least comparable) resource utilization and requires a smaller memory buffer than conventional approaches. For the transmission of the MPEG-1 video Advertisements with the same work-ahead time, our results show better network utilization than that of D-BIND. When transmitting the long MPEG-1 movie Star Wars, our approach uses a smaller memory buffer than the combination of MVBA and D-BIND to achieve the same network utilization.
One major function of ATM (Asynchronous Transfer Mode) networks is to provide real-time, low-loss, minimum-delay transmission of VBR (variable bit rate) video traffic. The Leaky Bucket (LB) mechanism has been proposed and adopted at the ATM Forum to police network traffic. In this work, we enhance and integrate the ideas of (1) soft multiplexing and (2) multi-level rate policing to design an advanced traffic policing mechanism: the Multi-Level Leaky Bucket with Token Passing. The new mechanism includes an efficient method to estimate the current bit rate and burst duration of a VBR source. Based on the estimate, it allows sources to transmit at different bit rates for restricted amounts of time. It also allows interactions among the Leaky Buckets of the input streams of the same virtual path. We use simulation to evaluate the performance of the new mechanism and to compare it with three existing methods. Two VBR traffic source models are used: (1) a Markov Modulated Poisson Process with parameters set according to an HDTV model, and (2) an Autoregressive Markovian model of order 2 (AR). We compare bandwidth utilization, cell-loss probability, and queuing delay for all methods, varying the offered traffic load and the sizes of the cell buffers and token buckets. We found that, due to the different traffic characteristics of different VBR sources, using token passing or multi-rate policing alone does not improve on the original LB for certain VBR sources. The new mechanism achieves better performance than the three existing schemes in all the scenarios.
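For reference, the baseline LB mechanism being enhanced can be sketched as a token-bucket conformance test (the arrival times and parameters below are hypothetical, not from the paper):

```python
def police(cell_times, rate, bucket_depth):
    """Token-bucket (leaky-bucket) policer: tokens accrue at `rate`
    per unit time, capped at bucket_depth; a cell conforms only if a
    whole token is available when it arrives."""
    tokens = bucket_depth
    last = 0.0
    conforming = []
    for t in cell_times:
        tokens = min(bucket_depth, tokens + (t - last) * rate)
        last = t
        if tokens >= 1.0:
            tokens -= 1.0
            conforming.append(t)   # cell admitted
    return conforming
```

The multi-level scheme layers several such buckets per source and lets buckets of streams on the same virtual path exchange tokens, which the single-bucket sketch above does not capture.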
This paper addresses the problem of efficient traffic shaping while providing QoS guarantees for Constant Bit Rate (CBR) sources. A simulation model is developed to capture the performance of several mechanisms that schedule cells according to the Generic Cell Rate Algorithm, which is used for the policing of connections. Results show that efficient shaping with QoS guarantees can only be achieved by a multi-level priority scheme. The suggested shaper mechanism absorbs the Cell Delay Variation of CBR cells and minimizes the buffer required at the receiver end.
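The Generic Cell Rate Algorithm referred to here can be sketched in its virtual-scheduling form (the emission interval and tolerance below are hypothetical values):

```python
def gcra(arrivals, increment, limit):
    """GCRA(increment, limit), virtual-scheduling form: a cell arriving
    at time t conforms if t >= TAT - limit, where the theoretical
    arrival time TAT advances by `increment` per conforming cell."""
    tat = 0.0
    verdicts = []
    for t in arrivals:
        if t < tat - limit:
            verdicts.append(False)            # non-conforming: TAT unchanged
        else:
            tat = max(t, tat) + increment     # conforming cell
            verdicts.append(True)
    return verdicts
```

A shaper, unlike this policer, would delay the non-conforming cells until their conformance time instead of marking or discarding them.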
Connection Admission Control (CAC) for ATM networks should be based on information that is available locally, or that can be gathered easily from neighbors. We propose a method for implementing CAC, which is based on a Distributed Dynamic Routing Procedure, and which uses the Network Performance parameters that have been gathered at different nodes of the network. These network performance parameters define what different nodes are capable of offering to the connections. The method will find the `best' route that is able to meet the QoS constraints imposed by the user.
ATM has been under a thorough standardization process for more than ten years. Looking at it now, what have we achieved during this time? Originally ATM was meant to be an easy and efficient protocol enabling varying services over a single network. What it is turning out to be is `yet another ISDN'--a network full of hopes and promises, but too difficult to implement and too expensive to market. The fact is that more and more `nice features' are implemented at the cost of overloading the network with heavy management procedures. Therefore we need to adopt a new approach, one that keeps a strong focus on `what is necessary.' This paper presents starting points for an alternative approach to traffic management, which we refer to as `the minimum management principle.' Choosing suitable service classes for the ATM network is made difficult by the fact that the more services one implements, the more management one needs. This is especially true for variable bit rate connections, which are usually treated on the basis of stochastic models. A stochastic model, at its best, can only reveal momentary characteristics of the traffic stream, not its long-range behavior. Our assumption is that ATM will move towards the Internet in the sense that strict values for quality will make little or no sense in the future. Therefore stochastic modeling of variable bit rate connections seems to be of little use. Nevertheless, we see that some traffic needs strict guarantees, and that the only economic way of providing them is to use PCR allocation.
Frame Relay is a rapidly growing service offering from all the major telecommunications carriers. Managing the growth and topology of such networks is a very challenging problem, which requires modeling the traffic being loaded on the network. This paper describes the development of a three-state traffic model with hyper-exponentially distributed off times. It is shown that this model can be fit to match actual traffic statistics collected from the Management Information Base of a large Frame Relay network, whereas the commonly used two-state traffic model cannot. A rapid method of setting the model parameters by approximating the operation of the Leaky Bucket algorithm is also described.
In this paper, we propose a dynamic priority control mechanism that adjusts the cell service ratio according to the relative cell occupancy ratio in the buffer in order to improve the QoS. We also analyze theoretically the QoS, that is, the cell loss probability and the mean cell delay time, for the proposed priority control mechanism. The two service classes of concern are the delay-sensitive class and the loss-sensitive class. The relative cell occupancy ratio is the ratio of the number of loss-sensitive-class cells to the number of delay-sensitive-class cells stored in the buffer. In other words, the proposed priority control mechanism dynamically changes the cell service ratio according to the relative cell occupancy ratio of the loss-sensitive class and the delay-sensitive class. Here, a cell service ratio of K means that one loss-sensitive-class cell is served whenever at most K delay-sensitive-class cells have been served consecutively. The analytical results show that the proposed control mechanism is capable of improving the cell loss probability for loss-sensitive-class cells and the mean cell delay time for delay-sensitive-class cells by properly selecting the cell service ratio according to the relative cell occupancy ratio in the buffer and the input traffic conditions.
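The fixed-ratio core of the discipline can be sketched as follows (the dynamic adjustment of K from the occupancy ratio is omitted, and the queue contents are hypothetical): up to K delay-sensitive cells are served consecutively, then one loss-sensitive cell.

```python
def serve(delay_q, loss_q, K, n_slots):
    """Service-ratio-K scheduler: serve delay-sensitive cells until K have
    been served consecutively (or the queue empties), then serve one
    loss-sensitive cell, and repeat."""
    out = []
    streak = 0   # consecutive delay-sensitive cells served so far
    for _ in range(n_slots):
        if delay_q and (streak < K or not loss_q):
            out.append(delay_q.pop(0))
            streak += 1
        elif loss_q:
            out.append(loss_q.pop(0))
            streak = 0
        else:
            break   # both queues empty
    return out
```

The paper's mechanism would recompute K each time the ratio of loss-sensitive to delay-sensitive cells in the buffer crosses a threshold.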
A number of priority mechanisms have been proposed and analyzed for ATM switching and multiplexing. The need for such mechanisms arises from the difficulty of accurately characterizing and sizing low-priority traffic. This investigation concerns the performance evaluation of a shared buffer for a Frame Relay switch with a threshold priority mechanism, compared to one without a priority mechanism. The main contribution of the paper is the proposed Markovian model for this scheme. The evaluation, based on a finite-state Markov model of a switch and its incoming traffic, allows loss requirements to be guaranteed, i.e., loss probabilities of 10^-6 and 10^-4 for high- and low-priority frames. The Markovian model was generated with the use of a more general tool, the principles of which are also presented in the article. This tool is designed to construct various Markovian models of computer networks and their control mechanisms. It contains a set of objects written in C++ which may be composed into a network. Steady-state analysis is performed. In addition, a comparative study shows that the analytical results are validated by simulation.
The design of a traffic management mechanism for intranets connected to the Internet via an available bit rate access-link is presented. Selection of control parameters for this mechanism for optimum performance is shown through analysis. An estimate for the packet loss probability at the access-gateway is derived for random fluctuation of the available bit rate of the access-link. Some implementation strategies for this mechanism in the standard intranet protocol stack are also suggested.
We present an implicit end-to-end rate-based flow control policy for best-effort (ABR/UBR) connections in a heterogeneous network, i.e., an interconnected set of networks that use traditional packet switching or implement ATM-like protocols. The intermediate switches may use FCFS or a rate-allocating packet scheduling discipline such as Fair Queuing. Although our scheme is presented in the context of a connection-oriented network, it is also applicable in a datagram network such as the Internet. An important feature of the proposed scheme is that it requires no support from the underlying network and lower layers to indicate or control congestion; instead, it uses the mean spread of sets of uniformly spaced packets to estimate the available bandwidth at the bottleneck server (switch/router) along the path of a connection. This estimate of available bandwidth is then fed to a controller which adjusts the sending rate so as to maintain a certain number of packets buffered at the bottleneck server. The proposed Rate Estimation and Control scheme is studied extensively using simulations. Results obtained from simulation experiments show that the scheme can adapt quickly to changes in the bandwidth available to best-effort traffic, and consequently utilizes the resources at the servers better and loses fewer packets than TCP. Our simulation results also show that in the case of multiple competing connections using the proposed scheme, the available bandwidth is distributed equitably.
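The estimation step can be sketched as a packet-train calculation (a generalization of packet-pair probing; the timestamps and packet size below are hypothetical): packets that queue behind one another at the bottleneck leave it spaced by the bottleneck's per-packet service time.

```python
def estimate_bottleneck_rate(recv_times, packet_size):
    """Estimate the service rate available at the bottleneck from the
    mean spread (inter-arrival gap) of a burst of uniformly spaced
    packets that queued behind one another at the bottleneck."""
    gaps = [b - a for a, b in zip(recv_times, recv_times[1:])]
    mean_spread = sum(gaps) / len(gaps)
    return packet_size / mean_spread
```

Averaging over the whole train, rather than a single pair, reduces the noise from cross-traffic and timer granularity, which is why the scheme uses the mean spread of a set of packets.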
This paper proposes a method to optimize the utilization of the internal resources of WAN routers. Used together with the admission control and policy control techniques of existing protocols, like RSVP or RTP, it allows better utilization of the internal resources of intermediate nodes. The aim is to allow best-effort traffic to obtain better performance from WANs. Our method is based on a statistical interpretation of VBR streams compressed by means of the MPEG-1 algorithm. This high-level analysis of the videos, namely at the scene level, leads to the definition of a forecasting scheme.
In this paper, we propose a simple method which can be used to characterize any pre-compressed variable bit rate video stream in terms of the ATM traffic parameters: Peak Cell Rate (PCR), Sustainable Cell Rate (SCR), and Maximum Burst Size (MBS). Though it is simple to find the PCR parameter, there is generally a very large set of SCR/MBS pairs that can be used to describe a given video stream. The most suitable SCR/MBS pair is the one that results in efficient utilization of the network resources, provided the Quality of Service commitments can be met. We suggest that the resources should be allocated based on the actual traffic rather than the worst-case traffic corresponding to the selected SCR/MBS pair. The proposed scheme benefits both the network operator and the user, since it results in more efficient utilization of the network resources.
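The SCR/MBS trade-off can be made concrete with a leaky-bucket backlog computation over the stream's cell counts (the trace values below are hypothetical): for a candidate SCR, the smallest bucket depth under which the trace conforms is the maximum bucket backlog, and that depth determines the MBS for the pair.

```python
def min_burst_tolerance(cells_per_slot, scr):
    """Smallest leaky-bucket depth under which the trace conforms when
    drained at `scr` cells per slot: the maximum backlog the bucket
    ever accumulates over the trace."""
    backlog = 0.0
    worst = 0.0
    for a in cells_per_slot:
        backlog = max(0.0, backlog + a - scr)   # bucket fills by a, drains by scr
        worst = max(worst, backlog)
    return worst
```

Sweeping the candidate SCR from the mean rate up to the PCR and recording the resulting depth traces out the set of feasible SCR/MBS pairs from which the most resource-efficient one is chosen.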
In this paper a class of parametric models is proposed to describe the output bit rate of a motion-compensated two-layer video coder. No frame buffering is considered, and the number of bits generated is considered slice by slice (1 slice equals 16 consecutive frame lines), resulting in a repetitive behavior of the bit-rate time series. After removing the nonstationary mean components of each frame, traditionally modeled as Auto Regressive (AR), the resulting cyclo-stationary process is spectrally analyzed, and the periodic mean is estimated and subtracted. A simplified seasonal AR model is tailored to the residuals, which are still affected by a little periodicity. The correlation between the low-resolution channel and the detail channel, which together make up the full-resolution channel, is accounted for by the overall model, whose fitness is assessed against true bit-rate data from a two-layer subband+discrete cosine transform (SB+DCT) video coder. Simple procedures to derive likely values of the parameters ruling the model are proposed as well. Synthetic realizations of bit-rate time series are produced and discussed.
Since ATM networks carry many types of traffic generated by a variety of applications, they must be able to guarantee QoS for each of these traffic types. QoS control schemes should support the individual service classes available in ATM networks. Because, in general, the quality of a VC is affected by other VCs in the same transmission path and in other service classes, traffic control must support multiple service classes. In this paper, we propose a buffer-allocation mechanism and a cell-scheduling algorithm based on the logical separation of a single buffer in an ATM switching system, and analyze the cell loss probability and delay of each traffic class (CBR/VBR/ABR) under the weighted, dynamic cell-service scheduling algorithm. The proposed switch classifies the composite traffic arriving at the switch according to its characteristics and stores it in logically separated buffers (quality service lines). The system then applies weighted round-robin service to transmit cells from the buffers through the output port.
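The weighted round-robin service over logically separated buffers can be sketched as below. The class names, weights, and cell labels are hypothetical, and the sketch serves whole cells one per slot without modeling the switch fabric or the dynamic weight adjustment.

```python
from collections import deque

def wrr_schedule(queues, weights, n_slots):
    """Work-conserving weighted round-robin: each class may send up to
    `weight` cells per round; empty queues are skipped."""
    order = []
    while len(order) < n_slots and any(queues.values()):
        for cls, w in weights.items():
            for _ in range(w):
                if queues[cls] and len(order) < n_slots:
                    order.append(queues[cls].popleft())
    return order

# Three logically separated buffers, one per service class.
queues = {"CBR": deque(["c0", "c1", "c2", "c3"]),
          "VBR": deque(["v0", "v1", "v2", "v3"]),
          "ABR": deque(["a0", "a1", "a2", "a3"])}
weights = {"CBR": 3, "VBR": 2, "ABR": 1}
print(wrr_schedule(queues, weights, 12))
```

With weights 3:2:1, CBR cells drain fastest, which is how the weighted values translate into per-class delay differences in the analysis.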
We investigate the impact of scheduling policies on the tail distribution of sojourn times experienced by unevenly loaded queues in a two-stage polling system served by a symmetric multiprocessor running a Unix-like operating system. The queues are statically divided into groups, each managed by a process; a process can run on any available processor. Service to a customer is thus scheduled first at the process level and then at the queue level. Assuming that all customers have identical service requirements, with Poisson arrivals and exponentially distributed service times and setup times, simulation shows that the earliest-customer policy outperforms both the 1-limited and exhaustive policies in the sense of providing more equitable service to the queues.
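The earliest-customer policy selects, among all nonempty queues, the one whose head-of-line customer has waited longest. A minimal sketch of that selection rule follows; the queue names and timestamps are hypothetical, and the paper's simulator additionally models processes, processors, and setup times.

```python
def earliest_customer(queues):
    """queues: dict mapping queue name -> FIFO list of arrival timestamps.
    Return the nonempty queue whose head-of-line customer arrived
    earliest, i.e. has waited longest."""
    heads = {name: ts[0] for name, ts in queues.items() if ts}
    return min(heads, key=heads.get)

queues = {"q1": [3.0, 4.5], "q2": [1.2, 9.0], "q3": []}
print(earliest_customer(queues))   # -> q2
```

By always serving the globally oldest customer, the rule bounds how far any one queue's waiting times can drift from the others, which is the intuition behind its more equitable service.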
Two-way hybrid fiber-coax (HFC) networks enable integrated services, such as high-speed Web surfing, additional phone lines, and video on demand, on top of the existing CATV broadcast service. The key to the success of this technology, however, lies in the quality of service (QoS) provided to subscribers. In terms of available bandwidth and channel quality, the reverse path (from subscribers to the headend) is the most likely obstacle to subscriber satisfaction, owing to its limited bandwidth (5-42 MHz) and the inherent noise characteristics of the tree-and-branch network topology. This paper investigates traffic management issues for guaranteeing the diverse QoS requirements of different traffic types on the return path of an HFC network in which code division multiple access is employed as the medium access protocol. In particular, the tradeoff between the QoS of voice and data services, and strategies by which the headend can improve the QoS of the integrated traffic, are studied and quantified through numerical examples.
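One way to picture the voice/data tradeoff on such a shared return path, under the simplifying assumption (ours, not the paper's analysis) that a fixed pool of spreading codes is partitioned between the services and that voice behaves as a loss system, is the classical Erlang-B blocking formula: reserving more codes for voice lowers voice blocking but leaves fewer codes for data.

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability via the standard recursion
    B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a)), with B(0, a) = 1."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Hypothetical: 10 Erlangs of offered voice traffic; sweep how many
# codes out of the pool are reserved for voice.
for voice_codes in (8, 12, 16):
    print(voice_codes, round(erlang_b(voice_codes, 10.0), 4))
```

Each extra code reserved for voice reduces voice blocking monotonically, while the remainder of the pool, and hence the data throughput, shrinks; that is the tradeoff the paper quantifies with its own model.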
Understanding the nature of data communications requires tools that collect data from the computer, the network, and their interaction. A tool is needed to gain a better understanding of the processes that generate network traffic. The main components are an instrumented Linux kernel, a synthetic benchmark program, a system call tracer, and a set of analysis programs for post-processing. The data collected from the network can be synchronized with the data collected from the computer with adequate accuracy and without expensive hardware. Changes to the operating system (e.g., the scheduling algorithm) or to the network can be easily evaluated with the synthetic benchmark, in which the CPU/IO-intensity ratio and the number of processes of each type can be varied to emulate different real-world applications. The data and code sizes can also be modified to evaluate memory system performance over different working-set sizes. Early measurements on Ethernet indicate that the toolbox is useful: they have revealed how network traffic is affected as the number of processes changes. Development of the toolbox continues in an ATM environment.
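The idea of a synthetic benchmark with a tunable CPU/IO-intensity ratio can be sketched as below. This is a toy single-process version: the actual benchmark runs multiple processes of each type against real I/O, whereas here the IO phase is emulated with a sleep and the work constants are arbitrary.

```python
import time

def synthetic_load(cpu_ratio, iterations, work=20000):
    """Alternate a CPU-bound phase with an emulated IO-wait phase.
    cpu_ratio = 1.0 means pure CPU; 0.0 means pure IO wait."""
    checksum = 0
    for _ in range(iterations):
        n = int(work * cpu_ratio)
        checksum += sum(i * i for i in range(n))   # CPU phase
        time.sleep((1.0 - cpu_ratio) * 0.001)      # emulated IO-wait phase
    return checksum

synthetic_load(0.5, 3)   # a balanced CPU/IO mix
```

Sweeping `cpu_ratio` and the number of concurrent instances is how such a benchmark emulates different real-world application mixes while the kernel instrumentation records the resulting traffic.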
Network performance measurement tools, mainly used for TCP performance evaluation, suffer from serious limitations: they are restricted to single point-to-point configurations, and only the long-term mean throughput can be measured. Their major drawback is that they cover only a subset of TCP's functions and a small range of network systems. In this paper, we propose a TCP performance measurement tool called DBS (Distributed Benchmark System), which aims to provide performance indices for multi-point configurations and to measure changes in throughput over time. It measures the performance of the full set of TCP functions in various operational environments. Experiments were conducted using DBS to measure TCP end-to-end performance over various kinds of networks: LANs, including FDDI, ATM, and HIPPI over ATM, and WANs, including VSAT satellite channels. In short, DBS can both measure and analyze TCP performance in greater detail; through these measurements, it has revealed details of TCP behavior that cannot be captured by existing tools.
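Measuring *changes* in throughput rather than only a long-term mean amounts to bucketing received bytes into fixed time intervals. A minimal sketch follows; the sample data, the interval length, and the Mbit/s units are our own assumptions for illustration, not DBS output.

```python
def throughput_series(samples, interval):
    """samples: list of (timestamp_s, bytes_received) tuples, sorted by
    time. Returns Mbit/s per interval, exposing throughput variation
    over time instead of a single mean."""
    if not samples:
        return []
    start = samples[0][0]
    n = int((samples[-1][0] - start) // interval) + 1
    buckets = [0] * n
    for t, b in samples:
        buckets[int((t - start) // interval)] += b
    return [8 * b / (interval * 1e6) for b in buckets]

samples = [(0.0, 125000), (0.5, 125000), (1.2, 250000)]
print(throughput_series(samples, 1.0))   # -> [2.0, 2.0]
```

A short-lived stall or slow-start ramp shows up as a dip in the series even when the long-term mean looks healthy, which is the kind of detail a mean-only tool hides.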