A drastic TCP performance degradation has been reported when TCP is run over ATM networks. This deadlock problem is caused by the high speed provided by ATM networks, and is therefore shared by any high-speed networking technology over which TCP is run. The problem arises from the interaction of the sender-side and receiver-side Silly Window Syndrome (SWS) avoidance algorithms, because the network's Maximum Segment Size (MSS) is no longer small compared with the sender and receiver socket buffer sizes. Here we propose a new receiver-side adaptive acknowledgment algorithm (RSA3) that eliminates the deadlock problems while maintaining the SWS avoidance mechanisms. Unlike the current delayed acknowledgment strategy, RSA3 does not rely on the exact values of the MSS and the receiver's buffer size to determine the acknowledgment threshold. Instead, RSA3 periodically probes the sender to estimate the maximum amount of data that can be sent without an acknowledgment from the receiver. The acknowledgment threshold is computed as 35 percent of this estimate, guaranteeing deadlock-free TCP transmission. Simulation studies show that RSA3 even improves throughput in some non-deadlock regions, owing to the quicker response of the RSA3 receiver. We have also evaluated different acknowledgment thresholds and found that 35 percent gives the best performance when the sender and receiver buffer sizes are large.
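The threshold rule described above can be sketched as follows. This is a minimal illustration of the idea only; the class and method names are ours, and the probing mechanism itself is abstracted into a single update call.

```python
# Hypothetical sketch of the RSA3 acknowledgment-threshold rule: the
# receiver keeps a probed estimate of the maximum data the sender may
# have in flight without an ACK, and acknowledges once 35% of that
# estimate has arrived. Names and structure are illustrative, not the
# paper's implementation.

class RSA3Receiver:
    ACK_FRACTION = 0.35  # the threshold fraction the abstract reports as best

    def __init__(self, initial_estimate: int):
        # max_unacked: estimated bytes the sender can send without an ACK
        self.max_unacked = initial_estimate
        self.unacked_bytes = 0

    def update_estimate(self, probed_bytes: int) -> None:
        # Called after each periodic probe of the sender.
        self.max_unacked = probed_bytes

    def on_segment(self, nbytes: int) -> bool:
        # Count arriving data; return True when an ACK should be sent now.
        self.unacked_bytes += nbytes
        if self.unacked_bytes >= self.ACK_FRACTION * self.max_unacked:
            self.unacked_bytes = 0
            return True
        return False
```

Because the threshold is derived from a probed estimate rather than from the MSS or the advertised buffer size, the receiver never waits for more data than the sender is actually permitted to transmit, which is what removes the deadlock.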
Applications rely on network transmission services such as dedicated ISDN or ATM to provide the QoS guarantees needed for audio and video multimedia data of reasonable quality. This paper describes information dispersal as an alternative method to introduce robustness into error-sensitive and time-constrained applications when they use best-effort networks like the global Internet. We developed an experimental IP telephony application in which the source audio stream is split and passed simultaneously to multiple, different Internet Service Providers. Before sending, the stream is protected against possible losses with redundant data by an erasure-resilient FEC scheme utilizing a maximum distance separable code. This method assumes that the routing paths are mutually independent. Extensive measurements were taken for voice transmission between two computers located on different continents and connected over the public Internet through four different ISPs. The results show the increased robustness of the generated stream against uncorrelated burst delays and losses, achieved by sharing the original and redundant data among all available connections.
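As a toy instance of the dispersal idea, k data shares plus one XOR parity share form a (k+1, k) erasure code, the simplest maximum distance separable code: the loss of any single share, for example on one ISP path, is recoverable. The paper's scheme is a general MDS FEC; this sketch, including its function names, is ours.

```python
# Toy (k+1, k) erasure code: split a block into k data shares plus one
# XOR parity share; any single lost share can be rebuilt from the rest.
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(block: bytes, k: int):
    """Split `block` into k equal data shares plus one XOR parity share."""
    assert len(block) % k == 0
    n = len(block) // k
    shares = [block[i * n:(i + 1) * n] for i in range(k)]
    shares.append(reduce(_xor, shares))  # parity share
    return shares

def reconstruct(shares, lost: int, k: int) -> bytes:
    """Recover the original block when share index `lost` (0..k) is missing."""
    present = [s for i, s in enumerate(shares) if i != lost and s is not None]
    recovered = reduce(_xor, present)  # XOR of survivors equals the lost share
    data = shares[:k]
    if lost < k:
        data[lost] = recovered
    return b"".join(data)
```

Sending each share over a different ISP makes the single-loss assumption plausible exactly when the routing paths are mutually independent, as the abstract notes.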
In an ATM-WAN linking ATM-LANs, traffic control across multiple links is very important, especially when the QoS of a call is required to be guaranteed. In that case it is necessary to assign the call's QoS requirement to each link from the source node to the destination node, and to calculate the QoS, such as transmission delay, that each link can provide at the stage of call admission. After describing the problems of existing schemes, we propose a scheme that assigns QoS to each link according to its traffic status. In our scheme, the concept of a critical utilization factor of a link is introduced to approximate the transmission delay the link can guarantee after a new call is accepted. For a simple network model with connections of three links, the call blocking rate and network utilization of the scheme were numerically analyzed and simulated. The numerical examples and simulation results show that the proposed scheme can hold down the call blocking rate and promote the utilization of network resources.
Two pillars of QoS routing are discussed: the QoS algorithm and the network function that provides each node a consistent view of the topology. Generally, QoS algorithms are believed to be exceedingly complex, owing to earlier results showing that they belong to the class of NP-complete problems. However, a very efficient QoS algorithm, TAMCRA, has been designed that is only slightly more complex than the well-known Dijkstra algorithm and far from hard NP-complete behavior. The topology distribution mechanisms responsible for offering each node in the system a consistent view are complicated by the coupling of some QoS link metrics with the state of the network resources. The difficulty lies in the different time scales that impact the process: the slow flooding of topology information and the more rapid variations of the traffic flowing through the links.
We have designed and are implementing a research platform, NIST Switch, for experimenting with novel approaches to routing with quality of service (QoS) guarantees in an IP environment. NIST Switch is based on commodity PC hardware running freely available operating systems. It implements quality of service through label switching over Ethernet, using proposed extensions to RSVP and differentiated services to signal QoS requests and distribute labels. In NIST Switch, the labels applied to packets have two fundamental uses in QoS traffic management: locally, labels select the characteristics of the queuing and traffic-shaping measures applied to packets; in the network, labels identify the path segments over which traffic will flow. A label-switched route then consists of a 'bundle' of these sticks and brushes, chosen so as to achieve the appropriate aggregate behavior. NIST Switch routing algorithms attempt to optimize the set of path segments used, so as to maximize the number of service requests that can be met while bounding the number of labels allocated. As an experimental platform, NIST Switch is designed to be easily altered: each of its key components is an independently configurable, readily replaceable module.
This paper describes an application in which speech coding, and the resultant bandwidth usage, is dynamically adapted to the available network bandwidth. Network feedback concerning available bandwidth and current load is provided by the statistics generated by the real-time protocol (RTP) used to support many audio and video applications running on the Multicast Backbone. This feedback is evaluated and used to control the speech encoding and its adaptation. Speech segments are encoded to 16, 12, 10, 8 or 4 bit samples depending on network traffic conditions: fewer bits are used when current network conditions show a reduction in available bandwidth, and additional bits are used when bandwidth is plentiful. Bit reduction lowers the amount of bandwidth needed to transmit the audio samples, and consequently fewer audio data packets are lost. Listening tests were performed to evaluate perceptual quality and to establish an acceptable maximum loss rate and minimum encoding rate.
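The adaptation policy above amounts to a mapping from reported loss to a bit depth. A minimal sketch, assuming illustrative loss-rate breakpoints (the paper does not give its thresholds here):

```python
# Hedged sketch of the bit-depth adaptation described above: pick one of
# the five sample depths (16, 12, 10, 8, 4 bits) from RTP/RTCP loss
# feedback. The breakpoint values are our assumptions, not the paper's.

BIT_DEPTHS = [16, 12, 10, 8, 4]              # allowed encodings, best first
LOSS_BREAKPOINTS = [0.01, 0.03, 0.06, 0.10]  # assumed loss-rate limits

def select_bit_depth(loss_rate: float) -> int:
    """Map a reported packet-loss rate to an encoding bit depth."""
    for depth, limit in zip(BIT_DEPTHS, LOSS_BREAKPOINTS):
        if loss_rate <= limit:
            return depth
    return BIT_DEPTHS[-1]  # heavy loss: fall back to 4-bit samples
```

A real implementation would also smooth the RTCP loss reports over time to avoid oscillating between neighboring depths.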
The evolving demands of networks to support Webtone, H.323, AIN and other advanced services require multimedia servers that can deliver a number of value-added capabilities, such as negotiating protocols, delivering network services, and responding to QoS requests. The server is one of the primary limiters on network capacity. The next-generation server must be based upon a flexible, robust, scalable, and reliable platform to keep pace with the revolutionary rate of service demand and development while continuing to provide the same dependability that voice networks have provided for decades. A new distributed platform, based upon the Totem fault-tolerant messaging system, is described. Processor and network resources are modeled and analyzed. Quantitative results are presented that assess this platform in terms of messaging capacity and performance for various architecture and design options, including processing technologies and fault-tolerance modes. The impacts of fault-tolerant messaging are identified based upon analytical modeling of the proposed server architecture.
Several approaches have been proposed to empower communication systems with quality of service (QoS) capabilities. In general, their main goal is to coherently support the end-to-end performance needs of applications, based on the establishment of, and agreement on, a set of concepts, policies and mechanisms. Regardless of the approach used, an important challenge associated with quality of service provision is the development of an efficient and flexible way to monitor QoS. The existence of an effective metric to quantify the QoS offered to classes or flows of data, and to assess the performance of communication systems, would facilitate the implementation of a QoS monitor. Such a QoS metric should produce comparable measures independently of the nature and scope of the objects being quantified; that is, it should make uniform QoS measures possible. Nevertheless, the main difficulty in developing such a metric stems exactly from the disparate nature and scope of the things to measure. This paper discusses the above-mentioned difficulties and proposes a QoS metric intended to support QoS measurements in an integrated QoS management system, which is fundamental for constructing QoS-capable communication systems able to deal efficiently with the increasing variety of applications. The paper also presents the main challenges found during a first attempt at implementing the QoS metric. The intention was to test the metric's basic concepts, to assess its feasibility, and to measure its associated overhead. The results of these overhead tests are also presented.
Providing profitable video services is becoming one of the primary goals of many network services when delivering a large number of video streams over a single data link; aggregated bandwidth utilization and QoS degradation are the issues that need to be considered. It is well known that Variable Bit Rate video traffic generated by Moving Picture Experts Group (MPEG) coding exhibits significant multiple-time-scale rate variability. This phenomenon causes difficulties for QoS provisioning, QoS control, and QoS management. Many research activities have concentrated on reducing the bursty nature of video traffic and increasing the statistical multiplexing gain by using different buffering techniques at several transmission levels, which induces frame delays and delay variations. In this paper we propose a multiplexing technique that consists of two phases. First, we use the 'scene' property of video traffic, which exhibits a larger time scale of bit-rate variation, to segment the traffic. Second, we use MPEG temporal relationships to achieve multiplexing within each 'scene' segment in conjunction with the service-class requirements, so that better bandwidth utilization is achieved without sacrificing video quality. This approach is especially promising in applications that have a limited amount of resources, such as wireless networks; in those that carry a large number of traffic types with different QoS requirements, such as video data servers; and in situations where multicasting is not applicable. Finally, we suggest that even better statistical gain can be achieved by using the concept of differentiated services to multiplex ongoing traffic in a service-class fashion.
The Differentiated Services approach has recently been proposed for providing Quality of Service in the Internet. Simple Integrated Media Access (SIMA) is a network service concept that uses this approach by utilizing priority bits in packet headers. Three drop-preference bits are used for deciding which packets to discard when congestion arises in a core network node. A single delay-indication bit marks the packet as belonging to either a real-time or a non-real-time flow; the acceptance of packets is not directly connected with the real-time properties of a packet. The real-time support of SIMA is based on two features: first, real-time packets are placed in a short real-time buffer inside a core network node; second, the real-time buffer is always emptied before the longer non-real-time buffer. This means a shorter queuing delay for real-time packets. For the successful operation of the network it is crucial that the transfer delay for real-time packets be small enough. On the other hand, the difference in transfer delay between the real-time and non-real-time services must be significant enough to justify their existence. Finally, there is also the question of whether real-time packets can block non-real-time flows, since real-time packets are always transmitted first in a SIMA node. To answer these questions, this paper addresses the issues related to the transfer delay of packets over a SIMA network. We present extensive simulation results on the transfer delay distributions in order to provide insight into SIMA and its transfer delays.
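The two-buffer service rule above can be sketched directly. The buffer sizes and the class interface below are illustrative assumptions; only the scheduling rule (real-time buffer is always emptied first) comes from the description.

```python
# Sketch of the SIMA node scheduling rule: a short real-time queue that
# is always served before the longer non-real-time queue. Buffer limits
# are illustrative assumptions.
from collections import deque

class SimaNode:
    def __init__(self, rt_limit: int = 8, nrt_limit: int = 64):
        self.rt, self.nrt = deque(), deque()
        self.rt_limit, self.nrt_limit = rt_limit, nrt_limit

    def enqueue(self, packet, realtime: bool) -> bool:
        """Accept a packet into its class buffer; False means it was dropped."""
        q, limit = (self.rt, self.rt_limit) if realtime else (self.nrt, self.nrt_limit)
        if len(q) >= limit:
            return False  # buffer full: packet discarded
        q.append(packet)
        return True

    def dequeue(self):
        # The real-time buffer is always emptied before the non-real-time one.
        if self.rt:
            return self.rt.popleft()
        if self.nrt:
            return self.nrt.popleft()
        return None
```

This is exactly the structure behind the blocking question the abstract raises: under sustained real-time load, `dequeue` never reaches the non-real-time queue.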
Live audio streaming is an important component of Internet multimedia. The currently deployed Internet offers poor support for such streams due to the lack of QoS capabilities. However, IPv6, the new Internet Protocol, now includes provision for QoS. The introduction of a flow label in the protocol header is intended to enable classification of packets according to their destination and service. Reservation protocols such as RSVP can make use of this stream identifier to reserve resources for particular streams in the routers along the transport path. This paper explores the effectiveness of resource reservation in IPv6 networks for live audio streaming. An important area for investigation is whether there is an efficiency gain from the employment of low-level flow labels. The paper summarizes the results of our extensive measurements and comparisons with currently deployed technologies. Specific attention is paid to the performance characteristics of real-time applications, notably delay, jitter and bandwidth. The results are based on a specially developed audio streaming application which enables RSVP over IPv6 using flow labels. Since the integration of RSVP in IPv6 is still work in progress, we had to modify the currently available RSVP implementation in order to access the IPv6 flow label. For audio data transport, we use the real-time transport protocol (RTP). The real-time transport control protocol (RTCP), known as the feedback channel of RTP, forms with its receiver reports the basis of our benchmark tests.
An algorithm was developed with the goal of automatically recognizing isochronous traffic in a packet-switched network. Designed with constant bit rate traffic in mind, this algorithm examines the interarrival times between packets in flows, classifying as isochronous those flows that have both an average interarrival time consistent with an isochronous application and a sufficiently low variance of interarrival times. The algorithm was analyzed based on a priori considerations and was then applied to actual traffic in an effort to characterize the parameters for appropriate use. Traffic flows from several common isochronous applications were examined, focusing on common CODECs. From a priori considerations, the algorithm has the desirable characteristics of building intelligence into the network and being purely local to the individual switch or router on which it is installed, while remaining totally transparent to other equipment. While the approach appears sound in general principle, the isochronous applications examined show a much higher degree of variability in packet rates than expected. Moreover, most common applications seem to be shifting to CODECs that produce variable bit rate traffic. Thus, using the algorithm with such applications would require a set of threshold parameters that would likely result in a high misclassification rate.
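The classification test described above reduces to two checks on a flow's interarrival samples. A minimal sketch, where the target gap (here 20 ms, a common voice CODEC packetization interval) and both thresholds are illustrative assumptions:

```python
# Sketch of the isochronous-flow test: flag a flow when its mean packet
# interarrival time matches an expected CODEC interval and the variance
# of the interarrival times is sufficiently low. All threshold values
# are illustrative assumptions.
from statistics import mean, pvariance

def is_isochronous(interarrivals, target=0.020, rel_tol=0.10, max_var=1e-6):
    """interarrivals: gaps between packets in seconds.
    target: expected CODEC gap; rel_tol: allowed relative deviation of the
    mean; max_var: allowed population variance of the gaps."""
    if len(interarrivals) < 2:
        return False  # not enough samples to judge
    m = mean(interarrivals)
    mean_ok = abs(m - target) <= rel_tol * target
    var_ok = pvariance(interarrivals) <= max_var
    return mean_ok and var_ok
```

The abstract's negative finding maps directly onto this sketch: variable bit rate CODECs keep the mean near `target` but push the variance well past any usable `max_var`, so the two thresholds cannot be set without misclassification.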
Giving users simple access to the QoS selection capabilities of ATM networks is important for the future of integrated services networks. We describe the first instance of an application in which users could tune ATM traffic parameters with a button, at run time. To achieve this we implemented the necessary signaling extensions on ATM switches as well as on end systems. Furthermore, we extended Arequipa to support modification, and modified the popular Mbone tool Vic to use Arequipa to request and modify ATM bandwidth at will. We describe how Vic accomplished this over the ATM WAN of SWISSCOM, transferring live video from Lausanne to Basel over switched, renegotiable ATM connections.
The efficient support of the IP Integrated Services architecture (IIS) in ATM networks is nowadays a challenging research topic in the internetworking area. The integration of the two architectures requires overcoming the inconsistencies between their service models, in particular when considering multicast communications. The IIS architecture is based on the Resource Reservation Protocol (RSVP). IP multicast and RSVP rely on a receiver-oriented mode, where a receiver is not only capable of deciding whether to join or leave a multicast session, but is also allowed to choose the associated QoS independently of the other participants. On the other hand, the ATM service model is sender-oriented: the originating node of a multipoint session is the only entity allowed to add or delete leaf nodes. In addition, the originating node sets the QoS of a session, which is the same for all receivers. The IETF standard for IP multicast over ATM is the MARS model; nevertheless, this approach has been developed for best-effort connections only. In this paper we present an extension to the MARS protocol that integrates it with the RSVP protocol to support QoS management. Within a Logical IP Subnet (LIS), a Multicast Integration Server (MIS) is responsible for resolving IP multicast addresses to ATM addresses using the MARS protocol, and for QoS management using RSVP. In addition to the functionalities foreseen by the MARS specification, the MARS server within the MIS uses a new set of messages to inform the sender to a group about the QoS chosen by each receiver. An RSVP entity within the MIS is used to distribute RSVP messages within the LIS and to provide to the MARS the translation of the QoS level from IIS to ATM.
This paper defines a new group communication model called concast communication. The counterpart of multicast, concast involves multiple senders transmitting to a single receiver. Concast communication is used in a wide range of applications, including collaborative applications, report-in style applications, and end-to-end acknowledgements in a reliable multicast protocol. This paper explores the issues involved in designing concast communication services. We examine various message combination methods, including concatenation, compression, and reduction, to reduce the traffic load imposed on the network and packet implosion at the receiver. Group management operations such as group creation/deletion, joining/leaving, and concast routing are discussed. We also address transmission issues such as reliable delivery, flow control, congestion control, and QoS. We conclude the paper by presenting a concast communication model that we have been developing in the context of TMTP5. The model uses concast communication to implement reliable multicast, and it shares concast trees with the multicast group whenever possible to reduce overhead costs.
In this paper, we consider two networks, the next-generation public network and the Internet, as the future information network infrastructure. These two networks were originally developed independently; now they both offer several services intended to let users utilize services in the Internet environment from the public network environment, and especially to utilize services in the public network from the Internet environment, but their separation will have a bad influence on the future multimedia network society. Consequently, we adopted TINA and proposed an interworking architecture for seamless service utilization between the two networks. Because the two networks have different characteristics, we introduce an IWU as a gateway node to realize seamless interworking; the IWU acts as a gateway at both the application level and the transport level. Moreover, when a user wishes to utilize the public network's services from the Internet after moving to a place where only the Internet environment is available, the kind of terminal and operating system s/he will use is an indeterminate factor. However, it is obviously desirable to be able to utilize the public network's services with any terminal and operating system. Therefore we also proposed and implemented a platform named PLUS-TINA, constructed in the Java language so as to ensure its portability.
In current end systems, multiple flexible, complex and distributed applications concurrently share, and compete for, both end-system resources and the transmission bandwidth of heterogeneous multi-protocol networks, especially the Internet. Our objective is to enable adaptation awareness in these applications so that they can fully cope with the dynamics of resource availability over the heterogeneous Internet, as well as with fluctuations in the QoS requirements of the applications themselves. In this paper, we present the theoretical and practical aspects of a Task Control Model implemented in the middleware layer, which applies control-theoretic approaches to measurement-based samples monitored in the network traffic, as well as to the resource and QoS demand dynamics observed in the end systems.
Simple Integrated Media Access (SIMA) is a network service based on drop-preference bits in every packet. A key characteristic of SIMA is that packet-discarding decisions are made locally, without any knowledge of the load conditions in other parts of the network. A possible problem of SIMA is that some packets could be lost in the last node after they have traversed the whole network. This seems to be a waste of network resources, as other packets could have been transmitted instead of the packets discarded in the last node. The question addressed in the paper is how much a network may profit by using perfect information about the network load condition in such a way that goodput is maximized by discarding packets only in the first node. A network with 5 nodes has been evaluated under a large number of different load conditions. The results show that the average benefit is less than 2 percent of the network capacity if the overall packet loss ratio is at most 20 percent. Only if the average packet loss ratio is very high could the benefit be significant.
Due to its traffic control and performance assurance characteristics, ATM is being employed as the core network on most campuses. However, the bulk of the workstations remain on Ethernet, generating IP traffic that passes through ATM using special schemes such as PTOP or LANE. In such a network, performance suffers from the extra overheads of the multiple conversions between cells and packets and of managing virtual circuits. The aim of this paper is to compare the performance of PTOP and LANE in passing IP traffic under various conditions. This study helps in understanding the various performance issues in these environments in order to define the end-to-end quality of service for Ethernet-ATM networks.
In this paper we outline an overall network architecture for Internet Service Providers who want to offer an Internet Virtual Private Network service with QoS guarantees and, at the same time, a high level of efficiency in network resource usage. The proposed approach is based on the negotiation of a service level agreement, which includes the definition of the profile of traffic the user is allowed to emit. The ingress nodes perform adaptive shaping of the user traffic entering the network, driven by a fast congestion-notification scheme. In this scenario, the adoption of a service architecture based on a class-of-service concept enables the Internet Service Provider to offer different levels of network performance according to customer needs.
Widespread availability of real-time services is the next challenge for the Internet, after the introduction of the WWW has already changed Internet traffic patterns once. As the Internet provides a 'best effort' datagram service only, no assurance of actual packet delivery can be given for real-time flows. Most real-time applications tolerate occasional loss of packets, but are sensitive to losses that occur in bursts. This is especially significant for a voice service, which we consider as our primary target application in this paper due to its importance in the future Internet, its relatively well-known subjective properties in the presence of packet loss, and its simple flow structure. Currently, all hop-by-hop approaches to enhancing the QoS of real-time flows either use strict per-flow setup and state maintenance of reservations, or rely on the sender/ingress router, which is unaware of the amount and location of congestion in the network, to mark packets for preferred treatment. This results in either high resource consumption in the network or unsatisfactory perceived quality. For interactive voice, end-to-end adaptation to the current network load is also difficult to apply, considering the per-flow overhead and usual traffic properties.
Much attention has recently been given to the differentiated services (Diffserv) approach to providing Quality of Service (QoS) in IP networks. This packet-marking-based approach to IP QoS is attractive due to its simplicity and ability to scale. Two of the most popular services proposed for the Diffserv approach are the Assured and Premium Services. In this work, prototypical implementations of Diffserv components are described; the prototypes are used to study the single-queue, dual drop-preference model proposed as a basis for assured services in Diffserv.
With the proliferation of Continuous Media (CM) applications on the Internet, it has become important to devise suitable metrics that capture their performance. One such metric is packet loss, which results from statistical multiplexing overloads, missed deadlines and other network pathologies. While work exists in the literature on estimating the packet loss ratio, little has been done on capturing the actual loss process itself. For the same loss ratio, CM applications perceive different QoS for different loss patterns. Thus, it is important to provide simple means of capturing the loss process for users as well as for operators; we provide one such approach in this paper. We also describe a control algorithm that regulates the loss process while handling multiplexing overloads. This algorithm, which can be used in a statistically shared resource such as a server or a network buffer, preemptively discards packets during overloads based on the 'distance' to the previously lost packet of the same stream. The experimental results, involving Variable Bit Rate video streams, indicate that the algorithm greatly improves the QoS by spreading out the losses of the various streams with their disparate loss requirements.
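The distance-based discard rule above can be sketched as follows: under overload, discard from the stream whose previous loss lies furthest in the past, which keeps each stream's losses spread out. The data structure and the absence of per-stream weighting are our simplifying assumptions.

```python
# Sketch of the loss-spreading discard rule: when a packet must be
# dropped, pick the stream with the greatest 'distance' (elapsed time)
# since its own last loss. The flat dict representation is an
# illustrative assumption.

def pick_victim(streams: dict, now: float):
    """streams: stream id -> time of that stream's last discarded packet.
    Returns the stream whose previous loss is furthest in the past."""
    return max(streams, key=lambda s: now - streams[s])

def discard(streams: dict, now: float):
    """Choose a victim stream under overload and record the loss."""
    victim = pick_victim(streams, now)
    streams[victim] = now  # this loss resets the stream's distance to zero
    return victim
```

Repeated calls at the same overload instant therefore rotate losses across streams instead of concentrating them in one, which is the burst-avoidance property the abstract claims.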
New services come at a cost to the Internet community; therefore, determining the level of QoS needed for a service is important, because guaranteeing quality demands extra functionality and extra cost for equipment in the network. We have chosen voice transmission over an IP network as a candidate service for the study of QoS issues. We wanted to see the difference in perceived quality of service between two cases: a network with conventional forwarding and a similar network using layer 3 switching. We present measurements with a commercial embedded microprocessor system and with public-domain software on a general-purpose computer. From this experience we can say that the present general-purpose computer architecture is by no means optimal for providing controlled quality of service for real-time communication. We conclude that routers offering real-time services need prioritized traffic handling.
A recent flurry of research activity has defined an Integrated Internet Services (IIS) model, in which traditional best-effort datagram delivery can co-exist with additional enhanced-QoS delivery classes. Such a service model can be used efficiently only if it is combined with a usage-based, QoS-sensitive pricing scheme. The Realtime Traffic Flow Measurement architecture (RTFM) provides a tool for measuring Internet traffic flows. Nevertheless, the current Internet still lacks an accounting infrastructure capable of supporting sophisticated pricing schemes. This paper presents the design and implementation of a traffic accounting mechanism for IIS. The implementation is based on the RTFM framework and has two main components: meters, which are dedicated hosts attached to a network segment that measure the traffic flowing on that segment, and managers, which retrieve information from the meters.
In this paper, we discuss the changing landscape of Internet/telephony integration from the standpoint of price structure. We propose an idea called soft guarantee as a basis for service providers and users to work out their charging mechanisms. Since telephony will play a significant role in the system, we also provide some experimental results to illustrate the benefits of soft guarantee.
In this work, we first briefly introduce the concept of IP flow classification on a general conceptual level. The intention is to rise above the technological details and create a conceptual point of view on flow classification and closely related issues. We then study and compare earlier flow classification methods, such as the all-and-selected flow classifier and the packet-count flow classifier. The comparison of these methods is done with actual network traffic, and various performance metrics are presented. It is found that while the traditional methods of flow classification reduce the resource usage of the network elements, they provide the user with an ambiguous traffic profile at best. A measurement-based learning approach to flow classification is then presented. We first introduce the list-based flow classification algorithm to act as the reference point for the novel approach of using learning vector quantization in flow classification. It is found that both the list classifier and the learning vector quantization algorithm, when used in flow classification, require only moderate performance from the network elements while producing an intuitive and user-comprehensible traffic profile that is able to adapt to traffic-profile changes. The learning vector quantization flow classifier is more sensitive to changing network traffic profiles and functions somewhat more accurately than the list classifier. While all measurement-based approaches suffer the delay of analyzing the measurement data, our results indicate that a measurement-based approach to flow classification can provide users with more accurate service profiles in a changing traffic environment while placing reasonable performance demands on the network equipment.
In this paper we describe a technique for measuring one-way packet delay and loss on the international Internet, and describe the use of this technique to characterize the QoS behavior of Internet connections, including packet loss, delay, and delay variation. The delay measurement technique depends on the use of Global Positioning System time receivers to provide an accurate absolute time reference at the network measurement points. Packets in the network are characterized by a signature derived from the packet header and payload, and so can be recognized at the measurement points. Thus packet loss can be detected, and the time taken for a packet to move from one measurement point to another can be measured with an accuracy of better than 10 microseconds. Measurements have been made on the Internet between New Zealand and the US, the UK, and Singapore, and have revealed a number of interesting phenomena on differing time scales.
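The signature technique above can be sketched as a digest over invariant packet fields plus the payload, matched across two GPS-synchronized measurement points. The field selection and digest length are our assumptions; mutable fields such as TTL and the header checksum, which change en route, must clearly be excluded for the signature to match at both points.

```python
# Sketch of the packet-signature matching idea: derive a short digest
# from invariant header fields plus the payload so the same packet can
# be recognized at both measurement points, then take the one-way delay
# as the difference of GPS-synchronized timestamps. Field choice and
# digest width are illustrative assumptions.
import hashlib

def packet_signature(src: str, dst: str, proto: int, ident: int,
                     payload: bytes) -> str:
    h = hashlib.sha1()
    h.update(f"{src}|{dst}|{proto}|{ident}|".encode())
    h.update(payload)
    return h.hexdigest()[:16]  # 64-bit signature for compact matching

def one_way_delay(sent: dict, received: dict, sig: str):
    """sent/received: signature -> GPS timestamp at each measurement point.
    Returns the one-way delay, or None if the packet was lost en route."""
    if sig not in received:
        return None
    return received[sig] - sent[sig]
```

Loss detection falls out for free: any signature recorded at the first point but never seen at the second corresponds to a packet lost between them.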
Wireless LANs, CDPD, and microcellular data networks are fast becoming the preferred method of access to the Internet. Wireless links supporting end-user mobility, however, exhibit several characteristics that are different from traditional wired networks. These include: dynamically changing bandwidth due to mobile host movement in and out of cells where bandwidth is shared, high rates of packet corruption and subsequent loss, and frequent and lengthy disconnections due to obstacles, fading, and movement between cells. In addition, these effects are short-lived and difficult to reproduce, leading to a need for adequate measurement and analysis tools and for feedback utilities to allow the mobile user to respond to changes in wireless link quality.
The prevailing Internet has brought many novel network applications. These applications often involve interactions among multiple members in a single session. For example, the images and voice of the lecturer in distance learning must be transmitted to multiple destinations at the same time on demand. The status and opinions of every participant in a video conference should also be transmitted to all others to sustain the interactivity of the conversation. Without suitable protocol support from the underlying networks, these applications may be costly or infeasible. One important form of protocol support is the capability of multicasting: sending a single data packet to a set of receivers that are members of a multicast group. Four well-known Internet multicast routing protocols have been developed: DVMRP, MOSPF, CBT, and PIM. In this paper, we examine the four protocols in detail. Then we propose several directions for performance improvement.
In this paper, we present a new Internet multicast routing architecture and protocol called Adaptive Source and CORe based multicasT (ASCORT). ASCORT is designed with the objective of flexibly balancing scalability and end-to-end performance. Unlike the core-based approach, all the distribution trees in ASCORT are source-specific. As a result, ASCORT has better end-to-end performance than the core-based approach by being adaptive to the sources' location distributions. On the other hand, the tree nodes are not required to keep routing states for all the sources, thus resulting in better scalability than the source-specific approach. In this paper we provide an overview of ASCORT's approach and two protocols to convert a forest of independent source-specific trees into a forest of ASCORT source-specific distribution trees. We also present preliminary simulation results comparing ASCORT's performance with the core-based and source-specific approaches.
This paper proposes Monitor-Based Flow Control (MBFC) to realize the flow/congestion control needed for one-to-many bulk reliable multicast (RM) protocols. Bulk RM on top of IP multicast requires flow/congestion control because 1) it needs to adjust to the effective bandwidth to minimize packet losses and retransmissions, and 2) it must share link bandwidth with legacy traffic such as TCP so that RM does not aggressively override it. We therefore think it is very important to provide an effective flow and congestion control mechanism that enables RM traffic to coexist with legacy TCP traffic in the Internet, and MBFC is a generic mechanism to implement such control. MBFC is based on rate flow control and uses a monitor mechanism to adjust its sending rate. In order to realize coexistence with TCP traffic, it mimics TCP's flow/congestion control algorithm, that is, additive increase/multiplicative decrease. To investigate its effectiveness, MBFC was implemented in our RM protocol, and a series of simulations was performed in which simultaneous bulk RM and TCP flows were injected. The simulation results showed that our rate control policy effectively achieved bandwidth sharing between bulk RM and bulk TCP transfer traffic. We also conclude that 1) setting the transmission rate to that of the worst receiver leads to extremely unfair bandwidth sharing dominated by TCP flows, 2) if RM experiences only a few losses it should not give up its bandwidth, otherwise TCP monopolizes it, and 3) it is more effective for MBFC to introduce RED gateways to protect RM from aggressive TCP traffic.
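The additive increase/multiplicative decrease rule that MBFC mimics can be sketched as a per-monitor-interval rate update; the parameter values and clamping bounds below are illustrative assumptions, not MBFC's actual constants:

```python
def aimd_update(rate, loss_detected, alpha=1.0, beta=0.5,
                min_rate=1.0, max_rate=1000.0):
    """One monitor-interval update of a sender's rate (e.g. in
    packets/s), mimicking TCP's congestion behaviour: add alpha to
    the rate after a loss-free interval, multiply it by beta (< 1)
    when the monitor reports loss. The result is clamped to
    [min_rate, max_rate]."""
    if loss_detected:
        rate *= beta   # multiplicative decrease on congestion signal
    else:
        rate += alpha  # additive increase while the path looks clear
    return max(min_rate, min(rate, max_rate))
```

Driving the update from a monitor's loss report rather than per-packet ACKs is what makes this a rate-based scheme suitable for one-to-many multicast, while the AIMD shape keeps it roughly fair against competing TCP flows.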
Both IETF's Classical IP-over-ATM and the ATM Forum's MPOA/LANE take the client-server approach to IP/ATM address resolution. This paper proposes and discusses an alternative approach that more closely emulates the traditional peer-to-peer resolution method in stub IP/ATM networks. The absence of address resolution servers in this Server-Less IP Multicast (SLIM) scheme improves portability and reliability. The entire software is implemented in one piece, rendering (re)configuration easier. And since SLIM does not have servers as single points of failure, the reliability of the system is greatly improved. SLIM achieves these improvements at a complexity comparable to that of the client-server schemes.
A tree-based shortest path routing algorithm is introduced in this paper. With this algorithm, every network node can maintain a shortest path routing tree of the network topology with itself as the root. In this algorithm, every node constructs its own routing tree based upon its neighbors' routing trees. Initially, the routing tree at each node consists of the root only, the node itself. As information is exchanged, every node's routing tree evolves until a complete tree is obtained. This algorithm is a trade-off between the distance vector and link state algorithms. Loops are automatically deleted, so there is no count-to-infinity effect. A simple routing tree information storage approach and a protocol data unit format to transmit the tree information are given. Some special issues, such as adaptation to topology changes, implementation of the algorithm on LANs, and convergence and computation overhead, are also discussed in the paper.
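One round of the tree-merging idea can be sketched as below; the data layout (paths stored explicitly with distances) and all names are assumptions for illustration, not the paper's storage format. Discarding any neighbor path that re-enters the local node is what deletes loops and so avoids count-to-infinity:

```python
def merge_routing_trees(self_id, neighbor_trees, link_costs):
    """Rebuild this node's routing tree from its neighbors' trees.
    neighbor_trees: {nbr: {dest: (dist, path)}}, where path is the
    node sequence from nbr to dest in nbr's current tree.
    link_costs: {nbr: cost of the direct link to nbr}.
    Returns {dest: (dist, path)} rooted at self_id."""
    tree = {self_id: (0, [self_id])}
    for nbr, cost in link_costs.items():
        for dest, (dist, path) in neighbor_trees[nbr].items():
            if self_id in path:   # path loops back through this node
                continue
            cand = cost + dist
            if dest not in tree or cand < tree[dest][0]:
                tree[dest] = (cand, [self_id] + path)
    return tree
```

Iterating this merge as neighbors exchange updated trees grows each node's tree from the bare root toward the complete shortest path tree, combining distance-vector-style exchange with link-state-style path knowledge.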
Label switching technology enables high-performance, flexible, layer-3 packet forwarding based on fixed-length label information mapped to a layer-3 packet stream. A Label Switching Router (LSR) forwards layer-3 packets based on the label information mapped to their layer-3 address information as well as on the layer-3 address information itself. This paper evaluates the required number of labels under a traffic-driven label mapping policy using real backbone traffic traces. The evaluation shows that this label mapping policy requires a large number of labels. In order to reduce the required number of labels, we propose a label mapping policy that applies traffic-driven label mapping to all traffic toward the same destination network. The evaluation shows that the proposed label mapping policy requires only about one tenth as many labels as the traffic-driven label mapping for the host-pair packet stream and the topology-driven label mapping for the destination-network packet stream.
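The difference between the mapping granularities can be illustrated with a toy label count over a trace: one label per distinct aggregation key actually seen in the traffic. The flow tuple layout and sample data below are invented for illustration, not taken from the paper's traces:

```python
def count_labels(flows, key):
    """Under a traffic-driven mapping policy, one label is consumed
    per distinct aggregation key observed in the trace. Each flow is
    a tuple (src_host, dst_host, dst_network)."""
    return len({key(f) for f in flows})

# Hypothetical trace: many host pairs, few destination networks.
flows = [
    ("h1", "h9", "netA"), ("h2", "h9", "netA"),
    ("h1", "h8", "netA"), ("h3", "h7", "netB"),
]
host_pair_labels = count_labels(flows, key=lambda f: (f[0], f[1]))  # 4
dest_net_labels = count_labels(flows, key=lambda f: f[2])           # 2
```

Because many host pairs share a destination network, aggregating the mapping at the destination-network level needs far fewer labels, which is the effect the paper quantifies at roughly a factor of ten on real backbone traces.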
This paper discusses and compares specific aspects of models such as Classical IP, MARS, NHRP, LANE and MPOA. We will refer to these models as the conventional models in the following text. They mainly allow an IP or MAC service to be emulated over ATM, and are characterized by the fact that they are defined in a non-intrusive way with respect to ATM. Other model types are possible, such as MPLS, which is intrusive with respect to ATM. In this paper, we focus on models which consider the underlying layer as a Non-Broadcast Multiple Access (NBMA) black box providing a connection-oriented service. We propose a complete re-engineering of the conventional models into a single, unique generic model, called MPON, or MultiProtocol Over NBMA.
Routing voice over the Internet has drawn considerable attention over the past few years. The Internet has generally provided poor support for delay-sensitive voice traffic, and thus new protocols have been proposed to give QoS guarantees. This paper reports on the performance of the public Internet between New Zealand and the United States, and simulates the effects that a guaranteed QoS would have.
The growth of the Internet is placing strain on the worldwide telecommunications infrastructure. In particular, it is no longer possible to purchase capacity on terrestrial cables to some parts of the world. To meet the growing traffic needs of the Internet, some Network Service Providers are deploying asymmetric satellite connections. These providers must choose an architecture for the international component of the network. Depending on the architecture chosen, the traffic on the international circuit might be made up of a large number of independent connections or a smaller number of connections carrying aggregated traffic. The appropriate approach is not immediately apparent because there are opposing performance factors and because the effect of asymmetric delay on TCP performance is not well understood. In this paper we describe a discrete event simulation of the effect of carrying multiplexed HTTP connections over an asymmetric high bandwidth-delay circuit. We show that a high degree of multiplexing mitigates TCP's bandwidth-delay product limits, but that using a TCP connection per HTTP request causes a significant increase in delay.