We show how to model the black-holing and looping of traffic during an Interior Gateway Protocol (IGP) convergence event in an IP network, and how to significantly improve both the convergence time and the packet loss duration through IGP parameter tuning and algorithmic improvement. We also explore some congestion avoidance and congestion control algorithms that can significantly improve the stability of networks in the face of occasional massive control message storms. Specifically, we show the positive impacts of prioritizing Hello and Acknowledgement packets and of slowing down LSA generation and retransmission upon detecting congestion in the network. For some types of video, voice signaling and circuit emulation applications it is necessary to reduce traffic loss durations following a convergence event to below 100 ms, and we explore that using Fast Reroute algorithms based on Multiprotocol Label Switching Traffic Engineering (MPLS-TE), which effectively bypass IGP convergence. We explore the scalability of primary and backup MPLS-TE tunnels where the MPLS-TE domain is backbone-only or edge-to-edge. We also show how much extra backbone resource is needed to support Fast Reroute, and how it can be reduced by taking advantage of the Constrained Shortest Path First (CSPF) routing of MPLS-TE and by reserving less than 100% of primary tunnel bandwidth during Fast Reroute.
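The convergence-time improvement can be illustrated with a toy additive model: the loss duration is roughly the sum of the failure detection, LSA generation, flooding, SPF computation and FIB update stages, so tuning the detection and LSA timers dominates the gain. The stage names and millisecond values below are illustrative assumptions, not figures from the paper.

```python
# Toy additive model of IGP convergence time (all values are
# illustrative assumptions, not measurements).

def convergence_time_ms(failure_detection, lsa_generation, flooding,
                        spf_computation, fib_update):
    """Total time before traffic stops being black-holed or looped."""
    return (failure_detection + lsa_generation + flooding
            + spf_computation + fib_update)

# Conservative default timers vs. aggressively tuned timers (hypothetical).
default = convergence_time_ms(failure_detection=1000, lsa_generation=500,
                              flooding=100, spf_computation=200,
                              fib_update=400)
tuned = convergence_time_ms(failure_detection=50, lsa_generation=10,
                            flooding=100, spf_computation=200,
                            fib_update=400)
print(default, tuned)  # 2200 760
```

Even in this crude model, tuning only the two timer-driven stages cuts the loss duration by roughly a factor of three, which is why sub-100 ms targets require bypassing IGP convergence entirely with Fast Reroute.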
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
The network stability, performance and QoS guarantees given to end users strongly depend on the routing protocol. A significant increase in the average size of routing tables degrades routing protocol performance. Route aggregation can be used to decrease the amount of routing information processed and stored, but it creates many configuration and control problems. Traditionally, route aggregation is used for a fixed network configuration. The edge routers process and store the lower-level network detail but pass only the aggregate information to the upper level. In this way, the edge routers spare the upper-level routers unnecessary detail, but they also deny other edge routers the detailed information needed to calculate an efficient aggregation plan for all lower-level networks. Later exceptions to the address plan can be handled by CIDR principles, but at the cost of inserting the exception detail into the upper-level network, to be processed and stored by all of its routers.
The paper describes how routers can exchange supplemental routing information to automatically determine the best route aggregation. The scheme requires no initial configuration of aggregation rules, only of the network interfaces to the subnetworks. Aggregation is thus no longer statically constrained by, and pre-calculated from, the initial network addressing plan, but is as automatic and as flexible to changes as the routing protocol itself. The supplemental information needed to do this may be exchanged through extensions of the routing protocol or through a separate message-exchange protocol installed on the edge routers of the network.
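The core computation such a scheme performs can be sketched with the Python standard library: given the detailed prefixes an edge router learns from its lower-level networks, collapse adjacent and contained prefixes into the smallest set of aggregates to advertise upward. The collapse rule here is a simplification of whatever the paper's protocol would negotiate, and the example prefixes are hypothetical.

```python
# Sketch of automatic route aggregation: collapse the detailed prefixes
# learned from lower-level networks into the fewest covering aggregates.
import ipaddress

def aggregate(prefixes):
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

detail = ["10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24",
          "10.2.0.0/24"]
print(aggregate(detail))  # ['10.1.0.0/22', '10.2.0.0/24']
```

Only the two aggregates would be passed to the upper level; the four /24 details stay local to the edge router.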
It is common, either for a telecommunications service provider or for a corporate enterprise, to have multiple data networks. For example, both an IP network and an ATM or Frame Relay network could be in operation to serve different applications. This can result in parallel transport links between the same two locations, each carrying data traffic under a different protocol. In this paper, we consider some practical engineering rules, for particular situations, to evaluate whether or not it is advantageous to combine these parallel traffic streams onto a single transport link. Combining the streams requires additional overhead (a so-called "cell tax") but, in at least some situations, can result in more efficient use of modular transport capacity. Simple graphs can be used to summarize the analysis. Some interesting, and perhaps unexpected, observations can be made.
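The "cell tax" itself is easy to quantify: an ATM cell carries 48 payload bytes out of 53, and with AAL5 encapsulation each packet additionally carries an 8-byte trailer plus padding to a whole number of cells. The sketch below computes that overhead for a few packet sizes; the AAL5 framing shown is the standard one, but the sample packet sizes are merely illustrative.

```python
# Cell tax of carrying IP packets over ATM with AAL5 encapsulation.
import math

CELL, PAYLOAD, AAL5_TRAILER = 53, 48, 8

def cells_needed(pkt_bytes):
    # AAL5 appends an 8-byte trailer, then pads the PDU into 48-byte cells.
    return math.ceil((pkt_bytes + AAL5_TRAILER) / PAYLOAD)

def efficiency(pkt_bytes):
    # Useful bytes divided by bytes actually sent on the wire.
    return pkt_bytes / (cells_needed(pkt_bytes) * CELL)

for size in (40, 552, 1500):
    print(size, cells_needed(size), round(efficiency(size), 3))
```

Small packets pay the heaviest tax (a 40-byte packet uses only about 75% of its single cell), which is one reason combining streams only pays off in some situations.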
In recent years, various real-time applications have been emerging in the Internet with the rapid increase of network bandwidth. A real-time application traditionally uses either UDP (User Datagram Protocol) or TCP (Transmission Control Protocol) as its transport layer protocol. However, either choice is insufficient for most real-time applications: UDP lacks a smooth rate control mechanism, while TCP can suffer significant transfer delay. In the literature, several transport-layer communication protocols for real-time applications have been proposed. Among these, we focus on TFRC (TCP-Friendly Rate Control). The steady-state performance of TCP and TFRC connections, such as throughput and fairness, has been thoroughly investigated by many researchers using simulation experiments. However, transient-state properties of TCP and TFRC connections, such as stability and responsiveness, have not been investigated. In this paper, we therefore analyze both steady-state and transient-state performance of TCP and TFRC connections using a control theoretic approach. We first model TFRC and TCP connections with different propagation delays and the active queue management mechanism of a RED (Random Early Detection) router as independent discrete-time systems. By combining these discrete-time systems, we analyze steady-state performance of TCP and TFRC connections such as throughput, transfer delay, and packet loss probability. We also analyze transient-state performance of TCP and TFRC connections using linearization of the discrete-time systems around their equilibrium points.
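TFRC's smooth rate control comes from driving the send rate with the TCP throughput equation of RFC 3448, which maps packet size, round-trip time and loss event rate to a TCP-fair rate. A direct transcription of that equation:

```python
# TCP throughput equation used by TFRC (RFC 3448, Section 3.1).
import math

def tfrc_rate(s, R, p, t_rto=None, b=1):
    """TCP-friendly send rate in bytes/s.
    s: segment size (bytes), R: round-trip time (s),
    p: loss event rate, b: packets acknowledged per ACK."""
    if t_rto is None:
        t_rto = 4 * R                      # RFC 3448 recommended default
    denom = (R * math.sqrt(2 * b * p / 3)
             + t_rto * (3 * math.sqrt(3 * b * p / 8)) * p
               * (1 + 32 * p * p))
    return s / denom

# e.g. 1460-byte segments, 100 ms RTT, 1% loss event rate
print(tfrc_rate(s=1460, R=0.1, p=0.01))
```

Because the rate responds to the smoothed loss event rate rather than to individual losses, TFRC avoids TCP's halving of the window, which is exactly the steady-state/transient trade-off the paper analyzes.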
The introduction of new ATM service categories increases the benefits of ATM, making the technology suitable for a virtually unlimited range of applications.
Connection Admission Control (CAC) is defined as the set of actions taken by the network during the call (virtual connection) set-up phase, or during the call re-negotiation phase, to determine whether a connection request can be accepted or rejected. Network resources (port bandwidth and buffer space) are reserved for the incoming connection at each switching element traversed, if required by the service category.
The major focus of this paper is call admission in the context of multi-service, multi-class ATM networks. Several strategies suggesting rules on bandwidth sharing are found in the literature.
This study investigates the Complete Sharing approach in particular. Two service categories are considered, namely Constant Bit Rate/Deterministic Bit Rate (CBR/DBR) and Variable Bit Rate/Statistical Bit Rate (VBR/SBR). Each service category is represented by a set of call classes corresponding to different bandwidth needs. We propose two algorithms to solve the underlying Markovian system: a product-form solution and a recursive solution. A performance study based on the latter algorithm is implemented. We analyze the results of this sharing strategy and establish the limits that must not be violated for its use to remain beneficial.
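For a complete-sharing link, one classical recursive solution of the underlying Markovian system is the Kaufman-Roberts recursion, shown below for illustration; the abstract does not name its recursion, so treat this as one plausible form. Class k needs b_k capacity units per call and offers a_k Erlangs; the recursion yields the link occupancy distribution and per-class blocking.

```python
# Kaufman-Roberts recursion for a complete-sharing link of C units.
# demands: list of (offered_load_erlangs, units_per_call) per class.

def kaufman_roberts(C, demands):
    q = [0.0] * (C + 1)
    q[0] = 1.0
    for n in range(1, C + 1):
        q[n] = sum(a * b * q[n - b] for a, b in demands if b <= n) / n
    total = sum(q)
    occ = [x / total for x in q]          # occupancy distribution
    # Class k is blocked when fewer than b_k units are free.
    return [sum(occ[C - b + 1:]) for _, b in demands]

# Two classes on a 10-unit link: CBR-like (1 unit) and VBR-like (4 units).
print(kaufman_roberts(10, [(3.0, 1), (1.0, 4)]))
```

With a single class needing one unit the recursion reduces to the Erlang B formula, which gives a convenient sanity check.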
Recently, several gateway-based congestion control mechanisms have been proposed to support the end-to-end congestion control mechanism of TCP (Transmission Control Protocol). In this paper, we focus on RED (Random Early Detection), which is a promising gateway-based congestion control mechanism. RED randomly drops an arriving packet with a probability determined by the average queue length (i.e., the averaged number of packets in the buffer). However, it is still unclear whether the packet marking function of RED is optimal. In this paper, we investigate what type of packet marking function, which determines the packet marking probability from the average queue length, is suitable from the viewpoint of both steady-state and transient-state performance. Presenting several numerical examples, we investigate the advantages and disadvantages of three packet marking functions: linear, concave, and convex. We show that, although the average queue length in the steady state becomes larger, use of a concave function improves the transient behavior of RED and also provides robustness against network status changes such as variation in the number of active TCP connections.
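The three shapes being compared can be sketched as follows. Each maps the average queue length between the RED thresholds min_th and max_th to a marking probability up to max_p; the square-root and square exponents used for the concave and convex curves are one natural choice, not necessarily the paper's exact functions, and the threshold values are illustrative.

```python
# Linear, concave and convex RED packet marking functions.

def marking_prob(q, min_th=5, max_th=15, max_p=0.1, shape="linear"):
    if q < min_th:
        return 0.0
    if q >= max_th:
        return 1.0                          # classic RED drops all here
    x = (q - min_th) / (max_th - min_th)    # position between thresholds
    if shape == "concave":
        x = x ** 0.5                        # rises fast near min_th
    elif shape == "convex":
        x = x ** 2                          # rises slowly near min_th
    return max_p * x

for s in ("linear", "concave", "convex"):
    print(s, round(marking_prob(10, shape=s), 4))
```

At the midpoint the concave curve already marks more aggressively than the linear one, which is the mechanism behind its better transient behavior at the cost of a larger steady-state queue.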
The TCP congestion control mechanism has been widely investigated and deployed on the Internet to prevent congestion collapse. We employ modern control theory to quantitatively specify the control performance of the TCP communication system. In this paper, we make use of a commonly used performance index called the Integral of the Square of the Error (ISE), a quantitative measure to gauge the performance of a control system. By applying the ISE performance index to the Proportional-plus-Integral controller based on Pole Placement (PI_PP controller) for active queue management (AQM) in IP routers, we can further tune the parameters of the controller to achieve optimal control that minimizes the control error. We have analyzed the dynamic model of TCP congestion control under this ISE, and used the OPNET simulation tool to verify the derived optimized parameters of the controllers.
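The ISE index itself is simple: integrate the squared deviation of the queue length from its target over time, discretized here as a sum over samples. The error traces below are hypothetical geometric decays, just to show how the index rewards a fast-settling controller.

```python
# Integral of the Squared Error (ISE) for a sampled error signal,
# e.g. queue length minus target in an AQM router.

def ise(errors, sample_time):
    return sum(e * e for e in errors) * sample_time

# A fast-settling error trace scores lower than a slowly decaying one
# (both traces are hypothetical).
fast = [8.0 * (0.5 ** k) for k in range(20)]
slow = [8.0 * (0.9 ** k) for k in range(20)]
print(ise(fast, 0.01), ise(slow, 0.01))
```

Tuning the PI_PP controller gains to minimize this quantity is what "optimal" means in the abstract: among pole placements meeting the stability constraints, pick the one with the smallest accumulated squared error.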
The exponential growth of traffic delivered to an individual customer both for business and personal needs puts tremendous pressure on the telecommunications networks. Because the development of the long-haul and metro networks has advanced rapidly and their capacity much exceeds demand, tremendous pressure now falls on the local networks to provide customers with access to the global telecom infrastructure. Building a broadband access network enabling fast delivery of high-volume traffic is the current task of network operators. A brief review of broadband access networks brings us to the conclusion that only wired optical networks can serve as an immediate and future solution to the "last-mile" problem. After discussing optical access network classification, we focus mainly on passive optical networks (PON) because PON is a major technology today. From the network standpoint, we discuss the principle of PON operation, architectures, topologies, protocols and standards, design issues, and network management and services. We also discuss the main problems with PON and the use of WDM technology. From the hardware standpoint, we consider both active and passive components. We analyze the structure and elements of these components, including their technical characteristics.
Current and future communications networks have to provide QoS guarantees for a rapidly growing number of telecommunication services, which can be ensured by an efficient MAC layer. Various communication technologies, such as cellular networks and PLC (PowerLine Communications) access networks, apply reservation MAC protocols, providing good network utilization and the realization of different QoS guarantees. In this investigation, we analyze the possibilities for providing QoS guarantees for various telecommunications services with a two-step reservation MAC protocol using a per-packet reservation principle, which is proposed for application in broadband PLC access networks. In particular, the performance of the reservation procedure is analyzed to provide the priority mechanisms necessary for the realization of various telecommunications services with the required QoS guarantees. Since telephony, realized by the packet voice service, has the strongest QoS requirements among the various telecommunications services, we analyze the possibility of its realization within the two-step MAC protocol. We conclude that packet voice can be implemented efficiently. Moreover, with the application of combined reservation domains for the various service classes, network performance could be further improved.
EPON is an emerging local subscriber access network that combines low-cost point-to-multipoint fiber infrastructure with Ethernet. Because several optical network units (ONUs) share a single medium in an EPON, it is important to control upstream traffic. An EPON allocates upstream bandwidth to ONUs using a request/permit mechanism. In this paper, we propose a new dynamic bandwidth allocation (DBA) algorithm supporting multiple priority queues and evaluate its performance. The simulation results confirm that our proposed DBA algorithm can reduce the average queue length in comparison with ETRI's DBA algorithm.
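The priority-queue aspect of such a DBA can be sketched in a few lines: each cycle, the OLT splits an ONU's upstream budget across that ONU's priority queues, highest priority first, using the queue sizes reported in the ONU's REPORT message. This is a generic strict-priority grant, not the paper's specific algorithm, and all byte counts are illustrative.

```python
# Strict-priority grant sizing for one ONU in one DBA cycle.

def grant(requests, budget):
    """requests: reported queue sizes in bytes, index 0 = highest
    priority. Returns the per-queue grants within the cycle budget."""
    grants = []
    for req in requests:
        g = min(req, budget)
        grants.append(g)
        budget -= g
    return grants

# Voice, video and best-effort queues report 500, 3000 and 9000 bytes;
# this cycle's upstream budget for the ONU is 10000 bytes.
print(grant([500, 3000, 9000], 10000))  # [500, 3000, 6500]
```

High-priority queues are always drained first, so their average queue length stays short; the best-effort queue absorbs whatever budget remains.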
Establishing multicast communications in MPLS-capable networks is an essential requirement for a wide-scale deployment of MPLS in the Internet. This paper outlines a framework for the setup of a MultiPoint-to-MultiPoint (MP2MP) Label Switched Path (LSP) for establishing uni-directional multicast shared trees. The presented framework is intended for multicast applications within a single autonomous domain and can be extended to cover inter-domain multicast sessions.
We propose the use of one (or more) control points in the network called Rendez-vous Points (RPs), in a manner similar to PIM-SM shared trees. Senders of the multicast session have to register with the RP and establish unicast LSPs with the RP. Receivers who join the session have to send their join requests to the RP, which acts as the root (and the sender) of a one-to-many tree by establishing a Point-to-MultiPoint (P2MP) LSP between the RP and the receivers. This architecture utilizes more than one RP to implement RP failure recovery, to provide load balancing within the domain, and to enable the extension of this framework to multiple domains by establishing LSPs between RPs in different domains. The architecture also has the advantage of using existing MPLS techniques and existing routing protocols, requiring only the addition of more management capabilities at the RPs. The paper explains the framework in detail and provides an example of how to set up the LSP on a given topology.
We also refer to some preliminary simulation results testing the scalability of the architecture in comparison with traditional multicast routing.
The proposed standard for the IEEE 802.3 Ethernet Passive Optical Network includes a random delayed transmission scheme for registration of new nodes. Although the scheme performs well at low loads, our simulation demonstrates that its performance degrades undesirably at higher loads. We propose a simple modification to the current scheme that increases its range of operation and remains compatible with the IEEE draft standard. We demonstrate the improvement in performance gained without any significant increase in registration delay.
Recently, there has been increased interest in the use of optical networks for disaster recovery of large computer systems by extending storage area networks (SANs) over hundreds of kilometers or more. These optical datacom networks, which incorporate wavelength division multiplexing (WDM), have several unique requirements. The purpose of this work has been to develop computer simulation tools for optical datacom networks. The models are capable of automatically designing a WDM network configuration based on minimal input; validating the design against any protocol-specific requirements; suggesting alternative configurations; and optimizing the design based on metrics including performance of the network (efficient use of bandwidth to support the attached computing devices), reliability (searching the proposed topology for single points of failure), scalability (based on user input of potential future upgrade paths), and cost of the associated networking equipment. The model incorporates typical computer performance data, which allows the prediction of system performance before the network is implemented. We present simulation results for a variety of MAN topologies, using currently available WDM networking equipment. These results have been validated by comparison with an enterprise optical networking testbed constructed for storage area networks.
Nationwide IP networks typically include nodes in major cities and the following elements: customer equipment, access routers, backbone routers, peering routers, access links connecting customer equipment to access routers and access routers to backbone routers, and backbone links interconnecting backbone routers. The part of this network consisting of backbone routers and their interconnecting links is referred to as the “backbone”. We develop a new approach for accurately computing the availability measure of IP networks by directly simulating each type of backbone outage event and its impact on traffic loss. We use this approach to quantify the availability improvement gained by introducing various technological changes in the network, such as IGP tuning, high-availability router architecture, MPLS-TE and Fast Reroute. A situation where operational backbone links do not have enough spare capacity to carry additional traffic during the outage time is referred to as bandwidth loss. We concentrate on one unidirectional backbone link and derive asymptotic approximations for the expected bandwidth loss in the framework of generalized Erlang and Engset models when the total number of resource units and the request arrival rates are proportionally large. Simulation results demonstrate the good accuracy of the approximations.
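The Erlang-type loss models mentioned above are built on the Erlang B blocking probability, which has a well-known numerically stable recurrence; a direct transcription (the example load and capacity are illustrative):

```python
# Erlang B blocking probability via the standard recurrence:
# B(0, a) = 1;  B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a)).

def erlang_b(servers, offered_load):
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Blocking on a link that fits 10 simultaneous flows, offered 8 Erlangs.
print(round(erlang_b(10, 8.0), 4))
```

The asymptotic regime the paper studies scales both the capacity and the offered load proportionally, where direct evaluation of such formulas becomes unwieldy and approximations of the expected bandwidth loss pay off.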
For free space optics to support multi-user communications, multi-access schemes need to be incorporated. Conventional asynchronous optical code division multiple access (OCDMA) using optical orthogonal codes (OOCs) cannot deliver the required performance due to code interference, which sets a lower bound on the achievable bit error rate. Thus we introduce the use of complementary Walsh-Hadamard codes as a promising solution to support synchronous OCDMA, capable of achieving the required performance due to inherent interference cancellation, in addition to supporting a higher number of users while maintaining short code lengths and alleviating some of the stringent requirements on the receiver hardware.
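The interference cancellation comes from the mutual orthogonality of Walsh-Hadamard codewords, which the Sylvester construction makes easy to verify. The sketch below shows only the orthogonality; the "complementary" transmission scheme of the abstract (sending a code together with its complement) is not modeled here.

```python
# Sylvester construction of a Walsh-Hadamard matrix: rows are mutually
# orthogonal codewords, the basis of synchronous OCDMA's interference
# cancellation.

def hadamard(order):
    h = [[1]]
    while len(h) < order:                 # double the order each step
        h = ([row + row for row in h]
             + [row + [-x for x in row] for row in h])
    return h

H = hadamard(4)
# Any two distinct rows have zero inner product.
dots = [sum(a * b for a, b in zip(H[i], H[j]))
        for i in range(4) for j in range(4) if i != j]
print(H)
print(dots)  # all zeros
```

In a synchronous system, correlating the received sum of codewords against a user's own row nulls every other user's contribution exactly, removing the error floor that OOC cross-correlation imposes.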
New approaches are presented to facilitate dynamic radio bandwidth management for mobile communication systems. The aim is to achieve an overall high level of QoS for both handoff calls and new calls while maximizing the utilization of wireless network resources, i.e., the revenue earned by the operator. The simultaneous satisfaction of these two conflicting interests, under varying mobility and network traffic conditions, is difficult. However, a balanced operation can be obtained by applying two novel approaches in system management. First, a priori information about possible handoffs, in the form of cell transition probabilities, can be provided by the mobile, based on data collected by the mobile itself. This information is used to make handoff reservation requests in neighboring cells. Second, the radio resource reservation and new call admission are controlled simultaneously, dynamically adjusting both the amount of reserved channels and the number of new calls admitted. A theoretical analysis and a simulation have been used to study these approaches, and it has been demonstrated that they perform better than other approaches reported in the literature.
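The first idea can be sketched as a simple filter: the mobile reports its learned cell transition probabilities, and reservation requests are placed only in neighbors whose probability clears a threshold. The probabilities, cell names and threshold below are illustrative assumptions, not the paper's policy.

```python
# Mobility-assisted handoff reservation: request reservations only in
# the neighboring cells the mobile is likely to hand off to.

def reservation_targets(transition_probs, threshold=0.2):
    """transition_probs: {neighbor_cell: estimated handoff probability}."""
    return sorted(cell for cell, p in transition_probs.items()
                  if p >= threshold)

# Hypothetical probabilities learned by the mobile from its own history.
history = {"cell_A": 0.55, "cell_B": 0.30, "cell_C": 0.10, "cell_D": 0.05}
print(reservation_targets(history))  # ['cell_A', 'cell_B']
```

Reserving only in the two likely targets, instead of all four neighbors, is what lets the scheme cut handoff dropping without starving new-call admission.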
Many analytic and numerical techniques have appeared in the literature to evaluate performance measures of queueing systems with heavy-tail distributed interarrival times. In this paper, we take advantage of having a closed-form expression for the Laplace transform of Pareto probability distributions in order to have a better setting for evaluating the different performance measures of Pareto queueing systems. We consider Pareto distributed interarrival times as a particular case of queueing systems with general and independent arrivals and exponentially distributed service times, i.e., GI/M/... queueing systems.
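The role of the Laplace transform is concrete: in a GI/M/1 queue the key quantity is the root sigma of sigma = A*(mu(1 - sigma)) in (0, 1), where A*(s) is the Laplace transform of the interarrival distribution, and having A* in closed form (as the paper does for Pareto) reduces this to a one-line fixed-point iteration. The sketch below verifies the machinery with exponential interarrivals, where A*(s) = lam/(lam + s) and sigma must equal the utilization rho; the Pareto transform itself is not reproduced here.

```python
# GI/M/1: find sigma solving sigma = A*(mu*(1 - sigma)), 0 < sigma < 1,
# by fixed-point iteration on the interarrival Laplace transform A*.

def gim1_sigma(lt, mu, iters=200):
    sigma = 0.5
    for _ in range(iters):
        sigma = lt(mu * (1.0 - sigma))
    return sigma

lam, mu = 0.7, 1.0
sigma = gim1_sigma(lambda s: lam / (lam + s), mu)
print(round(sigma, 6))  # = lam/mu = 0.7 in the M/M/1 special case
```

The same call with the closed-form Pareto transform in place of the exponential one yields the sigma from which waiting-time and queue-length measures of the heavy-tailed system follow.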
Multi-Protocol Label Switching (MPLS) is suitable for implementing Traffic Engineering (TE) to achieve two goals: Quality of Service (QoS) provisioning and the efficient use of network resources. In fact, MPLS allows several detour paths to be (pre-)established for some source-destination pair as well as its primary path of minimum hops. Thus, we focus on a two-phase path management scheme using these two kinds of paths. In the first phase, each primary path is allocated to a flow on a specific source-destination pair if the path is not congested, i.e., if its utilization is less than some predetermined threshold; otherwise, as the second phase, one of the detour paths is allocated randomly if it is available. Therefore, in this paper, we analytically evaluate this path management scheme by extending the M/M/c/c queueing system and we investigate the impact of a threshold on the flow-blocking probability. Through some numerical results, we discuss the adequacy of the path management scheme for MPLS-TE.
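The two-phase selection rule reduces to a few lines of logic: admit the flow on the primary path while its utilization stays below the threshold, otherwise pick one of the available detour paths at random, and block the flow if none is free. The data structures and numbers below are illustrative.

```python
# Two-phase path selection: primary if below the congestion threshold,
# otherwise a random available detour, otherwise block.
import random

def select_path(primary_util, threshold, detours):
    """detours: {path_name: is_busy}. Returns the chosen path or None."""
    if primary_util < threshold:
        return "primary"
    free = [d for d, busy in detours.items() if not busy]
    return random.choice(free) if free else None   # None => flow blocked

random.seed(0)
print(select_path(0.45, 0.8, {"detour1": False, "detour2": True}))
print(select_path(0.92, 0.8, {"detour1": False, "detour2": True}))
print(select_path(0.92, 0.8, {"detour1": True, "detour2": True}))
```

The threshold is the design knob the paper's M/M/c/c-based analysis evaluates: lower thresholds spill traffic onto detours earlier, trading primary-path congestion against detour occupancy and overall flow-blocking probability.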
We examine the cause of the tail of the distribution of the number of packet and byte arrivals at backbone routers. One possible cause is that sometimes there are a large number of active connections resulting in a large number of arrivals in a short period of time. Another possibility is that the tail is due to one or a few very fast connections. By examining time-stamped packet headers from several backbone links, we find that the tail is neither strictly from many users nor strictly from fast connections. Rather, at some times and some time-scales, we find that the tail (the skewness of the distribution in particular) is strongly influenced by the tail of the distribution of the number of active connections, while at other times, the tail of the number of arrivals is due to the tail of the distribution of the connection bit-rates.
To expose network characteristics by active/passive measurements, measuring timing quantities such as one-way delay, one-way queuing delay, and inter-packet time is essential, and is conducted by time-stamping packets passing through an observation point. However, emerging high-speed networks require very high precision of time-stamping, far beyond the precision of conventional software-based time-stamping systems such as 'tcpdump'. For example, the inter-packet time of two consecutive 64-byte packets on a gigabit link can be less than 0.001 msec. In this paper, to demonstrate the usefulness and strong necessity of precise time-stamping on high-speed links, experiments of network measurements over a nation-wide IPv6 testbed in Japan have been performed, using a hardware-based time-stamping system that can synchronize to GPS with a high resolution of 0.0001 msec and within a small error of 0.0003 msec. In our experiments, several interesting results are seen, e.g., i) the distribution of one-way queuing delay exhibits a considerable difference depending on the size and the type (UDP/ICMP) of packets; ii) the minimal one-way delays for various sizes of UDP/ICMP packets give an accurate estimate of the transmission delay and the propagation delay; iii) the correlation between inter-packet times at the sender and the receiver sides in a sequence of TCP ACK packets clearly shows the degree of ACK compression; iv) the inter-packet time in a UDP stream generated by a DV streaming application shows three dominant sending rates and a very rare peak rate, which might provide crucial information to bandwidth dimensioning; all of which would indicate the usefulness of precise time-stamping.
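The precision claim is easy to check from first principles: the serialization time of a minimum-size Ethernet frame on a 1 Gbit/s link, counting the standard 8-byte preamble and 12-byte inter-frame gap, is well under a microsecond.

```python
# Serialization time of a frame on the wire, in milliseconds.

def serialization_time_ms(frame_bytes, link_bps, overhead_bytes=20):
    # 8 B preamble + 12 B inter-frame gap accompany every Ethernet frame.
    return (frame_bytes + overhead_bytes) * 8 / link_bps * 1e3

t = serialization_time_ms(64, 1e9)
print(t)  # 0.000672 ms: under the 0.001 msec figure cited in the text
```

So two back-to-back minimum-size frames arrive about 0.0007 ms apart, which software timestamping cannot resolve but the 0.0001 msec hardware system can.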
Multiclass IP networks open new dimensions and challenges for active monitoring, as efficient in-band probing strategies are required to sense the performance of each class without causing noticeable side-effects on real traffic. In our study, we provide new insights on how to perform active monitoring efficiently in these networks, suggesting the use of light, multipurpose probing streams able to simultaneously capture the behavior of multiple QoS metrics of each class. Considering the one-way delay, jitter and loss metrics, we explore different spatial-temporal characteristics of probing, focusing on finding patterns adjusted to the measurement requirements of each class. We demonstrate that commonly used probing streams fail to capture these metrics simultaneously, and we propose novel colored probing patterns able to increase the efficiency of multipurpose active monitoring. As a test environment, we consider a diffserv domain where admission control resorts to feedback from edge-to-edge active monitoring to dynamically control hard real-time, soft real-time and elastic traffic classes. Comparing graphically and statistically the probing and passive measurement outcomes of each class, the obtained results show that, despite the difficulty of matching the scale and shape of multiple metrics, a single properly colored probing stream can closely and simultaneously capture the behavior of one-way delay, jitter and loss at low in-band probing rates.
Applying multi-protocol label switching (MPLS) techniques to IP-based backbones for traffic engineering has proven advantageous. Obtaining the volume of load on each internal link of the network is crucial for applying traffic engineering. Although per-link collection is possible, for instance with the traditional SNMP approach, it may impose a heavy processing load and sharply degrade the throughput of the core routers. Monitoring only at the edge of the network and mapping the measurements onto the core therefore provides a good alternative. In this paper, we explore a scheme for traffic mapping with edge-based measurements in an MPLS network. The idea is that the volume of traffic on each internal link in the domain is inferred from measurements available only at ingress nodes. We apply path-based measurements at ingress nodes without enabling measurements in the core of the network. We propose a method that infers the path from the ingress to the egress node using the label distribution protocol, without collecting routing data from core routers. Based on flow theory and queuing theory, we prove that our approach is effective and present the algorithm for traffic mapping. We also show simulation results that indicate the potential of our approach.
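Once the ingress-to-egress paths are known, the mapping step reduces to a linear combination: each internal link's load is the sum of the rates of the LSPs whose inferred path traverses it. A sketch under assumed data shapes (the dictionaries and IDs are illustrative):

```python
from collections import defaultdict

# path_rates: {lsp_id: rate measured at the ingress node}
# paths: {lsp_id: [node, node, ...]} as inferred via the label distribution protocol
def map_link_loads(path_rates, paths):
    load = defaultdict(float)
    for lsp, rate in path_rates.items():
        hops = paths[lsp]
        for u, v in zip(hops, hops[1:]):
            load[(u, v)] += rate  # every traversed link carries this LSP's rate
    return dict(load)
```

For example, two LSPs of 5 and 3 units sharing a core link would yield a load of 8 on that link, with no core-router instrumentation required.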
In this paper, we investigate the application of a head-dropping policy as a partial solution to the queue oscillation problem encountered by RED and its variants. With this method, instead of the tail dropping currently used by RED and many other AQM schemes, the TCP source can be informed of congestion at the bottleneck router earlier, specifically by one queuing delay. We have compared DH-RED (drop-head RED) and DH-BLUE (drop-head BLUE) with the current RED and BLUE in a variety of situations and found that performance metrics such as queue size stability and packet drop rate can be greatly improved.
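A minimal sketch of the head-drop decision (the drop probability is standard RED; the function and parameter names are illustrative, not the paper's implementation):

```python
import random
from collections import deque

def red_drop_prob(avg, min_th, max_th, max_p):
    # Standard RED drop probability as a function of the average queue size.
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

def enqueue_dh_red(queue, pkt, avg, min_th, max_th, max_p):
    # Drop-head RED: when RED signals a drop, discard the packet at the
    # HEAD of the queue and still admit the arrival, so the sender learns
    # of the loss roughly one queuing delay earlier than with tail drop.
    if queue and random.random() < red_drop_prob(avg, min_th, max_th, max_p):
        queue.popleft()
    queue.append(pkt)
```

The only change from tail-drop RED is which packet is discarded; the earlier congestion signal is what dampens the queue oscillation.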
Wireless Free Space Optics (FSO) is one of the most promising candidates for future broadband communications, offering transmission rates far beyond those possible with RF technology. However, the free-space wireless optical channel presents far more challenging conditions for signals than typical RF channels, making system availability and throughput critical issues. A novel FSO system design based on the integration of ultra-short-pulse lasers and advanced signal processing techniques is presented. Simulations indicate that the novel design promises considerably improved availability and throughput compared to traditional FSO systems.
We study the dynamic allocation of bandwidth for video traffic in wireless networks. Our approach consists of two stages. In the first stage, we apply FARIMA (Fractional Autoregressive Integrated Moving Average) models to forecast traffic based on online traffic measurements. In the second stage, we use the forecast results to allocate bandwidth dynamically. We evaluate our FARIMA-based scheme by comparing it with ARIMA-based and static schemes in terms of packet loss probability, queue length, and bandwidth utilization. Through experiments with real traffic traces, we demonstrate that our approach works well for highly fluctuating traffic in WiFi.
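The two-stage structure can be sketched as follows, with a plain moving-average predictor standing in for the fitted FARIMA model (the window and margin parameters are illustrative assumptions):

```python
# Stage 1: forecast the next interval's traffic from online measurements.
# A moving average is a placeholder here; the paper fits a FARIMA model,
# which additionally captures the long-range dependence of video traffic.
def forecast_next(history, window=8):
    recent = history[-window:]
    return sum(recent) / len(recent)

# Stage 2: allocate the forecast plus a safety margin as the bandwidth
# for the next interval, instead of a fixed static reservation.
def allocate_bandwidth(history, margin=0.2):
    return forecast_next(history) * (1.0 + margin)
```

The margin trades packet loss probability against bandwidth utilization, which is exactly the comparison axis used in the evaluation.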
An understanding of traffic characteristics and accurate traffic models are necessary to improve the capability of wireless networks. In this paper we analyze the non-linear dynamical behavior of several real traffic traces collected from wireless testbeds. We find strong evidence that wireless traffic is chaotic: the traces' correlation dimension, largest Lyapunov exponent, and principal component analysis, which are typical indicators of chaos, all point in this direction. This provides a sound theoretical basis for analyzing and modeling wireless traffic using chaos theory.
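For readers unfamiliar with the key indicator: the largest Lyapunov exponent measures the average exponential divergence of nearby trajectories, and a positive value signals chaos. A toy computation on a system whose chaotic regime is known exactly (the logistic map, not the paper's wireless traces, where the exponent must be estimated from the embedded time series):

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
# the orbit average of log|f'(x)| with f'(x) = r*(1 - 2x).
# For r = 4 the exact value is ln 2 ~ 0.693 > 0, i.e., chaos.
def lyapunov_logistic(r=4.0, x0=0.3, n=10000, burn=100):
    x, acc = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n
```

A positive estimate of this kind, computed from the measured traces, is one of the pieces of evidence the paper reports.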
A single-rate multicast congestion control scheme for streaming media applications, called Binomial congestion control At Receivers for Multicast (BARM), is proposed. Combining aspects of window-based and rate-based congestion control, the protocol shifts most of the congestion control mechanisms to the multicast receivers. The main features of BARM are as follows. (1) The protocol adopts a binomial algorithm (k=l=0.5, α=0.28, β=0.2 in our implementation) to adjust the congestion window, which not only provides TCP-friendliness but also decreases abrupt rate fluctuations, making it suitable for real-time streaming media multicast applications. (2) The binomial algorithm is executed at the receivers instead of at the sender; to this end, a congestion window is maintained and updated separately by each receiver. Hence the protocol not only scales better but also reduces the burden on the sender significantly, and is well suited to the client/server model. (3) The congestion window is converted to an expected receiving rate, which is then fed back to the sender when permitted. Compared to a window feedback scheme, rate feedback is simpler and increases scalability. (4) A representative approach is used to suppress feedback implosion. Simulation results indicate that BARM shows good fairness, TCP-friendliness, smoothness, scalability, and acceptable responsiveness.
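The binomial window rule referred to in (1) can be written compactly, using the abstract's parameters (a sketch of the update alone, run at each receiver, not the full protocol):

```python
# Binomial congestion control update: per loss-free RTT the window grows
# by alpha / w**k; on congestion it shrinks by beta * w**l. With
# k = l = 0.5, alpha = 0.28, beta = 0.2 (the abstract's parameters), both
# the increase and the decrease scale with sqrt(w), giving gentler rate
# swings than TCP's AIMD, which is the special case k = 0, l = 1.
def binomial_update(w, congested, k=0.5, l=0.5, alpha=0.28, beta=0.2):
    if congested:
        w -= beta * (w ** l)
    else:
        w += alpha / (w ** k)
    return max(w, 1.0)
```

Each receiver's resulting window is then converted to an expected receiving rate for the feedback in (3).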
Provisioning Protection and Restoration (P&R) capability is a necessity in Generalized Multi-Protocol Label Switching (GMPLS) networks. Label Switched Path (LSP) segment-based recovery is an important P&R type. In this paper, three novel overlapped LSP segment selection schemes are proposed. The near-optimal set of overlapped LSP segments can be obtained by the proposed schemes, which have polynomial time complexity. Differences among the proposed schemes are discussed and compared. Performance comparisons between LSP segment recovery with the proposed schemes and end-to-end LSP recovery are conducted. The results show that the proposed schemes perform very well in LSP segment recovery. With the proposed schemes, the complexity of LSP segment recovery design is decreased considerably. Moreover, the proposed schemes provide flexible choices for GMPLS-based P&R techniques in terms of different failure recovery time and resource utilization constraints.
Multi-granularity switching is an attractive technology that can both reduce network cost and realize traffic grooming. Since tunnels of many different sizes are available in a multi-granularity optical network, resource assignment becomes very important for lightpath establishment. In this paper, we focus on waveband switching (WBS) networks. First, the node structure and multi-granularity connections are analyzed, and the term RRA (routing and resource assignment) is introduced to formulate the problem of lightpath establishment. A resource assignment policy for RRA is then proposed based on tunnel granularity. As part of resource assignment in a WBS network, the problem of waveband path assignment is discussed and several new dynamic algorithms are proposed: SBP (shortest waveband path), FBP (first-fit waveband path), LBP (longest waveband path), and LBC (longest waveband combination). In our numerical simulations, factors that affect network performance are analyzed, such as the waveband assignment algorithm, switching ratios, and waveband partition. Some useful conclusions are drawn, which are instructive for the design and implementation of multi-granularity networks.
One of the most immediate benefits of MPLS is the ability to perform traffic engineering. Traditionally, the only mechanism for redirecting traffic has been to change the link metrics in the Interior Gateway Protocol (IGP, responsible for routing within a site), but this can potentially change the paths of all packets traversing that link. MPLS offers finer granularity because it does not operate on a per-link basis, so individual LSPs can be shifted from congested paths to alternate paths. This also simplifies operation, since the network operator can apply global optimization algorithms that map the traffic demand onto physical links, which cannot be done with local optimization. Constraint-based routing (or its variant, Explicit Routing, ER) enables traffic engineering. What is important, however, is that ER allows distributed routing of the same type as the routing and wavelength assignment in the optical adaptation layer. Furthermore, constraint-based routing uses topology/resource updates to perform distributed LSP route computations, which can be used to deploy distributed shortest-path lightpath routing. A detailed comparison between distributed path routing strategies based on traffic parameters and fixed-path routing schemes is presented in this paper, and it is shown that a distributed path routing scheme based on a traffic correlation parameter is superior to fixed-path routing schemes.
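Constraint-based routing of this kind is commonly realized as CSPF: prune links that fail the constraint (here, residual bandwidth below the LSP's demand), then run a shortest-path computation on what remains. A minimal sketch (the graph representation is an assumption):

```python
import heapq

# graph: {u: [(v, cost, residual_bw), ...]} — adjacency with link cost
# and unreserved bandwidth. Returns the cheapest feasible path or None.
def cspf(graph, src, dst, demand):
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, bw in graph.get(u, []):
            if bw < demand:          # constraint: prune infeasible links
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

The same pruned-Dijkstra pattern carries over to distributed lightpath routing when "residual bandwidth" is replaced by wavelength availability.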
In next-generation IP over WDM networks, lightpaths are set up and torn down dynamically. Traditionally, OSPF in the IP layer and OSPF-TE in the optical layer disseminate routing information independently. This two-layer routing mechanism is complex, and its O&M cost is high. Furthermore, in a dynamic environment, both OSPF-TE and OSPF incur very heavy control overhead when lightpaths change frequently. In this paper, an integrated routing protocol is proposed: the link state information of both the IP layer and the optical layer is disseminated simultaneously in the same routing protocol messages. The proposed protocol also advertises wavelength availability information when necessary, in order to reduce the blocking probability of the routing and wavelength assignment (RWA) algorithm. The integrated protocol is very simple, its control overhead can be reduced by a factor of several to roughly ten, and RWA performance is also improved. Hence, the performance of IP over WDM networks improves significantly.
The main purpose of this paper is to propose a novel bandwidth allocation scheme for facilitating quality-of-service (QoS) routing in mobile ad hoc networks (MANETs). In a MANET using time division multiple access (TDMA), each node communicates with its neighbors in time slots on a shared channel. In general, finding a route with the maximum end-to-end bandwidth subject to the constraint of collision-free transmission is an NP-complete problem. This paper proposes a sub-optimal solution based on a centrally controlled bandwidth allocation scheme that assigns the available time slots to each intermediate link. The advantage of the proposed scheme is that the resource utilization of the MANET is maximized and end-to-end QoS is guaranteed during the route establishment period. Performance analyses show that, when the proposed scheme is used with AODV to perform QoS routing, it achieves about 25% higher throughput than its best-effort counterpart when nodes move at 5 m/s.
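The flavor of such slot assignment can be conveyed with a greedy pass along the route under the usual TDMA constraint that a slot is unusable on a hop if any hop within two hops already uses it (a node cannot transmit and receive in the same slot, and neighboring transmissions collide at common receivers). This is an illustrative heuristic, not the paper's centrally controlled scheme:

```python
# Greedily give each of the n_hops links `slots_per_hop` slots out of
# n_slots per TDMA frame; a slot is reusable on hop i only if no hop
# within interference distance 2 already uses it.
def assign_slots(n_hops, n_slots, slots_per_hop):
    used = [set() for _ in range(n_hops)]
    for i in range(n_hops):
        for s in range(n_slots):
            if len(used[i]) == slots_per_hop:
                break
            if all(s not in used[j] for j in range(max(0, i - 2), i)):
                used[i].add(s)
        if len(used[i]) < slots_per_hop:
            return None  # route cannot carry the requested bandwidth
    return used
```

Because of the 2-hop reuse constraint, a 3-hop route needs three distinct slots per unit of end-to-end bandwidth, which is why maximizing end-to-end bandwidth while keeping assignments collision-free is hard in general.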
This paper studies network resource monitoring issues in MPLS-enabled IP networks.
First, it analyzes MPLS use cases, relevant standards, and various MPLS management proposals. Some basic concepts relevant to MPLS network resource management are clarified, namely 1) the Label Switched Path (LSP) route selection method, 2) the LSP setup method, 3) the LSP computation method, and 4) the LSP deployment mode. It is shown that, among the alternatives, the combination of explicitly routed and control-plane-signaled LSPs makes the resource management task the most complicated and challenging.
Next, network topology discovery is found to differ between MPLS networks and conventional IP networks. The existing IP topology discovery method cannot provide an accurate and up-to-date topology database, which is critical for dynamic resource management. A new method named LSP feedback, which piggybacks MPLS topology information on the reverse messages of the label distribution protocol, is introduced. The LSP feedback method offers high accuracy, up-to-date information, and low overhead, but it still has drawbacks, such as the lack of global topology information for the routing domain. This paper proposes two approaches to enhance the LSP feedback method.
Finally, a bandwidth limit calculation algorithm is described in detail.