One of the functions of network engineering is to allocate resources optimally to forecasted demand. We generalize the mechanism by incorporating price-demand relationships into the problem formulation, and optimizing pricing and routing jointly to maximize total revenue.
We consider a network, with fixed topology and link bandwidths, that offers multiple services, such as voice and data, each having characteristic price elasticity of demand, and quality of service and policy requirements on routing.
Prices, which depend on service type and origin-destination pair, determine demands,
which are routed, subject to their constraints, so as to maximize revenue.
We study the basic properties of the optimal solution and prove that link shadow costs provide the basis for both optimal prices and optimal routing policies. We investigate the impact of input parameters, such as link capacities and price elasticities, on prices, demand growth, and routing policies. Asymptotic analyses, in which network bandwidth is scaled to grow, give results that are noteworthy for their qualitative insights. Several numerical examples illustrate the analyses.
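The role of link shadow costs is easiest to see in a toy single-link model with linear demand. The Python sketch below (the parameters a, b, and the capacity are invented) only illustrates how a binding capacity constraint produces a positive shadow cost that shapes the optimal price; it is not the paper's network-wide, multi-service formulation.

    def optimal_price(a, b, capacity):
        """Toy single-link pricing: demand is d(p) = a - b*p, so revenue
        p*d(p) peaks at p = a/(2b) with demand a/2. If that demand exceeds
        the link capacity, the price rises to clear the link and the link's
        shadow cost (marginal revenue of added capacity) becomes positive."""
        if a / 2.0 <= capacity:               # capacity constraint is slack
            return a / (2.0 * b), 0.0         # shadow cost is zero
        price = (a - capacity) / b            # demand exactly fills the link
        shadow = (a - 2.0 * capacity) / b     # d(revenue)/d(capacity)
        return price, shadow

    print(optimal_price(a=10.0, b=1.0, capacity=3.0))  # (7.0, 4.0): binding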
Several flow control algorithms have been proposed for attaining
maxmin fair rates. All of these strategies use flow-specific
state at all nodes in the network, and are hence unsuitable for
deployment in the current Internet. We propose a new approach
which attains maxmin fairness in unicast networks while
maintaining per flow states at the edge nodes only. Next, we
consider multirate, multicast networks and argue that any flow
control algorithm must use flow-specific state at the forking
points of the multicast tree, because the packet forwarding rates
must differ across the branches of a forking point, and the
session-specific forwarding rates must therefore be stored there.
Finally, we present a maxmin
fair bandwidth allocation policy for multirate, multicast
networks, which uses per flow states only at the forking nodes,
and no per flow state at the intermediate nodes.
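For reference, maxmin-fair rates themselves can be computed centrally by the classic progressive-filling procedure. The Python sketch below (topology and capacities are invented) is this baseline computation, not the proposed edge-state or forking-node algorithms.

    def maxmin_rates(capacity, flows):
        """Progressive filling: repeatedly find the bottleneck link, fix all
        of its still-active flows at the equal share, and recurse.
        capacity: {link: capacity}; flows: {flow: set of links traversed}."""
        rates, residual, active = {}, dict(capacity), dict(flows)
        while active:
            shares = {}                      # equal share each link can offer
            for link, cap in residual.items():
                n = sum(1 for path in active.values() if link in path)
                if n:
                    shares[link] = cap / n
            bottleneck = min(shares, key=shares.get)
            share = shares[bottleneck]
            for flow in [f for f, p in active.items() if bottleneck in p]:
                rates[flow] = share
                for link in flows[flow]:
                    residual[link] -= share
                del active[flow]
        return rates

    print(maxmin_rates({'L1': 10, 'L2': 4},
                       {'f1': {'L1'}, 'f2': {'L1', 'L2'}, 'f3': {'L2'}}))
    # {'f2': 2.0, 'f3': 2.0, 'f1': 8.0}: L2 bottlenecks f2 and f3 at 2 each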
For the past decade, a lot of Internet research has been devoted to providing different levels of service to applications. Initial proposals for service differentiation provided strong service guarantees, with strict bounds on delays, loss rates, and throughput, but required high overhead in terms of computational complexity and memory, both of which raise scalability concerns. Recently, interest has shifted to service architectures with low overhead. However, these newer service architectures provide only weak service guarantees, which do not always address the needs of applications. In this paper, we describe a service architecture that supports strong service guarantees, can be implemented with low computational complexity, and requires only a small amount of state information to be maintained. A key feature of the proposed service architecture is that it addresses scheduling and buffer management in a single algorithm. The presented architecture offers no solution for controlling the amount of traffic that enters the network; instead, we plan to exploit the feedback mechanisms of TCP congestion control to regulate the traffic entering the network.
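To make the idea of a single scheduling-plus-buffer-management algorithm concrete, here is a deliberately simplified Python toy. The earliest-deadline/drop-most-slack rule is our hypothetical illustration, not the algorithm of the paper: one data structure makes both the transmission decision and the drop decision.

    import heapq, itertools

    class JointQueue:
        """Toy joint scheduler/buffer manager: packets carry deadlines
        derived from their service class; we always transmit the packet
        with the earliest deadline, and on buffer overflow we drop the
        packet with the most slack, since it can best afford the loss."""

        def __init__(self, limit):
            self.limit, self.heap = limit, []  # min-heap ordered by deadline
            self.seq = itertools.count()       # tie-breaker for equal deadlines

        def arrive(self, deadline, pkt):
            heapq.heappush(self.heap, (deadline, next(self.seq), pkt))
            if len(self.heap) > self.limit:    # overflow: joint drop decision
                victim = max(range(len(self.heap)),
                             key=lambda i: self.heap[i][0])
                dropped = self.heap[victim][2]
                self.heap[victim] = self.heap[-1]
                self.heap.pop()
                heapq.heapify(self.heap)
                return dropped
            return None

        def transmit(self):
            return heapq.heappop(self.heap)[2] if self.heap else None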
In this paper, TEAM, an automated
manager for DiffServ/MPLS networks is
introduced. TEAM is composed of a
Traffic Engineering Tool (TET), which
deals with resource and route management
issues, a Measurement and Performance
Evaluation Tool (MPET), which measures
important parameters in the network and
inputs them to TET, and a Simulation
Tool (ST), which may be used by TET to
consolidate its decisions. Details of
the issues addressed by each tool are
described. All proposed algorithms were
individually evaluated through
simulation and experiments on a physical
testbed. The design and implementation
details of each tool are discussed.
The Internet consists of about 13,000 Autonomous Systems (AS's) that
exchange routing information using the Border Gateway Protocol (BGP).
The operators of each AS must have control over the flow of traffic
through their network and between neighboring AS's. However, BGP is a
complicated, policy-based protocol that does not include any direct
support for traffic engineering. In previous work, we have
demonstrated that network operators can adapt the flow of traffic in
an efficient and predictable fashion through careful adjustments to
the BGP policies running on their edge routers.
Nevertheless, many details of the BGP protocol and decision process
make predicting the effects of these policy changes difficult. In
this paper, we describe a tool that predicts traffic flow at network
exit points based on the network topology, the import policy
associated with each BGP session, and the routing advertisements
received from neighboring AS's. We present a linear-time algorithm
that computes a network-wide view of the best BGP routes for each
destination prefix given a static snapshot of the network state,
without simulating the complex details of BGP message passing. We
describe how to construct this snapshot using the BGP routing tables
and router configuration files available from operational routers. We
verify the accuracy of our algorithm by applying our tool to routing
and configuration data from AT&T's commercial IP network. Our route
prediction techniques help support the operation of large IP backbone
networks, where interdomain routing is an important aspect of traffic engineering.
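At the core of such a tool is a model of how a router selects its best route. The Python sketch below encodes the standard, simplified BGP decision steps (local preference, AS-path length, origin, MED, eBGP over iBGP, IGP cost to the next hop); it is a generic illustration, not the paper's linear-time network-wide algorithm.

    from dataclasses import dataclass

    @dataclass
    class Route:
        prefix: str
        local_pref: int
        as_path: list
        origin: int        # 0 = IGP < 1 = EGP < 2 = INCOMPLETE
        med: int
        ebgp: bool         # learned over an eBGP session?
        igp_cost: int      # IGP distance to the BGP next hop

    def decision_key(r):
        """Rank per the simplified BGP decision process: highest local-pref,
        shortest AS path, lowest origin, lowest MED, eBGP over iBGP, then
        lowest IGP cost to the egress point."""
        return (-r.local_pref, len(r.as_path), r.origin, r.med,
                not r.ebgp, r.igp_cost)

    def best_route(routes):
        return min(routes, key=decision_key)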
MPLS (Multi-Protocol Label Switching) has recently emerged to facilitate
the engineering of network traffic. This can be achieved by directing packet flows over paths that satisfy multiple requirements. MPLS has been regarded as an enhancement to traditional IP routing, which has the following problems: (1) all packets with the same IP destination address have to follow the same path through the network; and (2) paths have often been computed based on static and single link metrics. These problems may cause traffic concentration, and thus degradation in quality of service. In this paper, we investigate by simulations a range of routing solutions and examine the tradeoff between scalability and performance. At one extreme, IP packet routing using dynamic link metrics provides a stateless solution but may lead to routing oscillations. At the other extreme, we consider a recently proposed Profile-based Routing (PBR), which uses knowledge of potential ingress-egress pairs as well as the traffic profile among them. Minimum Interference Routing (MIRA) is another recently proposed MPLS-based scheme, which only exploits knowledge of potential ingress-egress pairs but not their traffic profile. MIRA and the more
conventional widest-shortest path (WSP) routing represent alternative MPLS-based approaches on the spectrum of routing solutions. We compare these solutions in terms of utility, bandwidth acceptance ratio as well as their scalability
(routing state and computational overhead) and load balancing capability.
While WSP is the simplest of the per-flow algorithms we consider, its performance is close to that of dynamic per-packet routing, without the potential instabilities of dynamic routing.
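Widest-shortest path is simple to state: among all minimum-hop paths to the destination, choose one whose bottleneck residual bandwidth is largest. A minimal Python sketch (in practice the graph and bandwidths would come from link-state information):

    from collections import deque

    def widest_shortest_path(adj, src, dst):
        """adj: {u: {v: residual_bw}}. Return (bottleneck, path) for the
        widest path among the minimum-hop paths from src to dst."""
        dist, order, q = {src: 0}, [src], deque([src])
        while q:                                  # BFS for hop distances
            u = q.popleft()
            for v in adj.get(u, {}):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    order.append(v)
                    q.append(v)
        if dst not in dist:
            return None
        width, parent = {src: float('inf')}, {src: None}
        for u in order:                           # widest path over the
            for v, bw in adj.get(u, {}).items():  # shortest-path DAG
                if dist.get(v) == dist[u] + 1:
                    w = min(width[u], bw)
                    if w > width.get(v, -1.0):
                        width[v], parent[v] = w, u
        path, node = [], dst
        while node is not None:
            path.append(node)
            node = parent[node]
        return width[dst], path[::-1]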
The Border Gateway Protocol (BGP) allows ASs to apply diverse routing policies for selecting routes and
propagating reachability information to other ASs.
This enables network operators to configure routing policies so as
to control traffic flows between ASs.
However, BGP was not designed for inter-AS traffic engineering.
This makes it difficult to implement effective routing policies to address
network performance and utilization problems.
Network operators usually tweak routing policies to influence the
inter-domain traffic among the available links.
This can lead to undesirable traffic flow patterns across the Internet
and degrade the Internet traffic performance.
In this paper, we present several observations on Internet traffic flow patterns and
identify the routing policies that give rise to them.
First, our results show that an AS can reach as much as 20\% of the prefixes
via a peer link even though there is a path via a customer link.
In addition, an AS can reach as much as 80\% of the prefixes via
a provider link even though there is a path via a peer link.
Second, we analyze the cause of the prevalence of
these traffic patterns.
Our analysis shows that an AS typically does not receive
the potential route from its customers or peers.
Third, we find that
alternate routes have lower propagation delay than the chosen routes
for some prefixes.
This suggests that some traffic engineering practices might adversely affect end-to-end performance.
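The policies behind these patterns are commonly expressed as relationship-based local preference and export rules (the customer > peer > provider convention). The Python snippet below illustrates that common convention with invented preference values; it is not a reconstruction of any particular AS's configuration.

    # Common convention: prefer customer routes (they earn revenue) over
    # peer routes, and peer routes over provider routes; export only
    # customer-learned routes to peers and providers.
    PREF = {'customer': 300, 'peer': 200, 'provider': 100}

    def local_pref(learned_from):
        return PREF[learned_from]

    def should_export(learned_from, export_to):
        # export everything to customers; only customer routes elsewhere
        return export_to == 'customer' or learned_from == 'customer'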
In this paper we report the development and implementation of QoPS, a Quality of Service (QoS) provisioning system that supports voice, video, and data applications in home wireless networks. QoPS is independent of the network interface card and of the multimedia applications being run. We evaluated our solution on a PC/Windows-based testbed consisting of a home wireless network connected to the Internet via a residential gateway; the experimental results provide evidence that QoPS delivers quality-of-service support for multimedia applications.
In this research, the optimal radio resource management (RRM) design is formulated as a
Semi-Markov Decision Process (SMDP) optimization problem. The
stationary optimal call admission control policy, which adopts a hybrid
handoff scheme, is determined by solving a linear programming
problem under users' QoS constraints. The call admission control
policy can choose hard handoff, soft handoff, or simple rejection of
the request, depending on the traffic conditions and QoS metrics. The
performance is analyzed via simulation and compared with those of the
non-controlled scheme and the fixed RRM scheme. Simulations are
conducted in OPNET to study the performance under different traffic conditions.
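For intuition, a guard-channel flavor of a hybrid admission rule can be sketched in a few lines of Python. The thresholds and the soft/hard choice below are hypothetical; the paper's policy is the solution of the SMDP linear program, not a fixed-threshold rule.

    def admit(request, used, capacity, guard=2, soft_capable=True):
        """Toy hybrid call admission: keep 'guard' channels in reserve for
        handoffs; serve a handoff as soft handoff when the system can hold
        legs in two cells at once, otherwise as hard handoff."""
        if request == 'new':
            return 'accept' if used < capacity - guard else 'reject'
        if request == 'handoff':
            if used < capacity:
                return 'soft-handoff' if soft_capable else 'hard-handoff'
            return 'reject'
        raise ValueError(request)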
This paper presents a formal study of channelized bandwidth resource provisioning for multi-rate, multi-session video broadcast in a broadband wireless network with heterogeneous users. The formulation is generic in that it considers both inter-session and intra-session wireless resource allocation. It also takes into account the most fundamental issues associated with video transmission, including encoding complexity, transport overhead, the non-linear relationship between receiver-perceived video quality and delivered bandwidth, and intra- and inter-session fairness. In addition, we derive an optimal allocation scheme to manage the scarce wireless resources.
TCP(a,b) protocols parameterize the congestion window increase value a and decrease ratio b, motivated by the QoS requirements of multimedia applications for smooth rate adjustments. Based on a projection of the expected throughput of standard TCP(1, 0.5), modified versions of the protocol have been designed to generate smoother traffic patterns and to maintain friendly behavior. In this paper, we study the design assumptions of TCP(a,b) and discuss the impact of equation-based modulation of a and b on application efficiency. We confirm experimentally that, in general, smoothness and responsiveness constitute a tradeoff; however, we uncover undesirable dynamics of the protocols when the network is heterogeneous (wired/wireless) or the flow characteristics do not follow a prescribed and static behavior. For example, we show that smooth backward adjustments confine the protocol's capability to exploit resources that become available rapidly after a handoff in mobile networks, and hinder the fair and efficient growth of incoming flows. Furthermore, we show that in the context of wireless networks with high error rates, a low a dictates a conservative behavior that degrades the protocol's performance for both best-effort and real-time applications, and that under high contention among heterogeneous flows, a low a does not contribute to efficiency and friendliness.
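Underneath the (a, b) parameterization is plain additive-increase/multiplicative-decrease with tunable gains. A minimal Python sketch of the window dynamics (the parameter values are illustrative only):

    def tcp_ab_window(events, a=0.31, b=0.125, w0=1.0):
        """Trace a TCP(a,b) congestion window: grow by 'a' per loss-free
        RTT, shrink to (1 - b) of itself on a loss event. Standard TCP is
        (a, b) = (1, 0.5); smooth variants use a small 'a' and 'b'.
        events: iterable of 'ack' (loss-free RTT) or 'loss'."""
        w, trace = w0, []
        for e in events:
            w = w + a if e == 'ack' else max(1.0, w * (1.0 - b))
            trace.append(w)
        return trace

    print(tcp_ab_window(['ack'] * 5 + ['loss'] + ['ack'] * 5))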
In this paper we study the bandwidth provisioning problem for a service
overlay network (SON), which buys bandwidth from the underlying network
domains to provide end-to-end, value-added, QoS-sensitive services such as VoIP and Video-on-Demand. A key problem in SON deployment is the
problem of bandwidth provisioning, which is critical to recovering the
costs of deploying and operating value-added services over the SON.
We mathematically formulate
the bandwidth provisioning problem, taking into account
various factors such as SLAs, service QoS, traffic demand distributions,
and bandwidth costs. Analytical models and approximate solutions are
developed for long-term static bandwidth provisioning. Numerical studies
are also performed to illustrate the properties of the proposed solution
and demonstrate the effect of traffic demand distributions and bandwidth
costs on the bandwidth provisioning of a SON.
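The flavor of the static problem can be conveyed by a crude Monte Carlo sketch in Python: choose the bandwidth that balances the per-unit revenue of carried traffic against the per-unit cost of purchased bandwidth. All quantities below (prices, costs, the exponential demand) are invented for illustration and are much simpler than the paper's model.

    import random

    def provision(demand_sampler, price, cost, candidates, n=10000):
        """Pick the bandwidth c that maximizes the estimated expected
        profit: price * E[min(demand, c)] - cost * c."""
        def profit(c):
            carried = sum(min(demand_sampler(), c) for _ in range(n)) / n
            return price * carried - cost * c
        return max(candidates, key=profit)

    best = provision(lambda: random.expovariate(1 / 100.0),
                     price=3.0, cost=1.0, candidates=range(0, 301, 10))
    print(best)  # near 100*ln(3) ~ 110; cheaper bandwidth pushes it up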
Traffic engineering has become a central topic in contemporary public IP networks, driven by economic considerations and the competitive aspiration on the part of network service providers to reduce operational expenditures, improve service quality, and increase revenue. Internet traffic engineering deals with the performance optimization of operational IP networks, encompassing the application of technology and scientific principles to network planning, dimensioning, and dynamic control. The key concepts developed in this paper center around the notion of "integrated traffic engineering," which embraces a broader and more holistic view of operational network performance enhancement when compared with conventional traffic engineering, especially in IP over Optical (IPO) networks. Integrated traffic engineering refers to a new philosophy of collaborative decision making geared towards operational network performance optimization that involves simultaneous consideration of various control and management capabilities at different levels in the network hierarchy. Integrated traffic engineering is intended to increase network capacity, performance, efficiency, and survivability while reducing costs. This paper discusses the fundamental concepts of integrated traffic engineering in IPO networks.
In this paper we study regenerator placement and traffic engineering of restorable paths in Generalized Multiprotocol
Label Switching (GMPLS) networks. Regenerators are necessary in optical networks due to transmission
impairments. We study a network architecture where there are regenerators at selected nodes and we propose
two heuristic algorithms for the regenerator placement problem. The performance of these algorithms, in terms of
the required number of regenerators and computational complexity, is evaluated. In this network architecture with
sparse regeneration, offline computation of working and restoration paths is studied with bandwidth reservation
and path rerouting as the restoration scheme. We study two approaches for selecting working and restoration
paths from a set of candidate paths and formulate each method as an Integer Linear Programming (ILP) problem.
A traffic uncertainty model is developed in order to compare these methods based on their robustness with
respect to changing traffic patterns. The traffic engineering methods are compared based on the number of additional
demands, arising from traffic uncertainty, that can be carried. Regenerator placement algorithms are also evaluated
from a traffic engineering point of view.
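For a single lightpath, the reach constraint already fixes the natural greedy building block: extend the transparent segment as far as the optical reach allows, then regenerate. The Python sketch below shows only this per-path step (span lengths and reach are invented); the paper's heuristics tackle the harder network-wide placement.

    def place_regenerators(span_lengths, reach):
        """Greedy placement along one path: regenerate at the last node
        before the accumulated transparent length would exceed the reach.
        span_lengths[i] is the length of the fiber span after node i."""
        sites, run = [], 0.0
        for node, span in enumerate(span_lengths):
            if span > reach:
                raise ValueError("a single span exceeds the optical reach")
            if run + span > reach:
                sites.append(node)   # regenerate at this node
                run = 0.0
            run += span
        return sites

    print(place_regenerators([400, 300, 500, 200, 600], reach=1000))
    # [2, 4]: transparent segments 400+300, then 500+200, then 600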
Network researchers employ a variety of experimental methods and
tools including analytic modeling techniques, simulators, and widely deployed
measurement infrastructures. It is natural to assume that the overall
scope of network research may be limited by the type and capability of the
tools and test systems that are available. In this paper we describe a new,
bench-style approach for conducting network research that we argue is
essential for effectively investigating different classes of important
problems. We describe the architecture for the workbench environment
that enables this approach, which we call the Internet Instance Laboratory
(IIL). The conceptual model for an IIL is a highly configurable laboratory
environment containing commercial networking equipment typical of any
end-to-end path in the Internet. An IIL would also have the capability
to accurately create a broad range of conditions across all networking
layers. The two most important advantages of an IIL are the ability to
instrument entire end-to-end paths and the ability to install new
equipment or protocols in any location in the environment. Clearly,
neither of these opportunities is available in the live Internet. While
an IIL offers significant capabilities, developing such a testbed is not
without significant challenges. We describe these challenges and
approaches for addressing them. Finally, we discuss different classes
of research questions which would become tractable if an IIL were available.
Internet measurements show that the size distribution of Web-based transactions is usually very skewed; a few large requests constitute most of the total traffic. Motivated by the advantages of scheduling algorithms which favor short jobs, we propose to
perform differentiated control over Web-based transactions to give preferential service to short Web requests. The control is realized through service semantics provided by
Internet Traffic Managers, a Diffserv-like architecture.
To evaluate the performance of such a control system, it is necessary to have a fast but accurate analytical method.
To this end, we model the Internet as a time-shared system and propose a numerical approach which utilizes Kleinrock's conservation law to solve the model. The numerical results are shown to match well those obtained
by packet-level simulation, which runs orders of magnitude slower than our numerical method.
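The conservation law in question constrains the per-class mean waiting times of any work-conserving, non-preemptive M/G/1 discipline; in its textbook form (our notation, which may differ from the paper's),
\[
\sum_{p=1}^{P} \rho_p W_p \;=\; \frac{\rho\, W_0}{1-\rho},
\qquad
W_0 = \sum_{p=1}^{P} \frac{\lambda_p \overline{x_p^2}}{2},
\qquad
\rho = \sum_{p=1}^{P} \rho_p ,
\]
where class $p$ has arrival rate $\lambda_p$, load $\rho_p = \lambda_p \overline{x_p}$, and mean wait $W_p$. Because the weighted sum of waits is invariant, any reduction in the delay of short requests is necessarily paid for by longer waits for the large ones.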
We analyze the global BGP routing instabilities observed during the Code Red II and Nimda worm attacks in July and September 2001, respectively. We present a compelling analysis of the correlation between the observed instabilities and the worm attacks. We analyze router failure modes that can be triggered by the abnormal traffic during the worm attacks and how they can lead to global routing instability. Independent research has partially confirmed that such failure modes can
and likely do occur in practice. Highly detailed large-scale simulations help close the loop, indicating that such failure modes do in fact trigger the kind of widespread BGP instabilities that were observed empirically.
A traffic matrix (TM) is a succinct representation of traffic
exchanges between nodes in a communication network. Such a
representation is of major interest to ISPs since it is needed to
design the network topology, perform capacity planning, configure
network routing policies, and assist in traffic engineering. Research
on TMs has taken off only recently and significant efforts are
underway to reach solutions that enable network operators to obtain
TMs systematically, either by measurement or inference approaches. In
this paper we take a step toward defining a common framework for
describing TMs. We introduce a two-level taxonomy of TMs based
on the spatial representation of network traffic used and the
aggregation level for the sources and destinations
engaging in traffic exchanges. We show that conversion between traffic
matrix types depends on the level of aggregation. Using the defined
taxonomy, we show the relationship between traffic matrix types and
their size complexity, that is, the number of elements in them. We
enumerate important network engineering and management applications
and use the taxonomy to clearly specify which type of traffic matrix
is needed for each application. We briefly discuss scalability issues
related to the methods for obtaining traffic matrices in the context
of the defined taxonomy.
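As a concrete illustration of aggregation-level conversion, a router-to-router TM rolls up to a PoP-to-PoP TM by summing over a router-to-PoP mapping (the names below are invented); the reverse, disaggregation, is underdetermined without extra information.

    from collections import defaultdict

    def aggregate_tm(router_tm, pop_of):
        """Roll a router-to-router traffic matrix up to PoP level.
        router_tm: {(src_router, dst_router): volume}; pop_of: {router: pop}."""
        pop_tm = defaultdict(float)
        for (s, d), vol in router_tm.items():
            pop_tm[(pop_of[s], pop_of[d])] += vol
        return dict(pop_tm)

    tm = {('r1', 'r3'): 5.0, ('r2', 'r3'): 2.0, ('r1', 'r2'): 1.0}
    print(aggregate_tm(tm, {'r1': 'NYC', 'r2': 'NYC', 'r3': 'SF'}))
    # {('NYC', 'SF'): 7.0, ('NYC', 'NYC'): 1.0}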
Aggregate network traffic exhibits strong burstiness and non-Gaussian
distributions, which popular models such as
fractional Gaussian noise (fGn) fail to
capture. To better understand the cause of traffic burstiness, we analyze
connection-level information of traffic traces. A careful study reveals that
traffic burstiness is directly related to the heterogeneity in connection
bandwidths and round-trip times and that a small number of high-bandwidth
connections are solely responsible for bursts. This separation of
connections has far-reaching implications for network control and leads to a new
model for network traffic which we call the alpha/beta model.
In this model, the network traffic is composed of two components:
a bursty, non-Gaussian
alpha component (stable Levy noise) and
a Gaussian, long range dependent beta component (fGn).
We present a fast scheme to separate the alpha and beta components
of traffic using wavelet denoising.
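A generic version of such a separation can be sketched with PyWavelets. This hard-threshold scheme is our illustration under standard denoising assumptions, not necessarily the paper's exact procedure: the bursts live in the few large detail coefficients, while the approximation and the small coefficients form the Gaussian-like background.

    import numpy as np
    import pywt  # PyWavelets

    def split_alpha_beta(x, wavelet='db4', level=5):
        """Illustrative alpha/beta split: hard-threshold the wavelet detail
        coefficients with the universal threshold; reconstructing only the
        surviving (large) coefficients approximates the bursty alpha part,
        and the remainder is the Gaussian-like beta part."""
        x = np.asarray(x, dtype=float)
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale estimate
        thresh = sigma * np.sqrt(2 * np.log(len(x)))     # universal threshold
        spikes = [np.zeros_like(coeffs[0])] + \
                 [pywt.threshold(c, thresh, mode='hard') for c in coeffs[1:]]
        alpha = pywt.waverec(spikes, wavelet)[:len(x)]
        return alpha, x - alpha                          # (alpha, beta)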
We study the impact of aggregation on the scaling behavior of Internet backbone traffic,
based on traces collected from OC3 and OC12 links in a tier-1 ISP.
We make two striking observations regarding the sub-second small time
scaling behaviors of Internet backbone traffic: 1) for a majority of these traces,
the Hurst parameters at small time scales (1ms - 100ms) are fairly
close to 0.5, hence the traffic at these time scales is nearly
uncorrelated; 2) the scaling behaviors at small time
scales are link-dependent, and stay fairly invariant over changing utilization levels.
To understand the scaling behavior of network traffic,
we develop analytical models and employ them to demonstrate
how traffic composition -- aggregation of traffic with different
characteristics -- affects the small-time scalings of network traffic.
The degree of aggregation and burst correlation structure are two major
factors in traffic composition. Our
trace-based data analysis confirms this. Furthermore, we discover that
traffic composition on a backbone link stays fairly consistent over time and
changing utilization, which we believe is the cause for the invariant
small-time scalings we observe in the traces.
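The H close to 0.5 observation can be reproduced with the classic aggregated-variance estimator, a rough diagnostic rather than the wavelet-based estimators typically used in such studies. A minimal Python version:

    import numpy as np

    def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32, 64)):
        """Aggregated-variance estimate of the Hurst parameter: average the
        series over non-overlapping blocks of size m; for self-similar
        traffic, log Var(X^(m)) vs. log m has slope 2H - 2."""
        x = np.asarray(x, dtype=float)
        log_m, log_v = [], []
        for m in scales:
            n = len(x) // m
            blocks = x[:n * m].reshape(n, m).mean(axis=1)
            log_m.append(np.log(m))
            log_v.append(np.log(blocks.var()))
        slope = np.polyfit(log_m, log_v, 1)[0]
        return 1.0 + slope / 2.0

    rng = np.random.default_rng(0)
    print(hurst_aggvar(rng.normal(size=100000)))  # i.i.d. noise -> H near 0.5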
Internet traffic on a network link can be modeled as a stochastic process. After detecting and quantifying the
properties of this process, using well-known tools from statistics as well as some variants,
a series of mathematical models is developed, culminating in one
which is able to generate ``traffic'' that exhibits, as a key feature,
different behavior at different time scales, similar to real traffic, and is moreover indistinguishable from real traffic
by other statistical tests as well. Tools inspired by the models are then used to determine and calibrate
the type of activity taking place in each of the time scales. The above procedure does not require any detailed
information originating from either the network dynamics, or the decomposition of the total traffic into its constituent
user connections, but rather only the compliance of these connections with very weak conditions.
As we increase our dependency upon networked communication, the
incentive to compromise and degrade network performance increases for
those who wish to disrupt the flow of information. Attacks that lead
to such compromise and degradation can come in a variety of forms,
including distributed denial of service (DDoS) attacks, cutting wires,
jamming transmissions, and monitoring/eavesdropping. Users can
protect themselves from monitoring by applying cryptographic
techniques, and recent work has explored developing networks that
react to DDoS attacks by locating the source(s) of the attack.
However, there has been little work that addresses preventing the
other kinds of attacks as opposed to reacting to them. Here, we
discuss how network overlays can be used to complicate the job of an
attacker that wishes to prevent communication. To amplify our point,
we focus briefly on a study of preventing DDoS attacks by using overlays.
Assisting during emergencies is one of the important functions of the
telephone system. Emergency communications has three components:
summoning help during emergencies, coordinating emergency response and
notifying citizens and public officials of local emergencies. As we transition to an
Internet-based telecommunications system, these functions need to be
provided, but there is also an opportunity to add new functionality and
improve scalability and robustness. We discuss three aspects of
Internet-based communications related to emergencies: First, we
describe how Internet telephony can be used to provide emergency call
(``911'' or ``112'') services. Second, Internet telephony needs to be
enhanced to allow prioritized access to communications resources during
emergency-induced network congestion. Finally, Internet event
notification can be a valuable new mechanism to alert communities to
pending or on-going emergencies such as hurricanes or chemical spills.
We consider the dynamic sensor coverage problem in the absence of
global localization information. In the regime where few sensors are
available compared to the size of the space being explored, a
successful strategy must effectively mobilize the sensors by mounting
them on mobile robots. We present here an approach where mobile robots explore an uncharted environment by deploying small communication beacons. These beacons act as local markers for preferred directions
of further exploration. The robots never acquire a map of their
surroundings, nor are they localized; however, they ensure timely coverage
of all regions of the space by relying on the local instructions
disseminated by the stationary communication beacons. Preliminary data
from experiments suggest that our algorithm produces exploratory,
patrol-like behavior, resulting in good spatial sensor coverage.
Disasters such as the 9/11 attacks, as well as major and unpredictable events, can cause extreme network overload. By "extreme overload" we mean, first, that the offered load at a link is significantly higher than the link's capacity, and second, that the average throughput per session is too low. Under such conditions, the network can suffer from a form of "livelock" in which even though links are fully utilized, most users cannot complete their transfers. The underlying reasons are that the network carries many retransmitted packets, and that it services flows that are finally aborted by users or applications.
To prevent extreme network overload, we propose a router mechanism called Guardian. Guardian is an admission control module that is automatically activated when it detects the onset of extreme overload at a network link. Guardian's objective is to allow at least some sessions to complete, rejecting new TCP or UDP sessions that would probably not manage to acquire a minimum acceptable throughput. Guardian does not require signalling, and it can be implemented using standard techniques for session counting and caching.
This paper describes on-going work. As such, we focus on the motivation for the proposed mechanism, and on Guardian's main design.
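A minimal Python sketch of the kind of admission gate Guardian describes (the threshold rule and counting method below are our simplification; the paper specifies session counting and caching but not these exact details):

    class Guardian:
        """Toy admission gate for one link: once the estimated throughput
        per active session falls below an acceptable minimum, stop
        admitting new sessions so that existing ones can complete."""

        def __init__(self, capacity_bps, min_session_bps):
            self.capacity = capacity_bps
            self.min_rate = min_session_bps
            self.active = set()              # cached session identifiers

        def overloaded(self):
            n = len(self.active)
            return n > 0 and self.capacity / n < self.min_rate

        def admit(self, session_id):
            if session_id in self.active:    # existing session: always pass
                return True
            if self.overloaded():            # extreme overload: reject new
                return False
            self.active.add(session_id)
            return True

        def finished(self, session_id):
            self.active.discard(session_id)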
Connectivity within ad-hoc and peer-to-peer networks undergoes
constant change. One approach to reducing the cost of finding
information within these networks is to replicate the information
among multiple points within the network. A desirable replication
approach should cache copies of all pieces of information as close to
each node as possible without exceeding the storage resources of the
nodes within the network. In addition, the approach should require
minimum communication overhead among participating nodes and should
adjust the locations of stored content as connectivity within the
network changes. Here, we formulate this caching problem as a graph
coloring problem, where the color of the node determines the content
that the node should store. We present a distributed algorithm where
each node chooses its color in a greedy manner, minimizing its own
distance to the color furthest from it. We demonstrate convergence of
this algorithm and evaluate its performance in the context of its
ability to place information near all nodes in the network.
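One local step of that greedy rule can be sketched as follows in Python (a simplified, synchronous recoloring step; convergence of the full distributed iteration is what the paper proves):

    from collections import deque

    def bfs_dist(adj, src):
        """Hop distances from src in an undirected graph {node: neighbors}."""
        dist, q = {src: 0}, deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def greedy_recolor(adj, colors, node, palette):
        """The node adopts whichever color (content item) currently has its
        nearest holder furthest away, thereby minimizing the node's own
        distance to its furthest color."""
        dist = bfs_dist(adj, node)
        def nearest_holder(c):
            ds = [dist.get(v, float('inf'))
                  for v, col in colors.items() if col == c and v != node]
            return min(ds) if ds else float('inf')
        colors[node] = max(palette, key=nearest_holder)
        return colors[node]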
We present the design and evaluation of the query-routing protocol of
the TerraDir distributed directory. TerraDir is a wide-area
distributed directory designed for hierarchical namespaces, and
provides a lookup service for mapping keys to objects. We introduce
distributed lookup and caching algorithms that leverage the underlying
data hierarchy. Our algorithms provide efficient lookups while
avoiding the load imbalances often associated with hierarchical
systems. The TerraDir load balancing scheme also incorporates a node
replication algorithm that provides configurable failure resilience
with provably low overheads.
Although peer-to-peer networking applications continue to increase
in popularity, there have been few measurement studies of their performance. We present the first study of the locality of files stored and transferred among peers in Napster and Gnutella over month-long periods. Our analysis indicates that the locality of files is skewed in all four cases and fits a log-quadratic distribution well. This suggests that caching the most popular songs would improve the performance of the system. We also took baseline measurements of file types and sizes for comparison over time with future studies. Not surprisingly, audio files are the most popular; however, a significant fraction of the stored data is occupied by videos.
Finally, we measured the distribution of time that peers in Gnutella were available for downloading. We found that node availability is strongly influenced by time-of-day effects, and that most users tend to be available only for very short contiguous lengths of time.