Internet Engineering Task Force                          Indra Widjaja
INTERNET DRAFT                          Fujitsu Network Communications
Expires in six months                                     Anwar Elwalid
                                         Bell Labs, Lucent Technologies
                                                            August 1998
MATE: MPLS Adaptive Traffic Engineering
<draft-widjaja-mpls-mate-00.txt>
Status of this Memo
This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as ``work in progress.''
To learn the current status of any Internet-Draft, please check the
``1id-abstracts.txt'' listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
ftp.isi.edu (US West Coast).
Abstract
This document describes an MPLS Adaptive Traffic Engineering scheme,
called MATE. The main goal of MATE is to avoid network congestion by
balancing the loads among the LSPs. MATE makes minimal assumptions in
that intermediate LSRs are not required to perform traffic engineering
activities or measurements besides forwarding packets. Moreover, MATE
does not impose any particular scheduling, buffer management,
architecture, or a priori traffic characterization on the LSRs.
1.0 Introduction
One of the main advantages of MPLS is its efficient support of explicit
routing through the use of Label Switched Paths (LSPs) [1][2]. With
destination-based forwarding as in conventional datagram networks,
explicit routing is usually provided by attaching to each packet the
network-layer address of every node along the explicit path, which
makes the per-packet overhead prohibitively expensive. Switched
forwarding using label swapping in MPLS enables a label to identify an
LSP regardless of whether the LSP is established through hop-by-hop or
explicit routing. In other words, MPLS imposes the same amount of
packet header overhead irrespective of the use of explicit routing.
Explicit routing has useful applications such as Virtual Private
Networks (VPNs) and traffic engineering. This memo focuses on
engineering the traffic across multiple explicit routes. The purpose
of traffic engineering is to manage network resources as efficiently
as possible so that congestion is minimized; this may be done by
avoiding links that are already heavily loaded. Traffic engineering
typically becomes more effective as the network provides more
alternate paths. It also becomes more critical in large Autonomous
Systems, where operational efficiency should be maximized [3].
It is envisioned that traffic engineering is performed only for
traffic that does not require resource reservation but may need
provisioning on an aggregated basis; examples include best-effort and
differentiated services. This memo proposes that traffic engineering
be done by establishing multiple LSPs between a given ingress LSR and
a given egress LSR in an MPLS domain. The main objective is to have
the ingress LSR distribute traffic across the multiple LSPs
effectively so that network resources among the LSPs are equitably
utilized. This memo describes a scheme called MPLS Adaptive Traffic
Engineering (MATE). Note that MATE is intended for quasi-stationary
situations where traffic statistics change relatively slowly (on time
scales much longer than the round-trip delay between the ingress and
egress LSRs). Recent measurements in the Internet indicate traffic
stationarity over intervals of at least 5 minutes [4].
Some of the features of MATE include:
o Traffic engineering on a per-class basis.
o Distributed load-balancing algorithm.
o End-to-end protocol between ingress and egress LSRs.
o No new hardware or protocol requirements on intermediate LSRs,
and no a priori assumptions on traffic distributions.
o No assumption on the scheduling or buffer management
schemes at the LSR.
o Optimization decision based on LSP congestion measure.
o Minimal packet reordering due to traffic engineering.
o No clock synchronization required between the two LSRs.
2.0 Traffic Engineering Using MATE
This section provides the description of MATE. While the specifics of
the scheme are tailored for best-effort service, the basic principles
are intended to be extensible to other services (e.g., differentiated
services) as well.
2.1 Overview
It is assumed that M explicit LSPs have been established between an
ingress LSR and an egress LSR when traffic engineering is to be
facilitated between these two LSRs. The value of M is typically
between 2 and 5 (the case with M=1 is not interesting). The M LSPs
may be chosen to be the "best" M paths calculated by a link-state
protocol or configured manually. Explicit LSPs may be established by
RSVP, LDP, or other means. The mechanism to select and establish the
LSPs is beyond the scope of this document.
Once the LSPs are set up, the ingress LSR tries to distribute the
traffic across the LSPs so that the traffic loads are balanced and
congestion is thus minimized. The traffic to be engineered at the
ingress LSR is the aggregated flow (called a traffic trunk in [5])
that shares the same Forwarding Equivalence Class. This document
assumes that the traffic to be engineered consists of best-effort
service; future work will consider differentiated services.
In order to perform effective traffic distribution, the
characteristics of the LSPs and the QoS requirements of the traffic
must be known. In general, the LSP characteristics may include the
average packet delay, packet delay variance, loading
factor/utilization, packet loss rate, bottleneck bandwidth, available
bandwidth, etc. For best-effort traffic, there is no explicit QoS
requirement, except that a minimal packet loss rate is desirable.
Since MATE is intended to be as flexible as possible, the pertinent
LSP characteristics are not assumed to be given quantities, but must
be gathered through measurement. In MATE, the ingress LSR
periodically transmits probe packets to the egress LSR, which returns
the probe packets to the ingress LSR. Based on the information in the
returning probe packets, the ingress LSR is able to compute the LSP
characteristics. Intermediate LSRs are not required to modify the
contents of the probe packets, but such optional capabilities may be
used to refine the measurement process.
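The probing loop can be sketched as follows. This is a minimal illustration, not the MATE wire format: the packet fields, the smoothing constant `alpha`, and the use of a round-trip measurement (which sidesteps the clock-synchronization issue, since both timestamps are taken at the ingress LSR) are all assumptions of the sketch.

```python
import time

def send_probe(lsp_id, seq):
    """Build a probe packet carrying the ingress transmit timestamp."""
    return {"lsp": lsp_id, "seq": seq, "t_sent": time.monotonic()}

def on_probe_return(probe, stats):
    """Update the per-LSP delay estimate when the egress echoes a probe.

    Both timestamps are taken at the ingress LSR, so no clock
    synchronization with the egress is needed.
    """
    rtt = time.monotonic() - probe["t_sent"]
    lsp = probe["lsp"]
    alpha = 0.2  # assumed EWMA smoothing constant, not specified by MATE
    prev = stats.get(lsp)
    stats[lsp] = rtt if prev is None else (1 - alpha) * prev + alpha * rtt
    return stats[lsp]
```

In practice the estimate fed to the load balancer could be any of the LSP characteristics listed above; the exponentially weighted average here is only one plausible smoothing choice.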
MATE employs a four-phase algorithm for load balancing. The first
phase initializes the congestion measure for each LSP; the congestion
measure may be a function of the delay derivative and packet loss.
In the second phase, the algorithm tries to equalize the congestion
measure across the LSPs. Once the measures are equalized, the
algorithm moves to the third phase, in which it monitors each LSP. If
an appreciable change in the network state is detected, the algorithm
moves to the fourth phase, where the congestion measures are
appropriately adjusted. The algorithm then returns to the second
phase and the whole process repeats.
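The four-phase cycle amounts to a small state machine; the sketch below captures only the control flow. The predicates `measures_equalized` and `state_changed` are stand-ins for the congestion-measure comparisons, which MATE derives from the probe measurements.

```python
from enum import Enum, auto

class Phase(Enum):
    INIT = auto()      # phase 1: initialize congestion measures
    EQUALIZE = auto()  # phase 2: shift load until measures are equal
    MONITOR = auto()   # phase 3: watch for network-state changes
    ADJUST = auto()    # phase 4: re-adjust measures after a change

def next_phase(phase, measures_equalized=False, state_changed=False):
    """One transition of the MATE load-balancing cycle (control flow only)."""
    if phase is Phase.INIT:
        return Phase.EQUALIZE
    if phase is Phase.EQUALIZE:
        return Phase.MONITOR if measures_equalized else Phase.EQUALIZE
    if phase is Phase.MONITOR:
        return Phase.ADJUST if state_changed else Phase.MONITOR
    # Phase.ADJUST: after adjusting, resume equalization.
    return Phase.EQUALIZE
```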
2.2 Traffic Filtering and Distribution
MATE performs a two-stage traffic distribution. First, MATE
distributes the engineered traffic for a given ingress-egress pair
equally among N bins at the ingress LSR. If the total incoming
traffic to be engineered has rate R bps, each bin receives r = R/N
bps. Then, the traffic from the N bins is mapped onto the M LSPs
according to the rule defined below.
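One way to realize a bin-to-LSP mapping is to give each LSP a number of bins proportional to its target traffic share, so that moving a single bin shifts load in granules of r = R/N bps. The largest-remainder rounding below is an illustrative choice, not part of the MATE specification.

```python
def bins_to_lsps(n_bins, shares):
    """Return, per LSP, the number of bins assigned to it.

    `shares` is the target fraction of traffic for each of the M LSPs
    and sums to 1.0. Integer counts are produced by largest-remainder
    rounding so that the counts sum exactly to n_bins.
    """
    exact = [s * n_bins for s in shares]
    counts = [int(x) for x in exact]
    # Hand leftover bins to the LSPs with the largest fractional parts.
    leftovers = n_bins - sum(counts)
    order = sorted(range(len(shares)),
                   key=lambda i: exact[i] - counts[i], reverse=True)
    for i in order[:leftovers]:
        counts[i] += 1
    return counts
```

For example, with N=8 bins and target shares of 1/2, 1/4, 1/4 across M=3 LSPs, the mapping assigns 4, 2, and 2 bins respectively.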
The engineered traffic can be filtered and distributed into the N
bins in a number of ways. A simple method is to distribute the
traffic on a per-packet basis without filtering; for example, one may
distribute incoming packets at the ingress LSR to the bins in
round-robin fashion. Although this method does not have to maintain
any per-flow state, it may have to reorder an excessive number of
packets within a given flow, which is undesirable for TCP
applications.
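The round-robin variant can be sketched in a few lines. The point is that the bin assignment depends only on a running packet counter, not on the packet contents, so successive packets of a single flow are sprayed across different bins (and hence possibly different LSPs).

```python
from itertools import count

def round_robin_bin(n_bins):
    """Return a function assigning successive packets to bins 0..N-1.

    Stateless apart from a counter: the packet argument is ignored, so
    packets of one flow land in different bins, which is the source of
    the reordering concern discussed above.
    """
    c = count()
    return lambda packet: next(c) % n_bins
```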
At the other extreme, one may filter the traffic on a per-flow basis
(e.g., based on