Internet Draft						Gerald R. Ash
							AT&T Labs
							March 2000
Expires: September 2000



	Traffic Engineering & QoS Methods for IP-, ATM-, &
		TDM-Based Multiservice Networks 

		<draft-ash-te-qos-routing-00.txt>

STATUS OF THIS MEMO:  

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC2026.  Internet-Drafts are working documents
of the Internet Engineering Task Force (IETF), its areas, and its working
groups.  Note that other groups may also distribute working documents as
Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and
may be updated, replaced, or obsoleted by other documents at any time.  It
is inappropriate to use Internet-Drafts as reference material or to cite
them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.  The list of Internet-Draft
Shadow Directories can be accessed at http://www.ietf.org/shadow.html.
Distribution of this memo is unlimited.

ABSTRACT

This contribution proposes initial text for a new recommendation on traffic
engineering (TE) and QoS methods for IP-, ATM-, & TDM-based multiservice
networks. This contribution describes and recommends TE methods which
control a network's response to traffic demands and other stimuli, such as
link failures or node failures.  These TE methods include:
(a)	traffic management through control of routing functions, which
include call routing (number/name translation to routing address),
connection routing, QoS resource management, and routing table management.
(b)	capacity management through control of network design.
(c)	TE operational requirements for traffic management and capacity
management, including forecasting, performance monitoring, and short-term
network adjustment.
These TE methods are recommended for application across network types based
on established practice and experience.

***************************************************************************
NOTE: A MICROSOFT WORD VERSION OF THIS DRAFT (WITH THE FIGURES) IS
       AVAILABLE ON REQUEST 
***************************************************************************


TABLE OF CONTENTS

1.0 Introduction
1.1 Scope
1.2 Definitions
1.3 References
1.4 Abbreviations
1.5 Traffic Engineering Model
1.6 Traffic Model
1.7 Traffic Management Functions
1.8 Capacity Management Functions
1.9 Traffic Engineering Operational Requirements
1.10 Traffic Engineering Modeling & Analysis
1.11 Authors' Addresses

Annex 1.   Bibliography 

Annex 2.  Call Routing & Connection Routing Methods 
2.1	Introduction
2.2	Call Routing Methods
2.3	Connection (Bearer-Path) Routing Methods
2.4	Fixed Routing (FR) Path Selection
2.5	Time-Dependent Routing (TDR) Path Selection
2.6	State-Dependent Routing (SDR) Path Selection
2.7	Event-Dependent Routing (EDR) Path Selection
2.8	Interdomain Routing
2.9	Dynamic Transport Routing
2.10	 Modeling of Traffic Engineering Methods
2.11	Summary

Annex 3.  QoS Resource Management Methods
3.1 Introduction
3.2 Class-of-Service Identification, Policy-Based Routing Table Derivation,
& QoS Resource Management Steps
3.2.1 Class-of-Service Identification & Policy-Based Routing Table
Derivation
3.2.2 QoS Resource Management Steps
3.3 Bandwidth-Allocation, Bandwidth-Protection, and Priority-Routing Issues
3.3.1 Dynamic Bandwidth Reservation Principles
3.3.2 Per-Virtual-Network QoS Resource Allocation
3.3.3 Per-Flow QoS Resource Allocation
3.4 Priority Queuing
3.5 Other QoS Resource Management Constraints
3.6 Interdomain QoS Resource Management
3.7 Modeling of Traffic Engineering Methods
3.7.1 Example of Bandwidth Reservation Methods
3.7.2 Comparison of Per-Virtual-Network & Per-Flow QoS Resource Management 

Annex 4.  Routing Table Management Methods & Requirements
4.1 Introduction
4.2 Routing Table Management for IP-Based Networks
4.3 Routing Table Management for ATM-Based Networks
4.4 Routing Table Management for TDM-Based Networks
4.5 Signaling and Information Exchange Requirements
4.5.1 Call Routing (Number Translation to Routing Address)
Information-Exchange Parameters
4.5.2 Connection Routing Information-Exchange Parameters
4.5.3 QoS Resource Management Information-Exchange Parameters


4.5.4 Routing Table Management Information-Exchange Parameters
4.5.5 Harmonization of Information-Exchange Standards
4.5.6 Open Routing Application Programming Interface (API)
4.6 Examples of Internetwork Routing
4.6.1 Internetwork E Uses a Mixed Path Selection Method
4.6.2 Internetwork E Uses a Single Path Selection Method
4.6.3 Modeling of Traffic Engineering Methods

Annex 5.  Capacity Management Methods
5.1 Introduction
5.2 Link Capacity Design Models
5.3 Shortest Path Selection Models
5.4 Multihour Network Design Models
5.4.1 Discrete Event Flow Optimization (DEFO) Models
5.4.2 Traffic Load Flow Optimization (TLFO) Models
5.4.3 Virtual Trunking Flow Optimization (VTFO) Models
5.5 Day-to-day Load Variation Design Models
5.6 Forecast Uncertainty/Reserve Capacity Design Models
5.7 Modeling of Traffic Engineering Methods

Annex 6.  Traffic Engineering Operational Requirements 
6.1 Introduction
6.2 Traffic Management
6.2.1 Real-time Performance Monitoring
6.2.2 Network Control
6.2.3 Work Center Functions
6.2.3.1 Automatic controls
6.2.3.2 Code Controls
6.2.3.3 Reroute Controls
6.2.3.4 Peak-Day Control
6.2.4 Traffic Management on Peak Days
6.2.5 Interfaces to Other Work Centers
6.3 Capacity Management---Forecasting
6.3.1 Load forecasting
6.3.1.1 Configuration Database Functions
6.3.1.2 Load aggregation, basing, and projection functions. 
6.3.1.3 Load adjustment cycle and view of business adjustment cycle. 
6.3.2 Network Design
6.3.3 Work Center Functions
6.3.4 Interfaces to Other Work Centers
6.4 Capacity Management---Daily and Weekly Performance Monitoring
6.4.1 Daily Congestion Analysis Functions
6.4.2 Study-week Congestion Analysis Functions
6.4.3 Study-period Congestion Analysis Functions
6.5 Capacity Management---Short-Term Network Adjustment
6.5.1 Network Design Functions
6.5.2 Work Center Functions
6.5.3 Interfaces to Other Work Centers
6.6 Comparison of TE with TDR versus SDR/EDR 

1.0 Introduction

Traffic engineering (TE) is an indispensable network function which controls
a network's response to traffic demands and other stimuli, such as network
failures.  TE encompasses 



*	traffic management through control of routing functions, which
include number/name translation to routing address, path selection, routing
table management, and QoS resource management.  
*	capacity management through control of network design.

Current and future networks are rapidly evolving to carry a multitude of
voice/ISDN services and packet data services on internet protocol (IP),
asynchronous transfer mode (ATM), and time division multiplexing (TDM)
networks. The long-awaited data revolution is occurring, with extremely
rapid growth of data services such as IP-multimedia and frame-relay
services.  Various TE methods have evolved within these categories of
networks and services supported by IP, ATM, and TDM protocols.  These TE
mechanisms are covered in the Recommendation, together with a comparative
analysis and performance evaluation of the various TE alternatives.
Finally, operational requirements for TE implementation are covered.

We begin this ANNEX with a general model for TE functions, which include
traffic management and capacity management functions responding to traffic
demands on the network.  We then present a traffic-variations model which
these TE functions are responding to.  Next we outline traffic management
functions, which include call routing (number/name translation to routing
address), connection or bearer-path routing, QoS resource management, and
routing table management.  These traffic management functions are further
developed in ANNEXs 2, 3, and 4.  We then outline capacity management
functions, which are further developed in ANNEX 5.  Finally we briefly
summarize TE operational requirements, which are further developed in ANNEX
6.

In ANNEX 2, we present models for call routing, which entails number/name
translation to a routing address associated with service requests, and also
compare various connection (bearer-path) routing methods. In ANNEX 3, we
examine QoS resource management methods in detail, and illustrate per-flow
versus per-bandwidth-pipe resource management and the realization of
multiservice integration with priority routing services.  In ANNEX 4, we
identify and discuss routing table management approaches. This includes a
discussion of TE signaling and  information exchange requirements needed for
interworking across network types, so that the information exchange at the
interface is compatible across network types. In ANNEX 5 we describe
principles for TE capacity management, and in ANNEX 6 we present TE
operational requirements.

1.1 Scope

This contribution proposes initial text for a new recommendation on traffic
engineering (TE) and QoS methods for IP-, ATM-, & TDM-based multiservice
networks. This contribution describes and recommends TE methods which
control a network's response to traffic demands and other stimuli, such as
link failures or node failures.  These TE methods include:
(a)	traffic management through control of routing functions, which
include call routing (number/name translation to routing address),
connection routing, QoS resource management, and routing table management.
(b)	capacity management through control of network design.
(c)	TE operational requirements for traffic management and capacity
management, including forecasting, performance monitoring, and short-term
network adjustment.
These TE methods are recommended for application across network types based
on established practice and experience.



1.2 Definitions

Call:	a generic term used to describe the establishment,
	utilization, and release of a connection or data flow;
Blocking: the denial or non-admission of a call or connection request,
	based for example on the lack of available resources on a
	particular link (e.g., link bandwidth or queuing resources);
Link:	a bandwidth transmission medium between nodes that is engineered
	as a unit;
Destination node: the node within a given network at which a
	connection/bandwidth-allocation request terminates;
Node:	a network element (switch, router/switch, exchange) providing
	switching and routing capabilities, or an aggregation of such
	network elements representing a network;
Multiservice Network: a network in which various classes of service share
	the transmission, switching, management, and other resources of
	the network;
O-D pair: an originating node to destination node pair for a given
	connection/bandwidth-allocation request;
Originating node: the node within a given network at which a
	connection/bandwidth-allocation request originates;
Path:	a concatenation of links providing a
	connection/bandwidth-allocation between an O-D pair;
Route:	a set of paths connecting the same O-D pair;
Routing table: describes the path choices and the selection rules used to
	select one path out of the route for a given
	connection/bandwidth-allocation request;
Traffic stream: a class of connection requests with the same traffic
	characteristics;
Via node: an intermediate node in a path within a given network.

1.3 References

[E.164]  ITU-T Recommendation, The International Telecommunications
Numbering Plan.

[E.170]  ITU-T Recommendation, Traffic Routing.

[E.177]  ITU-T Recommendation, B-ISDN Routing.

[E.191]  ITU-T Recommendation, B-ISDN Numbering and Addressing, October
1996.

[E.350]  ITU-T Recommendation, Dynamic Routing Interworking.

[E.351]  ITU-T Recommendation, Routing of Multimedia Connections Across
TDM-, ATM-, and IP-Based Networks.

[E.352]  ITU-T Recommendation, Routing Guidelines for Efficient Routing
Methods.

[E.353]  ITU-T Draft Recommendation, Routing of Calls when Using
International Network Routing Addresses.

[E.412]  ITU-T Recommendation,  Network Management Controls.

[E.529]  ITU-T Recommendation,  Network Dimensioning using End-to-End GOS
Objectives, May 1997.



[E.600]  ITU-T Recommendation,  Terms and Definitions of Traffic
Engineering, March 1993.

[G.723.1]  ITU-T Recommendation, Dual Rate Speech Coder for Multimedia
Communications Transmitting at 5.3 and 6.3 kbit/s, March 1996.

[E.734]  ITU-T Recommendation, Methods for Allocation and Dimensioning
Intelligent Network (IN) Resources, October 1996.

[H.225.0] ITU-T Recommendation, Media Stream Packetization and
Synchronization on Non-Guaranteed Quality of Service LANs, November 1996.

[H.245]  ITU-T Recommendation, Control Protocol for Multimedia
Communication, March 1996.

[H.246]  Draft ITU-T Recommendation, Interworking of H.Series Multimedia
Terminals with H.Series Multimedia Terminals and Voice/Voiceband Terminals
on GSTN and ISDN, September 1997.

[H.323]  ITU-T Recommendation, Visual Telephone Systems and Equipment for
Local Area Networks which Provide a Non-Guaranteed Quality of Service,
November 1996.

[I.211] ITU-T Recommendation, B-ISDN Service Aspects, March 1993.

[I.324]  ITU-T Recommendation, ISDN Network Architecture, 1991.

[I.327]  ITU-T Recommendation, B-ISDN Functional Architecture, March 1993.

[I.356]  ITU-T Recommendation, B-ISDN ATM Layer Cell Transfer Performance,
October 1996.

[Q.71]  ITU-T Recommendation, ISDN Circuit Mode Switched Bearer Services.

[Q.2761]  ITU-T Recommendation, Broadband Integrated Services Digital
Network (B-ISDN) Functional Description of the B-ISDN User Part (B-ISUP) of
Signaling System Number 7.

[Q.2931]  ITU-T Recommendation, Broadband Integrated Services Digital
Network (B-ISDN) - Digital Subscriber Signalling System No. 2 (DSS 2) -
User-Network Interface (UNI) Layer 3 Specification for Basic Call/Connection
Control, February 1995.

1.4 Abbreviations

AAR			Automatic Alternate Routing
ABR			Available Bit Rate
ADR			Address
AESA			ATM End System Address
AFI			Authority and Format Identifier
AINI			ATM Inter-Network Interface
ARR			Automatic Rerouting
AS			Autonomous System
ATM			Asynchronous Transfer Mode
B			Busy
BBP			Bandwidth Broker Processor
BGP			Border Gateway Protocol
BICC			Bearer Independent Call Control


B-ISDN			Broadband Integrated Services Digital Network
BNA			Bandwidth Not Available
BW			Bandwidth
BWIP			Bandwidth in Progress
BWOF			Bandwidth Offered
BWOV			Bandwidth Overflow
BWPC			Bandwidth Peg Count
CAC			Call Admission Control
CBK			Crankback
CBR			Constant Bit Rate
CCS			Common Channel Signaling
CIC			Call Identification Code
CRLDP			Constraint-Based Routing Label Distribution Protocol
CRLSP			Constraint-Based Routing Label Switched Path
DADR			Distributed Adaptive Dynamic Routing
DAR			Dynamic Alternate Routing
DCC			Data Country Code
DCR			Dynamically Controlled Routing
DIFFSERV		Differentiated Services
DN			Destination Node
DNHR			Dynamic Nonhierarchical Routing
DoS			Depth-of-Search
DSP			Domain Specific Part
DTL			Designated Transit List
EDR			Event Dependent Routing
ER			Explicit Route
FR			Fixed Routing
GCAC			Generic Call Admission Control
GOS			Grade of Service
HL			Heavily Loaded
IAM			Initial Address Message
ICD			International Code Designator
IDI			Initial Domain Identifier
IDP			Initial Domain Part
IE			Information Element
IETF			Internet Engineering Task Force
II			Information Interchange
ILBW			Idle Link Bandwidth
INRA			International Network Routing Address
IP			Internet Protocol
IPDC			Internet Protocol Device Control
LBL			Link Blocking Level 
LC			Link capability
LDP			Label Distribution Protocol
LL			Lightly Loaded
LLR			Least Loaded Routing
LSA			Link State Advertisement
LSP			Label Switched Path
MEGACO			Media Gateway Control
MOD			Modify
MPLS			Multiprotocol Label Switching
NANP			North American Numbering Plan
N-ISDN			Narrowband Integrated Services Digital Network
NSAP			Network Service Access Point
ODR			Optimized Dynamic Routing
ON			Originating Node
OSPF			Open Shortest Path First


PAR			Parameters
PNNI			Private Network-to-Network Interface
PSTN			Public Switched Telephone Network
PTSE			PNNI Topology State Elements
QoS			Quality of Service
R			Reserved
RQE			Routing Query Element
RSE			Routing State Element
RRE			Routing Recommendation Element
RSVP			Resource Reservation Protocol
RTNR			Real-Time Network Routing
SCP			Service Control Point
SDR			State-Dependent Routing
SI			Service Identity
SIP			Session Initiation Protocol
SS7			Signaling System 7
STR			State- and Time-Dependent Routing
SVC			Switched Virtual Circuit
SVP			Switched Virtual Path
TBW			Total Bandwidth
TBWIP			Total Bandwidth In Progress
TDR			Time-Dependent Routing
TIPHON			Telecommunications and Internet Protocol 
			Harmonization Over Networks
TLV			Type/Length/Value
ToS			Type of Service
TR			Trunk Reservation
TRAF			Traffic
TSE			Topology State Element
UBR			Unspecified Bit Rate
UNI			User-Network Interface
VBR			Variable Bit Rate
VC			Virtual Circuit
VCI			Virtual Circuit Identifier
VN			Via Node
VNET			Virtual Network
VPI			Virtual Path Identifier
WIN			Worldwide Intelligent Network (Routing)

1.5 Traffic Engineering Model

Figure 1.1 illustrates a model for network traffic engineering. The central
box represents the network, which can have various architectures and
configurations, and the routing tables used within the network. Network
configurations could include metropolitan area networks, national intercity
networks, and global international networks, which support both hierarchical
and nonhierarchical structures and combinations of the two. Routing tables
describe the route choices from an originating node to a terminating node,
for a connection request for a particular service. Hierarchical and
nonhierarchical traffic routing tables are possible, as are fixed routing
tables and dynamic routing tables. Routing tables are used for a
multiplicity of traffic and transport services on the telecommunications
network.

Figure 1.1  Traffic Engineering Model

Terminology used in the Recommendation, as illustrated in Figure 1.2, is
that a link connects two nodes, a path is a sequence of links connecting an
origin and destination node, and a route is the set of different paths
between the origin and destination that a call might be routed on within a
particular routing discipline.  Here a call is a generic term used to
describe the establishment, utilization, and release of a connection, or
data flow.  In this context a call can refer to a voice call established
perhaps using the SS7 signaling protocol, or to a web-based data flow
session, established perhaps by the HTTP and associated IP-based protocols.
Various implementations of routing tables are discussed in ANNEX 2.

Figure 1.2  Terminology

Traffic engineering functions include traffic management, capacity
management, and network planning.  Traffic management ensures that network
performance is maximized under all conditions including load shifts and
failures. Capacity management ensures that the network is designed and
provisioned to meet performance objectives for network demands at minimum
cost. Network planning ensures that node and transport capacity is planned
and deployed in advance of forecasted traffic growth. Figure 1.1 illustrates
traffic management, capacity management, and network planning as three
interacting feedback loops around the network.  The input driving the
network ("system") is a noisy traffic load ("signal"), consisting of
predictable average demand components added to unknown forecast error and
load variation components. The load variation components have different time
constants ranging from instantaneous variations, hour-to-hour variations,
day-to-day variations, and week-to-week or seasonal variations. Accordingly,
the time constants of the feedback controls are matched to the load
variations, and function to regulate the service provided by the network
through capacity and routing adjustments.  

Traffic management functions include a) call routing, which entails
number/name translation to routing address, b) connection or bearer-path
routing methods, c) QoS resource management and d) routing table management.
These functions can be a) decentralized and distributed to the network
nodes, b) centralized and allocated to a centralized controller such as a
bandwidth broker, or c) performed by a hybrid combination of these
approaches.

Capacity management plans, schedules, and provisions needed capacity over a
time horizon of several months to one year or more. Under exceptional
circumstances, capacity can be added on a shorter-term basis, perhaps one to
several weeks, to alleviate service problems. Network design embedded in
capacity management encompasses both routing design and capacity design.
Routing design takes account of the capacity provided by capacity
management, and on a weekly or possibly real-time basis adjusts routing
tables as necessary to correct service problems. The updated routing tables
are provisioned (configured) in the switching systems either directly or via
an automated routing update system. Network planning includes node planning
and transport planning, operates over a multiyear forecast interval, and
drives network capacity expansion over a multiyear period based on network
forecasts. 

The scope of the TE methods includes the establishment of connections for
narrowband, wideband, and broadband multimedia services within multiservice
networks and between multiservice networks.  Here a multiservice network
refers to one in which various classes of service share the transmission,
switching, management, and other resources of the network.  These classes of
services can include constant bit rate (CBR), variable bit rate (VBR),
unspecified bit rate (UBR), and available bit rate (ABR) traffic classes.
There are quantitative performance requirements that the various classes of
service normally are required to meet, such as end-to-end blocking, delay,
and/or delay-jitter objectives.  These objectives are achieved through a
combination of traffic management and capacity management.

Figure 1.3 illustrates the functionality for setting up a connection from an
originating node in one network to a destination node in another network,
using one or more routing methods across networks of various types.  The
Figure illustrates a multimedia connection between two PCs which carries
traffic for a combination of voice, video, and image applications.  For this
purpose a logical point-to-point connection is established from the PC
served by node a1 to the PC served by node c2.  The connection could be a
CBR ISDN connection across TDM-based network A and ATM-based network C, or
it might be a VBR connection via IP-based network B.  Gateway nodes a3, b1,
b4, and c1 provide the interworking capabilities between the TDM-, ATM-, and
IP-based networks.  The actual multimedia connection might be routed, for
example, on a route consisting of nodes a1-a2-a3-b1-b4-c1-c2, or possibly on
a different route through different gateway nodes. 

Figure 1.3 Example of Multimedia Connection across TDM-, ATM-, and IP-Based
	   Networks

We now briefly describe the traffic model, the traffic management functions,
the capacity management functions, and the TE operational requirements,
which are further developed in ANNEXs 2-6 of the Recommendation.

1.6 Traffic Model

In this section we discuss load variation models which drive traffic
engineering functions, that is traffic management, capacity management, and
network planning. Table 1.1 summarizes the types of models used to represent
the different traffic variations under consideration.  


Table 1.1 Traffic Models for Load Variations

For instantaneous traffic load variations, the load is typically modeled as
a stationary random process over a given period (normally within each hourly
period) characterized by a fixed mean and variance. From hour to hour, the
mean traffic loads are modeled as changing deterministically; for example,
according to their 20-day average values. From day to day, for a fixed hour,
the mean load is modeled as a random variable having a gamma distribution
with a mean equal to the 20-day average load. From week to week, the load
variation is modeled as a time-varying deterministic process in the network
design procedure. The random component of the realized week-to-week load is
the forecast error, which is equal to the forecast load minus the realized
load. Forecast error is accounted for in short-term capacity management. 

Traffic management responds to traffic load variations, such as
instantaneous variations, hour-to-hour variations, day-to-day variations,
and week-to-week variations, by appropriately controlling number
translation/routing, path selection, routing table management, and/or QoS
resource management. Traffic management
provides monitoring of network performance through collection and display of
traffic and performance data, and allows traffic management controls, such
as destination-address per-connection blocking, per-connection gapping,
routing table modification, and route selection/reroute controls, to be
inserted when circumstances warrant.  For example, a focused overload might
lead to application of connection gapping controls in which a connection
request to a particular destination address or set of addresses is admitted
only once every x seconds, and connections arriving after an accepted call
are rejected for the next x seconds.  In that way call gapping throttles the
calls and prevents traffic to a particular focal point from overloading the
network.
Routing table modification and reroute control are illustrated in ANNEX 3.
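
The connection-gapping control described above can be sketched as follows
(hypothetical class and method names; a simplified illustration, not a
specified control): at most one request per gapped destination is admitted
every x seconds, and requests arriving within the gap interval are rejected.

   import time

   class CallGapControl:
       """Admit at most one request per gapped destination every x seconds."""

       def __init__(self, gap_seconds):
           self.gap_seconds = gap_seconds
           self.last_admit = {}      # destination address -> time of last admit

       def admit(self, destination, now=None):
           now = time.monotonic() if now is None else now
           last = self.last_admit.get(destination)
           if last is not None and now - last < self.gap_seconds:
               return False          # within the gap interval: reject
           self.last_admit[destination] = now
           return True               # admitted; a new gap interval starts

   # Example: gap calls to an overloaded focal point to one every 10 seconds.
   gap = CallGapControl(gap_seconds=10.0)
   print(gap.admit("+1-555-0100", now=0.0))    # True  (admitted)
   print(gap.admit("+1-555-0100", now=4.0))    # False (rejected, within gap)
   print(gap.admit("+1-555-0100", now=11.0))   # True  (admitted)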

Capacity management must provide sufficient capacity to carry the expected
traffic variations so as to meet end-to-end blocking/delay objective levels.
Here the term blocking refers to the denial or non-admission of a call or
connection-request, based for example on the lack of available resources on
a particular link (e.g., link bandwidth or queuing resources).  Traffic load
variations lead in direct measure to capacity increments and can be
categorized as (1) minute-to-minute instantaneous variations and associated
busy-hour traffic load capacity, (2) hour-to-hour variations and associated
multihour capacity, (3) day-to-day variations and associated day-to-day
capacity, and (4) week-to-week variations and associated reserve capacity.

Design methods within the capacity management procedure account for the mean
and variance of the within-the-hour variations of the offered and overflow
loads. For example, classical methods [e.g., Wil56] are used to size links
for these two parameters of load.  Multihour dynamic route design accounts
for the hour-to-hour variations of the load, and hour-to-hour capacity can
vary from zero to 20 percent or more of network capacity. Hour-to-hour
capacity can be reduced by multihour preplanned or real-time dynamic routing
design models such as the discrete event flow optimization, traffic load
flow optimization, and virtual trunking flow optimization models described in
ANNEX 5.  It is known that some daily variations are systematic (for
example, Monday is always higher than Tuesday); however, in some day-to-day
variation models these systematic changes are ignored and lumped into the
stochastic model. For instance, the traffic load between Los Angeles and New
Brunswick is very similar from one day to the next, but the exact calling
levels differ for any given day. We characterize this load variation in
network design by a stochastic model for the daily variation, which results
in additional capacity called day-to-day capacity. Day-to-day capacity is
needed to meet the average blocking/delay objective when the load varies
according to the stochastic model.  Day-to-day capacity is nonzero due to
the nonlinearities in link blocking and/or link queuing delay levels as a
function of load.  When the load on a link fluctuates about a mean value,
because of day-to-day variation, the mean blocking/delay is higher than the
blocking/delay produced by the mean load. Therefore, additional capacity is
provided to maintain the blocking/delay probability grade-of-service
objective in the presence of day-to-day load variation. Typical day-to-day
capacity required is 4--7 percent of the network cost for medium to high
day-to-day variations, respectively.  Reserve capacity, like day-to-day
capacity, comes about because load uncertainties---in this case forecast
errors---tend to cause capacity buildup in excess of the network design,
which exactly matches the forecast loads. Reluctance to disconnect and
rearrange link and transport capacity contributes to this reserve capacity
buildup. Typical ranges for reserve capacity are from 15 to 25 percent or
more of network cost. 
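
The effect that drives day-to-day capacity can be checked with a short
Erlang B computation (sketch below; the link size and daily loads are
assumed values): because blocking is a convex function of offered load, the
average of the daily blocking values exceeds the blocking produced by the
mean load, so extra capacity is needed to hold the average to the objective.

   def erlang_b(offered_load, servers):
       """Erlang B blocking probability, computed iteratively."""
       b = 1.0
       for n in range(1, servers + 1):
           b = offered_load * b / (n + offered_load * b)
       return b

   servers = 110                     # assumed link size (bandwidth units)
   mean_load = 100.0                 # assumed mean daily load (erlangs)
   daily_loads = [85.0, 95.0, 100.0, 105.0, 115.0]   # assumed daily loads

   blocking_at_mean = erlang_b(mean_load, servers)
   mean_blocking = sum(erlang_b(a, servers)
                       for a in daily_loads) / len(daily_loads)

   print("blocking at mean load :", round(blocking_at_mean, 4))
   print("mean of daily blocking:", round(mean_blocking, 4))   # higher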



1.7 Traffic Management Functions

In ANNEXs 2-4, traffic management functions are discussed:

a)	Call Routing Methods (ANNEX 2).  Call routing involves the
translation of a number or name to a routing address.  We describe how
number (or name) translation should result in the E.164 network service
access point (NSAP) addresses [E.164, E.191], network routing addresses
(NRAs), and/or IP addresses.  These addresses are used for routing purposes
and therefore must be carried in the connection-setup information element
(IE).  

b)	Connection/Bearer-Path Routing Methods (ANNEX 2).  Connection or
bearer-path routing involves the selection of a path from the originating
node to the destination node in a network.  We discuss bearer-path selection
methods, which are categorized into the following four types: fixed routing
(FR), time-dependent routing (TDR), state-dependent routing (SDR), and
event-dependent routing (EDR).  These methods are associated with routing
tables, which consist of a route and rules to select one path from the route
for a given connection or bandwidth-allocation request.

c)	QoS Resource Management Methods (ANNEX 3).  QoS resource management
functions include class-of-service derivation, policy-based routing table
derivation, connection admission, bandwidth allocation, bandwidth
protection, bandwidth reservation, priority routing, priority queuing, and
other related resource management functions.

d)	Routing Table Management Methods (ANNEX 4).  Routing table
management information, such as topology update, status information, or
routing recommendations, is used for purposes of applying the routing table
design rules for determining route choices in the routing table.  This
information is exchanged between one node and another node, such as between
the ON and DN, for example, or between a node and a network element such as
a bandwidth-broker processor (BBP).  This information is used to generate
the routing table, and then the routing table is used to determine the path
choices used in the selection of a path.

1.8 Capacity Management Functions

In ANNEX 5, we discuss capacity management methods, as follows:
a)	Link Capacity Design Models.  These models find the optimum tradeoff
between traffic carried on a shortest network path (perhaps a direct link)
versus traffic carried on alternate network paths.
b)	Shortest Path Selection Models.  These models enable the
determination of shortest paths in order to provide a more efficient and
flexible routing plan (a simple illustration is sketched after this list).
c)	Multihour Network Design Models.  Three models are described
including i) discrete event flow optimization (DEFO) models, ii) traffic
load flow optimization (TLFO) models, and iii) virtual trunking flow
optimization (VTFO) models.
d)	Day-to-day Load Variation Design Models.  These models describe
techniques for handling day-to-day variations in capacity design.
e)	Forecast Uncertainty/Reserve Capacity Design Models.  These models
describe the means for accounting for errors in projecting design traffic
loads in the capacity design of the network. 
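
As a simple illustration of the shortest path selection models in item (b)
above, the sketch below computes a least-cost path over a small logical-link
topology using Dijkstra's algorithm with assumed link costs; the design
models of ANNEX 5 are considerably more elaborate.

   import heapq

   def shortest_path(links, source, destination):
       """Dijkstra's algorithm over a logical-link topology (assumed costs)."""
       heap = [(0.0, source, [source])]
       visited = set()
       while heap:
           cost, node, path = heapq.heappop(heap)
           if node in visited:
               continue
           visited.add(node)
           if node == destination:
               return cost, path
           for nbr, link_cost in links.get(node, {}).items():
               if nbr not in visited:
                   heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))
       return None

   # Assumed example topology; link costs might be administrative weights.
   links = {"A": {"B": 4.0, "T1": 1.0}, "T1": {"A": 1.0, "T2": 1.0},
            "T2": {"T1": 1.0, "B": 1.0}, "B": {"A": 4.0, "T2": 1.0}}
   print(shortest_path(links, "A", "B"))   # (3.0, ['A', 'T1', 'T2', 'B'])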

1.9 Traffic Engineering Operational Requirements

In ANNEX 6, we discuss traffic engineering operational requirements, as
follows:
a)	Traffic Management.  We discuss requirements for real-time
performance monitoring, network control, and work center functions.  The
latter includes automatic controls, code controls, reroute controls,
peak-day control, traffic management on peak days, and interfaces to other
work centers.
b)	Capacity Management - Forecasting.  We discuss requirements for load
forecasting, including configuration database functions, load aggregation,
basing, and projection functions, and load adjustment cycle and view of
business adjustment cycle.  We also discuss network design, work center
functions, and interfaces to other work centers.
c)	Capacity Management - Daily and Weekly Performance Monitoring.  We
discuss requirements for daily congestion analysis, study-week congestion
analysis, and study-period congestion analysis. 
d)	Capacity Management - Short-Term Network Adjustment.  We discuss
requirements for network design, work center functions, and interfaces to
other work centers.
e)	Comparison of TE with TDR versus SDR/EDR.  We contrast TE functions
in a TDR-based network with those in an SDR/EDR-based network.

1.10 Traffic Engineering Modeling & Analysis

In ANNEXs 2-5 we use network models to illustrate the traffic engineering
methods developed in the ANNEXs.  The details of the models are presented in
each ANNEX in accordance with the TE functions being illustrated. 

1.11 Authors' Addresses

Gerald R. Ash
AT&T Labs
Room MT E3-3C37
200 Laurel Avenue
Middletown, NJ 07748
Phone: 732-420-4578
Fax:   732-368-6687
Email: gash@att.com

Annex 1.   Bibliography 

[A98]  Ash, G. R., Dynamic Routing in Telecommunications Networks,
McGraw-Hill, 1998.

[AAFJLLS99]  Ash, G. R., Ashwood-Smith, P., Fedyk, D., Jamoussi, B., Lee,
Y., Li, L., Skalecki, D., LSP Modification Using CRLDP,
draft-ash-crlsp-modify-00.txt, July 1999.

[ACEWX00]  Awduche, D. O., Chiu, A., Elwalid, A., Widjaja, I., Xiao, X., A
Framework for Internet Traffic Engineering, draft-ietf-te-framework-00.txt,
January 2000.

[ACFM99]  Ash, G. R., Chen, J., Fishman, S. D., Maunder, A., Routing
Evolution in Multiservice Integrated Voice/Data Networks, International
Teletraffic Congress ITC-16, Edinburgh, Scotland, June 1999.

[ADFFT98]  Andersson, L., Doolan, P., Feldman, N., Fredette, A., Thomas, B.,
LDP Specification, IETF Draft, draft-ietf-mpls-ldp-01.txt, August 1998.

[AL99] Ash, G. R., Lee, Y., Routing of Multimedia Connections Across TDM-,
ATM-, and IP-Based Networks, IETF Draft,
draft-ash-itu-sg2-qos-routing-00.txt, May 1999.



[AM98] Ash, G. R., Maunder, A., Routing of Multimedia Connections when
Interworking with PSTN, ATM, and IP Networks, AF-98-0927, Nashville TN,
December 1998.

[AAJL99] Ash, G. R., Aboul-Magd, O. S., Jamoussi, B., Lee, Y.,  QoS Resource
Management in MPLS-Based Networks, IETF Draft, draft-ash-qos-routing-00.txt,
Minneapolis MN, March 1999.

[AM99] Ash, G. R., Maunder, A., QoS Resource Management in ATM Networks,
AF-99-, Rome Italy, April 1999. 

[AMAOM98]  Awduche, D. O., Malcolm, J. Agogbua, J., O'Dell, M., McManus, J.,
Requirements for Traffic Engineering Over MPLS, IETF Draft,
draft-ietf-mpls-traffic-eng-00.txt, October 1998.

[ATM95]  ATM Forum Technical Committee, B-ISDN Inter Carrier Interface
(B-ICI) Specification Version 2.0 (Integrated), af-bici-0013.003, December
1995. 

[ATM960055]  ATM Forum Technical Committee, Private Network-Network
Interface Specification Version 1.0 (PNNI 1.0), af-pnni-0055.000, March
1996.

[ATM960056]  ATM Forum Technical Committee, Traffic Management Specification
Version 4.0, af-tm0056.000, April 1996.

[ATM960061]  ATM Forum Technical Committee, ATM User-Network Interface (UNI)
Signaling Specification Version 4.0, af-sig-0061.000, July 1996.

[ATM98]  ATM Forum Technical Committee, Specification of the ATM
Inter-Network Interface (AINI) (Draft), ATM Forum/BTD-CS-AINI-01.03, July
1998.

[ATM990097] ATM Signaling Requirements for IP Differentiated Services and
IEEE 802.1D, ATM Forum, Atlanta, GA, February 1999.

[B99]  Bernet, Y., et al., A Framework for Differentiated Services, IETF
draft-ietf-diffserv-framework-02.txt, February 1999.

[BZBHJ97]  Braden, R., Zhang, L., Berson, S., Herzog, S., Jamin, S.,
Resource ReSerVation Protocol (RSVP) - Version 1 Functional Specification,
IETF Network Working Group RFC 2205, September 1997.

[CDFFSV97]  Callon, R., Doolan, P., Feldman, N., Fredette, A., Swallow, G.,
Viswanathan, A., IETF Network Working Group Draft, A Framework for
Multiprotocol Label Switching, draft-ietf-mpls-framework-02.txt, November
1997.

[CNRS98]  Crawley, E., Nair, R., Rajagopalan, B., Sandick, H., A Framework
for QoS-based Routing in the Internet, IETF RFC 2386, August 1998.

[COM 2-39-E]  ANNEX, Draft New Recommendation E.ip, Report of Joint Meeting
of Questions 1/2 and 10/2, Torino, Italy, July 1998.

[D99]  Dvorak, C., IP-Related Impacts on End-to-End Transmission
Performance, ITU-T Liaison to Study Group 2, Temporary Document TD GEN-22,
Geneva Switzerland, May 1999.

[DN99]  Dianda, R. B., Noorchashm, M., Bandwidth Modification for UNI, PNNI,
AINI, and BICI, ATM Forum Technical Working Group, April 1999.

[ETSIa]  ETSI Secretariat, Telecommunications and Internet Protocol
Harmonization over Networks (TIPHON); Naming and Addressing; Scenario 2,
DTS/TIPHON-04002 v1.1.64, 1998

[ETSIb]  ETSI STF, Request for Information (RFI): Requirements for Very
Large Scale E.164 -> IP Database, TD35, ETSI EP TIPHON 9, Portland,
September 1998.

[ETSIc]  TD290, ETSI Working Party Numbering and Routing, Proposal to Study
IP Numbering, Addressing, and Routing Issues, Sophia, September 1998.

[FGLRRT00]  Feldman, A., Greenberg, A., Lund, C., Reingold, N., Rexford, J.,
True, F., Deriving Traffic Demands for Operational IP Networks: Methodology
and Experience, work in progress.

[FGLRR99] Feldman, A., Greenberg, A., Lund, C., Reingold, N., Rexford, J.,
True, F., Netscope: Traffic Engineering for IP Networks, IEEE Network
Magazine, March 2000.

[G99a] Glossbrenner, K., Elements Relevant to Routing of ATM Connections,
ITU-T Liaison to Study Group 2, Temporary Document 1/2-8, Geneva
Switzerland, May 1999.

[G99b] Glossbrenner, K., IP Performance Studies, ITU-T Liaison to Study
Group 2, Temporary Document GEN-27, Geneva Switzerland, May 1999.

[GWA97]  Gray, E., Wang, Z., Armitage, G., Generic Label Distribution
Protocol Specification, IETF Draft, draft-gray-mpls-generic-ldp-spec-00.txt,
November 1997.

[GR99]  Greene, N., Ramalho, M., Media Gateway Control Protocol Architecture
and Requirements, IETF Draft, draft-ietf-megaco-reqs-00.txt, January 1999.

[HSSR99]  Handley, M., Schulzrinne, H., Schooler, E. Rosenberg, J. SIP:
Session Initiation Protocol, IETF RFC 2543, March 1999.

[J99]  Jamoussi, B., Editor, Constraint-Based LSP Setup using LDP, IETF
draft-ietf-mpls-cr-ldp-01.txt, February 1999.

[KR00]  Kurose, J. F., Ross, K. W., Computer Networking, A Top-Down Approach
Featuring the Internet, Addison-Wesley, 2000.

[LKPCD98]  Luciani, J., Katz, D., Piscitello, D., Cole, B., Doraswamy, N.,
NBMA Next Hop Resolution Protocol (NHRP), IETF RFC 2332, April 1998.

[M98]  Moy, J., OSPF Version 2, IETF RFC 2328, April 1998.

[NWRH99]  Neilson, R., Wheeler, J., Reichmeyer, F., Hares, S., A Discussion
of Bandwidth Broker Requirements for Internet2 Qbone Deployment, August
1999.

[PARLAY]  Parlay API Specification 1.2, September 10, 1999.



[PL99]  Faltstrom, P., Larson, B., E.164 Number and DNS, IETF
draft-faltstrom-e164-03.txt, September 1999.

[RVC99]  Rosen, E., Viswanathan, A., Callon, R., Multiprotocol Label
Switching Architecture, IETF draft-ietf-mpls-arch-04.txt, February 1999.

[S94]  Stevens, W. R., TCP/IP Illustrated, Volume 1, The Protocols,
Addison-Wesley, 1994.

[S95]  Steenstrup, M., Editor, Routing in Communications Networks,
Prentice-Hall, 1995.

[SCFJ96]  Schulzrinne, H., Casner, S., Frederick, R., Jacobson, V., RTP: A
Transport Protocol for Real-Time Applications, IETF RFC 1889, January 1996.

[ST98]  Sikora, J., Teitelbaum, B., Differentiated Services for Internet2,
Internet2: Joint Applications/Engineering QoS Workshop, Santa Clara, CA, May
1998.

[T1S198]  ATM Trunking for the PSTN/ISDN, Committee T1S1.3 (B-ISUP),
T1S1.3/98, NJ, December 1998.

[V99]  Villamizar, C., MPLS Optimized Multipath,
draft-villamizar-mpls-omp-01, February 1999.

[XN99]  Xiao, X., Ni, L. M., Internet QoS: A Big Picture, IEEE Network
Magazine, March/April, 1999.

[ZSSC97]  Zhang, Sanchez, Salkewicz, Crawley, Quality of Service Extensions
to OSPF or Quality of Service Route First Routing (QOSPF), IETF Draft,
draft-shang-qos-ospf-01.txt, September 1997.



ANNEX 2 Call Routing & Connection Routing Methods

Traffic Engineering & QoS Methods for IP-, ATM-, & TDM-Based Multiservice
Networks 

2.1	Introduction

In the Recommendation we assume the separation of "call routing" and
signaling for call establishment from "connection (or bearer-path) routing"
and signaling for bearer-channel establishment.  Call routing protocols
primarily translate a number or a name, which is given to the network as
part of a call setup, to a routing address needed for the connection
(bearer-path) establishment.   Call routing protocols are described for
example in [Q.2761] for the Broadband ISDN User Part (B-ISUP) call
signaling, [ATM990048, T1S198] for bearer-independent call control (BICC),
or virtual trunking, call signaling, [H.323] for H.323 call signaling,
[GR99] for the media gateway control [MEGACO] call signaling, and in
[HSSR99] for the session initiation protocol (SIP) call signaling.
Connection routing protocols include for example [Q.2761] for B-ISUP
signaling, [ATM960055] for PNNI signaling, [ATM960061] for UNI signaling,
[DN99] for switched virtual path (SVP) signaling, and [J99] for MPLS
constraint-based routing label distribution protocol (CRLDP) signaling.

A specific connection or bearer-path routing method is characterized by the
routing table used in the method.  The routing table consists of a set of
paths and rules to select one path from the route for a given connection
request.  When a connection request arrives at its originating node (ON),
the ON implementing the routing method executes the path selection rules
associated with the routing table for the connection to determine a selected
path from among the path candidates in the route for the connection request.
In a particular routing method, the path selected for the connection request
is governed by the connection routing, or path selection, rules.  Various
path selection methods are discussed: fixed routing (FR) path selection,
time-dependent routing (TDR) path selection, state-dependent routing (SDR)
path selection, and event-dependent routing (EDR) path selection. 
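
The routing-table abstraction can be pictured with the following sketch (a
hypothetical structure, not a normative data model): a route is an ordered
list of candidate paths for an O-D pair, and a path selection rule chooses
one path per request; the rule shown is a simplified EDR-style rule that
keeps the current path until it becomes inadmissible.

   # Hypothetical routing table: a route (ordered candidate paths for one
   # O-D pair) plus a rule that selects one path per connection request.
   route = {
       ("a1", "c2"): [
           ["a1", "a2", "a3", "b1", "b4", "c1", "c2"],   # first candidate
           ["a1", "a3", "b2", "b4", "c1", "c2"],         # alternate candidate
       ]
   }

   current_path = {}   # "sticky" state per O-D pair (index of path in use)

   def select_path(on, dn, path_admissible):
       """EDR-like rule: keep the current path while it is admissible; on
       failure, advance to the next candidate (real EDR may instead pick a
       new alternate at random)."""
       candidates = route[(on, dn)]
       start = current_path.get((on, dn), 0)
       for attempt in range(len(candidates)):
           i = (start + attempt) % len(candidates)
           if path_admissible(candidates[i]):
               current_path[(on, dn)] = i
               return candidates[i]
       return None   # all candidate paths blocked for this request

   # Example: suppose only paths of at most 6 nodes are admissible right now.
   print(select_path("a1", "c2", path_admissible=lambda p: len(p) <= 6))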

2.2	Call Routing Methods

Call routing entails number (or name) translation to a routing address,
which is then used for connection establishment.  Routing addresses can
consist, for example, of a) E.164 network service access point (NSAP)
addresses [E.164, E.191], b) network routing addresses (NRAs), and/or c) IP
addresses.  As discussed in ANNEX 4, a TE requirement is the need for
carrying E.164-NSAP addresses, NRAs, and IP addresses in the
connection-setup information element (IE).  In that case, E.164-NSAP
addresses, NRAs, and IP addresses become the standard addressing method for
interworking across IP-, ATM-, and TDM-based networks.  Another TE
requirement is that a call identification code (CIC) be carried in the
call-control and bearer-control connection-setup IEs in order to correlate
the call-control setup with the bearer-control setup, [ATM990048, T1S198].
Carrying these additional parameters in the Signaling System 7 (SS7) ISDN
User Part (ISUP) connection-setup IEs is sometimes referred to as the ISUP+
virtual trunking protocol or BICC protocol.

Number (or name) translation, then, should result in the E.164-NSAP
addresses, NRAs, and/or IP addresses.  NRA formats are covered in [E.353],
and IP-address formats in [S94].  The NSAP address has a 20-byte format as
shown in Figure 2.1a below [E.191]. 

Figure 2.1a NSAP Address Structure

The IDP is the initial domain part and the DSP is the domain specific part.
The IDP is further subdivided into the AFI and IDI.  The IDI is the initial
domain identifier and can contain the 15-digit E.164 address if the AFI is
set to 45. AFI is the authority and format identifier and determines what
kind of addressing method is followed, and based on the 1 octet AFI value,
the length of the IDI and DSP fields can change.  The E.164-NSAP address is
used to determine the route to the destination endpoint.  E.164-NSAP
addressing for B-ISDN services is supported in ATM networks using PNNI,
through use of the above NSAP or ATM end system address (AESA) format.  In
this case the E.164 part of the NSAP address occupies the 8 octet IDI, and
the 11 octet DSP can be used at the discretion of the network operator
(perhaps for sub-addresses).  The above NSAP structure also supports AESA
DCC (data country code) and AESA ICD (international code designator)
addressing formats.
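
To make the field layout concrete, the sketch below decodes an assumed
20-octet E.164-format AESA into its AFI, IDI, and DSP parts; the example
address value and the BCD packing details are illustrative assumptions
rather than a normative encoding.

   def parse_e164_aesa(address):
       """Split a 20-octet E.164-format AESA into AFI, E.164 digits, DSP."""
       if len(address) != 20:
           raise ValueError("AESA must be 20 octets")
       afi = address[0]        # authority and format identifier (1 octet)
       idi = address[1:9]      # initial domain identifier (8 octets, BCD)
       dsp = address[9:20]     # domain specific part (11 octets)
       if afi != 0x45:
           raise ValueError("not an E.164-format AESA")
       digits = "".join(f"{b >> 4:x}{b & 0xf:x}" for b in idi)   # BCD
       e164 = digits.rstrip("f").lstrip("0")   # drop filler and padding
       return afi, e164, dsp

   # Invented example: E.164 number 19085551234, left-padded with zeros to
   # 15 digits, one filler semi-octet, and an all-zero DSP.
   idp = bytes([0x45, 0x00, 0x00, 0x19, 0x08, 0x55, 0x51, 0x23, 0x4F])
   aesa = idp + bytes(11)
   print(parse_e164_aesa(aesa))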

Within the IP network, routing is performed using IP addresses.  Translation
databases, such as based on domain name system (DNS) technology [enum], are
used to translate the E.164 numbers/names for calls to IP addresses for
routing over the IP network.  The IP address is a 4-byte address structure
as shown below:

Figure 2.1b.  IP Address Structure

There are four classes of IP addresses. Classes A, B, and C have different
field lengths for the network identification field, and class D is used for
multicasting. A further level of hierarchy, added in 1984, splits the hostid
portion into a subnet part and a host part. The length of the subnet part is
flexible, so long as it is greater than one bit and forms the most
significant bits of the hostid field. Many service providers prefer class B
addresses because they provide ample space for subnetwork addressing;
however, only about 16,000 class B network addresses are possible, and the
class B address space was soon nearly exhausted. To alleviate this problem,
classless inter-domain routing (CIDR) was introduced. CIDR allows blocks of
class C addresses to be assigned to service providers in a manner that
supports efficient address aggregation, together with changes in the BGP-4
protocol for efficient address advertisement.
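
A brief sketch of the classful interpretation and of CIDR-style aggregation
described above, using the Python standard library (the addresses and
prefixes are invented examples):

   import ipaddress

   def address_class(addr):
       """Classful category of an IPv4 address (A, B, C, or D/multicast)."""
       first_octet = int(addr) >> 24
       if first_octet < 128:
           return "A"
       if first_octet < 192:
           return "B"
       if first_octet < 224:
           return "C"
       return "D"

   print(address_class(ipaddress.IPv4Address("172.16.5.9")))     # B
   print(address_class(ipaddress.IPv4Address("198.51.100.7")))   # C

   # CIDR aggregation: eight adjacent /24 (former class C) blocks held by
   # one provider collapse into a single /21 advertisement.
   block = [ipaddress.IPv4Network(f"198.51.{i}.0/24") for i in range(96, 104)]
   print(list(ipaddress.collapse_addresses(block)))   # [198.51.96.0/21]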

2.3	Connection (Bearer-Path) Routing Methods

Connection routing is characterized by the routing table used in the method
and rules to select one path from the route for a given connection or
bandwidth-allocation request.  When a connection/bandwidth-allocation
request is initiated by an ON, the ON implementing the routing method
executes the path selection rules associated with the routing table for the
connection/bandwidth-allocation to find an admissible path from among the
paths in the route that satisfies the connection/bandwidth-allocation
request.  In a particular routing method, the selected path is determined
according to the rules associated with the routing table.  In a network with
originating connection/bandwidth-allocation control, the ON maintains
control of the connection/bandwidth-allocation request.  If
crankback/bandwidth-not-available is used, for example, at a via node (VN),
the preceding node maintains control of the connection/bandwidth-allocation
request even if the request is blocked on all the links outgoing from the
VN.

Here we are discussing network-layer logical routing (sometimes referred to
as "layer-3" routing), as opposed to link layer ("layer-2") routing or
physical-layer ("layer-1") routing.  Later in the ANNEX we also address
link-layer transport routing in addition to network-layer routing.  The
network-layer logical routing methods addressed include those discussed in 

*	Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), and
Multiprotocol Label Switching (MPLS) for IP-based routing methods,
*	User-Network Interface (UNI), Private Network-to-Network
Interface (PNNI), ATM Inter-Network Interface (AINI), and Bandwidth Modify
for ATM-based routing methods, and
*	Recommendations E.170 and E.350 for TDM-based routing methods.

In an IP network, the logical links consist of MPLS label switched paths
(LSPs) between the IP nodes, in an ATM network, the logical links consist of
virtual paths (VPs) between the ATM nodes, and in a TDM network, the logical
links consist of trunk groups between the TDM nodes.  A sparse logical
network is typically used with IP and ATM technology, as illustrated in
Figure 2.2, and FR, TDR, SDR, and EDR can be used in combination with
multilink shortest path selection.  

Figure 2.2  Sparse Logical Network Topology with Connections Routed 
	    on Multilink Paths

A meshed logical network is typically used with TDM technology, but can also
be used with IP or ATM technology, and selected paths are normally limited
to 1 or 2 logical links, as illustrated in Figure 2.3.

Figure 2.3  Mesh Logical Network Topology with Connections Routed on 
		1- and 2-Link Paths

Paths may be set up for individual connections (or "per flow") for each call
request, such as on a switched virtual circuit (SVC).  Paths may also be
set up for bandwidth-allocation requests associated with "bandwidth pipes"
or "virtual trunking", such as on switched virtual paths (SVPs) in ATM-based
networks or constraint-based routing label switched paths (CRLSPs) in
IP-based networks. Paths are determined by (normally proprietary) algorithms
based on the network topology and reachable address information.  These
paths can cross multiple peer groups in ATM-based networks, and multiple
autonomous systems in IP-based networks.  An ON may select a path from the
routing table based on the routing rules and the QoS resource management
criteria, described in ANNEX 3, which must be satisfied on each link in the
route.  If a link is not allowed based on the QoS criteria, then a release
with the crankback/bandwidth-not-available parameter is used to return the
connection/bandwidth-allocation request to the ON, which may then select an
alternate route. In addition to
controlling bandwidth allocation, the QoS resource management procedures can
check end-to-end transfer delay, delay variation, and transmission quality
considerations such as loss, echo, and noise.

When source routing is used, setup of a connection/bandwidth-allocation
request is achieved by having the ON identify the entire selected route
including all VNs and DN in the route in a designated-transit-list (DTL) or
explicit-route (ER) parameter in the connection-setup IE.  If the QoS or
traffic parameters cannot be realized at any of the VNs in the connection
setup request, then the VN generates a crankback
(CBK)/bandwidth-not-available (BNA) parameter in the connection-release IE
which allows a VN to return control of the connection request to the ON for
further alternate routing.  In ANNEX 4, the DTL/ER and CBK/BNA elements are
identified as being required for interworking across IP-, ATM-, and
TDM-based networks.
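
The source-routing and crankback behavior described above can be sketched
schematically as follows (hypothetical node names and helper functions; not
a protocol implementation): the ON supplies an explicit route (DTL/ER), each
link is checked in turn, and a crankback/BNA returns control to the ON,
which then attempts an alternate route.

   def setup_connection(explicit_routes, link_admits):
       """Try each precomputed explicit route (DTL/ER) in turn; a VN that
       cannot admit the next link returns crankback (CBK/BNA) to the ON."""
       for route in explicit_routes:
           crankback = False
           for head, tail in zip(route, route[1:]):   # walk link by link
               if not link_admits(head, tail):        # QoS/bandwidth check
                   crankback = True                   # release with CBK/BNA
                   break
           if not crankback:
               return route                           # connection established
       return None                                    # all routes exhausted

   # Invented example: the route via b1-b4 is blocked on link (b1, b4), so
   # the ON cranks back and succeeds on an alternate route through
   # different gateway nodes.
   routes = [
       ["a1", "a2", "a3", "b1", "b4", "c1", "c2"],
       ["a1", "a2", "a3", "b2", "b3", "c1", "c2"],
   ]
   blocked = {("b1", "b4")}
   print(setup_connection(routes, lambda u, v: (u, v) not in blocked))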

As noted earlier, connection routing, or path selection, methods are
categorized into the following four types: fixed routing (FR),
time-dependent routing (TDR), state-dependent routing (SDR), and
event-dependent routing (EDR).  We discuss each of these methods in the
following paragraphs.  Examples of each of these path selection methods are
illustrated in Figures 2.4a and 2.4b and discussed in the following
sections.

Dynamic routing allows routing tables to be changed dynamically, either in a
preplanned time-varying manner, as in TDR, or in real time, as in SDR or
EDR.  With pre-planned TDR path selection methods, routing patterns
contained in routing tables might change every hour or at least several
times a day to respond to measured hourly shifts in traffic loads, and in
general TDR routing tables change with a time constant normally greater than
a call holding time. A typical TDR routing method may change routing tables
every hour, which is longer than a typical call holding time of a few
minutes. Three implementations of dynamic path selection are illustrated in
Figure 2.4a, which shows multilink path routing, two-link path routing, and
progressive routing.  

Figure 2.4a  Dynamic Routing Methods

TDR routing tables are preplanned, preconfigured, and recalculated perhaps
each week within the capacity management network design function.  Real-time
dynamic path selection does not depend on precalculated routing tables.
Rather, the node or centralized bandwidth broker senses the immediate
traffic load and if necessary searches out new paths through the network
possibly on a per-traffic-flow basis.  With real-time path selection
methods, routing tables change with a time constant on the order of or less
than a traffic-flow holding time. As illustrated in Figure 2.4b, real-time
path selection methods include EDR and SDR. 

Figure 2.4b  Dynamic Routing Methods
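
A minimal sketch of the pre-planned TDR behavior (the tables and time
periods below are assumed for illustration): the routing table in effect is
selected by time of day and changes far more slowly than a call holding
time, in contrast to the SDR/EDR methods, which react to network state or
call events.

   # Assumed pre-planned TDR tables: one ordered path list per time period.
   tdr_tables = {
       "morning": [["A", "B"], ["A", "T1", "B"]],
       "evening": [["A", "T2", "B"], ["A", "B"]],
   }

   def tdr_route(hour):
       """Pick the pre-planned routing table for the given hour (0-23)."""
       period = "morning" if 6 <= hour < 18 else "evening"
       return tdr_tables[period]

   print(tdr_route(9))    # morning table: direct path first
   print(tdr_route(20))   # evening table: two-link path first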

2.4	Fixed Routing (FR) Path Selection

In a fixed routing (FR) method, a routing pattern is fixed for a connection
request.  A typical example of fixed routing is a conventional hierarchical
alternate routing where the route and route selection sequence are
determined on a preplanned basis and maintained over a long period of time.
FR is more efficiently applied when the network is nonhierarchical, or flat,
as compared to the hierarchical structure [A98]. 

The aim of hierarchical fixed routing is to carry as much traffic as is
economically feasible over direct links between pairs of nodes low in the
hierarchy. This is accomplished by application of routing procedures to
determine where sufficient load exists to justify high-usage links, and then
by application of alternate-routing principles that effectively pool the
capacities of high-usage links with those of final links, to the end that
all traffic is carried efficiently.

The routing of calls in a hierarchical network involves an originating
ladder, a terminating ladder, and links interconnecting the two ladders. In
metropolitan networks, a two-level ladder is normally employed. A five-level
ladder was used in the North American network prior to Bell System
divestiture.  In a two-level network, for example, the originating ladder is
the final link from the lower level-1 node to the upper level-2 node, and the
terminating ladder is the final link from the upper level-2 node to the lower
level-1 node.  Links A--P, A--T2, T1--P, and T1--T2 in Figure 2.4c are
examples of interladder links.  

Figure 2.4c  Fixed Routing Methods (2-Level Hierarchical Network)

The identification of the proper interladder link for the routing of a given
call identifies the originating ladder "exit" point and the terminating
ladder "entry" point. Once these exit and entry points are identified and
the intraladder links are known, a first-choice path from originating to
terminating location can be determined.

Various levels of traffic concentration are used to achieve an appropriate
balance between transport and switching. The primary requirement is that
every customer be connectable to every other customer. In a hierarchy having
a maximum of five levels, customer lines are terminated on the switching-
function-5 level. Switching functions 5, 4, 3, 2, and 1 are provided for
concentrating traffic into efficient traffic items for routing. Under this
arrangement, there is a maximum of 25 interladder links from the originating
ladder to the terminating ladder. The routing procedures provide a specific
sequence of first-route selection from among the 25 choices.  The generally
preferred sequence for the interladder link is

1.	A call involving no via nodes: route A--B.

2.	A call involving one via node: paths A-T2-B, A-T1-B, in that order.

3.	A call involving two via nodes: path A-T1-T2-B

This procedure provides only the first-choice interladder link from A to B.
Calls from B to A often route differently. To determine the B-to-A route
requires reversing the diagram, making B-T2 the originating ladder and A-T1
the terminating ladder. In Figure 2.4c the preferred route from B to A is
B-A, B-T1-A, B-T2-A, and B-T2-T1-A, in that order.  The alternate route for
any high-usage link is the route the node-to-node traffic load between the
nodes would follow if the high-usage link did not exist. In Figure 2.4c,
this is B-T1-A.
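
As an illustrative sketch (in Python, and not part of the proposed
recommendation text), the preferred route sequences described above for
Figure 2.4c can be captured as simple ordered lists.  The node names A, B,
T1, and T2 are taken from the figure; the link-availability test is an
assumed status oracle supplied by the caller.

   # Minimal sketch of hierarchical first-choice route selection for the
   # two-level example of Figure 2.4c.  Each route is an ordered list of
   # nodes; the first route whose links are all available is chosen.

   PREFERRED_ROUTES = {
       ("A", "B"): [["A", "B"], ["A", "T2", "B"], ["A", "T1", "B"],
                    ["A", "T1", "T2", "B"]],
       ("B", "A"): [["B", "A"], ["B", "T1", "A"], ["B", "T2", "A"],
                    ["B", "T2", "T1", "A"]],
   }

   def first_available_route(orig, dest, link_available):
       """Return the first preferred route whose links are all available.

       link_available(node1, node2) -> True if the link has spare
       capacity (an assumed status oracle, not defined in this draft)."""
       for route in PREFERRED_ROUTES[(orig, dest)]:
           if all(link_available(a, b) for a, b in zip(route, route[1:])):
               return route
       return None  # the call is blocked on all hierarchical routes

   if __name__ == "__main__":
       # Example: assume the direct A-B high-usage link is fully occupied.
       busy = {frozenset(("A", "B"))}
       avail = lambda a, b: frozenset((a, b)) not in busy
       print(first_available_route("A", "B", avail))  # ['A', 'T2', 'B']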

We briefly list the rules for routing traffic in hierarchical fixed-routing
networks of two or more levels [ATT77]. There are eight rules that govern
the selection of first-choice paths and alternate paths in the network
design process. Some of the rules required for multilevel national intercity
networks do not apply to two-level networks serving metropolitan areas. A
list of the rules and their applicability is given here: 

1.	Two-ladder limit rule. This rule is that traffic must route only via
the hierarchical routing ladders of the originating and terminating nodes. 


Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-5]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

2.	Intraladder direction rule. This rule is that switched traffic must
only route toward the terminating node upward in direction on the
originating hierarchical routing ladder and downward on the terminating
hierarchical routing ladder. 

3.	Multiple switching function rule. This rule is that a node
performing multiple switching functions must be assumed to have a
hierarchical routing ladder internal to the node extending from its lowest
hierarchical switching function to its highest hierarchical switching
function. This rule is applicable whenever a node performs multiple
switching functions.

4.	One-level limit rule. This rule is that when evaluating potential
candidate links, only those first-route traffic items for which the
switching functions performed at each end of the link differ by no more than
one level should be considered. 

5.	Switch low rule. This rule is that switched traffic must route via
tandems involving the lowest functional level in the switching hierarchy,
considering both hierarchical routing ladders. 

6.	Directional routing rule. This rule is that if there is a choice of
routes involving switching at the same functional level in each of two
routing ladders, the route using that functional level in the terminating
hierarchical routing ladder should be chosen. 

7.	Single-route rule. This rule is that routes must be chosen so that
there is only one first-choice path from one node to another, regardless of
the switching functions performed by those nodes. In two-level hierarchical
networks, this requirement is met through application of the directional
routing rule discussed above. In networks with more than two hierarchical
levels, this rule is necessary to ensure that extra and unnecessary
switching is not planned. 

8.	Alternate-route selection rule. This rule is that the alternate
route at each end of a high-usage link must be the route the node-to-node
traffic load between the nodes would follow if the high-usage link did not
exist.

2.5	Time-Dependent Routing (TDR) Path Selection

TDR methods are a type of dynamic routing in which the routing tables are
altered at a fixed point in time during the day or week.  TDR routing tables
are determined on a preplanned basis and are implemented consistently over a
time period.  The TDR routing tables are determined considering the time
variation of traffic load in the network, for example based on measured
hourly load patterns. Several TDR time periods are used to divide up the
hours on an average business day and weekend into contiguous routing
intervals sometimes called load set periods.  Typically, the TDR routing
tables used in the network are coordinated by taking advantage of
noncoincidence of busy hours among the traffic loads. 
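
As a simple illustration of how load set periods might be applied, the
sketch below (in Python, with hypothetical period boundaries and table
contents that are not taken from this draft) selects a preplanned routing
table based on the current hour.  In an actual TDR implementation the
periods and tables are produced by the off-line design described below.

   # Minimal sketch of TDR routing-table selection by load set period.
   # The period boundaries and table contents are hypothetical examples.

   LOAD_SET_PERIODS = [
       # (start_hour, end_hour, period_name)
       (0, 8, "night"),
       (8, 12, "morning-busy"),
       (12, 17, "afternoon-busy"),
       (17, 24, "evening"),
   ]

   # One preplanned routing table (candidate paths per node pair) per period.
   ROUTING_TABLES = {
       "night":          {("A", "B"): [["A", "B"], ["A", "T1", "B"]]},
       "morning-busy":   {("A", "B"): [["A", "B"], ["A", "T2", "B"]]},
       "afternoon-busy": {("A", "B"): [["A", "T2", "B"], ["A", "B"]]},
       "evening":        {("A", "B"): [["A", "B"], ["A", "T1", "B"]]},
   }

   def routing_table_for_hour(hour):
       """Return the preplanned routing table for the given hour of day."""
       for start, end, name in LOAD_SET_PERIODS:
           if start <= hour < end:
               return ROUTING_TABLES[name]
       raise ValueError("hour must be in 0..23")

   if __name__ == "__main__":
       print(routing_table_for_hour(10)[("A", "B")])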

In TDR, the routing tables are preplanned and designed off-line using a
centralized bandwidth broker, which employs a TDR network design model. Such
models are discussed in ANNEX 5. The off-line computation determines the
optimal routes from a very large number of possible alternatives, in order
to minimize the network cost.  The designed routing tables are loaded and

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-6]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

stored in the various nodes in the TDR network, and periodically recomputed
and updated (e.g., every week) by the bandwidth broker.  In this way an ON
does not require additional network information to construct TDR routing
tables, once the routing tables have been loaded.  This is in contrast to
the design of routing tables in real time, such as in the SDR and EDR
methods described below.  Routes in the TDR routing table may consist of
time varying routing choices and use a subset of the available routes.
Routes used in various time periods need not be the same. 

Paths in the TDR routing table may consist of the direct link, a two-link
path through a single VN, or a multiple-link path through multiple VNs.
Path routing implies selection of an entire path between originating and
terminating nodes before a connection is actually attempted on that path. If
a connection on one link in a path is blocked, the call then attempts
another complete path. Implementation of such a routing method can be done
through control from the originating node, plus a multiple-link crankback
capability to allow paths of two, three, or more links to be used.
Crankback is an information-exchange message capability that allows a call
blocked on a link in a path to return to the originating node for further
alternate routing on other paths.  Path-to-path routing is nonhierarchical
and allows the choice of the most economical paths rather than being
restricted to hierarchical paths.

Path selection rules employed in TDR routing tables, for example, may
consist of simple sequential routing.  In the sequential method all traffic
in a given time period is offered to a single route; the first path in the
route is allowed to overflow to the second path, which overflows to the
third path, and so on.  Thus, traffic is routed sequentially from path to
path, and the route is allowed to change from hour to hour to achieve the
preplanned dynamic, or time-varying, nature of the TDR method.  

Other TDR route selection rules can employ probabilistic techniques to
select the route used for each connection and thus influence the realized
flows.  One such method of implementing TDR multilink path selection is to allocate
fractions of the traffic to routes and to allow the fractions to vary as a
function of time. One approach is cyclic path selection, illustrated in
Figure 2.4a, which has as its first route (1, 2, ..., M), where the notation
(i, j, k) means that all traffic is offered first to path i, which overflows
to path j, which overflows to path k. The second route of a cyclic route
choice is a cyclic permutation of the first route: (2, 3, ..., M, 1). The
third route is likewise (3, 4, ..., M, 1, 2), and so on. This approach has
computational advantages because its cyclic structure requires considerably
fewer calculations in the design model than does a general collection of
paths. The route congestion levels of cyclic routes are identical; what
varies from route to route is the proportion of flow on the various links.
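
The cyclic structure can be expressed compactly; the sketch below (Python,
illustrative only, and not the design model itself) simply generates the M
cyclic permutations of a given first route.

   # Minimal sketch of cyclic route generation: given the first route
   # (path 1, path 2, ..., path M), the k-th route choice is the k-th
   # cyclic permutation of that sequence.

   def cyclic_routes(first_route):
       """Return all M cyclic permutations of the first route."""
       m = len(first_route)
       return [first_route[i:] + first_route[:i] for i in range(m)]

   if __name__ == "__main__":
       paths = ["path1", "path2", "path3", "path4"]
       for route in cyclic_routes(paths):
           print(route)
       # ['path1', 'path2', 'path3', 'path4']
       # ['path2', 'path3', 'path4', 'path1']
       # ...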

Two-link TDR path selection is illustrated in Figure 2.4a.  An example
implementation is two-link sequential TDR (2S-TDR) path selection.  By using
the CCS crankback signal, 2S-TDR limits path connections to at most two
links, and such TDR two-link sequential path selection allows nearly as much
network utilization and performance improvement as TDR multilink path
selection.  This is because in the design of multilink path networks, about
98 percent of the traffic is routed on one- and two-link paths, even though
paths of greater length are allowed. Because of switching costs, paths with
one or two links are usually less expensive than paths with more links.
Therefore, as illustrated in Figure 2.4a, two-link path routing uses the
simplifying restriction that paths can have only one or two links, which

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-7]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

requires only single-link crankback to implement and avoids the common links
that are possible with multilink path routing. Alternative two-link path selection
methods include the cyclic routing method described above and sequential
routing. 

In sequential routing, all traffic in a given hour is offered to a single
route, and the first path is allowed to overflow to the second path, which
overflows to the third path, and so on. Thus, traffic is routed sequentially
from path to path with no probabilistic methods being used to influence the
realized flows. The reason that sequential routing works well is that
permuting path order provides sufficient flexibility to achieve desired
flows without the need for probabilistic routing. Both fixed and dynamic
versions of two-link path routing are compared, and the results are
discussed below. In 2S-TDR, the sequential route is allowed to change from
hour to hour. The TDR nature of the dynamic path selection method is
achieved by introducing several route choices, which consist of different
sequences of paths, and each path has one or, at most, two links in tandem. 

Paths in the routing table are subject to depth-of-search (DoS) restrictions
for QoS resource management, which is discussed in ANNEX 3.  DoS requires
that the bandwidth capacity available on each link in the path be sufficient
to meet a DoS bandwidth threshold level, which is passed to each node in the
path in the setup message.  DoS restrictions prevent connections that route
on the first-choice (shortest) ON-DN path, for example, from being swamped
by alternate-routed multiple-link connections.

A TDR connection set-up example is now given.  The first step is for the ON
to identify the DN and the routing table information associated with the DN.
The ON then
tests for spare capacity on the first or shortest path, and in doing this
supplies the VNs and DN on this path, along with the DoS parameter, to all
nodes in the path.  Each VN tests the available bandwidth capacity on each
link in the path against the DoS threshold.  If there is sufficient
capacity, the VN forwards the connection setup to the next node, which
performs a similar function.  If there is insufficient capacity, the VN
sends a release message with crankback/bandwidth-not-available parameter
back to the ON, at which point the ON tries the next path in the route as
determined by the routing table rules.  As described above, the TDR routes
are preplanned, loaded, and stored in each ON.
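
The setup logic just described can be summarized in the following sketch
(Python, illustrative only).  The link-capacity test and the message
exchange are abstracted into simple function calls, the capacity map is an
assumed data structure, and the route list is assumed to have been
preloaded into the ON as described above.

   # Minimal sketch of TDR connection setup with depth-of-search (DoS)
   # checking and crankback.  Routes are ordered lists of nodes; capacity
   # is an assumed map of link -> available bandwidth.

   def link_ok(capacity, a, b, dos_threshold):
       """Emulate a VN's test of available bandwidth against the DoS threshold."""
       return capacity.get(frozenset((a, b)), 0) >= dos_threshold

   def setup_connection(routes, capacity, dos_threshold, demand):
       """Try each preplanned route in order; crank back on any failed link."""
       for route in routes:
           links = list(zip(route, route[1:]))
           if all(link_ok(capacity, a, b, dos_threshold) for a, b in links):
               for a, b in links:                  # commit the bandwidth
                   capacity[frozenset((a, b))] -= demand
               return route                        # connection established
           # otherwise: a VN returned crankback/bandwidth-not-available;
           # the ON tries the next route in the routing table
       return None                                 # all routes blocked

   if __name__ == "__main__":
       cap = {frozenset(("ON", "DN")): 2,
              frozenset(("ON", "VN1")): 10, frozenset(("VN1", "DN")): 10}
       routes = [["ON", "DN"], ["ON", "VN1", "DN"]]
       print(setup_connection(routes, cap, dos_threshold=5, demand=5))
       # -> ['ON', 'VN1', 'DN']  (the direct path fails the DoS test)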

Allocating traffic to the optimum route choice during each time period leads
to design benefits due to the noncoincidence of loads. Since in many network
applications traffic demands change with time in a reasonably predictable
manner, the routing also changes with time to achieve maximum link
utilization and minimum network cost. Several TDR routing time periods are
used to divide up the hours on an average business day and weekend into
contiguous routing intervals. The network design is performed in an
off-line, centralized computation in the bandwidth broker that determines
the optimal routing tables from a very large number of possible alternatives
in order to minimize the network cost. In TDR path selection, rather than
determine the optimal routing tables based on real-time information, a
centralized bandwidth broker design system employs a design model described
in ANNEX 5. The effectiveness of the design depends on how accurately we can
estimate the traffic load on the network. Forecast errors are corrected in
the short-term capacity management process, which allows routing table
updates to replace link augments whenever possible. 


Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-8]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

2.6	State-Dependent Routing (SDR) Path Selection

In SDR, the routing tables are altered automatically according to the state
of the network.  For a given SDR method, the routing table rules are
implemented to determine the route choices in response to changing network
status, and are used over a relatively short time period.  Information on
network status may be collected at a central processor or distributed to
nodes in the network.  The information exchange may be performed on a
periodic or on-demand basis.  SDR methods use the principle of routing
connections on the best available route on the basis of network state
information.  For example, in the least loaded routing (LLR) method, the
residual capacity of candidate routes is calculated, and the route having
the largest residual capacity is selected for the connection.  In general,
SDR methods calculate a route cost for each connection request based on
various factors such as the load-state or congestion state of the links in
the network.  In SDR, the routing tables are designed on-line by the ON or a
central bandwidth broker processor (BBP) through the use of network status
and topology information obtained through information exchange with other
nodes and/or a centralized BBP.  There are various implementations of SDR
distinguished by 

a)	whether the computation of the routing tables is distributed among
the network nodes or centralized and done in a centralized BBP, and
b)	whether the computation of the routing tables is done periodically
or connection by connection.

This leads to three different implementations of SDR:

a)	centralized periodic SDR (CP-SDR) -- here the centralized BBP
obtains link status and traffic status information from the various nodes on
a periodic basis (e.g., every 10 seconds) and performs a computation of the
optimal routing table on a periodic basis.  To determine the optimal routing
table, the BBP executes a particular routing table optimization procedure
such as LLR and transmits the routing tables to the network nodes on a
periodic basis (e.g., every 10 seconds).

b)	distributed periodic SDR (DP-SDR) -- here each node in the SDR
network obtains link status and traffic status information from all the
other nodes on a periodic basis (e.g., every 5 minutes) and performs a
computation of the optimal routing table on a periodic basis (e.g., every 5
minutes).  To determine the optimal routing table, the ON executes a
particular routing table optimization procedure such as LLR. 

c)	distributed connection-by-connection (DC-SDR) SDR -- here an ON in
the SDR network obtains link status and traffic status information from the
DN, and perhaps from selected VNs, on a connection by connection basis and
performs a computation of the optimal routing table for each connection.  To
determine the optimal routing table, the ON executes a particular routing
table optimization procedure such as LLR. 

In DP-SDR path selection, nodes may exchange status and traffic data, for
example, every five minutes, between traffic management processors, and
based on analysis of this data, the traffic management processors can
dynamically select alternate routes to optimize network performance. This
method is illustrated in Figure 2.4b.  Flooding is a common technique for
distributing the status and traffic data; however, other techniques with less
overhead are also available, as discussed in ANNEX 7.

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-9]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000


Figure 2.4b illustrates a CP-SDR path selection method with periodic updates
based on periodic network status. CP-SDR path selection provides
near-real-time routing decisions by having an update of the number of idle
trunks in each link sent to a network database every five seconds.  Routing
tables are determined from analysis of the status data using a path
selection method which provides that the shortest path choice is used if the
bandwidth is available. If the shortest path is busy, the second path is
selected from the list of feasible paths on the basis of having the greatest
level of idle bandwidth at the time; the current second path choice becomes
the third, and so on. This path update is performed every five seconds. The
CP-SDR model uses dynamically activated bandwidth reservation and other
controls to automatically modify routing tables during network overloads and
failures. CP-SDR requires the use of network status and routing
recommendation information-exchange messages.

Figure 2.4b also illustrates an example of a DC-SDR path selection method.
In DC-SDR, the routing computations are distributed among all the nodes in
the network. This is illustrated in Figure 2.4b.  DC-SDR uses real-time
exchange of network status information, with CCS query and status messages,
to determine an optimal route from a very large number of possible choices.
With DC-SDR, the originating node first tries the direct path and if it is
not available finds an optimal two-link path by querying the terminating
node through the CCS network for the busy-idle status of all links connected
to the terminating node. The originating node compares its own link
busy-idle status to that received from the terminating node, and finds the
least loaded two-link path to route the call.  DC-SDR computes required
bandwidth allocations by virtual network from node-measured traffic flows
and uses this capacity allocation to reserve capacity when needed for each
virtual network.  Any excess traffic above the expected flow is routed to
temporarily idle capacity borrowed from capacity reserved for other loads
that happen to be below their expected levels. Idle link capacity is
communicated to other nodes via the query-status information-exchange
messages, as illustrated in Figure 2.4b, and the excess traffic is
dynamically allocated to the set of allowed paths that are identified as
having temporarily idle capacity.  DC-SDR controls the sharing of available
capacity by using dynamic bandwidth reservation, to protect the capacity
required to meet expected loads and to minimize the loss of traffic for
classes-of-service which exceed their expected load and allocated capacity.
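
The least-loaded two-link selection used in DC-SDR can be sketched as
follows (Python, illustrative only).  The busy-idle status exchanged
between the ON and DN is represented here simply as per-link idle
bandwidth, an assumption for the example, and the bandwidth reservation
logic is omitted.

   # Minimal sketch of DC-SDR two-link least-loaded routing (LLR): try the
   # direct link first, otherwise pick the via node whose two-link path has
   # the largest residual (bottleneck) idle capacity.

   def idle(status, a, b):
       """Idle bandwidth on link a-b as reported in the status exchange."""
       return status.get(frozenset((a, b)), 0)

   def select_path(on, dn, via_nodes, status, demand):
       if idle(status, on, dn) >= demand:
           return [on, dn]                   # direct path available
       best, best_residual = None, 0
       for vn in via_nodes:
           residual = min(idle(status, on, vn), idle(status, vn, dn))
           if residual >= demand and residual > best_residual:
               best, best_residual = [on, vn, dn], residual
       return best                           # None if no path qualifies

   if __name__ == "__main__":
       status = {frozenset(("ON", "DN")): 0,
                 frozenset(("ON", "V1")): 8, frozenset(("V1", "DN")): 3,
                 frozenset(("ON", "V2")): 6, frozenset(("V2", "DN")): 6}
       print(select_path("ON", "DN", ["V1", "V2"], status, demand=2))
       # -> ['ON', 'V2', 'DN']  (bottleneck 6 beats bottleneck 3 via V1)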

Paths in the SDR routing table may consist of the direct link, a two-link
route through a single VN, or a multiple-link route through multiple VNs.
Paths in the routing table are subject to DoS restrictions on each link.

2.7	Event-Dependent Routing (EDR) Path Selection

In EDR, the routing tables are updated locally on the basis of whether
connections succeed or fail on a given route choice.  In the EDR learning
approaches, the path last tried, which is also successful, is tried again
until blocked, at which time another path is selected at random and tried on
the next call. EDR path choices can also be changed with time in accordance
with changes in traffic load patterns.  Success-to-the-top (STT) EDR path
selection, illustrated in Figure 2.4b, is a decentralized per-traffic-flow
EDR path selection method with update based on random routing.  STT-EDR uses
a simplified decentralized learning method to achieve flexible adaptive
routing. The direct link is used first if available, and a fixed alternate
path is used until it is blocked. In this case a new alternate path is

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-10]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

selected at random as the alternate route choice for the next call overflow
from the direct link. Dynamically activated trunk reservation is used under
call-blocking/delay conditions. STT-EDR uses crankback when a via path is
blocked at the via node, and the call advances to a new random path choice.
In STT-EDR, many path choices can be tried by a given flow before the flow
is blocked. 

In the EDR learning approaches, such as STT-EDR, the shortest path is used
first and then the path last tried, which is also successful, is tried again
until blocked, at which time another path is selected at random and tried on
the next flow request, if needed. A bandwidth reservation technique is used.
The current alternate route choice can be updated randomly, cyclically, or
by some other means, and may be maintained as long as a connection can be
established successfully on the route. Hence the routing table is
constructed with the information determined during connection setup, and no
additional information is required by the ON.  Routes in the EDR routing
table may consist of the direct link, a two-link route through a single VN,
or a multiple-link route through multiple VNs.  Routes in the routing table
are subject to DoS restrictions on each link.  Note that for either SDR or
EDR, as in TDR, the alternate route for a connection request may be changed
in a time-dependent manner considering the time-variation of the traffic
load.  
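
The STT-EDR learning rule can be sketched in a few lines (Python,
illustrative only).  The random reselection of the alternate path on
blocking is the essential behavior; the success or blocking outcome of a
setup attempt is abstracted into a caller-supplied test, an assumption for
the example.

   # Minimal sketch of success-to-the-top (STT) EDR path selection: use the
   # direct path if available, otherwise the remembered alternate path; on
   # blocking of the alternate, pick a new alternate at random for the
   # next call.

   import random

   class SttEdrTable:
       def __init__(self, direct_path, alternate_paths):
           self.direct = direct_path
           self.alternates = alternate_paths
           self.current = random.choice(alternate_paths)

       def route_call(self, path_has_capacity):
           """path_has_capacity(path) -> bool abstracts the setup attempt."""
           if path_has_capacity(self.direct):
               return self.direct
           if path_has_capacity(self.current):
               return self.current         # "success to the top": keep it
           # blocked (crankback received): choose a new alternate at random
           self.current = random.choice(self.alternates)
           return None                     # this overflow call is blocked

   if __name__ == "__main__":
       table = SttEdrTable(["ON", "DN"],
                           [["ON", "V1", "DN"], ["ON", "V2", "DN"]])
       print(table.route_call(lambda p: len(p) == 3))  # direct blocked, via ok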

2.8	Interdomain Routing

Interdomain routing can support a multiple ingress/egress capability, as
illustrated in Figure 2.5, in which a call is routed from an originating
node to a gateway node either on the shortest path or, if that path is not
available, via an alternate path through any one of the other nodes.  

Figure 2.5  Multiple Ingress/Egress Interdomain Routing

A destination network could be served by more than one gateway node, in
which case multiple ingress/egress routing is used. As illustrated in Figure
2.5, with multiple ingress/egress routing, a call from the originating node
N1 destined for the destination gateway node DGN1 tries first to access the
links from originating gateway node OGN3 to DGN1. In doing this it is
possible that the call could be routed from N1 to N3 directly or via N2. If
no bandwidth is available from N3 to DGN1, call control can be returned to
N1 with a crankback/bandwidth-not-available indicator, after which the call
is routed to gateway node OGN4, which also has link capacity to DGN1, to
access the OGN4-to-DGN1 bandwidth. In this manner all ingress/egress
connectivity to a connecting network is utilized, maximizing call completion
and reliability.

Once the call reaches gateway node OGN3, this node determines the routing to
the destination gateway node DGN1 and routes the call accordingly. In
completing the call to DGN1, gateway node OGN3 can dynamically select a
direct shortest path, an alternate path through an alternate node in the
destination network, or perhaps an alternate path through an alternate node
in another network domain.

With interdomain routing, calls are routed first to the shortest direct path
between the originating and destination domains, then to a list of alternate

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-11]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

paths through alternate nodes in the terminating network domain, then to a
list of alternate paths through alternate nodes in the originating network
domain (e.g., OGN3 and OGN4 in Figure 2.5), and finally to a list of
alternate paths through nodes in other transit network domains. Therefore,
the interdomain routing paths are divided into three types: the direct short
path, alternate paths in the same origination/destination domain, and
alternate or transit paths through other transit domains. 
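
The ordering just described can be represented as a simple concatenation of
candidate path groups.  The sketch below (Python, illustrative only) builds
that ordered list; the node names DGN2 and TGN1 are hypothetical additions
used only to populate the example groups.

   # Minimal sketch of the interdomain path ordering: direct path first,
   # then alternates via the destination domain, then alternates via the
   # originating domain, then transit paths via other domains.

   def ordered_interdomain_paths(direct, dest_domain_alts,
                                 orig_domain_alts, transit_alts):
       """Return the candidate paths in the order they are attempted."""
       return [direct] + dest_domain_alts + orig_domain_alts + transit_alts

   if __name__ == "__main__":
       paths = ordered_interdomain_paths(
           direct=["N1", "OGN3", "DGN1"],
           dest_domain_alts=[["N1", "OGN3", "DGN2", "DGN1"]],
           orig_domain_alts=[["N1", "OGN4", "DGN1"]],
           transit_alts=[["N1", "OGN3", "TGN1", "DGN1"]])
       for p in paths:
           print(p)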

2.9	Dynamic Transport Routing

Dynamic transport routing can combine with dynamic traffic routing to shift
transport bandwidth among node pairs and services through use of flexible
transport switching technology.  Dynamic transport routing can provide
automatic link provisioning, diverse link routing, and rapid link
restoration for improved transport capacity utilization and performance
under stress. Figure 2.6 illustrates the difference between the physical
transport network and the logical transport network.  


Logical transport links are individual logical connections between network
nodes, which make up the link connections and are routed on the physical
transport network. Links can be provisioned at given rates, such as OC1,
OC12, OC48, OC192, etc., with the rate dependent on the level of traffic
demand between nodes. Figure 2.6 indicates that in the logical transport network,
many node pairs have a "direct" logical link connection where none exists in
the physical transport network.  

Figure 2.6  Logical & Physical Transport Networks

A logical link connection is obtained by cross-connecting through transport
switching devices. This is distinct from per-flow routing, which switches a
call on the logical links at each node in the call path. Thus, the logical
transport network is overlaid on a sparser physical transport network.

Cross-connect devices, such as optical cross-connects (OXCs), are able to
switch transport channels, for example OC48 channels, onto different
higher-capacity transport links such as an individual WDM channel on a
fiberoptic cable.  Transport routes can be rearranged at high speed using
OXCs, with switching times typically of tens of milliseconds.  These OXCs can
reconfigure logical transport capacity on demand, such as for peak day
traffic, weekly redesign of link capacity, or emergency restoration of
capacity under node or transport failure.  Re-arrangement of logical link
capacity involves reallocating both transport bandwidth and node
terminations to different links.  OXC technology is amenable to centralized
traffic management control providing rearrangeable transport routing and
perhaps real-time transport routing.

Figure 2.7 illustrates the concept of dynamic traffic and transport routing
from a generalized switching node point of view. 

Figure 2.7 Dynamic Transport Routing

At the traffic demand level in the transmission hierarchy, flow requests are
switched using dynamic traffic routing on the logical transport link network
by node switching logic. At the OC1 and higher demand levels in the
transmission hierarchy, logical transport link demands are switched using
OXC systems, which allow dynamic transport routing to route transport

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-12]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

demands in accordance with traffic levels.  Real-time logical transport link
rearrangement and real-time response to traffic congestion can be provided
by OXC dynamic transport routing to improve network performance.

An illustration of dynamic transport routing is given in Figure 2.8, which
shows how transport demand is routed according to varying seasonal
requirements. As seasonal demands shift, the dynamic transport network is
better able to match demands to routed transport capacity, thus gaining
efficiencies in transport requirements.

Figure 2.8   Dynamic Transport Routing vs. Fixed Transport Routing

Figure 2.8 illustrates how dynamic transport routing achieves network
capacity reductions; the figure shows the variation of winter and summer
capacity demands. With fixed
transport routing, the maximum termination capacity and transport capacity
are provided across the seasonal variations, because in a manual environment
without dynamic transport rearrangement it is not possible to disconnect and
reconnect capacity on such short cycle times. When transport rearrangement
is automated with dynamic transport routing, however, the termination and
transport design can be changed on a weekly, daily, or, with high-speed
packet switching, real-time basis to exactly match the termination and
transport design with the actual network demands. Notice that in the fixed
transport network there is unused termination and transport capacity that
cannot be used by any demands; sometimes this is called "trapped capacity,"
because it is available but cannot be accessed by any actual demand.  The
dynamic transport network, in contrast, follows the capacity demand with
flexible transport routing, and together with transport network design it
reduces the trapped capacity. Therefore, the variation of demands leads to
capacity-sharing efficiencies, which in the example of Figure 2.8 reduce
termination capacity requirements by 50 node terminations, or approximately
10 percent compared with the fixed transport network, and reduce transport
capacity requirements by 50, or approximately 14 percent.  Therefore, with dynamic
transport routing capacity utilization can be made more efficient in
comparison with fixed transport routing, because with dynamic transport
network design the link sizes can be matched to the network load.

Dynamic transport routing can achieve performance improvements for similar
reasons, due to noncoincidence of transport capacity demands that can change
daily. An example is the traffic noncoincidence experienced on peak days
such as Christmas Day. On Christmas Day there are many busy nodes and many
idle nodes. For example, a node may be relatively idle on Christmas Day if
it were a downtown business-node, while another node serving residential
traffic may be very busy. Therefore, on Christmas Day, the business-node
demands are reduced, and through dynamic transport routing appropriate
capacity reductions can be made automatically. Similarly, the
residential-node demands are increased on Christmas Day. Access demands to
the overloaded residential-node can be redirected to freed-up termination
capacity on the business-node, which also frees up termination capacity on
the residential-node to be used for internode demand increases. By this kind
of access demand and internode demand rearrangement, based on noncoincident
traffic shifts, more traffic to and from an overloaded node can be completed
because internode transport capacity is increased, now using freed-up
transport capacity from the reduction in the transport capacity needed by

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-13]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

the underloaded business node. On a peak day such as Christmas Day, the busy
nodes are often limited by internode transport capacity; this rearrangement
reduces or eliminates this bottleneck.

Dynamic transport routing can provide dynamic restoration of failed
capacity, such as that due to fiber cuts, onto spare or backup transport
capacity. Dynamic transport routing thereby provides a self-healing network
capability to ensure a network-wide path selection and immediate adaptation
to failure. Hence dynamic transport routing provides better network
performance at reduced cost. The combination of dynamic connection routing
together with dynamic transport routing provides synergistic reinforcement
to achieve these network improvements.

2.10	Modeling of Traffic Engineering Methods

In the Recommendation, a full-scale national network node model is used
together with a multiservice traffic demand model to study various TE
scenarios and tradeoffs.  The 135-node national model is illustrated in
Figure 2.9.

Figure 2.9  135-Node National Network Model

Typical voice/ISDN traffic loads are used to model the various network
alternatives, which are based on 72 hours of a full-scale national network
loading [A98].  Here the traffic loads are dynamically varying and tracked
by the exponential smoothing models discussed in ANNEX 3.  These voice/ISDN
loads are further segmented in the model into eight constant-bit-rate (CBR)
virtual networks (VNETs), including business voice, consumer voice,
international voice in and out, key-service voice, normal and key-service
64-kbps ISDN data, and 384-kbps ISDN data.  For the CBR voice services, the
mean data rate is assumed to be 64 kbps.  The data services traffic model
incorporates typical traffic load patterns and comprises three additional
VNET load patterns.  These include a) a variable bit rate real-time (VBR-RT)
VNET, representing services such as IP-telephony and compressed voice, b) a
variable bit rate non-real-time (VBR-NRT) VNET, representing services such
as WWW multimedia and credit card check, and c) an unspecified bit rate (UBR)
VNET, representing services such as email, voice mail, and file transfer
multimedia applications.  For the VBR-RT connections, the data rate varies
from 6.4 to 51.2 kbps with a mean of 25.6 kbps. For the VBR-NRT connections,
the data rate varies from 38.4 to 64 kbps with a mean of 51.2 kbps. For the
UBR connections, the data rate varies from 6.4 to 3072 kbps with a mean of
1536 kbps. For modeling purposes, the service and link bandwidth is
segmented into 6.4 kbps slots, that is, 10 slots per 64 kbps channel.  Table
2.1 summarizes the multiservice traffic model used for the TE studies.


Table 2.1.  Virtual Network (VNET) Traffic Model used for TE Studies 
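
As a small worked example of the 6.4-kbps slot segmentation described
above, the sketch below (Python, illustrative only) converts the mean data
rates of the VNET categories into slot counts.  The rates are those given
in the text; the rounding-up rule is an assumption for the example.

   # Minimal sketch of the 6.4-kbps slot segmentation: convert mean data
   # rates (kbps) into equivalent slots (10 slots per 64-kbps channel).

   import math

   SLOT_KBPS = 6.4

   MEAN_RATES_KBPS = {
       "CBR voice":  64.0,    # 10 slots
       "VBR-RT":     25.6,    # e.g., IP-telephony / compressed voice
       "VBR-NRT":    51.2,    # e.g., WWW multimedia, credit card check
       "UBR":      1536.0,    # e.g., email, voice mail, file transfer
   }

   def slots(rate_kbps):
       """Slots needed for a mean rate; rounding up is an assumed policy."""
       return math.ceil(rate_kbps / SLOT_KBPS)

   if __name__ == "__main__":
       for vnet, rate in MEAN_RATES_KBPS.items():
           print(f"{vnet:10s} {rate:7.1f} kbps -> {slots(rate)} slots")
       # CBR voice 10 slots, VBR-RT 4, VBR-NRT 8, UBR 240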

The cost model represents typical switching and transport costs, and
illustrates the economies-of-scale for costs projected for high capacity
network elements in the future.  Table 2.2 gives the relative average
switching and transport costs allocated per unit of bandwidth, as follows:


Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-14]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

Table 2.2.  Cost Assumptions (average cost per equivalent 64 kbps bandwidth)

Data Rate   Average Transport Cost    Average Switching/Cross-Connect Cost
DS3         0.19 x miles + 8.81       26.12
OC3         0.17 x miles + 9.76       19.28
OC12        0.15 x miles + 7.03        9.64
OC48        0.05 x miles + 2.77        3.92
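
For illustration, the sketch below (Python) evaluates the Table 2.2 cost
expressions for a given link length.  The interpretation that the total
cost per equivalent 64-kbps unit is the sum of the distance-dependent
transport term and the switching/cross-connect term is an assumption made
only for this example.

   # Minimal sketch evaluating the Table 2.2 cost model: average cost per
   # equivalent 64-kbps unit = transport(miles) + switching/cross-connect.

   COSTS = {
       # rate: (transport cost per mile, transport fixed, switching cost)
       "DS3":  (0.19, 8.81, 26.12),
       "OC3":  (0.17, 9.76, 19.28),
       "OC12": (0.15, 7.03,  9.64),
       "OC48": (0.05, 2.77,  3.92),
   }

   def unit_cost(rate, miles):
       per_mile, fixed, switching = COSTS[rate]
       return per_mile * miles + fixed + switching

   if __name__ == "__main__":
       for rate in COSTS:
           print(rate, round(unit_cost(rate, miles=500), 2))
       # illustrates the economies of scale of the higher-rate elements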

A discrete event network design model, described in ANNEX 5, is used in the
design and analysis of three connection routing methods: two-link STT-EDR
path routing in a meshed logical network, two-link DC-SDR routing in a
meshed logical network, and multilink STT-EDR routing, as might be supported
for example by MPLS TE in a sparse logical network.  

The network models for the two-link STT-EDR/DC-SDR and multilink
STT-EDR/DC-SDR/DP-SDR networks are now described.  In the two-link STT-EDR
and DC-SDR models, we assume 135 packet-switched-nodes (MPLS- or
PNNI-based).  Synchronous to asynchronous conversion (SAC) is assumed to
occur at the packet-switched-nodes for link connections from
circuit-switched-nodes.  Links in these two-link STT-EDR/DC-SDR models are
assumed to provide fine-grained link bandwidth allocation, and a meshed
network topology design results among the nodes, that is, links exist
between most (90 percent or more) of the nodes. In the two-link
STT-EDR/DC-SDR models, one and two-link routing with crankback is used
throughout the network. Two-link path selection is modeled both with both
STT path selection and distributed connection-by-connection SDR (DC-SDR)
path selection.  Packet-switched-nodes use two-link STT-EDR or two-link
DC-SDR routing to all other nodes. Quality-of-service priority queuing is
modeled in the performance analyses, in which the key-services are given the
highest priority, normal services the middle priority, and best-effort
services the lowest priority in the queuing model.  This queuing model
quantifies the level of delayed traffic for each virtual network. In routing
a connection with two-link STT-EDR routing, the ON checks the equivalent
bandwidth and allowed DoS first on the direct path, then on the current
successful two-link via path, and then sequentially on all candidate
two-link paths.  In routing a connection with two-link DC-SDR, the ON checks
the equivalent bandwidth and allowed DoS first on the direct path, and then
on the least-loaded path that meets the equivalent bandwidth and DoS
requirements.  Each VN checks the equivalent bandwidth and allowed DoS
provided in the setup message, and uses crankback to the ON if the
equivalent bandwidth or DoS are not met.
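
The strict-priority treatment of key, normal, and best-effort traffic used
in the queuing model can be sketched as follows (Python, a simplification
only; the actual performance model is the discrete event model described in
ANNEX 5).

   # Minimal sketch of the three-level priority queuing used in the
   # performance model: key services first, then normal, then best effort.

   import heapq
   from itertools import count

   PRIORITY = {"key": 0, "normal": 1, "best-effort": 2}

   class PriorityQueue:
       def __init__(self):
           self._heap, self._order = [], count()

       def enqueue(self, service_class, packet):
           heapq.heappush(self._heap,
                          (PRIORITY[service_class], next(self._order), packet))

       def dequeue(self):
           """Serve the highest-priority packet, FIFO within a class."""
           return heapq.heappop(self._heap)[2] if self._heap else None

   if __name__ == "__main__":
       q = PriorityQueue()
       q.enqueue("best-effort", "email")
       q.enqueue("normal", "voice")
       q.enqueue("key", "defense-voice")
       print(q.dequeue(), q.dequeue(), q.dequeue())
       # -> defense-voice voice email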

In the multilink STT-EDR/DC-SDR/DP-SDR model, we assume 135
packet-switched-nodes.  Because high rate OC3/12/48 links provide highly
aggregated link bandwidth allocation, a sparse network topology design
results among the packet-switched-nodes, that is, high rate OC3/12/48 links
exist between relatively few (10 to 20 percent) of the
packet-switched-nodes.  In addition, multilink shortest path selection with
crankback is used throughout the network. Quality-of-service priority
queuing is modeled in the performance analyses, in which the key-services
are given the highest priority, normal services the middle priority, and
best-effort services the lowest priority in the queuing model.  This queuing
model quantifies the level of delayed traffic for each virtual network. The
multilink path selection options are modeled with STT path selection, DC-SDR
path selection, and distributed periodic path selection (DP-SDR).  In the
model of DP-SDR, the status updates are modeled as flooded link status
updates every 10 seconds.  Note that the multilink DP-SDR performance

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-15]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

results should also be comparable to the performance of multilink
centralized-periodic SDR (CP-SDR), in which status updates are sent to, and
path selection updates received from, a bandwidth-broker processor every 10
seconds.  

In routing a connection with multilink shortest path selection with two-link
STT-EDR routing, for example, the ON checks the equivalent bandwidth and
allowed DoS first on the first choice path, then on current successful
alternate path, and then sequentially on all candidate alternate paths.
Again, each VN checks the equivalent bandwidth and allowed DoS provided in
the setup message, and uses crankback to the ON if the equivalent bandwidth
or DoS are not met.  

In the models the logical network design is optimized for each routing
alternative, while the physical transport/switching network is held fixed.
We seek to find the best combination of logical topology design,
dimensioning, and routing method.  Generally the meshed VP link topologies
are optimized by one- and two-link routing, while the sparse OC3/12/48 link
topologies are optimized by multilink shortest path routing.  Modeling
results include a) separate voice/ISDN and data designs, b) integrated
voice/ISDN and data designs, c) network design for all IP-telephony with
compressed voice, d) designs to compare mixed TDM-based routing and
packet-based routing in two-link and multilink networks, e) designs to
compare hierarchical and two-link STT-EDR routing methods, and f)
performance analyses for overloads and failures.  Illustrative network
design costs for the voice/ISDN designs are illustrated in Figure 2.10 and
for the integrated voice/ISDN and data designs in Figure 2.11.  These design
costs and details are discussed further in ANNEX 5. 

Figure 2.10  Voice/ISDN Network Design Cost

Figure 2.11  Integrated Voice/ISDN & Data Network Design Cost

The design results show that, as expected, the two-link STT-EDR and two-link
DC-SDR logical mesh networks are highly connected (90%+), while the
multilink MPLS-based and PNNI-based networks are sparsely connected
(10-20%).  The network cost comparisons illustrate that the sparse MPLS and
PNNI networks achieve a small cost advantage, since they take advantage of
the greater cost efficiencies of high bandwidth logical links (up to OC48).
However, these differences in cost may not be significant, and can change as
equipment costs evolve and as the relative cost of switching and transport
equipment changes.  Sensitivities of the results to different cost
assumptions were investigated.  For example, if the cost of transport
increases relative to the cost of switching, then the two-link STT-EDR and
two-link DC-SDR meshed networks can appear to be more efficient than the
sparse multilink STT-EDR/DC-SDR/DP-SDR networks.  These results are
consistent with those presented in other studies of meshed and sparse
logical networks, as a function of relative switching and transport costs,
see for example [A98].

Comparing the results of the separate voice/ISDN and data designs and the
integrated voice/ISDN and data designs shows that integration does achieve
some small capital cost advantage of about 2 percent.  However, probably
more significant are the operational savings of integration which result
from operating a single network rather than two or more networks.  In
addition, the performance of an integrated voice and data network leads to
advantages in capacity sharing, especially when different traffic classes

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-16]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

having different routing priorities, such as key service and best-effort
service, are integrated and share capacity on the same network.  These
performance results are reported below.  A study of voice compression for
all voice traffic, such as might occur if IP-telephony is widely deployed,
shows that network capital costs might be reduced by as much as 10% if this
evolutionary direction is followed.  An analysis of hierarchical routing
versus two-link STT-EDR routing illustrates that more than 20% reduction in
network capital costs can be achieved if such evolution to flexible routing
is followed.  In addition, operation savings should also result from simpler
provisioning of flexible routing options.

The performance analyses for overloads and failures include call admission
control with QoS resource management (as discussed in ANNEX 3), in which we
distinguish the key services, normal services, and best-effort services as
indicated in the tables below.  Table 2.3 gives performance results for a
30% general overload, Table 2.4 gives performance results for a six-times
overload on a single network node, and Table 2.5 gives performance results
for a single transport link failure.

Table 2.3: 30% General Overload (% Lost/Delayed Traffic)

Table 2.4: 6X Focused Overload on OKBK (% Lost/Delayed Traffic)

Table 2.5: Failure on CHCG-NYCM Link (% Lost/Delayed Traffic)

Performance analysis results show that the multilink STT-EDR/DC-SDR/DP-SDR
options perform somewhat better under overloads than the  two-link
STT-EDR/DC-SDR options, because of greater sharing of network capacity.
Under failure, the two-link STT-EDR/DC-SDR options perform better for many
of the virtual network categories than the multilink STT-EDR/DC-SDR/DP-SDR
options, because they have a richer choice of alternate routing paths and
are much more highly connected than the multilink STT-EDR/DC-SDR/DP-SDR
networks.  Loss of a link in a sparsely connected multilink
STT-EDR/DC-SDR/DP-SDR network can have more serious consequences than in
more highly connected logical networks.  The performance results illustrate
that capacity sharing of CBR, VBR, and UBR traffic classes, when combined
with QoS resource management and priority queuing, leads to efficient use of
bandwidth with minimal traffic delay and loss impact, even under overload
and failure scenarios.  

The STT and SDR path selection methods are quite comparable for the two-link
network scenarios.  However, the STT path selection method performs somewhat
better than the SDR options in the multilink case.  In addition, the DC-SDR
path selection option performs somewhat better than the DP-SDR option in the
multilink case, which is a result of the 10-second-old status information
causing misdirected paths in some cases.  Hence, it can be concluded that
state information does not necessarily improve performance in all cases, and
that if state information is used, it is sometimes better that it be very
recent.

The TE modeling conclusions are summarized as follows: 

1.	Capital cost advantages may be attributed to the multilink
STT-EDR/DC-SDR/DP-SDR options, but may not be significant compared to
operational costs, and are subject to the particular switching and transport
cost assumptions.  Capacity design models are further detailed in ANNEX 5.
The multilink STT-EDR/DC-SDR/DP-SDR networks provide better overall

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-17]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

performance under overload, but performance under failure may favor the
two-link STT-EDR/DC-SDR options with more alternate routing choices.  One
item of concern in the multilink STT-EDR/DC-SDR/DP-SDR networks is post-dial
delay, since perhaps five or more links may need to be connected for an
individual call.  An analysis has shown that high-speed vendor technologies
of the future may offset this concern in comparison to the post-dial delay
of today's technology.  

2.	The performance results illustrate that capacity sharing of CBR,
VBR, and UBR traffic classes, when combined with QoS resource management and
priority queuing, leads to efficient use of bandwidth with minimal traffic
delay and loss impact.  QoS resource management models are further detailed
in ANNEX 3.  State information, as used by the two-link and multilink SDR
options, provides only a small network capital cost advantage; performance
is essentially equivalent to the STT-EDR options in the two-link case and
somewhat worse in the multilink case, as illustrated in the network
performance results.  We conclude from the results that various path
selection methods can interwork with each other in the same network, which
will be required if a multi-vendor network is deployed.  A simple
implementation of QoS resource management, as further described in ANNEX 3,
is shown to be very effective in achieving key service, normal service, and
best effort service differentiation.  Voice and data services can be offered
with 1- and 2-link dynamic routing methods.  However, results of the TE
models presented here illustrate the network efficiency, performance, and
automatic provisioning advantages of the packet-based multilink shortest
path routing protocols and logically sparse high-bandwidth-link designs for
future integrated voice/data services networks.  Voice and data integration
can provide capital cost advantages, but may be more important in achieving
operational simplicity and cost reduction.  Finally, if IP-telephony takes
hold and a significant portion of voice calls use voice compression
technology, this could lead to more efficient networks.

3.	Overall the packet-based (e.g., MPLS/TE) routing strategies offer
several advantages.  MPLS/TE is the standard routing, signaling, and
provisioning protocol for IP-based networks.  The sparse network topology
with the high-speed switching and transport links has been shown to have
economic benefit, due to lower cost network designs achieved by the
economies of scale of higher rate network elements.  The sparse
high-bandwidth-link networks have been shown to have better response to
overload conditions than logical mesh networks, due to greater sharing of
network capacity. The packet-based routing protocols have powerful
capabilities for automatic provisioning of links, nodes, and reachable
addresses, which provide operational advantages for such networks.  Because
the sparse high-bandwidth-link network designs have dramatically fewer links
to provision compared to mesh network designs (10-20% connected versus 90%
or more connected for mesh networks), there is less provisioning work to
perform.  In addition to having fewer links to provision, sparse
high-bandwidth-link network designs use larger increments of capacity on
individual links and therefore capacity additions would need to occur less
frequently than in highly connected mesh networks, which would have much
smaller increments of capacity on the individual links.  The multilink
STT-EDR/DC-SDR/DP-SDR routing methods are synergistic with evolution of data
network services which implement these protocols.  Also the sparse
high-bandwidth-link topology is synergistic with similar topologies which
have been in place for many years in data networks.  Should a service
provider pursue integration of the voice/ISDN and data services networks,
these factors will help support such an integration direction.

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-18]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000


2.11	 Summary

We have discussed call routing and connection routing methods employed in TE
functions.  Several connection routing alternatives were discussed, which
include FR, TDR, EDR, and SDR methods.  Dynamic transport routing was
explained and illustrated with design and performance examples.  Models were
presented to illustrate the tradeoffs between the many TE approaches
explained in the ANNEX, and conclusions were drawn on the advantages of both
two-link and multilink STT-EDR/DC-SDR/DP-SDR routing and operation.

Ash                 <draft-ash-te-qos-routing-00.txt>        [Page ANNEX2-19]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

ANNEX 3
QoS Resource Management Methods

Traffic Engineering & QoS Methods for IP-, ATM-, & TDM-Based Multiservice
Networks 

3.1	Introduction

QoS resource management functions include connection admission, bandwidth
allocation, bandwidth protection, bandwidth reservation, priority routing,
priority queuing, and other related resource management functions.  QoS
resource management methods have been applied successfully in TDM-based
networks [A98], and are being extended to IP-based and ATM-based networks.
In an illustrative QoS resource management method, bandwidth is allocated in
discrete changes to each of several virtual networks (VNETs), which are each
assigned a priority corresponding to either high-priority key services,
normal-priority services, or best-effort low-priority services.  Examples of
services within these VNET categories include a) high-priority key
services such as defense voice communication, b) normal-priority services
such as constant rate, interactive, delay-sensitive voice; variable rate,
interactive, delay-sensitive IP-telephony; and variable rate,
non-interactive, non-delay-sensitive WWW file transfer, and c) low-priority
best effort services such as variable rate, non-interactive,
non-delay-sensitive voice mail, email, and file transfer.  Changes
in VNET bandwidth capacity are determined by edge nodes based on an overall
aggregated bandwidth demand for VNET capacity (not on a per-connection
demand basis).  Based on the aggregated bandwidth demand, these edge nodes
make periodic discrete changes in bandwidth allocation, that is, either
increase or decrease bandwidth, such as on the constraint-based routing
label switched paths (CRLSPs) constituting the VNET bandwidth capacity. 
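
A minimal sketch of the aggregated, discrete bandwidth adjustment described
above is given below (Python, illustrative only).  The utilization trigger
thresholds and the step size are hypothetical values, and the actual
modification would be carried out with the CRLSP procedures discussed next.

   # Minimal sketch of discrete VNET bandwidth adjustment at an edge node:
   # the allocation is increased or decreased in steps based on aggregated
   # demand, not per connection.  Thresholds and step size are hypothetical.

   STEP_KBPS = 640                 # discrete change per adjustment (assumed)
   HIGH_USE, LOW_USE = 0.9, 0.5    # utilization trigger thresholds (assumed)

   def adjust_vnet_bandwidth(allocated_kbps, measured_demand_kbps):
       """Return the new allocation after one periodic adjustment."""
       utilization = (measured_demand_kbps / allocated_kbps
                      if allocated_kbps else 1.0)
       if utilization > HIGH_USE:
           return allocated_kbps + STEP_KBPS  # request an increase on the CRLSP
       if utilization < LOW_USE and allocated_kbps > STEP_KBPS:
           return allocated_kbps - STEP_KBPS  # release unneeded bandwidth
       return allocated_kbps

   if __name__ == "__main__":
       print(adjust_vnet_bandwidth(6400, 6200))   # -> 7040 (increase)
       print(adjust_vnet_bandwidth(6400, 2000))   # -> 5760 (decrease)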

In the illustrative QoS resource management method, the bandwidth allocation
control for each VNET CRLSP is based on estimated bandwidth needs, bandwidth
use, and status of links in the CRLSP. The edge node, or originating node
(ON), determines when VNET bandwidth needs to be increased or decreased on a
CRLSP, and uses an illustrative MPLS CRLSP bandwidth modification procedure
to execute needed bandwidth allocation changes on VNET CRLSPs.  In the
bandwidth allocation procedure the constraint-based routing label
distribution protocol (CRLDP) [J99], for example,  is used to specify
appropriate parameters in the label request message a) to request bandwidth
allocation changes on each link in the CRLSP, and b) to determine if link
bandwidth can be allocated on each link in the CRLSP.  If a link bandwidth
allocation is not allowed, an illustrative CRLDP notification message with
crankback parameter allows the ON to search out possible bandwidth
allocation on another CRLSP.  In particular, we illustrate an optional
depth-of-search (DoS) parameter in the CRLDP label request message to
control the bandwidth allocation on individual links in a CRLSP.  In
addition, we illustrate an optional  modify parameter in the CRLDP label
request message to allow dynamic modification of the assigned traffic
parameters (such as peak data rate, committed data rate, etc.) of an already
existing CRLSP.  Finally, we illustrate a crankback parameter in the CRLDP
notification message to allow an edge node to search out additional
alternate CRLSPs when a given CRLSP cannot accommodate a bandwidth request. 

QoS resource management can be applied on a per-flow (or per-call) basis, as
described in this Section, or can be beneficially applied to "bandwidth

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX3-1]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

pipes" ("virtual trunking") in the form of SVPs in ATM-based networks, or
CRLSPs in IP-based networks.

QoS resource management provides integration of services on a shared
network, for many classes-of-service such as:

a)	CBR services including voice, 64-, 384-, and 1,536-kbps N-ISDN
switched digital data, international switched transit, priority defense
communication, virtual private network, 800/free-phone, fiber preferred, and
other services.
b)	Real-time VBR services including IP-telephony, compressed video, and
other services.
c)	Non-real-time VBR services including WWW file transfer, credit card
check, and other services.
d)	UBR services including voice mail, email, file transfer, and other
services.

We now illustrate the principles of QoS resource management, which includes
integration of many traffic classes, as discussed above.

3.2	Class-of-Service Identification, Policy-Based Routing Table
Derivation, & QoS Resource Management Steps

3.2.1	Class-of-Service Identification & Policy-Based Routing Table
Derivation

QoS resource management entails identifying class-of-service and QoS
resource management parameters, which may include, for example: 

*	service identity (SI), 
*	virtual network (VNET), 
*	link capability (LC), and 
*	QoS and traffic threshold parameters. 

In addition to controlling bandwidth allocation, the QoS resource management
procedures can check end-to-end transfer delay, delay variation, and
transmission quality considerations such as loss, echo, and noise, as
discussed in Section 3.5 below. 

Determination of class-of-service begins with translation at the originating
node. The number or name is translated to determine the routing address of
the destination node.  If multiple ingress/egress routing is used, multiple
destination node addresses are derived for the call.  Other data derived
from call information, such as link characteristics, Q.931 message
information elements, information interchange (II) digits, and network
control point routing information, are used to derive the class-of-service
for the call.

Class-of-service parameters are derived through application of policy-based
routing.  These parameters are the SI, which describes the actual service
associated with the call; the VNET, which describes the bandwidth allocation
and routing table parameters to be used by the call; and the link capability
(LC), which describes the link hardware capabilities, such as fiber, radio,
satellite, and digital circuit multiplexing equipment (DCME), that the call
should require, prefer, or avoid.  The combination of SI, VNET, and LC
constitutes the class-of-service, which together with the network node
number is used to access routing table data.

Policy-based routing rules are used in SI derivation, which for example uses
the type of origin, type of destination, signaling service type, and dialed
number/name service type to derive the SI.  The type of origin can normally
be derived from the type of incoming link to the connected network domain,
connecting either to a directly connected (also known as nodal) customer
equipment location, a switched access local exchange carrier, or an
international carrier location. Similarly, based on the dialed numbering
plan, the type of destination network is derived and can be a directly
connected (nodal) customer location if a private numbering plan is used (for
example, within a virtual private network), a switched access customer
location if a North American Numbering Plan (NANP) number is used to the
destination, or an international customer location if the international
E.164 numbering plan is used. Signaling service type is derived based on
bearer capability within signaling messages, information digits in dialed
digit codes, numbering plan, or other signaling information and can indicate
long-distance service (LDS), virtual private network (VPN) service, ISDN
switched digital service (SDS), and other service types. Finally, dialed
number service type is derived based on special dialed number codes such as
800 numbers or 900 numbers and can indicate 800 (INWATS) service, 900
(MULTIQUEST) service, and other service types. Type of origin, type of
destination, signaling service type, and dialed number service type are then
all used to derive the SI.

The following are examples of the use of policy-based routing rules to
derive class-of-service parameters. A long-distance service SI, for example,
is derived from the following information:

1.	The type of origination network is a switched access local exchange
carrier, because the call originates from a local exchange carrier node.

2.	The type of destination network is a switched access local exchange
carrier, based on the NANP dialed number. 

3.	The signaling service type is long-distance service, based on the
numbering plan (NANP). 

4.	The dialed number service type is not used to distinguish
long-distance service SI.

A service identity mapping table uses the above four inputs to derive the
service identity. This policy-based routing table is changeable by
administrative updates, in which new service information can be defined
without software modifications to the node processing. From the SI and
bearer-service capability the SI/bearer-service-to-virtual network mapping
table is used to derive the VNET.  For the derivation of the 800 service SI,
the dialed number service type is used to distinguish the 800 service
identity. 
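
The following sketch (Python, illustrative only) shows how a policy-based
service identity mapping table keyed on the four inputs above might be
represented and queried; the table entries and the derive_si helper are
hypothetical examples, not a recommended assignment.

   # Keys: (type of origin, type of destination, signaling service type,
   # dialed number service type); None means "not used to distinguish".
   SI_TABLE = {
       ("switched-access", "switched-access", "LDS", None):  "LDS",
       ("nodal",           "nodal",           "VPN", None):  "VPN",
       ("switched-access", "switched-access", "LDS", "800"): "800",
   }

   def derive_si(origin, destination, signaling, dialed=None):
       # Try the full key first (e.g., for 800 service), then fall back
       # to a match that does not use the dialed number service type.
       return (SI_TABLE.get((origin, destination, signaling, dialed)) or
               SI_TABLE.get((origin, destination, signaling, None)))

   # A NANP-dialed call from and to switched-access local exchange
   # carriers maps to the long-distance service SI.
   assert derive_si("switched-access", "switched-access", "LDS") == "LDS"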

Table 2.1 in ANNEX 2 illustrates the VNET mapping table. Here the SIs are
mapped to individual virtual networks. Routing parameters for priority or
key services are discussed further in the sections below.

Link capability selection allows calls to be routed on specific transmission
links that have the particular characteristics required by these calls.  A
call can
require, prefer, or avoid a set of transmission characteristics such as
fiber transmission, radio transmission, satellite transmission, or
compressed voice transmission. Link capability requirements for the call can
be determined by the SI of the call or by other information derived from the
signaling message or from the routing number. The routing logic allows the
call to skip those links that have undesired characteristics and to seek a
best match for the requirements of the call.

3.2.2	QoS Resource Management Steps

The illustrative QoS resource management method consists of the following
steps:

1. At the ON, the destination node (DN) and QoS resource management
information are determined through the digit translation database and other
service information available at the ON.
2. The DN and QoS resource management information are used to access the
appropriate VNET and routing table between the ON and DN.
3. The connection request is set up over the first available route in the
routing table, with the required transmission resource selected based on the
QoS resource management data.

In the first step, the ON translates the dialed digits to determine the
address of the DN.  If multiple ingress/egress routing is used, multiple
destination node addresses are derived for the connection request.  Other
data derived from connection request information includes link
characteristics, Q.931 message information elements, information interchange
(II) digits, and service control point (SCP) routing information, and are
used to derive the QoS resource management parameters (SI, VNET, LC, and
QoS/traffic thresholds).  SI describes the actual service associated with
the connection request, VNET describes the bandwidth allocation and routing
table parameters to be used by the connection request, and the LC describes
the link characteristics including fiber, radio, satellite, and voice
compression, that the connection request should require, prefer, or avoid.
Each connection request is classified by its SI.  A connection request for
an individual service is allocated an equivalent bandwidth equal to EQBW and
routed on a particular VNET.  For CBR services the equivalent bandwidth EQBW
is equal to the average or sustained bit rate.  For VBR services the
equivalent bandwidth EQBW is a function of the sustained bit rate, peak bit
rate, and perhaps other parameters.  For example, EQBW equals 64 kbps of
bandwidth for CBR voice connections, 64 kbps of bandwidth for CBR ISDN
switched digital 64-kbps connections, and 384-kbps of bandwidth for CBR ISDN
switched digital 384-kbps connections.
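
A minimal sketch of the equivalent bandwidth assignment described above is
given below (Python, illustrative only); the CBR values follow the examples
in the text, while the particular weighting of sustained and peak rates used
for VBR services is an assumption.

   def equivalent_bandwidth(service_class, sustained_kbps, peak_kbps=None):
       # CBR: EQBW equals the average or sustained bit rate.
       if service_class == "CBR":
           return sustained_kbps
       # VBR: EQBW is a function of sustained and peak rates; the 70/30
       # weighting here is only an illustrative assumption.
       if service_class in ("rt-VBR", "nrt-VBR"):
           peak = peak_kbps if peak_kbps is not None else sustained_kbps
           return 0.7 * sustained_kbps + 0.3 * peak
       # UBR/best-effort connections are not allocated bandwidth.
       return 0.0

   assert equivalent_bandwidth("CBR", 64) == 64      # CBR voice
   assert equivalent_bandwidth("CBR", 384) == 384    # 384-kbps ISDN SDS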

In the second step, the SI value is used to derive the VNET.  In the
multi-service, QoS resource management network, bandwidth is allocated to
individual VNETs; this bandwidth is protected as needed but otherwise shared.
Under
normal non-blocking/delay network conditions, all services fully share all
available bandwidth.  When blocking/delay occurs for VNET i, bandwidth
reservation acts to prohibit alternate-routed traffic and traffic from other
VNETs from seizing the allocated capacity for VNET i.  Associated with each
VNET are average bandwidth (BWavg) and maximum bandwidth (BWmax) parameters
to govern bandwidth allocation and protection, which are discussed further
in the next Section.  LC selection allows connection requests to be routed
on specific transmission links that have the particular characteristics
required by the connection request.  A connection request can require,
prefer, or avoid a set of transmission characteristics such as fiber
transmission, radio transmission, satellite transmission, or compressed
voice transmission.  LC requirements for the connection request can be
determined from the SI or by other information derived from the signaling
message or dialed number.  The routing table logic allows the connection
request to skip those transmission routes that have links that have
undesired characteristics and to seek a best match for the requirements of
the connection request.

In the third step, the VNET routing table determines which network capacity
is allowed to be selected for each connection request.  In using the VNET
routing table to select network capacity, the ON selects a first choice
route based on the routing table selection rules.  Whether or not bandwidth
can be allocated to the connection request on the first choice route is
determined by the QoS resource management rules given below.  If a first
choice route cannot be accessed, the ON may then try alternate routes
determined by FR, TDR, SDR, or EDR route selection rules outlined in ANNEX
2.  Whether or not bandwidth can be allocated to the connection request on
the alternate route again is determined by the QoS resource management rules
now described.  

3.3	Bandwidth-Allocation, Bandwidth-Protection, and Priority-Routing
Issues

This Section specifies the resource allocation controls and priority
mechanisms, and the information needed to support them.  In the illustrative
QoS resource management method, the connection/bandwidth-allocation
admission control for each link in the route is performed based on the
status of the link. The ON may select any route for which the first link is
allowed according to QoS resource management criteria.  If a subsequent link
is not allowed, then a release with crankback/bandwidth-not-available is
used to return to the ON and select an alternate route.  This use of an EDR
path selection, which entails the use of the release with
crankback/bandwidth-not-available mechanism to search for an available path,
is an alternative to SDR path selection, which may entail flooding of
frequently changing link state parameters such as available-cell-rate.  The
tradeoffs between EDR with crankback and SDR with link-state flooding are
further discussed in ANNEX 5.  In particular, when EDR path selection with
crankback is used in lieu of SDR path selection with link-state flooding,
the reduction in the frequency of such link-state parameter flooding allows
for larger peer group sizes.  This is because link-state flooding can
consume substantial processor and link resources, in terms of message
processing by the processors and link bandwidth consumed by messages on the
links. 

Two cases of QoS resource management are considered in this Section:
per-virtual-network (per-VNET) management and per-flow management.  In the
per-VNET method, as illustrated for IP-based MPLS networks, CRLSP bandwidth
is managed to meet the bandwidth needs of the services on each VNET.
Individual flows are allocated bandwidth within the CRLSPs accordingly, as
CRLSP bandwidth is available.  In the per-flow method, bandwidth is
allocated to each individual flow, such as in SVC set-up in an ATM-based
network, from the overall pool of bandwidth, as the total pool bandwidth is
available.  A fundamental principle applied in these bandwidth allocation
methods is the use of bandwidth reservation techniques.  We first review
bandwidth reservation principles and then discuss per-VNET and per-flow QoS
resource allocation.


3.3.1	Dynamic Bandwidth Reservation Principles

Bandwidth reservation (the TDM-network terminology is "trunk reservation")
gives preference to the preferred traffic by allowing it to seize any idle
bandwidth in a link, while allowing the non-preferred routing traffic to
only seize bandwidth if there is a minimum level of idle bandwidth
available, where the minimum-bandwidth threshold is called the reservation
level.  P. J. Burke [Bur61] first analyzed bandwidth reservation behavior
from the solution of the birth-death equations for the bandwidth
reservation model.  Burke's model showed the relative lost-traffic level for
preferred traffic, which is not subject to bandwidth reservation
restrictions, as compared to non-preferred traffic, which is subject to the
restrictions.  Figure 3.1 illustrates the percent lost traffic of preferred
and non-preferred traffic on a typical link with 10 percent traffic
overload.  It is seen that the lost traffic for the preferred traffic is near
zero, whereas the lost traffic for the non-preferred traffic is much higher,
and this situation is
maintained across a wide variation in the percentage of the preferred
traffic load. Hence, bandwidth reservation protection is robust to traffic
variations and provides significant dynamic protection of particular streams
of traffic.
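
The basic reservation rule can be stated compactly; the following sketch
(Python, illustrative only) admits preferred traffic onto any idle bandwidth
while admitting non-preferred traffic only when at least the reservation
level remains idle after the request is carried.

   def admit(idle_bw, request_bw, reservation_level, preferred):
       # Reject outright if the bandwidth simply is not there.
       if idle_bw < request_bw:
           return False
       # Preferred traffic may seize any idle bandwidth.
       if preferred:
           return True
       # Non-preferred traffic must leave the reservation level idle.
       return idle_bw - request_bw >= reservation_level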

Figure 3.1.  Dynamic Bandwidth Reservation Performance under 10% Overload

Bandwidth reservation is a crucial technique used in nonhierarchical
networks to prevent "instability," which can severely reduce throughput in
periods of congestion, perhaps by as much as 50 percent of the
traffic-carrying capacity of a network [E.525]. The phenomenon of
instability has an interesting mathematical solution to network flow
equations, which has been presented in several studies [NaM73, Kru82,
Aki84].  It is shown in these studies that nonhierarchical networks exhibit
two stable states, or bistability, under congestion and that networks can
transition between these stable states in a network congestion condition
that has been demonstrated in simulation studies. A simple explanation of
how this bistable phenomenon arises is that under congestion, a network is
often not able to complete a connection request on the direct or shortest
route, which in this example consists of a single link.  If alternate routing
is allowed, such as on longer, multiple-link routes, which are assumed in
this example to consist of two links, then the connection request might be
completed on a two-link route selected from among a large number of two-link
route choices, only one of which needs sufficient idle bandwidth on both
links to be used to route the connection.  Because this two-link connection
now occupies resources that could perhaps otherwise be used to complete two
one-link connections, this is a less efficient use of network resources
under congestion. In the event that a large fraction of all connections
cannot complete on the direct link but instead occupy two-link routes, the
total network throughput capacity is reduced by one-half because most
connections take twice the resources needed. This is one stable state; that
is, most or all connections use two links. The other stable state is that
most or all connections use one link, which is the desired condition.

Bandwidth reservation is used to prevent this unstable behavior by having
the preferred traffic on a link be the direct traffic on the primary,
shortest route, and the non-preferred traffic, subjected to bandwidth
reservation restrictions as described above, be the alternate-routed traffic
on longer routes. In this way the alternate-routed traffic is inhibited from
selecting longer alternate routes when sufficient idle trunk capacity is not
available on all links of an alternate-routed connection, which is the
likely condition under network and link congestion. Mathematically, the
studies of bistable network behavior have shown that bandwidth reservation
used in this manner to favor direct shortest connections eliminates the
bistability problem in nonhierarchical networks and allows such networks to
maintain efficient utilization under congestion by favoring connections
completed on the shortest route.  For this reason, dynamic trunk reservation
is universally applied in nonhierarchical networks [E.529], and often in
hierarchical networks [Mum76]. 

There are differences in how and when bandwidth reservation is applied,
however, such as whether the bandwidth reservation for direct-routed
connections is in place at all times or whether it is dynamically triggered
to be used only under network or link congestion. This is a complex network
throughput trade-off issue, because bandwidth reservation can lead to some
loss in throughput under normal, low-congestion conditions. This loss in
throughput arises because if bandwidth is reserved for connections on the
shortest route, but these calls do not arrive, then the capacity is
needlessly reserved when it might be used to complete alternate-routed
traffic that might otherwise be blocked. However, under network congestion,
the use of bandwidth reservation is critical to preventing network
instability, as explained above [E.525].

It is beneficial for bandwidth reservation techniques to be included in
IP-based and ATM-based routing methods, in order to ensure the efficient use
of network resources, especially under congestion conditions.  Currently
recommended route-selection methods, such as methods for optimized multipath
for traffic engineering in IP-based MPLS networks [V99], or route selection
in ATM-based PNNI networks [ATM960055], give no guidance on the necessity
for using bandwidth-reservation techniques.  Such guidance is essential for
acceptable network performance.

Examples are given in this ANNEX for dynamically triggered bandwidth
reservation techniques, where bandwidth reservation is triggered only under
network congestion.  Such methods are shown to be effective in striking a
balance between protecting network resources under congestion and ensuring
that resources are available for sharing when conditions permit.  In Section
3.7 the phenomenon of network instability is illustrated through simulation
studies, and the effectiveness of bandwidth reservation in eliminating the
instability is demonstrated. Bandwidth reservation is also shown to be an
effective technique to share bandwidth capacity among services integrated on
a direct link, where the reservation in this case is invoked to prefer
direct link capacity for one particular service as opposed to another
service when network and link congestion are encountered. These two aspects
of bandwidth reservation, that is, for avoiding instability and for sharing
bandwidth capacity among services, are illustrated in Section 3.7.

3.3.2	Per-Virtual-Network QoS Resource Allocation

Through the use of bandwidth allocation, reservation, and congestion control
techniques, QoS resource management can provide good network performance
under normal and abnormal operating conditions for all services sharing the
integrated network [A98].  Such methods have been analyzed in recent
modeling studies for IP-based networks [ACFM99], and in this draft these
IP-based QoS resource management methods are described.  However, the
intention here is to illustrate the general principles of QoS resource
management and not to recommend a specific implementation.

As illustrated in Figure 3.2, in the multi-service, QoS resource management
network, bandwidth is allocated to the individual VNETs (high-priority key
services VNETs, normal-priority services VNETs, and best-effort low-priority
services VNETs).  

Figure 3.2  Virtual Network (VNET) Bandwidth Management

This allocated bandwidth is protected by bandwidth reservation methods, as
needed, but otherwise shared.  Each ON monitors VNET bandwidth use on each
VNET CRLSP, and determines when VNET CRLSP bandwidth needs to be increased
or decreased.  Changes in VNET bandwidth capacity are determined by the ONs
based on the overall aggregated bandwidth demand for VNET capacity (not on a
per-connection demand basis).  Based on the aggregated bandwidth
demand, these ONs make periodic discrete changes in bandwidth allocation,
that is, either increase or decrease bandwidth on the CRLSPs constituting
the VNET bandwidth capacity. For example, if connection requests are made
for VNET CRLSP bandwidth that exceeds the current CRLSP bandwidth
allocation, the ON initiates a bandwidth modification request on the
appropriate CRLSP(s).  For example, this bandwidth modification request may
entail increasing the current CRLSP bandwidth allocation by a discrete
increment of bandwidth denoted here as delta-bandwidth (DBW).  DBW is a
large enough bandwidth change so that modification requests are made
relatively infrequently.  Also, the ON periodically monitors CRLSP bandwidth
use, such as once each minute, and if bandwidth use falls below the current
CRLSP allocation the ON initiates a bandwidth modification request to
decrease the CRLSP bandwidth allocation by a unit of bandwidth such as DBW.
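
The periodic, discrete bandwidth adjustment described above can be sketched
as follows (Python, illustrative only); the DBW value and the rule for when
to decrease are assumptions chosen so that modification requests remain
relatively infrequent.

   DBW = 1000.0   # illustrative delta-bandwidth increment, in kbps

   def adjust_crlsp_bandwidth(allocated, in_use, demand):
       # Demand above the current allocation triggers a modification
       # request to increase the CRLSP allocation by one DBW increment.
       if demand > allocated:
           return allocated + DBW
       # The periodic check (e.g., once a minute) decreases the
       # allocation by DBW when use has fallen sufficiently below it.
       if allocated - in_use >= DBW:
           return allocated - DBW
       return allocated   # no modification request this interval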

In making a VNET bandwidth allocation modification, the ON determines the
QoS resource management parameters including the VNET priority (key, normal,
or best-effort), VNET bandwidth-in-use, VNET bandwidth allocation
thresholds, and whether the CRLSP is a first choice CRLSP or alternate
CRLSP.  These parameters are used to access a VNET depth-of-search (DoS)
table to determine a DoS load state threshold (Pi), or the "depth" to which
network capacity can be allocated for the VNET bandwidth modification
request. In using the DoS threshold to allocate VNET bandwidth capacity, the
ON selects a first choice CRLSP based on the routing table selection rules.


Path selection in this IP network illustration may use open shortest path
first (OSPF) for intra-domain routing.  In OSPF-based layer 3 routing, as
illustrated in Figure 3.3, ON A determines a list of shortest paths by
using, for example, Dijkstra's algorithm.  

This path list could be determined based on administrative weights of each
link, which are communicated to all nodes within the autonomous system (AS)
domain.  These administrative weights may be set, for example, to [1 +
epsilon x distance], where epsilon is a factor giving a relatively smaller
weight to the distance in comparison to the hop count.   The ON selects a
path from the list based on, for example, FR, TDR, SDR, or EDR path
selection, as discussed in ANNEX 2.  
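
For concreteness, the following sketch (Python, illustrative only) computes
administrative weights of the form [1 + epsilon x distance] and a shortest
path with Dijkstra's algorithm; the topology, distances, and the epsilon
value are assumptions for the example.

   import heapq

   EPSILON = 0.01   # gives distance a smaller weight than hop count

   def admin_weight(distance_km):
       return 1.0 + EPSILON * distance_km

   def shortest_path(links, src, dst):
       # Dijkstra's algorithm over administrative weights.
       # 'links' maps node -> list of (neighbor, distance_km).
       dist, prev = {src: 0.0}, {}
       heap = [(0.0, src)]
       while heap:
           d, u = heapq.heappop(heap)
           if u == dst:
               break
           if d > dist.get(u, float("inf")):
               continue
           for v, km in links.get(u, []):
               nd = d + admin_weight(km)
               if nd < dist.get(v, float("inf")):
                   dist[v], prev[v] = nd, u
                   heapq.heappush(heap, (nd, v))
       path, node = [dst], dst
       while node != src:
           node = prev[node]
           path.append(node)
       return path[::-1]

   # Topology loosely patterned on Figure 3.3 (distances are assumed).
   links = {"A": [("B", 100), ("C", 80)], "B": [("E", 120)],
            "C": [("D", 90)], "D": [("E", 70)]}
   assert shortest_path(links, "A", "E") == ["A", "B", "E"]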

For example, in using the first CRLSP A-B-E in Figure 3.3, ON A sends a CRLDP
label request message to VN B, which in turn forwards the CRLDP label
request message to DN E.  VN B and DN E are passed in the explicit routing
(ER) parameter contained in the CRLDP label request message.  Each node in
the CRLSP reads the ER information, and passes the CRLDP label request
message to the next node listed in the ER parameter.  If the first path is
blocked at any of the links in the path, a CRLDP notification message with a
crankback parameter is returned to ON A which can then attempt the next
path.  If FR is used, then this path is the next path in the shortest path
list, for example path A-C-D-E.  If TDR is used, then the next path is the
next path in the routing table for the current time period.  If SDR is used,
OSPF implements a distributed method of flooding link status information,
which is triggered either periodically and/or by crossing load state
threshold values.  This method of distributing link status information can
be resource intensive and may not be any more efficient than simpler path
selection methods such as EDR.  If EDR is used, then the next path is the
last successful path, and if that path is unsuccessful another alternate
path is searched out according to the EDR path selection method.
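
A minimal sketch of the crankback-driven retry described above is given
below (Python, illustrative only).  The attempt_crlsp function stands in for
sending the label request along the explicit route; the EDR option of trying
the last successful path first is shown, and the example link states are
assumptions.

   def route_with_crankback(paths, attempt_crlsp, last_successful=None):
       # Order candidate CRLSPs; with EDR the last successful path is
       # tried first, otherwise the order given in 'paths' is used.
       ordered = list(paths)
       if last_successful in ordered:
           ordered.remove(last_successful)
           ordered.insert(0, last_successful)
       for path in ordered:
           if attempt_crlsp(path):     # label request admitted end to end
               return path
       return None                     # every candidate was cranked back

   # Example: the first path A-B-E is blocked on link B-E and cranked
   # back, so the alternate path A-C-D-E is used.
   blocked = {("B", "E")}
   def attempt(path):
       return all((u, v) not in blocked for u, v in zip(path, path[1:]))

   assert route_with_crankback([["A", "B", "E"], ["A", "C", "D", "E"]],
                               attempt) == ["A", "C", "D", "E"]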

Hence in using the selected CRLSP, the ON sends the explicit route, the
requested traffic parameters (peak data rate, committed data rate, etc.), a
DoS-parameter, and a modify-parameter in the CRLDP label request message to
each VN and the DN in the selected CRLSP.  Whether or not bandwidth can be
allocated to the bandwidth modification request on the first choice CRLSP is
determined by each VN applying the QoS resource management rules.  These
rules entail that the VN determine the CRLSP link states, based on bandwidth
use and bandwidth available, and compare the link load state to the DoS
threshold Pi sent in the CRLDP parameters, as further explained below.  If
the first choice CRLSP cannot admit the bandwidth change, a VN or DN
returns control to the ON through the use of the crankback-parameter in the
CRLDP notification message.  At that point the ON may then try an alternate
CRLSP.  Whether or not bandwidth can be allocated to the bandwidth
modification request on the alternate path again is determined by the use of
the DoS threshold compared to the CRLSP link load state at each VN.
Priority queuing is used during the time the CRLSP is established, and at
each link the queuing discipline is maintained such that the packets are
given priority according to the VNET traffic priority. 

Hence determination of the CRLSP link load states is necessary for QoS
resource management to select network capacity on either the first choice
CRLSP or alternate CRLSPs.  Four link load states are distinguished: lightly
loaded (LL), heavily loaded (HL), reserved (R), and busy (B).  Management of
CRLSP capacity uses the link state model and the DoS model to determine if a
bandwidth modification request can be accepted on a given CRLSP.  The
allowed DoS load state threshold Pi determines if a bandwidth modification
request can be accepted on a given link to an available bandwidth "depth."
In setting up the bandwidth modification request, the ON encodes the DoS
load state threshold allowed on each link in the DoS-parameter Pi, which is
carried in the CRLDP label request.  If a CRLSP link is encountered at a VN
in which the idle link bandwidth and link load state are below the allowed
DoS load state threshold Pi, then the VN sends a CRLDP notification message
with the crankback-parameter to the ON, which can then route the bandwidth
modification request to an alternate CRLSP choice.  For example, in Figure
3.3, CRLSP A-B-E may be the first path tried where link A-B is in the LL
state and link B-E is in the R state.  If the DoS load state allowed is
Pi=HL or better, then the CRLSP bandwidth modification request in the CRLDP
label request message is routed on link A-B but will not be admitted on link
B-E, wherein the CRLSP bandwidth modification request will be cranked back
in the CRLDP notification message to the originating node A to try alternate
CRLSP A-C-D-E.  Here the CRLSP bandwidth modification request succeeds since
all links have a state of HL or better.  

The DoS load state threshold is a function of bandwidth-in-progress, VNET
priority, and bandwidth allocation thresholds, as follows:

Table 3.1
Determination of Depth-of-Search (DoS) Load State Threshold (Per-VNET
Bandwidth Allocation)

Load State Key Priority         Normal Priority VNET             Best Effort
Allowed i  VNET                 First Choice     Alternate        Priority
                                CRLSP            CRLSP            VNET
R          BWIPi <= 2 x BWmaxi  BWIPi <= BWavgi  Not Allowed      Note 1
HL         BWIPi <= 2 x BWmaxi  BWIPi <= BWmaxi  BWIPi <= BWavgi  Note 1
LL         All BWIPi            All BWIPi        All BWIPi        Note 1

where

   BWIPi   =  bandwidth-in-progress on VNET i
   BWavgi  =  minimum guaranteed bandwidth required for VNET i to carry
              the average offered bandwidth load
   BWmaxi  =  the bandwidth required for VNET i to meet the
              blocking/delay probability grade-of-service objective for
              CRLSP bandwidth allocation requests
           =  1.1 x BWavgi
   Note 1  =  CRLSPs for the best effort priority VNET are allocated
              zero bandwidth; Diffserv queuing admits best effort
              packets only if there is available bandwidth on a link
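
The Table 3.1 rules can be expressed as a small function; the sketch below
(Python, illustrative only) returns the deepest link load state that a VNET
bandwidth modification request may use, interpreting the thresholds in the
table as "less than or equal to" comparisons.

   def dos_threshold(priority, bwip, bwavg, bwmax, first_choice=True):
       # Returns 'R', 'HL', or 'LL' (the deepest load state allowed), or
       # None for the best effort VNET, which is allocated no bandwidth.
       if priority == "key":
           return "R" if bwip <= 2 * bwmax else "LL"
       if priority == "normal" and first_choice:
           if bwip <= bwavg:
               return "R"
           return "HL" if bwip <= bwmax else "LL"
       if priority == "normal":                  # alternate CRLSP
           return "HL" if bwip <= bwavg else "LL"
       return None                               # best effort (Note 1)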

Note that BWIP, BWavg, and BWmax are specified per ON-DN pair, and that the
QoS resource management method provides for a key priority VNET, a normal
priority VNET, and a best effort VNET.  Key services admitted by an ON on
the key VNET are given higher priority routing treatment by allowing greater
path selection DoS than normal services admitted on the normal VNET.  Best
effort services admitted on the best effort VNET are given lower priority
routing treatment by allowing lesser path selection DoS than normal.  The
quantities BWavgi are computed periodically, such as every week w, and can
be exponentially averaged over a several week period, as follows:

	BWavgi(w)	=	.5 x  BWavgi(w-1) + .5 x [ BWIPavgi(w) +
				BWOVavgi(w) ]
	BWIPavgi	=	average bandwidth-in-progress across a load
				set period on VNET i
	BWOVavgi	=	average bandwidth allocation request
				rejected (or overflow) across a load set
				period on VNET i

where all variables are specified per ON-DN pair, and where BWIPi and BWOVi
are averaged across various load set periods, such as morning, afternoon,
and evening averages for weekday, Saturday, and Sunday,  to obtain BWIPavgi
and BWOVavgi. 
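
As a small illustration (Python), the weekly smoothing above is simply:

   def update_bwavg(bwavg_prev, bwip_avg, bwov_avg):
       # BWavgi(w) = .5 x BWavgi(w-1) + .5 x [BWIPavgi(w) + BWOVavgi(w)],
       # with the averages taken across the load set periods.
       return 0.5 * bwavg_prev + 0.5 * (bwip_avg + bwov_avg)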

Illustrative values of the thresholds to determine link load states are as
follows:


Table 3.2
Determination of Link Load State

Name of State         State   Condition
Busy                  B       ILBWk < DBW
Reserved              R       ILBWk <= Rthrk
Heavily Loaded        HL      Rthrk < ILBWk <= HLthrk
Lightly Loaded        LL      HLthrk < ILBWk

where

   ILBWk   =  idle link bandwidth on link k
   DBW     =  delta bandwidth requirement for a bandwidth allocation
              request
   Rthrk   =  reservation bandwidth threshold for link k
           =  N x .05 x TBWk for bandwidth reservation level N
   HLthrk  =  heavily loaded bandwidth threshold for link k
           =  Rthrk + .05 x TBWk
   TBWk    =  the total bandwidth required on link k to meet the
              blocking/delay probability grade-of-service objective for
              bandwidth allocation requests on their first choice
              CRLSP.
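
The link load state determination and the depth-of-search check can be
sketched together as follows (Python, illustrative only); the thresholds
follow the definitions above, and the numeric depth ordering used to compare
states is an assumption of the example.

   def link_load_state(ilbw, dbw, tbw, n):
       # Classify a link from its idle bandwidth ILBW, per Table 3.2.
       rthr = n * 0.05 * tbw            # reservation threshold
       hlthr = rthr + 0.05 * tbw        # heavily loaded threshold
       if ilbw < dbw:
           return "B"                   # busy
       if ilbw <= rthr:
           return "R"                   # reserved
       if ilbw <= hlthr:
           return "HL"                  # heavily loaded
       return "LL"                      # lightly loaded

   DEPTH = {"LL": 1, "HL": 2, "R": 3}

   def admits(link_state, dos_threshold):
       # A request may use the link only if the link's state is within
       # the allowed depth-of-search (and the link is not busy).
       return link_state != "B" and DEPTH[link_state] <= DEPTH[dos_threshold]

   # Example from the text: with DoS threshold HL, a request is admitted
   # on a lightly loaded link A-B but cranked back from a reserved B-E.
   assert admits("LL", "HL") and not admits("R", "HL")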

QoS resource management implements bandwidth reservation logic to favor
connections routed on the first choice CRLSP in situations of link
congestion.  If link congestion (or blocking/delay) is detected, bandwidth
reservation is immediately triggered and the reservation level N is set for
the link according to the level of link congestion.  In this manner
bandwidth allocation requests attempting to alternate-route over a congested
link are subject to bandwidth reservation, and the first choice CRLSP
requests are favored for that link.  At the same time, the LL and HL link
state thresholds are raised accordingly in order to accommodate the reserved
bandwidth capacity N for the VNET. Figure 3.4 illustrates bandwidth
allocation and the mechanisms by which bandwidth is protected through
bandwidth reservation.  Under normal bandwidth allocation demands bandwidth
is fully shared, but under overloaded bandwidth allocation demands,
bandwidth is protected through the reservation mechanisms wherein each VNET
can use its allocated bandwidth.  Under failure, however, the reservation
mechanisms operate to give the key VNET its allocated bandwidth before the
normal priority VNET gets its bandwidth allocation.  As noted on Table 3.1,
the best effort low-priority VNET is not allocated bandwidth nor is
bandwidth reserved for the best effort VNET. Further illustrations are given
in Section 3.7 of the robustness of dynamic bandwidth reservation in
protecting the preferred bandwidth requests across wide variations in
traffic conditions.

Figure 3.4  Bandwidth Allocation, Protection, & Priority Routing

The reservation level N (for example, N may have 1 of 4 levels) is
calculated for each link k based on the link blocking/delay level of
bandwidth allocation requests.  The link blocking/delay level is equal to
the total requested but rejected (or overflow) link bandwidth allocation
(measured in total bandwidth), divided by the total requested link bandwidth
allocation, over the last periodic update interval, which is, for example,
every three minutes.  That is

	BWOVk		= 	total requested bandwidth allocation
				rejected (or overflow) on 
				link k
	BWOFk		= 	total requested or offered bandwidth
				allocation on link k
	LBLk		=	link blocking/delay level on link k
			=	BWOVk/BWOFk

If LBLk exceeds a threshold value, the reservation level N is calculated
accordingly.  The reserved bandwidth and link states are calculated based on
the total link bandwidth required on link k, TBWk, which is computed
on-line, for example every 1-minute interval m, and approximated as follows:

	TBWk(m)		=	.5 x  TBWk(m-1) + 
				.5 x [ 1.1 x  TBWIPk(m) +  TBWOVk(m)]
	TBWIPk		=	sum of the bandwidth in progress
				(BWIPi) for all VNETs i
				for bandwidth requests on their
				first choice CRLSP over link k
	TBWOVk		=	sum of bandwidth overflow (BWOVi) for all
				VNETs i
				for bandwidth requests on their
				first choice CRLSP over link k

Therefore the reservation level and load state boundary thresholds are
proportional to the estimated required bandwidth load, which means that the
bandwidth reserved and the bandwidth required to constitute a lightly loaded
link rise and fall with the bandwidth load, as, intuitively, they should.
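
A sketch of this periodic computation is given below (Python, illustrative
only); the mapping from the link blocking/delay level LBL to the reservation
level N uses example thresholds that are assumptions, not values taken from
the text.

   def reservation_level(bwov, bwof, lbl_thresholds=(0.01, 0.05, 0.10, 0.20)):
       # LBL = rejected (overflow) bandwidth / offered bandwidth over the
       # last update interval (e.g., three minutes); N is 0..4.
       lbl = bwov / bwof if bwof > 0 else 0.0
       return sum(1 for t in lbl_thresholds if lbl > t)

   def update_tbw(tbw_prev, tbwip, tbwov):
       # TBWk(m) = .5 x TBWk(m-1) + .5 x [1.1 x TBWIPk(m) + TBWOVk(m)]
       return 0.5 * tbw_prev + 0.5 * (1.1 * tbwip + tbwov)

   # The reservation and load state thresholds then follow TBWk, so they
   # rise and fall with the estimated bandwidth load.
   def thresholds(tbw, n):
       rthr = n * 0.05 * tbw
       return rthr, rthr + 0.05 * tbw   # (Rthrk, HLthrk)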

3.3.3	Per-Flow QoS Resource Allocation

Per-flow QoS resource management methods have been applied successfully in
TDM-based networks, where bandwidth allocation is determined by edge nodes
based on bandwidth demand for each connection request.  Based on the
bandwidth demand, these edge nodes make changes in bandwidth allocation
using, for example, the SVC-based QoS resource management approach
illustrated in this Section.  Again, the determination of the link load
states is used
for QoS resource management in order to select network capacity on either
the first choice path or alternate paths.  Also the allowed DoS load state
threshold determines if an individual connection request can be admitted on
a given link to an available bandwidth "depth."  In setting up each
connection request, the ON encodes the DoS load state threshold allowed on
each link in the connection-setup IE.  If a link is encountered at a VN in
which the idle link bandwidth and link load state are below the allowed DoS
load state threshold, then the VN sends a crankback/bandwidth-not-available
IE to the ON, which can then route the connection request to an alternate
route choice.  For example, in Figure 3.3, path A-B-E may be the first path
tried where link A-B is in the LL state and link B-E is in the R state.  If
the DoS load state allowed is HL or better, then the connection request is
routed on link A-B but will not be admitted on link B-E, wherein the
connection request will be cranked back to the originating node A to try
alternate route A-C-D-E.  Here the connection request succeeds since all
links have a state of HL or better.  


The illustrative DoS load state threshold is a function of
bandwidth-in-progress, service priority, and bandwidth allocation
thresholds, as follows:

Table 3.3
Determination of Depth-of-Search (DoS) Load State Threshold (Per-Flow
Bandwidth Allocation)

Load State Key Service          Normal Service                   Best Effort
Allowed i                       First Choice     Alternate        Service
                                Route            Route
R          BWIPi <= 2 x BWmaxi  BWIPi <= BWavgi  Not Allowed      Not Allowed
HL         BWIPi <= 2 x BWmaxi  BWIPi <= BWmaxi  BWIPi <= BWavgi  Not Allowed
LL         All BWIPi            All BWIPi        All BWIPi        All BWIPi

where

   BWIPi   =  bandwidth-in-progress on VNET i
   BWavgi  =  minimum guaranteed bandwidth required for VNET i to carry
              the average offered bandwidth load
   BWmaxi  =  the bandwidth required for VNET i to meet the
              blocking/delay probability grade-of-service objective
           =  1.1 x BWavgi

Note that all parameters are specified per ON-DN pair, and that the QoS
resource management method provides for key, normal, and best effort services.
Key services are given higher priority routing treatment by allowing greater
route selection DoS than normal services.  Best effort services are given
lower priority routing treatment by allowing lesser route selection DoS than
normal.  The quantities BWavgi are computed periodically, such as every week
w, and can be exponentially averaged over a several week period, as follows:

	BWavgi(w)=	.5 x  BWavgi(w-1) + .5 x [ BWIPavgi(w) +
			BWOVavgi(w) ]
	BWIPavgi=	average bandwidth-in-progress across a load
			set period on VNET i
	BWOVavgi=	average bandwidth overflow across a load set
			period 

where BWIPi and BWOVi are averaged across various load set periods, such as
morning, afternoon, and evening averages for weekday, Saturday, and Sunday,
to obtain BWIPavgi and BWOVavgi.  Illustrative values of the thresholds to
determine link load states are given in Table 3.2.

The illustrative QoS resource management method implements bandwidth
reservation logic to favor connections routed on the first choice route in
situations of link congestion.  If link blocking/delay is detected,
bandwidth reservation is immediately triggered and the reservation level N
is set for the link according to the level of link congestion.  In this
manner traffic attempting to alternate-route over a congested link is
subject to bandwidth reservation, and the first choice route traffic is
favored for that link.  At the same time, the LL and HL link state
thresholds are raised accordingly in order to accommodate the reserved
bandwidth capacity for the VNET.  The reservation level N (for example, N
may have 1 of 4 levels) is calculated for each link k based on the link
blocking/delay level and the estimated link traffic.  The link
blocking/delay level is equal to the equivalent bandwidth overflow count
divided by the equivalent bandwidth peg count over the last periodic update
interval, which is typically three minutes.  That is

	BWOVk		= 	equivalent bandwidth overflow count on link
				k
	BWPCk		= 	equivalent bandwidth peg count on link k
	LBLk		=	link blocking/delay level on link k
			=	BWOVk/BWPCk

If LBLk exceeds a threshold value, the reservation level N is calculated
accordingly.  The reserved bandwidth and link states are calculated based on
the total link bandwidth required on link k, TBWk, which is computed
on-line, for example every 1-minute interval m, and approximated as follows:

	TBWk(m)	=	.5 x  TBWk(m-1) + .5 x [ 1.1 x  TBWIPk(m) +
			TBWOVk(m) ]
	TBWIPk	=	sum of the bandwidth in progress (BWIPi) for
			all VNETs i
			for connections on their first choice route
			over link k
	TBWOVk	=	sum of bandwidth overflow (BWOVi) for all VNETs i
			for connections on their first choice route
			over link k

Therefore the reservation level and load state boundary thresholds are
proportional to the estimated required bandwidth traffic load, which means
that the bandwidth reserved and the bandwidth required to constitute a
lightly loaded link rise and fall with the traffic load, as, intuitively,
they should.

3.4	Priority Queuing

In addition to the QoS bandwidth management procedure for bandwidth
allocation requests, a QoS priority of service queuing capability is used
during the time connections are established on each of the three VNETs.  At
each link, a queuing discipline is maintained such that the packets being
served are given priority in the following order: key VNET services, normal
VNET services, and best effort VNET services. Following the MPLS CRLSP
bandwidth allocation setup and the application of QoS resource management
rules, the priority of service parameter and label parameter need to be sent
in each IP packet, as illustrated in Figure 3.5. The priority of service
parameter may be included in the type of service (ToS), or differentiated
services (DiffServ) [B98, ST98], parameter already in the IP packet header.
Another possible alternative is that the priority of service parameter might
be included in  the MPLS label or "shim" appended to the IP packet (this is
a matter for further study).  In either case, from the priority of service
parameter, the IP node can determine the QoS treatment based on the QoS
resource management (priority queuing) rules for key VNET packets, normal
VNET packets, and best effort VNET packets.  From the label parameter, the
IP node can determine the next node to route the IP packet to as defined by
the MPLS protocol.  In this way, the backbone nodes can have a very simple
per-packet processing implementation to implement QoS resource management
and MPLS routing.
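
The queuing discipline itself can be kept very simple; the following sketch
(Python, illustrative only) serves packets in strict priority order of their
VNET class, as carried in the ToS/DiffServ field or the MPLS label (the
class names follow the text; the queue mechanics are an assumption).

   from collections import deque

   PRIORITY_ORDER = ("key", "normal", "best-effort")

   class PriorityScheduler:
       def __init__(self):
           self.queues = {cls: deque() for cls in PRIORITY_ORDER}

       def enqueue(self, packet, vnet_class):
           # vnet_class is derived from the priority-of-service marking.
           self.queues[vnet_class].append(packet)

       def dequeue(self):
           # Serve key VNET packets first, then normal, then best effort.
           for cls in PRIORITY_ORDER:
               if self.queues[cls]:
                   return self.queues[cls].popleft()
           return None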


Figure 3.5  IP Packet Structure under MPLS Switching

3.5	Other QoS Resource Management Constraints 

Other QoS routing constraints are taken into account in the QoS resource
management and route selection methods in addition to bandwidth allocation,
bandwidth protection, and priority routing.  These include end-to-end
transfer delay, delay variation [G99a], and transmission quality
considerations such as loss, echo, and noise [D99, G99a, G99b].
Additionally, link capability (LC) selection allows connection requests to
be routed on specific transmission media that have the particular
characteristics required by these connection requests.  In general, a
connection request can require, prefer, or avoid a set of transmission
characteristics such as fiber optic or radio transmission, satellite or
terrestrial transmission, or compressed or uncompressed transmission.  The
routing table logic allows the connection request to skip links that have
undesired characteristics and to seek a best match for the requirements of
the connection request.  For any SI, a set of LC selection preferences  is
specified for the connection request. LC selection preferences can override
the normal order of selection of routes.  If an LC characteristic is
required, then any route with a link that does not have that characteristic
is skipped.  If a characteristic is preferred, routes having all links with
that characteristic are used first.  Routes having links without the
preferred characteristic will be used next.  An LC preference is set for the
presence or absence of a characteristic.  For example, if fiberoptic
transmission is required, then only routes with links having Fiberoptic=Yes
are used.  If we prefer the presence of fiberoptic transmission, then routes
having all links with Fiberoptic=Yes are used first, then routes having some
links with Fiberoptic=No.
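
The require/prefer/avoid logic described above can be sketched as a simple
filter and ordering of candidate routes (Python, illustrative only); the
characteristic names and route representation are assumptions for the
example.

   def order_routes(routes, required=None, preferred=None):
       # Each route is a list of links; each link is a dict of
       # characteristics, e.g., {"fiber": True, "satellite": False}.
       required, preferred = required or {}, preferred or {}

       def matches(route, prefs):
           return all(link.get(name) == value
                      for link in route for name, value in prefs.items())

       # Routes failing a required characteristic are skipped entirely;
       # routes whose links all match the preferences are tried first.
       eligible = [r for r in routes if matches(r, required)]
       return sorted(eligible, key=lambda r: not matches(r, preferred))

   # Example: fiber is preferred, so the all-fiber route is used first.
   r1 = [{"fiber": False}, {"fiber": True}]
   r2 = [{"fiber": True}, {"fiber": True}]
   assert order_routes([r1, r2], preferred={"fiber": True}) == [r2, r1]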

3.6	Interdomain QoS Resource Management

Interdomain routing can also apply class-of-service routing concepts and
increased routing flexibility.  It works synergistically with multiple
ingress/egress routing, can use link status information in combination with
call completion history to select paths, and can use the dynamic bandwidth
reservation techniques discussed in Section 3.3.1.

Interdomain routing can use the virtual network concept that enables service
integration by allocating bandwidth for services and using dynamic bandwidth
reservation controls. Therefore, bandwidth can be fully shared among virtual
networks in the absence of congestion. When a certain virtual network
encounters congestion, bandwidth is reserved to ensure that the virtual
network reaches its allocated bandwidth. Interdomain routing can employ
class-of-service routing capabilities including key service protection,
directional flow control, link selection capability, automatically updated
time-variable bandwidth allocation, and alternate routing capability through
the use of overflow paths and control parameters such as interdomain routing
load set periods. Link selection capability allows specific link
characteristics, such as fiber transmission, to be preferentially selected.
Thereby interdomain routing can improve performance and reduce the cost of
the interdomain network with flexible routing capabilities.

Interdomain routing tries to find an available alternate path based on load
state and call completion performance, in which the originating node uses
its link status to the via node, in combination with the call completion
performance from the via node to the destination node, in order to find the
least-loaded, most available path to route the call over. For each path, a
load state and a completion state are tracked. The load state indicates
whether the link bandwidth from the gateway node to the destination node is
lightly loaded, heavily loaded, reserved, or busy. The completion state
indicates whether a path is achieving above-average completion, average
completion, or below-average completion. The selection of a via path is
based on the load state and completion state. Alternate paths in the same
destination network domain and in a transit network domain are each
considered separately. Within a category of via paths, selection is based on
the load state and completion state. During times of congestion, the link
bandwidth to a destination node may be in a reserved state, in which case
the remaining link bandwidth is reserved for traffic to the destination
node. During periods of no congestion, capacity not needed by one virtual
network is made available to other virtual networks that are experiencing
loads above their allocation.

Interdomain routing uses discrete load states for links, such as lightly
loaded, heavily loaded, reserved, and busy. The idle link bandwidth in a
link is compared with the load state thresholds for the link to determine
its load condition.  This determination is made every time bandwidth in the
link is either seized or released. The load state thresholds used for a
particular link are based on the current estimates of four quantities: (1)
the current bandwidth in progress, BWIPki, for a particular virtual network
i; (2) the current node-to-node congestion level NNik for a particular
virtual network i; (3) the offered traffic load TLki to each of the other
terminating nodes in the network, which is based on the bandwidth in
progress BWIPki and the congestion to each terminating node, measured over
the last several minutes; and (4) BWavgki, which is the average virtual
network link bandwidth.

The load state thresholds for the lightly loaded and heavily loaded states
are set to fixed percentages of the BWavgki estimate. As such, the load
state thresholds rise as the BWavgki estimate to that node increases.
Higher load state thresholds reduce the chances that the link is used for
alternate path connections for calls to other nodes; this enables the link
to carry more traffic on the shortest, primary route, and therefore better
handle the call load between the nodes connected by the link. The reserved
state threshold is based on the reservation level Rki calculated on each
link, which in turn is based on the node-to-node congestion level.

As mentioned previously, completion rate is tracked on the various via paths
by taking account of the information relating either the successful
completion or noncompletion of a call through the via node. A noncompletion,
or failure, is scored for the call if a signaling release message is
received from the far end after the call seizes an egress link, indicating a
network incompletion cause value. If no such signaling release message is
received after the call seizes an egress trunk, then the call is scored as a
success. There is no completion count for a call that does not seize an
egress link. Each gateway node keeps a call completion history of the
success or failure of the last 10 calls using a particular via path, and it
drops the oldest record and adds the call completion for the newest call on
that path. Based on the number of call completions relative to the total
number of calls, a completion state is computed using the completion rate
thresholds given below.
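
A sliding-window completion tracker of this kind can be sketched as follows
(Python, illustrative only); the completion-rate thresholds used to classify
a path as high, average, or low completion are assumptions, since the
specific threshold values are not reproduced here.

   from collections import deque

   class CompletionTracker:
       def __init__(self, window=10):
           # Keep only the last 10 calls; the oldest record is dropped
           # automatically when a new call is scored.
           self.history = deque(maxlen=window)

       def record(self, completed):
           self.history.append(bool(completed))

       def state(self, high=0.9, low=0.7):
           # Illustrative thresholds: >= 90% is high completion,
           # >= 70% is average, below that is low completion.
           if not self.history:
               return "average"
           rate = sum(self.history) / len(self.history)
           if rate >= high:
               return "high"
           return "average" if rate >= low else "low"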

The completion state is dynamic in that the call completions are
continuously tracked, and if a path suddenly experiences a greater number of
call noncompletions, calls are routed over the path whose completion rate
represents the highest rate of call completions. 

Based on the completion states, calls are normally routed on the first path
with a high completion state with a lightly loaded egress link. If such a
path does not exist, then a path having an average completion state with a
lightly loaded egress link is selected, followed by a path having a low
completion state with a lightly loaded egress link. If no path with a
lightly loaded egress link is available, and if the search depth permits the
use of a heavily loaded egress link, the paths with heavily loaded egress
links are searched in the order of high completion, average completion, and
low completion. If no such paths are available, paths with reserved egress
links are searched in the same order, based on the call completion state, if
the search depth permits the use of a reserved egress link.
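
The selection order just described can be sketched as a ranked search over
the candidate via paths (Python, illustrative only); the tuple
representation of the candidates is an assumption of the example.

   LOAD_ORDER = ("LL", "HL", "R")            # lightly loaded first
   COMPLETION_ORDER = ("high", "average", "low")

   def select_via_path(candidates, max_depth="LL"):
       # candidates: list of (path, egress load state, completion state).
       # Only load states within the permitted search depth are examined.
       allowed = LOAD_ORDER[:LOAD_ORDER.index(max_depth) + 1]
       for load in allowed:
           for completion in COMPLETION_ORDER:
               for path, load_state, completion_state in candidates:
                   if load_state == load and completion_state == completion:
                       return path
       return None

   # Example: with search depth HL, a reserved egress link is never used,
   # so the heavily loaded path with the better completion state is chosen.
   cands = [("P1", "R", "high"), ("P2", "HL", "average"), ("P3", "HL", "low")]
   assert select_via_path(cands, max_depth="HL") == "P2"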

The rules for selecting direct shortest paths and via paths for a call are
governed by the availability of shortest path bandwidth and node-to-node
congestion. The path sequence consists of the shortest path, lightly loaded
alternate paths, heavily loaded alternate paths, and reserved alternate
paths. In general, greater path selection depth is allowed if congestion is
detected to the destination network domain, because more alternate path
choices serve to reduce the congestion. 

Interdomain routing includes the following steps for call establishment: (1)
The service identity, virtual network, link capability, and terminating node
are identified; (2) this information is used to select the corresponding
virtual network data, which include bandwidth reservation levels, load state
thresholds, and traffic measurements; and (3) an appropriate path is
selected through execution of path selection logic and the call is
established on the selected path. The various virtual networks share
bandwidth on the network links. On a weekly basis, interdomain routing
allocates the shortest path bandwidth to virtual network i, which is
referred to as BWavgki.

The gateway node automatically computes the BWavgki bandwidth allocations
once a week. A different allocation is used for various load set periods,
for example each of 36 two-hour load set periods: 12 weekday, 12 Saturday,
and 12 Sunday. The allocation of the bandwidth is based on a rolling average
of the traffic load for each of the virtual networks, to each destination
node, in each of the load set periods. BWavgki is based on average traffic
levels and is the minimum guaranteed bandwidth for virtual network i, but if
virtual network i is meeting its performance objective, other virtual
networks are free to share the BWavgki bandwidth allotted to it.

A node uses the quantities BWavgki, BWIPki, and NNki to dynamically allocate
link bandwidth to virtual networks. Under normal network conditions in which
there is no congestion, all virtual networks fully share all available
capacity. Because of this, the network has the flexibility to carry a call
overload between two nodes for one virtual network if the traffic loads for
other virtual networks are sufficiently below their design levels. An
extreme call overload between two nodes for one virtual network may cause
calls for other virtual networks to be blocked, in which case link bandwidth
is reserved to ensure that each virtual network gets the amount of bandwidth
allotted. This dynamic bandwidth reservation during times of overload
results in network performance that is analogous to having the link
bandwidth allocation between the two nodes dedicated for each virtual
network.

Sharing of bandwidth on a link is implemented by allowing calls on virtual
network i to seize bandwidth on the link if the bandwidth in progress BWIPki
is below the level BWavgki. However, if BWIPki is equal to or greater than
BWavgki, calls on virtual network i can seize a virtual trunk on the direct
link only when the idle-link bandwidth (ILBW) on the link is greater than
the bandwidth reserved by other virtual networks that are not meeting their
performance objectives.

Key services in interdomain routing are given preferential treatment on the
shortest path, on which the reserved bandwidth is kept separately for key
virtual networks, RBW (key VNETs), as well as for all virtual networks.  For
key virtual networks, if BWIPki < BWavgki, then idle bandwidth on the
shortest path can always be seized. An additional restriction, however, is
imposed in selecting shortest path capacity for calls for normal services.
That is, if BWIPki < BWavgki, then we select bandwidth on the shortest path
only if ILBW >= ri + RBW (key VNETs).  This additional restriction allows
preferential treatment for key services, especially under network failures
in which there is insufficient capacity to complete all calls.  Analogous
rules govern the use of heavily loaded paths and reserved paths.  Here
again, key services are given preferential treatment, but only up to a
maximum level of key service traffic. This choking mechanism for selecting
via path capacity is necessary to limit the total capacity actually
allocated to key service traffic.

In general, greater search depth is allowed if congestion is detected from
an originating gateway node to destination gateway node, because more
alternate path choices serve to reduce the congestion, and greater
dependence on alternate routing is needed to meet network congestion
objectives. The key service protection mechanism provides an effective
network capability for service protection. A constraint is that key service
traffic should be a relatively small fraction (preferably less than 20
percent) of total network traffic. With class-of-service routing
administration, the provisioning of normal services and key services routing
logic for existing and new services can be flexibly supported via the
interdomain routing administrative process, without software development in
the nodes, once the marketing/service decision is made on the service to be
offered.

Link capability selection allows calls to be routed on specific links that
have the particular characteristics required by these calls. In general, a
call can require, prefer, or avoid a set of link characteristics such as
fiberoptic or radio transmission, satellite or terrestrial transmission, or
compressed or uncompressed transmission. The link capability selection
requirements for the call can be determined by the service identity of the
call.  The link selection logic allows the call to skip links that have
undesired characteristics and to seek a best match for the requirements of
the call.  For any service identity, a set of link capability selection
preferences may be specified for the call. Link capability selection
preferences override the normal order of selection of links, which is
automatically derived by the interdomain routing logic or can be provisioned
by input parameters. Link capability selection preferences allow specific
services to use the links in a different order.

For any call, link capability selection preferences may be set based on the
service identity of the call. If a characteristic is required for a call,
then any link that does not have that characteristic is skipped. If a
characteristic is preferred, links with that characteristic are used first.
Links without the preferred characteristic will be used next, but only if no
links with the preferred characteristic are available. A preference can be
set for the presence or absence of a characteristic. For example, if the
absence of satellite is required, then only link bandwidth with Satellite=No
is used. If we prefer the absence of satellite, then link bandwidth with
Satellite=No is used first, followed by link bandwidth with Satellite=Yes.
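
A minimal sketch of such require/prefer link capability selection is given
below (Python).  The data representation and function name are illustrative
assumptions only, not part of any signaling protocol.

   def order_links_by_capability(links, required=None, preferred=None):
       # Hedged sketch of link capability selection.  'links' is a list of
       # dicts whose keys are link characteristics (e.g., {'Satellite': 'No'});
       # 'required' and 'preferred' map a characteristic to the desired value.
       required = required or {}
       preferred = preferred or {}
       # Skip any link that does not satisfy a required characteristic.
       eligible = [l for l in links
                   if all(l.get(k) == v for k, v in required.items())]
       # Among eligible links, links matching the preferences are used first.
       def preference_misses(link):
           return sum(1 for k, v in preferred.items() if link.get(k) != v)
       return sorted(eligible, key=preference_misses)

   # Example: the absence of satellite is preferred (not required).
   links = [{'Satellite': 'Yes'}, {'Satellite': 'No'}]
   print(order_links_by_capability(links, preferred={'Satellite': 'No'}))
   # -> [{'Satellite': 'No'}, {'Satellite': 'Yes'}]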

Interdomain routing discussed in this Section therefore extends the call
routing, connection routing, and QoS resource management concepts to routing
between network domains.

3.7	Modeling of Traffic Engineering Methods

In this Section, we again use the full-scale national network model
developed in ANNEX 2 to study various TE scenarios and tradeoffs.  The
135-node national model is illustrated in Figure 2.9, the multiservice
traffic demand model is summarized in Table 2.1, and the cost model is
summarized in Table 2.2.

3.7.1	Example of Bandwidth Reservation Methods

As discussed in Section 3.3.1, dynamic bandwidth reservation can be used to
favor one category of traffic over another category of traffic.  A simple
example of the use of this method is to reserve bandwidth in order to prefer
traffic on the shorter primary routes over traffic using longer alternate
routes.  This is most efficiently done by using a method which reserves
bandwidth only when congestion exists on links in the network.  We now give
illustrations of this method, and compare the performance of a network in
which bandwidth reservation is used under congestion to the case when
bandwidth reservation is not used.  

In the example, traffic is first routed on the shortest route, and then
allowed to alternate route on longer routes if the primary route is not
available.  In the case where bandwidth reservation is used, five percent of
the link bandwidth is reserved for traffic on the primary route when
congestion is present on the link. 
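
As an illustration, the admission decision on a single link under this
scheme could look like the following sketch (Python).  The five-percent
default follows the example above; the parameter names and the function
itself are assumptions made only for illustration.

   def admit_on_link(total_bw, busy_bw, request_bw, on_primary_route,
                     congested, reservation_fraction=0.05):
       # Hedged sketch: a fraction of the link bandwidth is reserved for
       # traffic on the primary (shortest) route, but only while congestion
       # is present on the link.
       idle_bw = total_bw - busy_bw
       reserved = reservation_fraction * total_bw if congested else 0.0
       if on_primary_route:
           # Primary-route traffic may use any idle bandwidth.
           return idle_bw >= request_bw
       # Alternate-routed traffic must leave the reserved bandwidth idle.
       return idle_bw >= request_bw + reserved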

Table 3.4 illustrates the performance of bandwidth reservation methods for a
high-day network load pattern.  This is the case of multilink path routing
being used to set up per-flow CRLSPs in a sparse network topology.

Table 3.4
Performance of Dynamic Bandwidth Reservation Methods for CRLSP Setup
(Percent Lost/Delayed Traffic under Overload)
(Per-Flow Multilink Path Routing in Sparse Network Topology; 135-Node
Multiservice Network Model)

Overload Factor	Without Bandwidth Reservation	With Bandwidth Reservation
7		11.94				3.86	
8		22.85				9.66
10		37.74				24.78


We can see from the results of Table 3.4 that performance improves when
bandwidth reservation is used.  The reason for the poor performance without

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX3-19]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

bandwidth reservation is due to the lack of reserved capacity to favor
traffic routed on the more direct primary routes under network congestion
conditions.  Without bandwidth reservation nonhierarchical networks can
exhibit unstable behavior in which essentially all connections are
established on longer alternate routes as opposed to shorter primary routes,
which greatly reduces network throughput and increases network congestion
[Aki84, Kru82, NaM73].  If we add the bandwidth reservation mechanism, then
performance of the network is greatly improved.  

Another example is given in Table 3.5, where 2-link state dependent routing
(SDR) is used in a meshed network topology.  In this case, the average
business day loads for a 65-node national network model were inflated
uniformly by 30 percent [A98]. The Table gives the average hourly lost
traffic due to blocking of connection admissions in load-set-periods 2, 3,
and 5, which correspond to the two early morning busy hours and the
afternoon busy hour. 


Table 3.5
Performance of Dynamic Bandwidth Reservation Methods
(Percent Lost Traffic under 30% Overload)
(Per-Flow 2-link SDR in Meshed Network Topology; 65-Node Network Model)

Hour	Without Bandwidth Reservation	With Bandwidth Reservation	
2	12.19				0.22	
3	22.38				0.18
5	18.90				0.24


Again, we can see from the results of Table 3.5 that performance
dramatically improves when bandwidth reservation is used.  A clear
instability arises when bandwidth reservation is not used, because under
congestion the network settles into a state in which virtually all traffic
occupies 2 links instead of 1 link.  When bandwidth reservation is used,
flows are much more likely to be routed on a 1-link path, because the
bandwidth reservation mechanism makes it less likely that a 2-link path can
be found in which both links have idle capacity in excess of the reservation
level.

A performance comparison is given in Table 3.6 for a single link failure in
a 135-node design averaged over 5 network busy hours, for the case without
bandwidth reservation and with bandwidth reservation.  Clearly the use of
bandwidth reservation protects the performance of each virtual network
class-of-service category.

Table 3.6
Performance of Dynamic Bandwidth Reservation Methods
(Percent Lost/Delayed Traffic under DNVR-OKBK Link Failure)
(Multilink STT-EDR; 135-Node Network Model)

3.7.2	Comparison of Per-Virtual-Network & Per-Flow QoS Resource Management

Here we use the 135-node model to compare the per-virtual-network methods of
QoS resource management, as described in Section 3.3.2, and the per-flow
methods described in Section 3.3.3.  We look at these two cases in Figure
3.6, which illustrates the case of per-virtual-network CRLSP bandwidth
allocation and the case of per-flow CRLSP bandwidth allocation.  The two
figures compare the
performance in terms of lost or delayed traffic under a focused overload

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX3-20]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

scenario on the Oakbrook (OKBK), IL node (such as might occur, for example,
with a radio call-in give-away offer).  The size of the focused overload is
varied from the normal load (1X case) to a 10 times overload of the traffic
to OKBK (10X case).  Here a fixed routing (FR) CRLSP bandwidth allocation is
used for both the per-flow CRLSP bandwidth allocation case and the
per-virtual-network bandwidth allocation case.  The results show that the
per-flow and per-virtual-network bandwidth allocation performance is
similar; however, the improved performance of the key priority traffic and
normal priority traffic in relation to the best-effort priority traffic is
clearly evident.

Figure 3.6  Performance under Focused Overload on OKBK Switch

The performance analyses for overloads and failures for the per-flow and
per-virtual-network bandwidth allocation are now examined, in which event
dependent routing (EDR) with success-to-the-top (STT) path selection is
used.  Again the simulations include call admission control with QoS
resource management, in which we distinguish the key services, normal
services, and best-effort services as indicated in the tables below.  Table
3.7 gives performance results for a 30% general overload, Table 3.8 gives
performance results for a six-times overload on a single network node, and
Table 3.9 gives performance results for a single transport link failure.
Performance analysis results show that the multilink STT-EDR per-flow
bandwidth allocation and per-virtual-network bandwidth allocation options
perform similarly under overloads and failures.

Table 3.7: 30% General Overload (% Lost/Delayed Traffic)

Table 3.8: 6X Focused Overload on OKBK (% Lost/Delayed Traffic)

Table 3.9: Failure on CHCG-NYCM Link (% Lost/Delayed Traffic)

We also investigate the performance of hierarchical network designs, which
represent the topological configuration to be expected with multi-area (or
multi-autonomous-system (multi-AS), or multi-domain) networks.  In Figure
3.7 we show the model considered, which consists of 135 edge nodes each
homed onto one of 21 backbone nodes.  

Figure 3.7  Hierarchical Network Model

Typically, the edge nodes may be grouped into separate areas or autonomous
systems, and the backbone nodes into another area or autonomous system.
Within each area a flat routing topology exists; however, between edge areas
and the backbone area a hierarchical routing relationship exists.  This
routing hierarchy is modeled for both the per-flow and per-virtual-network
bandwidth allocation examples, and the results are given in Tables 3.10 to
3.12 for the 30% general overload, 6-times focused overload, and link
failure examples, respectively.  We can see that the performance of the
hierarchical network case is substantially worse than the flat network
model, which models a single area or autonomous system consisting of 135
nodes.

Table 3.10: 30% General Overload, Hierarchical Model (% Lost/Delayed
Traffic)

Table 3.11: 6X Focused Overload on OKBK, Hierarchical Model (% Lost/Delayed
Traffic)

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX3-21]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000


Table 3.12: Failure on CHCG-NYCM Link, Hierarchical Model (% Lost/Delayed
Traffic)

We illustrate the operation of the model with some examples.  First suppose
there is 10 Mb/s of normal-priority traffic and 10 Mb/s of best-effort
priority traffic being carried in the network between node A and node B.
Best-effort traffic is treated in the model like unspecified bit rate (UBR)
traffic and is not allocated any bandwidth.  Hence it does not get any CRLSP
bandwidth allocation, and is not treated as MPLS forwarding equivalence class
(FEC) traffic at all (it would be routed by the interior gateway protocol,
or IGP, such as OSPF).  Hence the best-effort traffic cannot be throttled
back at the edge router through denial of bandwidth allocation, as the
normal- and key-priority traffic can be.  The only way that the best-effort
traffic gets dropped/lost is at the queues; therefore it is essential that
the traffic that is allocated bandwidth on the CRLSPs have higher priority at
the queues than the best-effort traffic.  Therefore in the model the three
classes of traffic get the following DiffServ markings: best-effort traffic
gets no DiffServ marking, which ensures that it will get best-effort priority
queuing treatment; normal-priority traffic gets the assured forwarding (AF)
DiffServ marking, which is a middle priority level of queuing treatment; and
key-priority traffic gets the expedited forwarding (EF) DiffServ marking,
which is the highest priority queuing level.
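
The class-to-marking mapping used in the model can be summarized in the
following sketch (Python).  The per-hop behaviors follow the text above; the
specific DSCP code points shown (EF and AF11) are illustrative assumptions.

   # Illustrative mapping of the model's three traffic categories onto
   # DiffServ per-hop behaviors.  The DSCP values shown are the standard
   # EF and AF11 code points, chosen here only as an example.
   DIFFSERV_MARKING = {
       'key-priority':    {'phb': 'EF', 'dscp': 0b101110},  # expedited forwarding
       'normal-priority': {'phb': 'AF', 'dscp': 0b001010},  # assured forwarding (AF11)
       'best-effort':     {'phb': None, 'dscp': 0b000000},  # unmarked, default queue
   }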

Now suppose that there is 30 Mb/s of bandwidth available between A and B, so
that all the normal-priority and best-effort traffic is getting through.
Suppose then that both the normal-priority and best-effort traffic increase
to 20 Mb/s.  The normal-priority traffic requests and gets a CRLSP bandwidth
allocation increase to 20 Mb/s on the A to B CRLSP.  However, the best-effort
traffic, since it has no CRLSP assigned and therefore no bandwidth
allocation, is simply sent into the network at 20 Mb/s.  Since there is only
30 Mb/s of bandwidth available from A to B, the network must drop 10 Mb/s of
best-effort traffic in order to leave room for the 20 Mb/s of normal-priority
traffic.  The way this is done in the model is through the queuing mechanisms
governed by the DiffServ priority settings on each category of traffic.
Through the DiffServ marking, the queuing mechanisms in the model discard
about 10 Mb/s of the best-effort traffic at the priority queues.  If the
DiffServ markings were not used, then the normal-priority and best-effort
traffic would compete equally on the first-in/first-out (FIFO) queues, and
perhaps 15 Mb/s of each would get through, which is not the desired
situation.

Taking this example further, if the normal-priority and best-effort traffic
both increase to 40 Mb/s, then the normal-priority traffic tries to get a
CRLSP bandwidth allocation increase to 40 Mb/s.  However, the most it can
get is 30 Mb/s, so 10 Mb/s is denied to the normal-priority traffic in the
MPLS constraint-based routing procedure.  With the DiffServ marking of AF on
the normal-priority traffic and none on the best-effort traffic, essentially
all the best-effort traffic is dropped at the queues, since the
normal-priority traffic is allocated and gets the full 30 Mb/s of A to B
bandwidth.  If there were no DiffServ markings, then again perhaps 15 Mb/s
of both normal-priority and best-effort traffic would get through.  Or, in
this case, perhaps a bit more best-effort traffic is carried than
normal-priority traffic, since 40 Mb/s of best-effort traffic is sent into
the network and only 30 Mb/s of normal-priority traffic is sent into the
network, and the FIFO queues will receive more best-effort pressure than
normal-priority

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX3-22]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

pressure.
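
The carried-traffic arithmetic in these two examples follows from serving
the marked classes ahead of the unmarked best-effort class.  A small sketch
(Python; the function and class names are illustrative assumptions)
reproduces the numbers:

   def carried_traffic(link_bw, offered):
       # Hedged sketch of the strict-priority queuing outcome described in the
       # examples: higher-priority traffic is served first, and best-effort
       # traffic absorbs whatever bandwidth (if any) remains.
       carried, remaining = {}, link_bw
       for cls in ('key-priority', 'normal-priority', 'best-effort'):
           carried[cls] = min(offered.get(cls, 0.0), remaining)
           remaining -= carried[cls]
       return carried

   # 20 Mb/s normal-priority + 20 Mb/s best-effort offered to 30 Mb/s of A-B
   # bandwidth: normal-priority carries 20, best-effort only 10 (10 dropped).
   print(carried_traffic(30, {'normal-priority': 20, 'best-effort': 20}))

   # 40 Mb/s of each: normal-priority is limited to the full 30 Mb/s and
   # essentially all of the best-effort traffic is dropped at the queues.
   print(carried_traffic(30, {'normal-priority': 40, 'best-effort': 40}))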

Some of the conclusions from the models include:

1. In a multiservice network environment, with best-effort traffic (web
traffic, email, ...), normal-priority traffic (CBR voice, IP-telephony voice,
switched digital service, ...), and key-priority traffic (800-gold, incoming
international, ...) sharing the same network, MPLS bandwidth allocation plus
DiffServ/priority-queuing are both needed.  In the models the
normal-priority and key-priority traffic use MPLS to receive bandwidth
allocation while the best-effort traffic gets no bandwidth allocation. Under
congestion (e.g., from overloads or failures), the DiffServ/priority-queuing
mechanisms push out the best-effort traffic at the queues so that the
normal-priority and key-priority traffic can get through on the
MPLS-allocated CRLSP bandwidth.  

2. In a multiservice network where the normal-priority and key-priority
traffic use MPLS to receive bandwidth allocation and there is no best-effort
priority traffic, DiffServ/priority queuing becomes less important.  This is
because the MPLS bandwidth allocation more or less assures that the queues
will not overflow, and DiffServ is therefore needed less.

3. As bandwidth becomes more plentiful and cheaper, the point at which the
MPLS and DiffServ mechanisms matter moves to a higher and higher threshold.
For example, the models show that the overload factor at which congestion
occurs gets larger as the bandwidth modules get bigger (e.g., OC-3 to OC-12
to OC-48 to OC-192).  However, the congestion point will always be reached
under failures and/or large-enough overloads, which necessitates the
MPLS/DiffServ mechanisms.

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX3-23]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

ANNEX 4
Routing Table Management Methods & Requirements

Traffic Engineering & QoS Methods for IP-, ATM-, & TDM-Based Multiservice
Networks 

4.1	Introduction

Routing table management typically entails the automatic generation of
routing tables based on network topology and other information such as
status.  Routing table management information, such as topology update,
status information, or routing recommendations, is used for purposes of
applying the routing table design rules for determining path choices in the
routing table.  This information is exchanged between one node and another
node, such as between the ON and DN, for example, or between a node and a
network element such as a bandwidth-broker processor (BBP).  This
information is used to generate the routing table, and then the routing
table is used to determine the path choices used in the selection of a path.

This automatic generation function is enabled by the automatic exchange of
link, node, and reachable address information among the network nodes. In
order to achieve automatic update and synchronization of the topology
database, which is essential for routing table management, IP- and ATM-based
networks already interpret HELLO protocol mechanisms to identify links
in the network. For topology database synchronization the link state
advertisement (LSA) is used in IP-based networks, and the PNNI
topology-state-element (PTSE) exchange is used in ATM-based networks, to
automatically provision nodes, links, and reachable addresses in the
topology database.  Use of a single peer group/autonomous system for
topology update leads to more efficient routing and easier administration,
and is best achieved by minimizing the use of topology state (LSA and PTSE)
flooding for dynamic topology state information. It is required in Section
4.5 that a topology state element (TSE) be developed within TDM-based
networks. When this is the case, then the HELLO and LSA/TSE/PTSE parameters
will become the standard topology update method for interworking across IP-,
ATM-, and TDM-based networks.

Status update methods are required for use in routing table management
within and between network types. In TDM-based networks, status updates of
link and/or node status are used [E.350].  Within IP- and ATM-based
networks, status updates are provided by a flooding mechanism. It is
required in Section 4.5 that a routing status element (RSE) be developed
within TDM-based networks, which will be compatible with the PNNI topology
state element (PTSE) in ATM-based networks and the link state advertisement
(LSA) element in IP-based networks. When this is the case, then the
RSE/PTSE/LSA parameters will become the standard status update method for
interworking across TDM-, ATM-, and IP-based networks.

Query for status methods are required for use in routing table management
within and between network types.  Such methods allow efficient
determination of status information, as compared to flooding mechanisms.
Such query for status methods are provided in TDM-based networks [E.350].
It is required in Section 4.5 that a routing query element (RQE) be
developed within ATM-based and IP-based networks. When this is the case,
then the RQE parameters will become the standard query for status method for
interworking across TDM-, ATM-, and IP-based networks.

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-1]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000


Routing recommendation methods are proposed for use in routing table
management within and between network types.  For example, such methods
provide for a database, such as a BBP, to advertise recommended paths to
network nodes based on status information available in the database.  Such
routing recommendation methods are provided in TDM-based networks [E.350].
It is required in Section 4.5 that a routing recommendation element (RRE) be
developed within ATM-based and IP-based networks. When this is the case,
then the RRE parameters will become the standard routing recommendation
method for
interworking across TDM-, ATM-, and IP-based networks.

4.2	Routing Table Management for IP-Based Networks

IP networks typically run the OSPF protocol for intra-domain routing [M98,
S95] and the BGP protocol for inter-domain routing [S95].  OSPF and BGP are
designed for routing of datagram packets carrying multimedia internet
traffic.  Within OSPF, a link-state update topology exchange mechanism is
used by each IP node to construct its own shortest path routing tables.
Through use of these routing tables, the IP nodes match the destination IP
address to the longest match in the table and thereby determine the shortest
path to the destination for each IP packet.  In current OSPF operation, this
shortest path remains fixed unless a link is added or removed (e.g., fails),
and/or an IP node enters or leaves the network.  However the protocol allows
for possibly more sophisticated dynamic routing mechanisms to be
implemented.  MPLS is currently being developed as a means by which IP
networks may provide connection oriented services, such as with ATM layer-2
switching technology [RCV99], and differentiated services (DiffServ) [B98,
ST98] is being developed to provide QoS resource management.

These IP-based protocols provide for a) exchange of node and link status
information, b) automatic update and synchronization of topology databases,
and c) fixed and/or dynamic route selection based on topology and status
information. For topology database synchronization, each node in an IP-based
OSPF/BGP network exchanges HELLO packets with its immediate neighbors and
thereby determines its local state information. This state information
includes the identity and group membership of the node's immediate
neighbors, and the status of its links to the neighbors. Each node then
bundles its state information in LSAs, which are reliably flooded throughout
the autonomous system (AS), or group of nodes exchanging routing information
and using a common routing protocol, which is analogous to the PNNI peer
group used in ATM-based networks.  The LSAs are used to flood node
information, link state information, and reachability information.  As in
PNNI, some of the topology state information is static and some is dynamic.
In order to allow larger AS group sizes, a network can use OSPF in such a
way as to minimize the amount of dynamic topology state information flooding
by setting thresholds to values that inhibit frequent updates.
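
As an illustration of such threshold-based suppression of dynamic updates,
the following sketch (Python; the boundary values are arbitrary assumptions)
originates a status update only when link utilization crosses a configured
load-state boundary rather than on every change:

   def should_flood_update(old_util, new_util, boundaries=(0.5, 0.7, 0.9)):
       # Hedged sketch: map a utilization value to a discrete load state and
       # flood an update only when the load state actually changes.
       def load_state(u):
           return sum(1 for b in boundaries if u >= b)
       return load_state(new_util) != load_state(old_util)

   print(should_flood_update(0.55, 0.60))  # False: same load state, no update
   print(should_flood_update(0.65, 0.75))  # True: crossed the 0.7 boundary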
 
IP-based routing of connection/bandwidth-allocation requests and QoS support
are in the process of standardization primarily within the MPLS and DiffServ
[B99, ST98] activities in the IETF.  The following assumptions are made
regarding the outcomes of this IP-based routing standardization work:

a)	Call routing in support of connection establishment functions on a
per-connection basis to determine the routing address based on a name/number
translation, and uses a protocol such as H.323 [H.323] or the session
initiation protocol (SIP) [HSSR99].  It is assumed that the call routing
protocol interworks with the B-ISUP and bearer-independent call control

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-2]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

(BICC) protocols to accommodate setup and release of connection requests.
b)	Connection/bandwidth-allocation routing in support of bearer-path
selection is assumed to employ OSPF/BGP path selection methods in
combination with MPLS.  MPLS employs a constraint-based routing label
distribution protocol (CRLDP) [AMAOM98, CDFFSV97, J99] or a resource
reservation protocol (RSVP) [BZBHJ97] to establish constraint-based routing
label switched paths (CRLSPs).  Bandwidth allocation to CRLSPs is managed in
support of QoS resource management, as discussed in ANNEX 3.
c)	The CRLDP label request message (equivalent to the setup message)
carries the explicit route parameter specifying the via nodes (VNs) and
destination node (DN) in the selected CRLSP and the DoS parameter specifying
the allowed bandwidth selection threshold on a link.
d)	The CRLDP notify (equivalent to the release) message is assumed to
carry the crankback/bandwidth-not-available parameter specifying return of
control of the connection/bandwidth-allocation request to the originating
node (ON), for possible further alternate routing to establish additional
CRLSPs.
e)	Call control routing is coordinated with
connection/bandwidth-allocation for bearer-path establishment.
f)	Reachability information is exchanged between all nodes. To
provision a new IP address, the node serving that IP address is provisioned.
The reachability information is flooded to all nodes in the network using
the OSPF LSA flooding mechanism.
g)	The ON performs destination name/number translation, screening,
service processing, and all steps necessary to determine the routing table
for the connection/bandwidth-allocation request across the IP network. The
ON makes a connection/bandwidth-allocation request admission if bandwidth is
available and places the connection/bandwidth-allocation request on a
selected CRLSP.

IP-based networks employ an IP addressing method to identify node endpoints
[S94].  A mechanism is needed to translate E.164 NSAPs to IP addresses in an
efficient manner.  Work is in progress [E.NUM] to interwork between IP
addressing and E.164 numbering/addressing, in which a translation database
is required, based on domain name system (DNS) technology, to convert E.164
addresses to IP addresses.  With such a capability, IP nodes could make this
translation of E.164 NSAPs directly, and thereby provide interworking with
TDM- and ATM-based networks which use E.164 numbering and addressing.  If
this is the case, then E.164 NSAPs could become a standard addressing method
for interworking across IP-, ATM-, and TDM-based networks.
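
As a purely illustrative sketch of such a DNS-based translation, one scheme
of the ENUM type referenced as [E.NUM] maps an E.164 number to a DNS domain
name by reversing its digits; the digit-reversal rule and the e164.arpa
suffix shown below (Python) are assumptions about that work, not
requirements of this draft.

   def e164_to_dns_name(e164_number, suffix='e164.arpa'):
       # Hedged sketch: reverse the digits of the E.164 number and append a
       # well-known suffix, so that an ordinary DNS lookup can return the
       # corresponding IP address or routing information.
       digits = [c for c in e164_number if c.isdigit()]
       return '.'.join(reversed(digits)) + '.' + suffix

   print(e164_to_dns_name('+1-908-555-0100'))
   # -> 0.0.1.0.5.5.5.8.0.9.1.e164.arpa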

As stated above, path selection in an IP-based network is assumed to employ
OSPF/BGP in combination with MPLS and the CRLDP protocol that functions
efficiently in combination with call control establishment of individual
connections.  In OSPF-based layer 3 routing, as illustrated in Figure 3.1,
an ON N1 determines a list of shortest paths by using, for example,
Dijkstra's algorithm.

Figure 3.1.  IP/MPLS Routing Example

This path list could be determined based on administrative weights of each
link, which are communicated to all nodes within the AS group.  These
administrative weights may be set, for example, to 1 + epsilon x distance,
where epsilon is a factor giving a relatively smaller weight to the distance
in comparison to the hop count.   The ON selects a path from the list based
on, for example, FR, TDR, SDR, or EDR path selection, as described in ANNEX
2.  For example, to establish a CRLSP on the first path, the ON N1 sends an

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-3]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

CRLDP label request message to VN N2, which in turn forwards the CRLDP label
request message to VN N3, and finally to DN N4.  The VNs N2 and N3 and DN N4
are passed in the explicit route (ER) parameter contained in the CRLDP label
request message.  Each node in the path reads the ER information, and passes
the CRLDP label request message to the next node listed in the ER parameter.
If the first-choice path is blocked at any of the links in the path, a CRLDP
notify message with crankback/bandwidth-not-available parameter is returned
to the ON which can then attempt the next path.  If FR is used, then this
path is the next path in the shortest path list, for example path
N1-N6-N7-N8-N4.  If TDR is used, then the next path is the next path in the
routing table for the current time period.  If SDR is used, OSPF implements
a distributed method of flooding link status information, which is triggered
either periodically and/or by crossing load state threshold values.  As
described in the beginning of this Section, this method of distributing link
status information can be resource intensive and indeed may not be any more
efficient than simpler path selection methods such as EDR.  If EDR is used,
then the next path is the last successful path, and if that path is
unsuccessful another alternate path is searched out according to the EDR
path selection method.
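
A minimal sketch of this shortest-path computation over administrative
weights is given below (Python), using the 1 + epsilon x distance weighting
described above.  The topology, distances, and epsilon value are
illustrative assumptions; the resulting node list is what the ON would place
in the explicit route (ER) parameter of the CRLDP label request message.

   import heapq

   def admin_weight(distance, epsilon=0.001):
       # Administrative weight = 1 + epsilon * distance, so that hop count
       # dominates and distance only breaks ties.
       return 1 + epsilon * distance

   def shortest_path(graph, src, dst):
       # Dijkstra's algorithm; 'graph' maps node -> {neighbor: distance}.
       dist, prev = {src: 0.0}, {}
       heap = [(0.0, src)]
       while heap:
           d, u = heapq.heappop(heap)
           if u == dst:
               break
           if d > dist.get(u, float('inf')):
               continue
           for v, length in graph[u].items():
               nd = d + admin_weight(length)
               if nd < dist.get(v, float('inf')):
                   dist[v], prev[v] = nd, u
                   heapq.heappush(heap, (nd, v))
       path, node = [dst], dst
       while node != src:
           node = prev[node]
           path.append(node)
       return path[::-1]

   # Illustrative topology only; the ON N1 places the resulting via nodes
   # and DN in the CRLDP explicit route (ER) parameter.
   graph = {'N1': {'N2': 100, 'N6': 120}, 'N2': {'N3': 80}, 'N3': {'N4': 90},
            'N6': {'N7': 60}, 'N7': {'N8': 70}, 'N8': {'N4': 50}, 'N4': {}}
   print(shortest_path(graph, 'N1', 'N4'))   # -> ['N1', 'N2', 'N3', 'N4']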

Bandwidth-allocation control information is used to seize and modify
bandwidth allocation on LSPs, to release bandwidth on LSPs, and for purposes
of advancing the LSP choices in the routing table.   Existing CRLSP label
request (setup) and notify (release) messages, as described in [J99], can be
used with additional parameters to control CRLSP bandwidth modification, DoS
on a link, or CRLSP crankback/bandwidth-not-available to an ON for further
alternate routing to search out additional bandwidth on alternate CRLSPs.
Actual selection of a CRLSP is determined from the routing table, and CRLSP
control information is used to establish the path choice.  Forward
information exchange is used in CRLSP set up and bandwidth modification, and
includes for example the following parameters:

1.	LABEL REQUEST - ER: The explicit route (ER) parameter in CRLDP
specifies each VN and the DN in the CRLSP, and is used by each VN to
determine the next node in the path.
2.	LABEL REQUEST - DoS: The DoS parameter is used by each VN to compare
the load state on each CRLSP link to the allowed DoS threshold to determine
if the CRLDP setup or modification request is admitted or blocked on that
link.
3.	LABEL REQUEST - MODIFY: The MODIFY parameter is used by each VN/DN
to update the traffic parameters (e.g., committed data rate) on an existing
CRLSP to determine if the CRLDP modification request is admitted or blocked
on each link in the CRLSP.

The setup-priority parameter serves as a DoS parameter in the CRLDP LABEL
REQUEST message to control the bandwidth allocation, queuing priorities, and
bandwidth modification on an existing CRLSP [AAFJLLS99].

Backward information exchange is used to release a
connection/bandwidth-allocation request on a link such as from a DN to a VN
or from a  VN to an ON, and includes for example the following parameter:

4.	NOTIFY-BNA:  The bandwidth-not-available parameter in the notify
(release) message is sent from the VN to ON or DN to ON, and allows for
possible further alternate routing at the ON to search out alternate CRLSPs
for additional bandwidth.


Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-4]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

A bandwidth-not-available parameter is already planned for the CRLDP NOTIFY
message to allow the ON to search out additional bandwidth on additional
CRLSPs.
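
The forward and backward information exchange above can be summarized in the
following sketch (Python).  The container and field names follow the
parameters listed in this Section and are illustrative assumptions, not
structures defined by CRLDP.

   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass
   class LabelRequest:                # forward: CRLSP setup / modification
       explicit_route: List[str]      # ER: the via nodes (VNs) and the DN
       depth_of_search: int           # DoS: allowed bandwidth-selection threshold
       modify: Optional[dict] = None  # MODIFY: updated traffic parameters,
                                      # e.g. {'committed_data_rate': 20_000_000}

   @dataclass
   class Notify:                      # backward: release toward the ON
       bandwidth_not_available: bool  # BNA/crankback: return control to the ON
                                      # for further alternate routing

   req = LabelRequest(explicit_route=['N2', 'N3', 'N4'], depth_of_search=2)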

In order to achieve automatic update and synchronization of the topology
database, which is essential for routing table design, IP-based networks
already interpret HELLO protocol mechanisms to identify links in the
network. For topology database synchronization the OSPF LSA exchange is used
to automatically provision nodes, links, and reachable addresses in the
topology database. This information is exchanged between one node and
another node, and in the case of OSPF a flooding mechanism of LSA
information is used.

5.	HELLO: Provides for the identification of links between nodes in the
network.
6.	LSA: Provides for the automatic updating of nodes, links, and
reachable addresses in the topology database.

In summary, IP-based networks already incorporate standard signaling for
routing table management functions, which includes the ER, HELLO, and LSA
capabilities.  Additional requirements needed to support QoS resource
management include the DoS parameter and MODIFY parameter in the CRLDP LABEL
REQUEST message, the crankback/bandwidth-not-available parameter in the
CRLDP notify message, as proposed in [AALJ99], and the support for QUERY,
STATUS, and RECOM routing table design information exchange, as required in
Section 4.5.  Call control with the H.323 [H.323] and session initiation
protocol (SIP) [HSSR99] needs to be coordinated with MPLS/CRLDP CRLSP
connection/bandwidth-allocation control.

4.3	Routing Table Management for ATM-Based Networks

PNNI is a standardized signaling and dynamic routing strategy for ATM
networks adopted by the ATM Forum [ATM96].  PNNI provides interoperability
among different vendor equipment and scaling to very large networks.
Scaling is provided by a hierarchical peer group structure that allows the
details of topology of a peer group to be flexibly hidden or revealed at
various levels within the hierarchical structure.  Peer group leaders
represent the nodes within a peer group for purposes of routing protocol
exchanges at the next higher level.  Border nodes handle inter-level
interactions at call setup.  PNNI routing involves two components: a) a
topology distribution protocol, and b) the path selection and crankback
procedures.  The topology distribution protocol floods information within a
peer group.  The peer group leader abstracts the information from within the
peer group and floods the abstracted topology information to the next higher
level in the hierarchy, including aggregated reachable address information.
As the peer group leader learns information at the next higher level, it
floods it to the lower level in the hierarchy, as appropriate.  In this
fashion, all nodes learn of network-wide reachability and topology.

PNNI path selection is source-based in which the ON determines the
high-level path through the network.  The ON performs number translation,
screening, service processing, and all steps necessary to determine the
routing table for the connection/bandwidth-allocation request across the ATM
network. The node places the selected path in the DTL and passes the DTL to
the next node in the SETUP message. The next node does not need to perform
number translation on the called party number but just follows the path
specified in the DTL.  When a connection/bandwidth-allocation request is

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-5]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

blocked due to network congestion, a PNNI crankback/bandwidth-not-available
is sent to the first ATM node in the peer group.  The first ATM node may
then use the PNNI  alternate routing after crankback/bandwidth-not-available
capability to select another path for the connection/bandwidth-allocation
request.  If the network is flat, that is, all nodes have the same peer
group level, the ON controls the edge-to-edge path.  If the network has more
than one level of hierarchy, as the call progresses from one peer group into
another, the border node at the new peer group selects a path through that
peer group to the next peer group downstream, as determined by the ON.  This
occurs recursively through the levels of hierarchy.  If at any point the
call is blocked, for example when the selected path bandwidth is not
available, then the call is cranked back to the border node or ON for that
level of the hierarchy and an alternate path is selected.  The path
selection algorithm is not stipulated in the PNNI specification, and each ON
implementation can make its own path selection decision unilaterally.  Since
path selection is done at an ON, each ON makes path selection decisions
based on its local topology database and specific algorithm.  This means
that different path selection algorithms from different vendors can
interwork with each other.
 
In the routing example illustrated in Figure 3.1 now used to illustrate
PNNI, an ON N1 determines a list of shortest paths by using, for example,
Dijkstra's algorithm.  This path list could be determined based on
administrative weights of each link which are communicated to all nodes
within the peer group through the PTSE flooding mechanism.  These
administrative weights may be set, for example, to 1 + epsilon x distance,
where epsilon is a factor giving a relatively smaller weight to the distance
in comparison to the hop count.   The ON then selects a path from the list
based on any of the methods described in ANNEX 2, that is FR, TDR, SDR, and
EDR.  For example, in using the first choice path, the ON N1 sends a PNNI
setup message to VN N2, which in turn forwards the PNNI setup message to VN
N3, and finally to DN N4.  The VNs N2 and N3 and DN N4 are passed in the DTL
parameter contained in the PNNI setup message.  Each node in the path reads
the DTL information, and passes the PNNI setup message to the next node
listed in the DTL.  

If the first path is blocked at any of the links in the path, or overflows
or is excessively delayed at any of the queues in the path, a
crankback/bandwidth-not-available message is returned to the ON which can
then attempt the next path.  If FR is used, then this path is the next path
in the shortest path list, for example path N1-N6-N7-N8-N4.  If TDR is used,
then the next path is the next path in the routing table for the current
time period.  If SDR is used, PNNI implements a distributed method of
flooding link status information, which is triggered either periodically
and/or by crossing load state threshold values.  As described in the
beginning of this Section, this flooding method of distributing link status
information can be resource intensive and indeed may not be any more
efficient than simpler path selection methods such as EDR.  If EDR is used,
then the next path is the last successful path, and if that path is
unsuccessful another alternate path is searched out according to the EDR
path selection method.
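
The crankback-driven alternate routing described here can be sketched as
follows for the EDR case (Python).  Here 'path_available' stands for an
end-to-end setup attempt; it, like the function name, is an illustrative
assumption.

   def select_path_edr(last_successful, alternate_paths, path_available):
       # Hedged sketch of EDR success-to-the-top (STT) path selection: try the
       # last successful path first; if it is cranked back, search out another
       # alternate path and remember the one that succeeds for the next request.
       candidates = [last_successful] + [p for p in alternate_paths
                                         if p != last_successful]
       for path in candidates:
           if path_available(path):          # setup succeeded end to end
               return path, path             # carried path, new "last successful"
       return None, last_successful          # request blocked; keep current choice

   alternates = [('N1', 'N2', 'N3', 'N4'), ('N1', 'N6', 'N7', 'N8', 'N4')]
   path, current = select_path_edr(('N1', 'N2', 'N3', 'N4'), alternates,
                                   path_available=lambda p: len(p) > 4)
   print(path)   # -> ('N1', 'N6', 'N7', 'N8', 'N4'): first choice cranked back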

Connection/bandwidth-allocation control information is used in
connection/bandwidth-allocation set up to seize bandwidth in links, to
release bandwidth in links, and to advance path choices in the routing
table.   Existing connection/bandwidth-allocation setup and release messages
[ATM960055] can be used with additional parameters to control SVP bandwidth

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-6]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

modification, DoS on a link, or SVP bandwidth-not-available to an ON for
further alternate routing.  Actual selection of a path is determined from
the routing table, and connection/bandwidth-allocation control information
is used to establish the path choice.  Forward information exchange is used
in connection/bandwidth-allocation set up, and includes for example the
following parameters:

1.	SETUP-DTL/ER: The designated-transit-list/explicit-route (DTL/ER)
parameter in PNNI specifies each VN and the DN in the path, and is used by
each VN to determine the next node in the path.
2.	SETUP-DoS:  The DoS parameter is used by each VN to compare the load
state on the link to the allowed DoS to determine if the SVC
connection/bandwidth-allocation request is admitted or blocked on that link.
3.	MODIFY REQUEST - DoS: The DoS parameter is used by each VN to compare
the load state on the link to the allowed DoS to determine if the SVP
modification request is admitted or blocked on that link.

It is required that the DoS parameter be carried in the SVP MODIFY REQUEST
and SVC SETUP messages, to control the bandwidth allocation and queuing
priorities. 

Backward information exchange is used to release a
connection/bandwidth-allocation request on a link such as from a DN to a VN
or from a VN to an ON, and includes for example the following parameter:

4.	RELEASE-CB:  The crankback/bandwidth-not-available parameter in the
release message is sent from the VN to ON or DN to ON, and allows for
possible further alternate routing at the ON.
5.	MODIFY REJECT-BNA: The bandwidth-not-available parameter in the
modify reject message is sent from the VN to ON or DN to ON, and allows for
possible further alternate routing at the ON to search out additional
bandwidth on alternate SVPs.

SVC crankback/bandwidth-not-available is already defined for PNNI-based
signaling.  We propose a bandwidth-not-available parameter in the SVP MODIFY
REJECT message to allow the ON to search out additional bandwidth on
additional SVPs.

In order to achieve automatic update and synchronization of the topology
database, which is essential for routing table design, ATM-based networks
already interpret HELLO protocol mechanisms to identify links in the
network. For topology database synchronization the PTSE exchange is used to
automatically provision nodes, links, and reachable addresses in the
topology database. This information is exchanged between one node and
another node, and in the case of PNNI a flooding mechanism of PTSE
information is used.

6.	HELLO: Provides for the identification of links between nodes in the
network.
7.	PTSE: Provides for the automatic updating of nodes, links, and
reachable addresses in the topology database.

In summary, ATM-based networks already incorporate standard signaling and
messaging directly applicable to routing implementation, which includes the
DTL, crankback/bandwidth-not-available, HELLO, and PTSE capabilities.
Additional requirements needed to support QoS resource management include
the DoS parameter in the SVC SETUP and SVP MODIFY REQUEST messages, the

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-7]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

bandwidth-not-available parameter in the SVP MODIFY REJECT message, as
proposed in [AM99], and the support for QUERY, STATUS, and RECOM routing
table design information exchange, as required in Section 4.5.

4.4	Routing Table Management for TDM-Based Networks

TDM-based voice/ISDN networks have evolved several dynamic routing methods,
which are widely deployed and include TDR, SDR, and EDR implementations
[A98].  TDR includes dynamic nonhierarchical routing (DNHR), deployed in the
US Government FTS-2000 network.  SDR includes dynamically controlled routing
(DCR), deployed in the Stentor Canada, Bell Canada, MCI, and Sprint
networks, and real-time network routing (RTNR), deployed in the AT&T
network.  EDR includes dynamic alternate routing (DAR), deployed in the
British Telecom network, and STT, deployed in the AT&T network.  

TDM-based network call routing protocols are described for example in
[T1S198, ATM990048] for BICC virtual trunking, and in [Q.2761] for the
Broadband ISDN User Part (B-ISUP) signaling protocol.  We summarize here the
information exchange required between network elements to implement the
TDM-based path selection methods, which include connection control
information required for connection set up, routing table design information
required for routing table generation, and topology update information
required for the automatic update and synchronization of topology databases.

Routing table management information is used for purposes of applying the
routing table design rules for determining path choices in the routing
table.  This information is exchanged between one node and another node,
such as between the ON and DN, for example, or between a node and a network
element such as a bandwidth broker processor (BBP).  This information is
used to generate the routing table, and then the routing table is used to
determine the path choices used in the selection of a path.  The following
messages can be considered for this function:

1.	QUERY:  Provides for an ON to DN or ON to BBP link and/or node
status request.
2.	STATUS:  Provides ON/VN/DN to BBP or DN to ON link and/or node
status information.
3.	RECOM: Provides for a BBP to ON/VN/DN routing recommendation.

These information exchange messages are already deployed in non-standard
TDM-based implementations, and need to be extended to standard TDM-based
network environments. 

In order to achieve automatic update and synchronization of the topology
database, which is essential for routing table design, TDM-based networks
need to interpret at the gateway nodes the HELLO protocol mechanisms of ATM-
and IP-based networks to identify links in the network, as discussed above
for ATM-based networks.  Also needed for topology database synchronization
is a mechanism analogous to the PTSE exchange, as discussed above, which
automatically provisions nodes, links, and reachable addresses in the
topology database. 

Path-selection and QoS-resource management control information is used in
connection/bandwidth-allocation set up to seize bandwidth in links, to
release bandwidth in links, and for purposes of advancing path choices in
the routing table.   Existing connection/bandwidth-allocation setup and
release messages, as described in Recommendations Q.71 and Q.2761, can be

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-8]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

used with additional parameters to control path selection, DoS on a link, or
crankback/bandwidth-not-available to an ON for further alternate routing.
Actual selection of a path is determined from the routing table, and
connection/bandwidth-allocation control information is used to establish the
path choice. 

Forward information exchange is used in connection/bandwidth-allocation set
up, and includes for example the following parameters:

4.	SETUP-DTL/ER:  The designated-transit-list/explicit-route (DTL/ER)
parameter specifies each VN and the DN in the path, and is used by each VN to
determine the next node in the path.
5.	SETUP-DoS:  The DoS parameter is used by each VN to compare the load
state on the link to the allowed DoS to determine if the
connection/bandwidth-allocation request is admitted or blocked on that link.

In B-ISUP these parameters could be carried in the initial address message
(IAM). 

Backward information exchange is used to release a
connection/bandwidth-allocation on a link such as from a DN to a VN or from
a  VN to an ON, and includes for example the following parameter:

6.	RELEASE-CB:  The crankback/bandwidth-not-available parameter in the
release message is sent from the VN to ON or DN to ON, and allows for
possible further alternate routing at the ON.

In B-ISUP signaling this parameter could be carried in the RELEASE message.

4.5	Signaling and Information Exchange Requirements

Table 4.1 summarizes the signaling and information-exchange methods
supported within each routing technology that are required to be supported
across network types.  Table 4.1 identifies

a)	the required information-exchange parameters, shown in non-bold
type, to support the routing methods, and
b)	the required standards, shown in bold type, to support the
information-exchange parameters.

Table 4.1
Required Signaling and Information-Exchange Parameters
to Support Routing Methods

These information-exchange methods are required for use within each network
type and for interworking across network types.  Therefore it is required
that all information-exchange parameters identified in Table 4.1 be
supported by the standards identified in the table, for each of the five
network technologies.  That is, it is required that standards be developed
for all information-exchange parameters not currently supported, which are
identified in Table 4.1 as references to Sections of this ANNEX.  This will
ensure information-exchange compatibility when interworking between the
TDM-, ATM-, and IP-based network types, as denoted in the left three network
technology columns.  To support this information-exchange interworking
across network types, it is further required that the information exchange
at the interface be compatible across network types.  Standardizing the
required routing methods and information-exchange parameters

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-9]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

also supports the network technology cases in the right two columns of Table
4.1, in which PSTNs incorporate ATM- or IP-based technology.

We first discuss the routing methods identified by the rows of Table 4.1,
and we then discuss the harmonization of PSTN/ATM-Based and PSTN/IP-Based
information exchange, as identified by columns 4 and 5 of Table 4.1.   In
Sections 4.5.1 to 4.5.4, we describe, respectively, the call routing (number
translation to routing address), connection routing, QoS resource
management, and routing table management information-exchange parameters
required in Table 4.1.   In Section 4.5.5, we discuss the harmonization of
routing methods standards for the two technology cases in the right two
columns of Table 4.1 in which PSTNs incorporate ATM- or IP-based technology.

4.5.1	Call Routing (Number Translation to Routing Address)
Information-Exchange Parameters

In the ANNEX we assume the separation of call-control signaling for call
establishment from connection/bandwidth-allocation-control signaling for
bearer-channel establishment.  Call-control signaling protocols are
described for example in [Q.2761] for the Broadband ISDN User Part (B-ISUP)
signaling protocol, [ATM990048, T1S198] for BICC virtual trunking, [H.323]
for the H.323 protocol, [GR99] for the media gateway control [MEGACO]
protocol, and in [HSSR99] for the session initiation protocol (SIP).
Connection control protocols include for example [Q.2761] for B-ISUP
signaling, [ATM960055] for PNNI signaling, [ATM960061] for UNI signaling,
[DN99] for SVP signaling, and [J99] for MPLS CRLDP signaling.

As discussed in ANNEX 2, number/name translation should result in the E.164
NSAP addresses, INRAs, and/or IP addresses.  It is required that provision
be made for carrying E.164-NSAP addresses, INRAs, and IP addresses in the
connection-setup IE.  When this is the case, then E.164-NSAP addresses,
INRAs, and IP addresses  will become the standard addressing method for
interworking across TDM-, ATM-, and IP-based networks.  In addition, it is
required that a call identification code (CIC) be carried in the
call-control and bearer-control connection-setup IEs in order to correlate
the call-control setup with the bearer-control setup, [ATM990048, T1S198].
Carrying these additional parameters in the Signaling System 7 (SS7) ISDN
User Part (ISUP) connection-setup IEs is sometimes referred to as the BICC
virtual trunking protocol.

As shown in Table 4.1, it is required that provision be made for carrying
E.164-NSAP addresses, INRAs, and IP addresses in the connection-setup IE.
In particular, it is required that E.164-NSAP-address, INRA, and IP-address
elements be developed within IP-based and PSTN/IP-based networks. It is
required that number translation/routing methods supported by these
parameters be developed for IP-based and PSTN/IP-based networks.  When this
is the case, then E.164-NSAP addresses, INRAs, and IP addresses will become
the standard addressing method for interworking across TDM-, ATM-, and
IP-based networks.

4.5.2	Connection Routing Information-Exchange Parameters

Connection/bandwidth-allocation control information is used to seize
bandwidth on links in a path, to release bandwidth on links in a path, and
for purposes of advancing path choices in the routing table.   Existing
connection/bandwidth-allocation setup and connection-release IEs, as
described in [Q.2761, ATM960055, ATM960061, DN99, J99], can be used with

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-9]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

additional parameters to control SVC/SVP/CRLDP path routing, DoS
bandwidth-allocation thresholds, and crankback/bandwidth-not-available to
allow further alternate routing. Actual selection of a path is determined
from the routing table, and connection/bandwidth-allocation control
information is used to establish the path choice.  

Source routing can be implemented through the use of
connection/bandwidth-allocation control signaling methods employing the DTL
or ER parameter in the connection-setup (IAM, SETUP, MODIFY REQUEST, and
LABEL REQUEST) IE and the crankback (CBK)/bandwidth-not-available (BNA)
parameter in the connection-release (RELEASE, MODIFY REJECT, and NOTIFY) IE.
The DTL or ER parameter specifies all VNs and DN in a path, as determined by
the ON, and the crankback/bandwidth-not-available parameter allows a VN to
return control of the connection request to the ON for further alternate
routing.  

Forward information exchange is used in connection/bandwidth-allocation
setup, and includes for example the following parameters:

1.	Setup with designated-transit list/explicit-route (DTL/ER)
parameter: The DTL parameter in PNNI or the ER parameter in CRLDP specifies
each VN and the DN in the path, and is used by each VN to determine the next
node in the path.

Backward information exchange is used to release a
connection/bandwidth-allocation request on a link such as from a DN to a VN
or from a VN to an ON, and the following parameters are required:

2.	Release with crankback/bandwidth-not-available (CBK/BNA) parameter:
The CBK/BNA parameter in the connection-release IE is sent from the VN to ON
or DN to ON, and allows for possible further alternate routing at the ON.

It is required that the CBK/BNA parameter be included (as appropriate) in
the RELEASE IE for TDM-based networks, the SVC RELEASE and SVP MODIFY REJECT
IE for ATM-based networks, and CRLDP NOTIFY IE for IP-based networks.  This
parameter is used to allow the ON to search out additional bandwidth on
additional SVC/SVP/CRLSPs.

As shown in Table 4.1, it is required that the DTL/ER and CBK/BNA elements
be developed within TDM-based networks, which will be compatible with the
DTL element in ATM-based networks and the ER element in IP-based networks.
It is required [E.350] that path-selection methods supported by these
parameters be developed for TDM-based networks.  Furthermore, it is required
that TDR and EDR path-selection methods supported by these parameters be
developed for ATM-based, IP-based, PSTN/ATM-based, and PSTN/IP-based
networks.  When this is the case, then the DTL/ER and CBK/BNA parameters
will become the standard path-selection method for interworking across TDM-,
ATM-, and IP-based networks.

4.5.3	QoS Resource Management Information-Exchange Parameters

QoS resource management information is used to provide differentiated
service priority in seizing bandwidth on links in a path and also in
providing queuing resource priority.  These parameters are required:

3.	Setup with QoS parameters (QoS-PAR): The QoS-PAR include QoS
thresholds such as transfer delay, delay variation, and packet loss. The

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-10]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

QoS-PAR parameters are used by each VN to compare the link QoS performance
to the requested QoS threshold to determine if the
connection/bandwidth-allocation request is admitted or blocked on that link.
4.	Setup with traffic parameters (TRAF-PAR): The TRAF-PAR include
traffic parameters such as average bit rate, maximum bit rate, and minimum
bit rate. The TRAF-PAR parameters are used by each VN to compare the link
traffic characteristics to the requested TRAF-PAR thresholds to determine if
the connection/bandwidth-allocation request is admitted or blocked on that
link.
5.	Setup with depth-of-search (DoS) parameter: The DoS parameter is
used by each VN to compare the load state on the link to the allowed DoS to
determine if the connection/bandwidth-allocation request is admitted or
blocked on that link.
6.	Setup with modify (MOD) parameter: The MOD parameter is used by each
VN to evaluate the requested modified traffic parameters on an existing
SVP/CRLSP and determine if the modification request is admitted or blocked on
that link.
7.	Differentiated services (DIFFSERV) parameter: The DIFFSERV parameter
is used in ATM-based and IP-based networks to support priority queuing.  The
DIFFSERV parameter is used at the queues associated with each link to
designate the relative priority and management policy for each queue.
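
A hedged sketch of the per-link admission test a via node might apply using
these setup parameters is shown below (Python).  The function name, the
threshold representation, and the data structures are illustrative
assumptions rather than protocol-defined behavior.

   def vn_admits_on_link(link_load_state, allowed_dos,
                         link_qos, requested_qos,
                         link_idle_bw, requested_bw):
       # DoS: the link's load state must not exceed the allowed depth of
       # search carried in the setup (a higher state means a busier link).
       if link_load_state > allowed_dos:
           return False
       # QoS-PAR: link performance (e.g., transfer delay, delay variation,
       # packet loss) must meet each requested threshold.
       for metric, threshold in requested_qos.items():
           if link_qos.get(metric, float('inf')) > threshold:
               return False
       # TRAF-PAR: the requested bandwidth must fit within the idle bandwidth.
       return link_idle_bw >= requested_bw

   print(vn_admits_on_link(link_load_state=1, allowed_dos=2,
                           link_qos={'delay_ms': 20},
                           requested_qos={'delay_ms': 50},
                           link_idle_bw=8_000_000,
                           requested_bw=2_000_000))  # True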

It is required that the QoS-PAR, TRAF-PAR, DTL/ER, DoS, MOD, and DIFFSERV
parameters be included (as appropriate) in the initial address message (IAM)
for TDM-based networks, the SVC/SVP SETUP IE and SVP MODIFY REQUEST IE for
ATM-based networks, and CRLDP LABEL REQUEST IE for IP-based networks.  These
parameters are used to control the routing, bandwidth allocation, and
routing/queuing priorities. 

As shown in Table 4.1, it is required that the QoS-PAR and TRAF-PAR elements
be developed within TDM-based networks to support bandwidth allocation and
protection, which will be compatible with the QoS-PAR and TRAF-PAR elements
in ATM-based and IP-based networks.  In addition, it is required that the
DoS element be developed within TDM-based networks, which will be compatible
with the DoS element in ATM-based and IP-based networks.  Finally, it is
required that the DIFFSERV element be developed in ATM-based and IP-based
networks to support priority queuing. It is required that
QoS-resource-management methods supported by these parameters be developed
for TDM-based networks.  When this is the case, then the QoS-PAR, TRAF-PAR,
DoS, and DIFFSERV parameters will become the standard
QoS-resource-management methods for interworking across TDM-, ATM-, and
IP-based networks.

4.5.4	Routing Table Management Information-Exchange Parameters  

Routing table management information is used for purposes of applying the
routing table design rules for determining path choices in the routing
table.  This information is exchanged between one node and another node,
such as between the ON and DN, for example, or between a node and a network
element such as a bandwidth broker processor (BBP).  This information is
used to generate the routing table, and then the routing table is used to
determine the path choices used in the selection of a path. 

In order to achieve automatic update and synchronization of the topology
database, which is essential for routing table design, ATM- and IP-based
networks already implement HELLO protocol mechanisms to identify links
in the network. For topology database synchronization the PTSE exchange is
used in ATM-based networks and LSA is used in IP-based networks to
automatically provision nodes, links, and reachable addresses in the
topology database.  Hence these parameters are required for this function:

8.	HELLO parameter: Provides for the identification of links between
nodes in the network.
9.	Topology-state-element (TSE) parameter: Provides for the automatic
updating of nodes, links, and reachable addresses in the topology database.

These information exchange parameters are already deployed in ATM- and
IP-based network implementations, and are required to be extended to
TDM-based network environments.

The following parameters are required for the status query and routing
recommendation function: 

10.	Routing-query-element (RQE) parameter: Provides for an ON to DN or
ON to BBP link and/or node status request.
11.	Routing-status-element (RSE) parameter: Provides for a node to BBP
or DN to ON link and/or node status information.
12.	Routing-recommendation-element (RRE) parameter: Provides for a BBP
to node routing recommendation.

These information exchange parameters are being standardized in
Recommendation [E.350], and are required to be extended to ATM- and IP-based
network environments.

As shown in Table 4.1, it is required that a TSE parameter be developed
within TDM-based PSTN networks. It is required that topology update routing
methods supported by these parameters be developed for PSTN/TDM-based
networks.  When this is the case, then the HELLO and TSE/PTSE/LSA parameters
will become the standard topology update method for interworking across
TDM-, ATM-, and IP-based networks.

As shown in Table 4.1, it is required that an RSE parameter be developed
within TDM-based networks, which will be compatible with the PTSE parameter
in ATM-based networks and the LSA parameter in IP-based networks. It is
required [E.350] that status update routing methods supported by these
parameters be developed for TDM-based networks.  When this is the case, then
the RSE/PTSE/LSA parameters will become the standard status update method
for interworking across TDM-, ATM-, and IP-based networks.

As shown in Table 4.1, it is required that an RQE parameter be developed
within ATM-based, IP-based, PSTN/ATM-based, and PSTN/IP-based networks. It
is required that query-for-status routing methods supported by these
parameters be developed for ATM-based, IP-based, PSTN/ATM-based, and
PSTN/IP-based networks.  When this is the case, then the RQE parameter will
become the standard query-for-status method for interworking across TDM-,
ATM-, and IP-based networks.

As shown in Table 4.1, it is required that an RRE parameter be developed
within ATM-based, IP-based, PSTN/ATM-based, and PSTN/IP-based networks. It
is required that routing-recommendation methods supported by these
parameters be developed for ATM-based, IP-based, PSTN/ATM-based, and
PSTN/IP-based networks.  When this is the case, then the RRE parameter will
become the standard routing-recommendation method for interworking across
TDM-, ATM-, and IP-based networks.

4.5.5	Harmonization of Information-Exchange Standards

Harmonization of information-exchange standards is needed for the two
technology cases in the right two columns of Table 4.1, in which PSTNs
incorporate ATM- or IP-based technology.  For example, the harmonized
standards pertain to the case when PSTNs such as network B and network C in
Figure 1.1 incorporate IP- or ATM-based technology.  Assuming network B is a
PSTN incorporating IP-based technology, established routing methods and
compatible information-exchange methods are required to be applied.
Achieving this will affect recommendations of both the ITU-T and the IETF
that apply to the affected routing and information-exchange functions.

Contributions to the IETF and ATM Forum are necessary to address 

a)	needed number translation/routing functionality, which includes
support for international network routing address and IP address parameters,
b)	needed routing table management information-exchange functionality,
which includes query-for-status and routing-recommendation methods, and
c)	needed path selection information-exchange functionality, which
includes time-dependent routing and event-dependent routing.

4.5.6	Open Routing Application Programming Interface (API)

Application programming interfaces (APIs) are being developed to allow
control of network elements through open interfaces available to individual
applications.  APIs allow applications to access and control network
functions including routing policy, as necessary, according to the specific
application functions. The API parameters under application control, such as
those specified for example in [PARLAY], are independent of the individual
protocols supported within the network, and therefore can provide a common
language and framework across various network technologies, such as TDM-,
ATM-, and IP-based technologies.  

The signaling/information-exchange connectivity management parameters
specified in this Section which need to be controlled through an
applications interface include QoS-PAR, TRAF-PAR, DTL/ER, DoS, MOD,
DIFFSERV, E.164-NSAP, INRA, CIC, and perhaps others.  The
signaling/information-exchange routing policy parameters specified in this
Section which need to be controlled through an applications interface
include TSE, RQE, RRE, and perhaps others.  These parameters are required to
be specified within the open API interface for routing functionality, and in
this way applications will be able to access and control routing
functionality within the network independent of the particular routing
protocol(s) used in the network.

4.6	Examples of Internetwork Routing

A network consisting of various subnetworks using different routing
protocols is considered in this Section.  As illustrated in Figure 4.2,
consider a network with four subnetworks denoted as networks A, B, C, and D,
where each network uses a different routing protocol.  In this example,
network A is an ATM-based network which uses PNNI EDR path selection,
network B is a TDM-based network which uses centralized periodic SDR path
selection, network C is an IP-based network which uses MPLS EDR path
selection, and network D is a TDM-based network which uses TDR path
selection.  Internetwork E is defined by the shaded nodes in Figure 4.2 and
is a virtual network where the interworking between networks A, B, C, and D
is actually taking place.

Figure 4.2  Example of an Internetwork Routing Scenario.  BBPb denotes a
bandwidth broker processor in network B for a centralized periodic SDR
method. The set of shaded nodes is internetwork E for routing of
connection/bandwidth-allocation requests between networks A, B, C, and D.

4.6.1	Internetwork E Uses a Mixed Path Selection Method

Internetwork E can use various path selection methods in delivering
connection/bandwidth-allocation requests between the subnetworks A, B, C,
and D.  For example, internetwork E can implement a mixed path selection
method in which each node in internetwork E uses the path selection method
used in its home subnetwork.  Consider a connection/bandwidth-allocation
request from node a1 in network A to node b4 in network B.  Node a1 first
routes the connection/bandwidth-allocation request to either node a3 or a4 in
network A and in doing so uses EDR path selection.  In that regard node a1
first tries to route the connection/bandwidth-allocation request on the
direct link a1-a4, and assuming that link a1-a4 bandwidth is unavailable
then selects the current successful path a1-a3-a4 and routes the
connection/bandwidth-allocation request to node a4 via node a3.  In so doing
node a1 and node a3 put the DTL/ER parameter (identifying ON a1, VN a3, and
DN a4) and QoS-PAR, TRAF-PAR, DoS, and DIFFSERV parameters in the
connection/bandwidth-allocation request connection-setup IE.  

Node a4 now proceeds to route the connection/bandwidth-allocation request to
node b1 in subnetwork B using EDR path selection. In that regard node a4
first tries to route the connection/bandwidth-allocation request on the
direct link a4-b1, and assuming that link a4-b1 bandwidth is unavailable
then selects the current successful path a4-c2-b1 and routes the
connection/bandwidth-allocation request to node b1 via node c2.  In so doing
node a4 and node c2 put the DTL/ER parameter (identifying ON a4, VN c2, and
DN b1) and QoS-PAR, TRAF-PAR, DoS, and DIFFSERV parameters in the
connection/bandwidth-allocation request connection-setup IE.  

If node c2 finds that link c2-b1 does not have sufficient available
bandwidth, it returns control of the connection/bandwidth-allocation request
to node a4 through use of a CRK/BNA parameter in the connection-release IE.
If now node a4 finds that link d4-b1 has sufficient idle bandwidth capacity
based on the RSE parameter in the status response IE from node b1, then node
a4 could next try path a4-d3-d4-b1 to node b1.  In that case node a4 routes
the connection/bandwidth-allocation request to node d3 on link a4-d3, and
node d3 is sent the DTL/ER parameter (identifying ON a4, VN d3, VN d4, and
DN b1) and the DoS parameter in the connection-setup IE.  In that case node
d3 tries to seize idle bandwidth on link d3-d4, and assuming that there is
sufficient idle bandwidth routes the connection/bandwidth-allocation request
to node d4 with the DTL/ER parameter (identifying ON a4, VN d3, VN d4, and
DN b1) and the QoS-PAR, TRAF-PAR, DoS, and DIFFSERV parameters in the
connection-setup IE.  Node d4 then routes the
connection/bandwidth-allocation request on link d4-b1 to node b1, which has
already been determined to have sufficient idle bandwidth capacity.  If on
the other hand there is insufficient idle d4-b1 bandwidth available, then
node d3 returns control of the call to node a4 through use of a CRK/BNA
parameter in the connection-release IE.  At that point node a4 may try
another multilink path, such as a4-a3-b3-b1, using the same procedure as for
the a4-d3-d4-b1 path.
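
The crankback behavior just described can be summarized in the following
Python sketch of EDR path selection at the ON.  The function names are
illustrative, and has_bw() is an assumed predicate for "this link has
sufficient idle bandwidth"; in the network each VN makes that check locally
and signals a failure back to the ON with a release carrying
crankback/bandwidth-not-available.

   def try_path(path, has_bw):
       """Simulate setup along a path; any VN whose outgoing link lacks
       bandwidth returns crankback/bandwidth-not-available to the ON."""
       return all(has_bw(link) for link in path)

   def route_with_edr(direct_link, current_path, alternate_paths, has_bw):
       """Illustrative EDR path selection at the ON."""
       for path in [[direct_link], current_path] + alternate_paths:
           if try_path(path, has_bw):
               return path          # becomes the new current successful path
       return None                  # request is blocked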

Node b1 now proceeds to route the connection/bandwidth-allocation request to
node b4 in network B using centralized periodic SDR path selection. In that
regard node b1 first tries to route the connection/bandwidth-allocation
request on the direct link b1-b4, and assuming that link b1-b4 bandwidth is
unavailable then selects a two-link path b1-b2-b4 which is the currently
recommended alternate path identified in the RRE parameter from the
bandwidth broker processor (BBPb) for network B.  BBPb bases its alternate
routing recommendations on periodic (say every 10 seconds) link and traffic
status information in the RSE parameters received from each node in network
B.  Based on the status information, BBPb then selects the two-link path
b1-b2-b4 and sends this alternate path recommendation in the RRE parameter
to node b1 on a periodic basis (say every 10 seconds).  Node b1 then routes
the connection/bandwidth-allocation request to node b4 via node b2.  In so
doing node b1 and node b2 put the DTL/ER parameter (identifying ON b1, VN
b2, and DN b4) and QoS-PAR, TRAF-PAR, DoS, and DIFFSERV parameters in the
connection/bandwidth-allocation request connection-setup IE.
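
A bandwidth broker of the kind described for network B could, for example,
derive its periodic two-link recommendation from the most recent RSE status
reports as in the sketch below.  The status representation and the
least-loaded selection rule are assumptions made for illustration.

   def recommend_two_link_path(on, dn, nodes, idle_bw):
       """Pick the two-link path ON-via-DN whose bottleneck idle bandwidth,
       taken from the latest RSE reports idle_bw[(x, y)], is largest."""
       best_via, best_bw = None, 0.0
       for via in nodes:
           if via in (on, dn):
               continue
           bottleneck = min(idle_bw.get((on, via), 0.0),
                            idle_bw.get((via, dn), 0.0))
           if bottleneck > best_bw:
               best_via, best_bw = via, bottleneck
       return [on, best_via, dn] if best_via else None

The broker would rerun this selection every status period (say every 10
seconds) and push the result to the ON in the RRE parameter.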

A connection/bandwidth-allocation request from node b4 in network B to node
a1 in network A would mostly be the same as the
connection/bandwidth-allocation request from a1 to b4, except with all the
above steps in reverse order.  The difference would be in routing the
connection/bandwidth-allocation request from node b1 in network B to node a4
in network A.  In this case, based on the mixed path selection assumption in
virtual network E, the b1 to a4 connection/bandwidth-allocation request
would use centralized periodic SDR path selection, since node b1 is in
network B, which uses centralized periodic SDR.  In that regard node b1
first tries to route the connection/bandwidth-allocation request on the
direct link b1-a4, and assuming that link b1-a4 bandwidth is unavailable
then selects a two-link path b1-c2-a4 which is the currently recommended
alternate path identified in the RRE parameter from the bandwidth broker
processor (BBPb) for virtual network E.  BBPb bases its alternate routing
recommendations on periodic (say every 10 seconds) link and traffic status
information in the RSE parameters received from each node in virtual
subnetwork E.  Based on the status information, BBPb then selects the
two-link path b1-c2-a4 and sends this alternate path recommendation in the
RRE parameter to node b1 on a periodic basis (say every 10 seconds).  Node
b1 then routes the connection/bandwidth-allocation request to node a4 via VN
c2.  In so doing node b1 and node c2 put the DTL/ER parameter (identifying
ON b1, VN c2, and DN a4) and QoS-PAR, TRAF-PAR, DoS, and DIFFSERV parameters
in the connection/bandwidth-allocation request connection-setup IE.

If node c2 finds that link c2-a4 does not have sufficient available
bandwidth, it returns control of the connection/bandwidth-allocation request
to node b1 through use of a CRK/BNA parameter in the connection-release IE.
If now node b1 finds that path b1-d4-d3-a4 has sufficient idle bandwidth
capacity based on the RSE parameters in the status IEs to BBPb, then node b1
could next try path b1-d4-d3-a4 to node a4.  In that case node b1 routes the
connection/bandwidth-allocation request to node d4 on link b1-d4, and node
d4 is sent the DTL/ER parameter (identifying ON b1, VN d4, VN d3, and DN a4)
and the QoS-PAR, TRAF-PAR, DoS, and DIFFSERV parameters in the
connection-setup IE.  In that case node d4 tries to seize idle bandwidth on
link d4-d3, and assuming that there is sufficient idle bandwidth routes the
connection/bandwidth-allocation request to node d3 with the DTL/ER parameter

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-15]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

(identifying ON b1, VN d4, VN d3, and DN a4) and the QoS-PAR, TRAF-PAR, DoS,
and DIFFSERV parameters in the connection-setup IE.  Node d3 then routes the
connection/bandwidth-allocation request on link d3-a4 to node a4, which is
expected based on status information in the RSE parameters to have
sufficient idle bandwidth capacity.  If on the other hand there is
insufficient idle d3-a4 bandwidth available, then node d3 returns control of
the call to node b1 through use of a CRK/BNA parameter in the
connection-release IE.  At that point node b1 may try another multilink
path, such as b1-b3-a3-a4, using the same procedure as for the b1-d4-d3-a4
path.

Allocation of end-to-end performance parameters across networks is addressed
in Recommendation I.356, Section 9.  An example is the allocation of the
maximum transfer delay to individual network components of an end-to-end
connection, such as national network portions, international portions, etc.

4.6.2	Internetwork E Uses a Single Path Selection Method

Internetwork E may also use a single path selection method in delivering
connection/bandwidth-allocation requests between the networks A, B, C, and
D.  For example, internetwork E can implement a path selection method in
which each node in internetwork E uses EDR.  In this case the example
connection/bandwidth-allocation request from node a1 in network A to node b4
in network B would be the same as described above.  A
connection/bandwidth-allocation request from node b4 in network B to node a1
in network A would be the same as the connection/bandwidth-allocation
request from a1 to b4, except with all the above steps in reverse order.  In
this case the routing of the connection/bandwidth-allocation request from
node b1 in network B to node a4 in network A would also use EDR in a similar
manner to the a1 to b4 connection/bandwidth-allocation request described
above.  

4.7	Modeling of Traffic Engineering Methods

In this Section, we again use the full-scale national network model
developed in ANNEX 2 to study various TE scenarios and tradeoffs.  The
135-node national model is illustrated in Figure 2.9, the multiservice
traffic demand model is summarized in Table 2.1, and the cost model is
summarized in Table 2.2.

As we have seen, routing table management entails many different
alternatives and tradeoffs, such as:

*	centralized  routing table control versus distributed control
*	pre-planned routing table control versus on-line routing table
control
*	per-flow traffic management versus per-virtual-network traffic
management
*	sparse logical topology versus meshed logical topology
*	FR versus TDR versus SDR versus EDR path selection 
*	multilink path selection versus two-link path selection
*	path selection using local status information versus global status
information
*	global status dissemination alternatives including status flooding,
distributed query for status, and centralized status in a bandwidth-broker
processor

Here we evaluate the tradeoffs in terms of the number of information
elements and parameters exchanged, by type, under various TE scenarios.
This approach gives some indication of the processor and information
exchange load required to support routing table management under various
alternatives.  In particular, we examine the following cases:

*	2-link DC-SDR
*	2-link STT-EDR
*	multilink CP-SDR
*	multilink DP-SDR
*	multilink DC-SDR
*	multilink STT-EDR

Tables 4.2 and 4.3 summarize the comparative results for these cases, for
the case of SDR path selection and STT path selection, respectively.  The
135-node multiservice model was used for a simulation under a 30% general
network overload in the network busy hour.

Table 4.2
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with SDR Per-Flow Bandwidth Allocation
(135-Node Multiservice Network Model; 30% General Overload in Network Busy
Hour; Number of IE Parameters Exchanged)

Table 4.3
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with STT-EDR Per-Virtual-Network Bandwidth Allocation
(135-Node Multiservice Network Model; 30% General Overload in Network Busy
Hour; Number of IE Parameters Exchanged)

Tables 4.4 and 4.5 summarize the comparative results for the case of SDR
path selection and STT path selection, respectively, in which the 135-node
multiservice model was used for a simulation under a 6-times focused
overload on the OKBK node in the network busy hour.

Table 4.4
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with SDR Per-Flow Bandwidth Allocation
(135-Node Multiservice Network Model; 6X Focused Overload on OKBK in Network
Busy Hour; Number of IE Parameters Exchanged)

Table 4.5
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with STT-EDR Per-Virtual-Network Bandwidth Allocation
(135-Node Multiservice Network Model; 6X Focused Overload on OKBK in Network
Busy Hour; Number of IE Parameters Exchanged)

Tables 4.6 and 4.7 summarize the comparative results for the case of SDR
path selection and STT path selection, respectively, in which the 135-node
multiservice model was used for a simulation under a facility failure on the
CHCG-NYCM link in the network busy hour.

Table 4.6
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with SDR Per-Flow Bandwidth Allocation
(135-Node Multiservice Network Model; Failure of CHCG-NYCM Link in Network
Busy Hour; Number of IE Parameters Exchanged)

Table 4.7
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with STT-EDR Per-Virtual-Network Bandwidth Allocation
(135-Node Multiservice Network Model; Failure of CHCG-NYCM Link in Network
Busy Hour; Number of IE Parameters Exchanged)

Tables 4.8 - 4.10 summarize the comparative results for the case of STT path
selection, in the hierarchical network model shown in Figure 3.7, for the
30% general overload, the 6-times focused overload, and the link failure
scenarios, respectively.  Both the per-flow bandwidth allocation and
per-virtual network bandwidth allocation cases are given in these tables.

Table 4.8
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with STT-EDR Per-Virtual-Network Bandwidth Allocation
(135-Edge-Node & 21-Backbone-Node Hierarchical Multiservice Network Model; 
30% General Overload in Network Busy Hour; Number of IE Parameters Exchanged)

Table 4.9
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with STT-EDR Per-Virtual-Network Bandwidth Allocation
(135-Edge-Node & 21-Backbone-Node Hierarchical Multiservice Network Model; 
6X Focused Overload on OKBK in Network Busy Hour;
Number of IE Parameters Exchanged)

Table 4.10
Signaling and Information-Element Parameters Exchanged for
Various TE Methods with STT-EDR Per-Virtual-Network Bandwidth Allocation
(135-Edge-Node & 21-Backbone-Node Hierarchical Multiservice Network Model; 
Failure of CHCG-NYCM Link in Network Busy Hour;
Number of IE Parameters Exchanged)

Tables 4.2 - 4.10 illustrate the potential benefits of EDR methods in
reducing the routing table management overhead.  In ANNEX 3 we discussed EDR
methods applied to QoS resource management, in which the connection
bandwidth-allocation admission control for each link in the path is
performed based on the local status of the link. That is, the ON selects any
path for which the first link is allowed according to QoS resource
management criteria.  Each VN then checks the local link status of the links
specified in the ER parameter against the DoS parameter.  If a subsequent
link is not allowed, then a release with crankback/bandwidth-not-available
is used to return control to the ON, which may then select an alternate
path.  This EDR path selection method, which uses the release with
crankback/bandwidth-not-available mechanism to search for an available
path, is an alternative to the SDR path selection methods, which may entail
flooding of frequently changing link-state parameters such as
available-cell-rate.

A "least-loaded routing" strategy based on available-bit-rate on each link
in a path, is used in the SDR dynamic routing methods illustrated in the
above tables, and is a well-known, successful way to implement dynamic
routing.  Such SDR  methods have been used in several large-scale network
applications in which efficient methods are used to disseminate the
available-link-bandwidth status information, such as the query for status
method using the RQE and RRE parameters.  However, there is a high overhead
cost to obtain the available-link-bandwidth information when using flooding

Ash                 <draft-ash-te-qos-routing-00.txt>       [Page ANNEX4-18]


Internet Draft  TE & QoS Methods for IP,ATM,TDM-Based Networks   March 2000

techniques, such as those which use the TSE parameter for link-state
flooding.  This is clearly evident in Tables 4.2 - 4.10.  As a possible way
around this, the EDR routing methods illustrated above do not require the
dynamic flooding of available-bit-rate information. When EDR path selection
with crankback is used in lieu of SDR path selection with link-state
flooding, the reduction in the frequency of such link-state parameter
flooding allows for larger peer group sizes.  This is because link-state
flooding can consume substantial processor and link resources, in terms of
message processing by the processors and link bandwidth consumed on the
links.  Crankback/bandwidth-not-available is then an alternative to the use
of a link-state-flooding algorithm as the means for the ON to determine which
subsequent links in the path will be allowed.

ANNEX 5
Capacity Management Methods

Traffic Engineering & QoS  Methods for IP-, ATM-, & TDM-Based Multiservice
Networks 

5.1	Introduction

In this ANNEX we discuss capacity management principles, as follows:
a)	Link Capacity Design Models.  These models find the optimum tradeoff
between traffic carried on a shortest network path (perhaps a direct link)
versus traffic carried on alternate network paths.
b)	Shortest Path Selection Models.  These models enable the
determination of shortest paths in order to provide a more efficient and
flexible routing plan.
c)	Multihour Network Design Models.  Three models are described
including i) discrete event flow optimization (DEFO) models, ii) traffic
load flow optimization (TLFO) models, and iii) virtual trunking flow
optimization (VTFO) models.
d)	Day-to-day Load Variation Design Models.  These models describe
techniques for handling day-to-day variations in capacity design.
e)	Forecast Uncertainty/Reserve Capacity Design Models.  These models
describe the means for accounting for errors in projecting design traffic
loads in the capacity design of the network. 

5.2	Link Capacity Design Models

Link capacity design requires a tradeoff between the traffic load carried on
the link and the traffic that must route on alternate paths.  High link
occupancy implies more efficient capacity utilization; however, high
occupancy leads to link congestion and the resulting need for some traffic
to be routed on alternate paths rather than on the direct link.  Alternate
paths may be longer and less efficient.  A good balance can be struck between link capacity
design and alternate path utilization.  For example, consider Figure 5.1,
which illustrates a network where traffic is offered on link A-B connecting
node A and node B. 

Figure 5.1  Tradeoff Between Direct Link Capacity and Alternate Path Capacity

Some of the traffic can be carried on link A-B, however when the capacity of
link A-B is exceeded, some of the traffic must be carried on alternate paths
or be lost.  The objective is to determine the direct A-B link capacity and
alternate routing path flow such that all the traffic is carried at minimum
cost.  A simple optimization procedure is used to determine the best
proportion of traffic to carry on the direct A-B link and how much traffic
to alternate route to other paths in the network.  As the direct link
capacity is increased, the direct link cost increases while the alternate
path cost decreases, because the overflow load decreases and therefore the
cost of carrying the overflow load decreases.  An optimum, or minimum, cost
condition is achieved when the
direct A-B link capacity is increased to the point where the cost per
incremental unit of bandwidth capacity to carry traffic on the direct link
is just equal to the cost per unit of bandwidth capacity to carry traffic on
the alternate network.  This is a design principle used in many design
models, be they sparse or meshed networks, fixed hierarchical routing
networks or dynamic nonhierarchical routing networks.
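
As a rough numerical illustration of this marginal-cost balance (not a
procedure taken from the Recommendation), the following Python sketch adds
direct A-B capacity one trunk at a time until the alternate-path cost saved
by the next trunk no longer exceeds its own cost.  Erlang-B overflow is used
as the traffic model, and all loads and costs are hypothetical.

   def erlang_b(load, trunks):
       """Erlang B blocking probability, computed iteratively."""
       b = 1.0
       for n in range(1, trunks + 1):
           b = load * b / (n + load * b)
       return b

   def size_direct_link(load, cost_per_trunk, cost_per_overflow_erlang,
                        max_trunks=500):
       """Grow the direct link while each added trunk saves more in
       alternate-path cost than it costs; stop when the margins balance."""
       trunks, overflow = 0, load
       while trunks < max_trunks:
           new_overflow = load * erlang_b(load, trunks + 1)
           savings = (overflow - new_overflow) * cost_per_overflow_erlang
           if savings < cost_per_trunk:
               break
           trunks, overflow = trunks + 1, new_overflow
       return trunks, overflow

   # Example: 50 erlangs offered between A and B, hypothetical unit costs
   print(size_direct_link(50.0, cost_per_trunk=1.0,
                          cost_per_overflow_erlang=1.4))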

5.3	Shortest Path Selection Models

Some routing methods, such as hierarchical routing, limit path choices and
provide inefficient designs on high-capacity backbone links.  This limits
flexibility and reduces efficiency. If we choose paths based on cost and
relax constraints such as a hierarchical network structure, a more efficient
network results. Additional benefits can be provided in network design by
allowing a more flexible routing plan that is not restricted to hierarchical
routes but allows the selection of the shortest nonhierarchical paths.
Dijkstra's method [Dij59], for example, is often used for shortest path
selection. Figure 5.2 illustrates the selection of shortest paths between
two network nodes, SNDG and BRHM.  

Figure 5.2  Shortest Path Routing

Longer paths, such as SNDG-SNBO-ATLN-BRHM, which might arise through
hierarchical path selection, are less efficient than shortest path
selection, such as SNDG-PHNX-BRHM, SNDG-TCSN-BRHM, or SNDG-MTGM-BRHM.  There
are really two components to the shortest path selection savings. One
component results from eliminating link splintering.  Splintering occurs,
for example, when more than one node is required to satisfy a traffic load
within a given area, such as a metropolitan area.  Multiple links to a
distant node could result, thus dividing the load among links which are less
efficient than a single large link. A second component of shortest path
selection savings arises from path cost.  Routing on the least costly, most
direct, or shortest paths is often more efficient than routing over longer
hierarchical paths.
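
For completeness, a minimal Python sketch of Dijkstra-style shortest-path
selection over a link-cost map is shown below; the node names follow Figure
5.2, but the link costs are made up for illustration.

   import heapq

   def shortest_path(graph, src, dst):
       """graph[u] maps each neighbor of u to a link cost; returns (cost, path)."""
       heap, seen = [(0.0, src, [src])], set()
       while heap:
           cost, node, path = heapq.heappop(heap)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for nbr, w in graph.get(node, {}).items():
               if nbr not in seen:
                   heapq.heappush(heap, (cost + w, nbr, path + [nbr]))
       return float("inf"), []

   # Hypothetical link costs between some of the nodes of Figure 5.2
   graph = {"SNDG": {"PHNX": 1.0, "TCSN": 1.1, "MTGM": 1.6, "SNBO": 0.3},
            "PHNX": {"BRHM": 1.5}, "TCSN": {"BRHM": 1.5},
            "MTGM": {"BRHM": 0.4}, "SNBO": {"ATLN": 2.0}, "ATLN": {"BRHM": 0.3}}
   print(shortest_path(graph, "SNDG", "BRHM"))     # e.g. SNDG-MTGM-BRHM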

5.4	Multihour Network Design Models

Dynamic routing design improves network utilization relative to fixed
routing design because fixed routing cannot respond as efficiently to
traffic load variations that arise from business/residential phone use, time
zones, seasonal variations, and other causes. Dynamic routing design
increases network utilization efficiency by varying routing tables in
accordance with traffic patterns and designing capacity accordingly. A
simple illustration of this principle is shown in Figure 5.3, where there is
afternoon peak load demand between nodes A and B but a morning peak load
demand between nodes A and C and nodes C and B. 

Figure 5.3  Multihour Network Design

Here a simple dynamic route design is to provide capacity only between nodes
A and C and nodes C and B but no capacity between nodes A and B. Then the
A--C and C--B morning peak loads route directly over this capacity in the
morning, and the A--B afternoon peak load uses this same capacity by routing
this traffic on the A--C--B path in the afternoon. A fixed routing network
design provides capacity for the peak period for each node pair and thus
provides capacity between nodes A and B, as well as between nodes A and C
and nodes C and B.
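
As a worked illustration (with made-up loads) of why this sharing saves
capacity, the short sketch below compares the capacity needed by a fixed
design, which sizes each pair for its own peak, with the multihour design
that omits the A-B link and routes the A-B load over A-C-B.

   # Hypothetical two-period loads for the example of Figure 5.3
   loads = {"A-B": {"morning": 5, "afternoon": 20},
            "A-C": {"morning": 20, "afternoon": 5},
            "C-B": {"morning": 20, "afternoon": 5}}

   # Fixed design: each pair gets capacity for its own peak period.
   fixed = {pair: max(hours.values()) for pair, hours in loads.items()}

   # Multihour design: no A-B link; the A-B load rides A-C-B in its peak.
   periods = ("morning", "afternoon")
   dynamic = {"A-C": max(loads["A-C"][h] + loads["A-B"][h] for h in periods),
              "C-B": max(loads["C-B"][h] + loads["A-B"][h] for h in periods)}

   print(sum(fixed.values()), sum(dynamic.values()))   # 60 versus 50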

The effect of multihour network design is illustrated by a national
intercity network design model illustrated in Figure 5.4.  Here it is shown
that about 20 percent of the network's first cost can be attributed to
designing for time-varying loads.

Figure 5.4  Hourly versus Multihour Network Design

As illustrated in the figure, the 17 hourly networks are obtained by using
each hourly load, and ignoring the other hourly loads, to size a network
that perfectly matches that hour's load.  Each hourly network represents the
hourly traffic load capacity cost referred to in Table 1.1 in the
Recommendation.  The 17 hourly networks show three network busy periods
(morning, afternoon, and evening), together with the noon-hour drop in load
and the early-evening drop as the business day ends and residential calling
begins in the evening. The hourly network curve
separates the capacity provided in the multihour network design into two
components: Below the curve is the capacity needed in each hour to meet the
load; above the curve is the capacity that is available but is not needed in
that hour. This additional capacity exceeds 20 percent of the total network
capacity through all hours of the day, which represents the multihour
capacity cost referred to in Table 1.1.  This gap represents the capacity of
the network to meet noncoincident loads.

We now discuss the three types of multihour network design models (discrete
event flow optimization models, traffic load flow optimization models, and
virtual trunking flow optimization models) and illustrate how they are
applied to various fixed and dynamic network designs. For each model we
discuss steps that include initialization, routing design, capacity design,
and parameter update.

5.4.1	Discrete Event Flow Optimization (DEFO) Models

Discrete event flow optimization (DEFO) models are used for fixed and
dynamic traffic network design.  These models optimize the routing of
discrete event flows, as measured in units of individual connection
requests, and the associated link capacities. Figure 5.5 illustrates steps
of the DEFO model. 

Figure 5.5  Discrete Event Flow Optimization (DEFO) Model

The event generator converts traffic demands to discrete connection-request
events. The discrete event model provides routing logic according to the
particular routing method and routes the connection-request events according
to the routing table logic. DEFO models use simulation models for path
selection and routing table management to route discrete-event demands on
the link capacities, and the link capacities are then optimized to meet the
required flow. We generate initial link capacity requirements based on the
traffic load matrix input to the model. Based on design experience with the
model, an initial node-termination capacity is estimated based on a maximum
design occupancy in the node busy hour of 0.93, and the total network
occupancy (total traffic demand/total link capacity) in the network busy
hour is adjusted to fall within the range of 0.84 to 0.89.  Network
performance is evaluated as an output of the discrete event model, and any
needed link capacity adjustments are determined. Capacity is allocated to
individual links in accordance with the Kruithof allocation method [Kru37],
which distributes link capacity in proportion to the overall demand between
nodes.

Kruithof's technique is used to estimate the node-to-node requirements pij
from the originating node i to the terminating node j under the condition
that the total node link capacity requirements may be established by adding
the entries in the matrix p = [pij]. Assume that a matrix q = [qij],
representing the node-to-node link capacity requirements for a previous
iteration, is known. Also, the total link capacity requirements bi at each
node i and the total link capacity requirements dj at each node j are
estimated as follows:

bi  = ai/gamma

dj = aj/gamma

where ai is the total traffic at node i, aj is the total traffic at node j,
and gamma is the average traffic-carrying capacity per trunk, or node design
occupancy, as given previously.  The terms pij can be obtained as follows:

faci = bi/(sum-j qij)

facj = dj/(sum-i qij)

Eij = (faci + facj)/2

pij = qij x Eij

After the above equations are solved iteratively, the converged steady state
values of pij are obtained.
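
A minimal Python sketch of this iterative adjustment is shown below; the
fixed iteration count and the example matrices are assumptions made for
illustration rather than part of the Kruithof method as specified in [Kru37].

   def kruithof(q, b, d, iterations=50):
       """Scale the previous matrix q = [qij] so that its row sums approach
       the targets b[i] and its column sums approach d[j], using the
       averaged factors Eij = (faci + facj)/2 as in the text."""
       n = len(q)
       p = [row[:] for row in q]
       for _ in range(iterations):
           row_sum = [sum(p[i]) for i in range(n)]
           col_sum = [sum(p[i][j] for i in range(n)) for j in range(n)]
           for i in range(n):
               for j in range(n):
                   fac_i = b[i] / row_sum[i] if row_sum[i] else 0.0
                   fac_j = d[j] / col_sum[j] if col_sum[j] else 0.0
                   p[i][j] *= (fac_i + fac_j) / 2.0
       return p

   # Hypothetical 3-node example: previous matrix q and new totals bi, dj
   q = [[0, 40, 20], [40, 0, 30], [20, 30, 0]]
   b = [70, 80, 60]          # bi = ai/gamma for each originating node
   d = [70, 80, 60]          # dj = aj/gamma for each terminating node
   print(kruithof(q, b, d))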

The DEFO model can generate connection-request events according to a Poisson
arrival distribution and exponential holding times, or with more general
arrival streams and arbitrary holding time distributions, because such
models can readily be implemented in the discrete routing table simulation
model. Connection-request events are generated in accordance with the
traffic load matrix input to the model.  These events are routed on the
selected path according to the routing table rules, as modeled by the
routing table simulation, which determines the selected path for each call
event and flows the event onto the network capacity.

The output from the routing design is the fraction of traffic lost and
delayed in each time period. From this traffic performance, the capacity
design determines the new link capacity requirements of each node and each
link to meet the design performance level. From the estimate of lost and
delayed traffic at each node in each time period, an occupancy calculation
determines additional node link capacity requirements for an updated link
capacity estimate. Such a link capacity determination is made based on the
amount of blocked traffic. The total blocked traffic delta-a is estimated at
each of the nodes, and an estimated link capacity increase delta-T for each
node is calculated by the relationship

delta-T = delta-a/gamma

where again gamma is the average traffic-carrying capacity per trunk. Thus,
the delta-T for each node is distributed to each link according to the Kruithof
estimation method described above. The Kruithof allocation method [Kru37]
distributes link capacity in proportion to the overall demand between nodes
and in accordance with link cost, so that overall network cost is minimized.
Sizing individual links in this way ensures an efficient level of
utilization on each link in the network to optimally divide the load between
the direct link and the overflow network. Once the links have been resized,
the network is re-evaluated to see if the performance objectives are met,
and if not, another iteration of the model is performed.

We evaluate in the model the confidence interval of the engineered
blocking/delay.  For this analysis, we evaluate the binomial distribution
for the 90th percentile confidence interval. Suppose that for a traffic load
of A in which calls arrive over the designated time period of stationary
traffic behavior, there are on average m blocked calls out of n attempts.
This means that there is an average observed blocking/delay probability of

p1 = m/n

where, for example, p1 = .01 for a 1 percent average blocking/delay
probability.  Now, we want to find the value of the 90th percentile
blocking/delay probability p such that

E(n,m,p) = sum(r=m to n) C(n,r) p^r q^(n-r) >= .90

where

C(n,r) = n!/((n-r)! r!)
	
is the binomial coefficient, and

q = 1 - p
	
Then the value p represents the 90th percentile blocking/delay probability
confidence interval. That is, there is a 90 percent chance that the observed
blocking/delay will be less than or equal to the value p. Methods given in
[Wei63] are used to numerically evaluate the above expressions.
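
The following Python sketch finds the 90th percentile blocking/delay
probability by bisection on the binomial tail.  SciPy's binomial
distribution is used here purely as an implementation convenience; it is not
the numerical method of [Wei63].

   from scipy.stats import binom

   def percentile_blocking(n, m, target=0.90, tol=1e-7):
       """Smallest p with P(X >= m) >= target for X ~ Binomial(n, p), i.e.
       the 90th percentile blocking/delay probability described above."""
       lo, hi = 0.0, 1.0
       while hi - lo > tol:
           mid = (lo + hi) / 2.0
           if binom.sf(m - 1, n, mid) >= target:   # sf(m-1, n, p) = P(X >= m)
               hi = mid
           else:
               lo = mid
       return hi

   # 9,950 blocked out of 1,000,000 attempts, as in the example that follows
   print(percentile_blocking(1_000_000, 9_950))   # upper bound near 1 percent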

As an example application of the above method to the DEFO model, suppose
that network traffic is such that 1 million calls arrive in a single
busy-hour period, and we wish to design the network to achieve 1 percent
average blocking/delay or less. If the network is designed in the DEFO model
to yield at most .00995 probability of blocking/delay---that is, at most
9,950 calls are blocked out of 1 million calls in the DEFO model---then we
can be more than 90 percent sure that the network has a maximum
blocking/delay probability of .01. For a specific switch pair where 2,000
calls arrive in a single busy-hour period, suppose we wish to design the
switch pair to achieve 1 percent average blocking/delay probability or less.
If the network capacity is designed in the DEFO model to yield at most .0075
probability of blocking/delay for the switch pair---that is, at most 15
calls are blocked out of 2,000 calls in the DEFO model---then we can be more
than 90 percent sure that the switch pair has a maximum blocking/delay
probability of .01. These methods are used to ensure that the blocking/delay
probability design objectives are met, taking into consideration the
sampling errors of the discrete event model.

The greatest advantage of the DEFO model is its ability to capture very
complex routing behavior through the equivalent of a simulation model
provided in software in the routing design module. By this means, very
complex routing networks have been designed by the model, which include all
of the routing methods discussed in ANNEX 2 (TDR, SDR, and EDR methods) and
the multiservice QoS resource allocation models discussed in ANNEX 3.  A
flow diagram of the DEFO model, in which DC-SDR logical blocks described in
ANNEX 2 are implemented, is illustrated in Figure 5.6. The DEFO model is
general enough to include all TE models yet to be determined.

Figure 5.6  Discrete Event Flow Optimization Model with Multilink Success-to-
the-Top Event Dependent Routing (M-STT-EDR)

5.4.2	Traffic Load Flow Optimization (TLFO) Models

Traffic load flow optimization (TLFO) models are used for fixed and dynamic
traffic network design. These models optimize the routing of traffic flows
and the associated link capacities. Such models typically solve mathematical
equations that describe the routing of traffic flows analytically and, for
dynamic network design, often solve linear programming flow optimization
models. Various types of traffic flow optimization models are distinguished
as to how flow is assigned to links, paths, and routes. In fixed network
design, traffic flow is assigned to direct links and overflow from the
direct links is routed to alternate paths through the network, as described
above.  In dynamic network design, traffic flow models are often path based,
in which traffic flow is assigned to individual paths, or route based, in
which traffic flow is assigned to routes. 

As applied to fixed and dynamic routing networks, TLFO models do network
design based on shortest path selection and linear programming traffic flow
optimization. An illustrative traffic flow optimization model is illustrated
in Figure 5.7.

Figure 5.7  Traffic Load Flow Optimization (TLFO) Model

There are two versions of this model: route-TLFO and path-TLFO models.
Shortest least-cost path routing gives connections access to paths in order
of cost, such that connections access all direct circuits between nodes
prior to attempting more expensive overflow paths. Routes are constructed
with specific path selection rules. For example, route-TLFO models construct
routes for multilink or two-link path routing by assuming crankback and
originating node control capabilities in the routing. The linear programming
flow optimization model strives to share link capacity to the greatest
extent possible with the variation of loads in the network. This is done by
equalizing the loads on links throughout the busy periods on the network,
such that each link is used to the maximum extent possible in all time
periods. The routing design step finds the shortest paths between nodes in
the network, combines them into candidate routes, and uses the linear
programming flow optimization model to assign traffic flow to the candidate
routes.
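
To make the path-based flow assignment concrete, the sketch below formulates
a tiny linear program with SciPy that splits each node-pair demand across
its candidate paths at minimum cost subject to link capacities.  This is
only a toy instance of the idea, not the route-TLFO formulation itself, and
the three-node topology, costs, and capacities are hypothetical.

   from scipy.optimize import linprog

   links = ["A-B", "A-C", "C-B"]
   capacity = {"A-B": 60.0, "A-C": 80.0, "C-B": 80.0}
   demands = {("A", "B"): 90.0, ("A", "C"): 40.0}
   paths = {("A", "B"): [["A-B"], ["A-C", "C-B"]],  # direct and two-link path
            ("A", "C"): [["A-C"]]}
   cost = {"A-B": 1.0, "A-C": 1.0, "C-B": 1.0}

   # One variable per (demand, candidate path); objective is total path cost
   variables = [(dem, p) for dem in demands for p in paths[dem]]
   c = [sum(cost[l] for l in p) for _, p in variables]

   # Each demand's path flows must sum to the demand volume
   A_eq = [[1.0 if dem == d else 0.0 for d, _ in variables] for dem in demands]
   b_eq = [demands[dem] for dem in demands]

   # Total flow over each link must stay within its capacity
   A_ub = [[1.0 if link in p else 0.0 for _, p in variables] for link in links]
   b_ub = [capacity[link] for link in links]

   res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=(0, None))
   for (dem, p), flow in zip(variables, res.x):
       print(dem, p, round(float(flow), 1))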

The capacity design step takes the routing design and solves a fixed-point
traffic flow model to determine the capacity of each link in the network.
This model determines the flow on each link and sizes the link to meet the
performance level design objectives used in the routing design step. Once
the links have been sized, the cost of the network is evaluated and compared
to the last iteration.  If the network cost is still decreasing, the update
module (1) computes the slope of the capacity versus load curve on each
link, which reflects the incremental link cost, and updates the link
"length" using this incremental cost as a weighting factor and (2)
recomputes a new estimate of the optimal link overflow using the method
described above. The new link lengths and overflow are fed to the routing
design, which again constructs route choices from the shortest paths, and so
on. Minimizing incremental network costs helps convert a nonlinear
optimization problem to a linear programming optimization problem. Yaged
[Yag71, Yag73] and Knepley [Kne73] take advantage of this approach in their
network design models. This favors large efficient links, which carry
traffic at higher utilization efficiency than smaller links. Selecting an
efficient level of blocking/delay on each link in the network is basic to
the route/path-TLFO model. The link overflow optimization model [Tru54] is
used in the TLFO model to optimally divide the load between the direct link
and the overflow network.

5.4.3	Virtual Trunking Flow Optimization (VTFO) Models

Virtual trunk flow optimization (VTFO) models are used for fixed and dynamic
traffic and transport network design.  These models optimize the routing of
"virtual trunking (VT)" flows, as measured in units of VT bandwidth demands
such as 1.5 Mbps, OC1, OC12, etc.  For application to network design, VTFO
models use mathematical equations to convert traffic demands to VT capacity
demands, and the VT flow is then routed and optimized. Figure 5.8
illustrates the VTFO steps.  The VT model converts traffic demands directly
to VT demands.  This model typically assumes an underlying traffic routing
structure. 

Figure 5.8  Virtual Trunking Flow Optimization (VTFO) Model

A linear programming VT flow optimization model can be used for network
design, in which hourly traffic demands are converted to hourly VT demands
by using, for example, TLFO network design methods described above for each
hourly traffic pattern.  The linear programming VT flow optimization is then
used to optimally route the hourly node-to-node VT demands on the shortest,
least-cost paths and size the links to satisfy all the VT demands.
Alternatively, node-to-node traffic demands are converted to node-to-node VT
demands by using the approach described above to optimally divide the
traffic load between the direct link and the overflow network, but in this
application of the model we obtain an equivalent VT demand, by hour, as
opposed to an optimum link-overflow objective. 

5.5	Day-to-day Load Variation Design Models

In network design we use the forecast traffic loads, which are actually mean
loads about which there occurs a day-to-day variation, characterized, for
example, by a gamma distribution with one of three levels of variance
[Wil58]. Even if the forecast mean loads are correct, the actual realized
loads exhibit a random fluctuation from day to day. Studies have established
that this source of uncertainty requires the network to be augmented in
order to maintain the required performance objectives.  Day-to-day
variations can be accommodated in the network design procedure by an
equivalent-load technique that models each node pair in the network as an
equivalent link designed to meet the performance objectives.  On the basis of
day-to-day variation design models, such as [HiN76, Wil58], the link
bandwidth N required in the equivalent link to meet the required objectives
for the forecasted load R with its specified instantaneous-to-mean ratio
(IMR) and specified level of day-to-day variation phi is determined.
Holding fixed the specified IMR value and the calculated bandwidth capacity
N, we calculate the larger equivalent load Re that would require bandwidth N
to meet the performance objectives if the forecasted load had no day-to-day
variation.  The equivalent traffic load Re is then used in place of R, since
it produces the same bandwidth requirement when designed for the same IMR
level but in the absence of day-to-day variation.
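
The equivalent-load idea can be illustrated with the simplified Python
sketch below.  It is a Monte Carlo stand-in for the day-to-day variation
design models of [Wil58, HiN76], not a reproduction of them: the IMR is not
modeled, blocking is averaged over gamma-distributed daily loads with an
Erlang-B link model, and the numbers are assumptions.

   import random

   def erlang_b(load, trunks):
       b = 1.0
       for n in range(1, trunks + 1):
           b = load * b / (n + load * b)
       return b

   def size_with_day_to_day(mean_load, cv, objective, samples=200, seed=1):
       """Size N so that traffic-weighted blocking over gamma-distributed
       daily loads meets the objective; then return the equivalent steady
       load Re that requires the same N without day-to-day variation."""
       random.seed(seed)
       shape = 1.0 / (cv * cv)
       daily = [random.gammavariate(shape, mean_load / shape)
                for _ in range(samples)]
       n = 1
       while sum(erlang_b(a, n) * a for a in daily) / sum(daily) > objective:
           n += 1
       re = mean_load
       while erlang_b(re, n) < objective:   # Re > R: larger equivalent load
           re += 0.1
       return n, round(re, 1)

   print(size_with_day_to_day(mean_load=50.0, cv=0.1, objective=0.01))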


5.6	Forecast Uncertainty/Reserve Capacity Design Models

Network designs are made based on measured traffic loads and estimated
traffic loads that are subject to error. In network design we use the
forecast traffic loads because the network capacity must be in place before
the loads occur. Errors in the forecast traffic reflect uncertainty about
the actual loads that will occur, and as such the design needs to provide
sufficient capacity to meet the expected load on the network in light of
these expected errors. Studies have established that this source of
uncertainty requires the network to be augmented in order to maintain the
blocking/delay probability grade-of-service objectives [FHH79].

The capacity management process accommodates the random forecast errors in
the procedures. When some realized node-to-node performance levels are not
met, additional capacity and/or routing changes are provided to restore the
network performance to the objective level. Capacity is often not
disconnected in the capacity management process even when load forecast
errors are such that this would be possible without performance degradation.
Capacity management, then, is based on the forecast traffic loads and the
link capacity already in place. Consideration of the in-service link
capacity entails a transport routing policy that could consider (1) fixed
transport routing, in which transport is not rearranged; (2) rearrangeable
transport routing, which allows periodic transport rearrangement including
some capacity disconnects; and (3) real-time transport routing, in which
transport capacity is adjusted in real time according to transport demands.

The capacity disconnect policy may leave capacity in place even though it is
not called for by the network design.  In-place capacity that is in excess
of the capacity required to exactly meet the design loads with the objective
performance is called reserve capacity.  There are economic and service
implications of the capacity management strategy. Insufficient capacity
means that occasionally link capacity must be connected on short notice if
the network load requires it. This is short-term capacity management. There
is a trade-off between reserve capacity and short-term capacity management.
Reference [FHH79] analyzes a model that shows the level of reserve capacity
to be in the range of 6--25 percent, when forecast error, measurement error,
and other effects are present. In fixed transport
routing networks, if links are found to be overloaded when actual loads are
larger than forecasted values, additional link capacity is provided to
restore the objective performance levels, and, as a result, the process
leaves the network with reserve capacity even when the forecast error is
unbiased. Operational studies in fixed transport routing networks have
measured up to 20 percent and more for network reserve capacity. Methods
such as the Kalman filter [PaW82], which provides more accurate traffic
forecasts and rearrangeable transport routing, can help reduce this level of
reserve capacity. On occasion, the planned design underprovides link
capacity at some point in the network, again because of forecast errors, and
short-term capacity management is required to correct these forecast errors
and restore service.

The model illustrated in Figure 5.9 is used to study the design of a network
on the basis of forecast loads, in which the network design accounts for
both the current network and the forecast loads in capacity management.
Capacity management can make short-term capacity additions if network
performance for the realized traffic loads becomes unacceptable and cannot
be corrected by routing adjustments.  Capacity management tries to minimize
reserve capacity while maintaining the design performance objectives and an
acceptable level of short-term capacity additions. Capacity management uses
the traffic forecast, which is subject to error, and the existing network.
The model assumes that the network design is always implemented, and, if
necessary, short-term capacity additions are made to restore network
performance when design objectives are not met.

Figure 5.9  Design Model Illustrating Forecast Error & Reserve Capacity
Trade-Off

With fixed traffic and transport routing, link capacity augments called for
by the design model are implemented, and when the network design calls for
fewer trunks on a link, a disconnect policy is invoked to decide whether
trunks should be disconnected. This disconnect policy reflects a degree of
reluctance to disconnect link capacity, so as to ensure that disconnected
link capacity is not needed a short time later if traffic loads grow. With
dynamic traffic routing and fixed transport routing, a reduction in reserve
capacity is possible while retaining a low level of short-term capacity
management. With dynamic traffic routing and dynamic transport routing,
additional reduction in reserve capacity is achieved.  With dynamic traffic
routing and dynamic transport routing design, as illustrated in Figure 5.10,
reserve capacity can be reduced in comparison with fixed transport routing,
because with dynamic transport network design the link sizes can be matched
to the network load. 

Figure 5.10  Trade-off of Reserve Capacity vs. Rearrangement Activity

5.7	Modeling of Traffic Engineering Methods

In this Section, we again use the full-scale national network model
developed in ANNEX 2 to study various TE scenarios and tradeoffs.  The
135-node national model is illustrated in Figure 2.9, the multiservice
traffic demand model is summarized in Table 2.1, and the cost model is
summarized in Table 2.2.

Here we illustrate the use of the DEFO model to design for a per-flow
multiservice network design and a per-virtual-network design, and to provide
comparisons of these designs.  The per-flow and per-virtual network designs
for the flat 135-node model are summarized in Table 5.1.

Table 5.1
Comparison of Per-Virtual-Network Design & Per-Flow Network Design
 (135-Node Multiservice Network Model; DEFO Design Model)

We see from the above results that, comparing the per-flow design to the
per-virtual-network design:

*	the per-flow design has 0.996 of the total termination capacity of
the per-virtual-network design
*	the per-flow design has 0.991 of the total transport capacity of the
per-virtual-network design
*	the per-flow design has 0.970 of the total network cost of the
per-virtual-network design

These results indicate that the per-virtual-network design and per-flow
design are quite comparable in terms of capacity requirements and design
cost.  In ANNEX 3 we showed that the performance of these two designs was
also quite comparable under a range of network scenarios.

In Table 5.2 we illustrate the use of the DEFO model to design for a
per-flow hierarchical multiservice network design and a hierarchical
per-virtual-network design, and to provide comparisons of these designs.
Recall that the hierarchical model, illustrated in Figure 3.7, consisted of
135-edge-nodes and 21 backbone-nodes.  The edge-nodes are homed onto the
backbone nodes in a hierarchical relationship.  The per-flow and per-virtual
network designs for the hierarchical 135-edge-node and 21-backbone-node model
are summarized in Table 5.2.

Table 5.2
Comparison of Per-Virtual-Network Design & Per-Flow Network Design
 (135-Edge-Node and 21-Backbone-Node Hierarchical Multiservice Network
Model; DEFO Design Model)

We see from the above results that, comparing the hierarchical per-flow
design to the hierarchical per-virtual-network design:

*	the hierarchical per-flow design has 0.956 of the total termination
capacity of the hierarchical per-virtual-network design
*	the per-flow design has 0.992 of the total transport capacity of the
per-virtual-network design
*	the per-flow design has 0.971 of the total network cost of the
per-virtual-network design

These results indicate that the hierarchical per-virtual-network design and
hierarchical per-flow designs are quite comparable in terms of capacity
requirements and design cost.  In ANNEX 3 we showed that the performance of
these two designs was also quite comparable under a range of network
scenarios.

In this model the hierarchical designs appear to be less expensive than the
flat designs.  This is because of the larger percentage of OC48 links in the
hierarchical designs, which are also considerably sparser than the flat
designs; the traffic loads are therefore concentrated onto fewer, larger
links.  As discussed in ANNEX 2, there is an economy of scale built into the
cost model which affords the higher-capacity links (e.g., OC48 as compared
to OC3) a considerably lower per-unit-of-bandwidth cost, and a lower overall
network cost is therefore achieved.  However, the performance analysis
results discussed in ANNEX 3 show that the flat designs perform better than
the hierarchical designs under the overload and failure scenarios that were
modeled.  This, too, is a consequence of the sparser hierarchical network,
which offers fewer alternate paths and hence less robust performance.


ANNEX 6
Traffic Engineering Operational Requirements

Traffic Engineering & QoS Methods for IP-, ATM-, & TDM-Based Multiservice
Networks 



6.1	Introduction

As discussed in the Recommendation, Figure 1.1 illustrates a model for
network routing and network management and design. The central box
represents the network, which can have various configurations, and the
traffic routing tables and transport routing tables within the network.
Routing tables describe the route choices from an originating node to a
terminating node for a connection request for a particular service.
Hierarchical, nonhierarchical, fixed, and dynamic routing tables have all
been discussed in the Recommendation. Routing tables are used for a
multiplicity of services on the telecommunications network, such as an
MPLS/TE-based network used for illustration in this ANNEX.

Traffic engineering functions include traffic management, capacity
management, and network planning. Figure 1.1 illustrates these functions as
interacting feedback loops around the network. The input driving the network
is a noisy traffic load, consisting of predictable average demand components
added to unknown forecast error and other load variation components. The
feedback controls function to regulate the service provided by the network
through traffic management controls, capacity adjustments, and routing
adjustments. Traffic management provides monitoring of network performance
through collection and display of real-time traffic and performance data and
allows traffic management controls such as code blocks, connection request
gapping, and reroute controls to be inserted when circumstances warrant.
Capacity management includes capacity forecasting, daily and weekly
performance monitoring, and short-term network adjustment. Forecasting
operates over a multiyear forecast interval and drives network capacity
expansion. Daily and weekly performance monitoring identify any service
problems in the network. If service problems are detected, short-term
network adjustment can include routing table updates and, if necessary,
short-term capacity additions to alleviate service problems. Updated routing
tables are sent to the switching systems either directly or via an automated
routing update system. Short-term capacity additions are the exception, and
most capacity changes are normally forecasted, planned, scheduled, and
managed over a period of months or a year or more. Network design embedded
in capacity management includes routing design and capacity design. Network
planning includes longer-term node planning and transport network planning,
which operates over a horizon of months to years to plan and implement new
node and transport capacity.

In Sections 6.2 to 6.5, we focus on the steps involved in traffic management
of the MPLS/TE-based network (Section 6.2), capacity forecasting in the
MPLS/TE-based network (Section 6.3), daily and weekly performance monitoring
(Section 6.4), and short-term network adjustment in the MPLS/TE-based
network (Section 6.5). For each of these topics, we illustrate the steps
involved with examples.

Monitoring of traffic and performance data is a critical issue for traffic
management, capacity forecasting, daily and weekly performance monitoring,
and short-term network adjustment.  This topic is receiving attention in
IP-based networks [FGLRR99] where traffic and performance data has been
somewhat lacking, in contrast to TDM-based networks where such TE monitoring
data has been developed to a sophisticated standard over a period of time
[A98].  The discussions in this ANNEX intend to point out the kind and
frequency of TE traffic and performance data required to support each
function.

6.2 Traffic Management

In this section we concentrate on the surveillance and control of the
MPLS/TE-based network. We also discuss the interactions of traffic managers
with other work centers responsible for MPLS/TE-based network operation.
Traffic management functions should be performed at a centralized work
center, and be supported by centralized traffic management operations
functions (TMOF), perhaps embedded in a centralized bandwidth-broker
processor (here denoted TMOF-BBP). A functional block diagram of TMOF-BBP is
illustrated in Figure 6.1.

Figure 6.1  Traffic Management Operations Functions within Bandwidth-Broker
Processor

6.2.1 Real-time Performance Monitoring

Surveillance of the MPLS/TE-based network should be performed by monitoring
the node pairs with the highest bandwidth-overflow/delay counts, preferably
on a geographical display, which is normally monitored at all times. This
display should be used in the auto-update mode, which means that every five
minutes TMOF-BBP automatically updates the exceptions shown on the map and
displays the node pairs with the highest bandwidth-overflow/delay counts.
TMOF-BBP also should have displays that show node pairs whose
bandwidth-overflow/delay percentages exceed threshold values.

Traffic managers are most concerned with which connection requests can be
rerouted and therefore want to know the location of the heaviest
concentrations of blocked call routing attempts. For that purpose,
overflow/delay percentages can be misleading. From a service revenue
standpoint, a node pair with 1 percent blocking/delay may deserve more
attention than one with 10 percent blocking/delay, because the
lower-percentage pair may carry far more traffic and therefore have more
blocked connection requests available to reroute. TMOF-BBP should also
display all exceptions with the auto threshold display, which shows
everything exceeding the present threshold---for example, either 1 percent
bandwidth-overflow/delay or 1 or more blocked connection requests, in 5
minutes.  In the latter case, this display shows the total blocked
connection requests and not just the highest pairs.
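
The kind of five-minute exception selection described above can be sketched
as follows.  This is an illustrative sketch only, assuming hypothetical data
structures and using the example thresholds mentioned in the text (1 percent
bandwidth-overflow/delay or one or more blocked connection requests); it is
not a normative part of this document.

   # Illustrative Python sketch of the auto threshold exception selection.
   from dataclasses import dataclass

   @dataclass
   class PairStats:
       node_pair: tuple    # (originating node, terminating node)
       offered: int        # requests offered in the 5-minute interval
       blocked: int        # requests blocked or delayed in the interval

   def exceptions(stats, pct_threshold=1.0, count_threshold=1):
       """Return pairs exceeding either threshold, ranked by blocked count."""
       flagged = []
       for s in stats:
           pct = 100.0 * s.blocked / s.offered if s.offered else 0.0
           if pct >= pct_threshold or s.blocked >= count_threshold:
               flagged.append((s.node_pair, s.blocked, pct))
       # Highest blocked-connection-count pairs first, as on the display.
       return sorted(flagged, key=lambda item: item[1], reverse=True)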

For peak-day operation, or operation on a high day (such as a Monday after
Thanksgiving), traffic managers should work back and forth between the auto
threshold display and the highest blocked-connection-count pair display.
They can spend most of their time with the auto threshold display, where
they can see everything that is being blocked.  Then, when traffic managers
want to concentrate on clearing out some particular problem, they can look
at the highest blocked-connection-count pair display, an additional feature
of which is that it allows the traffic manager to see the effectiveness of
controls. 


The traffic manager can recognize certain patterns from the surveillance
data, for example, a focused overload on a particular city/node, such as one
caused by the flooding situation discussed further in Sections 6.3, 6.4, and
6.5.  The typical traffic pattern under a focused overload is that most
locations show heavy overflow/delay into and out of the focus-overload node.
Under such circumstances, the display should show the bandwidth
overflow/delay percent for any node pair in the MPLS/TE-based network that
exceeds 1 percent.

One of the other things traffic managers should be able to see with TMOF-BBP
using the highest bandwidth-overflow/delay-count pair display is a node
failure. Transport failures should also show on the displays, but the
resulting display pattern depends on the failure itself.

6.2.2 Network Control

The MPLS/TE-based network needs automatic controls built into the node
processing, as well as automatic and manual controls that can be activated
from TMOF-BBP. We first describe the required controls and what they do, and
then we discuss how the MPLS/TE-based traffic managers work with these
controls. Two protective automatic traffic management controls are required
in the MPLS/TE-based network: dynamic overload control (DOC), which responds
to node congestion, and dynamic bandwidth reservation (DBR), which responds
to link congestion. DOC and DBR should be selective in the sense that they
control traffic destined for hard-to-reach points more stringently than
other traffic. 

The complexity of MPLS/TE networks makes it necessary to place more emphasis
on fully automatic controls that are reliable and robust and do not depend
on manual administration. DOC and DBR should respond automatically within
the node software program. For DBR, the automatic response can be coupled,
for example, with two bandwidth reservation threshold levels, represented
by the amount of idle bandwidth on an MPLS/TE-based link. DBR bandwidth
reservation levels should be automatic functions of the link size.
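
As a minimal sketch of how such an admission check might look, assume
(purely for illustration) a single reservation level set to a fixed fraction
of the link size, with alternate-path connection requests admitted only when
the idle bandwidth exceeds that level; the 5 percent fraction and the
function names are assumptions, not values from this document.

   # Illustrative DBR admission check; the reservation fraction is assumed.
   def reservation_level(link_size_bw, reserved_fraction=0.05):
       """Bandwidth reservation level as an automatic function of link size."""
       return reserved_fraction * link_size_bw

   def admit(request_bw, idle_bw, link_size_bw, on_primary_path):
       """Primary (shortest) path requests may use any idle bandwidth;
       alternate-path requests are admitted only above the reservation
       level."""
       if on_primary_path:
           return idle_bw >= request_bw
       return idle_bw - reservation_level(link_size_bw) >= request_bw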

DOC and DBR are not strictly link-dependent but should also depend on the
node pair to which a controlled connection request belongs. A connection
request offered to an overloaded via node should either be canceled at the
originating node or advanced to an alternate via node, depending on the
destination of the call. DBR should differentiate between primary (shortest)
path and alternate path connection requests.

DOC and DBR should also use a simplified method of obtaining hard-to-reach
control selectivity. In the MPLS/TE-based network, hard-to-reach codes can
be detected by the terminating node, which then communicates them to the
originating nodes and via nodes. Because the terminating node is the only
exit point from the MPLS/TE-based network, the originating node should treat
a hard-to-reach code detected by a terminating node as hard to reach on all
MPLS/TE-based links.

DOC should normally be permanently enabled on all links. DBR should
automatically be enabled by an originating node on all links when that
originating node senses general network congestion. DBR is particularly
important in the MPLS/TE-based network because it minimizes the use of less
efficient alternate path connections and maximizes useful network throughput
during overloads. The automatic enabling mechanism for DBR ensures its proper
activation without manual intervention. DOC and DBR should automatically
determine whether to subject a controlled connection request to a cancel or
skip control.  In the cancel mode, affected connection requests are blocked
from the network, whereas in the skip mode such connection requests skip
over the controlled link to an alternate link.  DOC and DBR should be
completely automatic controls. Capabilities such as automatic enabling of
DBR, the automatic skip/cancel mechanism, and the DBR one-link/two-link
traffic differentiation adapt these controls to the MPLS/TE-based network
and make them robust and powerful automatic controls.
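
The skip/cancel behavior described above might be sketched as follows; the
function and argument names are hypothetical.

   # Illustrative sketch of the skip/cancel mechanism for a controlled
   # connection request.
   def handle_controlled_request(mode, alternate_links):
       """mode is 'cancel' or 'skip'; alternate_links is an ordered list
       of (link_id, has_idle_capacity) tuples beyond the controlled link."""
       if mode == "cancel":
           return None          # request is blocked from the network
       for link_id, has_idle_capacity in alternate_links:
           if has_idle_capacity:
               return link_id   # skip mode: advance past the controlled link
       return None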

Code-blocking controls block connection requests to a particular destination
code. These controls are particularly useful in the case of focused
overloads, especially if the connection requests are blocked at or near
their origination. Code-blocking controls need not block all calls, unless
the destination node is completely disabled through natural disaster or
equipment failure. Nodes equipped with code-blocking controls can typically
control a percentage of the connection requests to a particular code. The
controlled E.164 name (dialed number code), for example, may be NPA, NXX,
NPA-NXX, or NPA-NXX-XXXX, where in the latter case one specific customer is
the target of a focused overload.

A call-gapping control, illustrated in Figure 6.2, is typically used by
network managers in a focused connection request overload, such as sometimes
occurs with radio call-in give-away contests. 

Figure 6.2  Call Gap Control

Call gapping allows one connection request for a controlled code or set of
codes to be accepted into the network, by each node, once every x seconds,
and connection requests arriving after the accepted connection request are
rejected for the next x seconds. In this way, call gapping throttles the
connection requests and prevents the overload of the network to a particular
focal point.
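
The gap timer just described can be sketched as follows; the class and
method names are illustrative only.

   import time

   class CallGap:
       """Admit one connection request per controlled code every gap_seconds."""
       def __init__(self, gap_seconds):
           self.gap_seconds = gap_seconds
           self.last_accept = {}   # controlled code -> time of last admission

       def accept(self, code, now=None):
           now = time.monotonic() if now is None else now
           last = self.last_accept.get(code)
           if last is None or now - last >= self.gap_seconds:
               self.last_accept[code] = now
               return True         # admit this request into the network
           return False            # reject until the gap interval expires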

An expansive control is also required. Reroute controls should be able to
modify routes by inserting additional paths at the beginning, middle, or end
of a route sequence. Such reroutes should be inserted manually or
automatically through TMOF-BBP. When a reroute is active on a node pair, DBR
should be prevented on that node pair from going into the cancel mode, even
if the overflow/delay is heavy enough on a particular node pair to trigger
the DBR cancel mode. Hence, if a reroute is active, connection requests
should have a chance to use the reroute paths and not be blocked prematurely
by the DBR cancel mode.
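
The interaction between an active reroute and the DBR cancel mode can be
summarized in a small sketch; the argument names are hypothetical.

   # DBR must not enter cancel mode on a node pair with an active reroute.
   def dbr_mode(overflow_heavy_enough_to_cancel, reroute_active):
       if overflow_heavy_enough_to_cancel and not reroute_active:
           return "cancel"
       return "skip"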

In the MPLS/TE-based network, a display should be used to graphically
represent the controls in effect. Depending on the control in place, either
a certain shape or a certain color should tell traffic managers which
control is implemented. Traffic managers should be able to tell if a
particular control at a node is the only control on that node. Different
symbols should be used for the node depending on the controls that are in
effect.


6.2.3 Work Center Functions

6.2.3.1 Automatic Controls

The MPLS/TE-based network requires automatic controls, as described above,
and if there is spare capacity, traffic managers can decide to reroute. In
the example focus-overload situation, the links are highly occupied, and
there is often no network capacity available for reroutes. The DBR control
is normally active at the time. In order to get connection requests out of
the focus-overload node, traffic managers sometimes must manually disable
the DBR control at the focus-overload node. This gives preference to
connection requests going out of the focus-overload node, so that the
focus-overload node completes outgoing connection requests at a much better
rate than the other nodes complete connection requests into the
focus-overload node. This control results in using the link capacity more
efficiently.

Traffic managers should be able to manually enable or inhibit DBR and also
inhibit the skip/cancel mechanism for both DBR and DOC. Traffic managers
should monitor DOC controls very closely because they indicate switching
congestion or failure.  Therefore, DOC activations should be investigated
much more thoroughly and more quickly than DBR activations, which are
frequently triggered by normal heavy traffic. 

6.2.3.2 Code Controls

Code controls are used to cancel connection requests for very hard-to-reach
codes, typically when connection requests cannot complete to a point in the
network or there is isolation. For example, traffic managers should use code
controls for a focused overload situation, such as one caused by an
earthquake, in which there can be isolation. Normal hard-to-reach traffic
caused by heavy calling volumes will be blocked by the DBR control, as
described above.

Traffic managers should use data on hard-to-reach codes in certain
situations for problem analysis. For example, if there is a problem in a
particular area, one of the early things traffic managers should look at is
the hard-to-reach data to see if they can identify one code or many codes
that are hard to reach and if they are from one location or several
locations. 
 
6.2.3.3 Reroute Controls

Traffic managers should sometimes use manual reroutes even when an
automatic reroute capability is available. Reroutes are used primarily for
two reasons: transport failures and heavy traffic surges, such as traffic on
heavier-than-normal days, where the surge is above the normal capability of
the network to handle the load. Traffic managers do not usually reroute into
a disaster area.

6.2.3.4 Peak-Day Control

Peak-day routing in the MPLS/TE-based network should involve using the
primary (shortest) path (CRLSP) as the only engineered path and then the
remaining available paths as alternate paths all subject to DBR controls.
The effectiveness of the additional alternate paths and reroute capabilities
depends very much on the peak day itself. The greater the peak-day traffic,
the less effective the alternate paths are. That is, on the higher peak
days, such as Christmas and Mother's Day, the network is filled with
connections mostly on shortest paths. On lower peak days, such as Easter or
Father's Day, the use of alternate paths and rerouting capabilities is more
effective. This is because the peaks, although they are high and have an
abnormal traffic pattern, are not as high as on Christmas or Mother's Day.
So on these days there is additional capacity to complete connection
requests on the alternate paths. Reroute paths are particularly available in
the early morning and late evening. Depending on the peak day, at times
there is also a lull in the afternoon, and TMOF-BBP should normally be able
to find reroute paths that are available.

6.2.4 Traffic Management on Peak Days

A typical peak-day routing method uses the shortest path between node pairs
as the only engineered path, followed by alternate paths protected by DBR.
This method is more effective during the lighter peak days such as
Thanksgiving, Easter, and Father's Day. With the lighter loads, when the
network is not fully saturated, there is a much better chance of using the
alternate paths.  However, when we enter the network busy hour or
combination of busy hours, with a peak load over most of the network, the
routing method at that point drops back to shortest-path routing because of
the effect of bandwidth reservation. At other times the alternate paths are
very effective in completing calls. 

6.2.5 Interfaces to Other Work Centers

The main interaction traffic managers have is with the capacity managers.
Traffic managers notify capacity managers of network conditions that affect
the data used in deciding whether or not to add capacity.  Examples are
transport failures and node failures that would distort traffic data. A node
congestion signal can trigger DOC;
DOC cancels all traffic destined to a node while the node congestion is
active. All connection requests to the failed node are reflected as overflow
connection requests for the duration of the node congestion condition. This
can be a considerable amount of canceled traffic.  The capacity manager
notifies traffic managers of the new link capacity requirements that they
are trying to get installed but that are delayed.  Traffic managers can then
expect to see congestion on a daily basis or several times a week until the
capacity is added. This type of information is passed back and forth on a
weekly or perhaps daily basis.

6.3 Capacity Management---Forecasting

In this section we concentrate on the forecasting of MPLS/TE-based
node-to-node loads and the sizing of the MPLS/TE-based network. We also
discuss the interactions of network forecasters with other work centers
responsible for MPLS/TE-based network operations.

Network forecasting functions should be performed from a capacity
administration center and supported by network forecasting operations
functions integrated into the BBP (NFOF-BBP). A functional block diagram of
NFOF-BBP is illustrated in Figure 6.3. In the following two sections we
discuss the steps involved in each functional block.


6.3.1 Load Forecasting
 
6.3.1.1 Configuration Database Functions

As illustrated in Figure 6.3, the configuration database is used in the
forecasting function, and within this database are defined various specific
components of the network itself, for example: backbone nodes, access nodes,
transport points of presence, buildings, manholes, microwave towers, and
other facilities.

Figure 6.3  Capacity Management Functions within Bandwidth-Broker Processor

Forecasters maintain configuration data for designing and forecasting the
MPLS/TE-based network.  Included in the data for each backbone node and
access node, for example, are the number/name translation capabilities,
equipment type, type of signaling, homing arrangement, international routing
capabilities, operator service routing capabilities, and billing/recording
capabilities.  When a forecast cycle is started, which is normally each
month, the first step is to extract the relevant pieces of information from
the configuration database that are necessary to drive network forecasting
operations functions (NFOF-BBP). One of the information items indicates the
date of the forecast view; this is when the configuration files were frozen,
and it represents the network structure at the time the forecast is
generated.

6.3.1.2 Load Aggregation, Basing, and Projection Functions

NFOF-BBP should process data from a centralized message database, which
represents a record of all connection requests placed on the network, over
four study periods within each year, for example, March, May, August, and
November, each a 20-day study period. From the centralized message database
a sampling method can be used, for example a 5 percent sample of recorded
connections for 20 days.  Forecasters can then equate that 5 percent, 20-day
sample to one average business day.  The load information then consists of
messages and traffic load by study period. In the load aggregation step,
NFOF-BBP may apply nonconversation time factors to equate the traffic load
obtained from billing records to the actual holding-time traffic load.
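
The aggregation arithmetic described above might look as follows; the
sampling fraction and study-period length are the example values in the
text, while the nonconversation time factor is an assumed illustrative
value.

   def average_business_day_load(sampled_billed_minutes, sample_fraction=0.05,
                                 study_days=20, nonconversation_factor=1.1):
       """Scale a 5 percent, 20-day sample of billed conversation minutes
       to an average-business-day holding-time load (1.1 is assumed)."""
       full_period_minutes = sampled_billed_minutes / sample_fraction
       per_day_minutes = full_period_minutes / study_days
       return per_day_minutes * nonconversation_factor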

The next step in load forecasting is to aggregate all of the
access-node-to-access-node loads up to the backbone node-pair level. This
produces the backbone-node-to-backbone-node traffic item sets. These
node-to-node traffic item sets are then routed to the candidate links.
NFOF-BBP should then project those aggregated loads into the future, using
smoothing techniques to compare the current measured data with the
previously projected data and to determine an optimal estimate of the base
and projected loads. The result is the initially projected loads that are
ready for forecaster adjustments and business/econometric adjustments.
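
One simple realization of the smoothing and projection step described above
is an exponentially weighted blend of the current measurement with the
previous projection, followed by compound growth; the weight and the
growth-rate formulation are illustrative assumptions, not the method
required by this document.

   def smoothed_base_load(measured, previously_projected, weight=0.7):
       """Blend the current measured load with the prior projection."""
       return weight * measured + (1.0 - weight) * previously_projected

   def projected_load(base_load, annual_growth_rate, years_ahead):
       """Project the smoothed base load forward at a compound growth rate."""
       return base_load * (1.0 + annual_growth_rate) ** years_ahead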
 
6.3.1.3 Load Adjustment Cycle and View of Business Adjustment Cycle

Once NFOF-BBP smoothes and projects the data, forecasters can then enter a
load adjustment cycle. This should be an online process that has the
capability to go into the projected load file for all the forecast periods
for all the years
and apply forecaster-established thresholds to those loads. For example, if
the forecaster requests to see any projected load that has deviated more
than 15 percent from what it was projected to be in the last forecast cycle,
a load analysis module in NFOF-BBP should search through all the node pairs
that the forecaster is responsible for, sort out the ones that exceed the
thresholds, and print them on a display. The forecaster then has the option
to change the projected loads or accept them.  
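
The threshold search described above can be sketched as follows, using the
15 percent example from the text; the dictionary-based interface is an
assumption.

   def deviation_exceptions(projected, last_cycle, threshold=0.15):
       """Return node pairs whose projected load deviates from the previous
       forecast cycle by more than the forecaster-established threshold."""
       flagged = []
       for pair, load in projected.items():
           prior = last_cycle.get(pair)
           if prior and abs(load - prior) / prior > threshold:
               flagged.append((pair, prior, load))
       return flagged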

After the adjustment cycle is complete and the forecasters have adjusted the
loads to account for missing data, erroneous data, more accurate current
data, or specifically planned events that cause a change in load,
forecasters should then apply the view of the business adjustments. Up to
this point, the projection of loads has been based on projection models and
network structure changes, as well as the base study period billing data.
The view of the business adjustment is intended to adjust the future network
loads to compensate for the effects of competition, rate changes, and
econometric factors on the growth rate.  This econometric adjustment process
tries to encompass those
factors in an adjustment that is applied to the traffic growth rates. Growth
rate adjustments should be made for each business, residence, and service
category, since econometric effects vary according to service category.

6.3.2 Network Design

Given the MPLS/TE-based node-pair loads, adjusted by the forecasters and
also adjusted for econometric projections, the network design model should
then be executed by NFOF-BBP based on those traffic loads. The node-to-node
loads are estimated for each hourly backbone-node-to-backbone-node traffic
load, including the minute-to-minute variability and the day-to-day
variation, plus the control parameters.  The access-node-to-backbone-node
links should also be sized in this step.

A list of all the MPLS/TE-based node pairs should then be sent to the
transport planning database, from which is extracted transport information
relative to the transport network between the node pairs on that list. Once
the information has been processed in the design model, NFOF-BBP should
output the MPLS/TE-based forecast report.  Once the design model has run for
a forecast cycle, the forecast file and routing information should be sent
downstream to the provisioning systems, planning systems, and capacity
management system, and the capacity manager takes over from there as far as
implementing the routing and the link capacity called for in the forecast.

6.3.3 Work Center Functions

Capacity management and forecasting operation should be centralized. Work
should be divided on a geographic basis so that the MPLS/TE-based forecaster
and capacity manager for a region can work with specific work centers within
the region. These work centers include the node planning and implementation
organizations and the transport planning and implementation organizations.
Their primary interface should be with the system that is responsible for
issuing the orders to augment link capacity. Another interface is with the
routing organization that processes the routing information coming out of
NFOF-BBP.

NFOF-BBP should provide a considerable amount of automation, so that people
can spend their time on more productive activities. By combining the
forecasting job and the capacity management job into one centralized
operation, additional efficiencies are achieved from a reduction in
fragmentation. Centralizing the operations also avoids the duplication that
arises from distributing the operation among regional groups. And, with the
automation,
time need only be spent to clear a problem or analyze data outliers, rather
than to check and verify everything.

This operation requires people who are able to understand and deal with a
more complex network, and the network complexity will continue to increase
as new technology and services are introduced. Other disciplines can
usefully centralize their operations, for example, node planning, transport
planning, equipment ordering, and inventory control.   With centralized
equipment-ordering and inventory control, for example, all equipment
required for the network can be bulk ordered and distributed. This leads to
a much more efficient use of inventory.

6.3.4 Interfaces to Other Work Centers

Network forecasters work cooperatively with node planners, transport
planners, traffic managers, and capacity managers. With an MPLS/TE network,
forecasting, capacity management, and traffic management must tie together
closely. One way to develop those close relationships is by having
centralized, compact work centers. The forecasting process essentially
drives all the downstream construction and planning processes for an entire
network operation.

6.4 Capacity Management---Daily and Weekly Performance Monitoring

In this section we concentrate on the analysis of node-to-node capacity
management data and the design of the MPLS/TE-based network. Capacity
management becomes mandatory when, as seen from the node-to-node traffic
data, significant congestion problems are present in the network or when it
is time to implement a new forecast. We discuss the interactions of
capacity managers with other work centers responsible for MPLS/TE-based
network operation.  Capacity management functions should be performed from a
capacity administration center and should be supported by the capacity
management operations functions embedded, for example, in the BBP (denoted
here as the CMOF-BBP). A functional block diagram of the CMOF-BBP is
illustrated within the lower three blocks of Figure 6.3. In the following
sections we discuss the processes in each functional block.

6.4.1 Daily Congestion Analysis Functions

A daily congestion summary should be used to give a breakdown of the highest
to the lowest node-pair congestion that occurred the preceding day. This is
an exception-reporting function, in which there should be an ability to
change the display threshold. For example, the capacity manager can request
to see only node pairs whose congestion level is greater than 10 percent.
Capacity managers investigate to find out whether they should exclude these
data and, if so, for what reason. One reason for excluding data is to keep
them from downstream processing if they are associated with an abnormal
network condition. This would prevent designing the network for this type of
nonrecurring network condition. In order to find out what the network
condition was, capacity managers consult with the traffic managers. If, for
example, traffic managers indicate that the data is associated with an
abnormal network condition, such as a focused overload due to flooding the
night before, then capacity managers may elect to exclude the data.

6.4.2 Study-week Congestion Analysis Functions

The CMOF-BBP functions should also support weekly congestion analysis.  This
should normally occur after capacity managers form the weekly average using
the previous week's data. The study-week data should then be used in the
downstream processing to develop the study-period average. The weekly
congestion data are set up basically the same way as the daily congestion
data and give the node pairs that had congestion for the week.  This
study-week congestion analysis function gives another opportunity to review
the data to see if there is a need to exclude any weekly data.

6.4.3 Study-period Congestion Analysis Functions

Once each week, the study-period average should be formed using the most
current four weeks of data. The study-period congestion summary gives an
idea of the congestion during the most current study period, identifying
the node pairs that experienced average-business-day blocking/delay greater
than 1 percent.  If congestion is found for a particular node
pair in a particular hour, the design model may be exercised to solve the
congestion problem. In order to determine whether they should run the design
model for that problem hour, capacity managers should first look at the
study-period congestion detail data.  For the node pair in question they
look at the 24 hours of data to see if there are any other hours for that
node pair that should be investigated. Capacity managers should also
determine whether there is a pending capacity addition for the problem node
pair.
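
Forming the study-period average and flagging congested node pairs might be
sketched as follows, using the four-week window and 1 percent threshold from
the text; the data layout is an assumption.

   def study_period_exceptions(weekly_blocking, threshold_pct=1.0):
       """weekly_blocking maps a node pair to a list of weekly
       average-business-day blocking/delay percentages; the most current
       four weeks are averaged."""
       flagged = {}
       for pair, weeks in weekly_blocking.items():
           recent = weeks[-4:]
           if not recent:
               continue
           avg = sum(recent) / len(recent)
           if avg > threshold_pct:
               flagged[pair] = avg
       return flagged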

6.5 Capacity Management---Short-Term Network Adjustment

6.5.1 Network Design Functions

Several features should be available in the design model.  First, capacity
managers should be able to select a routing change option. With
this option, the design model should make routing table changes to utilize
the network capacity that is in place to minimize congestion. The design
model should also design the network to the specified grade-of-service
objectives. If it cannot meet the objectives with the network capacity in
place, it specifies how much capacity to add to which links in order to meet
the performance objectives. The routing table update implementation should
be automatic from the CMOF-BBP all the way through to the network nodes.  An
evaluator option of the design model should be available to determine the
carried traffic per link, or network efficiency, for every link in the
network for the busiest hour. 

6.5.2 Work Center Functions

Certain sections of the network should be assigned so that all capacity
managers have an equal share of links that they are responsible for. Each
capacity manager therefore deals primarily with one region. Capacity
managers also need to work with transport planners so that the transport
capacity planned for the links under the capacity manager's responsibility
is available to the capacity manager. If, on a short-term basis, capacity
has to be added to the network, capacity managers find out from the
transport planner whether the transport capacity is available. CMOF-BBP is
highly automated, and the time the capacity manager spends working with
CMOF-BBP system displays should be small compared with other daily
responsibilities. One of the most time-consuming work functions is following
up on the capacity orders to determine status: Are they in the field? Does
the field have them? Do they have the node equipment working? If capacity
orders are delayed, the capacity manager is responsible for making sure that
the capacity is added to the network as soon as possible. With the normal
amount of network activity going on, that is the most time-consuming part of
the work center function.

6.5.3 Interfaces to Other Work Centers

The capacity manager needs to work with the forecasters to learn of network
activity that will affect the MPLS/TE-based network, for example, new nodes
coming into the network and other capacity management activity that affects
the MPLS/TE-based network. Capacity managers should interact quite frequently
with traffic managers to learn of network conditions such as cable cuts,
floods, or disasters. Capacity managers detect such activities the next day
in the data; the network problem stands out immediately. Before they exclude
the data, however, capacity managers need to talk with the traffic managers
to find out specifically what the problem was in the network. In some cases,
capacity managers will share information with them about something going on
that they may not be aware of. For example, capacity managers may be able to
see failure events in the data, and they can share this type of information
with the traffic managers. Other information capacity managers might share
with traffic managers relates to peak days. The next morning, capacity
managers are able to give the traffic managers actual reports and
information on the load and congestion experienced in the network.

Capacity managers also work with the data collection work center. If they
miss collecting data from a particular node for a particular day, capacity
managers should discuss this with that work center to get the data into
CMOF-BBP.  In CMOF-BBP, capacity managers should have some leeway in getting
data into the system that may have been missed. So if data are missed one
night on a particular node, the node should be available to be repolled to
pull data into CMOF-BBP. 

Capacity managers frequently communicate with the routing work centers
because there is so much activity going on with routing. For example,
capacity managers work with them to set up the standard numbering/naming
plans so that they can access new routing tables when they are entered into
the network. Capacity managers also work with the people who are actually
doing the capacity order activity on the links. Capacity managers should try
to raise the priority on capacity orders if there is a congestion condition,
and often a single congestion condition may cause multiple activities in the
MPLS/TE network. 

6.6 Comparison of TE with TDR versus SDR/EDR 

With an SDR/EDR-based MPLS/TE network, as compared to a TDR-based network,
several improvements occur in TE functions. Under TDR-based networks,
TMOF-BBP should automatically put in reroutes to solve congestion problems
by looking everywhere in the network for additional available capacity and
adding additional alternate paths to the existing preplanned paths, on a
five-minute basis. With SDR/EDR-based networks, in contrast, this automatic
rerouting function is replaced by real-time examination of all admissible
routing choices. 

Hence an important simplification introduced with SDR/EDR-based networks is
that routing tables need not be calculated by the design model, because they
are computed in real time by the node or BBP. The routing tables computed in
TDR-based networks are no longer needed, which simplifies the administration
of network routing. With TDR, routing tables must be
periodically reoptimized and downloaded into nodes via the CMOF-BBP process.
Reoptimizing and changing the routing tables in the TDR-based network
represents an automated yet large administrative effort involving perhaps
millions of records. This function is simplified in SDR/EDR-based networks
since the routing is generated in real time for each connection request and
then discarded. Also, because SDR/EDR-based TE adapts to network conditions,
less network churn and short-term capacity additions are required. This is
one of the operational advantages of SDR/EDR-based MPLS/TE networks---that
is, to automatically adapt TE so as to move the traffic load to where
capacity is available in the network.


