					                                                            Technical Specification

Differentiated Services – Network Configuration and
Management



Tests of available implementations



Authors:
Csaba Géczi (editor), Hungarian Telecom
Balázs Varga, Hungarian Telecom
Ferenc Telbisz, Hungarian Telecom
Egil Aarstad, Telenor
Pedro Andrés Aranda Gutiérrez, Telefonica
Rüdiger Geib, Deutsche Telekom




                                            Abstract
This document is intended for technical experts responsible for testing IP networks.
At present, the DiffServ-related recommendations are still under development and even the DiffServ
terminology is not yet uniform. This confusing situation makes the introduction of services based on
DiffServ rather problematic.
In order to overcome these problems, testing methods for Differentiated Services (DiffServ)
networks were developed in the DISCMAN project. The current document describes basic testing
scenarios for conformance testing as well as complex testing scenarios for performance evaluation.
Interoperability, multi-domain and MPLS-based DiffServ testing are out of the scope of this material
and belong to the Advanced Scenarios of the test implementations.




                                                                         EDIN          0079-1006
                                                                         Project           P1006
                                                                            For full publication
                                                                                   January 2001
[End of Abstract]
Disclaimer
This document contains material which is the copyright of certain EURESCOM PARTICIPANTS,
and may not be reproduced or copied without permission.
All PARTICIPANTS have agreed to full publication of this document.
The commercial use of any information contained in this document may require a license from the
proprietor of that information.
Neither the PARTICIPANTS nor EURESCOM warrant that the information contained in the report
is capable of use, or that use of the information is free from risk, and accept no liability for loss or
damage suffered by any person using this information.


Table of contents
Table of contents................................................................................................................................................ 1
List of Figures and Tables ................................................................................................................................. 2
Abbreviations ..................................................................................................................................................... 3
Definitions ......................................................................................................................................................... 4
1    Introduction................................................................................................................................................ 7
2    Basic scenarios ........................................................................................................................................... 8
  2.1      Common settings ............................................................................................................................... 8
     2.1.1       Reference models used for testing ............................................................................................. 8
     2.1.2       General rules implemented in the routers ................................................................... 9
     2.1.3       Application characteristics ......................................................................................................... 9
     2.1.4       Service classes applied for DiffServ testing ............................................................................. 10
     2.1.5       Test traffic characterisation ...................................................................................................... 11
  2.2      Basic scenario for conformance tests ............................................................................................... 12
     2.2.1       Test configurations ................................................................................................................... 12
     2.2.2       AF tests outline ........................................................................................................................ 12
     2.2.3       EF tests outline ......................................................................................................................... 13
  2.3      Basic scenario for performance testing ............................................................................................ 14
     2.3.1       Introduction .............................................................................................................................. 14
     2.3.2       Objective .................................................................................................................................. 14
     2.3.3       Test bed set-up ......................................................................................................................... 14
3    Vendor dependent tests ............................................................................................................................ 17
  3.1      Single vendor tests based on the PICS ............................................................................................. 17
     3.1.1       AF PHB Test Suite................................................................................................................... 22
     3.1.2       EF PHB test Suite .................................................................................................................... 75
  3.2      Single vendor tests - Performance tests ........................................................................................... 99
     3.2.1       Test Descriptions ..................................................................................................................... 99
     3.2.2       Performance Testing - Test Suites ......................................................................................... 100
4    Backward compatibility ......................................................................................................................... 118
5    Advanced tests ........................................................................................................................ 120
  5.1      Application oriented tests............................................................................................................... 120
     5.1.1       Introduction ............................................................................................................................ 120
     5.1.2       Overview of the Test Environment ........................................................................................ 120
     5.1.3       Test Traffic Generation .......................................................................................................... 121
     5.1.4       Test Scenarios ........................................................................................................................ 123
6    Conclusion ............................................................................................................................................. 125
7    References.............................................................................................................................................. 126




 2001 EURESCOM Participants in P1006                                                                                                     EDIN 0079-1006
EURESCOM Technical Specification                                                                                                       page 2 (126)

List of Figures and Tables
Figure 1. Elements of a DiffServ network and their functionalities .................................................................. 8
Figure 2. Test configuration for basic tests ...................................................................................................... 12
Figure 3. Measurement configuration for TCP traffic ..................................................................................... 15
Figure 4. Measurement set-up for UDP traffic ................................................................................................ 15
Figure 5. Traffic discarding policy at the SUT ................................................................................................ 64
Figure 6. Application oriented test environment ........................................................................................... 120
Figure 7. Traffic model of an Internet source ................................................................................................ 121
Figure 8. The source model is divided into a stochastic and a comm part ..................................................... 122
Figure 9. Network topology with traffic generators ...................................................................................... 123


Table 1. EF PHB (RFC 2598) ............................................................................................................................ 9
Table 2. AF PHB (RFC 2597) ........................................................................................................................... 9
Table 3. Application characteristics ................................................................................................................. 10
Table 4. DSCPs to be tested............................................................................................................................. 22
Table 5. Maximum number of different AF classes ........................................................................................ 28
Table 6. DS codepoints of the maximum number of incoming traffic of different AF classes ....................... 29
Table 7. Drop precedence level coding ............................................................................................................ 36
Table 8. Buffer occupancy settings .................................................................................................................. 64
Table 9. Used traffic types for EF BA test ....................................................................................................... 75
Table 10. Used traffic type for marking test .................................................................................................... 83
Table 11. Compatibility matrix between IP Precedence field and DiffServ Class Selector Codepoints ....... 118





Abbreviations
AF               Assured Forwarding PHB
BA               Behaviour Aggregate
BE               Best-Effort
CBR              Constant Bit Rate
CBS              Committed Burst Size [byte]
Cegr             SUT Egress link speed [bits/s]
CIR              Committed Information Rate [bits/s]
d                As part of a bit sequence: don’t care
Default PHB      Best Effort
DiffServ         Differentiated Services
DS               Differentiated Services
DSD              Differentiated Services Domain
EBS              Excess Burst Size [byte]
EF               Expedited Forwarding
ER               Egress Router
IETF             Internet Engineering Task Force
IngR             Ingress Router
IntR             Interior Router
IntServ          Integrated Services
IP               Internet Protocol
MF               Multi Field
MPLS             Multi Protocol Label Switching
OSI              Open Systems Interconnection
PBS              Peak Burst Size [byte]
PHB              Per Hop Behaviour (e.g. AF, EF or default)
PICS             Protocol Implementation Conformance Statement
PIR              Peak Information Rate [bits/s]
PIR              Project Internal Results
QoS              Quality of Service
RED              Random Early Detection
RFC              Request for Comments
RIO              Random Early Detection In- and Out of profile
RSVP             Resource Reservation Protocol
Rtest            Traffic rate [bits/s]
RTT              Round Trip Time
SLA              Service Level Agreement
SUT              System under test
TCP              Transmission Control Protocol
Ton              Burst duration [ms]
UDP              User Datagram Protocol
VLL              Virtual Leased Line
VoIP             Voice over IP




 2001 EURESCOM Participants in P1006                                      EDIN 0079-1006
EURESCOM Technical Specification                                                     page 4 (126)

Definitions
Definitions used in this document (according to RFC2475):
Behaviour Aggregate (BA)      a DS behaviour aggregate.
BA classifier                 a classifier that selects packets based only on the contents of the
                              DS field.
Boundary link                 a link connecting the edge nodes of two domains.
Classifier                    an entity which selects packets based on the content of packet
                              headers according to defined rules.
DS behaviour aggregate        a collection of packets with the same DS codepoint crossing a link
                              in a particular direction.
DS boundary node              a DS node that connects one DS domain to a node either in another
                              DS domain or in a domain that is not DS-capable.
DS-capable                    capable of implementing differentiated services as described in this
                              architecture; usually used in reference to a domain consisting of
                              DS-compliant nodes.
DS codepoint                  a specific value of the DSCP portion of the DS field, used to select
                              a PHB.
DS-compliant                  enabled to support differentiated services functions and behaviours
                              as defined in [RFC2474], [RFC2475] and other differentiated
                              services documents; usually used in reference to a node or device.
DS domain                     a DS-capable domain; a contiguous set of nodes which operate
                              with a common set of service provisioning policies and PHB
                              definitions.
DS egress node                a DS boundary node in its role in handling traffic as it leaves a DS
                              domain.
DS ingress node               a DS boundary node in its role in handling traffic as it enters a DS
                              domain.
DS interior node              a DS node that is not a DS boundary node.
DS field                      the IPv4 header TOS octet or the IPv6 Traffic Class octet when
                              interpreted in conformance with the definition given in [RFC2474].
                              The bits of the DSCP field encode the DS codepoint, while the
                              remaining bits are currently unused.
DS node                       a DS-compliant node.
DS region                     a set of contiguous DS domains which can offer differentiated
                              services over paths across those DS domains.
Downstream DS domain          the DS domain downstream of traffic flow on a boundary link.
Dropper                       a device that performs dropping.
Dropping                      the process of discarding packets based on specified rules;
                              policing.
Legacy node                   a node which implements IPv4 Precedence as defined in
                              [RFC791,RFC1812] but which is otherwise not DS-compliant.
Marker                        a device that performs marking.
Marking                       the process of setting the DS codepoint in a packet based on
                              defined rules; pre-marking, re-marking.


Meter                         a device that performs metering.
Metering                      the process of measuring the temporal properties (e.g., rate) of a
                              traffic stream selected by a classifier. The instantaneous state of
                              this process may be used to affect the operation of a marker,
                              shaper, or dropper, and/or may be used for accounting and
                              measurement purposes.
Microflow                     a single instance of an application-to-application flow of packets
                              which is identified by source address, source port, destination
                              address, destination port and protocol id.
MF Classifier                 a multi-field (MF) classifier which selects packets based on the
                              content of some arbitrary number of header fields; typically some
                              combination of source address, destination address, DS field,
                              protocol ID, source port and destination port.
Per-Hop-Behaviour (PHB)       the externally observable forwarding behaviour applied at a DS-
                              compliant node to a DS behaviour aggregate.
PHB group                     a set of one or more PHBs that can only be meaningfully specified
                              and implemented simultaneously, due to a common constraint
                              applying to all PHBs in the set such as a queue servicing or queue
                              management policy. A PHB group provides a service building
                              block that allows a set of related forwarding behaviours to be
                              specified together (e.g., four dropping priorities). A single PHB is a
                              special case of a PHB group.
Policing                      the process of discarding packets (by a dropper) within a traffic
                              stream in accordance with the state of a corresponding meter
                              enforcing a traffic profile.
Pre-mark                      to set the DS codepoint of a packet prior to entry into a
                              downstream DS domain.
Provider DS domain            the DS-capable provider of services to a source domain.
Re-mark                       to change the DS codepoint of a packet, usually performed by a
                              marker in accordance with a TCA.
Service                       the overall treatment of a defined subset of a customer's traffic
                              within a DS domain or end-to-end.
Service Level Agreement (SLA)         a service contract between a customer and a service
                             provider that specifies the forwarding service a customer should
                             receive. A customer may be a user organisation (source domain) or
                             another DS domain (upstream domain). A SLA may include traffic
                             conditioning rules which constitute a TCA in whole or in part.
Service Provisioning Policy   a policy which defines how traffic conditioners are configured on
                              DS boundary nodes and how traffic streams are mapped to DS
                              behaviour aggregates to achieve a range of services.
Shaper                        a device that performs shaping.
Shaping                       the process of delaying packets within a traffic stream to cause it to
                              conform to some defined traffic profile.
Source domain                 a domain which contains the node(s) originating the traffic
                              receiving a particular service.
Traffic conditioner       an entity which performs traffic conditioning functions and which
                          may contain meters, markers, droppers, and shapers. Traffic
                          conditioners are typically deployed in DS boundary nodes only. A
 2001 EURESCOM Participants in P1006                                      EDIN 0079-1006
EURESCOM Technical Specification                                                       page 6 (126)
                               traffic conditioner may re-mark a traffic stream or may discard or
                               shape packets to alter the temporal characteristics of the stream and
                               bring it into compliance with a traffic profile.
Traffic conditioning           control functions performed to enforce rules specified in a TCA,
                               including metering, marking, shaping, and policing.
Traffic Conditioning Agreement (TCA) an agreement specifying classifier rules and any
                              corresponding traffic profiles and metering, marking, discarding
                              and/or shaping rules which are to apply to the traffic streams
                              selected by the classifier. A TCA encompasses all of the traffic
                              conditioning rules explicitly specified within a SLA along with all
                              of the rules implicit from the relevant service requirements and/or
                              from a DS domain's service provisioning policy.
Traffic profile                a description of the temporal properties of a traffic stream such as
                               rate and burst size.
Traffic stream                 an administratively significant set of one or more microflows
                               which traverse a path segment. A traffic stream may consist of the
                               set of active microflows which are selected by a particular
                               classifier.
Upstream DS domain             the DS domain upstream of traffic flow on a boundary link.





1       Introduction
This document describes the testing methods for Differentiated Services (DiffServ) networks
developed in the DISCMAN project.
The recommendations of the IETF (RFC2474, 2475, 2597, 2598, 2697, 2698) outline the main concept
of DiffServ networks and propose two main PHB (per-hop behaviour) groups for transferring IP traffic
over a DiffServ domain. These recommendations are still under development, and service providers
that intend to offer DiffServ services to their customers are in a difficult situation because not even
the DiffServ terminology is uniform.
Therefore it is a very important task to develop and carry out testing scenarios which are able to
measure and demonstrate the advantages and the usability of DiffServ networks. The testing proposals
of this document describe basic testing scenarios for conformance testing as well as complex
testing scenarios for performance evaluation. The testing scenarios are formulated in a uniform way in
order to support the comparison of different vendors' implementations.




 2001 EURESCOM Participants in Project P1006                                     EDIN 0079-1006
page 8 (12826)                                                 EURESCOM Technical Specification

2       Basic scenarios
Basic scenarios consist of a single DS domain, and the main goal of the tests is to verify the
conformity of different vendors' implementations with the IETF recommendations. A single DS
domain here means the smallest set of equipment comprising all the functionalities found in a DS network.
These tests focus on the basic functions which should be fulfilled by DiffServ-compliant routers or
the DiffServ domain itself. The tests consider Assured Forwarding and Expedited Forwarding as
the two standards-based services. The default service (Best Effort) is handled only partially, from the
EF and AF perspectives.
Interoperability, multi-domain and MPLS-based DiffServ testing are out of the scope of this material
and belong to the Advanced Scenarios of the test implementations.

2.1     Common settings

2.1.1   Reference models used for testing

In DiffServ the roles and functions of the nodes involved in a DiffServ domain are defined in
[RFC2597][RFC2598]. Figure 1 illustrates these roles and the functions that should be fulfilled
by the network nodes.


[Figure 1: An access network connects to the DS network through the Ingress Router (IngR:
classification, conditioning); Interior Routers (IntR: classification) forward traffic to the
Egress Router (ER: classification, conditioning).]
              Figure 1. Elements of a DiffServ network and their functionalities
Inside the DS domain, classifiers select packets based on the value of the DS field, and the
appropriate buffer management and packet scheduling mechanisms are applied to the packet. Complex
classification - e.g. setting of the DS field - and traffic conditioning functions need only be
performed at network boundaries [RFC2474, RFC 2475]. The decoupling of these functions from
the forwarding behaviours within the network interior allows implementation of a wide variety of
service behaviours, with room for future expansion.
As a consequence no reclassification is needed inside the network. It is currently an open issue,
whether under special circumstances (e.g. when congestion occurs) reclassification should be used
or not.
Based on [RFC2474, RFC2475] it makes no sense to distinguish between First Hop Router and
Ingress Router, because a differentiated services boundary is located at the ingress to the first-hop
differentiated services-compliant router. The Interior Routers do not need the complex
classification function, and that is the main difference compared to the Ingress and Egress Routers.
Therefore the basic test scenarios consist of only the following equipment:
   Ingress router,
   Interior router,
   Egress router.
IngR: MF classification+conditioning
IntR: BA classification
EgrR: BA classification+conditioning
Typically, traffic arrives at the boundary of a DS domain pre-marked and pre-shaped. However, for
traffic arriving e.g. from a non-DS-compliant network, complex classification and conditioning are needed.
The elements of the classifier and the conditioner can be found in [RFC2475].

2.1.2 General rules implemented in the routers

In the basic testing scenarios (i) there is no differentiation between first-hop router and ingress router
and (ii) there is no re-marking inside the DS domain.
Table 1 and Table 2 give one possible assignment of functions to the network elements. This
assignment is not compulsory; it is only one possibility. At the ingress and at the egress side
practically all classification and conditioning functions are used. Inside the DS domain BA
classification is the main function, but the provider may use traffic conditioning mechanisms
similar to those used at the network boundaries. [DiffFrame][RFC2475]
                                    Table 1. EF PHB (RFC 2598)
                               Ingress Router             Interior Router            Egress Router
MF Classification                     X
BA Classification                     X                          X                          X
Meter                                 X                          X                          X
Marker, Re-marker                     X                                                     X
Policer                               X                          X
Shaper                                X                          X                          X


                                    Table 2. AF PHB (RFC 2597)
                               Ingress Router             Interior Router            Egress Router
MF Classification                     X
BA Classification                     X                          X                          X
Meter                                 X                          X                          X
Marker, Re-marker                     X                          X                          X
Policer                               X                          X

2.1.3 Application characteristics

Current and future IP-based applications need different handling methods from the network. To
aggregate the traffic correctly, common characteristics have to be found among the applications.
Table 3 characterises the applications by network-specific features (delay, delay variation, loss,
guaranteed minimum bandwidth).


 2001 EURESCOM Participants in Project P1006                                          EDIN 0079-1006
page 10 (12826)                                                    EURESCOM Technical Specification
                                      Table 3. Application characteristics
Application Type              Typical Applications            Low delay   Low delay   Low loss   Guaranteed
                                                                          variation              minimum
                                                                                                  bandwidth
Multimedia broadcast          Voice/video broadcast           N           Y           N 1)       Y
Multimedia interactive        VoIP, video conf.               Y           Y           N 1)       Y
Interactive text based        e.g. telnet, IRC                N           N           N 2)       N
Interactive data transfer     e.g. ftp, http                  N           N           N 2)       N
Non-interactive data          message handling, e-mail,       N           N           N 2)       N
transfer                      database replication
Control message               e.g. ICMP, IGMP                 Y           not         Y          low bandwidth
                                                                          applicable
Telemetry                                                     Y 3)                    Y
1) forward error correction, interpolating
2) cyclic error correction
3) application dependent
The numerical values of the transmission parameters are largely application dependent. For example,
in multimedia broadcast a low delay is of the order of a couple of hundred milliseconds, while for
text-based interactive communication it is a couple of seconds.
With such a table, a more or less adequate number of service classes can be created, but further
investigation is needed into how these applications and services can be mapped to the PHB classes,
both EF and AF.
This approach can be a good solution when a service provider wants to differentiate based on
application types. For test purposes, we define one mapping between services and DiffServ
classes.

2.1.4      Service classes applied for DiffServ testing

It is very hard to choose an appropriate number of classes, even for test purposes. Too many classes
can cause difficulties both in differentiation and in testing, while too few can degrade the effectiveness
of the differentiation. These classes are planned to be used only in advanced testing; for basic
scenarios a simpler model can be used.
For these reasons we chose, for advanced testing, the service classes listed below.
1. Service classes without subclasses.
In this scenario the influence of different service classes on each other can be tested. This conforms
to the situation when voice, video, data and control transfer occur simultaneously in the DS
network.
           a) Class EF (Premium service): low delay, low delay variation, low loss.
           b) Class AF11: low delay, low loss
           c) Class AF21: low delay, guaranteed minimum bandwidth.
           d) Class AF31: low loss,
           e) Class AF41: guaranteed minimum bandwidth.
           f) Class BE (Standard service): best effort.


        DSCP 101110 ==>          Class EF
        DSCP 100010 ==>          Class AF41
        DSCP 011010 ==>          Class AF31
        DSCP 010010 ==>          Class AF21
        DSCP 001010 ==>          Class AF11
        DSCP 000000 ==>          Class BE
EF Class: Priority queue
AF Classes: Different queues for the different classes
Remark:
Vendor specific fair-queue mechanism (e.g. CBWFQ) between the queues.
No special congestion avoidance mechanism (e.g. WRED) is required.
2. Service subclasses.
In this scenario the influence of different classes on the subclasses can be tested. This conforms to
the situation of e.g. a video transmission, where video synchronisation signals are transferred in
Class AF11, main picture information is transferred in Class AF12 and detailed picture
information in Class AF13 inside the DS network.
        a) Class EF (Premium service): low delay, low delay variation, low loss.
        b) Class AF11: low delay, low loss
        c) Class AF12: low delay, moderate low loss
        d) Class AF13: low delay, constrained low loss
        e) Class AF41: guaranteed minimum bandwidth.
        f) Class BE (Standard service): best effort.
        DSCP 101110 ==>          Class EF
        DSCP 100010 ==>          Class AF41
        DSCP 001110 ==>          Class AF13
        DSCP 001100 ==>          Class AF12
        DSCP 001010 ==>          Class AF11
        DSCP 000000 ==>          Class BE
EF Class: Priority queue
AF Classes: Different queues for the different classes
Remark:
Vendor specific fair-queue mechanism (e.g. CBWFQ) between the queues.
A special congestion avoidance mechanism (e.g. WRED in the AF1x queue) is required.
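
The AF codepoints listed in both scenarios follow the pattern of [RFC2597]: the first three bits encode
the AF class, the next two bits the drop precedence, and the last bit is zero. The following Python
sketch, which is only an illustrative aid and not part of the test specification (the helper names are
ours), reproduces the codepoint values listed above from that rule.

    # Illustrative sketch only: derive the AF DSCPs of RFC 2597 and check them
    # against the EF/BE codepoints and class mappings listed above.
    EF_DSCP = 0b101110   # RFC 2598 recommended EF codepoint
    BE_DSCP = 0b000000   # default (Best Effort) PHB

    def af_dscp(af_class, drop_prec):
        """Return the 6-bit DSCP for AFxy (class x = 1..4, drop precedence y = 1..3)."""
        if not (1 <= af_class <= 4 and 1 <= drop_prec <= 3):
            raise ValueError("AF class must be 1..4, drop precedence 1..3")
        return (af_class << 3) | (drop_prec << 1)

    def bits(dscp):
        return format(dscp, "06b")

    # Scenario 1 mapping (one subclass per AF class) plus the AF1x subclasses of scenario 2.
    assert bits(af_dscp(4, 1)) == "100010"   # AF41
    assert bits(af_dscp(3, 1)) == "011010"   # AF31
    assert bits(af_dscp(2, 1)) == "010010"   # AF21
    assert bits(af_dscp(1, 1)) == "001010"   # AF11
    assert bits(af_dscp(1, 2)) == "001100"   # AF12
    assert bits(af_dscp(1, 3)) == "001110"   # AF13
    assert bits(EF_DSCP) == "101110" and bits(BE_DSCP) == "000000"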

2.1.5 Test traffic characterisation

For the sake of repeatable tests it is important to use well-defined load/traffic sources. These
sources have to generate several types of traffic in a controlled manner. For the conformance tests,
two major types of bursty test traffic are needed: a simple on-off type and a more sophisticated one,
e.g. a Poisson type.
On-off type test traffic is necessary for burst-size related testing. Shaping or policing parameters and
features can be tested well when the exact number of test packets is known. In the case of an on-off
source, the behaviour of the test traffic is very well controlled, therefore the behaviour of the queue
– which handles that traffic – can be estimated as well.
For general test purposes, e.g. policing tests, marking tests etc., more sophisticated models should be
used, because these are closer to real life. One of these models is the Poisson model. Poisson-arrival
test traffic patterns are widely used, and many traffic generators implement this type of test
traffic.
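
As a purely illustrative companion to the two traffic models described above (not a prescribed
generator; the parameter names and the choice of Python are ours), the sketch below produces packet
departure times for an on-off source with a known packet count per burst and for a Poisson source with
exponential inter-arrival times.

    # Illustrative sketch of the two test-traffic models described above.
    import random

    def on_off_departures(burst_packets, packet_time_s, off_time_s, n_bursts):
        """On-off source: bursts of back-to-back packets separated by idle periods,
        so the exact number of test packets per burst is known."""
        times, t = [], 0.0
        for _ in range(n_bursts):
            for _ in range(burst_packets):
                times.append(t)
                t += packet_time_s          # packets sent back to back at line rate
            t += off_time_s                 # silence between bursts
        return times

    def poisson_departures(rate_pps, n_packets, seed=None):
        """Poisson source: exponentially distributed inter-arrival times."""
        rng = random.Random(seed)
        times, t = [], 0.0
        for _ in range(n_packets):
            t += rng.expovariate(rate_pps)
            times.append(t)
        return times

    # Example: 10 bursts of 50 packets of 1500 bytes at 100 Mbit/s, 10 ms silences,
    # and a 1000 packet/s Poisson stream of 5000 packets.
    on_off = on_off_departures(50, 1500 * 8 / 100e6, 0.010, 10)
    poisson = poisson_departures(1000.0, 5000, seed=1)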

2.2     Basic scenario for conformance tests

2.2.1   Test configurations

The basic scenario consists of vendor-independent figures and test outlines. These tests are intended
to verify whether a vendor's AF or EF implementation conforms to the corresponding RFCs or not.
The basis of these tests is the PICS developed in the DISCMAN project. The test configuration for the
basic tests can be seen in Figure 2.


[Figure 2: Traffic generator 1 and traffic generator 2 are connected to ingress interfaces 1 and 2 of
the IngR (the SUT, egress interface speed Cegr); the traffic crosses the DS network to the ER and
leaves on the destination link towards a host and a traffic analyser; a second traffic analyser
monitors the SUT egress.]
                              Figure 2. Test configuration for basic tests

2.2.2   AF tests outline

Traffic Classification and Conditioning
Classification
Classification is needed mainly at the ingress side of the network to give a certain priority label to
the packets coming from the user side. Defining the objective of this assignment is the role of the
service provider, and the assignment can happen according to the application, port, IP address, etc.
(MF classification) or only according to the DSCP field (BA classification).
Therefore the aim of these tests is to verify whether the assignment method is working properly
or not. The tests involve BA and the possible MF classification methods (listed in [RFC2597]).
Test Case: AF.1.1-AF.2.6
        Behaviour Aggregate (BA) Classifier
A set of packets with the same DS codepoint is a behaviour aggregate. A BA classifier selects
packets based on the contents of the DS field. This set of packets is served using the same queue
during the transfer of the packets. The aim of these tests is to verify whether the classification
method is working properly or not.

Test Case: AF.1.1
        Multiple Field Classifier
A set of packets with the same content of some arbitrary number of header fields is identified
during MF classification. MF classifiers set the DS Codepoint of the packets to an appropriate
value based on the classification rules. The aim of these tests is to verify whether the classification
method is working properly or not.
Test Case: AF.2.1-AF.2.6
Traffic Conditioning
The traffic conditioner modifies the traffic stream parameters so that the traffic stream fulfils
the predefined traffic conditioning agreement (TCA). Traffic conditioners may include meters,
markers, droppers and shapers and are generally located at the boundaries of the DS domain.
Test Case: AF.4.1-AF.6.8
        Marking
The marker sets the DS codepoint of a packet to an appropriate value based on the classification
procedure. The aim of these tests is to verify whether the marking or re-marking method is working
properly or not.
Test Case: AF.5.1-AF.5.13
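The AF.5.1-AF.5.6 tests refer to the Single Rate Three Colour Marker of RFC 2697, parameterised by
CIR, CBS and EBS. As a rough illustration of the metering logic these tests exercise (a sketch of the
colour-blind mode only, not the specified test procedure nor any vendor's implementation), consider
the following.

    # Sketch of a colour-blind Single Rate Three Colour Marker (srTCM, RFC 2697).
    # Illustrative only; CIR is in bytes/s, CBS and EBS in bytes.
    class SrTCM:
        def __init__(self, cir_bytes_per_s, cbs_bytes, ebs_bytes):
            self.cir = cir_bytes_per_s
            self.cbs = cbs_bytes
            self.ebs = ebs_bytes
            self.tc = cbs_bytes      # committed token bucket, starts full
            self.te = ebs_bytes      # excess token bucket, starts full
            self.last = 0.0          # time of the previous update [s]

        def _refill(self, now):
            tokens = (now - self.last) * self.cir
            self.last = now
            # Tokens fill the committed bucket first; the surplus spills into the excess bucket.
            spill = max(0.0, self.tc + tokens - self.cbs)
            self.tc = min(self.tc + tokens, self.cbs)
            self.te = min(self.te + spill, self.ebs)

        def colour(self, now, size_bytes):
            """Return the colour of a packet of size_bytes arriving at time now [s]."""
            self._refill(now)
            if self.tc >= size_bytes:
                self.tc -= size_bytes
                return "green"
            if self.te >= size_bytes:
                self.te -= size_bytes
                return "yellow"
            return "red"

    # Example: CIR = 1 Mbit/s (125000 bytes/s), CBS = EBS = 3000 bytes.
    meter = SrTCM(125000, 3000, 3000)
    colours = [meter.colour(t, 1500) for t in (0.000, 0.001, 0.002, 0.003, 0.004)]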
        Policing
A policer discards packets of a traffic stream according to the predefined traffic profile. The policer
is a key function for a service provider to control the access traffic. The aim of these tests is to
verify whether the policing method is working properly or not.
Test Case: AF.6.1-AF.6.8
        Shaping
A shaper delays packets of a traffic stream so that the stream conforms to the predefined
traffic profile. The aim of these tests is to verify whether the shaping function is working properly
or not.
Test Case: AF.4.1
Traffic Forwarding
The Assured Forwarding (AF) PHB group provides forwarding of IP packets in at most four
independent AF classes. In each class there are three levels of drop precedence. (More classes and
drop precedence levels can be used locally.) It is important that a DS node must not aggregate two or
more AF classes together and that the node has to allocate a configurable minimum amount of
resources to each AF class. Excess resources can be allocated to an AF class. The aim of these tests
is to verify whether the forwarding function is working properly or not.
Test Case: AF.3.1- AF.3.11
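Several of these forwarding tests (e.g. AF.3.6 and AF.3.10) reduce to comparing per-drop-precedence
loss ratios measured at the traffic analyser. The small sketch below, given only as an illustration of
that evaluation step (the function and its input format are our own), checks that the loss ratio does
not decrease when the drop precedence increases.

    # Illustrative evaluation of the drop-precedence ordering within one AF class:
    # traffic marked AFx1 must not see a higher loss ratio than AFx2, nor AFx2 than AFx3.
    def loss_ratio(sent, received):
        return (sent - received) / sent if sent else 0.0

    def drop_precedence_ordered(counts):
        """counts: {drop_precedence_level: (packets_sent, packets_received)} for one AF class."""
        ratios = [loss_ratio(*counts[dp]) for dp in sorted(counts)]
        return all(low <= high for low, high in zip(ratios, ratios[1:]))

    # Example: AFx1 lost 1 %, AFx2 lost 5 %, AFx3 lost 20 % -> ordering respected.
    example = {1: (10000, 9900), 2: (10000, 9500), 3: (10000, 8000)}
    assert drop_precedence_ordered(example)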

2.2.3 EF tests outline

Traffic Classification
The traffic classification method for EF is similar to that for AF. MF and BA classification are also
used, but there is only one class and only one recommended DSCP. The aim of the traffic classification
tests is to verify whether the implementation of the classification method is working properly or not.
Test Case: EF.1.1 - EF.1.7
Conditioning
The EF PHB is intended to be used to build a low-loss, low-latency, low-jitter, assured-bandwidth
service through DS domains. When creating such a service, the role of conditioning is very important.
Policing and shaping have to ensure that the aggregate's arrival rate at any node is always less than
the node's configured minimum departure rate.
The aim of conditioning tests is to verify whether the implementation of the conditioner (meter,
marker, shaper, dropper) is working properly or not.
Test Case: EF.3.1 - EF.5.4
Traffic Forwarding
The EF PHB is defined as a forwarding treatment for a particular DiffServ aggregate where the
departure rate of the aggregate’s packets from any DiffServ node must equal or exceed a
configurable rate. The EF traffic should receive this rate independent of the intensity of any other
traffic attempting to transit the node. The aim of traffic forwarding tests is to verify whether the
implementation of EF PHB is working properly or not.
Test Case: EF.6.1 - EF.6.4
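One possible way to evaluate the EF.6.x measurements, sketched below for illustration only (the
interval length, the tolerance and the requirement that the EF queue be kept backlogged are our own
assumptions, not prescriptions of this document), is to bin the captured EF departures and compare
the rate in every bin against the configured rate.

    # Illustrative post-processing of an EF forwarding measurement: in every
    # measurement interval the observed EF departure rate should equal or exceed
    # the configured rate, provided the traffic generator keeps the EF queue backlogged.
    def ef_rate_respected(departures, configured_rate_bps, interval_s=0.01):
        """departures: list of (timestamp_s, size_bytes) for EF packets leaving the SUT."""
        if not departures:
            return False
        start = departures[0][0]
        bits_per_bin = {}
        for t, size in departures:
            idx = int((t - start) / interval_s)
            bits_per_bin[idx] = bits_per_bin.get(idx, 0) + size * 8
        last_full_bin = max(bits_per_bin) - 1     # ignore the possibly partial last bin
        required_bits = configured_rate_bps * interval_s
        return all(bits_per_bin.get(i, 0) >= required_bits for i in range(last_full_bin + 1))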

2.3     Basic scenario for performance testing

2.3.1   Introduction

The final usability indicator for a network device is the throughput it can offer to the
network. This test suite is designed to measure the throughput of DiffServ-compliant
implementations. The strategy followed is to provide performance measurements for each of the
components in the DiffServ chain on its own and then to concatenate the different components into
near-reality configurations.
The indicators used to quantify the load in the router are:
a) Main CPU load
b) Line card CPU load
TCP and UDP traffic will be used separately to assess the impact of the different configurations on
file transfer based and multimedia traffic.

2.3.2   Objective

The objective of these tests is to characterise the equipment's implementation of the DiffServ
functionality both at the ingress and at the egress of the network.
Since links carrying IP traffic are normally unbalanced, traffic control at the egress of the network
might be necessary for users which can be characterised as information sinks, whereas traffic
control at the ingress will be appropriate for users behaving as information sources (e.g. WWW
servers).

2.3.3   Test bed set-up

2.3.3.1 TCP test bed set-up

The following figure (Figure 3) shows the measurement configuration for TCP traffic.








                     Figure 3. Measurement configuration for TCP traffic
The ATM traffic Analyser is inserted into the output fibre of the device under test and works as an
optical layer signal regenerator. The traffic between the two routers is not disturbed in any other
way. Both routers are programmed with the minimal configuration assuring connectivity between
the two test workstations (in brown).
The control workstation (in blue) executes SNMP-based software to read the processor load and
other significant variables of both routers. DiffServ features are enabled on the router on the left and
remain disabled on the router on the right, which serves as the reference machine.

2.3.3.2 UDP test bed set-up

The following figure (Figure 4) shows the test bed used to characterise the router with UDP traffic.




                         Figure 4. Measurement set-up for UDP traffic
The ATM traffic Generator side of the ATM Traffic Generator/Analyser is used to inject UDP
packets into the device under test. The Analyser is used to check the traffic leaving the router for
integrity.


 2001 EURESCOM Participants in Project P1006                                      EDIN 0079-1006
page 16 (12826)                                                EURESCOM Technical Specification
The control workstation (in blue) executes SNMP-based software to read the processor load and
other significant variables of the router.
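
The SNMP-based software run on the control workstation is not specified further in this document.
Purely as an illustration of the kind of polling involved, the sketch below reads a CPU-load object
periodically with the pysnmp library; the management address, community string and OID are
placeholders (the CPU-utilisation OID is vendor specific and must be taken from the router's MIB).

    # Hedged illustration of periodic SNMP polling of a router CPU-load variable.
    # Host, community and OID below are placeholders, not values from this document.
    import time
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    ROUTER = "192.0.2.1"                  # placeholder management address
    CPU_OID = "1.3.6.1.2.1.25.3.3.1.2"    # example: HOST-RESOURCES-MIB hrProcessorLoad column

    def poll_cpu(host, oid, community="public", interval_s=5.0, samples=10):
        for _ in range(samples):
            error_indication, error_status, _, var_binds = next(
                getCmd(SnmpEngine(),
                       CommunityData(community),
                       UdpTransportTarget((host, 161)),
                       ContextData(),
                       ObjectType(ObjectIdentity(oid))))
            if error_indication or error_status:
                print("SNMP error:", error_indication or error_status.prettyPrint())
            else:
                for name, value in var_binds:
                    print(time.time(), name.prettyPrint(), value.prettyPrint())
            time.sleep(interval_s)

    if __name__ == "__main__":
        poll_cpu(ROUTER, CPU_OID)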

2.3.3.3 Structure of the test suite

The tests cover the scenarios characterising the device's behaviour:
           Colouring behaviour with
                 One resulting colour
                 Two colours depending on the traffic
           Traffic policing functionality (e.g. colouring + dropping excess traffic)
           with TCP versus UDP traffic
           Implementing the DiffServ functionality at the ingress or the egress of the network





3        Vendor dependent tests

3.1      Single vendor tests based on the PICS
Test#      Test Label                                      Purpose
           Assured Forwarding PHB
           Classification
AF.1.1     Classific.AF.codepoints.BA                      To verify that the system classifies
                                                           incoming IP traffic according to the DS
                                                           codepoints
AF.2.1     Classific.AF.codepoints.MF.IP.destaddr          To verify that the system classifies
                                                           incoming IP traffic according to the
                                                           destination address
AF.2.2     Classific.AF.codepoints.MF.IP.sourceaddr        To verify that the system classifies
                                                           incoming IP traffic according to the source
                                                           address
AF.2.3     Classific.AF.codepoints.MF.IP.protID            To verify that the system classifies
                                                           incoming IP traffic according to the
                                                           protocol ID
AF.2.4     Classific.AF.codepoints.MF.IP.sourceport        To verify that the system classifies
                                                           incoming IP traffic according to the source
                                                           port
AF.2.5     Classific.AF.codepoints.MF.IP.destport          To verify that the system classifies
                                                           incoming IP traffic according to the
                                                           destination port
AF.2.6     Classific.AF.codepoints.MF.IP.combine           To verify that the system classifies
                                                           incoming IP traffic according to the
                                                           combination of the classification methods
           AF PHB Group
AF.3.1     PHBgroup.AF.class.general                       To verify that the system implements all the
                                                           four general use AF classes
AF.3.2     PHBgroup.AF.class.independence                  To verify that packets from different AF
                                                           classes are forwarded independently
AF.3.3.1   PHBgroup.AF.min.resource.allocation.buffer      To verify that the system allocates a
                                                           minimum amount of buffer space for each
                                                           implemented AF class
AF.3.3.2   PHBgroup.AF.min.resource.allocation.bandwidt    To verify that the system allocates a
           h                                               minimum amount of forwarding resources
                                                           to implement each AF class. This
                                                           configured rate is achieved by all classes
                                                           over both small and large time scales.
AF.3.4.1   PHBgroup.AF.min.resource.allocation.short_tim   To verify that the system achieves a
           escale                                          configured rate by all classes over small
                                                           time scales for each AF service class.
AF.3.4.2   PHBgroup.AF.min.resource.allocation.long_time To verify that the system achieves a
           scale                                         configured rate by all classes over long time
                                                         scales for each AF service class.
AF.3.5     PHBgroup.AF.resource.excess                     To verify that the system allows excess
                                                           resources to be assigned to AF classes
AF.3.6     PHBgroup.AF.respect.forwarding_prec             To verify that the system does not forward


 2001 EURESCOM Participants in Project P1006                                       EDIN 0079-1006
page 18 (12826)                                        EURESCOM Technical Specification
                                                    an IP packet with smaller probability if it
                                                    contains a drop precedence value p than if it
                                                    contains a drop precedence value q when
                                                    p<q.
AF.3.7     PHBgroup.AF.threedropprec                To verify that the system accepts all three
                                                    drop precedence DSCP’s within any AF
                                                    class
AF.3.8     PHBgroup.AF.twodropprec                  To verify that the system yields at least two
                                                    drop precedence levels per AF class
AF.3.9     PHBgroup.AF.dropprec_number              To verify that the system supports three
                                                    different drop precedence levels per AF
                                                    class.
AF.3.10    PHBgroup.AF.dropprec.sequence            To verify that when the system yields only
                                                    two drop precedence levels per AF class, the
                                                    DSCP AFx1 yields a lower loss probability
                                                    than DSCPs AFx2 and AFx3
AF.3.11    PHBgroup.AF.order                        To verify that the system respects the order
                                                    of AF packets belonging to the same micro-
                                                    flow regardless of their drop precedence
           Conditioning
AF.4.1     Condition.AF.shape                       To verify that the system implements traffic
                                                    shaping at the edge of a domain to control
                                                    the amount of AF traffic entering or exiting
AF.4.2     Condition.AF.discard                     To verify that the system implements
                                                    packet discarding at the edge of a domain to
                                                    control the amount of AF traffic entering or
                                                    exiting
AF.4.3     Condition.AF.precchange                  To verify that the system changes the drop
                                                    precedence of packets at the edge of a
                                                    domain to control the amount of AF traffic
                                                    entering or exiting
AF.4.4     Condition.AF.classchange                 To verify that the system changes AF class
                                                    of packets at the edge of a domain to
                                                    control the amount of AF traffic entering or
                                                    exiting.
AF.4.5     Condition.AF.reorder                     To verify that the conditioning actions of
                                                    the system don’t cause reordering of
                                                    packets of the same microflow.
           Marking
AF.5.1     Mark.AF.srTCM.implement                  To verify that the system supports the Single
                                                    Rate Three Colour Marker (srTCM) method for
                                                    marking
AF.5.2     Mark.AF.srTCM.Blindmode                  To verify that when the system supports the
                                                    srTCM method it implements the Colour-Blind
                                                    mode
AF.5.3     Mark.AF.srTCM.Awaremode                  To verify that when the system supports the
                                                    srTCM method it implements the Colour-Aware
                                                    mode
AF.5.4     Mark.AF.srTCM.CIR                        To verify that when the system supports the
                                                    srTCM method the Committed Information Rate
                                                    (CIR) is measured in bytes of IP packets per
                                                    second (without link and physical layer
                                                    information)
AF.5.5     Mark.AF.srTCM.CBS                        To verify that when the system supports the
                                                    srTCM method the Committed Burst Size (CBS)
                                                    is measured in bytes
AF.5.6     Mark.AF.srTCM.EBS                        To verify that when the system supports the
                                                    srTCM method the Excess Burst Size (EBS) is
                                                    measured in bytes
AF.5.7     Mark.AF.trTCM.implement                  To verify that the system supports the Two
                                                    Rate Three Colour Marker (trTCM) method for
                                                    marking
AF.5.8     Mark.AF.trTCM.Blindmode                  To verify that when the system supports the
                                                    trTCM method it implements the Colour-Blind
                                                    mode
AF.5.9     Mark.AF.trTCM.Awaremode                  To verify that when the system supports the
                                                    trTCM method it implements the Colour-Aware
                                                    mode
AF.5.10    Mark.AF.trTCM.PIR                        To verify that when the system supports the
                                                    trTCM method the Peak Information Rate (PIR)
                                                    is measured in bytes of IP packets per second
                                                    (without link and physical layer information)
AF.5.11    Mark.AF.trTCM.CIR                        To verify that when the system supports the
                                                    trTCM method the Committed Information Rate
                                                    (CIR) is measured in bytes of IP packets per
                                                    second (without link and physical layer
                                                    information)
AF.5.12    Mark.AF.trTCM.PBS                        To verify that when the system supports the
                                                    trTCM method the Peak Burst Size (PBS) is
                                                    measured in bytes
AF.5.13    Mark.AF.trTCM.CBS                        To verify that when the system supports the
                                                    trTCM method the CBS is measured in bytes.
           Queuing and discarding
AF.6.1     Discard.AF.congest.min               To verify that the system minimises long-
                                                term congestion within each AF class.
AF.6.2     Discard.AF.congest.short_term        To verify that the system allows short-term
                                                congestion resulting from bursts within
                                                each AF class.
AF.6.3     Discard.AF.congest.drop              To verify that the system drops packets as a
                                                result of long term congestion within an AF
                                                class
AF.6.4     Discard.AF.congest.queue             To verify that the system queues packets as
                                                a result of short term congestion within an
                                                AF class
AF.6.5     Discard.AF.congest.equaldropping         To verify that the system’s dropping algorithm results in an equal packet discard probability for flows with different short-term burst shapes but identical longer-term packet rates, if both flows are within the same AF class.
AF.6.6     Discard.AF.congest.discardrate           To verify that the system’s dropping algorithm results in a discard rate for a particular micro-flow’s packets within a single precedence level that is proportional to that flow’s share of the total traffic passing through that precedence level, at any smoothed congestion level.
AF.6.7     Discard.AF.respond.gradually                      To verify that the dropping algorithm of the
                                                             system responds gradually to congestion.
AF.6.8     Discard.AF.independent_config            To verify that the system’s dropping algorithm control parameters are independently configurable for each packet drop precedence and for each AF class.
           Tunneling
AF.7.1     Tunnel.AF.forwarding.assurance                    To verify that the system doesn’t reduce the
                                                             forwarding assurance of tunnelled AF
                                                             packets.
AF.7.2     Tunnel.AF.ordering                                To verify that the system doesn’t re-order
                                                             the packets within a tunnelled AF micro-
                                                             flow.


           Expedited Forwarding PHB
           Classification
EF.1.1     classific.EF.codepoint.BA                         To verify that the system classifies
                                                             incoming IP traffic according to the DS
                                                             codepoint
EF.1.2     classific.EF.codepoints.MF.IP.destaddr            To verify that the system classifies
                                                             incoming IP traffic according to the
                                                             destination address
EF.1.3     classific.EF.codepoints.MF.IP.sourceaddr          To verify that the system classifies
                                                             incoming IP traffic according to the source
                                                             address
EF.1.4     classific.EF.codepoints.MF.IP.protID            To verify that the system classifies incoming IP traffic according to the protocol ID
EF.1.5     classific.EF.codepoints.MF.IP.sourceport          To verify that the system classifies
                                                             incoming IP traffic according to the source
                                                             port
EF.1.6     classific.EF.codepoints.MF.IP.destport            To verify that the system classifies
                                                             incoming IP traffic according to the
                                                             destination port
EF.1.7     classific.EF.codepoints.MF.IP.combine             To verify that the system classifies
                                                             incoming IP traffic according to the
                                                             combination of the classification methods
           Traffic Profiling
EF.2.1     Profile.EF.rate                                 To verify that the system profiles EF traffic to a rate


EF.2.2     Profile.EF.burstsize                 To verify that - in case of pre-emptive
                                                forwarding mechanism – the system
                                                profiles EF traffic to a burst size.
           Marking
EF.3.1     mark.EF.fromdefault                   To verify that the system marks incoming IP traffic carrying the default DSCP (000000) to the EF DSCP (101110).
EF.3.2     mark.EF.fromother                     To verify that the system marks incoming IP traffic carrying a DSCP other than 000000 and 101110 to the EF DSCP (101110).
           Policing
EF.4.1     police.EF.peakrate                    To verify that the system polices incoming EF traffic to a given peak rate
EF.4.2     police.EF.burstsize                   To verify that the system - in case of a pre-emptive forwarding mechanism - polices incoming EF traffic to a given burst size
EF.4.3     police.EF.peakrate.set               To verify that the policing peak rate is
                                                configurable by the network administrator
EF.4.4     police.EF.burstsize.set               To verify that - in case of a pre-emptive forwarding mechanism - the policing burst size is configurable by the network administrator
EF.4.5     police.EF.discard.violated            To verify that the system discards incoming EF traffic violating policing parameters
EF.4.6     police.EF.discard.all                 To verify that the system discards all incoming EF traffic when no peak rate is configured
           Shaping
EF.5.1     shape.EF.rate                         To verify that the system shapes the incoming EF traffic based on a given peak rate
EF.5.2     shape.EF.burstsize                    To verify that the system - in case of a pre-emptive forwarding mechanism - shapes the incoming EF traffic based on a given burst size.
EF.5.3     shape.EF.rate*burstsize              To verify that shaping parameters (peak
                                                rate, burst size) can be set by the network
                                                administrator
EF.5.4     shape.EF.violation                   To verify that the system shapes incoming
                                                EF traffic violating a given set of policing
                                                parameters
           Forwarding
EF.6.1     forward.EF.set.deprate               To verify that a minimum departure rate for
                                                an EF aggregate is configurable on the
                                                system
EF.6.2     forward.EF.independency               To verify that the departure rate for an EF aggregate is independent of the intensity of any other traffic
EF.6.3     forward.EF.min.deprate                To verify that the policing of an EF aggregate is configurable so that the EF arrival rate is always equal to or less than the system’s minimum departure rate for the EF aggregate

 2001 EURESCOM Participants in Project P1006                            EDIN 0079-1006
page 22 (12826)                                                  EURESCOM Technical Specification
EF.6.4     forward.EF.overmeasured               To verify that the system forwards an EF aggregate at a rate averaging at least the configured rate, when measured over any time interval equal to or longer than the time it takes to send an output link MTU sized packet at the configured rate



3.1.1    AF PHB Test Suite

3.1.1.1 Behaviour Aggregate

Test# AF.1.1
        Test Label: classific.AF.codepoints.BA
        Last modification: 21.09.2000
        Purpose: To verify that the system classifies incoming IP traffic according to the DS
                 codepoints.
        References: PICS for AF PHB 2.1.1.1 (1-13)
        Requirement: Optional
        Test Configuration: Figure 2.
        Used traffic types: see below.
        DSCP code points: Table 4.
                                     Table 4. DSCPs to be tested
                                                  DSCP
                                                  001010
                                                  001100
                                                  001110
                                                  010010
                                                  010100
                                                  010110
                                                  011010
                                                  011100
                                                  011110
                                                  100010
                                                  100100
                                                  100110
        Test Setup
        Initial Conditions:
         Cegr is the bandwidth of the (single) egress link connecting the SUT to the receiving
            measurement monitor(s).
 Set up the load generator 1 to create a flow 1 marked by a single AF DSCP (Table 4)
            with a rate Rtest = 0.4 * Cegr and a packet spacing of 1 / Rtest (CBR); a worked parameter
            example is given after this test description. Flow 1 is destined to the destination traffic analyser.



       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the AF class of the traffic flow
          1.
       Set the system to classify incoming IP traffic into an AF class.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses per AF DSCP.
      Repeat this test for all individual AF DSCP code points of Table 4.
      Observable results:
       The AF DS codepoints of the traffic measured by the destination traffic analyser must be
          the same as on ingress interface 1.
       No AF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
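
      The rates and spacings in this test are expressed relative to the egress capacity Cegr. The
      following sketch (Python) is purely illustrative: it computes the concrete figures for a
      hypothetical 100 Mbit/s egress link and a fixed 500-byte IP packet size, neither of which is
      prescribed by this specification; the spacing of 1 / Rtest is interpreted as one packet length
      per Rtest.

          # Illustrative parameter calculation for test AF.1.1 (values are assumptions).
          C_EGR = 100_000_000        # hypothetical egress link capacity Cegr in bit/s
          PKT_BITS = 500 * 8         # hypothetical fixed IP packet size in bits

          r_test = 0.4 * C_EGR                 # flow 1 rate: Rtest = 0.4 * Cegr
          r_flow2 = 0.55 * C_EGR               # flow 2 average rate: 0.55 * Cegr (Poisson)
          af_alloc = 0.8 * C_EGR               # AF allocation on the egress interface

          pkt_rate = r_test / PKT_BITS         # flow 1 packets per second
          spacing_s = 1.0 / pkt_rate           # CBR spacing between flow 1 packets

          print(f"flow 1 rate        : {r_test / 1e6:.1f} Mbit/s")
          print(f"flow 1 packet rate : {pkt_rate:.0f} packets/s")
          print(f"flow 1 spacing     : {spacing_s * 1e6:.0f} microseconds")
          print(f"flow 2 mean rate   : {r_flow2 / 1e6:.1f} Mbit/s")
          print(f"AF class allocation: {af_alloc / 1e6:.1f} Mbit/s")

      With these assumed values flow 1 offers 40 Mbit/s (10 000 packets/s, one packet every
      100 microseconds), flow 2 offers 55 Mbit/s on average, and the AF class is allocated
      80 Mbit/s on the egress interface, so no AF packets should be lost.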




3.1.1.2 Multiple Field

Test# AF.2.1
      Test Label: classific.AF.codepoints.MF.IP.destaddr
      Last modification: 16.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the destination
              address.
      References: PICS for AF PHB 2.1.1 (14)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR). Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the AF class of the traffic flow
          1.
       Set the system to classify the incoming traffic flow 1 into a specific DSCP according to the
          IP destination address.

 2001 EURESCOM Participants in Project P1006                                          EDIN 0079-1006
page 24 (12826)                                                  EURESCOM Technical Specification
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses per AF DSCP.
      Repeat this test for all individual AF DSCP code points of Table 4.
      Observable results:
       DS codepoints of egress traffic should be correct.
       No AF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE




Test# AF.2.2
      Test Label: classific.AF.codepoints.MF.IP.sourceaddr
      Last modification: 16.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the source
              address.
      References: PICS for AF PHB 2.1.1 (15)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR). Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the AF class of the traffic flow
          1.
       Set the system to classify the incoming traffic flow 1 into a specific DSCP according to the
          IP source address.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses per AF DSCP.
      Repeat this test for all individual AF DSCP code points of Table 4.
      Observable results:


       DS codepoints of egress traffic should be correct.
       No AF packets are lost.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE




Test# AF.2.3
      Test Label: classific.AF.codepoints.MF.IP.protID
      Last modification: 16.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the protocol
              ID.
      References: PICS for AF PHB 2.1.1 (16)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
          Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
           spacing of 1 / Rtest (CBR). Flow 1 is destined to the destination traffic analyser.
          Set up the load generator 2 to create a flow 2 marked as default with an average rate of
           0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
           destination traffic analyser.
          Allocate a rate of at least 0.8 * Cegr on the egress interface to the AF class of the traffic
           flow 1.
          Set the system to classify the incoming traffic flow 1 into a specific DSCP according to
           the IP protocol ID.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses per AF DSCP.
      Repeat this test for all individual AF DSCP code points of Table 4.
      Observable results:
       DS codepoints of egress traffic should be correct.
       No AF packets are lost.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE




Test# AF.2.4


 2001 EURESCOM Participants in Project P1006                                        EDIN 0079-1006
page 26 (12826)                                                 EURESCOM Technical Specification
      Test Label: classific.AF.codepoints.MF.IP.sourceport
      Last modification: 16.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the source
              port.
      References: PICS for AF PHB 2.1.1 (17)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
          Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
           spacing of 1 / Rtest (CBR). Set up the load generator to use a specific UDP port number
           as a source port. Flow 1 is destined to the destination traffic analyser.
          Set up the load generator 2 to create a flow 2 marked as default with an average rate of
           0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
           destination traffic analyser.
          Allocate a rate of at least 0.8 * Cegr on the egress interface to the AF class of the traffic
           flow 1.
       Set the system to classify the incoming traffic according to the source port.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses per AF DSCP.
      Repeat this test for all individual AF DSCP code points of Table 4.
      Observable results:
       DS codepoints of egress traffic should be correct.
       No AF packets are lost.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE




Test# AF.2.5
      Test Label: classific.AF.codepoints.MF.IP.destport
      Last modification: 16.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the destination
              port.
      References: PICS for AF PHB 2.1.1 (18)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.

      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR). Set up the load generator to use a specific UDP port number as
          a destination port. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the AF class of the traffic flow
          1.
       Set the system to classify the incoming traffic according to the destination port.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses per AF DSCP.
      Repeat this test for all individual AF DSCP code points of Table 4.
      Observable results:
       DS codepoints of egress traffic should be correct.
       No AF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE




Test# AF.2.6
      Test Label: classific.AF.codepoints.MF.IP.combine
      Last modification: 16.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the
               combination of the classification methods.
      References: PICS for AF PHB 2.1.1 (19)
      Requirement: Conditional ( If any test from 2.1 to 2.5 is passed then Optional else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR) and with specific source port, destination port, source IP
          address, destination IP address and protocol ID. Flow 1 is destined to the destination
          traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.

 2001 EURESCOM Participants in Project P1006                                          EDIN 0079-1006
page 28 (12826)                                                 EURESCOM Technical Specification
 Allocate a rate of at least 0.8 * Cegr on the egress interface to the AF class of the traffic
           flow 1.
        Set the system to classify the incoming traffic flow 1 into a DSCP according to one of the
           field combinations for which a test from AF.2.1 to AF.2.5 was passed (see the classifier
           sketch after this test description).
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analysers on losses per AF DSCP.
      Repeat this test for all individual AF DSCP code points of Table 4.
      Observable results:
       DS codepoints of egress traffic should be correct.
       No AF packets are lost.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
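
      Tests AF.2.1 to AF.2.6 exercise multi-field (MF) classification on the IP destination address,
      source address, protocol ID, source port and destination port, individually and in combination.
      The sketch below (Python) shows the matching logic only; the field values, the rule layout and
      the helper name are hypothetical and merely illustrate how a combined rule, as used in this
      test, can be expressed.

          from typing import Optional

          AF11_DSCP = 0b001010  # one of the AF codepoints listed in Table 4

          # Hypothetical MF rule combining all five fields tested in AF.2.1 - AF.2.5.
          RULE = {
              "dst_addr": "10.0.2.1",
              "src_addr": "10.0.1.1",
              "protocol": 17,        # UDP
              "src_port": 5001,
              "dst_port": 5002,
          }

          def classify(packet: dict) -> Optional[int]:
              """Return the DSCP to set if every configured rule field matches."""
              if all(packet.get(field) == value for field, value in RULE.items()):
                  return AF11_DSCP
              return None            # no match: packet keeps its previous marking

          # Flow 1 of test AF.2.6 would match the full rule and be marked AF11:
          flow1_packet = {"dst_addr": "10.0.2.1", "src_addr": "10.0.1.1",
                          "protocol": 17, "src_port": 5001, "dst_port": 5002}
          assert classify(flow1_packet) == AF11_DSCP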


3.1.1.3 Traffic Forwarding

Test# AF.3.1
      Test Label: PHBgroup.AF.class.general
      Last modification: 21.09.2000
      Purpose: To verify that the system implements all the four general use AF classes.
      References: PICS for AF PHB 2.1.2. (20)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Check the system manual on the maximum number of different AF classes to be supported.
                       Table 5. Maximum number of different AF classes
                             Different AF class DS codepoints
                                        001dd0 AF1d
                                        010dd0 AF2d
                                        011dd0 AF3d
                                        100dd0 AF4d
                     Note that at least one of the ”d” bits has to be set to 1.
 Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr for the
           maximum number N of the supported AF classes (N ∈ {1, 2, 3, 4}). Each of the N
           individual AF classes constituting the sub-flows of flow 1 should have a rate of 0.1 *
           Cegr. The packets of flow 1 are spaced 1 / Rtest. The packets of each individual supported
           AF class sub-flow are spaced N / Rtest. All AF sub-flows should be multiplexed
           simultaneously without violating the prior conditions (CBR). Flow 1 is destined to the
           destination traffic analyser (a sketch of this flow arithmetic is given after this test
           description).
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than
          the destination traffic analyser.
       Allocate a rate of 0.2 * Cegr on the egress interface per supported AF class of flow 1.
       Set the system to classify incoming traffic flow 1 according to DSCP.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses and throughput per
         AF class.
      Observable results:
       The AF DS codepoints of the traffic measured by the destination traffic analyser must be
          the same as on ingress interface 1.
       The throughput of each AF class must be the same.
       No AF packets must be lost.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
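
      The initial conditions of this test scale with the number N of AF classes supported by the
      SUT. The sketch below (Python) is purely illustrative and evaluates the flow 1, flow 2 and
      allocation figures for each possible N, expressed as fractions of Cegr.

          # Flow arithmetic of test AF.3.1 for N supported AF classes (illustrative only).
          # All rates are given as fractions of the egress capacity Cegr.
          def af31_rates(n):
              """Return (flow 1 rate, per sub-flow rate, flow 2 rate, per-class allocation)."""
              flow1 = n * 0.1                  # aggregate rate of flow 1
              subflow = 0.1                    # one CBR sub-flow per supported AF class
              flow2 = 0.1 * (4 - n) + 0.55     # default-marked background traffic
              allocation = 0.2                 # egress allocation per AF class
              return flow1, subflow, flow2, allocation

          for n in (1, 2, 3, 4):
              flow1, subflow, flow2, allocation = af31_rates(n)
              print(f"N={n}: flow1={flow1:.1f}*Cegr (sub-flows of {subflow:.1f}*Cegr, "
                    f"spacing N/Rtest), flow2={flow2:.2f}*Cegr, "
                    f"per-class allocation={allocation:.1f}*Cegr, "
                    f"total offered={flow1 + flow2:.2f}*Cegr")

      Note that the total offered load (flow 1 plus flow 2) is 0.95 * Cegr regardless of N, so the
      egress link is not overloaded and the allocation of 0.2 * Cegr per class comfortably covers
      each 0.1 * Cegr sub-flow, which is why no AF loss is expected.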




Test# AF.3.2
      Test Label: PHBgroup.AF.class.independence
      Last modification: 21.09.2000
      Purpose: To verify that the system forwards packets from any one AF class independently
              from the packets of any other AF class.
      References: PICS for AF PHB 2.1.2 (21)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
          Set the SUT to support the maximum number N of different AF classes as verified by
           test AF.3.1. This test is not applicable if only one AF class is supported by the SUT.
 Table 6. DS codepoints of the maximum number of incoming traffic of different AF classes
                                       DS codepoints of incoming traffic
                                                   001dd0 AF1d


 2001 EURESCOM Participants in Project P1006                                       EDIN 0079-1006
page 30 (12826)                                                 EURESCOM Technical Specification
                                                   010dd0 AF2d
                                                   011dd0 AF3d
                                                   100dd0 AF4d
                     Note that at least one of the ”d” bits has to be set to 1.
 Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr built from sub-
          flows of N-1 different AF classes. Each AF class sub-flow has a rate of Rtest / (N-1). The
          packets of flow 1 are spaced 1 / Rtest (CBR). The packets of each individual AF class sub
          flow are spaced (N-1) / Rtest. The N-1 AF classes of flow 1 should be multiplexed
          simultaneously without violating the prior conditions (CBR). Flow 1 is destined to the
          destination traffic analyser.
       Set up the load generator 2 to create a flow 2 of a single AF class which is not part of flow
          1. Flow 2 has an average rate of 0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is
          destined to a host other than the destination traffic analyser.
       Allocate a rate of Cegr / N on the egress interface per supported AF class.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on throughput per AF class.
      Repeat the test by exchanging the AF class of flow 2 with an AF class of flow 1 which was
      not earlier used to create flow 2. Continue until all of the N supported AF classes are used to
      create flow 2.
      Observable results:
 The throughput of each AF class constituting flow 1 must be the same.
        The throughput of each AF class constituting flow 1 must be close to Rtest / (N-1).
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# AF.3.3.1
      Test Label: PHBgroup.AF.min.resource.allocation.buffer
      Last modification: 21.09.2000
      Purpose: To verify that the system allocates a minimum amount of buffer space for each
              implemented AF class.
      References: PICS for AF PHB 2.1.2 (22)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support the maximum number N of different AF classes as verified by test
          AF.3.1.

       Check with the systems manual whether a minimum amount of buffer space is allocated to
          each supported AF class by default. If not, allocate a buffer space of 25 ms on the egress
          interface per supported AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken as
          number without dimension).
 Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.1 * Cegr for the AF class to
           be tested. Create a bursty on-off arrival with a burst duration Ton of twice the allocated
           (or default) buffer size; a burst arithmetic sketch is given after this test description.
           Flow 1 is destined to the destination traffic analyser.
       Allocate a rate of 0.4 * Cegr on the egress interface to the tested AF class.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class.
      Repeat this test for all AF classes supported by the SUT (see results of test AF.3.1).
      Observable results:
       The throughput of the AF traffic must be close to Rtest / 2.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
      Remark:
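
      The on-off burst in this setup is dimensioned relative to the buffer allocated to the AF class.
      As an illustration only, the sketch below (Python) converts the 25 ms buffer allocation and the
      "twice the buffer" burst duration into concrete figures for a hypothetical 100 Mbit/s egress
      link; the link speed is an assumption and the duty cycle of the on-off source is left open by
      the test text.

          # Illustrative burst dimensioning for test AF.3.3.1 (values are assumptions).
          C_EGR = 100_000_000            # hypothetical egress capacity Cegr in bit/s
          BUFFER_MS = 25.0               # buffer allocation per AF class from the test setup

          r_test = 0.1 * C_EGR           # peak rate of the tested AF class flow
          t_on_s = 2 * BUFFER_MS / 1000  # burst duration Ton: twice the allocated buffer
          burst_bits = r_test * t_on_s   # data offered during one on-period

          print(f"peak rate during burst : {r_test / 1e6:.1f} Mbit/s")
          print(f"burst duration Ton     : {t_on_s * 1000:.0f} ms")
          print(f"data per burst         : {burst_bits / 8 / 1000:.1f} kB")
          print(f"expected throughput    : about {r_test / 2 / 1e6:.1f} Mbit/s (Rtest / 2)")

      With these assumptions each burst lasts 50 ms and carries 62.5 kB at a 10 Mbit/s peak rate;
      the observable result of the test expects the measured throughput to be close to Rtest / 2.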




Test# AF.3.3.2
      Test Label: PHBgroup.AF.min.resource.allocation.bandwidth
      Last modification: 21.09.2000
      Purpose: To verify that the system allocates a minimum amount of bandwidth for each
               implemented AF class.
      References: PICS for AF PHB 2.1.2 (23)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support the maximum number N of different AF classes as verified by test
          AF.3.1.
       Allocate a bandwidth of 0.2 * Cegr to each supported AF class on the egress link. Allocate a
          buffer space of 25 ms on the egress interface per supported AF class (this corresponds to
          0.2 * Cegr bytes if Cegr is taken as number without dimension).
 Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr for the
           maximum number N of the supported AF classes. Each of the N individual AF classes
           building the sub-flows of flow 1 should have a rate of 0.1 * Cegr. The packets of flow 1
           are spaced 1 / Rtest. The packets of each individual supported AF class sub-flow are
           spaced N / Rtest. All AF sub-flows should be multiplexed simultaneously without
           violating the prior conditions (CBR). Flow 1 is destined to the destination traffic
           analyser.


 2001 EURESCOM Participants in Project P1006                                           EDIN 0079-1006
page 32 (12826)                                                 EURESCOM Technical Specification
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than
          the destination traffic analyser.
       Allocate a rate of 0.05 * Cegr on the egress interface per supported AF class of flow 1.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on throughput per AF class.
      Observable results:
 The throughput per AF class must be greater than or equal to 0.05 * Cegr.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# AF.3.4.1
      Test Label: PHBgroup.AF.min.resource.allocation.short_timescale
      Last modification: 21.09.2000
Purpose: To verify that the system achieves the configured rate for each AF class over small
               time scales.
      References: PICS for AF PHB 2.1.2 (26)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support the maximum number N of different AF classes as verified by test
          AF.3.1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr. Flow 1
          consists of one sub flow per AF class, with a rate of 0.1 * Cegr per sub flow. Create a
bursty on-off arrival per AF class subflow with a burst duration Ton of the allocated
          buffer size. All AF subflows should be multiplexed simultaneously. Flow 1 is destined to
          the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than
          the destination traffic analyser.
       Allocate a rate of 0.1 * Cegr on the egress interface to each tested AF class.
      Procedure:
 Verify that the test environment is configured as described above.

       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on loss in terms of packets
         per AF class.
      Observable results:
       Each AF class should have no or a very low loss.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE




Test# AF.3.4.2
      Test Label: PHBgroup.AF.min.resource.allocation.long_timescale
      Last modification: 21.09.2000
Purpose: To verify that the system achieves the configured rate for each AF class over long
               time scales.
      References: PICS for AF PHB 2.1.2 (27)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support the maximum number N of different AF classes as verified by test
          AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr for the
          maximum number N of the supported AF classes. Each of the N individual AF classes
          constituting the sub-flows of flow 1 should have a rate of 0.1 * Cegr. The packets of flow
          1 are spaced 1 / Rtest. The packets of each individual supported AF class sub flow are
          spaced N / Rtest. All AF subflows should be multiplexed simultaneously without
          violating the prior conditions (CBR). Flow 1 is destined to the destination traffic
          analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than
          the destination traffic analyser.
       Allocate a rate of 0.1 * Cegr on the egress interface to each tested AF class.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on loss in terms of packets
         per AF class.
      Observable results:
 2001 EURESCOM Participants in Project P1006                                            EDIN 0079-1006
page 34 (12826)                                                 EURESCOM Technical Specification
       Each AF class should have no or a very low loss.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE




Test# AF.3.5
      Test Label: PHBgroup.AF.resource.excess
      Last modification: 21.09.2000
      Purpose: To verify that the system allows excess resources to be assigned to AF classes.
      References: PICS for AF PHB 2.1.2 (28)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support the maximum number N of different AF classes as verified by test
          AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Allocate spare resources of other AF classes or other PHBs equally to all of the N
          supported AF classes.
       Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr. Flow 1
          consists of one sub flow per AF class, with a rate of 0.1 * Cegr per sub flow. Create a
bursty on-off arrival per AF class subflow with a burst duration Ton of twice the
          allocated buffer size. All AF subflows should be multiplexed simultaneously. Flow 1 is
          destined to the destination traffic analyser.
Note: if an implementation does not allow the AF classes to get access to excess resources in
       terms of bandwidth AND buffer space, the test may have to be simplified accordingly.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to the destination
          traffic analyser.
 Allocate a rate of 0.05 * Cegr on the egress interface to each of the AF classes.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
 Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on losses of the traffic marked as default.
      Observable results:
 The throughput per AF class must be greater than 0.05 * Cegr.
       The traffic marked as default must have no or very low losses.
      Problems:

      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# AF.3.6
      Test Label: PHBgroup.AF.respect.forwarding_prec
      Last modification: 18.09.2000
      Purpose: To verify that the system does not forward an IP packet with smaller probability if
              it contains a drop precedence value p than if it contains a drop precedence value q
              when p<q.
      References: PICS for AF PHB 2.1.2 (29)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the SUT to discard traffic of the selected AF class-drop precedence level Afd3 after
          a congestion level threshold of 30% is reached. Set the corresponding congestion level
          threshold for Afd2 to 60% and the congestion level threshold for Afd1 to 90%.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Set up the
          load generator to compose flow 1 of three individual sub flows which each have a rate of
Rtest / 3. Assign the drop precedence level 1 to the first sub-flow, drop precedence level 2
          to the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. The packets of the
          individual sub-flows are spaced 3 / Rtest, while flow 1’s packets are spaced 1 / Rtest.
       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a rate of 0.65 * Cegr to flow 2.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on throughput for each
         individual AF drop precedence level.
Repeat the test with an egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2 until an egress rate of 0.05 * Cegr is reached for flow 1.
       Repeat the test by changing the AF class used for flow 1 until every supported AF class
       has been tested.


 2001 EURESCOM Participants in Project P1006                                         EDIN 0079-1006
page 36 (12826)                                                 EURESCOM Technical Specification
      Observable results:
 The throughput per AF drop precedence level must be in the order Afd1 ≥ Afd2 ≥ Afd3 for all tests.
      Problems:
      Results: PASSED                      FAILED                   INCONCLUSIVE
      Remark:
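
      The initial conditions of this test configure staggered congestion thresholds (30 % for Afd3,
      60 % for Afd2, 90 % for Afd1). The sketch below (Python) is not the SUT's algorithm; it is a
      minimal model of such a threshold configuration, showing why the measured throughput is
      expected to be ordered Afd1 ≥ Afd2 ≥ Afd3 as the congestion level rises.

          # Minimal model of the drop-precedence thresholds configured in test AF.3.6.
          # This illustrates the configuration, not the SUT's actual dropping algorithm.
          THRESHOLDS = {3: 0.30, 2: 0.60, 1: 0.90}   # drop precedence -> congestion threshold

          def dropped(drop_precedence, congestion_level):
              """Return True if a packet of the given precedence is discarded."""
              return congestion_level >= THRESHOLDS[drop_precedence]

          for congestion in (0.2, 0.4, 0.7, 0.95):
              surviving = [p for p in (1, 2, 3) if not dropped(p, congestion)]
              print(f"congestion {congestion:.0%}: forwarding drop precedence {surviving}")
          # At 40 % congestion only Afd3 is discarded, at 70 % Afd2 and Afd3 are discarded,
          # and at 95 % all three levels are - hence throughput Afd1 >= Afd2 >= Afd3.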




Test# AF.3.7
      Test Label: PHBgroup.AF.threedropprec
      Last modification: 21.09.2000
      Purpose: To verify that the system accepts all three drop precedence DSCP’s within any AF
              class.
      References: PICS for AF PHB 2.1.2 (30)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr for the
          maximum number N of the supported AF classes. Each of the N individual AF classes
          constituting the sub-flows of flow 1 should have a rate of 0.1 * Cegr. The packets of flow
1 are spaced 1 / Rtest. The packets of each individual supported AF class sub-flow are
          spaced N / Rtest. All AF subflows should be multiplexed simultaneously without
          violating the prior conditions (CBR). Mark all individual sub flows with drop
          precedence level 1 (Table 7). Flow 1 is destined to the destination traffic analyser.
                               Table 7. Drop precedence level coding
                               Drop precedence level        DSCP coding
                                          1                    xxx010
                                          2                    xxx100
                                          3                    xxx110
                      Note that xxx denotes the bits selecting the AF class (see Table 5).
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than
          the destination traffic analyser.
       Allocate a rate of 0.1 * Cegr on the egress interface per AF class of flow 1.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.

       Measure the traffic received by the destination traffic analyser on throughput and losses per
         AF class.
      Repeat the test with AF drop precedence levels 2 and 3.
      Observable results:
       The throughput per tested AF class must be close to 0.1 * Cegr.
       The losses per tested AF class must be zero.
       Losses and throughput must be the same for all drop precedence levels within each AF
          class.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
      Remark:
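
      Tables 5 and 7 together define how an AF DSCP is built from the class bits and the
      drop-precedence bits. The following sketch (Python, with an illustrative function name)
      reproduces that coding and regenerates the twelve codepoints listed in Table 4, which can be
      used to cross-check the marking applied by the load generator.

          # Reconstruct the AF DSCPs from class and drop precedence (Tables 4, 5 and 7).
          def af_dscp(af_class, drop_precedence):
              """Return the 6-bit DSCP for AF<class><precedence>, e.g. AF11 -> 001010."""
              assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
              return (af_class << 3) | (drop_precedence << 1)

          for c in range(1, 5):
              row = " ".join(f"AF{c}{p}={af_dscp(c, p):06b}" for p in range(1, 4))
              print(row)
          # Prints AF11=001010 ... AF43=100110, matching the codepoints of Table 4.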




Test# AF.3.8
      Test Label: PHBgroup.AF.twodropprec
      Last modification: 21.09.2000
Purpose: To verify that the system yields at least two drop precedence levels per AF class.
      References: PICS for AF PHB 2.1.2 (31)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the SUT to discard traffic of the selected AF class-drop precedence level Afd3 after
          a congestion level threshold of 30% is reached. If possible, set the corresponding
          congestion level threshold for Afd2 to 60% (if this is impossible, set it to the same value
          as for Afd3). Set the congestion level threshold for Afd1 to 90%.
 Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Set up the
           load generator to compose flow 1 of three individual sub-flows which each have a rate of
           Rtest / 3. Assign the drop precedence level 1 to the first sub-flow, drop precedence level 2
          to the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. The packets of the
          individual sub-flows are spaced 3 / Rtest, while flow 1’s packets are spaced 1 / Rtest.
       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a rate of 0.65 * Cegr to flow 2.
      Procedure:
 Verify that the test environment is configured as described above.
 2001 EURESCOM Participants in Project P1006                                          EDIN 0079-1006
page 38 (12826)                                                  EURESCOM Technical Specification
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for each individual
         AF drop precedence level.
Repeat the test with an egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2 until an egress rate of 0.05 * Cegr is reached for flow 1.
       Repeat the test by changing the AF class used for flow 1 until every supported AF class
       has been tested.
      Observable results:
The losses per AF drop precedence level must be in the order Afd1 < Afd2 ≤ Afd3 for all tests.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
      Remark:




Test# AF.3.9
      Test Label: PHBgroup.AF.dropprec_number
      Last modification: 21.09.2000
      Purpose: To verify that the system supports three different drop precedence levels per AF
              class.
      References: PICS for AF PHB 2.1.2 (32)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the SUT to discard traffic of the selected AF class-drop precedence level Afd3 after
          a congestion level threshold of 30% is reached. Set the corresponding congestion level
          threshold for Afd2 to 60%. Set the congestion level threshold for Afd1 to 90%.
 Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Set up the
           load generator to compose flow 1 of three individual sub-flows which each have a rate of
           Rtest / 3. Assign the drop precedence level 1 to the first sub-flow, drop precedence level 2
          to the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. The packets of the
          individual sub-flows are spaced 3 / Rtest, while flow 1’s packets are spaced 1 / Rtest.
       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.


       Allocate a rate of 0.65 * Cegr to flow 2.
      Procedure:
 Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for each individual
         AF drop precedence level.
       Repeat the test with the allocated egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2, until an egress rate of 0.05 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable results:
          The losses per AF drop precedence level must be in the order Afd1 < Afd2 < Afd3 for
           all tests.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
      Remark:
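
       The discard behaviour configured in the Initial Conditions of test AF.3.9 (Afd3 discarded above a 30%
       congestion level, Afd2 above 60%, Afd1 above 90%) amounts to a simple multi-threshold dropper. A minimal
       Python sketch under these assumptions; the function and parameter names are illustrative only.

           # Illustrative multi-threshold dropper: the higher the drop precedence,
           # the lower the congestion level at which packets are discarded.
           DROP_THRESHOLDS = {3: 0.30, 2: 0.60, 1: 0.90}   # drop precedence -> congestion level

           def accept_packet(drop_precedence, congestion_level):
               """Return True if a packet of the given drop precedence (1..3) may be
               enqueued at the given congestion level (0.0 .. 1.0)."""
               return congestion_level < DROP_THRESHOLDS[drop_precedence]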




Test# AF.3.10
      Test Label: PHBgroup.AF.dropprec.sequence
      Last modification: 21.09.2000
       Purpose: To verify that when the system provides only two drop precedence levels per AF class,
               the DSCP Afx1 yields a lower loss probability than the DSCPs Afx2 and Afx3.
      References: PICS for AF PHB 2.1.2 (33)
      Requirement: Conditional (If test AF.3.8 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the SUT to discard traffic of the selected AF class-drop precedence levels Afd2 and
          Afd3 after a congestion level threshold of 30% is reached. Set the congestion level
          threshold for Afd1 to 90%.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Set up the
           load generator to compose flow 1 of three individual sub-flows which each have a rate of
           Rtest / 3. Assign the drop precedence level 1 to the first sub-flow, drop precedence level 2
          to the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. The packets of the
          individual sub-flows are spaced 3 / Rtest, while flow 1’s packets are spaced 1 / Rtest.


       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a rate of 0.65 * Cegr to flow 2.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for each individual
         AF drop precedence level.
       Repeat the test with the allocated egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2, until an egress rate of 0.05 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable results:
          The losses per AF drop precedence level must be in the order Afd1 < Afd2 and Afd1 <
           Afd3 for all tests.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
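
       The pass criterion of test AF.3.10 can be checked mechanically from the per-precedence loss figures
       reported by the traffic analyser. A minimal sketch, assuming the losses are available as loss ratios;
       the function name is illustrative.

           # Illustrative evaluation of the observable result of AF.3.10:
           # Afd1 must suffer fewer losses than both Afd2 and Afd3 in every run.
           def losses_ordered(loss_afd1, loss_afd2, loss_afd3):
               return loss_afd1 < loss_afd2 and loss_afd1 < loss_afd3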




Test# AF.3.11
      Test Label: PHBgroup.AF.order
      Last modification: 21.09.2000
      Purpose: To verify that the system respects the order of AF packets belonging to the same
              micro-flow regardless of their drop precedence.
      References: PICS for AF PHB 2.1.2 (34)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below.
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the SUT to discard traffic of the selected AF class-drop precedence level Afd3 after
          a congestion level threshold of 30% is reached. If possible, set the corresponding
          congestion level threshold for Afd2 to 60% (if this is impossible, set it to the same value
          as for Afd3). Set the congestion level threshold for Afd1 to 90%.



       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Set up the
           load generator to compose flow 1 of three individual sub-flows which each have a rate of
           Rtest / 3. Assign the drop precedence level 1 to the first sub-flow, drop precedence level 2
          to the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. The packets of the
          individual sub-flows are spaced 3 / Rtest, while flow 1’s packets are spaced 1 / Rtest. Set
          up the load generator to insert consecutive sequence numbers into the packets of flow 1.
          Make sure that the sequence numbers are assigned independent of the AF precedence
          level of a packet.
       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a rate of 0.65 * Cegr to flow 2.
       Note that this test may require the insertion of information into the packets' payload. If the
       destination traffic analyser cannot evaluate the sequence numbers in the payload, a sniffer may
       be inserted between the SUT and the destination traffic analyser.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on the sequence of all
         arriving AF packets by evaluating the sequence number.
       Repeat the test with the allocated egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2, until an egress rate of 0.05 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable results:
            If packet Pi is received at time Ti and packet Pi+1 at time Ti + Δt, Δt > 0, then the
             sequence number of Pi must be smaller than that of Pi+1 for all i.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
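
       Evaluating the sequence numbers at the destination traffic analyser (or at the sniffer) reduces to
       checking that the received sequence is strictly increasing, independent of the drop precedence level.
       A minimal sketch, assuming the captured sequence numbers are available as a list in arrival order.

           # Illustrative in-order check for AF.3.11: every received AF packet must carry
           # a larger sequence number than the previously received packet.
           def received_in_order(sequence_numbers):
               return all(later > earlier
                          for earlier, later in zip(sequence_numbers, sequence_numbers[1:]))

           # Example: received_in_order([1, 2, 4, 7]) -> True  (losses are allowed)
           #          received_in_order([1, 3, 2])    -> False (reordering detected)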




3.1.1.4 Traffic Conditioning

Test# AF.4.1
      Test Label: Condition.AF.shape
      Last modification: 16.10.2000
      Purpose: To verify that the system implements traffic shaping at the edge of a domain to
              control the amount of AF traffic entering or exiting.
      References: PICS for AF PHB 2.1.3 (35)
      Requirement: Optional


      Test Configuration: Figure 2.
      Used traffic types:
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Switch on traffic shaping on the SUT. Allocate a buffer space of 25 ms on the egress
          interface for the configured AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken
          as number without dimension). Allocate a rate of 0.2 * Cegr on the egress interface to the
          AF class.
       Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr for the AF class to
          be tested. Create a bursty on-off arrival with a burst duration Ton = 12.5 ms, i.e. half of the
           allocated buffer. (This corresponds to 0.1 * Cegr bytes if Cegr is taken as number without dimension). Flow
          1 is destined to the destination traffic analyser.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class.
       Repeat this test for all AF classes supported by the SUT (see the results of test AF.3.1).
      Observable results:
       DS codepoints of egress traffic should be correct.
       The throughput of the AF traffic must be close to Rtest / 2.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
      Remark:
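
       Traffic shaping as exercised in test AF.4.1 is commonly realised with a token bucket that releases queued
       AF packets no faster than the allocated rate (here 0.2 * Cegr), which is why an offered load of
       Rtest = 0.4 * Cegr should leave the SUT at roughly Rtest / 2. A minimal, hypothetical sketch of such a
       shaper; class and parameter names are illustrative and not taken from any particular implementation.

           # Illustrative token-bucket shaper: packets are released only when enough
           # tokens (bytes) have accumulated at the configured rate.
           class Shaper:
               def __init__(self, rate_bytes_per_s, bucket_bytes):
                   self.rate = rate_bytes_per_s
                   self.bucket = bucket_bytes          # also limits the burst size
                   self.tokens = bucket_bytes
                   self.last = 0.0                     # time of the last update (s)

               def release_time(self, now, packet_bytes):
                   """Earliest time at which a packet of packet_bytes may be sent."""
                   self.tokens = min(self.bucket, self.tokens + (now - self.last) * self.rate)
                   self.last = now
                   if self.tokens >= packet_bytes:
                       self.tokens -= packet_bytes
                       return now                      # conforming: send immediately
                   wait = (packet_bytes - self.tokens) / self.rate
                   self.tokens = 0.0
                   self.last = now + wait
                   return now + wait                   # delayed by the shaper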




Test# AF.4.2
      Test Label: Condition.AF.discard
      Last modification: 16.10.2000
      Purpose: To verify that the system implements packet discarding at the edge of a domain to
              control the amount of AF traffic entering or exiting.
      References: PICS for AF PHB 2.1.3 (36)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).



       Set up the SUT to discard traffic of the selected AF class-drop precedence level Afd3 after
          a congestion level threshold of 30% is reached. Set the corresponding congestion level
          threshold for Afd2 to 60%. Set the congestion level threshold for Afd1 to 90%.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Set up the
           load generator to compose flow 1 of three individual sub-flows which each have a rate of
           Rtest / 3. Assign the drop precedence level 1 to the first sub-flow, drop precedence level 2
          to the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. The packets of the
          individual sub-flows are spaced 3 / Rtest, while flow 1’s packets are spaced 1 / Rtest.
       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a rate of 0.65 * Cegr to flow 2.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for each individual
         AF drop precedence level.
       Repeat the test with the allocated egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2, until an egress rate of 0.05 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable results:
          There must be losses per AF class. (The losses per AF drop precedence level must be in
           the order Afd1 < Afd2 < Afd3 for all tests.)
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
      Remark:




Test# AF.4.3
      Test Label: Condition.AF.precchange
      Last modification: 16.10.2000
      Purpose: To verify that the system changes drop precedence of packets at the edge of a
                domain to control the amount of AF traffic entering or exiting.
      References: PICS for AF PHB 2.1.3 (37)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:

       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to change the selected AF class drop precedence level from Afd1 into Afd2
           after the load of this AF class exceeds 0.15 * Cegr. Change the selected AF class-drop
           precedence level Afd2 into Afd3 after the load of this AF class exceeds 0.25 * Cegr (this
           re-marking is sketched after this test).
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Assign the
          drop precedence level 1 to the test flow 1. Flow 1’s packets are spaced 1 / Rtest.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to the destination traffic
          analyser.
       Allocate a rate of 0.65 * Cegr on the egress interface to flow 2.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on changes for each
         individual AF drop precedence level.
       Repeat the test with the allocated egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2, until an egress rate of 0.05 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable results:
       DS codepoints of egress traffic should be correct.
       Excess traffic should be re-marked (Afd1, Afd2) into a higher drop precedence value.
          Excess Afd3 traffic should be discarded.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
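
       The re-marking configured in the Initial Conditions of test AF.4.3 (Afd1 re-marked to Afd2 once the
       measured class load exceeds 0.15 * Cegr, Afd2 to Afd3 above 0.25 * Cegr; excess Afd3 traffic is left to
       the dropper) can be pictured as a meter-driven marker. A minimal sketch, with the load normalised to
       Cegr; the names and the exact metering method are assumptions, not taken from the SUT.

           # Illustrative meter-driven re-marker for AF.4.3; measured_load is the
           # measured AF-class load expressed as a fraction of Cegr.
           def remark_drop_precedence(drop_precedence, measured_load):
               if drop_precedence == 1 and measured_load > 0.15:
                   return 2                      # excess Afd1 traffic -> Afd2
               if drop_precedence == 2 and measured_load > 0.25:
                   return 3                      # excess Afd2 traffic -> Afd3
               return drop_precedence            # traffic within profile keeps its marking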




Test# AF.4.4
      Test Label: Condition.AF.classchange
      Last modification: 16.10.2000
      Purpose: To verify that the system changes AF class of packets at the edge of a domain to
              control the amount of AF traffic entering or exiting.
      References: PICS for AF PHB 2.1.3 (38)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup

      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to change the selected AF class from Afx1 into Afy1 after the load of this
           AF class exceeds 0.25 * Cegr (this class change is sketched after this test).
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Assign the
          drop precedence level 1 to the test flow 1. Flow 1’s packets are spaced 1 / Rtest.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to the destination traffic
          analyser.
       Allocate a rate of 0.65 * Cegr on the egress interface to flow 2.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on changes for the AF
         classes.
       Repeat the test with the allocated egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2, until an egress rate of 0.05 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable Results:
       DS codepoints of egress traffic should be correct.
       Excess traffic should be re-marked from Afx1 into Afy1.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
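
       The class change of test AF.4.4 (Afx1 re-marked to Afy1 once the class load exceeds 0.25 * Cegr) follows
       the same pattern as the precedence re-marking above. A minimal sketch; the DSCP values are hypothetical
       placeholders supplied by the tester.

           # Illustrative class re-marker for AF.4.4: excess traffic of the tested class
           # is moved into another supported AF class.
           def remark_class(dscp, measured_load, afx1_dscp, afy1_dscp, threshold=0.25):
               if dscp == afx1_dscp and measured_load > threshold:
                   return afy1_dscp              # excess Afx1 traffic becomes Afy1
               return dscp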




Test# AF.4.5
      Test Label: Condition.AF.reorder
      Last modification: 16.10.2000
      Purpose: To verify that the conditioning actions of the system don’t cause reordering of
                packets of the same microflow.
      References: PICS for AF PHB 2.1.3 (39)
      Requirement:
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup


      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to change the selected AF class drop precedence level from Afd1 into Afd2
           after the load of this AF class exceeds 0.15 * Cegr. Change the selected AF class-drop
           precedence level Afd2 into Afd3 after the load of this AF class exceeds 0.25 * Cegr.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Assign the
          drop precedence level 1 to the test flow 1. Flow 1’s packets are spaced 1 / R test. Set up
          the load generator to insert consecutive sequence numbers into the packets of flow 1.
          Make sure that the sequence numbers are assigned independent of the AF precedence
          level of a packet.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to the destination traffic
          analyser.
       Allocate a rate of 0.65 * Cegr on the egress interface to flow 2.
       Note that this test may require the insertion of information into the packets' payload. If the
       destination traffic analyser cannot evaluate the sequence numbers in the payload, a sniffer may
       be inserted between the SUT and the destination traffic analyser.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on changes for each
         individual AF drop precedence level.
       Measure the traffic received by the destination traffic analyser on the sequence of all
         arriving AF packets by evaluating the sequence number.
       Repeat the test with the allocated egress rate decreased by 0.1 * Cegr for flow 1 and increased by the same
       amount for flow 2, until an egress rate of 0.05 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable Results:
       DS codepoints of egress traffic should be correct.
       Excess traffic should be re-marked (Afd1, Afd2) into a higher drop precedence value.
          Excess Afd3 traffic should be discarded.
            If packet Pi is received at time Ti and packet Pi+1 at time Ti + Δt, Δt > 0, then the
             sequence number of Pi must be smaller than that of Pi+1 for all i.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




3.1.1.5 Marking and re-marking

Test# AF.5.1
      Test Label: Mark.AF.srTCM.implement
      Last modification: 16.10.2000
       Purpose: To verify that the system supports a "Single Rate Three Colour Marker" (srTCM)
               method for marking.
      References: PICS for AF PHB 2.2 (40)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
       Remark for test set-up: srTCM can be implemented in only two ways: either in Colour-Blind
       mode or in Colour-Aware mode. Considering this, there have to be two Test Setups to
       examine whether srTCM is implemented correctly. A successful implementation of one of
       the two modes means that srTCM is supported on that system.
      Test Setup (1. for Blind-mode)
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the CBS value to 25 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken as
          number without dimension). At the Conditioner the conform-action should set the colour
          to Green (Afd1), excess-action should be further comparison with EBS value.
       Set the EBS value to 35 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.28 * Cegr bytes if Cegr is taken as
          number without dimension). Conform-action should set the colour to Yellow (Afd2),
          excess-action should be setting the colour to Red (Afd3).
       Allocate a rate (CIR) of 0.5 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension). (This
           guarantees that there will be no losses, only remarking, when Ton < 50 ms.)
      Test Setup (2. for Colour-Aware-mode)
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 marked with a DSCP Afd1 and a rate of
          Rtest = 0.4 * Cegr. Create a bursty on-off arrival for the AF class subflow with a burst
          duration Ton = 20ms.
       The system has to check the parameters in this order: CIR, CBS, EBS.
       Set the CBS value to 25 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken as
          number without dimension). At the Conditioner the conform-action should transmit
          invariably, excess-action should be further comparison with EBS value.
       Set the EBS value to 35 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.28 * Cegr bytes if Cegr is taken as
          number without dimension).
              Conform-actions:
                      when the incoming packet is Green or Yellow, it should be set to Yellow
                       (Afd2)
                      when the incoming packet is Red, it should be set to Red (Afd3).
              Excess-action: every incoming colour should be set to Red (Afd3).
       Allocate a rate (CIR) of 0.5 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension). (This
           guarantees that packets will only be remarked, not dropped, when Ton < 50 ms.)
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load burst duration (Ton) increased by 10 ms for flow 1 until the value of
       40 ms is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable Results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CBS)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic burst duration is between CBS and EBS)
        After the third round Green (Afd1), Yellow (Afd2) and Red (Afd3) traffic can be
           observed at the analyser. (The traffic burst duration is over EBS)
        There should be no losses on flow 1.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:
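
       The srTCM referenced in tests AF.5.1 to AF.5.6 is the Single Rate Three Colour Marker of RFC 2697, which
       meters a flow against CIR, CBS and EBS and marks packets Green, Yellow or Red. The following is a
       minimal Python sketch of both modes under the usual token-bucket interpretation; it is an illustration
       for the tester, not a normative description of the SUT.

           # Illustrative srTCM (cf. RFC 2697). CIR in bytes/s, CBS and EBS in bytes.
           class SrTCM:
               def __init__(self, cir, cbs, ebs, colour_aware=False):
                   self.cir, self.cbs, self.ebs = cir, cbs, ebs
                   self.colour_aware = colour_aware
                   self.tc, self.te = cbs, ebs            # both token buckets start full
                   self.last = 0.0                        # time of the last packet (s)

               def mark(self, now, size, in_colour='green'):
                   # Refill the committed bucket first; the overflow feeds the excess bucket.
                   filled = self.tc + (now - self.last) * self.cir
                   self.last = now
                   self.tc = min(self.cbs, filled)
                   self.te = min(self.ebs, self.te + max(0.0, filled - self.cbs))
                   if (not self.colour_aware or in_colour == 'green') and self.tc >= size:
                       self.tc -= size
                       return 'green'                     # within CIR/CBS -> Afd1
                   if (not self.colour_aware or in_colour in ('green', 'yellow')) and self.te >= size:
                       self.te -= size
                       return 'yellow'                    # within EBS -> Afd2
                   return 'red'                           # excess -> Afd3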




Test# AF.5.2
      Test Label: Mark.AF.srTCM.Blindmode
      Last modification: 16.10.2000
       Purpose: To verify that when the system supports a "Single Rate Three Colour Marker"
                 (srTCM) method it implements the Colour-Blind-Mode.
      References: PICS for AF PHB 2.2 (41)
      Requirement: Conditional (If test AF.5.1 is Passed then Optional else N/A)
      Test Configuration: Figure 2.
      Used traffic types:
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the CBS value to 25 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken as
          number without dimension). At the Conditioner the conform-action should set the colour
          to Green (Afd1), excess-action should be further comparison with EBS value.
       Set the EBS value to 35 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.28 * Cegr bytes if Cegr is taken as
          number without dimension). Conform-action should set the colour to Yellow (Afd2),
          excess-action should be setting the colour to Red (Afd3).
       Allocate a rate (CIR) of 0.5 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension). (This
           guarantees that there will be no losses, only remarking, when Ton < 50 ms.)
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load burst duration (Ton) increased by 10 ms for flow 1 until the value of
       40 ms is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable Results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CBS)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic burst duration is between CBS and EBS)
        After the third round Green (Afd1), Yellow (Afd2) and Red (Afd3) traffic can be
           observed at the analyser. (The traffic burst duration is over EBS)
        There should be no losses on flow 1.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:
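
       For evaluating the burst sweep of the Colour-Blind tests (Ton = 20, 30 and 40 ms against CBS = 25 ms and
       EBS = 35 ms), the Observable results above can be condensed into a small helper. This is a hedged
       reading of the expected analyser output, not part of the original procedure.

           # Illustrative mapping from burst duration to the colours expected at the
           # analyser in Colour-Blind mode, using the CBS/EBS values configured above.
           def expected_colours(ton_ms, cbs_ms=25, ebs_ms=35):
               if ton_ms <= cbs_ms:
                   return {'green'}                       # round 1: Ton = 20 ms
               if ton_ms <= ebs_ms:
                   return {'green', 'yellow'}             # round 2: Ton = 30 ms
               return {'green', 'yellow', 'red'}          # round 3: Ton = 40 ms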




Test# AF.5.3
      Test Label: Mark.AF.srTCM.Awaremode
      Last modification: 16.10.2000
       Purpose: To verify that when the system supports a "Single Rate Three Colour Marker"
               (srTCM) method it implements the Colour-Aware-Mode.
      References: PICS for AF PHB 2.2 (42)


      Requirement: Conditional (If test AF.5.1 is Passed then Optional else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 marked with a DSCP Afd1 and a rate of
          Rtest = 0.4 * Cegr. Create a bursty on-off arrival for the AF class subflow with a burst
          duration Ton = 20ms.
       Set the CBS value to 25 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken as
          number without dimension). At the Conditioner the conform-action should be invariable
          packet forwarding, excess-action should be further comparison with EBS value.
       Set the EBS value to 35 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.28 * Cegr bytes if Cegr is taken as
          number without dimension). Conform-action should set the colour to Yellow (Afd2),
          excess-action should be setting the colour to Red (Afd3).
       Allocate a rate (CIR) of 0.5 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension). (This
           guarantees that packets will only be remarked, not dropped, when Ton < 50 ms.)
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load burst duration (Ton) increased by 10 ms for flow 1 until the value of
       40 ms is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable Results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CBS)
       After the second round only Yellow (Afd2) traffic can be observed at the analyser. (The
          traffic burst duration is between CBS and EBS)
        After the third round only Red (Afd3) traffic can be observed at the analyser. (The traffic
           burst duration is over EBS)
        There should be no losses on flow 1.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:




Test# AF.5.4
      Test Label: Mark.AF.srTCM.CIR
      Last modification: 16.10.2000
       Purpose: To verify that when the system supports a "Single Rate Three Colour Marker"
               (srTCM) method the CIR is measured in bytes of IP packets per second (without link
               and physical layer information).
      References: PICS for AF PHB 2.2 (43)
      Requirement: Conditional (If test AF.5.1 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types:
      Test Setup (in Colour-Blind-mode)
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the CBS value to 25 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken as
          number without dimension). At the Conditioner the conform-action should set the colour
          to Green (Afd1), excess-action should be further comparison with EBS value.
       Set the EBS value to 35 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.28 * Cegr bytes if Cegr is taken as
          number without dimension). Conform-action should set the colour to Yellow (Afd2),
          excess-action should be setting the colour to Red (Afd3).
       Allocate a rate (CIR) of 0.5 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension). (This
           guarantees that there will be no losses, only remarking, when Ton < 50 ms.)
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load burst duration (Ton) increased by 10 ms for flow 1 until the value of
       40 ms is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable Results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CBS)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic burst duration is between CBS and EBS)
        After the third round Green (Afd1), Yellow (Afd2) and Red (Afd3) traffic can be
           observed at the analyser. (The traffic burst duration is over EBS)
        There should be no losses on flow 1.

       CIR should be measured in bytes of IP packets per second (without link and physical layer
          information)
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# AF.5.5
      Test Label: Mark.AF.srTCM.CBS
      Last modification: 16.10.2000
       Purpose: To verify that when the system supports a "Single Rate Three Colour Marker"
                 (srTCM) method the Committed Burst Size (CBS) is measured in bytes.
      References: PICS for AF PHB 2.2 (44)
      Requirement: Conditional (If test AF.5.1 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup (in Colour-Blind-mode)
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the CBS value to 25 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken as
          number without dimension). At the Conditioner the conform-action should set the colour
          to Green (Afd1), excess-action should be further comparison with EBS value.
       Set the EBS value to 35 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.28 * Cegr bytes if Cegr is taken as
          number without dimension). Conform-action should set the colour to Yellow (Afd2),
          excess-action should be setting the colour to Red (Afd3).
       Allocate a rate (CIR) of 0.5 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension). (This
           guarantees that there will be no losses, only remarking, when Ton < 50 ms.)
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load burst duration (Ton) increased by 10 ms for flow 1 until the value of
       40 ms is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable Results:

       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CBS)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic burst duration is between CBS and EBS)
        After the third round Green (Afd1), Yellow (Afd2) and Red (Afd3) traffic can be
           observed at the analyser. (The traffic burst duration is over EBS)
        There should be no losses on flow 1.
       CBS should be measured in bytes.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:




Test# AF.5.6
      Test Label: Mark.AF.srTCM.EBS
      Last modification: 16.10.2000
       Purpose: To verify that when the system supports a "Single Rate Three Colour Marker"
       (srTCM) method the Excess Burst Size (EBS) is measured in bytes.
      References: PICS for AF PHB 2.2 (45)
      Requirement: Conditional (If test AF.5.1 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup (in Colour-Blind-mode)
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the CBS value to 25 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.2 * Cegr bytes if Cegr is taken as
          number without dimension). At the Conditioner the conform-action should set the colour
          to Green (Afd1), excess-action should be further comparison with EBS value.
       Set the EBS value to 35 ms at the implemented Conditioner functionality on the ingress
          interface per supported AF class (this corresponds to 0.28 * Cegr bytes if Cegr is taken as
          number without dimension). Conform-action should set the colour to Yellow (Afd2),
          excess-action should be setting the colour to Red (Afd3).
       Allocate a rate (CIR) of 0.5 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension). (This
           guarantees that there will be no losses, only remarking, when Ton < 50 ms.)
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.


       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load burst duration (Ton) increased by 10 ms for flow 1 until the value of
       40 ms is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable Results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CBS)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic burst duration is between CBS and EBS)
        After the third round Green (Afd1), Yellow (Afd2) and Red (Afd3) traffic can be
           observed at the analyser. (The traffic burst duration is over EBS)
        There should be no losses on flow 1.
       EBS should be measured in bytes.
      Problems:
      Results: PASSED                       FAILED                 INCONCLUSIVE
      Remark:




Test# AF.5.7
      Test Label: Mark.AF.trTCM.implement
      Last modification: 16.10.2000
       Purpose: To verify that the system supports a "Two Rate Three Colour Marker" (trTCM)
                 method for marking.
      References: PICS for AF PHB 2.2 (46)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
       Remark for test set-up: trTCM can be implemented in only two ways: either in Colour-Blind
       mode or in Colour-Aware mode. Considering this, there have to be two Test Setups to
       examine whether trTCM is implemented correctly. A successful implementation of one of
       the two modes means that trTCM is supported on that system.
      Test Setup (1. for Blind-mode)
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the PIR value to 0.6 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. At the Conditioner the conform-action should be
          further comparison with the CIR value, excess-action should set the colour to Red
          (Afd3).



       Set the CIR value to 0.35 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. Conform-action should set the colour to Green (Afd1),
          excess-action should set the colour to Yellow (Afd2).
       Set the CBS value to 35 ms (this corresponds to 0.28 * Cegr bytes if Cegr is taken as number
          without dimension)
       Set the PBS value to 45 ms (this corresponds to 0.36 * Cegr bytes if Cegr is taken as number
          without dimension)
       Allocate a rate of 0.8 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
       Test Setup (2. for Colour-Aware-mode)
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 marked with a DSCP Afd1 with a rate of
          Rtest = 0.2 * Cegr. Create a bursty on-off arrival for the AF class subflow with a burst
          duration Ton = 20ms.
       The system has to check the parameters in this order: PIR, CIR.
       Set the PIR value to 0.35 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. At the Conditioner the conform-action should be
          further comparison with the CIR value, excess-action should set the colour to Red
          (Afd3).
       Set the CIR value to 0.25 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class.
             Conform-actions:
                      when the incoming packet is Green or Yellow, it should be set to Yellow
                       (Afd2)
                      when the incoming packet is Red, it should be set to Red (Afd3).
             Excess-action: every incoming colour should be set to Red (Afd3).
       Set the CBS value to 35 ms (this corresponds to 0.28 * Cegr bytes if Cegr is taken as number
          without dimension)
       Set the PBS value to 45 ms (this corresponds to 0.36 * Cegr bytes if Cegr is taken as number
          without dimension)
       Allocate a rate of 0.8 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load rate (Rtest) increased by 0.1 * Cegr for flow 1 until the value of
       0.4 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.

      Observable Results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CIR and PIR)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic is between PIR and CIR)
       After the third round Green (Afd1) and Yellow (Afd2) and Red (Afd3) traffic can be
          observed at the analyser. (The traffic rate is over PIR)
        There should be no losses on flow 1.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:
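
       The trTCM referenced in test AF.5.7 and the following tests is the Two Rate Three Colour Marker of
       RFC 2698, which meters a flow against PIR/PBS and CIR/CBS. A minimal Python sketch of the Colour-Blind
       mode under the usual token-bucket interpretation; again an illustration only, with names chosen for
       readability rather than taken from any implementation.

           # Illustrative trTCM (cf. RFC 2698), Colour-Blind mode.
           # PIR and CIR in bytes/s, PBS and CBS in bytes.
           class TrTCM:
               def __init__(self, pir, pbs, cir, cbs):
                   self.pir, self.pbs = pir, pbs
                   self.cir, self.cbs = cir, cbs
                   self.tp, self.tc = pbs, cbs            # both token buckets start full
                   self.last = 0.0

               def mark(self, now, size):
                   elapsed = now - self.last
                   self.last = now
                   self.tp = min(self.pbs, self.tp + elapsed * self.pir)
                   self.tc = min(self.cbs, self.tc + elapsed * self.cir)
                   if self.tp < size:
                       return 'red'                       # exceeds PIR/PBS -> Afd3
                   if self.tc < size:
                       self.tp -= size
                       return 'yellow'                    # within PIR but above CIR -> Afd2
                   self.tp -= size
                   self.tc -= size
                   return 'green'                         # within CIR/CBS -> Afd1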




Test# AF.5.8
      Test Label: Mark.AF.trTCM.Blindmode
      Last modification: 16.10.2000
       Purpose: To verify that when the system supports a "Two Rate Three Colour Marker"
                (trTCM) method it implements the Colour-Blind-Mode.
      References: PICS for AF PHB 2.2 (47)
      Requirement: Conditional (If test AF.5.7 is Passed then Optional else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the PIR value to 0.6 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. At the Conditioner the conform-action should be
          further comparison with the CIR value, excess-action should set the colour to Red
          (Afd3).
       Set the CIR value to 0.35 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. Conform-action should set the colour to Green (Afd1),
          excess-action should set the colour to Yellow (Afd2).
       Set the CBS value to 35 ms (this corresponds to 0.28 * Cegr bytes if Cegr is taken as number
          without dimension)
       Set the PBS value to 45 ms (this corresponds to 0.36 * Cegr bytes if Cegr is taken as number
          without dimension)
       Allocate a rate of 0.8 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load rate (Rtest) increased by 0.1 * Cegr for flow 1 until the value of
       0.4 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CIR and PIR)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic is between PIR and CIR)
       After the third round Green (Afd1) and Yellow (Afd2) and Red (Afd3) traffic can be
          observed at the analyser. (The traffic rate is over PIR)
        There should be no losses on flow 1.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:




Test# AF.5.9
      Test Label: Mark.AF.trTCM.Awaremode
      Last modification: 16.10.2000
       Purpose: To verify that when the system supports a "Two Rate Three Colour Marker"
                (trTCM) method it implements the Colour-Aware-Mode.
      References: PICS for AF PHB 2.2 (48)
      Requirement: Conditional (If test AF.5.7 is Passed then Optional else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 marked with a DSCP Afd1 with a rate of
          Rtest = 0.2 * Cegr. Create a bursty on-off arrival for the AF class subflow with a burst
          duration Ton = 20ms.
       The system has to check the parameters in this order: PIR, CIR.
       Set the PIR value to 0.35 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. At the Conditioner the conform-action should be
          further comparison with the CIR value, excess-action should set the colour to Red
          (Afd3).
       Set the CIR value to 0.25 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class.
              Conform-actions:
                      when the incoming packet is Green or Yellow, it should be set to Yellow
                       (Afd2)
                      when the incoming packet is Red, it should be set to Red (Afd3).
               Excess-action: every incoming colour should be set to Red (Afd3).
       Set the CBS value to 35 ms (this corresponds to 0.28 * Cegr bytes if Cegr is taken as number
          without dimension)
       Set the PBS value to 45 ms (this corresponds to 0.36 * Cegr bytes if Cegr is taken as number
          without dimension)
       Allocate a rate of 0.8 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput per AF class
          and on the losses of the traffic.
       Repeat the test with the load rate (Rtest) increased by 0.1 * Cegr for flow 1 until the value of
       0.4 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class resembling flow 1 until every supported AF class
      has been tested.
      Observable results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CIR and PIR)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic is between PIR and CIR)
       After the third round Green (Afd1) and Yellow (Afd2) and Red (Afd3) traffic can be
          observed at the analyser. (The traffic rate is over PIR)
        There should be no losses on flow 1.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:




Test# AF.5.10
      Test Label: Mark.AF.trTCM.PIR
      Last modification: 16.10.2000
       Purpose: To verify that when the system supports a "Two Rate Three Colour Marker"
                 (trTCM) method the Peak Information Rate - PIR is measured in bytes of IP
                 packets per second (without link and physical layer information).
      References: PICS for AF PHB 2.2 (49)
      Requirement: Conditional (If test AF.5.7 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.

      Used traffic types: see below
      Test Setup (in Colour-Blind-mode)
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the PIR value to 0.6 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. At the Conditioner the conform-action should be
          further comparison with the CIR value, excess-action should set the colour to Red
          (Afd3).
       Set the CIR value to 0.35 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. Conform-action should set the colour to Green (Afd1),
          excess-action should set the colour to Yellow (Afd2).
       Set the CBS value to 35 ms (this corresponds to 0.28 * Cegr bytes if Cegr is taken as number
          without dimension)
       Set the PBS value to 45 ms (this corresponds to 0.36 * Cegr bytes if Cegr is taken as number
          without dimension)
       Allocate a rate of 0.8 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the throughput per AF class and the losses of the traffic received by the
         destination traffic analyser.
      Repeat the test with the load rate (Rtest) increased by 0.1 * Cegr for flow 1 until the value of 0.4 *
      Cegr is reached for flow 1.
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CIR and PIR)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic is between PIR and CIR)
       After the third round Green (Afd1) and Yellow (Afd2) and Red (Afd3) traffic can be
          observed at the analyser. (The traffic rate is over PIR)
       There should be no losses on flow 1.
       PIR should be measured in bytes of IP packets per second (without link and physical layer
          information)
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:
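
      Note: the colour-blind two rate three colour marking configured in this test can be
      illustrated with the following sketch. It only shows the per-packet decision (the PIR bucket
      is checked before the CIR bucket, as required above); the refilling of the token buckets at
      PIR and CIR up to PBS and CBS is omitted, and all names are illustrative, not an
      implementation requirement.

          GREEN, YELLOW, RED = "Afd1", "Afd2", "Afd3"

          def mark_colour_blind(packet_len, pir_tokens, cir_tokens):
              """Return the colour of one packet and the updated bucket contents (in bytes)."""
              if pir_tokens < packet_len:                 # packet exceeds the peak bucket
                  return RED, pir_tokens, cir_tokens      # PIR excess-action: mark Red (Afd3)
              if cir_tokens < packet_len:                 # within PIR, but exceeds the committed bucket
                  return YELLOW, pir_tokens - packet_len, cir_tokens   # CIR excess-action: Yellow (Afd2)
              # conforms to both rates
              return GREEN, pir_tokens - packet_len, cir_tokens - packet_len   # conform-action: Green (Afd1)

      In colour-aware mode (as configured in the preceding test case) the decision additionally
      takes the incoming colour into account.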





Test# AF.5.11
      Test Label: Mark.AF.trTCM.CIR
      Last modification: 16.10.2000
      Purpose: To verify that, when the system supports a Two Rate Three Colour Marker
                (trTCM) method, the Committed Information Rate (CIR) is measured in bytes of
                IP packets per second (without link and physical layer information).
      References: PICS for AF PHB 2.2 (50)
      Requirement: Conditional (If test AF.5.7 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the PIR value to 0.6 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. At the Conditioner the conform-action should be
          further comparison with the CIR value, excess-action should set the colour to Red
          (Afd3).
       Set the CIR value to 0.35 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. Conform-action should set the colour to Green (Afd1),
          excess-action should set the colour to Yellow (Afd2).
       Set the CBS value to 35 ms (this corresponds to 0.28 * Cegr bytes if Cegr is taken as number
          without dimension)
       Set the PBS value to 45 ms (this corresponds to 0.36 * Cegr bytes if Cegr is taken as number
          without dimension)
       Allocate a rate of 0.8 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the throughput per AF class and the losses of the traffic received by the
         destination traffic analyser.
      Repeat the test with the load rate (Rtest) increased by 0.1 * Cegr for flow 1 until the value of 0.4 *
      Cegr is reached for flow 1.
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CIR and PIR)



       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic is between PIR and CIR)
       After the third round Green (Afd1) and Yellow (Afd2) and Red (Afd3) traffic can be
          observed at the analyser. (The traffic rate is over PIR)
       There should be no losses on flow 1.
       CIR should be measured in bytes of IP packets per second (without link and physical layer
          information)
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:
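
      Note: the "bursty on-off arrival with burst duration Ton" used for flow 1 can be generated,
      for example, as fixed-length ON periods at a peak rate separated by OFF gaps sized so that
      the long-term average equals Rtest. The peak rate and the deterministic gap in the sketch
      below are assumptions; the test only prescribes Rtest and Ton.

          def on_off_schedule(r_test, r_peak, t_on, total_time):
              """Yield (start_time, duration) of ON periods; during ON the source sends at r_peak."""
              t_off = t_on * (r_peak / r_test - 1.0)      # makes the average rate equal to r_test
              t = 0.0
              while t < total_time:
                  yield (t, t_on)
                  t += t_on + t_off

          # Example: Rtest = 0.4 * Cegr, peak rate = Cegr, Ton = 20 ms, one second of traffic.
          bursts = list(on_off_schedule(0.4, 1.0, 0.020, 1.0))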




Test# AF.5.12
      Test Label: Mark.AF.trTCM.PBS
      Last modification: 16.10.2000
      Purpose: To verify that, when the system supports a Two Rate Three Colour Marker
                (trTCM) method, the Peak Burst Size (PBS) is measured in bytes.
      References: PICS for AF PHB 2.2 (51)
      Requirement: Conditional (If test AF.5.7 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types:
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the PIR value to 0.6 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. At the Conditioner the conform-action should be
          further comparison with the CIR value, excess-action should set the colour to Red
          (Afd3).
       Set the CIR value to 0.35 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. Conform-action should set the colour to Green (Afd1),
          excess-action should set the colour to Yellow (Afd2).
       Set the CBS value to 35 ms (this corresponds to 0.28 * Cegr bytes if Cegr is taken as number
          without dimension)
       Set the PBS value to 45 ms (this corresponds to 0.36 * Cegr bytes if Cegr is taken as number
          without dimension)
       Allocate a rate of 0.8 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
      Procedure:
       Verify, that the test environment is configured as described above.

       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the throughput per AF class and the losses of the traffic received by the
         destination traffic analyser.
      Repeat the test with the load rate (Rtest) increased by 0.1 * Cegr for flow 1 until the value of 0.4 *
      Cegr is reached for flow 1.
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CIR and PIR)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic is between PIR and CIR)
       After the third round Green (Afd1) and Yellow (Afd2) and Red (Afd3) traffic can be
          observed at the analyser. (The traffic rate is over PIR)
       There should be no losses on flow 1.
       PBS should be measured in bytes.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:




Test# AF.5.13
      Test Label: Mark.AF.trTCM.CBS
      Last modification: 16.10.2000
      Purpose: To verify that, when the system supports a Two Rate Three Colour Marker
                (trTCM) method, the Committed Burst Size (CBS) is measured in bytes.
      References: PICS for AF PHB 2.2 (52)
      Requirement: Conditional (If test AF.5.7 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types:
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Create a
          bursty on-off arrival for the AF class subflow with a burst duration Ton = 20ms.
       Set the PIR value to 0.6 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. At the Conditioner the conform-action should be
          further comparison with the CIR value, excess-action should set the colour to Red
          (Afd3).
       Set the CIR value to 0.35 * Cegr at the implemented Conditioner functionality on the ingress
          interface per supported AF class. Conform-action should set the colour to Green (Afd1),
          excess-action should set the colour to Yellow (Afd2).


       Set the CBS value to 35 ms (this corresponds to 0.28 * Cegr bytes if Cegr is taken as number
          without dimension)
       Set the PBS value to 45 ms (this corresponds to 0.36 * Cegr bytes if Cegr is taken as number
          without dimension)
       Allocate a rate of 0.8 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the throughput per AF class and the losses of the traffic received by the
         destination traffic analyser.
      Repeat the test with the load rate (Rtest) increased by 0.1 * Cegr for flow 1 until the value of 0.4 *
      Cegr is reached for flow 1.
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:
       After the first round only Green traffic (Afd1) can be observed at the analyser. (No excess
          traffic is over CIR and PIR)
       After the second round only Green (Afd1) and Yellow (Afd2) traffic can be observed at the
          analyser. (The traffic is between PIR and CIR)
       After the third round Green (Afd1) and Yellow (Afd2) and Red (Afd3) traffic can be
          observed at the analyser. (The traffic rate is over PIR)
       There should be no losses on flow 1.
       CBS should be measured in bytes.
      Problems:
      Results: PASSED                       FAILED                  INCONCLUSIVE
      Remark:




3.1.1.6 Policing

Test# AF.6.1
      Test Label: Discard.AF.congest.min
      Last modification: 16.10.2000
      Purpose: To verify that the system minimise long-term congestion within each AF class.
      References: PICS for AF PHB 2.3 (53)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 120 ms on the egress interface per supported AF class (this
          corresponds to 0.96 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the traffic discarding policy (active queue management) of the SUT at the egress
          side (e.g. CBWFQ, RED, WRED) in the following way (Table 8, Figure 5):
                                    Table 8. Buffer occupancy settings
                    amin            amax           bmin          bmax            cmin            cmax
buffer size         15ms             25ms          25ms          35ms            35ms            45ms


      [Figure 5. Traffic discarding policy at the SUT: discard probability (rising to 100%) as a
      function of buffer occupancy. The discard probability of AFd3 rises between amin and amax,
      that of AFd2 between bmin and bmax, and that of AFd1 between cmin and cmax, so traffic of
      the lower class is discarded before traffic of the higher class.]
             Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr. Set up the
              load generator to compose flow 1 of three individual sub flows which each have a rate
              of Rtest / 3. Assign the drop precedence level 1 to the first subflow, drop precedence
              level 2 to the second and drop precedence level 3 to the third (drop precedence level
              coding see Table 7). The three sub-flows should be multiplexed simultaneously. Create
              a bursty on-off arrival for each of the subflows with a burst duration Ton = 10 ms. Flow
              1 is destined to the destination traffic analyser.
             Set up the load generator 2 to create a flow 2 coded as any AF class different than the
              tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
              Flow 2 is destined to a host other than the destination traffic analyser.
             Allocate a rate of 0.4 * Cegr on the egress interface to the AF class of flow 1.
             Allocate a rate of 0.55 * Cegr to flow 2.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the throughput and the losses per AF class subflow of the traffic received by the
         destination traffic analyser, as well as the losses of the traffic marked as default.
      Repeat the test with the load burst duration increased by 10 ms for flow 1 until a burst
      duration of Ton = 40 ms is reached for flow 1.
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:


       DS codepoints of egress traffic should be correct.
       Afd3 traffic should be dropped in the first round
       Afd2 traffic should be dropped in the second round
       Afd1 traffic should be dropped in the last round
       The losses per AF drop precedence level must be in the order Afd1 < Afd2 < Afd3 for all
          tests.
       Short-term congestion should be handled by queuing, while long-term congestion should
          be handled by dropping.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
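
      Note: the discard policy of Table 8 and Figure 5 can be read as a WRED-style profile per
      drop precedence: the drop probability of a colour grows from 0 at its lower threshold to
      100% at its upper threshold of buffer occupancy. The linear ramp in the sketch below is an
      assumption; only the thresholds are taken from Table 8.

          THRESHOLDS_MS = {            # colour: (min, max) buffer occupancy in ms, from Table 8
              "Afd3": (15, 25),        # amin, amax
              "Afd2": (25, 35),        # bmin, bmax
              "Afd1": (35, 45),        # cmin, cmax
          }

          def drop_probability(colour, occupancy_ms):
              low, high = THRESHOLDS_MS[colour]
              if occupancy_ms <= low:
                  return 0.0
              if occupancy_ms >= high:
                  return 1.0
              return (occupancy_ms - low) / (high - low)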




Test# AF.6.2
      Test Label: Discard.AF.congest.short_term
      Last modification: 16.10.2000
      Purpose: To verify that the system allows short-term congestion resulting from bursts within
                each AF class.
      References: PICS for AF PHB 2.3 (54)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support the maximum number N of different AF classes as verified by test
          AF.3.1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr. Flow 1
          consists of one sub flow per AF class, with a rate of 0.1 * Cegr per sub flow. Create a
          bursty on-off arrival per AF class subflow with a burst duration Ton of the allocated
          buffer size. All AF subflows should be multiplexed simultaneously. Flow 1 is destined to
          the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than
          the destination traffic analyser.
       Allocate a rate of 0.1 * Cegr on the egress interface to each tested AF class.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.


       Measure the traffic received by the destination traffic analyser on loss in terms of packets
         per AF class.
      Observable results:
       Each AF class should have no or a very low loss.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
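
      Note: the "bursty arrival (e.g. Poisson)" used for flow 2 can be realised by drawing
      exponentially distributed inter-arrival gaps. The sketch below generates packet send times
      for a given average packet rate; the packet size and the mapping of the configured rate
      (a fraction of Cegr) onto a packet rate are left to the tester.

          import random

          def poisson_arrivals(avg_rate_pps, total_time_s, seed=0):
              """Return packet send times (s) forming a Poisson arrival process of the given rate."""
              rng = random.Random(seed)
              times, t = [], 0.0
              while True:
                  t += rng.expovariate(avg_rate_pps)
                  if t > total_time_s:
                      return times
                  times.append(t)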




Test# AF.6.3
      Test Label: Discard.AF.congest.drop
      Last modification: 16.10.2000
      Purpose: To verify that the system drops packets as a result of long-term congestion within
                an AF class.
      References: PICS for AF PHB 2.3 (55)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 120 ms on the egress interface per supported AF class (this
          corresponds to 0.96 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the traffic discarding policy (active queue management) of the SUT at the egress
          side (e.g. CBWFQ, RED, WRED) as shown in Table 8 and Figure 5.
       Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr. Set up the load
          generator to compose flow 1 of three individual sub flows which each have a rate of Rtest
          / 3. Assign the drop precedence level 1 to the first subflow, drop precedence level 2 to
          the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. Create a bursty on-
          off arrival for each of the subflows with a burst duration Ton = 10 ms. Flow 1 is destined
          to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.4 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a rate of 0.55 * Cegr to flow 2.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the throughput and the losses per AF class subflow of the traffic received by the
         destination traffic analyser, as well as the losses of the traffic marked as default.


      Repeat the test with the load burst duration increased by 10 ms for all sub-flows of flow 1
      until a burst duration of Ton = 40 ms is reached.
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:
       DS codepoints of egress traffic should be correct.
       Afd3 traffic should be dropped in the first round
       Afd2 traffic should be dropped in the second round
       Afd1 traffic should be dropped in the last round
       The losses per AF drop precedence level must be in the order Afd1 < Afd2 < Afd3 for all
          tests.
       Short-term congestion should be handled by queuing, while long-term congestion should
          be handled by dropping.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# AF.6.4
      Test Label: Discard.AF.congest.queue
      Last modification: 16.10.2000
      Purpose: To verify that the system queues packets as a result of short term congestion within
                an AF class.
      References: PICS for AF PHB 2.3 (56)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support the maximum number N of different AF classes as verified by test
          AF.3.1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr. Flow 1
          consists of one sub flow per AF class, with a rate of 0.1 * Cegr per sub flow. Create a
          bursty on-off arrival per AF class subflow with a burst duration Ton of the allocated
          buffer size. All AF subflows should be multiplexed simultaneously. Flow 1 is destined to
          the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than
          the destination traffic analyser.
       Allocate a rate of 0.1 * Cegr on the egress interface to each tested AF class.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on loss in terms of packets
         per AF class.
      Observable results:
       Each AF class should have no or a very low loss.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE




Test# AF.6.5
      Test Label: Discard.AF.congest.equaldropping
      Last modification: 16.10.2000
      Purpose: To verify that the system’s dropping algorithm results in an equal packet discard
                probability for flows with different short-term burst shapes but identical longer-
                term packet rates, if both flows are within the same AF class.
      References: PICS for AF PHB 2.3 (57)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.3 * Cegr. Set up the load
          generator to compose flow 1 of three individual sub flows which each have a rate of Rtest
          / 3. Assign the drop precedence level 1 to the first sub-flow, drop precedence level 2 to
          the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. Create a bursty on-
          off arrival per subflow with a burst duration Ton = 10 ms for drop precedence level 1, Ton
          = 15 ms for drop precedence level 2, Ton = 20 ms for drop precedence level 3. All AF
          subflows should be multiplexed simultaneously. Flow 1 is destined to the destination
          traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of 0.4 * Cegr on the egress interface to each tested AF class.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Start to send flow 2 on interface 2.

       Measure the traffic received by the destination traffic analyser on loss in terms of packets
         per AF class.
      Observable results:
       Within the AF class all sub flows should have the same long term dropping probability.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE




Test# AF.6.6
      Test Label: Discard.AF.congest.discardrate
      Last modification: 21.11.2000
      Purpose: To verify that the system’s dropping algorithm results in a discard rate for a
               particular micro-flow’s packets within a single precedence level which is
               proportional to that flow’s percentage of the total amount of traffic passing
               through that precedence level at any smoothed congestion level.
      References: PICS for AF PHB 2.3 (58)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.3 * Cegr. Set up the load
          generator to compose flow 1 of three individual sub flows which each have a rate of Rtest
          / 3. Assign the drop precedence level 1 to all sub-flows (drop precedence level coding
          see Table 7). Create a bursty on-off arrival per subflow with a burst duration Ton = 10 ms
          for all sub-flows. All AF subflows should be multiplexed simultaneously. Flow 1 is
          destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.65 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of 0.2 * Cegr on the egress interface to each tested AF class.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Start to send flow 2 on interface 2.
       Measure the traffic received by the destination traffic analyser on loss in terms of packets
         per sub-flow of that AF class.
      Observable results:
       Within the AF class all sub flows should have the same dropping probability.
      Problems:
 2001 EURESCOM Participants in Project P1006                                            EDIN 0079-1006
page 70 (12826)                                                 EURESCOM Technical Specification
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
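
      Note: the pass criterion of this test (the discard rate of each micro-flow is proportional to
      its share of the precedence level's traffic) can be evaluated, for instance, by comparing the
      per-sub-flow loss ratios. The 5% tolerance in the sketch below is an assumption, not part of
      the requirement.

          def loss_ratios(sent, received):
              """sent / received: dicts mapping sub-flow id -> packet counts at generator / analyser."""
              return {flow: 1.0 - received[flow] / sent[flow] for flow in sent}

          def losses_approximately_equal(sent, received, tolerance=0.05):
              ratios = list(loss_ratios(sent, received).values())
              return max(ratios) - min(ratios) <= tolerance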




Test# AF.6.7
      Test Label: Discard.AF.respond.gradually
      Last modification: 16.10.2000
      Purpose: To verify that the dropping algorithm of the system responds gradually to
               congestion.
      References: PICS for AF PHB 2.3 (59)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 120 ms on the egress interface per supported AF class (this
          corresponds to 0.96 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the traffic discarding policy (active queue management) of the SUT at the egress
          side (e.g. RED, WRED) as shown in Table 8 and Figure 5.
       Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr. Set up the load
          generator to compose flow 1 of three individual sub flows which each have a rate of Rtest
          / 3. Assign the drop precedence level 1 to the first subflow, drop precedence level 2 to
          the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. Create a bursty on-
          off arrival for each of the subflows with a burst duration Ton = 10 ms. Flow 1 is destined
          to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.4 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a rate of 0.55 * Cegr to flow 2.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the throughput and the losses per AF class subflow of the traffic received by the
         destination traffic analyser, as well as the losses of the traffic marked as default.
      Repeat the test with the load burst duration increased by 3 ms for all sub-flows in flow 1 until
      a burst duration of Ton = 40 ms is reached (11 rounds).
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:

       DS codepoints of egress traffic should be correct.
       Observable droppings at
                  Ton = 10 ms:    No dropping
                  Ton = 13 ms:    No dropping
                  Ton = 16 ms:    Afd3 dropping (low level)
                  Ton = 19 ms:    Afd3 dropping (higher level)
                  Ton = 22 ms:    Afd3 dropping (highest level)
                  Ton = 25 ms:    Afd2 dropping (min. level) + Afd3 dropping (all)
                  Ton = 28 ms:    Afd2 dropping (low level) + Afd3 dropping (all)
                  Ton = 31 ms:    Afd2 dropping (higher level) + Afd3 dropping (all)
                  Ton = 34 ms:    Afd2 dropping (highest level) + Afd3 dropping (all)
                  Ton = 37 ms:    Afd1 dropping (low level) + Afd2 dropping (all) + Afd3
                                  dropping (all)
                  Ton = 40 ms:    Afd1 dropping (higher level) + Afd2 dropping (all) + Afd3
                                  dropping (all)
       The losses per AF drop precedence level must be in the order Afd1 < Afd2 < Afd3 for all
          tests.
       The dropping algorithm of the system responds gradually to congestion.
      Problems:
      Results: PASSED                     FAILED                   INCONCLUSIVE
      Remark:
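
      Note: one way to check that the dropping algorithm "responds gradually" is to verify that
      the per-colour losses never decrease from one Ton round to the next and that the ordering
      Afd1 < Afd2 < Afd3 holds in every round, as in the sketch below. How the per-round loss
      figures are collected is outside the scope of this sketch.

          COLOURS = ("Afd1", "Afd2", "Afd3")

          def responds_gradually(loss_per_round):
              """loss_per_round: one dict {'Afd1': x, 'Afd2': y, 'Afd3': z} per round (Ton = 10..40 ms)."""
              monotone = all(later[c] >= earlier[c]
                             for earlier, later in zip(loss_per_round, loss_per_round[1:])
                             for c in COLOURS)
              ordered = all(r["Afd1"] <= r["Afd2"] <= r["Afd3"] for r in loss_per_round)
              return monotone and ordered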




Test# AF.6.8
      Test Label: Discard.AF.independent.config
      Last modification: 16.10.2000
      Purpose: To verify that the system’s dropping algorithm control parameters are
                independently configurable for each packet drop precedence and for each AF
                class.
      References: PICS for AF PHB 2.3 (60)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Allocate a buffer space of 120 ms on the egress interface per supported AF class (this
          corresponds to 0.96 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the traffic discarding policy (active queue management) of the SUT at the egress
          side (e.g. CBWFQ, RED, WRED) as shown in Table 8 and Figure 5.

           Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr. Set up the
            load generator to compose flow 1 of three individual sub flows which each have a rate
            of Rtest / 3. Assign the drop precedence level 1 to the first subflow, drop precedence
            level 2 to the second and drop precedence level 3 to the third (drop precedence level
            coding see Table 7). The three sub-flows should be multiplexed simultaneously. Create
            a bursty on-off arrival for each of the subflows with a burst duration Ton = 10 ms. Flow
            1 is destined to the destination traffic analyser.
           Set up the load generator 2 to create a flow 2 coded as any AF class different than the
            tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
            Flow 2 is destined to a host other than the destination traffic analyser.
           Allocate a rate of 0.4 * Cegr on the egress interface to the AF class of flow 1.
           Allocate a rate of 0.55 * Cegr to flow 2.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the throughput and the losses per AF class subflow of the traffic received by the
         destination traffic analyser, as well as the losses of the traffic marked as default.
      Repeat the test with the load burst duration increased by 3 ms for all sub-flows in flow 1 until
      a burst duration of Ton = 40 ms is reached (11 rounds).
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:
       DS codepoints of egress traffic should be correct.
       Observable droppings at
                  Ton = 10 ms:    No dropping
                  Ton = 13 ms:    No dropping
                  Ton = 16 ms:    Afd3 dropping (low level)
                  Ton = 19 ms:    Afd3 dropping (higher level)
                  Ton = 22 ms:    Afd3 dropping (highest level)
                  Ton = 25 ms:    Afd2 dropping (min. level) + Afd3 dropping (all)
                  Ton = 28 ms:    Afd2 dropping (low level) + Afd3 dropping (all)
                  Ton = 31 ms:    Afd2 dropping (higher level) + Afd3 dropping (all)
                  Ton = 34 ms:    Afd2 dropping (highest level) + Afd3 dropping (all)
                  Ton = 37 ms:    Afd1 dropping (low level) + Afd2 dropping (all) + Afd3
                                  dropping (all)
                  Ton = 40 ms:    Afd1 dropping (higher level) + Afd2 dropping (all) + Afd3
                                  dropping (all)
       The losses per AF drop precedence level must be in the order Afd1 < Afd2 < Afd3 for all
          tests.
       The dropping control parameters are independently configurable.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




3.1.1.7 Tunnelling

Test# AF.7.1
      Test Label: Tunnel.AF.forwarding.assurance
      Last modification: 21.11.2000
      Purpose: To verify that the system doesn’t reduce the forwarding assurance of tunnelled AF
                packets.
      References: PICS for AF PHB 2.4 (61)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support the maximum number N of different AF classes as verified by test
          AF.3.1.
       Set the SUT to use an implemented tunnelling technique (e.g. GRE, IPSec, L2TP) for
          each AF class between the ingress router and the egress router.
       Allocate a buffer space of 50 ms on the egress interface per supported AF class (this
          corresponds to 0.4 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the load generator 1 to create a test flow 1 of rate Rtest = N * 0.1 * Cegr. Flow 1
          consists of one sub flow per AF class, with a rate of 0.1 * Cegr per sub flow. Create a
          bursty on-off arrival per AF class subflow with a burst duration Ton of the allocated
          buffer size. All AF subflows should be multiplexed simultaneously. Flow 1 is destined to
          the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of (0.1
          * (4-N) + 0.55)* Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than
          the destination traffic analyser.
       Allocate a rate of 0.1 * Cegr on the egress interface to each tested AF class.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on loss in terms of packets
         per AF class.
      Observable results:
       Each AF class should have no or a very low loss.
       The results should be the same as those of Test Case AF.6.2.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE

      Remark:
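
      Note: keeping the forwarding assurance of tunnelled AF packets typically means that the AF
      codepoint is also carried on the outer header at the tunnel ingress, so that interior nodes
      apply the same PHB. The toy model below only illustrates this DSCP copy; it is not tied to
      any particular tunnelling technique (GRE, IPSec, L2TP) and the field names are illustrative.

          def encapsulate(inner_packet, tunnel_src, tunnel_dst):
              """Build a toy outer header that preserves the inner AF codepoint."""
              return {
                  "src": tunnel_src,
                  "dst": tunnel_dst,
                  "dscp": inner_packet["dscp"],   # copy the AF codepoint to the outer header
                  "payload": inner_packet,
              }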




Test# AF.7.2
      Test Label: Tunnel.AF.ordering
      Last modification: 21.11.2000
      Purpose: To verify that the system doesn’t re-order the packets within a tunnelled AF micro-
                flow.
      References: PICS for AF PHB 2.4 (62)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support one of the N different AF classes as verified by test AF.3.1.
       Set the SUT to use an implemented tunnelling technique (e.g. GRE, IPSec, L2TP) for
          each AF class between the ingress router and the egress router.
       Allocate a buffer space of 25 ms on the egress interface per supported AF class (this
          corresponds to 0.2 * Cegr bytes if Cegr is taken as number without dimension).
       Set up the SUT to discard traffic of the selected AF class-drop precedence level Afd3 after
          a congestion level threshold of 30% is reached. If possible, set the corresponding
          congestion level threshold for Afd2 to 60% (if this is impossible, set it to the same value
          as for Afd3). Set the congestion level threshold for Afd1 to 90%.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Set up the
          load generator to compose flow 1 of three individual sub flows which each have a rate of
          Rtest / 3. Assign the drop precedence level 1 to the first sub-flow, drop precedence level 2
          to the second and drop precedence level 3 to the third (drop precedence level coding see
          Table 7). The three sub-flows should be multiplexed simultaneously. The packets of the
          individual sub-flows are spaced 3 / Rtest, while flow 1’s packets are spaced 1 / Rtest. Set
          up the load generator to insert consecutive sequence numbers into the packets of flow 1.
          Make sure that the sequence numbers are assigned independent of the AF precedence
          level of a packet.
       Set up the load generator 2 to create a flow 2 coded as any AF class different than the
          tested one. Flow 2 has an average rate of 0.55 * Cegr with bursty arrival (e.g. Poisson).
          Flow 2 is destined to a host other than the destination traffic analyser.
       Allocate a rate of 0.35 * Cegr on the egress interface to the AF class of flow 1.
       Allocate a rate of 0.65 * Cegr to flow 2.
      Note that this test may require the insertion of information into the packets’ payload. If the
      destination traffic analyser can’t evaluate the sequence numbers of the payload, a sniffer may
      be inserted between the SUT and the destination traffic analyser.
      Procedure:
       Verify, that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.


       Measure the traffic received by the destination traffic analyser on the sequence of all
         arriving AF packets by evaluating the sequence number.
      Repeat the test with the egress rate decreased by 0.1 * Cegr for flow 1 and increased by the
      same amount for flow 2 until an egress rate of 0.05 * Cegr is reached for flow 1.
      Repeat the test by changing the AF class assigned to flow 1 until every supported AF class
      has been tested.
      Observable results:
       If packet Pi is received at time Ti and packet Pi+1 at time Ti + Δt, Δt > 0, then the sequence
           number of Pi must be smaller than that of Pi+1 for all i.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
      Remark:
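
      Note: the observable result above can be checked mechanically from a capture of
      (arrival time, sequence number) pairs, as sketched below; the capture format is an
      assumption.

          def no_reordering(received):
              """received: iterable of (arrival_time, sequence_number) for flow 1's packets."""
              seqs = [seq for _, seq in sorted(received)]          # order by arrival time
              return all(a < b for a, b in zip(seqs, seqs[1:]))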




3.1.2 EF PHB test Suite

3.1.2.1 Behaviour Aggregate

Test# EF.1.1
      Test Label: classific.EF.codepoint.BA
      Last modification: 26.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the DS
               codepoint.
      References: PICS for EF PHB (1)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: Table 9
                            Table 9. Used traffic types for EF BA test
                              incoming DSCP             outgoing DSCP
                                   101110                    101110
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 marked by the EF DSCP with a rate Rtest = 0.4
          * Cegr and a packet spacing of 1 / Rtest (CBR). Flow 1 is destined to the destination traffic
          analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
       Set the system to classify incoming IP traffic into the EF class.
      Procedure:

       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses.
      Observable results:
       The EF DS codepoint of the traffic measured by the destination traffic analyser must be the
          same as on ingress interface 1.
       No EF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
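
      Note: the EF codepoint of Table 9 is 101110 in binary (46 decimal), carried in the six most
      significant bits of the IPv4 TOS / DS field. A simple check of the ingress and egress
      codepoints is sketched below; how the TOS byte is obtained from the analyser is an
      assumption.

          EF_DSCP = 0b101110        # 46, the EF codepoint of Table 9

          def dscp_of(tos_byte):
              """Extract the DSCP (upper six bits) from the DS field / former TOS byte."""
              return tos_byte >> 2

          def ef_codepoint_preserved(ingress_tos, egress_tos):
              return dscp_of(ingress_tos) == EF_DSCP and dscp_of(egress_tos) == EF_DSCP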




Multiple Field
Test# EF.1.2
      Test Label: classific.EF.codepoints.MF.IP.destaddr
      Last modification: 26.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the destination
                address.
      References: PICS for EF PHB (2)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types:
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR). Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
       Set the system to classify the incoming traffic flow 1 into the specific DSCP according to
          the IP destination address.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses.
      Observable results:
       DS codepoint of egress traffic should be correct.


       No EF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE




Test# EF.1.3
      Test Label: classific.EF.codepoints.MF.IP.sourceaddr
      Last modification: 26.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the source
                address.
      References: PICS for EF PHB (3)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR). Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
       Set the system to classify the incoming traffic flow 1 into the specific DSCP according to
          the IP source address.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses.
      Observable results:
       DS codepoint of egress traffic should be correct.
       No EF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE




Test# EF.1.4
      Test Label: classific.EF.codepoints.MF.IP.protID
      Last modification: 26.10.2000

      Purpose: To verify that the system classifies incoming IP traffic according to the protocol
                ID.
      References: PICS for EF PHB (4)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types:
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR). Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
       Set the system to classify the incoming traffic flow 1 into the specific DSCP according to
          the IP protocol ID.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses.
      Observable results:
       DS codepoint of egress traffic should be correct.
       No EF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE




Test# EF.1.5
      Test Label: classific.EF.codepoints.MF.IP.sourceport
      Last modification: 26.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the source
                port.
      References: PICS for EF PHB (5)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:


       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR). Set up the load generator to use specific UDP port numbers for
          source port and destination port. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
       Set the system to classify the incoming traffic according to the source port.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses.
      Observable results:
       DS codepoint of egress traffic should be correct.
       No EF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE




Test# EF.1.6
      Test Label: classific.EF.codepoints.MF.IP.destport
      Last modification: 26.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the destination
                port.
      References: PICS for EF PHB (6)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR). Set up the load generator to use specific UDP port numbers for
          source port and destination port. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
       Set the system to classify the incoming traffic according to the destination port.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses.
      Observable results:
       DS codepoint of egress traffic should be correct.
       No EF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE




Test# EF.1.7
      Test Label: classific.EF.codepoints.MF.IP.combine
      Last modification: 26.10.2000
      Purpose: To verify that the system classifies incoming IP traffic according to the
               combination of the classification methods.
      References: PICS for EF PHB (7)
      Requirement: Conditional (If any test from EF.1.2 to EF.1.6 is Passed then Optional else
                   N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set up the load generator 1 to create a flow 1 with a rate Rtest = 0.4 * Cegr and a packet
          spacing of 1 / Rtest (CBR) and with specific source port, destination port, source IP
          address, destination IP address and protocol ID. Flow 1 is destined to the destination
          traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
        Set the system to classify incoming traffic flow 1 into the EF DSCP according to a
           specific combination of the classification criteria that passed in tests EF.1.2 to
           EF.1.6 (see the classifier sketch after this test description).
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
        Measure the traffic received by the destination traffic analyser on losses.
      Observable results:


       DS codepoint of egress traffic should be correct.
       No EF packets are lost.
      Problems:
      Results: PASSED                      FAILED                    INCONCLUSIVE
      Remark:
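
      A minimal multi-field (MF) classifier sketch in Python, assuming the combination tested in
      EF.1.7 is the full IP 5-tuple (source address, destination address, protocol, source port,
      destination port). The rule table, addresses and ports are illustrative only; the actual
      SUT configuration syntax is vendor specific.

          # Hypothetical MF classifier: map an IP 5-tuple to a DSCP.
          EF_DSCP = 0b101110        # 46, Expedited Forwarding codepoint
          DEFAULT_DSCP = 0b000000

          # One rule per combination of classification criteria; None matches any value.
          RULES = [
              (("10.0.1.1", "10.0.2.1", 17, 5001, 5002), EF_DSCP),   # flow 1: UDP, specific 5-tuple
          ]

          def classify(src_ip, dst_ip, proto, sport, dport):
              """Return the DSCP assigned to a packet described by its 5-tuple."""
              for (r_src, r_dst, r_proto, r_sport, r_dport), dscp in RULES:
                  if ((r_src is None or r_src == src_ip) and
                      (r_dst is None or r_dst == dst_ip) and
                      (r_proto is None or r_proto == proto) and
                      (r_sport is None or r_sport == sport) and
                      (r_dport is None or r_dport == dport)):
                      return dscp
              return DEFAULT_DSCP

          # Flow 1 matches the combined criteria and is classified as EF;
          # any other 5-tuple (e.g. flow 2) stays in the default class.
          assert classify("10.0.1.1", "10.0.2.1", 17, 5001, 5002) == EF_DSCP
          assert classify("10.0.3.7", "10.0.4.9", 17, 6000, 6001) == DEFAULT_DSCP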




3.1.2.2 Traffic Profiling

Test# EF.2.1
      Test Label: Profile.EF.rate
      Last modification: 26.10.2000
      Purpose: To verify that the system profiles EF traffic to a rate.
      References: PICS for EF PHB (8)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the load of the EF class
           exceeds 0.25 * Cegr.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.2 * Cegr. Flow 1’s
           packets are spaced 1 / Rtest. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with flow 1 at load rates Rtest = 0.3 * Cegr and Rtest = 0.4 * Cegr.
      Observable results:
       DS codepoint of egress traffic should be correct.
        Excess EF traffic above 0.25 * Cegr should be dropped.


      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
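
      The Cegr-relative parameters of EF.2.1 become concrete once the egress capacity is fixed.
      A small Python sketch, assuming a 100 Mbit/s egress link purely as an example, shows the
      three load steps of the test and the EF loss expected from the 0.25 * Cegr profile:

          # Worked example of the EF.2.1 parameters (Cegr = 100 Mbit/s is an assumption).
          C_EGR = 100_000_000                 # egress capacity in bit/s
          POLICE_RATE = 0.25 * C_EGR          # EF traffic above this load is dropped
          EF_ALLOC = 0.45 * C_EGR             # rate allocated to the EF class on egress
          BE_ALLOC = 0.55 * C_EGR             # rate allocated to flow 2 (default class)

          for factor in (0.2, 0.3, 0.4):      # the three repetitions of the test
              r_test = factor * C_EGR
              expected_drop = max(0.0, r_test - POLICE_RATE)
              print(f"Rtest = {r_test / 1e6:5.1f} Mbit/s -> "
                    f"expected EF drop rate = {expected_drop / 1e6:4.1f} Mbit/s")
          # Only the 0.3 and 0.4 * Cegr runs should show EF losses; 0.25 * Cegr is forwarded.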




Test# EF.2.2
      Test Label: Profile.EF.burstsize
      Last modification: 26.10.2000
      Purpose: To verify that, in case of a pre-emptive forwarding mechanism, the system
                profiles EF traffic to a burst size.
      References: PICS for EF PHB (9)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set the SUT to use one kind of pre-emptive forwarding mechanism (e.g. priority queuing)
           for EF traffic.
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 10 ms. (This corresponds to
           0.08 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with flow 1 burst sizes of 5 ms and 2.5 ms.
      Observable results:
       DS codepoint of egress traffic should be correct.
       Excess traffic above 5ms burst size should be dropped.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE

      Remark:




3.1.2.3 Marking

Test# EF.3.1
      Test Label: mark.EF.fromdefault
      Last modification: 26.10.2000
      Purpose: To verify that the system marks incoming IP traffic (DSCP 000000) with the EF
                DSCP (101110).
      References: PICS for EF PHB (10)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: Table 10
                            Table 10. Used traffic type for marking test
                            incoming DSCP                  outgoing DSCP
                                000000                          101110
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Set up the load generator 1 to create a flow 1 marked by the DSCP=000000 with a rate Rtest
          = 0.4 * Cegr and a packet spacing of 1 / Rtest (CBR). Flow 1 is destined to the destination
          traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
       Allocate a rate of at least 0.2 * Cegr on the egress interface to the DE class.
       Set the system to classify incoming IP traffic into the EF class and mark it with
          DSCP=101110.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses.
      Observable results:
       The DS codepoint of the traffic measured by the destination traffic analyser must be
          101110.
       No EF packets are lost.
      Problems:


      Results: PASSED                     FAILED                     INCONCLUSIVE
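
      The remarking checked in EF.3.1 can also be verified numerically: the EF codepoint 101110
      is decimal 46, and since the DSCP occupies the six most significant bits of the former TOS
      byte, the DS byte seen by the traffic analyser is 46 << 2 = 184 (0xB8). A short sketch in
      Python; the dictionary-based packet representation is illustrative only.

          # Remark a default-marked packet (DSCP 000000) to EF (101110).
          EF_DSCP = 0b101110    # 46

          def mark_ef(packet):
              """Overwrite the DSCP while preserving the two low-order (ECN) bits."""
              ecn = packet["tos"] & 0b11
              packet["tos"] = (EF_DSCP << 2) | ecn
              return packet

          pkt = {"tos": 0b00000000}          # incoming packet marked as default
          mark_ef(pkt)
          assert pkt["tos"] >> 2 == EF_DSCP
          print(hex(pkt["tos"]))             # 0xb8 (184): the DS byte expected at the analyser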




Test# EF.3.2
      Test Label: mark.EF.fromother
      Last modification: 26.10.2000
      Purpose: To verify that the system marks incoming IP traffic carrying a DSCP other than
                000000 and 101110 with the EF DSCP (101110).
      References: PICS for EF PHB (11)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Set up the load generator 1 to create a flow 1 marked by a DSCP other than 000000 and
          101110 with a rate Rtest = 0.4 * Cegr and a packet spacing of 1 / Rtest (CBR). Flow 1 is
          destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of at least 0.8 * Cegr on the egress interface to the EF class of the traffic flow
          1.
       Allocate a rate of at least 0.2 * Cegr on the egress interface to the DE class.
       Set the system to classify incoming IP traffic into the EF class and mark it with
          DSCP=101110.
      Procedure:
       Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses.
      Observable results:
       The DS codepoint of the traffic measured by the destination traffic analyser must be
          101110.
       No EF packets are lost.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE




3.1.2.4 Policing

Test# EF.4.1
      Test Label: police.EF.peakrate
      Last modification: 26.10.2000
      Purpose: To verify that the system polices incoming EF traffic to a given peak rate.
      References: PICS for EF PHB (12)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the load of the EF class
           exceeds 0.25 * Cegr.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.2 * Cegr. Flow 1’s
          packets are spaced 1 / Rtest. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with flow 1 at load rates Rtest = 0.3 * Cegr and Rtest = 0.4 * Cegr.
      Observable results:
       DS codepoint of egress traffic should be correct.
        Excess EF traffic above 0.25 * Cegr should be dropped.
      Problems:
      Results: PASSED                      FAILED                    INCONCLUSIVE
      Remark:
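
      Peak-rate policing of the kind exercised in EF.4.1 is commonly implemented with a token
      bucket: tokens accumulate at the configured rate up to a burst allowance, and packets that
      find too few tokens are dropped. The following Python sketch is a generic illustration,
      not the SUT's actual algorithm; the rate and burst values are example figures.

          # Generic token-bucket policer: non-conforming packets are dropped.
          class TokenBucketPolicer:
              def __init__(self, rate_bps, burst_bytes):
                  self.rate = rate_bps / 8.0      # token fill rate in bytes per second
                  self.depth = burst_bytes        # maximum bucket content (burst tolerance)
                  self.tokens = burst_bytes
                  self.last = 0.0

              def conforms(self, arrival_time, pkt_len):
                  """True if the packet is forwarded, False if it is dropped."""
                  self.tokens = min(self.depth,
                                    self.tokens + (arrival_time - self.last) * self.rate)
                  self.last = arrival_time
                  if self.tokens >= pkt_len:
                      self.tokens -= pkt_len
                      return True
                  return False

          # Offer a 0.4 * Cegr CBR flow (Cegr = 100 Mbit/s assumed) against a 0.25 * Cegr profile.
          policer = TokenBucketPolicer(rate_bps=0.25 * 100_000_000, burst_bytes=15_000)
          dropped = sum(1 for i in range(10_000)                  # 10 000 packets of 500 bytes
                        if not policer.conforms(i / 10_000, 500)) # spread evenly over one second
          print(dropped)   # roughly 3700: the excess above 0.25 * Cegr is discarded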




Test# EF.4.2
      Test Label: police.EF.preemptive.forwarding
      Last modification: 26.10.2000

      Purpose: To verify that the system forwards EF traffic using a pre-emptive forwarding mechanism.
      References: PICS for EF PHB (13)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
        Set the SUT to use one kind of pre-emptive forwarding mechanism (e.g. priority queuing)
           for EF traffic.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.45 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 2.5 ms. (This corresponds to
           0.02 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
      Observable results:
       DS codepoint of egress traffic should be correct.
       No EF traffic should be dropped.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
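
      The "pre-emptive forwarding mechanism" required here is in practice a strict priority
      scheduler: the EF queue is served whenever it holds a packet, so EF departures are never
      delayed by best-effort backlog (apart from the packet already in transmission). A minimal
      Python sketch of that scheduling decision; the two-queue split and the queue names are
      assumptions of the illustration.

          # Strict priority (pre-emptive) scheduler sketch.
          from collections import deque

          ef_queue = deque()      # Expedited Forwarding packets
          be_queue = deque()      # default / best-effort packets

          def dequeue():
              """Pick the next packet to transmit on the egress interface."""
              if ef_queue:
                  return ef_queue.popleft()
              if be_queue:
                  return be_queue.popleft()
              return None

          # Even behind a large best-effort backlog, an EF burst leaves first.
          be_queue.extend(f"BE-{i}" for i in range(1000))
          ef_queue.extend(f"EF-{i}" for i in range(3))
          print([dequeue() for _ in range(4)])    # ['EF-0', 'EF-1', 'EF-2', 'BE-0']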




Test# EF.4.3
      Test Label: police.EF.burstsize
      Last modification: 26.10.2000
      Purpose: To verify that, in case of a pre-emptive forwarding mechanism, the system polices
                incoming EF traffic to a given burst size.
      References: PICS for EF PHB (14)
      Requirement: Conditional (If test EF.4.2 is Passed then Mandatory else N/A)

      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set the SUT to use one kind of pre-emptive forwarding mechanism (e.g. priority queuing)
           for EF traffic.
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 10 ms. (This corresponds to
           0.08 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with flow 1 burst sizes of 5 ms and 2.5 ms.
      Observable results:
       DS codepoint of egress traffic should be correct.
       Excess traffic above 5ms burst size should be dropped.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# EF.4.4
      Test Label: police.EF.peakrate.set
      Last modification: 26.10.2000
      Purpose: To verify that the policing peak rate is configurable by the network administrator.
      References: PICS for EF PHB (15)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup

      Initial Conditions:
       Set the SUT to support EF traffic class.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the load of the EF class
           exceeds 0.25 * Cegr.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.2 * Cegr. Flow 1’s
           packets are spaced 1 / Rtest. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with load rates Rtest = 0.3 * Cegr and Rtest = 0.4 * Cegr.
      Observable results:
       DS codepoint of egress traffic should be correct.
        Excess traffic above 0.25 * Cegr should be dropped.
       The dropping threshold should be configurable.
      Problems:
      Results: PASSED                       FAILED                     INCONCLUSIVE
      Remark:




Test# EF.4.5
      Test Label: police.EF.burstsize.set
      Last modification: 26.10.2000
      Purpose: To verify that the policing burst size is configurable by the network administrator.
      References: PICS for EF PHB (16)
      Requirement: Conditional (If test EF.4.2 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.



        Set the SUT to use one kind of pre-emptive forwarding mechanism (e.g. priority queuing)
           for EF traffic.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the burst size of the EF class
           exceeds 25 ms (this corresponds to 0.2 * Cegr bytes if Cegr is taken as number without
           dimension).
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 10 ms. (This corresponds to
           0.08 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with flow 1 burst sizes of 5 ms and 2.5 ms.
      Observable results:
       DS codepoint of egress traffic should be correct.
       Excess EF traffic above 5ms burst size should be dropped.
       The dropping threshold should be configurable.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# EF.4.6
      Test Label: police.EF.discard.violated
      Last modification: 26.10.2000
      Purpose: To verify that the system discards incoming EF traffic that violates the policing
               parameters.
      References: PICS for EF PHB (17)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the load of the EF class exceeds
           0.25 * Cegr or the burst size of the EF class exceeds 25 ms (this corresponds to 0.2 * Cegr
           bytes if Cegr is taken as number without dimension).
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.2 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 10 ms. (This corresponds to
           0.08 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with flow 1 load rates Rtest = 0.3 * Cegr and Rtest = 0.4 * Cegr.
       Repeat the test with flow 1 burst sizes of 5 ms and 2.5 ms.
      Observable results:
       DS codepoint of egress traffic should be correct.
       Excess EF traffic above 5ms burst size should be dropped.
       Excess EF traffic above 0.25 * Cegr should be dropped.
      Problems:
      Results: PASSED                      FAILED                    INCONCLUSIVE
      Remark:




Test# EF.4.7
      Test Label: police.EF.discard.all
      Last modification: 26.10.2000
      Purpose: To verify that the system discards all incoming EF traffic when no peak rate is
                configured and EF is enabled.
      References: PICS for EF PHB (18)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup


      Initial Conditions:
       Set the SUT to support EF traffic class.
       Clear peak rate setting of the EF class of the SUT.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.2 * Cegr. Flow 1’s
           packets are spaced 1 / Rtest. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of 0.8
          * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the destination
          traffic analyser.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
      Observable results:
       All traffic in EF class should be dropped.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




3.1.2.5 Shaping

Test# EF.5.1
      Test Label: shape.EF.rate
      Last modification: 26.10.2000
      Purpose: To verify that the system shapes the incoming EF traffic based on a given peak
                rate.
      References: PICS for EF PHB (19)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Switch on traffic shaping on the SUT. Set the peak rate parameter of traffic shaping to 0.3
          * Cegr. Allocate a burst size of 5ms on the egress interface to the EF class. (This
          corresponds to 0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 2.5 ms. (This corresponds to
           0.02 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.

       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput of the EF
         class.
      Repeat the test with load rate 0.5 * Cegr and with load rate 0.6 * Cegr .
      Observable results:
       DS codepoint of egress traffic should be correct.
       The throughput of the EF traffic must be close to 0.3 * Cegr.
      Problems:
      Results: PASSED                     FAILED                     INCONCLUSIVE
      Remark:
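
      Unlike a policer, the shaper tested in EF.5.1 delays non-conforming packets instead of
      dropping them, so the egress rate never exceeds the configured peak rate. A token-bucket
      shaper sketch in Python; the peak rate and burst values are example figures, not the SUT's
      configuration.

          # Token-bucket shaper: excess packets are delayed, not dropped.
          class TokenBucketShaper:
              def __init__(self, peak_bps, burst_bytes):
                  self.rate = peak_bps / 8.0     # bytes per second
                  self.depth = burst_bytes
                  self.tokens = burst_bytes
                  self.last = 0.0

              def release_time(self, arrival_time, pkt_len):
                  """Return the earliest time at which the packet may leave the shaper."""
                  self.tokens = min(self.depth,
                                    self.tokens + (arrival_time - self.last) * self.rate)
                  self.last = arrival_time
                  if self.tokens >= pkt_len:
                      self.tokens -= pkt_len
                      return arrival_time                       # conforming: sent immediately
                  wait = (pkt_len - self.tokens) / self.rate    # wait until enough tokens accrue
                  self.tokens = 0.0
                  self.last = arrival_time + wait
                  return arrival_time + wait

          # Shape a 0.4 * Cegr offered load to 0.3 * Cegr (Cegr = 100 Mbit/s assumed): the
          # analyser should then measure close to 30 Mbit/s at the egress with no EF loss.
          shaper = TokenBucketShaper(peak_bps=0.3 * 100_000_000, burst_bytes=5_000)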




Test# EF.5.2
      Test Label: shape.EF.burstsize
      Last modification: 26.10.2000
      Purpose: To verify that, in case of a pre-emptive forwarding mechanism, the system shapes
                the incoming EF traffic based on a given burst size.
      References: PICS for EF PHB (20)
      Requirement: Conditional (If test EF.4.2 is Passed then Optional else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support the EF class.
       Switch on traffic shaping on the SUT. Set the peak rate parameter of traffic shaping to 0.3
          * Cegr. Allocate a burst size of 5ms on the egress interface to the EF class. (This
          corresponds to 0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.2 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 2.5 ms. (This corresponds to
           0.02 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput of the EF
         class.
       Repeat the test with flow 1 burst sizes of 5 ms and 10 ms.

      Observable results:
       DS codepoint of egress traffic should be correct.
       The shaped burst size must be equal to or less than 5ms.
       EF packets above 5ms bursts size should be dropped.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# EF.5.3
      Test Label: shape.EF.rate*burstsize
      Last modification: 26.10.2000
      Purpose: To verify that shaping parameters (peak rate, burst size) can be set by the network
                administrator.
      References: PICS for EF PHB (21)
      Requirement: Conditional (If test EF.5.1 or EF.5.2 is Passed then Mandatory else N/A)
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support the EF class.
       Switch on traffic shaping on the SUT. Set the peak rate parameter of traffic shaping to 0.3
          * Cegr and the burst size to 5ms (this corresponds to 0.04 * Cegr bytes if Cegr is taken as
          number without dimension).
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 2.5 ms. (This corresponds to
           0.02 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.
       Allocate a rate of 0.4 * Cegr on the egress interface to the EF class.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput of the EF
         class.
       Repeat the test with different shaping parameters: peak rate 0.4 * Cegr and burst size 4 ms.
      Observable results:
       DS codepoint of egress traffic should be correct.
        In the first test the throughput of the EF traffic must be close to 0.3 * Cegr with a burst
           size close to 4/3 * 2.5 ms.
        In the second test the throughput of the EF traffic must be close to 0.4 * Cegr with a burst
           size close to 4 ms.


       Shaping parameters (peak rate and burst size) should be configurable.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
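
      The "burst size close to 4/3 * 2.5 ms" expected in the first run follows from conservation
      of data: during the 2.5 ms on-period traffic arrives at 0.4 * Cegr but leaves the shaper at
      only 0.3 * Cegr, so the same amount of data is spread over 2.5 ms * 0.4 / 0.3 = 10/3 ms at
      the output. A two-line check in Python:

          # Output burst duration of a shaped on-off source: same data, lower rate.
          t_on, r_in, r_shaped = 2.5e-3, 0.4, 0.3     # seconds, and rates as fractions of Cegr
          t_out = t_on * r_in / r_shaped              # = 2.5 ms * 4/3
          print(t_out * 1e3)                          # 3.33... ms, i.e. 4/3 * 2.5 ms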




Test# EF.5.4
      Test Label: shape.EF.violation
      Last modification: 26.10.2000
      Purpose: To verify that the system shapes incoming EF traffic that violates a given set of
               policing parameters.
      References: PICS for EF PHB (22)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support the EF class.
       Switch on traffic shaping on the SUT. Set the peak rate parameter of traffic shaping to 0.3
          * Cegr and the burst size to 5ms (this corresponds to 0.04 * Cegr bytes if Cegr is taken as
          number without dimension).
        Set up the load generator 1 to create a test flow 1 of rate Rtest = 0.4 * Cegr for the EF class.
           Create a bursty on-off arrival with a burst duration Ton = 2.5 ms. (This corresponds to
           0.02 * Cegr bytes if Cegr is taken as number without dimension). Flow 1 is destined to
           the destination traffic analyser.
       Allocate a rate of 0.4 * Cegr on the egress interface to the EF class.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Measure the traffic received by the destination traffic analyser on throughput of the EF
         class.
      Observable results:
       DS codepoint of egress traffic should be correct.
        The throughput of the EF traffic must be close to 0.3 * Cegr with a burst size close to
           4/3 * 2.5 ms.
       EF packets should not be dropped.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




3.1.2.6 Traffic Forwarding

Test# EF.6.1
      Test Label: forward.EF.set.deprate
      Last modification: 26.10.2000
      Purpose: To verify that a minimum departure rate for an EF aggregate is configurable on the
                system.
      References: PICS for EF PHB (23)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
        Set the SUT to use one kind of pre-emptive forwarding mechanism (e.g. priority queuing)
           for EF traffic. Set the serving rate to 0.25 * Cegr.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the load of the EF class
           exceeds 0.25 * Cegr.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.2 * Cegr. Flow 1’s
           packets are spaced 1 / Rtest. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with load rates Rtest = 0.3 * Cegr and Rtest = 0.4 * Cegr.
      Observable results:
       DS codepoint of egress traffic should be correct.
        Excess traffic above 0.25 * Cegr should be dropped.
       Departure rate of the outgoing EF traffic should be configurable.
      Problems:
      Results: PASSED                       FAILED                     INCONCLUSIVE
      Remark:





Test# EF.6.2
      Test Label: forward.EF.independency
      Last modification: 26.10.2000
      Purpose: To verify that the departure rate of an EF aggregate is independent of the
               intensity of other traffic.
      References: PICS for EF PHB (24)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
        Set the SUT to use one kind of pre-emptive forwarding mechanism (e.g. priority queuing)
           for EF traffic. Set the serving rate to 0.25 * Cegr.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the load of the EF class
           exceeds 0.25 * Cegr.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.2 * Cegr. Flow 1’s
          packets are spaced 1 / Rtest. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test, increasing the load rate of flow 2 by 0.1 * Cegr until 0.75 * Cegr is reached.
      Observable results:
       DS codepoint of egress traffic should be correct.
       Departure rate of the outgoing EF traffic should be independent from flow 2.
       No EF packets are lost.
      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:




Test# EF.6.3
      Test Label: forward.EF.min.deprate
      Last modification: 26.10.2000
      Purpose: To verify that the policing of an EF aggregate is configurable so that the EF
                arrival rate is always equal to or less than the system's minimum departure rate
                for the EF aggregate.
      References: PICS for EF PHB (25)
      Requirement: Mandatory
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
        Set the SUT to use one kind of pre-emptive forwarding mechanism (e.g. priority queuing)
           for EF traffic. Set the serving rate to 0.25 * Cegr.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the load of the EF class
           exceeds 0.25 * Cegr.
        Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.2 * Cegr. Flow 1’s
           packets are spaced 1 / Rtest. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with load rates Rtest = 0.3 * Cegr and Rtest = 0.4 * Cegr.
      Observable results:
       DS codepoint of egress traffic should be correct.
       Throughput of EF traffic should be close to 0.25 * Cegr.
      Problems:
      Results: PASSED                       FAILED                     INCONCLUSIVE
      Remark:




Test# EF.6.4

      Test Label: forward.EF.overmeasured
      Last modification: 26.10.2000
      Purpose: To verify that the system forwards an EF aggregate at a rate averaging at least the
                configured rate when measured over any time interval equal to or longer than the
                time needed to send an output service MTU sized packet at the configured rate.
      References: PICS for EF PHB (26)
      Requirement: Optional
      Test Configuration: Figure 2.
      Used traffic types: see below
      Test Setup
      Initial Conditions:
       Set the SUT to support EF traffic class.
       Allocate a buffer space of 5 ms on the egress interface for the EF class (this corresponds to
          0.04 * Cegr bytes if Cegr is taken as number without dimension).
        Set up the SUT to drop the incoming excess traffic once the load of the EF class exceeds
           0.3 * Cegr.
       Set up the load generator 1 to create a test flow 1 with a rate of Rtest = 0.4 * Cegr. Flow 1’s
          packets are spaced 1 / Rtest. Flow 1 is destined to the destination traffic analyser.
       Set up the load generator 2 to create a flow 2 marked as default with an average rate of
          0.55 * Cegr, bursty arrival (e.g. Poisson). Flow 2 is destined to a host other than the
          destination traffic analyser.
       Allocate a rate of 0.45 * Cegr on the egress interface to the EF class of flow 1.
       Allocate a rate of 0.55 * Cegr on the egress interface to flow 2.
      Procedure:
        Verify that the test environment is configured as described above.
       Start to send flow 1 on interface 1 for at least ten seconds.
       Send flow 2 simultaneously on interface 2.
       Measure the traffic received by the destination traffic analyser on losses for the EF class.
       Repeat the test with different measurement intervals:
       1. test: measurement interval [s] = output service MTU size [byte] * 8 / configured rate
            [bit/s]
       2. test: measurement interval [s] = 1 s
       3. test: measurement interval [s] = 30 s
      Observable results:
       DS codepoint of egress traffic should be correct.
       The throughput of the EF traffic must be close to 0.3 * Cegr.


      Problems:
      Results: PASSED                     FAILED                    INCONCLUSIVE
      Remark:
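
      The first measurement interval prescribed above is simply the serialisation time of one
      output service MTU sized packet at the configured rate. A short Python check, where the
      1500 byte MTU and the 30 Mbit/s configured rate (0.3 * Cegr on a 100 Mbit/s link) are
      assumptions made only for the example:

          # Smallest measurement interval for EF.6.4.
          MTU_BYTES = 1500                       # output service MTU (example)
          CONFIGURED_RATE = 0.3 * 100_000_000    # configured EF rate in bit/s (example)

          interval = MTU_BYTES * 8 / CONFIGURED_RATE
          print(f"{interval * 1e6:.0f} microseconds")   # 400 us for these example values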




3.2     Single vendor tests - Performance tests

3.2.1 Test Descriptions

Test#    Test Label                     Purpose
         Performance Tests
Perf1.1 tcp.ingress.colouring.one       To characterise the impact of a simple colouring
                                        scheme at the ingress on the overall system
                                        performance
Perf1.2 udp.ingress.colouring.one       To characterise the impact of a simple colouring
                                        scheme at the ingress on the overall system
                                        performance
Perf2.1 tcp.egress.colouring.one        To characterise the impact of a simple colouring
                                        scheme at the egress on the overall system performance
Perf2.2 udp.egress.colouring.one        To characterise the impact of a simple colouring
                                        scheme at the egress on the overall system performance
Perf3.1 tcp.ingress.colouring.two       To characterise the impact of a bandwidth dependent
                                        colouring scheme at the ingress on the overall system
                                        performance
Perf3.2 udp.ingress.colouring.two       To characterise the impact of a bandwidth dependent
                                        colouring scheme at the ingress on the overall system
                                        performance
Perf4.1 tcp.egress.colouring.two        To characterise the impact of a bandwidth dependent
                                        colouring scheme at the egress on the overall system
                                        performance
Perf4.2 udp.egress.colouring.two        To characterise the impact of a bandwidth dependent
                                        colouring scheme at the egress on the overall system
                                        performance
Perf5.1 tcp.ingress.colouring.discard   To characterise the impact of a bandwidth limiting
                                        colouring scheme at the ingress on the overall system
                                        performance
Perf5.2 udp.ingress.colouring.discard   To characterise the impact of a bandwidth limiting
                                        colouring scheme at the ingress on the overall system
                                        performance
Perf6.1 tcp.egress.colouring.discard    To characterise the impact of a bandwidth limiting
                                        colouring scheme at the egress on the overall system
                                        performance
Perf6.2 udp.egress.colouring.discard    To characterise the impact of a bandwidth limiting
                                        colouring scheme at the egress on the overall system
                                        performance





3.2.2   Performance Testing - Test Suites

Test Label:           tcp.ingress.colouring.one
Last modification:    22/09/2000
Purpose:              To characterise the impact of a simple colouring scheme at the ingress on
                      the overall system performance
References:
Requirement:
Test Figure:          Figure 3.
Used traffic types:   TCP
Test Setup            Router 1:
                      Route to TCP traffic generation server address range through ATM
                      interface
                      Default to discard traffic to unknown addresses
                      Colour traffic towards the TCP gen. server to a specific DSCP at the
                      ingress interface


                      Router 2:
                      Route to TCP traffic generation client address through ATM interface
                      Default to discard traffic to unknown addresses


                      Traffic generation server:
                      The traffic generation server responds to a whole Class C range of
                      addresses (253 addresses)


                      Traffic generation client:
                      The traffic generation client is programmed to send traffic to a fixed
                      amount of addresses for each test.
Initial Conditions:   With no traffic between the TCP traffic generation client and server, long
                      and mid term resource usage meters show stable values before each
                      measurement
Procedure:            Verify that the test environment is configured as described above.
                      For every fixed number of test streams:
                      Wait until initial conditions are met.
                      Start TCP traffic generator client.
                      Examine egress traffic with traffic analyser.
                      Observe long and mid term resource usage counters and wait until they
                      stabilise
Observable results:   DSCP of egress traffic should be correct.
                      Mid and long-term resource usage counters
Problems:


Results:             Number     of    test Mid term CPU usage   Long term CPU usage
                     streams
                     1
                     5
                     10
                     50






Test Label:           udp.ingress.colouring.one
Last modification:    22/09/2000
Purpose:              To characterise the impact of a simple colouring scheme at the ingress
                      on the overall system performance
References:
Requirement:
Test Figure:          Figure 4
Used traffic types:   UDP
Test Setup            Router:
                      Ingress and egress interfaces are PPP interfaces (/30 address mask).
                      Route to a specific Class C through egress interface.
                      Colour all ingress traffic to a specific DSCP.
                      Traffic generator:
                      Generate UDP over ATM traffic using RFC1483 to random addresses
                      within the destination Class C addressing space
Initial Conditions:   With no traffic being generated by the ATM traffic generator, long
                      and mid term resource usage meters show stable values before each
                      measurement
Procedure:            Verify that the test environment is configured as described above.
                      For every test.
                      With no traffic, wait until initial conditions are met.
                      Program the ATM traffic generator to a given bandwidth
                      Start ATM traffic generator.
                      Examine outgoing traffic with traffic analyser.
                      Observe long and mid term resource usage counters and wait until
                      they stabilise
Observable results:   DSCP of outgoing traffic should be correct.
                      Mid and long-term resource usage counters
                      Maximum throughput in packets per second.
Problems:
Results:              Bandwidth(%          of Mid term CPU usage Long term CPU usage
                      STM1)
                      1
                      5
                      10
                      50
                      100
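
To relate the "% of STM-1" load steps in the results table to the "maximum throughput in
packets per second" observable, the nominal STM-1 line rate of 155.52 Mbit/s can be scaled
and divided by the packet size. A small Python conversion sketch; the 500 byte packet size
is an assumption, and ATM cell plus AAL5/RFC 1483 encapsulation overhead will keep the
measured IP packet rates somewhat below these upper bounds.

    # Convert "% of STM-1" into an upper bound on IP packets per second.
    STM1_BPS = 155_520_000     # nominal STM-1 line rate in bit/s
    PKT_BYTES = 500            # example IP packet size

    for percent in (1, 5, 10, 50, 100):
        bps = STM1_BPS * percent / 100
        pps = bps / (PKT_BYTES * 8)
        print(f"{percent:3d}% of STM-1 -> {bps / 1e6:6.2f} Mbit/s, <= {pps:8.0f} pkt/s")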




Test Label:            tcp.egress.colouring.one
Last modification:     22/09/2000
Purpose:               To characterise the impact of a simple colouring scheme at the egress
                       on the overall system performance
References:
Requirement:
Test Figure:           Figure 3
Used traffic types:    TCP
                       Router 1:
Test Setup             Route to TCP traffic generation server address range through ATM
                       interface
                       Colour traffic towards the TCP gen. server to a specific DSCP at the
                       egress interface
                       Default to discard traffic to unknown addresses


                       Router 2:
                       Route to TCP traffic generation client address through ATM interface
                       Default to discard traffic to unknown addresses


                       Traffic generation server:
                       The traffic generation server responds to a whole Class C range of
                       addresses (253 addresses)


                       Traffic generation client:
                       The traffic generation client is programmed to send traffic to a fixed
                       amount of addresses for each test.
Initial Conditions:    With no traffic between the TCP traffic generation client and server,
                       long and mid term resource usage meters show stable values before
                       each measurement
Procedure:             Verify that the test environment is configured as described above.
                       For every fixed number of test streams:
                       Wait until initial conditions are met.
                       Start TCP traffic generator client.
                       Examine egress traffic with traffic analyser.
                       Observe long and mid term resource usage counters and wait until they
                       stabilise
Observable results:    DSCP of egress traffic should be correct.
                       Mid and long-term resource usage counters
Problems:




Results:           Number    of   test Mid term CPU usage   Long term CPU usage
                   streams
                   1
                   5
                   10
                   50






Test Label:            udp.egress.colouring.one
Last modification:     22/09/2000
Purpose:               To characterise the impact of a simple colouring scheme at the egress
                       on the overall system performance
References:
Requirement:
Test Figure:           Figure 4
Used traffic types:    UDP
Test Setup             Router:
                       Ingress and egress interfaces are PPP interfaces (/30 address mask).
                       Route to a specific Class C through egress interface.
                       Colour all traffic in the egress interface to a specific DSCP.
                       Traffic generator:
                       Generate UDP over ATM traffic using RFC1483 to random addresses
                       within the destination Class C addressing space
Initial Conditions:    With no traffic being generated by the ATM traffic generator, long and
                       mid term resource usage meters show stable values before each
                       measurement
Procedure:             Verify that the test environment is configured as described above.
                       For every test.
                       With no traffic, wait until initial conditions are met.
                       Program the ATM traffic generator to a given bandwidth
                       Start ATM traffic generator.
                       Examine egress traffic with traffic analyser.
                       Observe long and mid term resource usage counters and wait until they
                       stabilise
Observable results:    DSCP of egress traffic should be correct.
                       Mid and long-term resource usage counters
                       Maximum throughput in packets per second.
Problems:
Results:               Bandwidth (% of STM1)    Mid term CPU usage    Long term CPU usage
                       1
                       5
                       10
                       50
                       100
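
The simple colouring scheme exercised by the *.colouring.one tests rewrites the DS field of every
packet crossing the interface, independent of rate. The fragment below is a minimal sketch of that
rewrite on a raw IPv4 header; it is an illustrative assumption, not code taken from any of the tested
routers:

    // Minimal sketch of unconditional DSCP colouring, as in the *.colouring.one tests.
    // The class and constants are illustrative assumptions, not a real router API.
    public final class SimpleColourer {
        private final int dscp;                      // e.g. an EF or AFxy codepoint (6 bits)

        public SimpleColourer(int dscp) { this.dscp = dscp & 0x3F; }

        /** Rewrites the DS field (top 6 bits of the former TOS octet) of an IPv4 header. */
        public void colour(byte[] ipv4Packet) {
            int ecn = ipv4Packet[1] & 0x03;          // preserve the two ECN bits
            ipv4Packet[1] = (byte) ((dscp << 2) | ecn);
            // A real implementation would also update the IPv4 header checksum here.
        }
    }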




Test Label:           tcp.ingress.colouring.two
Last modification:    22/09/2000
Purpose:              To characterise the impact of a bandwidth dependent colouring scheme
                      at the ingress on the overall system performance
References:
Requirement:
Test Figure:          Figure 3
Used traffic types:   TCP
                      Router 1:
Test Setup            Route to TCP traffic generation server address range through ATM
                      interface
                      Default to discard traffic to unknown addresses
                      Colour traffic towards the TCP gen. server in the ingress interface to a
                      specific DSCP while within a specific bandwidth and to another one if
                      beyond that bandwidth


                      Router 2:
                      Route to TCP traffic generation client address through ATM interface
                      Default to discard traffic to unknown addresses


                      Traffic generation server:
                      The traffic generation server responds to a whole Class C range of
                      addresses (253 addresses)


                      Traffic generation client:
                      The traffic generation client is programmed to send traffic to a fixed
                      number of addresses for each test.
Initial Conditions:   With no traffic between the TCP traffic generation client and server,
                      long and mid term resource usage meters show stable values before
                      each measurement
Procedure:            Verify that the test environment is configured as described above.
                      For every fixed number of test streams:
                      Wait until initial conditions are met.
                      Start TCP traffic generator client.
                      Examine egress traffic with traffic analyser.
                      Observe long and mid term resource usage counters and wait until they
                      stabilise
Observable results:   DSCP of egress traffic should be correct.
                      Mid and long-term resource usage counters
Problems:



Results:                  Number of test streams    Mid term CPU usage    Long term CPU usage
                       1
                       5
                       10
                       50
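
The bandwidth-dependent scheme used in the *.colouring.two tests marks traffic with one DSCP
while it remains within a configured rate and with a second DSCP once that rate is exceeded. One
way to realise such a two-colour marker is a single token bucket, in the spirit of the single rate
marker of [RFC2697] reduced to two colours. The sketch below is an illustrative assumption, not the
implementation of the devices under test:

    // Token-bucket based two-colour marker: packets within the committed rate get
    // inDscp, packets beyond it get outDscp (illustrative sketch, not a router API).
    public final class TwoColourMarker {
        private final double rateBytesPerSec;   // committed information rate
        private final double bucketSizeBytes;   // committed burst size
        private double tokens;                  // current bucket fill
        private long lastNanos;
        private final int inDscp, outDscp;

        public TwoColourMarker(double rateBytesPerSec, double bucketSizeBytes,
                               int inDscp, int outDscp) {
            this.rateBytesPerSec = rateBytesPerSec;
            this.bucketSizeBytes = bucketSizeBytes;
            this.tokens = bucketSizeBytes;
            this.lastNanos = System.nanoTime();
            this.inDscp = inDscp;
            this.outDscp = outDscp;
        }

        /** Returns the DSCP to apply to a packet of the given size. */
        public synchronized int mark(int packetBytes) {
            long now = System.nanoTime();
            tokens = Math.min(bucketSizeBytes,
                              tokens + (now - lastNanos) * 1e-9 * rateBytesPerSec);
            lastNanos = now;
            if (tokens >= packetBytes) {
                tokens -= packetBytes;
                return inDscp;                  // within the configured bandwidth
            }
            return outDscp;                     // beyond the configured bandwidth
        }
    }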






Test Label:           udp.ingress.colouring.two
Last modification:    22/09/2000
Purpose:              To characterise the impact of a bandwidth dependent colouring scheme at
                      the ingress on the overall system performance
References:
Requirement:
Test Figure:          Figure 4
Used traffic types:   UDP
Test Setup            Router:
                      Ingress and egress interfaces are PPP interfaces (/30 address mask).
                      Route to a specific Class C through egress interface.
                      Colour traffic in the ingress interface to a specific DSCP below a given
                      bandwidth usage and to another one beyond that bandwidth limit.
                      Traffic generator:
                      Generate UDP over ATM traffic using RFC1483 to random addresses
                      within the destination Class C addressing space
Initial Conditions:   With no traffic being generated by the ATM traffic generator, long and
                      mid term resource usage meters show stable values before each
                      measurement
Procedure:            Verify that the test environment is configured as described above.
                      For every test.
                      With no traffic, wait until initial conditions are met.
                      Program the ATM traffic generator to a given bandwidth
                      Start ATM traffic generator.
                      Examine egress traffic with traffic analyser.
                      Observe long and mid term resource usage counters and wait until they
                      stabilise
Observable results:   DSCP of egress traffic should be correct.
                      Mid and long-term resource usage counters
                      Maximum throughput in packets per second.
Problems:
Results:              Bandwidth (% of STM1)    Mid term CPU usage    Long term CPU usage
                      1
                      5
                      10
                      50
                      100






Test Label:              tcp.egress.colouring.two
Last modification:       22/09/2000
Purpose:                 To characterise the impact of a bandwidth dependent colouring
                         scheme at the egress on the overall system performance
References:
Requirement:
Test Figure:             Figure 3
Used traffic types:      TCP
                         Router 1:
Test Setup               Route to TCP traffic generation server address range through ATM
                         interface
                         Default to discard traffic to unknown addresses
                         Colour traffic towards the TCP gen. server in the egress interface to
                         a specific DSCP while within a specific bandwidth and to another
                         one if beyond that bandwidth
                         Router 2:
                         Route to TCP traffic generation client address through ATM
                         interface
                         Default to discard traffic to unknown addresses
                         Traffic generation server:
                         The traffic generation server responds to a whole Class C range of
                         addresses (253 addresses)
                         Traffic generation client:
                         The traffic generation client is programmed to send traffic to a fixed
                         number of addresses for each test.
Initial Conditions:      With no traffic between the TCP traffic generation client and server,
                         long and mid term resource usage meters show stable values before
                         each measurement
Procedure:               Verify that the test environment is configured as described above.
                         For every fixed number of test streams:
                         Wait until initial conditions are met.
                         Start TCP traffic generator client.
                         Examine egress traffic with traffic analyser.
                         Observe long and mid term resource usage counters and wait until
                         they stabilise
Observable results:      DSCP of egress traffic should be correct.
                         Mid and long-term resource usage counters
Problems:
Results:                 Number of test streams    Mid term CPU usage    Long term CPU usage
                         1

                   5
                   10
                   50






Test Label:             udp.egress.colouring.two
Last modification:      22/09/2000
Purpose:                To characterise the impact of a bandwidth dependent colouring
                        scheme at the egress on the overall system performance
References:
Requirement:
Test Figure:            Figure 4
Used traffic types:     UDP
Test Setup              Router:
                        Ingress and egress interfaces are PPP interfaces (/30 address mask).
                        Route to a specific Class C through egress interface.
                        Colour traffic in the egress interface to a specific DSCP below a given
                        bandwidth usage and to another one beyond that bandwidth limit.
                        Traffic generator:
                        Generate UDP over ATM traffic using RFC1483 to random addresses
                        within the destination Class C addressing space
Initial Conditions:     With no traffic being generated by the ATM traffic generator, long
                        and mid term resource usage meters show stable values before each
                        measurement
Procedure:              Verify that the test environment is configured as described above.
                        For every test.
                        With no traffic, wait until initial conditions are met.
                        Program the ATM traffic generator to a given bandwidth
                        Start ATM traffic generator.
                        Examine egress traffic with traffic analyser.
                        Observe long and mid term resource usage counters and wait until
                        they stabilise
Observable results:     DSCP of egress traffic should be correct.
                        Mid and long-term resource usage counters
                        Maximum throughput in packets per second.
Problems:
Results:               Bandwidth (% of STM1)    Mid term CPU usage    Long term CPU usage
                        1
                        5
                        10
                        50
                        100






Test Label:           tcp.ingress.colouring.discard
Last modification:    22/09/2000
Purpose:              To characterise the impact of a bandwidth limiting colouring scheme at
                      the ingress on the overall system performance
References:
Requirement:
Test Figure:          Figure 3
Used traffic types:   TCP
                      Router 1:
Test Setup            Route to TCP traffic generation server address range through ATM
                      interface
                      Default to discard traffic to unknown addresses
                      Colour traffic towards the TCP gen. server in the ingress interface to a
                      specific DSCP while within a specific bandwidth and discard traffic
                      beyond that limit
                      Router 2:
                      Route to TCP traffic generation client address through ATM interface
                      Default to discard traffic to unknown addresses
                      Traffic generation server:
                      The traffic generation server responds to a whole Class C range of
                      addresses (253 addresses)
                      Traffic generation client:
                      The traffic generation client is programmed to send traffic to a fixed
                      number of addresses for each test.
Initial Conditions:   With no traffic between the TCP traffic generation client and server,
                      long and mid term resource usage meters show stable values before
                      each measurement
Procedure:            Verify that the test environment is configured as described above.
                      For every fixed number of test streams:
                      Wait until initial conditions are met.
                      Start TCP traffic generator client.
                      Examine egress traffic with traffic analyser.
                      Observe long and mid term resource usage counters and wait until they
                      stabilise
Observable results:   DSCP of egress traffic should be correct.
                      Mid and long-term resource usage counters
Problems:
Results:              Number of test streams    Mid term CPU usage    Long term CPU usage
                      1


                       5
                       10
                       50
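
The *.colouring.discard tests use the same metering idea, but packets beyond the configured
bandwidth are dropped instead of re-coloured. Reusing the TwoColourMarker sketched earlier (after
the tcp.ingress.colouring.two test), only the action taken on the out-of-profile branch changes; again
this is an illustrative assumption, not a router API:

    // Policer sketch for the *.colouring.discard tests: in-profile packets are
    // coloured, out-of-profile packets are dropped (illustrative, not a router API).
    public final class ColourOrDiscardPolicer {
        private final TwoColourMarker meter;   // reuses the token-bucket sketch above
        public static final int DISCARD = -1;  // sentinel: caller must drop the packet

        public ColourOrDiscardPolicer(double rateBytesPerSec, double burstBytes, int inDscp) {
            this.meter = new TwoColourMarker(rateBytesPerSec, burstBytes, inDscp, DISCARD);
        }

        /** Returns the DSCP to set, or DISCARD if the packet exceeds the bandwidth limit. */
        public int police(int packetBytes) {
            return meter.mark(packetBytes);
        }
    }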






Test Label:           udp.ingress.colouring.discard
Last modification:    22/09/2000
Purpose:              To characterise the impact of a bandwidth limiting colouring scheme
                      at the ingress on the overall system performance
References:
Requirement:
Test Figure:          Figure 4
Used traffic types:   UDP
Test Setup            Router:
                      Ingress and egress interfaces are PPP interfaces (/30 address mask).
                      Route to a specific Class C through egress interface.
                      Colour traffic in the ingress interface to a specific DSCP below a
                      given bandwidth usage and discard traffic beyond that bandwidth
                      limit.
                      Traffic generator:
                      Generate UDP over ATM traffic using RFC1483 to random
                      addresses within the destination Class C addressing space
Initial Conditions:   With no traffic being generated by the ATM traffic generator, long
                      and mid term resource usage meters show stable values before each
                      measurement
Procedure:            Verify that the test environment is configured as described above.
                      For every test.
                      With no traffic, wait until initial conditions are met.
                      Program the ATM traffic generator to a given bandwidth
                      Start ATM traffic generator.
                      Examine egress traffic with traffic analyser.
                      Observe long and mid term resource usage counters and wait until
                      they stabilise
Observable results:   DSCP of egress traffic should be correct.
                      Mid and long-term resource usage counters
                      Maximum throughput in packets per second.
Problems:
Results:              Bandwidth (% of STM1)    Mid term CPU usage    Long term CPU usage
                      1
                      5
                      10
                      50
                      100


Test Label:            tcp.egress.colouring.discard
Last modification:     22/09/2000
Purpose:               To characterise the impact of a bandwidth limiting colouring scheme at
                       the egress on the overall system performance
References:
Requirement:
Test Figure:           Figure 3
Used traffic types:    TCP
                       Router 1:
Test Setup             Route to TCP traffic generation server address range through ATM
                       interface
                       Default to discard traffic to unknown addresses
                       Colour traffic towards the TCP gen. server in the egress interface to a
                       specific DSCP while within a specific bandwidth and discard traffic
                       beyond that limit
                       Router 2:
                       Route to TCP traffic generation client address through ATM interface
                       Default to discard traffic to unknown addresses
                       Traffic generation server:
                       The traffic generation server responds to a whole Class C range of
                       addresses (253 addresses)
                       Traffic generation client:
                       The traffic generation client is programmed to send traffic to a fixed
                       number of addresses for each test.
Initial Conditions:    With no traffic between the TCP traffic generation client and server,
                       long and mid term resource usage meters show stable values before
                       each measurement
Procedure:             Verify that the test environment is configured as described above.
                       For every fixed number of test streams:
                       Wait until initial conditions are met.
                       Start TCP traffic generator client.
                       Examine egress traffic with traffic analyser.
                       Observe long and mid term resource usage counters and wait until they
                       stabilise
Observable results:    DSCP of egress traffic should be correct.
                       Mid and long-term resource usage counters
Problems:
Results:               Number of test streams    Mid term CPU usage    Long term CPU usage
                       1
                       5


                   10
                   50






Test Label:            udp.egress.colouring.discard
Last modification:     22/09/2000
Purpose:               To characterise the impact of a bandwidth limiting colouring scheme at
                       the egress on the overall system performance
References:
Requirement:
Test Figure:           Figure 4
Used traffic types:    UDP
Test Setup             Router:
                       Ingress and egress interfaces are PPP interfaces (/30 address mask).
                       Route to a specific Class C through egress interface.
                       Colour traffic in the egress interface to a specific DSCP below a given
                       bandwidth usage and discard traffic beyond that bandwidth limit.
                       Traffic generator:
                       Generate UDP over ATM traffic using RFC1483 to random addresses
                       within the destination Class C addressing space
Initial Conditions:    With no traffic being generated by the ATM traffic generator, long and
                       mid term resource usage meters show stable values before each
                       measurement
Procedure:             Verify that the test environment is configured as described above.
                       For every test.
                       With no traffic, wait until initial conditions are met.
                       Program the ATM traffic generator to a given bandwidth
                       Start ATM traffic generator.
                       Examine egress traffic with traffic analyser.
                       Observe long and mid term resource usage counters and wait until they
                       stabilise
Observable results:    DSCP of egress traffic should be correct.
                       Mid and long-term resource usage counters
                       Maximum throughput in packets per second.
Problems:
Results:               Bandwidth (% of STM1)    Mid term CPU usage    Long term CPU usage
                       1
                       5
                       10
                       50
                       100





4        Backward compatibility
From the beginning, DiffServ has had limited backward compatibility with the "old-fashioned" IP
precedence-based classification and forwarding. The precedence values in the IPv4 TOS octet are
intentionally compatible with the Class Selector Codepoints (Table 11), even though that octet is
otherwise not DS-compliant. The precedence forwarding behaviours defined in [RFC791, RFC1812]
comply with the Class Selector PHB requirements defined in [RFC2474].
     Table 11. Compatibility matrix between IP Precedence field and DiffServ Class Selector
                                          Codepoints

     IP Precedence bits   Keyword                    Class Selector Codepoint   Class Name
     000                  Routine                    000                        BE
     001                  Priority                   001                        AF1x
     010                  Immediate                  010                        AF2x
     011                  Flash                      011                        AF3x
     100                  Flash-override             100                        AF4x
     101                  Critic/ECP                 101                        EF
     11x                  Internet/network control   11x                        reserved


To distinguish ordinary non-DS-compliant routers from routers that can handle the IP precedence
field correctly, we use the term "legacy router". Legacy routers are also non-DS-compliant routers,
but with limited compatibility: they use the IP precedence field to select per-hop forwarding
treatments in a way similar to that proposed for the DSCP field. In this way a simplified DiffServ
architecture can be deployed quickly by configuring the legacy routers appropriately.
A further similarity is that forwarding behaviours are defined for both methods [RFC791, RFC1812]
[RFC2474]. The key distinction between a DS-compliant router and a legacy router is that a legacy
router may or may not interpret bits 3-6 of the TOS byte [RFC1349]; in general it will not interpret
these bits as specified in [RFC2474].
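
The compatibility in Table 11 is a bit-level convention: the three IP precedence bits occupy the top
of the former TOS octet, so a legacy router that only understands precedence effectively reads the
upper three bits of the DSCP. The helpers below are a minimal sketch of that relationship (the class
and method names are illustrative assumptions, not part of any standard API):

    // Relationship between IP precedence and the Class Selector Codepoints of Table 11.
    // Helper names are illustrative; the bit layout follows RFC 2474.
    public final class ClassSelector {
        /** Class Selector Codepoint 'ppp000' for a given IP precedence value 0..7. */
        public static int fromPrecedence(int precedence) {
            return (precedence & 0x7) << 3;
        }

        /** The precedence a legacy router would see for an arbitrary DSCP (its top 3 bits). */
        public static int precedenceSeenByLegacyRouter(int dscp) {
            return (dscp & 0x3F) >> 3;
        }

        /** True if the DSCP is one of the eight Class Selector Codepoints (bits 3-5 are zero). */
        public static boolean isClassSelector(int dscp) {
            return (dscp & 0x07) == 0;
        }
    }

The last helper captures the restriction discussed below: only codepoints of the form 'xxx000' are
interpreted consistently by both legacy and DS-compliant nodes.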
Differentiated services depend on the resource allocation mechanisms provided by per-hop
behaviour implementations in nodes. The quality or statistical assurance level of a service may
break down in the event that traffic transits a non-DS-compliant node, or a non-DS-capable
domain.
The problem is less serious on a high-speed, lightly loaded link. In this case, use of a legacy node
may be an acceptable alternative, assuming that the DS domain restricts itself to using only the
Class Selector Codepoints defined in [RFC2474], and assuming that the particular precedence
implementation in the legacy node provides forwarding behaviours which are compatible with the
services offered along paths which traverse that node. It is important to restrict the codepoints in
use to the Class Selector Codepoints, since the legacy node may or may not interpret bits 3-5 in
accordance with [RFC1349], thereby resulting in unpredictable forwarding results.
Consider a second case, where services must traverse a non-DS-capable domain. If the
non-DS-capable domain does not deploy traffic conditioning functions on its boundary nodes, then
even if the domain consists of legacy or DS-compliant interior nodes, the lack of traffic enforcement
at the boundaries will limit the ability to deliver some types of services consistently across the
domain. A DS domain and a non-DS-capable domain may negotiate an agreement which governs
how egress traffic from the DS domain should be marked before entering the non-DS-capable
domain. This agreement might be monitored for compliance by traffic sampling instead of by
rigorous traffic conditioning. Alternatively, where there is knowledge that
the non-DS-capable domain consists of legacy nodes, the upstream DS domain may
opportunistically re-mark differentiated services traffic to one or more of the Class Selector
Codepoints. Where there is no knowledge of the traffic management capabilities of the downstream
domain, and no agreement in place, a DS domain egress node may choose to re-mark DS
codepoints to zero, under the assumption that the non-DS-capable domain will treat the traffic
uniformly with best-effort service [RFC2475].
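
The re-marking alternatives above reduce to a small decision rule at the DS egress node: if the
downstream non-DS-capable domain is known to consist of legacy nodes, re-mark to the nearest
Class Selector Codepoint; with no such knowledge and no agreement in place, re-mark to zero. A
minimal sketch of that rule (the names and structure are illustrative assumptions):

    // Sketch of the egress re-marking rule towards a non-DS-capable peer domain,
    // following the alternatives discussed above (names are illustrative only).
    public final class EgressRemarker {
        public enum Downstream { KNOWN_LEGACY, UNKNOWN }

        /** Returns the DSCP to write before the packet leaves the DS domain. */
        public static int remark(int dscp, Downstream downstream) {
            switch (downstream) {
                case KNOWN_LEGACY:
                    return dscp & 0x38;   // re-mark to the corresponding Class Selector Codepoint
                case UNKNOWN:
                default:
                    return 0;             // assume best-effort treatment downstream
            }
        }
    }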
In the event that a non-DS-capable domain peers with a DS domain, traffic flowing from the non-
DS-capable domain should be conditioned at the DS ingress node of the DS domain according to
the appropriate SLA or policy [RFC2475].





5          Advanced tests

5.1        Application oriented tests

5.1.1      Introduction

Validating the performance of networks based on the DiffServ architecture makes it necessary to
test complex network scenarios using application oriented test traffic.
The following sections describe one approach to application oriented testing. The test method
consists of establishing a test environment comprising a DS-network which can be loaded with test
traffic. The key feature is to use synthetic traffic generators that can produce test traffic
corresponding to different applications.
Section 5.1.2 describes the test environment, section 5.1.3 presents the method for test traffic
generation, and section 5.1.4 gives some information related to the test scenarios.

5.1.2      Overview of the Test Environment

A basic scenario for application oriented testing is shown in Figure 6. Attached to a DS network is
a set of N traffic generators (Tgi, i = 1, …, N), each capable of generating synthetic traffic
corresponding to different applications.
Each traffic generator is attached to an ingress router and generates an aggregate of traffic
consisting of a mix of different applications. The test traffic scenario is specified by a traffic matrix
giving the amount of traffic between source and destination for each traffic type. The traffic
generators can also act as traffic sinks and may perform other operations such as gathering traffic
measurements.
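
The traffic matrix mentioned above can be represented as one N x N matrix of offered load per
traffic type, giving the amount of traffic from each source generator to each destination generator.
A minimal sketch of such a structure (the type names and units are assumptions made for
illustration):

    import java.util.EnumMap;
    import java.util.Map;

    // Minimal representation of the test traffic matrix: one N x N matrix of offered
    // load (e.g. in Mbit/s) per traffic type. Types and units are illustrative.
    public final class TrafficMatrix {
        public enum TrafficType { WEB, FTP, VOIP, MPEG, CBR }

        private final Map<TrafficType, double[][]> load = new EnumMap<>(TrafficType.class);

        public TrafficMatrix(int generators) {
            for (TrafficType t : TrafficType.values()) {
                load.put(t, new double[generators][generators]);
            }
        }

        public void setLoad(TrafficType type, int src, int dst, double mbitPerSec) {
            load.get(type)[src][dst] = mbitPerSec;
        }

        public double getLoad(TrafficType type, int src, int dst) {
            return load.get(type)[src][dst];
        }
    }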
A multitude of parameters can be varied in this test environment. Some of the aspects that will be
considered are:
        -   Network performance for different load levels
        -   Mapping of applications to service classes (determined by the classification rule applied
            in the access routers)
        -   Interaction between elastic (TCP) and non-elastic (UDP) traffic
The main objective is to investigate the network performance in relation to the QoS requirements
for the different service classes and traffic types. Although multi-domain issues are not considered
here, it is foreseen that the test environment can evolve into a platform suitable for multi-domain
investigations.



                  [Figure: traffic generators Tg1–Tg10 attached around the DS-network]
                            Figure 6. Application oriented test environment

5.1.3 Test Traffic Generation

As described in [Hee0200], different source modelling approaches can be considered to describe a
typical Internet source:
        1.        Replay of a recorded stream of IP packets – a stream of IP packets obtained from a
                  measurement is replayed and offered to the test network
        2.        Generation of IP packets according to a class of stochastic processes ("black box")
        3.        Generation of IP packets from physically based source models ("white box")
Based on previous knowledge of source modelling, a new traffic generator has been developed for
use in testing new services and network mechanisms. This generator is denoted GenSyn
(GENeration of SYNthetic Internet Traffic) and is based on the "white box" approach because of
its flexibility, scalability and physically explainable parameters. The details of this test traffic can
be found in [Hee0200]; the following is a brief description based on that reference.
The idea behind the new generator is to apply modern Web and Java technology and to exploit the
Internet protocol suite (TCP/IP) that is already available. The generator is then only a software
process running on a PC, simulating the user behaviour. The process has one or more dynamic links
(threads) that open and close physical HTTP and TCP connections. The generator is not a
simulator: it generates IP packets that flow through a real (test) network. However, the stochastic
process that imitates the user behaviour operates like a simulator, except that its output is the
sending of real IP packets. See Figure 7 below for an illustration of the general principle
[Hee0200].


        [Figure: a user behaviour model driving the Internet protocol stack on a PC towards a server]
                             Figure 7. Traffic model of an Internet source
The sources are modelled by a finite state machine (FSM) that determines the source behaviour. If
all activity levels in a source are included, not every source can be described by a finite set of
states; consider, for example, a variable bitrate video source. [Helv1095] describes how the
continuous source behaviour can be approximated by discretisation into a few states. In GenSyn the
lower activity levels are not included and, hence, a finite set of states will normally be sufficient to
describe the essential behaviour.
The user behaviour is described by a finite-state, continuous-time Markov process (a minimal
sketch of such a state machine is given after the following list). A general state has the following
attributes:
-   a state identifier, i
-   a state sojourn time distribution (which must be negative exponential when the state model
    represents a composition of users rather than a single user),
-   a list of transition rates and probabilities,
-   a list of states that can be reached in one step from state i.
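
A minimal sketch of such a single-user state machine is given below: it draws an exponential
sojourn time for the current state and then selects the next state according to the transition
probabilities. The class is an illustrative assumption and not GenSyn code:

    import java.util.Random;

    // Minimal single-user state machine sketch: exponential sojourn time per state,
    // probabilistic choice of the next state. Illustrative only, not GenSyn itself.
    public final class UserStateMachine {
        private final double[] sojournRate;      // rate of the exponential sojourn time per state
        private final double[][] transitionProb; // transitionProb[i][j]: probability of i -> j
        private final Random rng = new Random();
        private int state;

        public UserStateMachine(double[] sojournRate, double[][] transitionProb, int initialState) {
            this.sojournRate = sojournRate;
            this.transitionProb = transitionProb;
            this.state = initialState;
        }

        /** Advances the user by one step and returns the sojourn time spent in the old state. */
        public double step() {
            double sojourn = -Math.log(1.0 - rng.nextDouble()) / sojournRate[state];
            double u = rng.nextDouble(), cumulative = 0.0;
            for (int next = 0; next < transitionProb[state].length; next++) {
                cumulative += transitionProb[state][next];
                if (u <= cumulative || next == transitionProb[state].length - 1) {
                    state = next;   // a comm state would open an HTTP/TCP connection at this point
                    break;
                }
            }
            return sojourn;
        }

        public int currentState() { return state; }
    }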
This FSM is the stochastic part of the source behaviour, comprising the stochastic (behaviour)
states. As part of the complete source model it is necessary to open physical (HTTP/TCP)
connections in the Internet to generate the actual packets. This is defined in the model as specific
comm states. A comm state has links (transitions) to the Markovian source model, but is not part of
the modelling of the stochastic source behaviour. The separation of the source model into a
stochastic part and a comm part is illustrated in Figure 8.



        [Figure: a stochastic state model (states i, j, ... with transition rates and probabilities)
        linked to comm states that open HTTP/TCP connections; conditional transitions such as
        "download interrupted" or "download completed" return control to the stochastic model]
            Figure 8. The source model is divided into a stochastic and a comm part
The stochastic model described above represents a single user. A large number of users with the
same behaviour can be modelled by a simple composition on the same model: every state in the
state model then contains a number of users rather than a single user. This contrasts with a
process-oriented approach, where every single user must invoke a separate, resource-demanding
process even if all users are described by the same behaviour. The number of users in each state
changes dynamically as the user behaviour changes. The users have no identity: with, say, 1000
users we are only interested in the number of users in the different stochastic states describing this
type of user, not in the identity of each single user. Only when a comm state is generated does a
temporary identity need to be created that links the user and the process thread. This is the reason
for separating the state model description into the stochastic (composite) states and the comm
states. The composition of many users onto a single model requires that all processes involved are
semi-Markovian; a sketch of the per-state bookkeeping is given below.
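
With this composition the generator only needs to book-keep how many users currently occupy
each state; since the sojourn times are exponential, the aggregate departure rate of a state is simply
the per-user rate multiplied by that count. The sketch below illustrates this per-state bookkeeping
(an illustrative assumption, not GenSyn code):

    import java.util.Random;

    // Composition of many identical users on one state model: only counts per state
    // are kept, no per-user process. Illustrative sketch, not GenSyn itself.
    public final class CompositeUserModel {
        private final double[] exitRate;         // per-user exponential exit rate of each state
        private final double[][] transitionProb; // transitionProb[i][j]: probability of i -> j
        private final int[] usersInState;        // current number of users in each state
        private final Random rng = new Random();

        public CompositeUserModel(double[] exitRate, double[][] transitionProb, int[] initialCounts) {
            this.exitRate = exitRate;
            this.transitionProb = transitionProb;
            this.usersInState = initialCounts.clone();
        }

        /** Executes one event: picks the state whose aggregate rate fires and moves one user. */
        public double nextEvent() {
            // Aggregate rate of state i is usersInState[i] * exitRate[i] (memoryless users).
            double total = 0.0;
            for (int i = 0; i < exitRate.length; i++) total += usersInState[i] * exitRate[i];
            if (total == 0.0) return Double.POSITIVE_INFINITY;        // no active users
            double dt = -Math.log(1.0 - rng.nextDouble()) / total;    // time to the next event

            double u = rng.nextDouble() * total, cumulative = 0.0;
            for (int i = 0; i < exitRate.length; i++) {
                cumulative += usersInState[i] * exitRate[i];
                if (u <= cumulative || i == exitRate.length - 1) {
                    moveOneUser(i);
                    break;
                }
            }
            return dt;
        }

        private void moveOneUser(int from) {
            double u = rng.nextDouble(), cumulative = 0.0;
            for (int to = 0; to < transitionProb[from].length; to++) {
                cumulative += transitionProb[from][to];
                if (u <= cumulative || to == transitionProb[from].length - 1) {
                    usersInState[from]--;
                    usersInState[to]++;   // entering a comm state would also spawn a thread
                    return;
                }
            }
        }
    }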
The following GenSyn application models are currently available [Heeg0700]:
1.  TCP traffic
-   Web – a model of users (clients) that download web pages with all their content
    (including applets and images) from real web servers all over the world. The URL addresses
    are taken from a parameter list of predefined addresses that may be updated dynamically as
    the experiment evolves.
-   Ftp – a model of users (clients) that download real files from a server. The files are specified
    in a parameter list of files (trace from GenSyn below).
2.  UDP traffic
-   VoIP – a model of the information/media stream from VoIP users. It sends a deterministic
    stream of packets (fixed size and fixed inter-packet arrival time) from each of the active
    users. The model does not cover call set-up and disconnection (trace from GenSyn below).
-   Mpeg – a model of a video server sending MPEG-1 coded video sequences. Each video
    frame is converted to a number of fixed-size IP packets sent back to back. The inter-frame
    distance is given by the MPEG-1 coding standard. The clients are modelled only as incoming
    requests.
-   Cbr – a model of a multiplex of deterministic streams of packets with phase shifts (a sketch
    of such a deterministic stream is given after this list).
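
The VoIP and Cbr models reduce to a deterministic stream: packets of fixed size sent at a fixed
inter-packet interval, possibly phase-shifted per stream. A minimal UDP sender under those
assumptions is sketched below (the address, port and timing values are placeholders, and this is not
GenSyn code):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Deterministic packet stream as used by the VoIP/Cbr models: fixed packet size,
    // fixed inter-packet time, optional phase shift. Placeholder values, not GenSyn code.
    public final class CbrStream {
        public static void main(String[] args) throws Exception {
            InetAddress dst = InetAddress.getByName("192.0.2.10"); // placeholder destination
            int port = 5001;                                       // placeholder port
            int packetBytes = 200;                                 // e.g. a voice-frame-sized packet
            long interPacketMillis = 20;                           // fixed inter-packet arrival time
            long phaseShiftMillis = 5;                             // phase shift of this stream

            byte[] payload = new byte[packetBytes];
            try (DatagramSocket socket = new DatagramSocket()) {
                Thread.sleep(phaseShiftMillis);
                while (true) {
                    socket.send(new DatagramPacket(payload, payload.length, dst, port));
                    Thread.sleep(interPacketMillis);
                }
            }
        }
    }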





5.1.4 Test Scenarios

This section gives some information related to test scenarios using the approach described in the
previous sections. Since the test scenarios (parameter values etc) may have to be modified as
experience is gained with the test network, many details related to the scenario definitions have
been left out. These will be included when presenting the test results.
The topology of the test network is shown in Figure 9. Altogether 10 GenSyn traffic generators are
attached to five ingress routers using 100 Mbit/s Fast Ethernet interfaces. Classification and
marking of the test traffic takes place in the ingress routers. The ingress routers are connected to a
backbone ring consisting of five transit routers. The links between the transit routers are 155 Mbit/s
POS (Packet over SDH).

      [Figure: ten GenSyn traffic generators Tg1–Tg10 attached to the ingress routers of the test network]
                            Figure 9. Network topology with traffic generators
Appropriate QoS mechanisms will be enabled in the ingress and transit routers. The parameters
used in the configuration of these mechanisms should be considered carefully.
Initially up to 4 service classes will be considered. Mapping of the applications onto service classes
is part of the load profile definition of each test case. Three different QoS provisioning scenarios
will be tested:
              1.      Best Effort – no classification and marking
              2.      Classification and marking without differentiation (i.e. mark all traffic as BE)
              3.      Classification and marking with differentiation in 4 service classes
The idea behind case 2 is to measure any influence of processing overhead related to classification
and marking.
Three different TCP/UDP mixtures will be used (cases 1 and 2 are based on estimates in [Hee0700]):
1.            85% TCP and 15% UDP – this is (almost) the situation today (at least it was "yesterday")
2.            40% TCP and 60% UDP – this is the mixture expected in the near future.
3.            10% TCP and 90% UDP – this is the mixture likely to be observed when (interactive)
              video streaming becomes widespread in the Internet.



For traffic measurement purposes, tcpdump will be run on each traffic generator. Router statistics
from the link interface cards will also be collected. Additionally a SmartBits generator/analyser
may be used for parallel end-to-end measurements.





6       Conclusion
Service and network providers are continuously looking for adequate solutions to provide some
form of IP QoS. Differentiated services are intended to enable scalable service
discrimination in the Internet without the need for per-flow state and signalling at every hop. A
variety of services may be built from a small, well-defined set of building blocks which are
deployed in network nodes. The services may be either end-to-end or intra-domain; they include
both those that can satisfy quantitative performance requirements (e.g., peak bandwidth) and those
based on relative performance (e.g., "class" differentiation).
The recommendations of the IETF (RFC 2474, 2475, 2597, 2598, 2697, 2698) outline the main
concept of DiffServ networks and propose two main PHB (per-hop behaviour) groups for
transferring IP traffic over a DiffServ domain. Providers, however, need PICSs (Protocol
Implementation Conformance Statements) to know whether a specific implementation conforms to
the standards or not. The first part of this document gives test suites for conformance testing (based
on the PICSs developed by EURESCOM project P1006) as well as performance tests, and focuses
on a DiffServ domain with the simplest configuration. The conformance tests cover all tests related
to the major functions of the two PHB groups already defined by the IETF (Assured Forwarding
and Expedited Forwarding):
           -   Classification
               -   Multiple Field
               -   Behaviour Aggregate
           -   Conditioning
               -   Metering
               -   Marking
               -   Shaping
               -   Dropping
The performance tests take into account that the final usability indicator for a network device is the
throughput it can offer to the network. The strategy followed is to provide performance
measurements for each component of the DiffServ chain on its own and then to concatenate the
different components into configurations close to reality. The indicators used to quantify the load
on the router are:
           -   Main CPU load
           -   Line card CPU load
The performance tests comprise both UDP-based and TCP-based tests.
Validating the performance of networks based on the DiffServ architecture makes it necessary to
test complex network scenarios using application oriented test traffic. The second part of the
document describes one approach to application oriented testing. The test method consists of
establishing a test environment comprising a DS-network which can be loaded with test traffic. The
key feature is to use synthetic traffic generators that can produce test traffic corresponding to
different applications.
All tests refer to the DiffServ network as a single DS domain, but considerations on multi-domain
DS networks and on backward compatibility (interoperability with non-DS-capable devices) are
outlined as well.





7      References
[DiffFrame]    Y.Bernet, J.Binder, S.Blake, M.Carlson, B.E.Carpenter, S.Keshav, B.Ohlman,
               D.Verma, Z.Wang, W.Weiss: „A Framework for Differentiated Services” <draft-
               ietf-diffserv-framework-02.txt> Internet draft 1999.
[MODEL]        "A Conceptual Model for Diffserv Routers", draft-ietf-diffserv-model-01.txt,
               Bernet et. al.
[RFC1812]      Baker, F., Editor, "Requirements for IP Version 4 Routers", RFC 1812, June 1995.
[RFC2474]      Nichols, K., Blake, S., Baker, F. and D. Black, "Definition of the Differentiated
               Services Field (DS Field) in the IPv4 and IPv6 Headers", RFC 2474, December
               1998.
[RFC2475]      Black, D., Blake, S., Carlson, M., Davies, E., Wang, Z. and W. Weiss, "An
               Architecture for Differentiated Services", RFC 2475, December 1998.
[RFC2597]      Heinanen, J., Baker, F., Weiss, W. and J. Wroclawski, "Assured Forwarding PHB
               Group", RFC 2597, June 1999.
[RFC2598]      Jacobson, V., Nichols, K. and K. Poduri, "An Expedited Forwarding PHB", RFC
               2598, June 1999.
[RFC2697]      Heinanen, J. and R. Guerin, "A Single Rate Three Color Marker", RFC 2697,
               September 1999.
[RFC2698]      Heinanen, J. and R. Guerin, "A Two Rate Three Color Marker", RFC 2698,
               September 1999.
[Hee0200]      P. E. Heegaard, M. Lu, "GenSyn – Java based generator of synthetic Internet traffic
               ", SINTEF Report, February 2000
[Helv1095]     B. E. Helvik, "Synthetic Load generation for ATM Traffic Measurements",
               Telektronikk, vol. 91, no 2/3, pp. 174 - 194, October 1995
[Hee0700]      P. E. Heegaard, „GenSyn: run test scenarios” Telenor Internal Memo. July 2000.
[Heeg0700]     P. E. Heegaard, „GenSyn: details of run test scenarios” Telenor Internal Memo.
               July 2000.





				