Information Society Technologies (IST)
Sixth Framework Programme
Project no: 507613
Project acronym: Euro-NGI
Project title: Design and Engineering of the Next Generation Internet, towards convergent multi-service networks
Instrument: Network of Excellence (NoE)
Thematic Priority: Information Society Technologies (IST)
Deliverable reference number:
D.WP.IA.1.1.1, D.WP.IA.1.1.2, D.WP.IA.1.1.4
Deliverable title: "Global Cartography and Mapping of Euro-NGI research activities on the design and engineering of the NGI"
Due date of deliverable: 2006/05/30
Actual submission date: 2006/07/27
Start date of project: 1st December 2003 Duration: 3 years
Organisation name of lead contractor for this deliverable: GET (2).
Editor’s name for this deliverable: Annie Gravey
Project co-funded by European Commission within the Sixth Framework Programme (2002-2006)
PU Public X
PP Restricted to other programme participants (including the Commission Services)
RE Restricted to a group specified by the consortium (including the Commission Services)
CO Confidential, only for members of the consortium (including the Commission Services)
Editor’s name: Annie Gravey
Editor’s e-mail address: firstname.lastname@example.org
With contributions of the following partners:
Partner Partner Name Contributor Name Chapter Contributor e-mail address
2 GET-INT Monique Becker JRA.1.3 Monique.Becker@int-evry.fr
9 HUT Jorma Virtamo JRA.2.3 email@example.com
11 FT James Roberts James.firstname.lastname@example.org
11 FT Alexandre Proutiere JRA.2.4
13 ALCATEL Laurent Ciavaglia JRA.1.3 Laurent.email@example.com
14 Riadh Dhaou JRA.1.3 Riadh.firstname.lastname@example.org
14 Riadh Dhaou JRA.1.3 email@example.com
15 UST/IKR Zigmund Orlov JRA.1.3 firstname.lastname@example.org
17 UNIWUE Kurt Tutschku JRA.1.4 email@example.com
21 U-BAM Udo Krieger JRA.5.1 firstname.lastname@example.org
22 RC-AUEB Manos Dramitinos JRA.6.1 email@example.com
22 RC-AUEB Sergios Sourso JRA.1.4 firstname.lastname@example.org
27 Marco Mellia JRA.4.3 email@example.com
27 Dario Rossi JRA.4.3 firstname.lastname@example.org
29 TLC-PISA Stefano Giordano JRA.4.4 email@example.com
29 TLC-PISA Fabio Mustacchio JRA.4.4 firstname.lastname@example.org
30 CORITEL Paola Iovanna JRA.2.3 email@example.com
30 CORITEL Roberto Sabella JRA.2.3 firstname.lastname@example.org
31 URM2 Gianpaolo Oriolo JRA.2.3 email@example.com
32 NTNU Peder J. Emstad JRA.2.2 peder@Q2S.ntnu.no
34 AGH Piotr Chołda JRA.3.3 firstname.lastname@example.org
34 AGH Zbigniew Hulicki JRA.3.3 email@example.com
34 AGH Andrzej Jajszczyk JRA.3.3 firstname.lastname@example.org
35 IITIS-PAN Tadeusz Czachórski JRA.5.5 email@example.com
36 WUT Michał Pióro JRA.3.1 firstname.lastname@example.org
36 WUT Michał Zagożdżon JRA.3.1 email@example.com
37 Inesc-ID JRA.2.2
38 IT Rui Valadas JRA.4.3 firstname.lastname@example.org
42 UPC-DAC Sebastià Sallent JRA.3.2 email@example.com
44 UC Klaus Hackbarth JRA.3.4 firstname.lastname@example.org
44 UC Alberto E. García JRA.3.4 email@example.com
44 UC Laura Rodriguez JRA.6.2 firstname.lastname@example.org
45 GIRBA Vicente Casares email@example.com
47 KTH Mikael Johansson JRA.2.1 firstname.lastname@example.org
49 BTH Markus Fiedler email@example.com
49 BTH Adrian Popescu firstname.lastname@example.org
53 UniBrad D.D.Kouvatsos@scm.brad.ac.uk
55 UNIS Zhili Sun JRA.1.3 Z.Sun@surrey.ac.uk
56 UP Amine Houyou JRA.1.5 email@example.com
56 UP Hermann de Meer firstname.lastname@example.org
58 HWU Stan Zachary JRA.5.4 email@example.com
Project URL: http://www.eurongi.org
TABLE OF CONTENTS
INTRODUCTION
NETWORK ARCHITECTURE EVOLUTION
WP.JRA.1.3 - IP NETWORKING EVOLUTION
WP.JRA.1.4 - NEW SERVICES AND THEIR MANAGEMENT
WP.JRA.1.5 - MOBILITY MANAGEMENT IN THE ALWAYS BEST CONNECTED SCENARIOS
TRAFFIC ENGINEERING
WP.JRA.2.1 - CONTROLLED BANDWIDTH SHARING
WP.JRA.2.2 - TRAFFIC MANAGEMENT IN A MULTI-PROVIDER CONTEXT
WP.JRA.2.3 - INTER AND INTRA DOMAIN TRAFFIC ENGINEERING FOR COST EFFECTIVE NETWORKS
WP.JRA.2.4 - TRENDS AND CHALLENGES IN WIRELESS NETWORKING
NETWORK OPTIMISATION
WP.JRA.3.1 - OPTIMISATION OF MULTI-LAYER CORE NETWORKS
WP.JRA.3.2 - OPTICAL ACCESS NETWORKS
WP.JRA.3.3 - NETWORK RESILIENCE EVOLUTION
WP.JRA.3.4 - NETWORK DESIGN TOOL FOR NEXT GENERATION INTERNET
EXPERIMENTATIONS AND VALIDATION THROUGH PLATFORMS
WP.JRA.4.3 - MEASUREMENT PLATFORMS
WP.JRA.4.4 - ULTRAGIGABIT/S TRIALS FOR THE INVESTIGATION OF STRUCTURED MODELS
MODELLING AND MEASUREMENTS
WP.JRA.5.1 - IP TRAFFIC CHARACTERIZATION, TRAFFIC ESTIMATION AND INTERNET DATA MINING
WP.JRA.5.4 - NETWORK OPTIMISATION AND CONTROL
WP.JRA.5.5 - NUMERICAL, SIMULATION, AND ANALYTIC METHODOLOGIES
SOCIO-ECONOMIC ASPECTS OF NGI
WP.JRA.6.1 - QUALITY OF SERVICE FROM THE USER'S PERSPECTIVE
WP.JRA.6.2 - COST- AND PRICING MODELS FOR NEXT GENERATION INTERNET
WP.JRA.6.3 - SECURITY ISSUES
The Next Generation Internet (NGI) will treat multiservice multimedia, mobility, service convergence, fixed-mobile convergence, Quality of Service (QoS), variable connectivity and other such capabilities as the norm. Technology diversity is exploding, and mastering this heterogeneous environment is becoming essential for network designers. The target of the new architecture is "any service, any time, everywhere".
This new environment renders obsolete the design and engineering methods and tools currently available. It forces the scientific community to develop new design, control, planning, dimensioning, operation and management principles and tools, which in turn requires investigating new multi-technology architectures that provide seamless end-to-end connectivity by hiding the diversity of technologies and environments from service developers and users.
In addition, future high-speed wire-line and wireless access technologies will provide instant high-bandwidth connectivity, which makes it difficult to forecast traffic and thus to apply existing traffic engineering techniques.
To address this new scientific and technological environment, Euro-NGI integrates the activities of the scientific community towards two main goals:
Mastering technology diversity (vertical and horizontal integration) for the design of efficient and flexible NGI architectures.
Providing innovative traffic engineering architectures adapted to the new requirements, and developing the corresponding appropriate quantitative methods for analysis and simulation.
Experts in the Euro-NGI network bring the required skills in the various technologies to be integrated. They also bring worldwide recognized expertise on the various topics that compose the traffic engineering and optimal dimensioning/design domain.
The main objective of the Euro-NGI network is to create and maintain the most prominent European
centre of excellence in Next Generation Internet design and engineering, leading towards a European
leadership in this domain.
One of the goals of the Euro-NGI NoE is to integrate and rationalize the European research efforts. There is a very substantial research effort in the networking domain in Europe; nevertheless, no institution in the world has a global view of all the ongoing research activities. Different partners work on the same domain without collaborating and, since there is no common global view, there is a risk of important topics not being covered.
One tool to integrate and rationalize European research in Networking is a Knowledge Map. The
Knowledge Map is an electronic document that is dynamically updated by the members of the NoE. It
reports results obtained by the various research teams, and explicitly identifies links between the
various teams. The Knowledge Map serves for coordinating, integrating, and rationalizing the
European research efforts to strengthen scientific and technological excellence. It also serves to
make the European research efforts more visible and readable.
Euro-NGI research activities are organized around the required research domains in order to
coordinate, rationalize and integrate the research efforts. The result is the definition of several
Architecture Domains and Research Domains, as presented in Figure 1.
Figure 1: Research Domains (the six Research Domains listed below, spanning the four Architectural Domains: Fixed Access, Mobile Access, IP Networking and Services Overlays)
The Architectural Domains are defined to take into account the different parts of the network: the IP networking layer, covering and hiding the transport technology diversity, and the overlays for the service infrastructure (CDNs, peer-to-peer, etc.).
The Research Domains are defined to face the various problems arising when integrating the various
Architectural Domains to design flexible network architectures. These Research Domains are:
Network Architecture Evolution, Technology Integration, Control and Managing the Diversity
Traffic Engineering, Traffic Management, Congestion Control and End to End QoS
Optimisation of Protected Multi-Layer Next Generation Internet Networks: Topology, Layout
Flow and Capacity Design
Experimentation and Validation Through Platforms
Modelling, Quantitative Methods and Measurements
Socio-Economic Aspects of the Next Generation Internet
Within each Research Domain, several Work Packages are defined, each representing an international research team. The present document has been produced by these research teams in order to help identify insufficiently covered key NGI issues for which a major research effort (through a synergy of the different research teams) is required at a European level, so that Euro-NGI can address them.
It is structured along the Euro-NGI research domains, where each WP has answered, for its specific domain, the following questions:
- What is the scope of the WP domain?
- What are the main mid-term and long-term evolutions of this area?
- What are the major open problems?
- What is the work in progress within the WP?
- What are the future priorities for research in this area?
This document thus presents a global cartography of Euro-NGI research activities on the design and
engineering of the NGI and identifies promising further research topics.
4. Network Architecture Evolution
4.1 WP.JRA.1.3 - IP Networking Evolution
Zhili Sun, Monique Becker, Demetres Kouvasos, Hermann de Meer, Riadh Dhaou, Vicente Casares,
Zigmund Orlov and Laurent Ciavaglia
Keywords: IP networking, IPv6, QoS, security, core network, access network, mobile access, service overlay.
4.1.3. Scope of the domain
The scope of WP.JRA.1.3 "IP networking evolution" covers network protocols, architectures and technologies, and the related developments in standardisation and evolution activities. The development of the Internet protocols (IP) and of Internet applications has a profound impact on networking technologies and on new services and applications, which have gradually converged on the Internet protocols, architecture and technologies. Until recently, the principles of IP networking changed little and evolved only slowly. With the explosive expansion of the Internet from the 1990s up to today, the pace of IP networking evolution has accelerated significantly and reached a critical point beyond the original design of the Internet. This is leading to new developments, with the Internet evolving towards new protocols, new services and applications, and new networking architectures for the future. This workpackage addresses all these issues; its main objectives are:
To achieve a good understanding of the evolution processes by which many past network protocols and architectures evolved into the current IP network protocols and architectures;
To study the current IP network evolution, particularly the transition from IPv4 to IPv6 and new mechanisms for supporting Quality of Service (QoS);
To study the internetworking of legacy networks and the Next Generation Internet (NGI);
To study the future IP networking evolution and the convergence of computer, TV and telephony, taking into account new networking technologies including 3/4G mobile networks, wireless LAN, broadband wireless access networks (WiMAX), optical core networks, satellite networks (DVB-S, DVB-T, DVB-H), personal area networks (PAN) and sensor networks;
To study the impact of new services and applications (including the triple play of TV, telephony and computing) on IP networking evolution, particularly those generated by 3/4G user terminals, peer-to-peer, content delivery, VoIP and related services and applications;
To study possible new IP networking protocols and architectures towards which the future IP network may evolve.
These cover the main topics of the current research defined in the WP within the framework of the
EuroNGI project. They also cover the future topics concerning new networking technologies, new
services and applications, future network protocols and future network architectures.
4.1.4. Mid-term/long-term evolution in this area
The mid term will see the evolution from IPv4 to IPv6. IPv6 will enable the Internet to overcome several problems of IPv4, including:
Its small address space;
The need for Network Address Translators (NAT) to map multiple private addresses to a single public IP address;
The need to maintain large routing tables in Internet backbone routers;
Complicated network configuration;
The lack of security at the IP level;
The lack of Quality of Service (QoS) support.
This new version of the Internet Protocol (IPv6), previously called IP-The Next Generation (IPng), incorporates the concepts of many proposed methods for updating the IPv4 protocol. The design of IPv6 is intentionally targeted for minimal impact on upper- and lower-layer protocols by avoiding the random addition of new features. The features of the IPv6 protocol are the following:
New header format
Large address space
Efficient and hierarchical addressing and routing infrastructure
Stateless and stateful address configuration
Better support for QoS
New protocol for neighboring node interaction
IPv6 has 128-bit (16-byte) source and destination IP addresses. Although 128 bits can express over 3.4×10^38 possible combinations, the large address space of IPv6 has been designed to allow for multiple levels of subnetting and address allocation, from the Internet backbone to the individual subnets within an organization.
IPv6 global addresses used on the IPv6 portion of the Internet are designed to create an efficient,
hierarchical, and summarisable routing infrastructure that is based on the common occurrence of
multiple levels of Internet service providers.
To simplify host configuration, IPv6 supports both stateful address configuration, such as address
configuration in the presence of a DHCP server, and stateless address configuration (address
configuration in the absence of a DHCP server). With stateless address configuration, hosts on a link
automatically configure themselves with IPv6 addresses for the link (called link-local addresses) and
with addresses derived from prefixes advertised by local routers. Even in the absence of a router,
hosts on the same link can automatically configure themselves with link-local addresses and
communicate without manual configuration.
Support for IPSec is an IPv6 protocol suite requirement. This requirement provides a standards-based solution for network security needs and promotes interoperability between different IPv6 implementations.
a Flow Label field in the IPv6 header allows routers to identify and provide special handling for
packets belonging to a flow, a series of packets between a source and destination. Because the traffic
is identified in the IPv6 header, support for QoS can be achieved even when the packet payload is
encrypted through IPSec.
The Neighbor Discovery protocol for IPv6 is a series of Internet Control Message Protocol for IPv6 (ICMPv6) messages that manage the interaction of neighboring nodes (nodes on the same link). Neighbor Discovery replaces the broadcast-based Address Resolution Protocol (ARP), ICMPv4 Router Discovery, and ICMPv4 Redirect messages with efficient multicast and unicast Neighbor Discovery messages.
IPv6 can easily be extended for new features by adding extension headers after the IPv6 header.
Unlike options in the IPv4 header, which can only support 40 bytes of options, the size of IPv6
extension headers is only constrained by the size of the IPv6 packet.
4.1.6. Major Open Problems
The major open issues in IP networking evolution remain:
What is the IP networking beyond IPv6?
What is the future network architecture?
The notion of network architecture was introduced during the Internet research phase by the
research community that had developed the ARPAnet protocols. This community brought to bear on
the computer communication problem the kind of abstract thinking about resources and
relationships that came naturally to computer scientists who had designed hardware architectures
and/or computer software systems. This resulted in the development of a “design philosophy” to
accompany the design of the algorithms and protocols for the Internet. This philosophy was
elaborated over time to create the complete original architecture of the Internet protocol suite.
Network architecture can be considered as a set of high-level design principles that guide the
technical design of the network, especially the engineering of its protocols and algorithms.
The architecture can only provide a set of abstract principles against which we can check each
decision about the technical design. The role of the architecture is to ensure that the resulting
technical design will be consistent and coherent – the pieces will fit together smoothly – and that the
design will satisfy the requirements on network function associated with the architecture.
The development of architecture must be guided in part by an understanding of the requirements to
be met. It is therefore vital to articulate a set of goals and requirements. The technical requirements
for the Internet have changed considerably since 1975, and they will continue to change.
The relationship between requirements and architecture is not simple. While major requirements
arise from non-technical issues in the real world – e.g., business models, regulatory models, and
politics – other requirements are themselves the product of earlier technical decisions, i.e., depend
upon the architecture. As a result, a new-architecture design effort cannot be completely top-down.
There is not likely to be a unique answer for the list of requirements, and every requirement has
some cost. The cost of a particular requirement may become apparent only after exploration of the
architectural consequences of meeting that objective, in conjunction with the other objectives. It
therefore requires an iterative process, in which requirements can be re-examined and perhaps
promoted or demoted during the effort.
It is also crucial that with the transition of the Internet from research project to mainstream
infrastructure the range of applicability of the requirements must be much broader. This implies that
fewer and fewer of the requirements will be truly global - applying with the same importance
everywhere. Many of the requirements that the architecture must meet will apply with different
force, or not at all, in some situations and portions of the network. This makes the development of a single ordered list of requirements, as was done to motivate the original Internet research program, problematic.
Instead, a new Internet architecture must deal with a multi-ordered requirements set, with many
requirements taking on different importance at different times, and in different regions of the
network. It seems likely that such a “meta-requirement” will have a significant impact on the
technical architecture. We touch briefly on one possible strategy for addressing it below; in any case
we believe that meeting this need represents one of the most challenging aspects of designing a new
architecture. The commercialization of the Internet has led to many of the new requirements.
An architecture of tomorrow must take into account the needs and concerns of commercial
providers if it is to be accepted and thus to be able to influence overall direction. Examples of these
concerns include (1) a framework for policy controls on inter-provider routing, (2) recognition that
service providers need some ability to see parts of the header for purposes of traffic planning,
regulation of usage, etc., and (3) support for a variety of payment models for network usage. For
example, since today there is no way to assign a “value assertion” to traffic flows, there is no way to
determine “settlements” by observing traffic patterns. One can count packets, but this does not
indicate which end paid to have them sent. One of the motivations for some of the overlay delivery mechanisms recently built over the Internet, including the Akamai and RealAudio delivery infrastructures, is that they implement a specific payment model (sender pays), so
that a class of users who match that value equation can associate themselves with this service.
Internet requirements continue to change. Some important new requirements that may influence
the new architecture are as follows.
Mobility
The Internet architecture should support flexible, efficient, highly-dynamic mobility.
Auto-configuration
The Internet architecture should provide auto-configuration of end systems and routers, subject to policy and administrative constraints.
Highly time-variable resources
The Internet architecture should support resources that are highly variable over short time-scales.
This may for example be due to switched backbone links, or due to mobile devices that can switch
physical transmission medium as the node moves.
Allocation of Capacity
Architecture of tomorrow must give users and network administrators the ability to allocate capacity
among users and applications. In today’s Internet, allocation occurs implicitly as a result of
congestion control. The goal has generally been some approximation of “fairness”; all slow down
together, but this is not always the right model. For commercial activities, there is a desire to allocate
capacity based on willingness to pay. For operational government activities, e.g., disaster response,
there is a need to allocate capacity based on priority of task. It is not (always) the role of the network
to tell the user how fast to go. The administrator should be able to ask the network for resources,
and the network should be able to inform the user if it cannot meet the requests due to resource limitations.
Extremely long propagation delays
This requirement arises particularly in the proposed Interplanetary Internet, using the Internet
technology for NASA’s planetary exploration program. It is an extension of the more traditional “high
bandwidth-delay product” requirement; reflecting the fact that both delay itself and delay-
bandwidth interactions complicate the architecture of a network.
This discussion has dealt with technical requirements, but it is important to note that there are
significant non-technical drivers on Internet design. There are obvious commercial drivers, as
network providers learn how to make a profit from the Internet. Increasingly, there are also legal and
public policy drivers, including intellectual property law, encryption export law, police surveillance,
privacy and free speech, telecommunications laws, charging, and taxation. These are all subject to
national variation, since the Internet is worldwide.
It is important to be aware of these issues, but our job is to concentrate on the technical
requirements within this broader context.
4.1.7. Work in progress within the WP
To achieve these objectives, the workpackage currently carries out research covering the mid-term objectives of the EuroNGI project, addressing new topics that take into account future new network protocols, network architectures and network technologies.
This includes the current network evolution, focusing on the IETF actions that since 1992 have developed new protocols to extend the capabilities of IPv4 as a basis for the evolution towards the "Next Generation Internet" (IPng), known as IPv6. This work package specifically addresses the issues related to the network-layer functions of the Next Generation Internet (NGI) and the transition process from IPv4 to IPv6. This evolution provides significant extensions including:
Extended address space, from 32-bit to 128-bit addresses, allowing a large number of end systems to be addressed for ubiquitous computing and multicast services;
Generalized packet formats, through the concept of extension headers, allowing end-to-end and hop-by-hop packet size differentiation and jumbo-packets;
Flow label marking, enabling efficient switching in the core network based on flows, to support QoS for real-time services and applications;
Neighbour discovery functions for automatic and dynamic system configuration;
Extended security functions to support authentication and encryption;
Advanced network architecture to support efficient IP mobility and multicast services.
Therefore, the main tasks include the following topics:
(1) Study of the fundamental concepts and their major influences on the architecture and implementation of the NGI, in particular the co-existence of IPv4 and IPv6 during the transition period and techniques including address and header translation, tunnelling and interoperation between IPv4 and IPv6 domains;
(2) Quality of Service (QoS) related to traffic management and traffic engineering, particularly the Integrated Services (IntServ) and Differentiated Services (DiffServ) architectures;
(3) The flow label marking concept to support fast switching techniques based on the Multi-Protocol Label Switching (MPLS) architecture;
(4) Service level agreements (SLA) between ISPs and user groups, traffic aggregation and traffic policing to guarantee the QoS for various traffic classes (particularly edge-to-edge QoS);
(5) Study of automatic configuration to meet the economic requirements of the carriers, in the light of ubiquitous communication and ad hoc networking concepts;
(6) Setting up testbeds for experiments and measurements of network traffic and QoS during the IP networking evolution.
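The transition techniques of task (1) can be made concrete with a small example: automatic tunnelling schemes embed an IPv4 address inside an IPv6 prefix. The sketch below derives a site's 6to4 prefix (RFC 3056); the IPv4 address is the RFC 5737 documentation example, not a value from the WP:

```python
import ipaddress

def six_to_four_prefix(public_v4: str) -> ipaddress.IPv6Network:
    """Embed a site's public IPv4 address into the 2002::/16 6to4 prefix
    (RFC 3056), yielding the /48 the site can use while tunnelling its IPv6
    packets over the IPv4 Internet during the transition period."""
    v4 = int(ipaddress.IPv4Address(public_v4))
    network_addr = ipaddress.IPv6Address((0x2002 << 112) | (v4 << 80))
    return ipaddress.IPv6Network(f"{network_addr}/48")

print(six_to_four_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Other coexistence mechanisms, such as IPv4-mapped addresses (::ffff:a.b.c.d) and configured tunnels, follow the same pattern of carrying one address family inside the other.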
The new Internet technologies include 3/4G mobile networks, wireless LAN, broadband wireless
access networks (WiMAX), optical core network, satellite network (DVB-S, DVB-T, DVB-H), personal
area network (PAN) and sensor network.
The workpackage also considers new services and applications and their impact on IP networking evolution, particularly those generated by 3/4G user terminals, peer-to-peer, content delivery, VoIP and related services and applications.
Due to the requirements of new technologies and of new services and applications, with different traffic characteristics and different QoS needs, IP networking protocols and architectures may evolve towards new protocols and architectures. Studies and research are required to address these issues and to gain a deep understanding of all IP network-level functions and their protocols.
The future generation Internet (FGI) cannot start from scratch, discarding all Internet principles and technology. As in any good science, this research should work from established principles as much as possible. The development of the FGI is likely to include the following components:
Examination of the areas in which the original architecture is known to have failed.
Examination of the changed and changing requirements.
Exploration and development of some proposed new architectural changes that have already
been suggested to meet these requirements
Exploration of possible new meta-principles for an architecture
Outline of several candidate architectures.
Consultation with experts in relevant technical areas, such as mobility and economic models.
Agreement on a single candidate for further exploration.
Implementation of a proof-of-concept environment sufficient to evaluate the candidate
architecture through experiments and simulation.
Iteration based on feedback from proof-of-concept experiments.
The major end result of this project is expected to be sets of abstractions: design principles,
requirements, and objectives. However, past experience has shown the limitations of protocol suite
designs that are essentially top-down. To ensure that the conceptual work on a new architecture
does not become an idle exercise, the project includes a major proof-of-concept component. This
activity will build and test prototype protocols that are consistent with the architecture, using an
appropriate combination of experimental code and simulation. This reduction of theory to practice
will provide feedback to the architectural design, revealing many secondary technical issues that
arise from the architectural abstractions.
Clearly, the limited scope of the proposed project will not allow it to produce a complete prototype
of a new protocol suite, should that be called for by the new architecture. Indeed, the purpose of a
new architecture is to guide a large volume of further protocol engineering by researchers, vendors,
and other members of the Internet community; this will certainly not be accomplished directly under
the proposed project.
Rather, the research project would produce protocol code to serve as a proof of concept for
particular important elements of the new design, using existing Internet protocol stacks wherever
the resulting deviation from the new architecture is unimportant.
FGI is likely to imply some changes from the Internet protocol suite below the application layer, i.e.,
in parts of the stack that have traditionally been implemented within the kernel of a conventional
operating system. To prototype and test such a different protocol stack will require programmable
platforms. This can be achieved (1) in a local laboratory environment, (2) in a research testbed, or (3)
tunneled across the Internet. It is likely that all of these approaches will be used. However,
requirements and objectives are abstractions that may not be directly quantifiable, and some
important principles cannot be tested in a testbed of feasible cost.
In particular, requirements in the following four areas must be validated using a combination of
simulation and plausible argumentation: (1) scaling issues, (2) heterogeneity, (3) high performance,
and (4) interaction with economic and business models. In this context, it is particularly important to
note recent advances in multi-level and parallel simulation algorithms. The team expects to make use
of such next-generation simulation techniques to the extent supportable by the technology at the
time they are required.
4.1.10. Annex : list of participants
The following Euro-NGI colleagues are actively participating in ongoing work within this WP:
Partner Partner Name Contributor Name Contributor e-mail address
55 UNIS Zhili Sun Z.Sun@surrey.ac.uk
55 UNIS Haitham Cruickshank H.Cruickshank@surrey.ac.uk
55 UNIS Bo Zhou B.Zhou@surrey.ac.uk
56 UP Hermann de Meer firstname.lastname@example.org
2 GET-INT Michel Marot Michel.email@example.com
37 Inesc-ID Augusto Casaca Augusto.firstname.lastname@example.org
14 IRIT-ENSEEIHT Riadh Dhaou Riadh.email@example.com
15 UST/IKR Zigmund Orlov firstname.lastname@example.org
15 UST/IKR Christoph Gauger Gauger@ikr.uni-stuttgart.de
13 Alcatel Laurent Ciavaglia Laurent.email@example.com
40 UPB Radu Lupu firstname.lastname@example.org
49 BTH Dragos Ilie email@example.com
2 GET-INT Monique Becker Monique.Becker@int-evry.fr
33 Telenor Terje Jensen firstname.lastname@example.org
45 GIRBA Vicente Casares email@example.com
38 IT Amaro de Sousa firstname.lastname@example.org
53 UniBrad Irfan Awan email@example.com
1.3 WP.JRA.1.4 - New Services and their Management
Kurt Tutschku, Sergios Soursos, and Adrian Popescu
Service Overlay, Network and Service Architecture, Fixed and Mobile Access
Networks, Routing, IP
4.1.13. Scope of the domain
The Next Generation Internet (NGI) will be dominated by highly distributed but loosely connected
applications which are located at the network edge. NGI users will view QoS, mobility, variable
connectivity, and dynamic addresses as the norm. The future NGI will permit significant autonomy
and symmetric roles to networked nodes. A node can be client, server, and application-level router at
the same time. In addition, the architecture of the NGI will be highly dynamic since nodes may join
or leave the network spontaneously. As a result, the operation of the NGI and its services has to be
highly automatic and autonomous, requiring minimal or zero manual intervention by human
operators, even in unexpected situations like hardware and software failures.
Future high-speed fixed-line and wireless access technologies provide instant high-bandwidth
connectivity. NGI services will be developed by an increasing number of non-technically oriented
Internet users who take this high bandwidth for granted. First instantiations of NGI services are content
distribution networks (CDN), such as Akamai’s Edge Service, and peer-to-peer (P2P) file sharing
services, like the BitTorrent file swapping application.
NGI will be required to support the full spectrum of networked group communication applications.
This will be achieved mainly by using virtual network structures, denoted as overlays. Overlays
provide application specific naming and routing services. Their architectural spectrum ranges from
well managed VPNs (Virtual Private Networks) for organizations to application specific overlays on
small or large scale and for highly mobile communities.
The NGI will include location-based services as well as context-based services. The increasing
degree of interactivity in Internet services, the delivery of multimedia content over the Internet (e.g.
Voice/Video-over-IP, TriplePlay), the usage of the Internet as a computing platform (e.g. the GRID),
and finally the commercialization of the Web all require differentiation between
applications in order to provide an appropriate grade of service. A well-suited service differentiation
can be achieved using the “slices” concept proposed in the “PlanetLab” environment, which
overcomes the drawbacks of integrated services networks by separating applications and facilitating
application-specific management mechanisms.
In addition, service overlays in the NGI should support strong security mechanisms and new
accounting mechanisms in order to facilitate new business opportunities. The adaptation of overlays
has to be achieved autonomously, on short time scales and/or within a predefined spatial vicinity.
The above outlined features and requirements of NGI services prohibit or at least complicate
the application of traditional “client/server” architectures and “multiple services”
engineering techniques such as “DiffServ”. It is anticipated that new network control
architectures, as well as new planning, dimensioning, and management principles are
needed which address topics like application-specific naming and routing, reliability, security,
self-configuration, self-organization, or load balancing. It is expected that application-specific
or overlay-specific control entities are needed. This work package gathers together
contributions concerning the new services and their architecture.
Future standards for the operation of service overlays are rapidly developing. A premier example is
the currently forming “P2P-SIP” working group of the IETF, which aims at the definition of
standard interfaces for the control and user management of P2P-VoIP service overlays.
Additional examples are the definition of 3GPP standard interfaces for supporting mobility by service
overlays and the DMTF (Distributed Management Task Force) activities on service management.
4.1.14. Mid-term/long term evolution in this area
The mid-term to long term evolution in this area can be characterized by:
Use of service specific overlays
o e.g. CDN, GRID, Virtual Provider Networks
Autonomic operation of service overlays
o Self-* features (configuration, organization, healing, protection, self-monitoring etc.)
o Reduction of capital and operational expenditures
o e.g. dynamic bandwidth management, fast mobility management, flexible contracts (Quality
on Demand) between customers and ISPs or among ISPs
Increased network dependability
o e.g. multipath overlay routing
Definition of self-organisation models and application of self-organization mechanisms
o Game Theory
o Network Economics, Incentive Mechanisms
o Distributed Hash Tables
o Random Networks / Small World networks
o Bio-inspired self-organization, e.g. ant routing
Application-specific networks services
o Application-specific naming and addresses
o Application-specific routing
o Application-specific security (media security – DRM, accounting security, etc.)
o Application-specific accounting
Interoperability and Transition
o Reduced layer scheme (e.g. Physical layer, Resource Mediation Layer, Application Layer)
o Interworking with the naming and routing protocols of today’s Internet, e.g. with respect to
IP Version 4, Version 6
Standard routing protocols such as BGP, OSPF
o Interoperability with today’s Internet service architectures such as IMS
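One of the self-organization building blocks listed above, the Distributed Hash Table, can be
illustrated compactly. The sketch below is a minimal, hypothetical Chord-style consistent-hashing
ring (names and parameters are our own, for illustration only): overlay nodes and content keys are
mapped onto the same identifier circle, and a key is served by the first node whose identifier
follows it. This is the kind of application-specific naming and routing service that overlays provide.

```python
import hashlib
from bisect import bisect_right

def ring_id(name: str, bits: int = 16) -> int:
    """Map a node name or content key onto a 2^bits identifier ring."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % (2 ** bits)

class ChordRing:
    """Minimal consistent-hashing overlay: each key goes to its successor node."""
    def __init__(self, nodes):
        # A sorted list of (identifier, node name) pairs forms the ring.
        self.ring = sorted((ring_id(n), n) for n in nodes)

    def successor(self, key: str) -> str:
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, ring_id(key)) % len(self.ring)
        return self.ring[idx][1]

overlay = ChordRing(["node-A", "node-B", "node-C", "node-D"])
print(overlay.successor("some-file.mp3"))  # name of the responsible node
```

The property that matters for the spontaneous joins and departures discussed above is that a
membership change only moves keys between a node and its immediate successor, rather than
reshuffling the whole mapping.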
4.1.15. Major Open Problems
A major open research area is “self-organisation” in application, service and network operation. Self-
organization deals with the forming of roles, responsibilities, and re-useable structures. It is expected
that it will be the fundamental driving force behind an autonomic operation of future networks and
services. On the one side, a huge number of self-organization mechanisms are available in physics,
mathematics and biology; on the other side, the different functions in autonomic network operation
will need specific self-organization mechanisms tailored for the application. Hence, a major open
problem is the selection of an appropriate self-organization mechanism. Such a decision requires a
detailed understanding of their operation, their advantages and their disadvantages. However, only
limited knowledge exists here so far. In detail, the following self-organisation issues need to be addressed:
Self-organisation of structured and unstructured overlays
Self-organization of over- and underlays and in cross layering
Self-organization in role-based and multilevel systems
Emergence and related theoretical concepts
Quality of Service/Security/Accountability and Self-organization
The role of self-organization in heterogeneous network convergence
The evolutionary principles of the Internet
The human in the loop of self-organizing networks
Inspiring models of self-organizing in nature and society
Robustness, fault tolerance and self-organisation
Service and Network Architectures for Distributed Autonomic Operation
The future Internet will be dominated by highly distributed but loosely connected applications which
are located at the network edge. It will permit significant autonomy and symmetric roles to networked
nodes, e.g. a node can be client, server, and application-level router at the same time, and it will
permit competition among services and their diversification, i.e. a service will be offered by different
service providers which operate the service in different flavours.
A major open issue is the question of what degree of decentralization should be allowed in order to
facilitate autonomous, edge-based services while maintaining control of network resources such as bandwidth.
Open research areas are:
Highly distributed resource mediation mechanisms
Highly distributed content and resource exchange mechanisms
Reduction of the network layering concept (e.g. Physical layer, Resource Mediation Layer, Application Layer)
Dynamic Infrastructure (e.g. join and departure of access nodes)
Leading groups outside EuroNGI on the above mentioned topics are:
University of Berkeley, Berkeley, USA. (Randy Katz, Ion Stoica)
ICSI Center for Internet Research, Berkeley, USA. (Scott Shenker)
Network Systems Group, Princeton University, Princeton, USA. (Larry Peterson)
Leading groups within EuroNGI on the topic are:
University of Würzburg, Würzburg, Germany. (Phuoc Tran-Gia, Kurt Tutschku)
University of Passau, Passau, Germany. (Hermann de Meer)
Athens University of Economics and Business, Athens, Greece. (Costas Courcoubetis)
Blekinge Institute of Technology, Karlskrona, Sweden. (Adrian Popescu, Markus Fiedler)
4.1.16. Work in progress within the WP
University of Würzburg
The University of Würzburg is focusing on the development and performance evaluation of P2P-
based, cooperative mechanisms for service and network control. Particular interest lies in the
investigation of P2P content distribution mechanisms for mobile networks, the application of P2P-
based self-organisation mechanisms for mobile network control, and the use of P2P-based
mechanisms for autonomic network management and autonomic network monitoring.
University of Passau
The University of Passau is focusing on the development of P2P-based mobility management
mechanisms for Beyond 3G networks using P2P overlays.
Athens University of Economics and Business
The AUEB-RC partner is focusing on many of these areas. Much work has been done in the area of
P2P systems, with the economic modelling of P2P systems, the proposition of certain incentive-based
mechanisms for contribution, and reputation-based mechanisms that enforce cooperation between peers.
Blekinge Institute of Technology, Karlskrona, Sweden.
The Blekinge Institute of Technology is focusing on autonomic monitoring concepts and on overlay routing.
The two above mentioned problems (Self-Organization, Distributed Service and Network
Architectures) are of great importance for the evolution of the Internet. If a prioritization is needed,
it is necessary to focus on the fundamental mechanisms, e.g. self-organization, before new services
are developed.
4.1.18. Annex: list of participants
The following groups are currently actively participating in this work package:
University of Würzburg, Würzburg, Germany. (Phuoc Tran-Gia, Kurt Tutschku)
University of Passau, Passau, Germany. (Hermann de Meer)
Athens University of Economics and Business, Athens, Greece. (Costas Courcoubetis)
Blekinge Institute of Technology, Karlskrona, Sweden. (Adrian Popescu, Markus Fiedler)
1.4 WP.JRA.1.5 - Mobility Management in the Always Best Connected
Vicente Casares-Giner (Universidad Politécnica de Valencia, Spain), Demetres Kouvatsos (University of
Bradford), Amine Houyou (University of Passau)
Mobile Access Networks
4.1.21. Scope of the domain
In the new paradigm, multiple access technologies are becoming part of a common wireless
infrastructure, and the habits of users are changing with the new wireless technology. For instance, a
hypothetical mobile user starts a weekday working at her/his home, probably connected to an
ADSL line, checking urgent e-mails or accessing an e-newspaper. Eventually s/he leaves
home for the office, taking a high-speed train, and during a trip of around 45 minutes s/he
wishes to continue working: answering e-mails, finishing and sending documents,
receiving calls, etc. It is quite likely that, when the train is between stations, the telecommunication
services will be provided by a 3G-UMTS operator. Moreover, when the train enters a station, the
mobile terminals or laptops will sense the access point of a WLAN (WiFi, WiMAX). Later, at the end
of the journey, s/he arrives at her/his working place and resumes these activities via wireless
access (WLAN, UMTS, …) or wired access (xDSL, LAN, MAN, …).
During the trip from home to the office, our user will pass many heterogeneous access networks,
mainly wireless access. The mobile terminal (MT) will perform a set of procedures such as access
discovery, registration/deregistration, authentication, authorization and accounting (AAA), profile
handling, content adaptation, and attach/detach, among others. The mobile terminal can be any
handheld device such as mobile phone, portable computer, personal digital assistant (PDA), etc., all
of which can vary significantly in their processing, memory, and battery capacities.
In this context, mobility management1 is a key issue in wireless communications and mobile
computing. In heterogeneous scenarios envisaged for 3G, beyond 3G (B3G) and 4G wireless
networks, the interoperability between UMTS, WLAN and PCN will be a fundamental requirement to
fulfil the Always Best Connected (ABC) solution. Mobile terminals might be able to choose among
several access points from a set of different access networks in order to achieve the desired QoS
while the mobile terminals are roaming in a wide geographical area. The choice will strongly depend
on the user requirements, QoS and charging aspects, to name a few.
The integration of these technologies to interoperate together is a very likely scenario that will form
part of the Next Generation Internet. It also poses new requirements and offers new possibilities for
mobility solutions to roaming users. This scenario fits with the all-IP world vision that offers
interoperability at the IP layer across heterogeneous link technologies. Practically talking, however,
once roaming from GSM to UMTS or to IEEE 802.11x, many technical and technological constraints
have to be taken into consideration. First, a mobile device will be able to connect through several
interfaces to the outside world probably simultaneously for the same communication session or for
several connections used by different applications. Handover2 is invoked to allow these
communication sessions to continue wherever the roaming user is. In the case of real-time
applications extra considerations are given to make the handover as seamless and as fast as possible
(especially in the case of fast moving mobile nodes).
Therefore, we can identify the following main features of Mobility Management in ABC scenarios:
The identification of mobile user’s behaviour by using mobility models.
The development of motion prediction algorithms based on the information theory.
The vertical integration between heterogeneous access networks, with new location update and
paging algorithms –location management.
As a consequence of the above three bullet points:
The management of user profiles among heterogeneous access networks and network operators.
The AAA (Authentication, Authorization and Accounting), mainly focused on the development
of requirements for AAA to be applied in heterogeneous access networks.
The vertical integration between heterogeneous access networks, with new proactive handoff
algorithms –handover management.
Research on the optimal use of resources dedicated to guaranteeing a certain QoS for handoff calls.
The extension of Peer-to-Peer resource sharing concepts to manage mobility between different
location management systems.
1 Mobility management is the set of procedures that has to be implemented in order to keep track of the
mobile terminal while it is on the move.
2 Handover (handoff) management is the procedure that allows the network to guarantee the continuity of a
communication that is in progress when the mobile terminal crosses the border between two neighbouring cells.
4.1.22. Mid-term/long term evolution in this area
In traditional mobile cellular networks such as GSM, mobility management has been solved using
static or global location update algorithms. Concepts such as location areas cannot work efficiently
any more in heterogeneous scenarios that will contain a huge number of mobile users demanding a
plethora of services. The new paradigm requires new system databases, with architectures that
require the cooperation between different access networks and operators.
In traditional mobile cellular networks such as GSM, horizontal handover (within the same
technology) only requires a link layer adaptation optimized for that single technology and managed
centrally. However, when talking about vertical handover (across different standards), a change in
the access network is required. At the IP level, changing access networks also suggests that the
access router that handles addressing and forwarding packets to the mobile node will change.
All variants of Mobile-IP solutions offer session continuity at the high price of the several-second
interruptions needed to complete an IP-layer handover. Even if this might be acceptable for Internet
browsing, it is far from meeting the needs of a VoIP application for instance. In addition to mobile-IP,
link layer technologies have their share of blame in incurring delay. This is especially the case when
talking of integrating free spectra wireless technologies with vertically integrated mobile systems.
In order to improve the performance of Mobile IP, several proposals have been made and promising
results are expected within the next coming years:
A first large group of proposals tries to keep the signalling latency low by handling local
movements locally; these are also called micro-mobility protocols (e.g. Hawaii, Cellular IP,
Hierarchical Mobile IP). This subject got a lot of attention during the past years but is currently
not as hot as it used to be.
A second class of protocols is based on location awareness. These mechanisms make use of
information about the path a mobile node is following (e.g. a train along the railroad, a car along
the highway, a car using its navigation system, etc.). There is still interesting ongoing research
addressing this subject covered in WP.JRA.1.5.
A third approach to improve handoff performance is the use of lower layer information. Mobile
IP is designed without any assumption concerning the underlying link layer. However, the use of
lower layer triggers may inform the network layer about an upcoming handoff. In addition,
information about the L2 link status (link up and link down) may be useful for L3 handoff
enhancements. This area of research still needs attention.
Free spectra technologies are mostly used in home networks and around public buildings, but lack
organization of infrastructure on a wide area or metropolitan level. On the other hand, we notice an
increasing trend of people who are offering their wireless access points as hot spots to the outside
world, willingly and sometimes unwillingly (e.g. War Driving). Another important trend is that of
increasing numbers of self-organized peer-to-peer communities, in which users are sharing files,
contents and in the future other hardware related resources.
In both the free spectra random infrastructure and at the IP level, handover uses “blind” discovery
mechanisms and consumes a considerable amount of time to discover movement in itself and also to
identify the new access network. In an example scenario, a moving vehicle crosses an urban
environment at a relatively high speed and tries to stay online by connecting to IEEE 802.11 access
points. Lengthy layer 2 and mobile IP handovers would then occur very often. This scenario allows us
to identify the limitations of handover management across different wireless technologies. The
handover scenario will occur often and across different technologies and standards.
4.1.23. Major Open Problems
Mobility management is a key issue in mobile systems. Mobility management has two main
components: Location management and handoff management. To that purpose the cooperation of
mobile terminals is required. While a mobile terminal is active (attached to the network) and
following certain rules, it must inform the system database about its current position. Current
location management techniques involve system database architectures. Location management has
mainly focused on the procedure of database updating. Location management can be divided in:
location update and call delivery. Location update algorithms can be divided into two main groups:
static and dynamic. In the static algorithm, location updates are triggered based on the topology of
the network. As an example, we have the conventional location area (LA) based-scheme, by which all
terminals behave in the same way, i.e. each mobile terminal sends a location update message each
time it crosses the LA borders. This is the algorithm adopted in the pan-European GSM standard and
in the TIA/EIA-41 standard. Those standards are based on two-tier hierarchical databases, specifically
HLR and VLR. Clearly, this is not efficient because the LA should be determined to suit each terminal’s
mobility and traffic characteristics. Another proposal does not impose any partition on the cellular
map, but designates some cells as reporting cells (RC), where the mobile terminal must update upon
entering them.
In the dynamic algorithms, the location update is based on the user’s call and mobility patterns.
Three major schemes fall under the dynamic category, i.e. (i) distance-based, (ii) movement-based
and (iii) time-based. With reference to the previous update, in (i), the mobile terminal sends a
location update message each time its travelled distance reaches the threshold D. In (ii) the mobile
terminal counts the number of cells it visits during its travel and updates its position when a certain
threshold M is reached. In (iii) the mobile terminal sends periodical updates to the networks, with
threshold or period T. A performance comparison study between LA-based scheme and RC-based
scheme has been presented in some studies.
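The difference between the dynamic schemes above can be made concrete with a small toy
simulation. The sketch below (our own illustration, not taken from the deliverable) counts the
location-update messages generated by the movement-based scheme with threshold M and by the
distance-based scheme with threshold D on the same random walk over a one-dimensional row of
cells.

```python
import random

def movement_based_updates(path, M):
    """Scheme (ii): the MT updates after visiting M cells since its last update."""
    updates, moves_since_update = 0, 0
    for _ in path:
        moves_since_update += 1
        if moves_since_update >= M:
            updates += 1
            moves_since_update = 0
    return updates

def distance_based_updates(path, D, start=0):
    """Scheme (i): the MT updates when its distance from the position of the
    last update reaches the threshold D."""
    updates, anchor = 0, start
    for cell in path:
        if abs(cell - anchor) >= D:
            updates += 1
            anchor = cell
    return updates

random.seed(1)
# A symmetric random walk over a one-dimensional row of cells.
pos, path = 0, []
for _ in range(1000):
    pos += random.choice((-1, 1))
    path.append(pos)

print(movement_based_updates(path, M=5))  # one update per 5 cell crossings
print(distance_based_updates(path, D=5))  # only when net distance reaches 5
```

On such a back-and-forth walk the distance-based scheme updates far less often than the
movement-based one, which is precisely why the choice of scheme should match the user's
mobility pattern.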
A number of novel location update algorithms have also been proposed recently. Among
others we name:
The adaptive threshold scheme (the MT sends an update message every T´ time units, where
the parameter T´ varies with the current signalling load on the uplink control channel of the cell)
The predictive distance-based update scheme (the MT reports its location and velocity during the
location update)
The state-based scheme (the system state includes the current location and the time elapsed
since the last update)
The path-based update scheme (such as the LeZi update algorithm in which the movement
history rather than the current location is sent in an update message). LeZi scheme is based on
the entropy associated with the user’s uncertainty position.
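The LeZi idea above can be sketched in a few lines: the movement history is parsed into an
LZ78-style phrase dictionary, and the history is then used to estimate the next cell. The code below
is a deliberately simplified illustration (the function names and the naive frequency-based
prediction rule are our own, not the published LeZi-update algorithm).

```python
from collections import Counter

def lz78_phrases(history: str):
    """Incrementally parse a movement history (one symbol per cell) into the
    LZ78 phrase dictionary that LeZi-style predictors maintain."""
    phrases, current = set(), ""
    for symbol in history:
        current += symbol
        if current not in phrases:
            phrases.add(current)
            current = ""
    return phrases

def predict_next(history: str) -> str:
    """Naive next-cell estimate: the symbol that most often followed the
    currently visited cell in the past history."""
    last = history[-1]
    followers = Counter(
        history[i + 1] for i in range(len(history) - 1) if history[i] == last
    )
    return followers.most_common(1)[0][0]

# Cells labelled a-d; a commuter visits them in a regular loop.
trace = "abcdabcdabcdab"
print(sorted(lz78_phrases(trace)))
print(predict_next(trace))  # 'c' follows 'b' in every cycle
```

The point of sending the parsed history rather than the raw position is that the dictionary captures
the user's movement regularities, so the network can page the most probable cells first.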
A related concept is that of Location based Services (LBS), which is a recent type of location
management service for mobile users based on mobile terminal (MT) location. Although many
service providers are developing LBS, there are still open issues to be resolved due to the different
requirements of each service relating to accuracy, response time, signalling overhead and number of
subscribers who can be localized at the same time.
In handoff management, the network must track the user’s location perfectly during a call or a
session. An important issue is the prediction of which cell the mobile terminal will visit next. Location
prediction is a dynamic strategy in which the network tries to proactively estimate the position
(sometimes direction and speed as well) of the mobile terminal. Location prediction could provide
substantial aid in the decision of resource allocation in cells. Instead of blindly allocating excessive
resources in the cell-neighbourhood of a mobile terminal, we could selectively allocate resources to
the “most-probable-to-move” cells. The goal behind this idea is to cut down resource wastage
without increasing the forced termination probability.
Recently, within the scope of motion prediction, information theory has been used as a tool that can
significantly improve many aspects of the analysis of mobile communications. Information theory is
the mathematical framework within which uncertainty is expressed, quantified and managed. One of
the main goals considered in this proposal is to explore the use of information theory in the mobile
computing arena. Consequently, we plan to use information theory as a powerful tool to
predict user mobility. Also, depending on the geographical scenario, users will demand certain types
of service: location awareness is also of paramount importance.
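The role of entropy here can be made concrete. The sketch below (our own illustration, with a
made-up movement trace) estimates the conditional entropy of the next cell given the current one:
a value near zero means the user's movement is highly predictable, so location updates and paging
effort can be reduced accordingly.

```python
import math
from collections import Counter, defaultdict

def conditional_entropy(trace: str) -> float:
    """Estimate H(next cell | current cell) in bits from a movement trace."""
    transitions = defaultdict(Counter)
    for cur, nxt in zip(trace, trace[1:]):
        transitions[cur][nxt] += 1
    total = len(trace) - 1
    h = 0.0
    for cur, nexts in transitions.items():
        n = sum(nexts.values())
        for count in nexts.values():
            p = count / n                     # estimated P(next | cur)
            h -= (n / total) * p * math.log2(p)
    return h

print(conditional_entropy("abcdabcdabcd"))  # deterministic loop: 0 bits
print(conditional_entropy("abacadabacad"))  # less regular movement: > 0 bits
```

This is the quantity that bounds how well any prediction-based location management scheme can
perform for a given user.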
Mobility in P2P applications is also of key importance. P2P is a paradigm that can be used to solve
problems and provide new services in mobile environments, e.g. supporting handovers, providing
location based services, or a simple information or file sharing medium between end-users. To use
P2P services in heterogeneous mobile networks, their design and performance has to be improved.
Therefore, the mobility management in P2P overlay networks is another research issue. There are
some open questions in this area that need to be solved in order to improve the P2P protocols to be
used in mobile environments and to provide new services in mobile environments.
4.1.24. Work in progress within the WP
The following works related with location management (location updates + paging algorithms) and
hand off management (performance analysis and quality of service (QoS)) are currently in progress:
The use of hash functions and Bloom filters in mobility management and P2P applications.- The
main goal is to research the use of compressed information and its potential application in
real wireless systems.
The proposal of hybrid schemes of local location updates and their efficient use in wireless access networks.
The design and development of efficient clustering algorithms towards LBS optimization based
on graph theoretic decomposition arguments and the concept of entropy functional.
Performance modelling and analysis of handover-priority channel assignment schemes for QoS
optimisation in the context of wireless 3G and 4G cellular architectures and networks with bursty
multimedia traffic flows under generalized handling mechanisms.
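The first work item above rests on a standard compressed set representation; a minimal Bloom
filter can be sketched as follows (array size and hash count are illustrative choices, not taken from
the deliverable). A mobile node could, for example, advertise the set of peers or services it knows
in a few hundred bits, at the cost of a tunable false-positive rate.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hashed bit positions per item in an m-bit array.
    Membership tests may give false positives but never false negatives."""
    def __init__(self, m: int = 256, k: int = 3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item: str):
        # Derive k independent positions by salting one hash function.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item: str) -> bool:
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
for peer in ["peer-1", "peer-2", "peer-3"]:
    bf.add(peer)
print("peer-2" in bf)  # True: inserted items are always found
```

The compression is what makes the structure attractive over lossy, asymmetric air interfaces:
the whole filter fits in a single small signalling message.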
4.1.25. Priorities
The following activities should be launched within the WP or articulated in a new WP:
Activity 1: Mobility models for mobile internet users.- The main goal is the characterization of
users' movements and behaviours, in particular when they act as internet users who use
portable multimedia terminals and want to use them anywhere and anytime.
Activity 2: The use of information theory in mobility and resource management.- The use of
information theory concepts such as entropy and rate-distortion will allow us to dig deep into the
arena of location and handoff management, to discover how close the proposed algorithms come to
the theoretical optimum.
Activity 3: Micro and macro-mobility solutions for handoff management.- To study the state
of the art of micro-mobility (CIP, Hawaii, …) and macro-mobility (MIP for non-real time, SIP
for real time, …) management protocols, and to contribute new hybrid solutions that
overcome existing drawbacks (the triangular routing problem, …).
Referring to layer 2 assisted handover schemes, Mobile IP has been designed without any
assumption concerning the underlying link layer. However, this layer separation implies that the
registration process can only start after completion of the layer 2 handoff. In this activity, we intend
to investigate layer 3 handoff schemes that are based on layer 2 information received by means of L2
triggers. These triggers may be used as an early notice of an upcoming change in the L2 point of
attachment or may indicate the status of the L2 link (link-up and link-down). The availability of
these triggers in currently used link layers, such as WiFi or WiMAX, is also a subject of this study.
Activity 4: Location awareness in P2P applications. It aims at the study of context and location aware
handover management across heterogeneous wireless networks. This work is also concerned with
wireless resource management and vertical integration. Concepts of sharing in P2P applications are
investigated and the use of distributed hash table is further studied to offer a common middleware
to networks and network operators. The information exchange between heterogeneous location
servers and databases allows network management through a common P2P layer and a signalling
language that crosses the borders of a single autonomous mobile network.
Activity 5: Design and performance of P2P services in mobile networks. To apply structured or
unstructured P2P services in mobile environments some problems have to be solved. Standard P2P
protocols do not face the problems introduced by lossy channels, asymmetric air interfaces or the
high churn rate of these environments. To provide new P2P-based services like file-sharing, VoIP,
etc., the managed overlays have to be enhanced to work properly in a heterogeneous mobile environment.
Activity 6: Performance modelling and evaluation of wireless cells and networks in both discrete time
and continuous time domains, subject to extended hand off priority rules based on stochastic control
schemes and queue time out periods.
4.1.26. Annex: list of participants
The following Euro-NGI colleagues are actively participating in ongoing work within this WP:
Partner number Name Affiliation
Paul J. KUEHN Institute of Communication Networks and
Computer Engineering, University of Stuttgart
Eugen BORCOCI Universitatea POLITEHNICA din Bucuresti
Vicente CASARES GINER ITACA-Universidad Politecnica de Valencia
Pablo GARCIA ESCALLE ITACA-Universidad Politecnica de Valencia
53 The University of Bradford
55 University of Surrey
56 Hermann DE MEER University of Passau
56 Amine M. Houyou University of Passau
5. Traffic Engineering
1.5 WP JRA.2.1 - Controlled Bandwidth Sharing
Mikael Johansson and James Roberts
Core Fixed Network, Access Network, TCP/IP Networking.
5.1.3. Scope of the domain
When a communication network becomes congested, a trade-off must be made between traffic that
is carried and traffic that is not. In telephone networks the traditional mechanism has been a call
admission control, which blocks a newly arriving call if any of the resources that would be needed by
the call are congested. In contrast, congestion in the current Internet causes traffic through a
congested resource to suffer packet loss, and this in turn causes at least some end-systems to reduce
their load on that resource. In the Internet, the load imposed by end-systems is much less
predictable than in the telephone network, and the bandwidth achieved by a user fluctuates as a
consequence of the behaviour of other users. Buffers are used to smooth statistical fluctuations in
demand for scarce transmission capacity.
A major challenge is the design of a network so that it can carry existing loads on telephone
networks and the Internet, as well as new forms of traffic. A problem is that some current
Internet traffic needs large buffers, causing unacceptable queuing delays for real-time
applications such as telephony. Attempts to solve the problem are generally based on a small
number of service classes with well defined quality and prices. By giving higher priority to
some packets, and using large buffers for others, it may be possible to carry real-time
applications over the same transmission capacity as other less delay-sensitive traffic. This
possibility is explored elsewhere within Euro-NGI. As an alternative, it is possible that a
simple packet network might be able to support an arbitrarily differentiated set of services,
by conveying information on congestion from the network to intelligent end-nodes, which
themselves determine what should be their demands on the packet network. This approach
requires that market mechanisms are able to broadly align supply and demand for
communications capacity. In particular, congestion marking would give shadow prices at the
finest possible granularity, which could be aggregated and reflected as costs or prices to users.
Congestion control requires users to respond in an appropriate manner to congestion signals
delivered by the network elements. Many proposals have been made to improve the default option
of signalling congestion by dropping packets in FIFO buffers. So-called active queue management
schemes like RED (Random Early Detection) and ECN (Explicit Congestion Notification) improve
efficiency and fairness of resource sharing when adaptive transport protocols and applications
represent the majority of the traffic. More precise control can be achieved with the use of flow-aware
scheduling. While scalability concerns have until now limited the deployment of per-flow fair
queuing in the Internet, a future generation of routers might incorporate such scheduling avoiding
the need to rely on user cooperation, with or without economic incentives.
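As an illustration of the active queue management idea named above, the following sketch (not a router implementation; the threshold values are illustrative) computes the RED marking probability from the averaged queue size, following the linear ramp of Floyd and Jacobson's scheme:

```python
# Illustrative sketch of RED (Random Early Detection): the marking/drop
# probability is a linear function of the EWMA-averaged queue size between
# a minimum and a maximum threshold.  Threshold values are illustrative.
def red_mark_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Return the RED marking probability for an averaged queue size avg_q."""
    if avg_q < min_th:
        return 0.0            # below the min threshold: never mark
    if avg_q >= max_th:
        return 1.0            # above the max threshold: always mark/drop
    # linear ramp between the two thresholds, up to max_p
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

With ECN-capable endpoints the same probability governs marking rather than dropping, which is precisely what allows adaptive sources to back off before the buffer overflows.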
5.1.4. Mid-term/long term evolution in this area
There are several current trends that have a large impact on current congestion control solutions and
should guide the future evolution of bandwidth sharing mechanisms, including:
Increased transmission speeds in the access and the core. Currently, most flows are limited by their
access rate, which reduces the need for congestion control on backbone links. Some predict that the
transmission rates in the access will grow faster than in the core, which would make congestion
control more critical for the overall network. Additionally, traditional congestion control solutions
need to be adapted to work more efficiently in high-speed networks.
Emerging applications bring new constraints and requirements for congestion control. This includes
an increasing proportion of streaming traffic that may or may not be rate-adaptive, gaming
applications requiring very low latency, multi-path flows in P2P applications and in solutions for
reliable point-to-point transfers, and also an increasing number of pervasive computing devices (e.g.
sensor networks) and associated services.
A shift toward optical technologies in core routers is becoming necessary in order to scale routers to
meet increasing capacity requirements. Optics may, according to some (e.g. N. McKeown, Stanford
University), favour a return to circuit switching. The difficulty in realizing optical buffers necessitates
a rethink on buffer sizing – to make small buffers viable may require a change to congestion control
algorithms and may need the introduction of packet pacing.
Pricing and business models. Proposals to use congestion marks as a basis for end-user pricing (cf. F.
Kelly, Cambridge University) tend to recede due to the unlikely acceptance by end users. However, it
has recently been proposed to include congestion pricing as one component of inter-provider
settlements (B. Briscoe et al, Sigcomm 2005).
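The congestion-pricing idea attributed above to F. Kelly can be illustrated with a toy version of his "primal" rate-control algorithm, in which each user adjusts its rate in response to an aggregate congestion price; the penalty-style price function and all constants below are illustrative, not taken from the cited work:

```python
# Toy version of Kelly's "primal" rate-control algorithm for users sharing a
# single link: each user r nudges its rate x_r according to w_r - x_r * price,
# where the price is a congestion penalty that grows once the link is full.
# With log utilities this steers rates toward weighted proportional fairness.
def primal_step(x, w, capacity, kappa=0.01):
    """One gradient step for rates x with weights w on a link of given capacity."""
    load = sum(x)
    p = max(0.0, load - capacity)       # illustrative penalty-style shadow price
    return [max(1e-6, xr + kappa * (wr - xr * (p + 1e-9)))
            for xr, wr in zip(x, w)]
```

Iterating this map from any starting point drives equal-weight users to equal rates, mirroring the claim that congestion marks at fine granularity can align supply and demand.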
In addition, wireless Internet access, via WLAN, Mesh Networks or Cellular, is increasing in
importance. However, aspects on congestion control over wireless networks will be covered in
another part of this document.
Based on the trends above, we believe that the mid-term evolution in this area will include
methodological advances in modelling, analysis and design of congestion control algorithms,
extensions and enhancements of existing congestion control protocols and the refinement of
alternative paradigms such as flow-aware networking and size-based scheduling.
Several congestion control protocols with improved performance on high-speed networks thanks to
adapted end-to-end congestion avoidance algorithms are under development (HSTCP, FAST, Scalable
TCP, BIC, etc.), as well as congestion control mechanisms based on more explicit network feedback
than packet loss or ECN bits (e.g. XCP and RCP, which attempt to calculate the appropriate rate for a
flow and communicate this back to the source). These developments go hand-in-hand with an
increased desire to leverage modelling and analysis tools to a point where one can perform reliable
model-based design of packet-based congestion control mechanisms based on mathematical (e.g.
fluid-flow) models.
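To make the contrast concrete, the following sketch shows the per-ACK and per-loss window update rules of standard TCP (additive increase, multiplicative decrease) next to those of Scalable TCP (multiplicative increase, multiplicative decrease); the constants a = 0.01 and b = 0.125 are the values suggested in the Scalable TCP proposal:

```python
# Window update rules, applied per received ACK or per loss event.
def standard_tcp(cwnd, loss):
    """Standard TCP: +1/cwnd per ACK (i.e. +1 packet per RTT), halve on loss."""
    return cwnd / 2.0 if loss else cwnd + 1.0 / cwnd

def scalable_tcp(cwnd, loss, a=0.01, b=0.125):
    """Scalable TCP: +a per ACK, multiply by (1-b) on loss (Kelly's constants)."""
    return cwnd * (1.0 - b) if loss else cwnd + a
```

The key difference is that Scalable TCP's recovery time after a loss is independent of the window size, whereas standard TCP needs a number of RTTs proportional to the window, which is what makes it sluggish on high bandwidth-delay-product paths.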
The current use of a single path for forwarding a given flow has well-known limitations. Recent work
(e.g. extensions of the standardized STCP) has demonstrated how coordinated congestion control
performed over a set of paths makes it possible to avoid congested links and to make the network more robust.
The use of small router buffers allows integration of streaming flows, and new scalable video codecs
and standardized rate-control protocols open up the possibility of congestion control for streaming
video applications. For multicast applications, alternative approaches based on Raptor codes have
been suggested to complement traditional TCP-solutions.
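The principle behind such fountain codes can be sketched with a toy LT-style code over integer "packets": each encoded symbol is the XOR of a random subset of source packets, and the decoder uses the classic peeling process. This only illustrates the random-XOR idea, not the (patented) Raptor construction:

```python
import random

# Toy LT-style fountain code: encoded symbols are XORs of random subsets of
# the source packets; the decoder resolves degree-1 symbols and substitutes
# back (peeling).  Any sufficiently large set of symbols suffices, so lost
# symbols never need to be retransmitted.
def encode(src, n_symbols, rng):
    out = []
    for _ in range(n_symbols):
        d = rng.randint(1, len(src))                    # random degree
        idx = frozenset(rng.sample(range(len(src)), d)) # random subset
        val = 0
        for i in idx:
            val ^= src[i]
        out.append((idx, val))
    return out

def decode(symbols, k):
    recovered = {}
    pending = [(set(idx), val) for idx, val in symbols]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for idx, val in pending:
            unknown = idx - recovered.keys()
            if len(unknown) == 1:                       # degree 1 after substitution
                i = next(iter(unknown))
                v = val
                for j in idx - unknown:
                    v ^= recovered[j]
                if i not in recovered:
                    recovered[i] = v
                    progress = True
    return recovered
```

Because reception of *any* sufficiently many symbols completes the transfer, the sender needs no per-packet feedback, which is what changes the congestion control problem for multicast.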
5.1.5. Major Open Problems
Design and evaluation of new congestion control protocols for flow-oblivious (in contrast to flow-aware) networks:
- it remains to turn optimal stable theoretical fluid flow congestion control algorithms into
effective packet-based algorithms; main labs in this area: CalTech (S. Low), Cambridge (F.
Kelly), U. Illinois (Srikant),
- design of strategies for introducing new protocols while preserving performance of legacy TCP
and allowing concurrent streaming applications,
- design and evaluation of new network centric flow-oblivious algorithms like XCP (MIT (Katabi),
USC (Faber)) and RCP (Stanford (McKeown)),
- definition and implementation of means to ensure user compliance in correctly adjusting rate
(cf. work by BT on policing TCP response to ECN, B. Briscoe))
- definition and implementation of an overload control preventing generalized throughput
degradation when demand exceeds capacity,
- Development and evaluation of mechanisms and analytical models focusing on streaming
video traffic, both inelastic and scalable video.
Design and evaluation of flow-aware congestion control:
- further elaborating the FT proposal for a lightweight flow-aware networking approach (FT in
cooperation with Polish Telecom/Warsaw UT, AGH Krakow, ENST Paris)
- designing and implementing efficient realizations of fair queueing and implicit admission control,
- evaluating the performance of end-to-end congestion control when fairness is ensured by the network,
- designing new protocols that exploit the imposed fairness (eg, revisiting packet-pair proposed
by Keshav 1994)
- security issues: evaluating vulnerabilities introduced by flow-aware mechanisms; using these
mechanisms to prevent or alleviate attacks,
- evaluating flow-aware networking solutions based on signalling as proposed by Anagran (L.
Roberts), BT (J. Adams) and Caspian Networks and partially standardized at ITU (new IP
transfer capabilities introduced in Recommendation Y.1221).
- recent work suggests router buffers should be considerably smaller than the current
"bandwidth delay product"; it remains to validate propositions for a drastic reduction to around
20 packets or, conversely, to determine what modifications to congestion control and AQM
would be required if optical technology imposes this reduction (Stanford (McKeown),
UCLondon (D. Wischik)).
- flow-oblivious and flow-aware congestion control should be generalized to allow flows to be
forwarded simultaneously over several network paths; generalization of the utility
maximization framework introduced by Kelly has been performed by Cambridge U (Kelly and
Voice), U. Illinois (Srikant), Microsoft Research (Key and Massoulié) and Purdue U (N. Shroff);
preliminary work on flow-aware multi-path routing has been performed by FT (S. Oueslati and J. Roberts).
- given the size distribution of Internet flows, it is now well-known that performance is better
when the flow having emitted the least amount of data is given maximum bandwidth on a
given bottleneck link (as opposed to fair sharing); it remains to design and implement practical
sharing schemes that realize this advantage in a network setting; cf. work by INRIA
(Avrachenkov), FT (Brown), CWI (Nunez-Queija), among others.
Network coding and fountain codes
- the requirement for retransmission of lost packets can be removed by employing recent
coding techniques like fountain codes (patented by Digital Fountain) and network coding. This
can radically change the nature of congestion control, particularly in specific environments like
wireless networks.
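The size-based sharing idea raised in the bullets above (giving maximum bandwidth on a bottleneck to the flow that has emitted the least data) can be sketched as a least-attained-service scheduler; the quantum and flow sizes below are illustrative:

```python
import heapq

# Toy "least attained service first" scheduler: at each step, serve a quantum
# to the flow that has received the least service so far.  Short flows thus
# finish ahead of long ones without knowing flow sizes in advance.
def las_schedule(flows, quantum=1.0):
    """flows: list of (name, size) pairs; returns names in completion order."""
    heap = [(0.0, name, size) for name, size in flows]  # (attained, name, left)
    heapq.heapify(heap)
    finished = []
    while heap:
        attained, name, left = heapq.heappop(heap)
        served = min(quantum, left)
        left -= served
        if left <= 0:
            finished.append(name)
        else:
            heapq.heappush(heap, (attained + served, name, left))
    return finished
```

Given the heavy-tailed size distribution of Internet flows, favouring the flow with least attained service approximates shortest-remaining-processing-time without requiring advance knowledge of flow sizes.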
5.1.6. Work in progress within the WP
Flow-aware networking (FT in cooperation with WUT, AGH, ENST)
- performance of congestion control algorithms when routers implement fair queueing
- required buffer size accounting for actual traffic characteristics in terms of flow size and rate
- design and evaluation of multi-path routing using admission control to guide path choice
- study of possible implementations of FAN mechanisms
- introducing flow-aware mechanisms in the access network (this is mainly a traffic management activity, discussed in JRA.2.2)
Evaluation of size-based scheduling and design of practical implementations (FT, INRIA)
We are currently investigating the effect of heavy-tailed file size distributions on size-based
scheduling. An important example of size-based scheduling is Two Level Processor Sharing
(TLPS). In particular, we are studying how to set the optimal threshold in TLPS separating short
and long flows in the context of heavy-tailed file size distributions. We also study the stability
of a network with TLPS nodes.
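A minimal fluid sketch of the TLPS discipline under study (jobs whose attained service is below the threshold share the server; jobs above it are served only when the high-priority level is empty) might look as follows; job sizes, the threshold and the time step are illustrative:

```python
# Toy fluid simulation of Two Level Processor Sharing (TLPS): level 1 holds
# jobs with attained service below the threshold and is itself processor
# sharing; level 2 (jobs above the threshold) is served only when level 1
# is empty.
def tlps_finish_order(jobs, threshold, dt=0.01):
    """jobs: list of (name, size); returns names in completion order."""
    state = [{"name": n, "left": s, "attained": 0.0} for n, s in jobs]
    done = []
    while state:
        hi = [j for j in state if j["attained"] < threshold]
        active = hi if hi else state          # serve level 2 only if level 1 empty
        share = dt / len(active)              # processor sharing within a level
        for j in active:
            j["left"] -= share
            j["attained"] += share
        done += [j["name"] for j in state if j["left"] <= 1e-9]
        state = [j for j in state if j["left"] > 1e-9]
    return done
```

The choice of threshold governs how aggressively short flows are protected from long ones, which is exactly the optimization question studied for heavy-tailed size distributions.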
Buffer-sizing (FT, INRIA)
We are studying required buffer size accounting for actual traffic characteristics in terms of
flow size and rate limitations, and also investigating how the problem of buffer sizing can be
formulated in the context of constrained and unconstrained optimization.
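For orientation, the sketch below contrasts the classic bandwidth-delay-product buffer rule with the small-buffer rule B = C·RTT/√N of Appenzeller, Keslassy and McKeown (Sigcomm 2004); the link figures in the test are illustrative:

```python
import math

# Back-of-the-envelope buffer sizing: classic rule B = C * RTT versus the
# Stanford small-buffer rule B = C * RTT / sqrt(N) for N long-lived flows.
def bdp_buffer(link_bps, rtt_s):
    """Classic bandwidth-delay-product buffer, in bytes."""
    return link_bps * rtt_s / 8.0

def small_buffer(link_bps, rtt_s, n_flows):
    """Small-buffer rule: divide the BDP by sqrt(number of flows), in bytes."""
    return bdp_buffer(link_bps, rtt_s) / math.sqrt(n_flows)
```

On a 10 Gb/s link with a 250 ms RTT carrying 10,000 flows, the classic rule asks for hundreds of megabytes of buffering while the square-root rule asks for a few megabytes, which is what makes optical or on-chip buffers conceivable.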
Reliable transport using Fountain codes (HUT)
We are investigating how fountain codes can be used to develop reliable transport solutions
for real-time communication over unreliable links.
Performance analysis of high-speed TCP (INRIA)
Our current research in this area is on intra- and inter-protocol fairness for high-speed TCP
versions. In particular, we are analyzing fairness issues of Scalable TCP and TCP Westwood+.
Both these TCP versions are very promising for future high-speed networks.
Dynamic modelling and analysis of congestion control (KTH)
Development of simple models that reflect the ‘full’ protocol dynamics (including estimators,
filters, etc) and allow a reliable model-based design.
Modeling and analysis of ACK-clock/link interactions in window-based congestion control
Estimation issues that arise when estimating the network state with implicit information
available at the source.
Congestion control for rate-adaptive streaming (NTNU)
Interface between scalable video coders and the transport level congestion control protocol.
Development of congestion control techniques optimised for scalable video applications with
respect to perceived quality
Development of appropriate models for studying the joint behaviour of different congestion
control mechanisms for elastic and streaming traffic, e.g. based on the fluid flow approach.
Congestion control for multi-homed hosts (Torino)
Currently, we are looking at transport protocols that support multihoming in network
scenarios with overlapping wireless coverages. In our view, this is an original approach to the
Always Best Connected (ABC) issue, allowing users a transport-layer, end-to-end estimation
of which network to use at any time.
5.1.7. Priorities
The current priorities of the work package members are reflected by the work in progress. In
addition, we feel the need to prioritize the following areas:
Flow-oblivious congestion control:
- definition and implementation of means to ensure user compliance in correctly adjusting rate (cf. work by BT on policing TCP response to ECN (Briscoe))
- definition and implementation of an overload control preventing generalized throughput degradation when demand exceeds capacity
Flow-aware congestion control:
- implementation of the mechanisms in routers
- develop a flow-aware architecture integrating constraints and advantages of optical technology
Multi-path and multi-network routing
- choice of possible paths by the network (two or more hops? disjoint paths?)
- standardize use of IPv6 flow label for load balancing over possible paths (hash function designating path for a given flow or sub-flow includes flow label in argument)
- definition of network selection strategies that weigh bandwidth, delay, monetary cost, and reliability in ABC problems.
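The flow-label load-balancing item above can be sketched as follows; the hash construction is an assumption for illustration, not a standardized mapping:

```python
import hashlib

# Sketch of flow-label-based load balancing: the path for a flow is chosen by
# hashing the addresses together with the IPv6 flow label, so all packets of
# one (sub-)flow stay on one path while different flow labels can spread
# sub-flows over several paths.
def select_path(src, dst, flow_label, n_paths):
    """Deterministically map (src, dst, flow_label) to a path index."""
    key = f"{src}|{dst}|{flow_label}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths
```

Because the mapping is deterministic, packet reordering within a sub-flow is avoided, while a multi-path source can still obtain path diversity simply by varying the label.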
5.1.8. Annex : list of participants
France Telecom (FT)
Helsinki University of Technology (HUT)
Royal Institute of Technology (KTH)
K. H. Johansson
Norwegian University of Science and Technology (NTNU)
P. J. Emstad
Politechnico di Torino (Torino)
1.6 WP.JRA.2.2 - Traffic Management in a Multi-provider Context
Paulo Rogério Pereira, Jim Roberts and Peder J. Emstad
Core Network Fixed, Access Network, IP Networking, Metro Network
5.1.11. Scope of the domain
This workpackage concentrates on existing service models like Intserv and Diffserv and investigates
how they can be used separately or in combination to meet user requirements for end to end QoS.
The traffic management facilities brought with MPLS and proposed new service models like flow-
aware networking are also important topics.
Of particular interest is the study of admission control schemes acting at a range of traffic
granularities. In Intserv the entity in question is the microflow defined by a set of traffic specifications
and performance requirements. Network calculus can be employed for the deterministic guarantees
of guaranteed service while more flexible measurement-based approaches are recommended for the
soft QoS guarantees of controlled load. In Diffserv, reservations are made for broadly defined
aggregates for the traffic classes corresponding to different per-hop behaviours. MPLS adds
considerable flexibility since LSPs can be established for individual communications or a range of
traffic aggregates. A common concern is the appropriateness of the token bucket traffic specification
as a useful traffic characterization (see JRA Activity 5, WP 5.1 on traffic characterization). This WP
also covers the evaluation of the feasibility, efficiency and appropriateness of the different
approaches existing in the literature and incorporated in the network architectures proposed in JRA
Activity 1.
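The token bucket specification discussed above bounds a conforming flow to at most r·t + b bytes in any interval of length t; a simple conformance check over a packet trace might look like this (parameter values in the test are illustrative):

```python
# Token bucket conformance check: a bucket of depth b bytes refills at rate
# r bytes/s; a packet conforms if enough tokens are available when it is sent.
def conforms(trace, r, b):
    """trace: list of (time_s, size_bytes) in increasing time order."""
    tokens, last_t = b, None
    for t, size in trace:
        if last_t is not None:
            tokens = min(b, tokens + r * (t - last_t))   # refill, capped at b
        last_t = t
        if size > tokens:
            return False                                  # non-conforming packet
        tokens -= size
    return True
```

The concern raised in the text is precisely whether such an (r, b) envelope is a faithful description of bursty application traffic, or merely a convenient contractual fiction.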
Pricing is recognized to be a key to efficient traffic management. Different forms of congestion
pricing allow optimal use of scarce resources by ensuring that these are attributed to users with
highest utility. It is necessary, however, in a commercial setting, to reconcile such schemes with a
requirement for the price to bring a return on investment. This WP also covers this dual aspect of
proposed pricing schemes with due regard to the important question of user acceptability.
Service differentiation generally implies both price differentiation and quality differentiation. It is
important therefore to be able to achieve predictable levels of QoS in order to justify such class of
service differentiation. Another research topic for this WP is how queue management schemes like
priority queuing, WRED and class-based fair queuing can be used to this effect. An important
objective is to derive the rules allowing appropriate parameter settings for the mechanisms of a given QoS architecture.
Evaluation of the traffic performance relation for Internet traffic suggests the requirement for a flow-aware
network architecture capable of performing admission control and routing at the level of a
user-defined flow. Scalability considerations and the need for incremental implementation impose a
lightweight architecture based on "on the fly" flow identification and implicit admission control. One
objective of the WP is to evaluate the performance of such a QoS solution and to study its feasibility.
An overriding concern is to establish mechanisms, protocols and standards allowing end to end QoS
provision in a multi-provider, multi-technology network. The present work package will endeavour to
provide the tools to appraise and evaluate different architectural proposals made in JRA Activity 1.
5.1.12. Mid-term/long term evolution in this area
The following trends are particularly significant for traffic management:
- increasing bit rates in the access and core network; traffic management depends on the
relative capacity limits in the access and core of the network; currently, the relatively low
access rate limits the possible impact on congestion of user traffic; if the access rate
increases faster than core bandwidth, more precise traffic management may become necessary;
- traffic management will have to take account of new types of traffic; these are largely
unknown but we can already imagine a strong increase in machine to machine (M2M)
applications and the generalization of sensor networks; streaming and gaming applications
are also likely to expand, possibly reducing the current preponderance of elastic traffic.
- it is still unclear whether the network will offer a range of services, as in the current triple
play model (voice, video and data as distinct services managed by the access network
provider), or just constitute a ubiquitous support for services created at the edge or as
overlays (like Skype or current servers of digital videos for playback); facility of traffic
management is a very significant issue in determining the outcome of this opposition.
- DiffServ is believed to be the dominating QoS architecture for the future Internet, because of
its scalable, simple and robust design. However, its success depends on the resolution of
some open problems related to end-to-end QoS provisioning.
- traffic management in the access network is becoming increasingly flow-aware with the
development of flow-aware bandwidth managers as add-on appliances or being incorporated
in routers (Cisco with P-Cube, Caspian flow-aware routing); the scope for traffic management
clearly depends on the granularity at which traffic entities are defined (e.g. broad Diffserv
aggregates or application layer flows).
- developments in the core network will see an increasing use of optical switching bringing
specific traffic management issues (e.g. relative performance of optical circuit, burst and
packet switching); the distribution of traffic management functions over layers 2 and 3, with
increasing penetration of carrier-class Ethernet technology and the generalization of
MPLS/GMPLS, remains to be clarified.
- wireless introduces its own specific and very difficult traffic management issues; these are
mainly dealt with in WP JRA.2.4; however, it is important to understand how the limitations
of the wireless medium will impact user expectations for quality of service in an envisaged
"seamless network"; for example, deterministic bounds of Intserv are clearly meaningless
when capacity at the physical layer is a random quantity.
5.1.13. Major Open Problems
Inter-provider QoS (using a Diffserv-like architecture) is a major open problem. The following issues
have recently been highlighted within the Communications Futures Program
(http://cfp.mit.edu/groups/internet/qos.html) where a working group has been considering inter-
provider issues (prominent participants: MIT (Clark), Cisco (Davie), BT (Briscoe)):
- providers need to agree on a minimal common set of service classes;
- SLA performance metrics (e.g. loss, rate, jitter) must be standardized;
- it is necessary to specify how SLA compliance can be measured (frequency, accuracy, by whom);
- providers are sensitive about revealing data on their network operations but it is necessary
to communicate measurement data on realized performance;
- means are required to allow users to know what service classes and what assurances are
provided by alternative providers;
- end-to-end performance targets must be apportioned to individual providers along a path.
For any QoS architecture (Diffserv in particular) it is necessary to provide engineering rules allowing
performance targets to be satisfied by provisioning or traffic controls like admission control and
shaping. Study of the relation between traffic demand (volume and characteristics), capacity
(bandwidth, buffer size, sharing mechanisms) and realized performance (loss, delay, throughput) is
essential for defining such rules, but rarely performed.
Meeting QoS targets is another major open problem. Besides Inter-provider and provisioning issues,
some other relevant issues are:
- Diffserv is an aggregate based architecture and treats all flows belonging to an aggregate in
the same way. In this way, the QoS that can be given to an individual flow may be too coarse.
Networked multimedia applications are so diverse that their QoS requirements may be far
from each other. Flow-aware networking is a possible solution to provide QoS on a flow-by-flow basis.
- the use of a qualitative or proportional service model, where there is a QoS relation between
the different classes, instead of quantitative service guarantees. The proportional model is
controllable, consistent and scalable, and avoids admission control, signalling and traffic policing,
but end-to-end QoS, the combination of qualitative and quantitative service guarantees, and the
feasibility of the differentiation for a given traffic load are still research issues.
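The proportional model mentioned above is commonly realized in the proportional-differentiation literature with a Waiting Time Priority (WTP) scheduler; a minimal sketch of its selection rule follows, with class names and delta values chosen purely for illustration:

```python
# Waiting Time Priority (WTP) selection rule: at each service instant, serve
# the head-of-line packet with the largest normalized waiting time w / delta.
# Classes with small delta accumulate normalized waiting time faster, so
# class delays are steered toward the ratio d_i / d_j = delta_i / delta_j.
def wtp_pick(now, heads, deltas):
    """heads: {cls: arrival time of head packet}; returns the class to serve."""
    return max(heads, key=lambda c: (now - heads[c]) / deltas[c])
```

No admission control or per-flow state is required, which is the appeal of the proportional model; the open question noted above is whether the target ratios remain feasible under arbitrary traffic loads.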
Flow-aware networking has been proposed as a means of facilitating traffic management in the
network core (cf. work of Oueslati and Roberts, FT). It remains to fully explore the following issues:
- efficient realization and implementation of the fair queuing and implicit admission control
- design of an efficient measurement based admission control accounting for the range of
possible traffic characteristics;
- efficient detection of the end of a flow, by some timeout and perhaps also some
consideration as to whether or not a long-lasting flow should go through a new admission control decision;
- security issues: evaluating vulnerabilities introduced by flow-aware mechanisms, using these
mechanisms to prevent or alleviate attacks;
- use of flow-aware networking in conjunction with other traffic management mechanisms.
An alternative flow-aware approach based on the use of signalling and explicit resource reservation
has recently been included in ITU Recommendation Y.1221. It is necessary to explore the traffic
management advantages and limitations of this approach coming from Anagran (L. Roberts), BT (J.
Adams), Caspian Networks.
Admission control is an issue in almost all QoS architectures:
- using signalling to convey information for admission control imposes significant constraints
on the network and inhibits the scalability of its implementation. Implicitly using the IP
header to convey information about the flow peak rate would eliminate the need for signalling,
increase scalability and may significantly reduce the complexity of the admission control.
Currently there are no unused bits in the IP header. Standardization is needed. A proposal to
base admission control at the edge of a domain on received explicit congestion notification
(ECN) marks has been described in Internet drafts (authored by BT (Briscoe) and Nortel (Kwok)).
- techniques for providing scalable admission control are still a hot topic. Measurement Based
Admission Control (MBAC), with its several advantages, has proved to be an important technique
in providing stochastic service guarantees. For MBAC, more research is needed in finding
good techniques that work well with low flow aggregation (e.g. at the network edge, when
the connection is wireless or in a case where one service class is given a small fraction of the
total bandwidth). In these cases, the utilization achieved by peak rate allocation may actually
be higher than that achieved by MBAC. Because flows may be bursty and accurately
describing a peak-rate may be difficult a priori, MBAC is still useful in situations with low flow
aggregation, thus finding optimal solutions for these situations is of great importance.
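A deliberately minimal MBAC sketch in the spirit of this discussion: admit a new flow only if the measured aggregate load plus the flow's declared peak rate stays below a utilization target. Real schemes use smoothed load estimates and safety margins; every constant here is illustrative:

```python
# Minimal measurement-based admission control: compare the measured load
# plus the candidate flow's peak rate against a target fraction of capacity.
# Real MBAC uses EWMA-smoothed measurements and confidence margins.
def mbac_admit(measured_load_bps, peak_rate_bps, capacity_bps, target=0.9):
    """Return True if the flow can be admitted under the utilization target."""
    return measured_load_bps + peak_rate_bps <= target * capacity_bps
```

The low-aggregation problem discussed above shows up directly here: when few flows share the capacity, the measured load fluctuates widely, and a naive comparison like this can perform worse than simple peak-rate allocation.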
Traffic management in the access network brings specific opportunities and constraints. Issues include:
- mechanisms necessary for providing triple play operator controlled services (admission
control, multicast, scheduling,...);
- comparative evaluation from the traffic management perspective of provider controlled
services (like IPTV) and edge-based service provision, via P2P overlays, for example;
- use of flow-aware networking in the access network building, for example, on the use of the
layer 7 bandwidth management approaches of companies like Packeteer, Allot, Ellacoya,...
Traffic management in wireless networks (should be covered under JRA.2.4):
- cross-layer design of QoS mechanisms - accounting for physical layer, MAC layer, etc.;
- definition of SLAs accounting for the non-deterministic nature of wireless communication at
the physical layer;
- how to define end-to-end QoS when a network path can "seamlessly" include one or more wireless segments.
5.1.14. Work in progress within the WP
The following is a summary of FT work on traffic management:
FT work on flow-aware networking in the core covers the following topics:
- definition of an efficient robust measurement based admission control algorithm;
- promoting the implementation of flow-aware mechanisms: algorithms, demonstrator;
- accounting for technological evolutions: use of layer 2 technologies Ethernet and (G)MPLS
with FAN, impact of optical switching.
FT work on traffic management in the access network covers the following topics:
- use of flow-aware techniques for scheduling traffic on the user's access line or "last mile"
- evaluation of provider controlled triple play architectures;
- comparative evaluation of edge based service provision using P2P overlays, for instance;
- analysis of traffic characteristics and realized network performance through the analysis of
ADSL user trace data.
FT work on traffic management in wireless networks covers the following topics (this work is
reported under WP JRA.2.4):
- understanding the demand-capacity-performance relation in wireless networks, especially in
the case of multi-cell, multi access point, and multi-hop networks;
- devising and evaluating centralized and distributed resource sharing mechanisms capable of
meeting required performance of streaming and elastic applications.
NTNU has proposed a new framework, implicit admission control (iAC) [jiang06], to be used in the
DiffServ environment, which addresses many of the shortcomings of DiffServ. The key idea of the iAC
approach is to let each packet of a flow carry both its service requirement and (possibly coarse)
traffic information. It is suggested to use the two bits currently in use for Explicit Congestion
Notification. This will not conflict with the use of ECN, because traffic that uses these bits does not
need admission control. Or, put another way, traffic that goes through admission control does
not need the use of ECN. As such, together with local service and traffic information, a router can
perform admission control for the flow without the need for an explicit signalling protocol. To further
simplify admission control, [jiang06] also introduces two flow aggregation approaches, one for EF
and one for AF, to ensure that a newly admitted flow does not adversely affect the agreed QoS
guarantees of any existing EF/AF flow. Currently NTNU is testing many of the ideas of the iAC with a
simulation study. We are investigating new traffic characteristics to be used for MBAC and the design
of on-line algorithms for measuring such traffic characteristics.
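Purely as a hypothetical illustration of the iAC idea (the bit-to-rate mapping below is invented for this example and is not part of [jiang06]), two header bits could encode a coarse peak-rate class that a router checks against its remaining capacity:

```python
# HYPOTHETICAL illustration of iAC-style in-band admission control: two bits
# (the positions normally used for ECN) carry a coarse peak-rate class, so a
# router can admit or reject a flow without a signalling protocol.  The
# bit-to-rate mapping is invented for this sketch.
RATE_CLASS_BPS = {0b00: 64_000, 0b01: 256_000, 0b10: 1_000_000, 0b11: 4_000_000}

def admit(bits, reserved_bps, capacity_bps):
    """Admit the flow whose packets carry `bits` if its coarse peak rate fits."""
    peak = RATE_CLASS_BPS[bits & 0b11]
    return reserved_bps + peak <= capacity_bps
```

The coarseness of a two-bit traffic description is exactly why [jiang06] pairs the scheme with flow aggregation, so that per-flow imprecision does not erode the aggregate guarantees.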
Inesc-ID is doing research on service level agreements, the use of Multi-protocol Label Switching
Constraint-based routing (MPLS-CR) for service differentiation, QoS provisioning, proportional service
differentiation and QoS control for video streaming.
[jiang06] Y. Jiang, A. Nevin, P. J. Emstad. Implicit Admission Control for a Differentiated Services
Network. The Second EuroNGI Conference on Next Generation Internet Design and Engineering, April 2006.
5.1.15. Priorities
All the topics listed in the open problems section are clearly priorities for the authors of this text. In
items merit further attention:
- Providing end-to-end QoS in a DiffServ network is the ultimate goal. There would be a great benefit
if additional bits to be used for traffic description were standardized, as noted in the open
problems section and in [jiang06].
- The characterization of traffic and finding metrics to be used for MBAC.
- Pricing and its relation to traffic management, requiring the standardization of service classes
and SLA metrics, as noted in the open problems section.
- The relation of traffic management to other WPs, namely congestion control in
WP.JRA.2.1, traffic engineering in WP.JRA.2.3, and wireless and sensor networks in WP.JRA.2.4.
5.1.16. Annex : list of participants
Partner Partner Name Participant Name
11 FT (France Telecom) Jim Roberts
11 FT (France Telecom) S. Oueslati
11 FT (France Telecom) P. Olivier
11 FT (France Telecom) R. Ben-Abdeljelil
11 FT (France Telecom) J. Augé
11 FT (France Telecom) P. Brown
11 FT (France Telecom) J-L. Costeux
11 FT (France Telecom) D. Collange
11 FT (France Telecom) S. Petrovic
32 NTNU Peder J. Emstad
32 NTNU Yuming Jiang
32 NTNU Anne Nevin
37 Inesc-ID Paulo Rogério Pereira
37 Inesc-ID Augusto Casaca
37 Inesc-ID Mário Serafim Nunes
37 Inesc-ID Teresa Vazão
37 Inesc-ID António Grilo
37 Inesc-ID Fernando Corte Real
37 Inesc-ID José Santiago
37 Inesc-ID Pedro Estrela
37 Inesc-ID Ricardo Pereira
37 Inesc-ID António Varela
1.7 WP.JRA.2.3 - Inter and intra domain Traffic Engineering for cost effective
Paola Iovanna, Roberto Sabella, Gianpaolo Oriolo and Jorma Virtamo
Core Network, IP networking, Metro Network
5.1.20. Scope of the domain
This workpackage addresses the main issues related to traffic engineering in convergent networks.
Multi-service networks will allow full convergence of fixed/mobile services, independently of which
access network is considered. The first important step that moved the evolution from traditional
telecom networks towards future generation networks is the introduction of some intelligence in the
network by means of GMPLS-based control. GMPLS-based control is a valid means to move the
classical hierarchical infrastructure toward a unique multi-service core network but, although in
theory GMPLS control enables the concept of an intelligent network, a lot of work remains to be
done to assess automation in such an infrastructure. In order to react promptly to traffic changes,
while preserving adequate QoS, the network of the future should apply full automation of traffic
engineering (TE) according to a distributed approach. In such a context each element of the network
has knowledge of the whole network and, as in a typical Internet-like approach, can act
automatically. The ultimate vision of the future network is to introduce some automatism in the
control/management planes that will allow future networks to react automatically to traffic changes
in order to optimize their resources. In fact, the realization of self-adapting networks, in which
intelligence is introduced at the control/management planes, will allow the network to react
promptly to traffic changes while fulfilling challenging requirements in terms of QoS and resilience.
The first step toward such a challenging goal is to provide an integrated TE solution where,
instead of investigating routing, resiliency, and bandwidth management mechanisms as independent
and autonomous topics, they are integrated in a single solution to be sold to network operators as a
whole. In this context, joint activity will be carried out with JRA3.
The second step is to introduce automation of such an integrated TE solution in order to apply self-
adapting concepts. The cost of automation in terms of performance, feasibility, and scalability will be
evaluated by measurements obtained using test beds and tools, in joint activity with WP.JRA.4.
Another important topic to be considered in TE solutions is the estimation of the traffic matrix
inferred from measurements. This is a useful element not only for planning purposes but also for
triggering new configurations in an operating network. Specifically, traffic variability in future IP
networks is much larger than in the more stable telephone network. In this context, link
measurements can be supplied with a time granularity as fine as five minutes. As a consequence,
new methods are required to infer the traffic matrix. Specific routing approaches, such as “robust
routing”, represent a consistent method for designing solution strategies and algorithms for routing
problems where we are not given a single traffic matrix but, instead, a set of traffic matrices.
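The difficulty of inferring the traffic matrix from link measurements can be illustrated with a minimal sketch (the three-node topology and the demand values are invented for illustration): the linear map from origin-destination demands to link loads is many-to-one, so link counts alone cannot identify the matrix and extra information is needed.

```python
# Nodes a-b-c in a line: two bidirectional links; directed OD pairs are
# routed on the obvious shortest path.
routes = {  # OD pair -> list of directed links traversed
    ("a", "b"): [("a", "b")], ("b", "a"): [("b", "a")],
    ("b", "c"): [("b", "c")], ("c", "b"): [("c", "b")],
    ("a", "c"): [("a", "b"), ("b", "c")],
    ("c", "a"): [("c", "b"), ("b", "a")],
}

def link_loads(traffic_matrix):
    """Forward model: each link load is the sum of the OD demands routed
    over it (the 'y = A x' relation used by traffic matrix estimators)."""
    loads = {}
    for od, demand in traffic_matrix.items():
        for link in routes[od]:
            loads[link] = loads.get(link, 0) + demand
    return loads
```

Two different matrices (one unit a→b plus one unit b→c, versus a single unit a→c) produce identical link loads, which is exactly why estimators must add a prior such as a gravity model or covariance information.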
As a further evolution step towards FGN, this WP will also deal with TE solutions to be applied to
edge core network nodes when layered architecture principles are adopted. According to standards
such as Megaco (in the IETF), Tiphon (in ETSI) and the Multiservice Switching Forum (MSF), future
core network nodes can be organized in WAN/MAN/LAN cluster networks where functionality and
nodes are arranged in layers according to their specific areas of use, such as services, connectivity
and control. Ethernet technology based on a GMPLS control plane will be used within such clusters.
Interworking issues between core edge network nodes (based on Ethernet technology) and internal
core nodes (based on MPLS) will also be investigated.
A key issue that will be addressed within this WP is inter-domain TE. Delivering end-to-end QoS
across the Internet to support multi-service traffic requires coordination between the providers of
the different networks that the traffic has to cross. Typically the traffic will cross several different
domains, each operated by a different network provider. Much work has been conducted in the past
few years on intra-domain traffic engineering for QoS; it is the objective of this workpackage to
focus on resolving issues associated with QoS-aware inter-domain TE.
5.1.21. Mid-term/long term evolution in this area
The following trends are particularly significant in providing traffic engineering solutions for future
networks:
The possibility of having integrated traffic engineering solutions that support different functions,
such as path (circuit) provisioning, dynamic routing, QoS support, priority handling, bandwidth
management, protection and restoration, as parts of a unique solution. This requires defining the
architecture of the final solution, composed of different building blocks, and analyzing the
interactions among the different components.
This approach requires tight collaboration with the JRA 3 activity, where optimization issues will be
analyzed for investigating routing, resiliency, and bandwidth management mechanisms in an
integrated fashion.
Innovative routing to handle point-to-multipoint, multipoint-to-point and multipoint-to-multipoint
label switched paths, and a comparison with TE in peer-to-peer networks, will also be analyzed.
In order to allow the evolution towards all-IP networks, automation of such integrated TE solutions
will be analyzed in this workpackage to apply self-adapting concepts. Different time scales will be
considered to address management and control mechanisms. Measurements on tools and test beds
will be considered in order to analyze the feasibility and scalability of the solutions when applied in
real nodes using standard protocols, in joint activity with JRA4. Several approaches will be
considered, including solutions based on the use of mobile agents.
Particularly important is the activity on traffic matrix estimation, which will be part of the integrated
TE system. Suitable routing approaches, such as “robust routing”, will be considered in order to deal
with a large number of traffic matrices.
A further step of the activity carried out in this WP will extend the TE solutions considered for
internal core network nodes to edge core network nodes. According to the layered network
architecture principle for edge core network nodes, functionality and nodes are arranged in layers
according to their specific areas of use, such as services, connectivity and control. For the sake of
simplicity, but without loss of generality, the edge nodes of the core network could be organized in
clusters that represent one or more virtual nodes. One of the objectives of this WP is to analyze and
define the architectural and technological issues for such a cluster when Ethernet technology is
used. In order to extend TE solutions to these node clusters, GMPLS control for Ethernet will be
investigated. Moreover, interworking issues between core edge network nodes (based on Ethernet
technology) and internal core nodes (based on MPLS) will be investigated.
This activity will be carried out jointly with WP.JRA3.1/3.3 and 4, and the software developed in the
test bed used to test the feasibility of the solutions will be made available to EuroNGI through IA2.2.
Concerning inter-domain TE, the workpackage will investigate new models for providing inter-
domain QoS: cooperative behavior between network providers, partitioning of QoS delivery
responsibility among several network providers, overlay TE approaches including inter-domain MPLS,
offline TE optimization algorithms and QoS enhancements to BGP. The workpackage will have within
its remit both offline and online (dynamic) approaches to inter-domain TE. The WP will also include
investigation of the impact of contractual aspects, such as service level specifications, on inter-
domain TE.
5.1.22. Major Open Problems
It has been discussed so far that advanced traffic engineering solutions could allow the realization of
self-adapting networks, that is, networks able to automatically react and adapt to cope with a
dynamically varying traffic demand while optimizing their resources.
However, several hot topics remain open problems that are worth mentioning in this context and
that will surely be investigated by the research community worldwide in the years to come.
The first aspect is that networks will be multi-layered. This means that when approaching key issues
such as routing, protection and restoration strategies, pre-emption mechanisms and so forth,
designers should take this aspect into account to better exploit the resources. However, the
technical and scientific literature is still quite divided. In fact, most of the optical networking
literature of the last decade has treated in depth the routing and wavelength assignment (RWA)
problem in the optical layer of the network, independently of which layer is above it. Another
relevant portion of the optical networking literature is devoted to protection and restoration issues
in optical networks. Even in those cases, most papers consider only the optical network layer. In
some cases the networking aspects of the optical layer are treated taking into consideration
transmission aspects that impact the performance of the optical network.
In the last few years, several papers have dealt with specific TE functions such as routing,
wavelength assignment, and pre-emption algorithms in an optical layer, possibly overlaid by an
electrical layer. The “multi-layer” aspect is thus being considered in some way. In those papers the
two sub-problems of i) design of the logical topology of the optical network (i.e. the set of
wavelength paths), and ii) routing of the data flows at the IP/MPLS layer onto the logical topology,
are solved separately (e.g. in two different steps). Instead, a multi-layer approach would consist in
simultaneously solving these two sub-problems.
On the other hand, the literature covering networking in IP-based networks often considers the
optical layer as a big pipe network providing the upper layers with plenty of capacity.
Advanced traffic engineering strategies should instead take into account, to some extent
simultaneously, the different network layers and the different classes of service to be supported,
with different levels of QoS and resilience.
In the following, the different issues are briefly reviewed.
A. Routing in a multi-layer fashion
One of the most significant features of the GMPLS paradigm is the possibility to control the entire set
of network resources simultaneously, through one control plane. This means that the control plane
knows all the information about the network nodes, both optical (e.g. OXCs) and electrical (e.g.
IP/MPLS routers), and all the information about the status of occupancy of each link, at either the
optical or the electrical layer. This practically implies the possibility of making routing decisions in a
multi-layer fashion, with the significant advantage of optimizing network resources in terms of link
capacity, node throughput, number of data flows to be controlled and so forth.
While this kind of statement is often cited, it is not so well known how to do this in an effective and
practical way. This is still a hot issue that many researchers are considering. To address this point it
is necessary to set the architectural problem in the right way and to approach the routing models
and related algorithms by exploiting at best the methods of modern operational research. For sure,
multi-layer routing will remain a hot issue in the coming years: many steps ahead need to be made.
B. Off-line or on-line routing?
Another lively debate concerns the approach to routing and its related pros and cons.
There are basically two approaches. The off-line approach offers the possibility of exploiting the
power of operational research algorithms that allow the routing solution to be found in a nearly
optimal way. This method, combined with a multi-layer scheme, could allow optimizing network
resources for a given traffic demand. For these reasons, off-line methods are particularly suitable
for configuring an entire set of data paths, at different layers, in an optimal way, for a given network
topology and a given traffic demand matrix expressed, for instance, in terms of a set of data flows to
be accommodated, with given bandwidth attributes, for each pair of network nodes. The entity that
could perform such off-line path provisioning could be a management system or a similar network
unit. Of course, this type of operation can be done each time the traffic demand changes
significantly in a semi-permanent way. It is not suited to handling fast traffic variations or
unpredictable traffic demand, nor to promptly accommodating new requests “on demand”.
Conversely, the on-line approach is suitable for reacting promptly to traffic changes, in a dynamic
fashion. To work properly, it needs routing protocols that flood updated information about the
status of links and nodes throughout the network, in order to take appropriate routing decisions. Of
course, the price to be paid when on-line dynamic routing algorithms are used lies in a sub-optimal
utilization of network resources with respect to off-line methods. Furthermore, if the routing time is
comparable with the inter-arrival time of new requests, instability could arise.
It is clear that the choice of the right routing strategy depends on the network context in which it
has to operate. Intermediate or hybrid approaches are often mentioned in the literature, even
though few well-assessed solutions exist. Hybrid solutions in the framework of next generation
Internet networks remain a challenging open issue to be addressed.
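The off-line/on-line trade-off can be illustrated on a toy instance (the two-path topology, capacities and demand sequence are invented for illustration): exhaustive off-line assignment finds the balanced solution, while greedy on-line routing of the same demands ends up with a higher maximum load.

```python
from itertools import product

demands = [5, 5, 4, 3, 3]   # illustrative bandwidth requests, arriving in order
paths = ["A", "B"]          # two disjoint paths, each of capacity 10

def online_greedy(demands):
    """On-line: route each arriving demand on the currently least-loaded path."""
    load = {p: 0 for p in paths}
    for d in demands:
        p = min(paths, key=lambda q: load[q])
        load[p] += d
    return max(load.values())

def offline_optimal(demands):
    """Off-line: enumerate all assignments (tractable only on tiny instances;
    real off-line TE uses operational research methods instead)."""
    best = float("inf")
    for assign in product(paths, repeat=len(demands)):
        load = {p: 0 for p in paths}
        for d, p in zip(demands, assign):
            load[p] += d
        best = min(best, max(load.values()))
    return best
```

Here the off-line optimum splits the demands as {5, 5} and {4, 3, 3} for a maximum load of 10, while the greedy on-line rule, lacking knowledge of future arrivals, reaches a maximum load of 11.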
C. Pre-emption and re-routing strategies
The fact that new generation infrastructures must be multi-service and multi-class, and thus must
on one hand guarantee different levels of QoS and on the other hand make effective use of network
resources, leads to another relevant subject: the possibility of pre-empting lower priority data flows
to the advantage of higher priority ones. Moreover, if the goal of not losing traffic while making
effective use of resources is mandatory, it is necessary to foresee the possibility of re-routing traffic.
Sometimes this latter topic is regarded as “bandwidth management” or, in a broader sense, “traffic
engineering”.
The challenging issues in this case are: i) what are the criteria that lead to pre-empting data flows?
ii) what price can be paid in terms of data flows to be pre-empted and possibly lost? Does pre-
empted traffic have to be re-routed elsewhere? How should traffic segregation be accomplished?
How can these issues be applied in a multi-layer scenario? The answers to these questions represent
several open issues that have to be addressed.
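A minimal sketch of one possible pre-emption criterion, assuming the RSVP-TE convention that lower numeric priority values are more important (the victim-selection rule shown here, lowest priority first and smallest flows first, is just one illustrative policy, not one prescribed by this WP):

```python
def select_preemptions(lsps, needed_bw):
    """Choose which established LSPs to pre-empt in order to free `needed_bw`
    bandwidth units, sacrificing the least important LSPs first (highest
    numeric priority, RFC 3209 style) and, within a priority level, the
    smallest flows first. Returns the victim names, or None if pre-empting
    everything would still not suffice."""
    victims, freed = [], 0
    # lower setup priority number = more important, so pre-empt high numbers first
    for lsp in sorted(lsps, key=lambda l: (-l["priority"], l["bw"])):
        if freed >= needed_bw:
            break
        victims.append(lsp["name"])
        freed += lsp["bw"]
    return victims if freed >= needed_bw else None
```

Such a rule answers question i) above in the simplest way; the open research issues lie in quantifying the cost of the pre-empted flows and in deciding where to re-route them.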
D. Protection and restoration strategies
The considerations made so far regarding routing strategies and related algorithms essentially apply
to this topic as well.
The multi-service mandate of next generation networks means that the network has to offer
different levels of protection and/or restoration against node or link failures. If on one hand the
network will be required to provide carrier-class performance to mission-critical services (consider
for instance the 50 ms recovery time guaranteed by SDH systems in case of failure), on the other
hand there is no point in providing the same level of performance for low priority traffic (e.g. best
effort). Even in this case, network resources can be utilized to offer several levels of protection and
restoration.
All the questions of the previous sections remain valid. In particular, we could ask: what traffic
segregation? What levels of protection? How should protection and restoration be distributed over
the different network layers? Is there any escalation strategy from one layer to another in case of
failure? All these issues impact the network architecture significantly.
It is certain that there is a lot of research work to be done to provide convincing answers and
practical solutions for real systems.
E. Integrated traffic engineering solutions
Most of the literature addresses specific topics, such as the ones mentioned above, individually.
What are really needed are integrated solutions that incorporate such functions consistently. For
instance, the multiplicity of classes of service requires a variety of levels of QoS and, as a
consequence, several routing and resiliency mechanisms. This means that the integrated TE
solution, which allows the network to self-adapt, has to incorporate different functions that operate
concurrently. Clearly, this increases the complexity of the solutions, because different and
heterogeneous inputs and timescales have to be handled. Thus, the important issue is the
performance of the integrated system rather than the performance of each individual function
considered in isolation.
F. What traffic engineering is really needed?
Last but not least, it is necessary to find a reasonable and convincing answer to the question above.
Beyond the charm of the topics mentioned above, which push many researchers to propose ideas
and solutions, the most important aspect lies in finding strategies and related algorithms that can be
concretely applied in real networks to solve real problems of carriers and operators, and that
ultimately lead to saving CAPEX and OPEX, or even better to earning more money. Actually,
fascinating solutions are too often too complex to be applied and lead to small gains. Whenever one
finds a TE solution, one should ask: why is this solution more convenient than an over-provisioning
solution? How much is gained by using it? How complex are the realization and operation of such a
solution?
Even though these questions seem less comfortable and perhaps less challenging, from an academic
point of view, than the ones of the previous sections, they should be taken seriously into account
when planning a research activity. A good answer to them could have a dramatic impact in
convincing operators to adopt systems employing traffic engineering.
Moreover, there are other practical open problems that need to be addressed:
- Integrated TE solutions require proper platforms to be tested. Simulation environments, such
as Network Simulator, are not usually suitable for simulating complex TE functions.
- The feasibility of the solutions should also be tested within test beds with real nodes, in order
to analyze the impact on signaling and routing protocols in terms of scalability and
performance.
- Providers are sensitive about revealing traffic data on their networks; this makes it difficult to
derive suitable traffic models.
- Network operators require that TE solutions can be integrated into their products according to a
smooth evolution path in order to save cost. This requires that the architecture of the
solutions allows this approach.
5.1.23. Work in progress within the WP
CoRiTeL, URM2 - Methods to automate TE functions based on traffic measurements are
under investigation. These methods trigger TE actions so as to track traffic variations,
accommodating the traffic while minimizing losses and/or reducing the assigned resources in
order to save bandwidth or, in general, network resources. Traffic models for mobile and
fixed traffic will be considered in order to provide a suitable environment to test the
solutions.
HUT - The work in this WP will focus on traffic matrix estimation. So far the major milestones
have been: a novel estimation approach was developed in a joint paper with Euro-NGI partner GET
(ENST Bretagne). This quick method uses the link covariance matrix as the additional information.
Compared to the maximum likelihood approach the method is not quite as accurate but is much
faster. A theoretical result yielding the statistical Cramér-Rao lower bounds for the variance of
estimates was developed in a joint paper with Euro-NGI partner GET (ENST Bretagne) as well as
Universidad de la República, Uruguay. A Licentiate Thesis giving a comprehensive overview of the
current methods in traffic matrix estimation was written. In the future the following issues will be
addressed. The majority of current methods can be divided into two basic groups based on the
additional information used in the estimation: the gravity-model-based methods and the likelihood
methods. Currently we are working on comparing different estimation methods, specifically to find
out the sensitivity of each method to its underlying assumptions.
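The gravity-model prior mentioned above can be sketched in a few lines (the node traffic counts in the example are invented for illustration; real estimators refine this prior using the link-count equations):

```python
def gravity_estimate(node_in, node_out):
    """Gravity-model prior for the traffic matrix: the demand from node i to
    node j is taken proportional to (traffic entering the network at i) times
    (traffic leaving the network at j), normalized by the total traffic."""
    total = sum(node_in.values())
    return {(i, j): node_in[i] * node_out[j] / total
            for i in node_in for j in node_out if i != j}
```

This simple prior is then typically corrected so that the implied link loads match the measured ones, which is where the gravity-based and likelihood-based method families diverge.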
CoRiTeL, URM2 - Within the joint activity with WP.JRA.3, in order to provide integrated TE
solutions, fine routing/protection for data-paths in multi-layer networks based on the GMPLS
paradigm is under investigation. The work aims at addressing a new routing/protection strategy for
data-paths, taking into account different levels of protection and QoS. The main topic is that of
designing a strategy for routing off-line traffic so as to protect it against possible failures of network
elements.
The reference multi-layer network is composed of an MPLS layer over a WDM layer, with a GMPLS
control plane. The GMPLS paradigm allows the handling of paths at different levels of data
granularity: from the lowest-hierarchy data flow up to the highest one (the lightpath).
The approach that we want to pursue should allow a network operator to protect each data-path
with the desired level of protection: either considering each data path independently, or varying the
granularity of protection so as to take into account heterogeneous data flows at higher or lower
hierarchy levels.
URM2 - Robust Routing. A commonly encountered routing problem is that of routing a given traffic
matrix, e.g. so as to minimize congestion. In several applications, communication patterns change
over time, and therefore we are given a set of non-simultaneous traffic matrices to support, rather
than a single one. Still, we want the routing to be static, that is, we do not want to change the
routing templates to follow the changes in the traffic matrix. Our aim is to design routing strategies
that are static but effective when the traffic matrices are uncertain.
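A minimal numerical sketch of the robust routing idea, assuming a single origin-destination pair split over two equal-capacity links and two hypothetical traffic scenarios: the static split ratio is chosen once, to minimize the worst-case link utilization over the whole scenario set.

```python
def worst_case_util(f, scenarios, cap=10.0):
    """Max link utilization of static split ratio f (share sent on link 1),
    taken over all scenarios. Each scenario: (demand, bg_link1, bg_link2),
    where bg_* is background load on that link."""
    worst = 0.0
    for demand, bg1, bg2 in scenarios:
        u1 = (bg1 + f * demand) / cap
        u2 = (bg2 + (1 - f) * demand) / cap
        worst = max(worst, u1, u2)
    return worst

def robust_split(scenarios, steps=100):
    """Grid-search the static split ratio minimizing the worst case
    (a real formulation would solve this as a linear program)."""
    return min((i / steps for i in range(steps + 1)),
               key=lambda f: worst_case_util(f, scenarios))
```

With two symmetric scenarios (background load 6 on one link or the other), the robust static split is the even one, with worst-case utilization 0.8, even though each individual scenario would prefer a lopsided split.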
CoRiTeL, ULUND, WUT - In the frame of the cooperation between WP.3.1 and WP.2.3, single-path
routing optimization in networks with situation-dependent backups is studied. This problem appears
in the context of routing design in failure-robust IP/MPLS networks. Since the optimization problem
does not impose any specific objective function, researchers study the influence of different
objective function types on the problem complexity. Load balancing seems to be a good candidate
criterion for this problem. A model covering fair link load balancing is studied.
5.1.24. Priorities
All the topics listed in 4 are clearly priorities for the authors of this text. In addition, the
following items merit further attention:
- The characterization of mobile (e.g. UMTS, HSDPA) and fixed traffic in a convergent
network scenario, based also on real measurements from running networks.
- Providing a control plane architecture to support the solutions investigated in the WP.
- Providing a test bed to analyze the solutions investigated in the WP.
- The relation with other WPs, namely optimization issues and resiliency with WP.JRA.3.1 and
WP.JRA3.3, traffic management of WP.JRA.2.2, and test beds and SW modules of the control
plane of WP.JRA.4.2 and WP.IA2.2.
5.1.25. Annex : list of participants
Partner Partner Name Participant Name
30 CoRiTeL R. Sabella
30 CoRiTeL P.Iovanna
30 CoRiTeL L. Vellante
30 CoRiTeL A. Colamarino
31 URM2 P. Italiano
31 URM2 G. Oriolo
31 URM2 M. Naldi
31 URM2 A. Pacifici
9 HUT J. Virtamo
9 HUT A. Juva
9 HUT R. Susitaival
1.8 WP JRA.2.4 - Trends and challenges in wireless networking
Alexandre Proutiere and Jim Roberts
5.1.28. Scope of the domain
The last few decades have seen a tremendous growth of the Internet, due mainly to the
development of data applications such as email, file transfers, web, file sharing, etc. This
sustained growth has been possible mainly because all the mechanisms involved in the
network were designed to remain simple, scalable, and to have a distributed implementation.
At the same time portable digital devices have become increasingly popular for both business
and pleasure, and wireless networks have experienced a similar exponential development.
Voice traffic has continuously increased since the deployment of the first cellular networks
while the use of data applications is rapidly developing particularly in Wireless Local Area
Networks (WLANs) installed in the home, the office and so-called hot spots. New
multiservice cellular systems like UMTS/HSDPA and CDMA 1xEV-DO are now being deployed
to support data applications in addition to traditional voice traffic. There is a parallel strong
interest in using the IEEE 802.11 WLAN protocols for voice and other streaming services. To
support the increasing demand on the radio interface, there is currently a significant research
effort on developing new radio access technologies. These technologies have evolved from
the simple TDMA and FDMA-based systems that are still in use for GSM, to the most
elaborate opportunistic MIMO/OFDM-based systems that are likely to be implemented in future
networks.
Internet growth is being accelerated by the extension of the coverage and capacity of wireless
networks and by the increasing popularity of new bandwidth-consuming applications. In the
near future, we can expect to see an Internet access network composed of a large number of
portable wireless devices, providing ubiquitous connectivity based on all types of wireless
access technologies, and enabling a full range of real time and data communication services.
Designing this future broadband wireless access network raises numerous interesting
challenges. Some of these are related purely to the radio access technology. The radio
spectrum being a scarce resource, its use has to be optimized so as to offer high capacity
reliable links at the physical layer. Further challenges arise from the very scarcity of this
resource that exacerbates the usual problems of traffic management, resource allocation and
congestion control. It is necessary here to apply a cross-layer approach taking due account of
the physical limitations of radio access technology.
5.1.29. Mid-term/long-term evolution in this area
The following current trends have a significant impact on the way future wireless networks
should be designed:
High capacity radio links.
Increasing the spectral efficiency of the radio interface has always been, and will always be,
the major issue in wireless communications. This problem becomes even more crucial as the
envisaged applications require more and more resources. A clear consensus is emerging in
favour of MIMO/OFDM based access technology, although the systems that can fully exploit
its theoretical capabilities largely remain to be specified.
Application-oriented MAC and physical layers.
In wireless there is not a unique optimal way to exploit radio resources. Traditionally the
physical and MAC layers were designed independently of the applications the network was
supposed to handle. It is now recognized, however, that these layers need to account for the
characteristics and the QoS requirements of the intended applications. The example of the
recent CDMA 1xEV-DO standard illustrates well the capacity gain one can achieve with an
application-oriented design. This standard jointly exploits the relative packet-delay tolerance
of data applications and the information-theoretical principle of multi-user diversity to
increase radio capacity. A significant research effort is required to establish which access
technology is optimal for which applications.
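The multi-user diversity principle mentioned above is commonly realized by proportional fair scheduling; the following toy sketch illustrates the idea (the EWMA weight and the rate sequences are illustrative, and this is not the exact 1xEV-DO scheduler):

```python
def proportional_fair_schedule(rates, horizon, beta=0.1):
    """Toy proportional-fair scheduler (the principle behind opportunistic
    downlink scheduling): in each slot, serve the user whose instantaneous
    feasible rate is highest relative to its own smoothed throughput, thereby
    exploiting multi-user diversity while keeping the allocation fair.
    `rates[t][u]` is user u's feasible rate in slot t; returns per-user
    counts of slots served."""
    n = len(rates[0])
    avg = [1e-6] * n                 # smoothed (EWMA) throughput per user
    served = [0] * n
    for t in range(horizon):
        u = max(range(n), key=lambda i: rates[t][i] / avg[i])
        served[u] += 1
        for i in range(n):
            got = rates[t][i] if i == u else 0.0
            avg[i] = (1 - beta) * avg[i] + beta * got
    return served
```

The rate/average ratio is what trades delay tolerance for capacity: a user in a momentarily good radio state is favoured, but a starved user's falling average eventually wins it the slot.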
Multiservice wireless networks.
An application-oriented design of wireless systems may seem to contradict the need to
handle different types of traffic, namely streaming and data traffic, on the same radio
interface. Hybrid access technologies combining the advantages of streaming-oriented and
data-oriented technologies should be investigated. For instance, it remains to fully define the
way radio resources are shared between streaming and data applications in UMTS/HSDPA
networks.
Distributed multi-access networks.
As the network will be composed of a huge and increasing number of portable mobile
devices, centralized radio resource sharing policies are not suitable. Distributed sharing
policies are likely to be implemented, especially in networks where coordination is not
possible, such as WLANs or ad-hoc/sensor networks. The design of distributed resource
allocation schemes is a highly critical research issue in view of the need for schemes to
sustain rapid and harmonious network growth.
Load balancing in multi-interface networks.
A given wireless device may have simultaneous access to different types of wireless access
networks (e.g., cellular networks and WLANs). The development of efficient and
decentralized load balancing schemes is one of the major issues currently addressed in the
wireless networking community. A related question is the design of cooperative wireless
access networks as discussed below.
Cooperative wireless access.
The broadcast nature of the wireless access allows users located in the coverage area of
several wireless networks to use these networks simultaneously and thus increase their
Internet access speed. An example of such a scenario is what is referred to as neighbourhood
wireless meshed networks, where users share their fixed DSL accesses via a meshed WLAN.
Self-organized multi-hop networks.
Multi-hop wireless networks allow users far from a terrestrial Internet access to be connected
via intermediate nodes. They can be used to extend Internet coverage in regions with no
infrastructure, or to offer communication capabilities for emergency services during disaster
recovery, for example. By definition, these networks have to be scalable and adaptive. A
major design concern is the definition of efficient distributed resource allocation schemes and
routing protocols.
Congestion control algorithms for wireless networks.
The transmission control protocol TCP has been a major factor behind the success and rapid
growth of the Internet. Unfortunately TCP, in its current form, and wireless access are
somewhat incompatible. The interaction between MAC/physical layers and TCP often results
in a significant decrease in performance. For example, in CDMA 1xEV-DO-based cellular
networks, the burstiness of packet arrivals created by TCP reduces the gain obtained using
opportunistic scheduling at the MAC/physical layers. The particular characteristics of TCP
traffic have yet more severe impact in multi-hop networks. A further issue with TCP in
wireless networks is that the network has to transport TCP acknowledgements and this can be
particularly detrimental to the performance of WLAN networks. One solution to the above
problems might be to bring incremental enhancements to the TCP algorithms or to the MAC
protocols. Alternative more radical solutions, such as the use of fountain codes to avoid the
need for acknowledgements, are also under investigation.
User mobility is one of the main challenges in wireless networking. When mobility implies
fast fading variations (which means that the channel variations occur at a time scale smaller
than the delay typically required at the application level) one can take advantage of mobility,
using the principle of opportunistic scheduling, for example. When mobility results in
position or fading variations at a larger time scale, it is not possible to exploit these variations
unless large delays are allowed, as in so-called delay-tolerant networks. Otherwise their
impact on performance has to be quantified. Mobility management (e.g., hand-over policy in
cellular networks) is also an important current issue that has to be jointly considered with the
design of load balancing schemes.
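The opportunistic-scheduling principle mentioned above is commonly realized by proportional-fair scheduling: in each slot, serve the user whose instantaneous rate is largest relative to its smoothed throughput. The following sketch, with illustrative smoothing parameter and rate traces, is a minimal flow of that idea.

```python
def pf_schedule(rate_trace, beta=0.05):
    """Proportional-fair scheduler sketch.
    rate_trace: list of per-slot lists of instantaneous user rates.
    Returns the user chosen in each slot and the final smoothed throughputs."""
    n = len(rate_trace[0])
    avg = [1e-6] * n                 # smoothed throughputs (tiny init avoids /0)
    choices = []
    for rates in rate_trace:
        # serve the user with the best rate-to-throughput ratio
        u = max(range(n), key=lambda i: rates[i] / avg[i])
        choices.append(u)
        for i in range(n):
            served = rates[i] if i == u else 0.0
            avg[i] = (1 - beta) * avg[i] + beta * served
    return choices, avg
```

On two users whose peak rates alternate between slots, the scheduler settles into serving each user on its good slots, illustrating how channel variations can be exploited rather than suffered.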
5.1.31. Major open problems
We describe below the major open issues in wireless networking that have to be addressed by the work package.
Design of (centralized) multi-service cellular networks.
- identify the radio access technology best suited to the type of application to be
handled (namely streaming or data applications);
- model and design resource allocation schemes for existing and future multi-service
cellular networks:
  - design scheduling algorithms for CDMA 1xEV-DV or UMTS/HSDPA networks
    (of particular interest is the design of a scheduler adapted to streaming
    applications);
  - optimize the resource allocation in OFDM-based networks (such as WiMax,
    WLANs, etc.), realizing an appropriate trade-off between efficiency and
    complexity;
  - provide efficient cell selection schemes (i.e., choosing the cell to which a given
    terminal is attached);
  - design inter-cell resource allocation algorithms (as a way to deal with inter-cell
    interference);
  - provide and evaluate adaptive admission control policies;
- quantify the impact of the interaction between the MAC and congestion control
mechanisms.
Design and evaluation of distributed wireless networks (WLANs, ad hoc and sensor networks)
- design efficient distributed scheduling, power control and spectrum allocation schemes;
- provide scalable and efficient routing protocols;
- evaluate the interaction between resource allocation and routing and provide a
joint design of these two mechanisms;
- propose optimized load sharing/multi-path routing algorithms;
- address the privacy and security issues.
Congestion control in wireless networks
- model and evaluate the interaction between TCP and different resource allocation
schemes implemented in cellular and WLANs;
- study the behavior of TCP in the case of multihop networks;
- propose optimal procedures to adapt TCP parameters to the wireless environment;
- provide alternative congestion control algorithms;
- propose and evaluate congestion control algorithms in loss insensitive networks
(e.g., when data is transmitted using fountain codes).
Mobility in wireless networks
- provide realistic mobility models;
- evaluate the impact of mobility on performance in cellular, WLAN and ad hoc networks;
- propose an efficient mobility management scheme (e.g., optimize the hand-over
algorithm in cellular networks).
5.1.32. Work in progress within the WP
We now list ongoing studies performed by EuroNGI partners involved in the work package.
Research at HUT (Helsinki University of Technology) is focused on modelling and evaluating
the performance of cellular, mesh and ad hoc networks. Mobility and its impact on
performance are studied in the context of cellular networks.
At the Universidad Politecnica de Valencia, researchers are working on the design of
admission control policies for cellular networks, on mobility management, and on the
performance of IEEE 802.11-based WLANs.
Research at Polito is mainly focussed on the performance evaluation and design of various
IEEE 802.11-based networks.
The University of Antwerp is also working on WLAN performance and the definition of new
mechanisms to enhance this performance. They also work on mobility management, on
routing algorithms for mesh networks, and on bandwidth reservation schemes to provide QoS
in ad hoc networks.
The University of Pisa is analyzing the performance of WLANs supporting both data and
streaming traffic. New MAC mechanisms are being developed to improve the capacity of these networks.
Researchers at GET/ENST are mainly working on resource allocation in ad hoc networks.
They are also designing resource reservation schemes to provide QoS in these networks.
Research at GET/INT is focused on the interaction between TCP and MAC layer protocols in
UMTS/HSDPA systems, on routing in ad hoc networks, and on multiservice WiMax networks.
The group at Ghent University considers the design and performance evaluation of ARQ
mechanisms in wireless data networks, and investigates mobility issues in aggregation networks.
Research at AUEB (Athens University of Economics and Business) is focussed on modelling
resource control in CDMA networks using an economic framework. They are also developing
an auction-based reservation scheme for UMTS and UMTS/HSDPA networks.
The group at INRIA is studying the performance in WLANs, and the interaction of TCP and
of MAC protocols. They are also analyzing the coverage and connectivity of ad hoc networks.
In addition, they are working on a number of issues in the area of multiservice cellular networks.
France Telecom R&D is analyzing the performance of multiservice WLANs. They are
working in collaboration with CWI on the design of new intra- and inter-cell resource
allocation schemes in cellular networks. They are also seeking to extend their previously
obtained results on the impact of user mobility on performance.
Researchers at KTH are working on the design of scheduling and resource allocation in multi-
hop wireless networks. They are developing distributed algorithms based on optimization techniques.
The issues addressed in priority by the partners involved in the work package are those
currently investigated (refer to the previous section). We believe that the work package should
also prioritize the following topics:
- resource allocation in MIMO / OFDM – based networks;
- optimal scheduling in loss insensitive networks;
- implementation issues and capacity of realistic wireless multi-hop networks;
- delay/capacity trade-off of delay tolerant networks.
6. Network Optimisation
1.9 WP.JRA.3.1 - Optimisation of multi-layer core networks
Michał Pióro, Klaus Hackbarth, Gianpaolo Oriolo, Michał Zagożdżon
core fixed network, multi-layer network, resilience, IP networking
6.1.3. Scope of the domain
A main trend in the evolution of the next generation Internet leads, for example, to a two-layer
architecture of IP over optical network (i.e., an IP layer over an optical layer, for example
DWDM, enriched with MPLS/GMPLS). This architecture provides powerful functionalities for
routing, switching/multiplexing (including traffic grooming), and protection/restoration.
These functionalities can be exploited in both layers in many different combinations. A
particular combination of routing, switching/ multiplexing, and protection/restoration
mechanisms applied in both layers (in a coordinated way) will have a great impact on the size
and composition of the basic resources in the network nodes (routers, optical cross-
connects) and links (optical transmission systems), and thus on the cost of installed
resources. This, combined with high modularity of network resources (transmission systems,
ports, switching matrices), necessarily requires network modelling with an explicit representation
of the internal structure of the telecommunication nodes, exposing such pieces of equipment
as ports, switching matrices, and internal links between ports in the two resource layers (i.e.,
in IP layer and in optical layer). This type of explicit node modelling will allow for computing
proper node configurations in order to minimize the overall cost of equipment, and to
compare capabilities of different solutions in terms of demand satisfaction and resilience to
failures, at the same time answering such crucial, so far unanswered, questions as how to
route connections, where to switch capacity modules (in the lower optical layer, in the upper
IP layer, or in both, and if so in what proportions), where to restore flows (in the lower
optical layer, in the upper IP layer, or in both, and if so in what proportions).
WP.JRA.3.1 deals with the issues described above. The main objective is to advance the modelling
and design methodology for protected multi-layer core NGI networks. The research is
focused on a set of design methods, algorithms and software procedures supporting the tool
prototype described in WP.JRA.3.4. First, in WP.JRA.3.1 we have elaborated relevant variants of the
NGI core network models (and sub-models), consisting of one or more layers of resources, each
equipped with specific routing and protection mechanisms. The network architectures resulting from
WP.JRA.1.4 have been considered. Special attention has been given to two-layer architectures with
the IP packet layer over DWDM optical transport layer. Second, a set of selected important design
problems and optimization tasks has been identified and specified. The routing strategies considered
in the packet layer were selected on the basis of recommendations resulting from WP.JRA.2.3, whilst
the protection mechanisms studied in both layers have been considered in WP.JRA.3.3. The
optimization methods and algorithms for solving the specified design problems are developed using
the methods of Linear, Mixed-Integer, Concave and Convex Programming, as well as stochastic
heuristics. Also, new methods developed in WP.JRA.5.4 and WP.JRA.5.5 are potentially applicable.
Finally, the algorithms will be implemented in the design tool of WP.JRA.3.4, and tested on the
network examples developed in WP.JRA.4.2. The design procedures can be used in WP.JRA.6.2 for
evaluating the network cost.
6.1.4. Mid-term/long term evolution in the area
The convergence of most new telecommunication services onto Internet-based networks
forces operators to develop new solutions to better control and manage the resulting
traffic. One of the main issues in the intra-domain traffic management is to achieve a simple,
efficient and adaptable control of the traffic using the means provided by a combination of
traditional IGP protocols and more flexible routing paradigms such as MPLS (in the IP layer)
and GMPLS (in the IP over DWDM architecture). Although the features offered by MPLS seem
very attractive at first sight, many telecommunication operators have considerably delayed
the introduction of MPLS in their networks and still rely heavily on traditional
shortest-path IGP routing protocols (OSPF or IS-IS). The reason is that, although
MPLS is potentially very powerful, no one really knows how to use the mechanism efficiently.
In effect, when deployed in a network, MPLS is either used in close combination with the
underlying IGP protocol (and hence exploits very few of its new features), or is deployed by hand to
offer VPN services, or is used in a non-scalable full-mesh version.
Although more and more attention is focused on the difficult issue of how to route traffic
and how to set routing parameters in order to optimize the use of existing resources, to our
knowledge, no approach has addressed the problem on a global scale. Some authors
have considered the problem of routing traffic demands on single paths (one path for each
demand, i.e., for each pair origin-destination). Some authors have tackled the problem of
defining jointly the routes and a compatible set of administrative link weights in order to
minimize a function of the link loads or to design the least cost network. Some authors have
also considered the case where several shortest-path routes might co-exist, in which case the
traffic is split along these routes according to the ECMP (Equal Cost Multi-Path) mechanism.
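The ECMP splitting rule can be sketched as a flow-level computation: at every node, traffic toward a destination is divided equally over all outgoing links that lie on some shortest path. This ignores the per-flow hashing real routers use; graph, weights and volumes below are illustrative.

```python
import heapq

def dijkstra(adj, src):
    """Standard Dijkstra; adj maps node -> list of (neighbor, weight)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def ecmp_link_loads(adj, src, dst, volume):
    """Split `volume` at each node equally over all outgoing links on some
    shortest path to dst; returns the resulting per-link load."""
    # distances *to* dst: run Dijkstra on the reversed graph
    radj = {u: [] for u in adj}
    for u in adj:
        for v, w in adj[u]:
            radj[v].append((u, w))
    dist = dijkstra(radj, dst)
    inflow = {u: 0.0 for u in adj}
    inflow[src] = volume
    loads = {}
    # decreasing distance-to-dst is a topological order of the shortest-path DAG
    for u in sorted(adj, key=lambda u: -dist.get(u, float('inf'))):
        if u == dst or inflow[u] == 0:
            continue
        nexts = [v for v, w in adj[u]
                 if dist.get(u) == w + dist.get(v, float('inf'))]
        if not nexts:
            continue
        share = inflow[u] / len(nexts)
        for v in nexts:
            loads[(u, v)] = loads.get((u, v), 0.0) + share
            inflow[v] += share
    return loads
```

On a diamond topology with two equal-cost two-hop paths and one longer direct link, the whole volume splits 50/50 over the two shortest paths and the direct link stays unused.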
Another traffic management issue for network operators is that of fair sharing of resources.
As a low probability of service denial is a measure of quality of service, it is reasonable to
require that a traffic admission control mechanism imposes fair sharing of the network
resources between its users. This problem is relevant for small as well as large-scale
networks. For a small network a “user” may correspond to a single terminal, whereas in a
large backbone network a “user” may be constituted by thousands of terminals connected to
the backbone via a smaller access network. As for most network design problems, the
difficult requirements of shortest path routing (based on link weights), single-path routing
(unsplittable flows), modular flows, etc., are relevant for this type of fairness problems.
Today’s telecommunication networks are composed of several layers of resources. In order to
achieve effective utilization of resources, simplify management, operation and new service
provisioning it is not sufficient to design/optimize the networks of the different layers separately.
Instead, an integrated design of the multi-layer network has to be performed. Most of the network
design problems for single layer networks (e.g., those described above) that capture such details of
real networks, as single-path routing, modular resources, etc., are very hard to solve (often, they are
NP-hard). The multi-layer network design problems are even harder, because different restrictions
on different layers have to be taken into account, as well as the relations between the layers. New
emerging technologies and protocols, such as generalized MPLS (GMPLS), automatic switched
transport network (ASTN), bring new opportunities for network automation. However, they impose
considerable modelling difficulties. In the literature, often two layer networks IP/MPLS-over-DWDM
are considered, because this architecture, as already mentioned, will likely dominate the Internet
core networks in the near future. Effective modelling and design tools are required for multi-layer
networks and these are considered within this project.
Next generation network infrastructure should support different services and several levels of
quality of service (QoS) and resilience. The main requirements for such multi-service
networks are flexibility, effective utilization of network resources, diversified QoS and
protection against failures. Multi-layer network, composed by the IP layer over the DWDM
layer, offers a solution to these new challenges. In fact, the network paradigm based on the
MPLS/GMPLS technique allows matching the “circuit-switched” nature of the transport layer
based on the DWDM optical technology with the connectionless nature of the Internet.
Moreover, the generalized version of the MPLS control-plane, GMPLS, allows for a
possibility of controlling such networks and to optimize the use of the resources.
In the current literature, routing/protection strategies allowing for different QoS and
protection against failures are usually treated in a single-layer approach. Many papers deal
with survivability mechanisms at the DWDM layer and show that they usually offer fast
recovery, but they need to consider a coarse granularity of the protection (lightpath
protection). On the other hand, survivability mechanisms in the IP/MPLS layer may use a
finer granularity, leading to a better resource utilization. However, protection at the MPLS
layer is generally slower than that at the DWDM layer.
Most of the papers on core network design are restricted to single-layer optimization approaches,
and currently there are few attempts making use of multi-layer modelling. Multi-layer
optimization is an area that has received attention only recently. However, even when considering the
multi-layer environment, most papers report approaches based on a separation between the optical
layer and the MPLS layer. On the other hand, the first studies about the integration of the DWDM
and the IP/MPLS layer survivability mechanisms, show that they provide many advantages, such as
flexible granularity, better resources utilization and different fast recovery mechanisms according to
the different levels of QoS. The difficulty here is that of designing a strategy that allows for different
levels of QoS and different levels of protection against failures, but, at the same time, is easy to implement.
6.1.5. Major Open Problems
Pure IGP shortest-path Internet routing (Partner WUT, #36)
- Basic optimization variables are link administrative weights (the vector of all link weights will be
denoted by w) and the objective is to find a vector of weights w0 which induces feasible link
loads. Single-path routing (shortest paths induced by w0 are unique for all the demands) as well
as the ECMP split can be assumed. Additionally, some TE objective can be imposed, e.g., to
minimize the maximum link utilization.
- The basic optimization approach is to use an MIP node-link formulation (of the type specified in
Section 7.2.1 of "Routing, Flow, and Capacity Design in Communication and Computer Networks" by M.
Pióro and D. Medhi; note that this formulation allows for the single-path assumption as well as
for ECMP) and apply a B&C (branch-and-cut) approach. This approach is under intensive
investigation in many laboratories around the world, but no decisive solutions have been found so far.
- Because branch-and-cut is so far unable to produce solutions for large networks in a
reasonable time, approximate methods are required. One such method is a two-phase
approach: to find a set of (single) paths which solve a weight-independent version of the flow
allocation problem, and then to find a vector of weights w that induces the system of single
paths resulting from phase 1. In phase 1 certain necessary conditions can be imposed on the
system of paths to make it more likely to successfully pass phase 2.
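A building block for phase 2 is checking whether a candidate weight vector induces a unique shortest path for each demand. The sketch below counts shortest paths with a Dijkstra variant; it is an illustrative verification step, not the full weight-search procedure (graph and demands are hypothetical).

```python
import heapq

def shortest_path_count(adj, src, dst):
    """Return (distance, number of distinct shortest paths) from src to dst.
    adj maps node -> list of (neighbor, positive weight)."""
    dist = {src: 0.0}
    cnt = {src: 1}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], cnt[v] = nd, cnt[u]
                heapq.heappush(pq, (nd, v))
            elif nd == dist.get(v, float('inf')):
                cnt[v] += cnt[u]          # another shortest path into v
    return dist.get(dst, float('inf')), cnt.get(dst, 0)

def induces_unique_paths(adj, demands):
    """True iff the weight system gives a unique shortest path per demand."""
    return all(shortest_path_count(adj, s, t)[1] == 1 for s, t in demands)
```

For instance, a square with all unit weights gives two equal-cost paths (so the weight vector fails the uniqueness test), while perturbing one weight restores uniqueness.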
Internet routing combining IGP shortest-paths and MPLS tunnels (LSPs)
(Partner WUT, #36)
The basic objective is to realize a given demand pattern by means of the shortest-path weight-based
routing optimal in a certain TE sense. As a first approximation, we can assume that an optimal single-
path routing (flow) pattern is given, and the problem is then to realize this routing pattern essentially
as the system of the unique shortest-paths induced by a weight vector w0 (hence defining all
administrative weights). If this is not possible, then we can realize a certain subset of the assumed
path system as LSPs, not as the IGP paths. In such a case we aim at using the smallest number of such
LSPs (out of all end-to end single paths).
The main objective is to find a weight system w0 which induces single shortest-paths for a maximal
number of demands, and a set of single shortest paths (LSPs) for the remaining demands such that
resulting flow pattern is feasible, and the number of LSPs is minimized. Another objective can be
added to such a formulation, resulting in a weighted objective function (the number of LSPs plus an additional cost term).
The problem is NP-complete. There can also be variants in which one is allowed to route a demand
(commodity) using both shortest-paths and LSPs; more precisely, the routing path of each
commodity can be made partly of shortest-paths and partly of MPLS tunnels when such tunnels are
available. In this context, one should first define precisely how the two routing protocols interact
(according to existing standards or possible evolutions).
Non-convex Max-Min Fairness (Partner ULUND, #48)
A commonly desired type of network resource sharing is the scheme of max-min fairness. Obtaining
a max-min fair distribution of resources is well solved in the case of convex constraint sets. However,
convex constraint sets are unfortunately an ideal, non-realistic feature of the optimization problems
associated with modern communication networks. Therefore, considerable research efforts are now
devoted to max-min fair allocation problems with complicating assumptions making the resulting
optimization problems non-convex. Specifically, problems that are considered in this context do
usually include the use of integer variables, in some form. More precisely, these new problems can
still be classified under linear max-min fairness but with decision variables being restricted to
integers. The reason for this is that, in practice, decision variables, such as the flow allocated to a
demand, cannot take on an arbitrary value. Another realistic requirement might be that a flow
between a node-pair cannot be split arbitrarily between connecting paths, but can only use one path,
selected in the optimization process. Also this makes it necessary to model the allocation problem
with integer variables, rendering substantially more complicated optimization problems.
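For contrast with the non-convex integer case discussed above, the classical convex (splittable-flow) setting admits the well-known progressive-filling algorithm: raise all unfrozen rates uniformly until some link saturates, freeze the demands crossing it, and repeat. The link capacities and demand paths below are hypothetical.

```python
def max_min_fair(capacities, demand_paths):
    """Progressive filling for max-min fairness on fixed single paths.
    capacities: {link: capacity}; demand_paths: {demand: [links on its path]}."""
    rate = {d: 0.0 for d in demand_paths}
    active = set(demand_paths)
    cap = dict(capacities)
    while active:
        # how many active demands cross each link
        loaded = {l: sum(1 for d in active if l in demand_paths[d]) for l in cap}
        # largest uniform increment before some loaded link saturates
        inc = min(cap[l] / n for l, n in loaded.items() if n > 0)
        for d in active:
            rate[d] += inc
        for l, n in loaded.items():
            cap[l] -= inc * n
        # freeze demands crossing any (numerically) saturated link
        sat = {l for l in cap if loaded[l] > 0 and cap[l] <= 1e-9}
        active = {d for d in active if not sat.intersection(demand_paths[d])}
    return rate
```

On a two-link line with one long demand and two one-link demands, the long demand and the demand on the tighter link each get 0.5 while the third fills the remaining capacity, the textbook max-min fair outcome. The integer-constrained variants described in the text lose exactly this simple structure.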
The design of multi-layer networks (Partner ULUND, #48)
The two layer (IP/MPLS-over-DWDM) network architecture has been considered in the
studies carried within the project. Several different multi-layer network design problems
have been studied. It is common to all of them that network topology, user demands (either
exact demand volume, or defined by upper and lower bounds) and candidate path lists for
routing in all network layers, as well as a list of failure scenarios are given. Given this, one has
to find routing of flows in both layers and in all failure scenarios that fulfils demand volumes.
Also, link capacities can either be given or be subject to optimization. Both options have been
captured by the studies performed. The difference between the different studies is what
routing and link capacity restrictions, recovery mechanisms and objective functions are
assumed. Several realistic network routing and recovery scenarios have been modelled, such
as single path routing and modular link capacities in both network layers. Among the
considered recovery mechanisms, single backup paths, path diversity, and failure-dependent
backup paths have been considered, also assuming that flows unaffected by a failure are not
rerouted. All these restrictions can be modelled only with the use of integer/binary variables,
thus making the optimization problems non-convex, so that effective approximate methods
are needed for solving the problems. A framework based on the iterative algorithm for
solving the discussed problems approximately has been developed. As to the objective
functions, network throughput maximization, cost minimization, proportionally fair
bandwidth distribution among demands and load balancing on links (of the upper or lower
layer) has been considered.
Efficient routing of prioritized Internet traffic (Partner UC, #44)
Consider a network defined by a set of nodes N and a set of links E that interconnects them, where
each link ei has a specific capacity qi and a cost bi. In each node nj of the network there is a set of
demands Dj, and each demand djk is characterized by the origin node ojk, the destination node djk,
the required capacity cjk, and the prioritization factor pjk. This prioritization factor may be related to
the price the user or customer is willing to pay for taking the demand from the origin to the
destination. The sum of the costs of the links a demand uses in its path from the origin to the
destination node must be lower than pjk. The objective is to transport as many demands as possible and
hence to maximize the revenue the network obtains for carrying the demands. As the problem is
NP-complete, we have to use a corresponding heuristic, mainly for large-scale networks. As a first
approximation, and mainly when the number of demands, nodes and links is limited to small or
medium size networks, a simple deepest path first (DPF) heuristic can be applied. This DPF
heuristic is based on computing the solutions resulting from different demand classifications,
applying shortest-path routing to each, reaching different feasible solutions and selecting the best one.
Whether this method works in the case of large-scale networks remains to be studied, but we assume that the
processing time for large-scale networks will limit the number of demand classifications and hence
limit the optimality of the solution.
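A minimal version of such a greedy priority-first pass might look as follows. It uses one fixed classification (decreasing pjk) rather than enumerating several, and it assumes, for illustration only, that the revenue of an accepted demand equals its prioritization factor; graph, capacities and demands are hypothetical.

```python
import heapq

def shortest_path(adj, src, dst, cap, need):
    """Cheapest path using only links with residual capacity >= need.
    Returns (cost, list of links) or (inf, None) if none exists."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            if cap[(u, v)] < need:
                continue                      # link cannot carry the demand
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return float('inf'), None
    path, v = [], dst
    while v != src:
        path.append((prev[v], v))
        v = prev[v]
    return dist[dst], path[::-1]

def dpf(adj, cap, demands):
    """Greedy priority-first sketch: serve demands in decreasing priority;
    accept a demand if a capacity-feasible path exists whose total link cost
    stays below its prioritization factor p (revenue = p, an assumption)."""
    cap = dict(cap)
    revenue, accepted = 0.0, []
    for o, t, c, p in sorted(demands, key=lambda d: -d[3]):
        cost, path = shortest_path(adj, o, t, c=cap, need=c) if False else shortest_path(adj, o, t, cap, c)
        if path is not None and cost < p:
            for e in path:
                cap[e] -= c                   # reserve capacity along the path
            revenue += p
            accepted.append((o, t))
    return revenue, accepted
```

The second demand in the test below cannot reuse the cheap two-hop path (capacity exhausted) and falls back to the costlier direct link, which its budget still allows.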
Our objective is to deal with this problem using modern metaheuristics, specifically genetic
algorithms, which have been applied to many problems in communication networks. We now
provide a brief introduction to these optimizers.
Genetic algorithms are robust problem-solving techniques based on natural evolution processes.
They are population-based techniques which encode a set of possible solutions to the problem and
evolve it through the application of so-called genetic operators. Our objective is to define a
procedure to solve the problem based on genetic algorithms and to compare its solution with that
of the DPF heuristic under shortest-path routing.
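A minimal permutation-coded genetic algorithm of the kind envisaged could look like this. The chromosome is an ordering of demands; the fitness function passed in is a stand-in (in the routing problem it would evaluate the revenue obtained by routing demands in the chromosome's order). Population size, generation count and mutation rate are illustrative.

```python
import random

def order_crossover(p1, p2, rng):
    """OX operator: copy a random slice from p1, fill the rest in p2's order,
    always yielding a valid permutation."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    mid = p1[i:j]
    rest = [g for g in p2 if g not in mid]
    return rest[:i] + mid + rest[i:]

def ga_permutation(fitness, n, pop_size=30, gens=60, pmut=0.2, seed=1):
    """Minimal GA over permutations of 0..n-1: tournament selection,
    order crossover, swap mutation, and elitism."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        new = [best]                                    # elitism
        while len(new) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)    # tournament selection
            b = max(rng.sample(pop, 3), key=fitness)
            child = order_crossover(a, b, rng)
            if rng.random() < pmut:                     # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new.append(child)
        pop = new
        best = max(pop, key=fitness)
    return best, fitness(best)
```

The order-preserving crossover is what makes the permutation encoding workable: offspring are always valid demand orderings, so every individual can be evaluated by the routing procedure.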
6.1.6. Work in progress within the WP
In WP.JRA.3.1, research activities are focused on routing design problems, multi-layer
network design, and fairness in networks. In the case of pure IGP routing design, exact
optimization methods are studied. In the first approach, a problem is formulated as a MIP
and strengthened by deriving valid inequalities for shortest-path routing. In the second
approach solution decomposition methods are considered. Approximation methods and
heuristics are planned as the next step of research activities. Simultaneously, variants of
hybrid combinations of IGP shortest-paths and MPLS LSPs are being investigated.
Considering different problem variants incorporating non-convex MMF optimization,
different MIP-based solution algorithms are tested. An approximation algorithm for the
“fixed-paths modular flows case” is also evaluated. Special emphasis is put on the
computational complexity of the studied problems. For the multi-layer network design
problems, currently the exact solution methods based on branch-and-bound/cut/price and
their application to the network design problems are studied. Also, another class of multi-
layer network design problems is being considered, namely network equipment design.
Furthermore, we aim at addressing new routing/protection multi-layer strategies for traffic requests
that take into account different levels of protection and QoS and make an efficient use of the
network resources. Clearly, in order to assess the merits of these strategies, we need to develop
algorithmic solutions implementing these strategies. We are currently deriving some optimization
problems (mainly integer linear programs) that allow for designing such strategies in an effective
way. For the solution of these problems we are developing some simple combinatorial heuristics,
mainly shortest-path algorithms. This choice is motivated by two main considerations: 1) shortest-path
algorithms are simple, well known, widely used, and many tools implementing them are available; 2)
it is our intention to show that, in spite of the sophistication of these strategies, they may be
realized via simple algorithms.
The results of the research activities obtained so far can be found in the WP.JRA.3.1 deliverables
(WP.JRA.3.1.1-4) on the EuroNGI Intranet and in the Knowledge Map Tool
(http://omega.enstb.org/eurongi/).
Considering the research activities performed in the scope of WP.JRA.3.1, the following priorities can be identified:
- Research on exact and approximate solution methods for IGP routing design for real size
networks supported by numerical studies,
- For pure IGP routing design problems, network operational states (eg. network failure states)
will be taken into account,
- Further research on combined IGP shortest-paths and MPLS LSPs routing – for selected
problem variants, exact and approximate solution methods will be elaborated,
- Elaboration of exact methods for solving the considered multi-layer network design problems,
- Deriving multi-layer strategies allowing for routing/protection strategies of traffic requests
that take into account different levels of protection and QoS and making an efficient use of
the network resources. Clearly, in order to assess the merits of multi-layer strategies, we
need to develop algorithmic solutions implementing these strategies. We believe these
algorithms need to be simple so that they can be implemented in practice. On the other
hand, we believe it is crucial to assess the validity of these algorithms by comparing their
solutions with the (optimal) solutions found by exact solvers (e.g. CPLEX).
6.1.8. Annex : list of participants
Partner Partner Name Contributor Name Contributor e-mail address
2 GET Samer Lahoud firstname.lastname@example.org
2 GET Géraldine Texier -
2 GET Laurent Toutain -
13 Alcatel Dominique Verchere Dominique.Verchere@alcatel.fr
17 Uni Wue Michael Menth email@example.com
30 Coritel Paola Iovanna firstname.lastname@example.org
31 URM2 Gianpaolo Oriolo email@example.com
34 AGH-Dot Andrzej Jajszczyk firstname.lastname@example.org
34 AGH-Dot Piotr Chołda email@example.com
36 WUT Michał Pióro firstname.lastname@example.org
36 WUT Artur Tomaszewski email@example.com
36 WUT Michał Jarociński firstname.lastname@example.org
36 WUT Mateusz Dzida email@example.com
36 WUT Michał Zagożdżon firstname.lastname@example.org
38 IT Amaro De Sousa email@example.com
38 IT Carlos Lopes firstname.lastname@example.org
44 UC Klaus Hackbarth email@example.com
44 UC Carlos Diaz firstname.lastname@example.org
44 UC Antonio J. Portilla email@example.com
44 UC L. Rodriquez -
44 UC Alexia Menedez firstname.lastname@example.org
48 ULUND Ulf Kömer email@example.com
48 ULUND Eligijus Kubilinskas firstname.lastname@example.org
48 ULUND Pål Nilsson email@example.com
1.10 WP.JRA.3.2 - Optical Access Networks
Sebastià Sallent
Access Networks, Passive Optical Networks (PONs),
6.1.10. Scope of the domain
By the beginning of the nineties, the commercial deployment of optical access networks had started,
based mainly on standards and on proprietary commercial solutions. Such networks usually had a star or
tree topology, and so offered point-to-multipoint communications in which the central node, or head-
end, controls the medium access of the end users.
In optical access networks the optical medium is shared among users, especially the upstream
channel (from end user to head-end), whose access is controlled by the head-end. The users request
bandwidth resources from the head-end, and this node decides to grant or deny access to
the medium. The downstream channel (from head-end to end user) is broadcast and free of
collisions, and all users receive the data sent by the head-end.
At this initial phase, the medium access protocols were based on proprietary TDM techniques and
mainly used an optical carrier for the downstream channel and another for the upstream channel.
Generally, the optical carrier was taken from the third window for the downstream channel and from
the second window for the upstream channel. The medium access control, resource management,
and signaling and operation mechanisms were implemented in devices (modems) that perform
baseband modulation. The head-end modem was called the OLT (Optical Line Terminal) and the user modem
ONU (Optical Network Unit) or ONT (Optical Network Terminal). Most of these optical access
networks were passive in the sense that all the intermediate elements between OLT and ONUs did
not require power supply.
By the middle of nineties the first efforts commenced towards the standardization of the passive
optical networks and this was the beginning of the xPON technologies that were standardized by the
ITU and IEEE.
The first of them was APON/BPON, based on ATM proposed by the FSAN (Full Service Access
Network) consortium in 1995 and standardized as ITU-T G.983 (1998-2003). The maximum
downstream channel capacity was 622 Mbps. Another one was the GPON technology based on GPF
(Generic Framing Procedure), also proposed by the FSAN consortium and standardized by the ITU as
G.984 (2003-2004); it offers a maximum downstream channel capacity of up to 2.5 Gbps. Finally, the
EPON technology (Ethernet PON), proposed and standardized as IEEE 802.3ah in June 2004, is based
on Ethernet technology.
Also, some manufacturers of active optical networks have extended Ethernet technology and elements
to the access networks. These active networks are simple and reliable, but are more expensive than
passive ones.
All the mentioned standards describe the medium access mechanism, network elements
(ONU/ONT, splitters and OLT), operation, administration, management and protocol syntax.
They all leave the optimal assignment procedure at the head-end open to implementation.
Some features of the current xPON technology are:
- Sub-utilization of the optical medium due to the use of only two optical channels. These
channels are designed to take two different carriers: the upstream one in the second window
and the downstream one in the third window.
- Provision of different categories and service classes in order to support triple play services:
real time services, elastic traffic and best-effort traffic.
- Medium access protocols based on TDM defined over each optical carrier.
- Support of transmission rates around 1 Gbps in each channel in general.
- No transparency to the core networks, MANs or LANs. The OLT should perform a translation
of the access format to the other environment, resynchronize, apply error control, and so on.
- The resource management algorithms located at the head-end are not standardized: they
are neither optimal nor efficient.
- The central or root node (OLT) controls the networks and is in charge of the network
signaling which is synchronous.
- The service provisioning and network management are centralized and are based on simple
mechanisms that can be improved.
6.1.11. Mid-term/long term evolution in this area
There have been three big revolutions in the Internet. The first two big movements were the
convergence of electronics and networks (eighties), and the convergence of information
technologies and communications (nineties).
Currently we are experiencing the third revolution in the Internet. It is concerned with the integration of
the audiovisual world and networks. In the next ten years, the Internet will be able to generate,
transport, process, store and show any kind of audiovisual content in a distributed way. The contents
could be shown in real time or postponed, and either in a point-to-point, multipoint-to-point or
point-to-multipoint mode. This convergence demands wired and wireless access networks that:
- Be transparent for any kind of information transfer.
- Offer bandwidth adapted to user data requirements. The requested bandwidth may range from
a few Mbps to tens of Gbps. For example, high definition distribution services require from
1.4 to 10 Gbps per audiovisual channel.
- Provide and configure the available resources according to the users’ demands. The
provisioning times should be on the order of a few ms or even ns.
- Offer flexible access network topologies. The network should guarantee high availability,
fault tolerance and low recovery times, and provide an optimal deployment and use of resources.
- Support any kind of logical and physical topology, such as star, tree, double ring or double bus.
- Be end-to-end transparent to any technology. The access networks should converge towards
the core network such that the information transfer between both networks or between the
access network and the user’s local network can be performed without reassembling
packets, resynchronizing, using signalling proxies or translating protocols in the
management, operation or maintenance planes.
- Provide centralized schemes, like the existing ones. Also, the new access technologies could
provide a decentralized scheme where the controlling role of the central node or OLT is
distributed among the end nodes.
In the next years, the optical access networks will be passive networks like PONs that will utilize the
physical (optical) medium with a high efficiency. The access protocols will be based on a combination
of time, frequency and code multiplexing. For example, several optical channels could be
distinguished by means of DWDM, while each channel could be slotted in time and the users could
select their logical channel by TDMA and CDMA techniques.
The user devices (ONU’s) will access the available resources dynamically. ONU’s will be able to select
the downstream and upstream channels available (DWDM carriers), and they could assign the time
(TDMA) or code (CDMA) resources of the channel dynamically.
The resource assignment will be permanent while the user connection is active or there is no
degradation on the channel. When the connection finishes, the resources will be available again so
that they can be distributed among other users. This means that the time, frequency and code
resources are assigned dynamically and distributed among the users according to their requirements
established by SLAs. The resource assignment should be done in very small time intervals (on the
order of ns to ms).
The dynamic resource assignment requires the design of either centralized algorithms (located at the
central node or OLT), or mechanisms distributed among all ONUs that manage the network resources.
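The centralized variant of such a mechanism can be sketched as follows: an OLT-side allocator grants time slots across DWDM wavelengths in proportion to SLA weights. This is a minimal illustrative sketch; the function names, parameters and the proportional-share policy are assumptions of this example, not part of any xPON standard.

```python
# Minimal sketch of a centralized (OLT-side) dynamic resource allocator:
# each active ONU requests bandwidth, and slots on the DWDM wavelengths
# are granted in proportion to the SLA weight of the requesting ONU.
# All names and parameters are illustrative.

def allocate(requests, sla_weights, wavelengths=4, slots_per_wavelength=100):
    """requests: {onu_id: requested_slots}; returns {onu_id: granted_slots}."""
    capacity = wavelengths * slots_per_wavelength
    total_weight = sum(sla_weights[onu] for onu in requests)
    grants = {}
    for onu, requested in requests.items():
        # weighted fair share of the total capacity for this ONU
        fair_share = capacity * sla_weights[onu] / total_weight
        grants[onu] = min(requested, int(fair_share))  # never grant more than requested
    return grants
```

For example, with two ONUs whose SLA weights are 2:1, the heavier ONU obtains up to two thirds of the total slot capacity; lightly loaded ONUs keep only what they asked for, freeing the remainder for redistribution in the next allocation cycle.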
These new families of access networks have to implement medium access protocols that are able to
support any logical or physical topology. For example, the DWDM-CDMA/CA protocol should be
deployed for tree, bus or star topologies among others.
On the other hand, we should consider the expansion of layer 2 technologies. Switching
technologies based on Ethernet are expanding, and will continue to expand, towards the core and
metropolitan networks. This will require the new PON access networks to efficiently transport and
process layer 2 frames, and the data should be formatted to fit Ethernet frames.
Summarizing, in the next ten years the optical access networks will be passive networks deployed
over any physical or logical topology. They will be able to manage their resources in a centralized or
distributed manner. The bandwidth could change dynamically according to the user demands. Data
encapsulation will be transparent to the core network. Dynamic resource provisioning will be
performed with high resiliency, security, availability and ultra fast recovery.
6.1.12. Major Open Problems
The next generation of Passive Optical Networks poses a set of new problems that can be classified
into:
- Medium Access Control protocols
- Reliability, survivability, and network recovery
- Operation, Administration & Management (OAM) of the network
At the physical level the main open problems can be classified as:
- Proposing new modulation schemes: FSK/IM, etc.
- Designing new switching schemes: circuit switching, optical burst switching, packet switching
or hybrid switching.
- Providing bidirectional transmission over one single fiber using WDM-TDMA-CDMA schemes.
- Combining several multiplexing techniques (WDM-TDMA-CDMA).
- Designing ranging and synchronization mechanisms.
- Proposing new physical topologies capable of supporting several multiplexing techniques.
The second set of open problems is related with MAC protocols. The MAC must be designed jointly
with WDM/TDMA/CDMA multiplexing techniques for a large number of physical network topologies.
These problems can be described as:
- The design of hybrid Medium Access Control protocols, jointly with WDM-TDMA-CDMA,
offering several service categories and QoS with metropolitan geographical coverage.
Hence, it is required to develop new deterministic or random MAC protocols that provide
different quality of service for real-time, elastic or best-effort traffic. These MAC
protocols will be designed for either time-slotted or non-slotted networks, and for either an
access geographical area or metropolitan coverage. Also, these protocols will be either
centralized or distributed.
- The designed MAC protocols will be independent of the physical topology of the access
network. Their implementation will be adapted to and deployed for several topologies (bus,
tree, dual bus, among others).
- On the other hand, these new MAC protocols must manage the downstream and upstream
channels over one single fiber. This leads to full bidirectional transmission over a single
fiber where all multiplexing techniques are combined simultaneously.
At the framing level, the information encapsulation techniques implemented in the access network
will achieve a high degree of transparency. The same kind of frame will travel between the transport
and the access network:
- The framing homogeneity between adjacent networks (transport-access-local) will allow us
to increase the throughput of the access network up to several Gbps per logical connection.
Consequently, the OLT or gateway, which interconnects the access network and the
transport network, will have its framing processing time drastically reduced, since it
will not be necessary to re-frame the information when it traverses the network.
- The trend is to convey the information using the Ethernet frame format. The Ethernet frame
will be compatible, with minor changes, with the medium access protocol of the access
network and the core Ethernet frame.
- This framing makes the transmission of user information with Quality of Service easy, and it
can be combined with additional information supplied by access control parameters,
ranging, synchronization control information, or resource management elements.
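The framing transparency described above can be illustrated with a minimal sketch that builds and parses the standard 14-byte Ethernet II header: a gateway that shares this framing with the core network only reads header fields and never re-encapsulates the payload. The helper names are invented for this example.

```python
import struct

# Minimal sketch of Ethernet II framing: destination MAC, source MAC and
# EtherType in the standard 14-byte layout, followed by the payload.
# When access and core networks share this framing, a gateway can forward
# the frame without re-framing; it only inspects the header fields.

ETH_HEADER = struct.Struct("!6s6sH")  # network byte order, 14 bytes

def build_frame(dst_mac, src_mac, ethertype, payload):
    """Encapsulate a payload once; the same frame can cross both networks."""
    return ETH_HEADER.pack(dst_mac, src_mac, ethertype) + payload

def parse_frame(frame):
    """A transit node only reads the header; the payload is untouched."""
    dst, src, ethertype = ETH_HEADER.unpack(frame[:ETH_HEADER.size])
    return dst, src, ethertype, frame[ETH_HEADER.size:]
```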
Another open issue, perhaps the most important, will be the resource management of the access
network. The main points are:
- Designing new decentralized algorithms that dynamically assign the resources requested
by the end users or by the gateway. These algorithms will control the resources consumed
by the connections in real time (channels in the frequency domain, time slots and code
words). This will lead to allocating and reallocating resources as a function of the
instantaneous user demands.
- The response time of this kind of algorithm will be on the ns-ms scale because of the high
transmission rates. These algorithms will be very complex because they will control and
combine the frequency domain with the time and code domains.
- The resource management algorithms will allow the network operator and the end user to
select resources dynamically according to the SLA, received requests and available network
resources.
The last step is to provide new Operation, Administration and Maintenance mechanisms and
recovery algorithms that supply high network availability:
- The new recovery algorithms will re-route and find new alternative paths quickly, on a
time scale of ns.
- These new mechanisms will work over different topologies and switching schemes (circuit
switching, optical burst switching, etc.).
In the literature we can find several pioneering proposals for the next generation of access networks
that raise the mentioned open problems. Also, some experimental test beds and platforms have
started to be used for the evaluation of new MAC protocols, new multiplexing and switching
techniques, and advanced resource management protocols.
6.1.13. Work in progress within the WP
The main WP.JRA.3.2 activity is to propose new MAC algorithms, to develop resource management
schemes and to provide analytical tools for the performance evaluation of access protocols. Until
now, one of the WP results has been the design and evaluation of algorithms located on the OLT,
which control the access network resources in the time domain. These algorithms provide QoS and
different service categories.
Also, several MAC protocols for optical burst switching are being proposed and evaluated. The
proposed algorithms can be deployed on metropolitan and access networks and can be applied on
several physical topologies (star, bus, ring, etc.).
Finally, several analytical methods to assess the survivability and reliability of PONs have been
introduced and compared analytically.
In the last period interest has grown in the search for new bidirectional MAC protocols over one
single fiber, and the design of new resource management schemes that combine and integrate the
assignment of wavelengths, time slots and code words. Also, new OBS protocols for generalized
topologies have been proposed.
Other open issues, which have not been analyzed inside the WP JRA 3.2, can be covered by the large
experience of research groups in the NOE. Recovery, robustness, or analytical tools can be covered
by JRA 3.1, and 2.3 among others.
Considering the main open issues of the new generation PONs, the priorities of the WP JRA 3.2 can
be summarized as:
1. To design new bidirectional access protocols maximizing the network capacity.
2. To design a new set of distributed resource management protocols based on a complex
combination of multiplexing techniques (WDM-TDMA-CDMA).
3. To develop new access switching paradigms (OBS-circuit switching).
4. To build new analytical tools for the evaluation of next generation PONs.
5. To develop methodologies and tools to improve PON reliability.
6.1.15. Annex : list of participants
Partner Partner Name Contributor Name Contributor e-mail address
5 PATS, University of Antwerp Kathleen Spaey firstname.lastname@example.org
34 AGH University of Science and Technology Zbigniew Hulicki email@example.com
42 Technical University of Catalonia (UPC) Anna Agustí firstname.lastname@example.org
42 Technical University of Catalonia (UPC) Cristina Cervelló email@example.com
42 Technical University of Catalonia (UPC) Luis Gutierrez Luisg@entel.upc.es
42 Technical University of Catalonia (UPC) Marilet de Andrade firstname.lastname@example.org
42 Technical University of Catalonia (UPC) Sebastia Sallent email@example.com
- An, F., Kim, K., Hsueh, Y., Rogge, M., Shaw, W., Kazovsky, L.: Evolution, Challenges and
Enabling Technologies for Future WDM-Based Optical Access Networks. 2nd Symposium on
Photonics, Networking and Computing (2003).
- Stok, A., Sargent, E.: The Role of Optical CDMA in Access Networks. IEEE Communications
Magazine.
- Chae, C., Wong, E., Tucker, R.: Optical CSMA/CD Media Access Scheme for Ethernet over
Passive Optical Network. IEEE Photonics Technology Letters (2002).
- Foh, C., Andrew, L., Zukerman, M., Wong, E.: FULL-RCMA: A High Utilization EPON. Optical
Fiber Communications Conference, OFC 2003 (2003).
- Kramer, G., Mukherjee, B., Pessavento, G.: Ethernet PON (ePON): Design and Analysis of an
Optical Access Network. Photonic Network Communications (2001).
- Ma, M., Zhu, Y., Cheng, T.: A Bandwidth Guaranteed Polling MAC Protocol for Ethernet
Passive Optical Networks. INFOCOM 2003, IEEE (2003).
- Hsueh, Y., An, F., Kim, K., Kazovsky, L.: A New Media Access Control Protocol with Quality of
Service and Fairness Guarantee in Ethernet-based Passive Optical Networks. 2nd Symposium on
Photonics, Networking, and Computing (2003).
- Hsueh, Y.L., et al.: A Highly Flexible and Efficient Passive Optical Network Employing Dynamic
Wavelength Allocation. Journal of Lightwave Technology, Jan. 2005.
- Hsueh, Y.L., et al.: SUCCESS PON Demonstrator: Experimental Exploration of Next-Generation
Optical Access Networks. IEEE Communications Magazine, Vol. 43, No. 8, August 2005.
- Sun, Y., Hashiguchi, T., Vu, Q.M., Wang, X., Morikawa, H., Aoyama, T.: Design and
Implementation of an Optical Burst-Switched Network Testbed. IEEE Communications
Magazine, Vol. 43, No. 11, pp. S48-S55, Nov. 2005.
1.11 WP.JRA.3.3 - Network resilience evolution
Piotr Chołda, Zbigniew Hulicki and Andrzej Jajszczyk
Core Network Fixed, Access Network, IP networking
6.1.17. Scope of the domain
WP.JRA.3.3 focuses mainly on mechanisms related to network resilience. Traditional network
recovery procedures were deployed in two main streams: pro-active (protection), related to
circuit-switched networks, and re-active (restoration, rerouting), related to the packet-switching
paradigm. With the evolution of the virtual circuit concept, the idea of converging recovery
methods was introduced to the resilience field: protection methods began to be used in packet/cell
switched networks (ATM). The introduction of the intelligent control plane in optical networks made
possible the application of restoration procedures in circuit-switched environments.
As future networks will essentially be multi-layer networks, with the lower level related to optical
technology (the DWDM paradigm) and the higher/logical level based on IP/MPLS concepts, the usage
of different protection and restoration methods in one network is envisaged. Additionally, the role of
the coordination between methods used in different layers increases. Also, the role of the automatic
control plane is very important, as it allows providing users with fast and cost-effective recovery.
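The two recovery streams can be contrasted with a small sketch: a pro-active scheme precomputes a link-disjoint backup path in advance, while a re-active scheme would run the same path search only after a failure is detected. The graph, function names and the BFS-based path search are illustrative assumptions, not a description of any deployed protocol.

```python
from collections import deque

# Illustrative sketch of pro-active protection: a primary path plus a
# link-disjoint backup, both computed before any failure occurs. A
# re-active (restoration) scheme would instead call shortest_path only
# after the failure, passing the failed links as banned_edges.

def shortest_path(graph, src, dst, banned_edges=frozenset()):
    """BFS shortest path avoiding banned (undirected) edges; None if disconnected."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph[node]:
            edge = frozenset((node, nxt))
            if nxt not in seen and edge not in banned_edges:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def protection_pair(graph, src, dst):
    """Primary path plus a link-disjoint backup, both precomputed."""
    primary = shortest_path(graph, src, dst)
    used = {frozenset(edge) for edge in zip(primary, primary[1:])}
    backup = shortest_path(graph, src, dst, banned_edges=used)
    return primary, backup
```

On a square topology A-B-D / A-C-D, the sketch returns one side as the primary path and the other as the backup, so a single link failure on the primary leaves the backup intact.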
6.1.18. Mid-term/long term evolution in this area
We predict that there will be two independent paths of network recovery evolution:
- related to access and metro networks,
- related to core networks.
As the role of Ethernet-based solutions in access/metro networks steadily grows, we envision that
the usage of sophisticated resilience procedures in these networks will become possible. Currently,
there are hardly any efficient procedures for such networks. Work on access network recovery (e.g.,
in EPONs) and metro network recovery (e.g., RPR) is now being performed and will be continued.
There are resilience procedures applied in core networks nowadays (protection in SONET/SDH
networks, re-routing based on link state protocols in IP networks). We predict that the GMPLS
paradigm will be introduced in larger parts of the Internet, with its label forwarding as well as its
signalling, routing and link management protocol suite. Thus, recovery methods related to this
technique will be developed and deployed. We predict that not many new mechanisms will be
invented, but rather that the effective usage of well-known methods will be better adjusted to new
areas (multi-layer, multi-service, multi-domain).
6.1.19. Major Open Problems
As mentioned in the previous section, we envisage that work on recovery
methods will focus on the adaptation of known ideas to new fields:
- Multi-layer recovery: as networks are evolving to multi-layer ones, the problem of recovery
methods operating simultaneously in different techniques must be addressed; the problem
of the choice of the proper layer where the traffic is recovered as well as the problem of the
signalling where it is performed and the issue of the coordination must be solved.
- Multi-service recovery: the idea of service differentiation based on the level of the
provided recovery is now intensively studied; as the usage of many different recovery
methods, offering a large variety of features (different levels of reliability parameters,
QoS, etc.), is now possible in one network, the operator can take advantage of this fact and
prepare a broad portfolio of services adjusted to users' needs; the problem of the
signalling as well as the control and assessment of the level of recovery must be solved.
- Multi-domain recovery: now, recovery methods are designed with full knowledge of the
network; however, such a comfortable situation takes place only in the case of a domain
managed by a single carrier; this is not the case when end-to-end connections in the Internet
are taken into account; the perspectives of resilience provisioning across different networks
must be elaborated; also, the problem of multi-service recovery in multi-domain environments
should be addressed.
6.1.20. Work in progress within the WP
The work of WP.JRA.3.3 generally relates to the last two points enumerated in the open issues.
Partners engaged in the workpackage focus mainly on multi-service (AGH, NTNU) and multi-domain
issues (NTNU). Recently, some work has been done on resilient routing in IP networks (Telenor),
MPLS fast reroute and self-protecting multipaths (Uni Wue), as well as on multi-fault tolerance,
particularly in connection with mobile ad hoc networks.
Although multi-layer recovery is of main interest for WP.JRA.3.3, the issues related to
increasing the resilience level of the next (or future) generation networks are also obviously
relevant. Hence, in future, the efforts should generally be concentrated on:
- multi-service recovery in multi-domain environments
- methods for self-protecting multipaths and fast rerouting
- recovery methods for emerging networks based on the Optical Burst Switching paradigm
- general resilience analysis and provisioning of resilient QoS for both the core and access
parts of communication networks
- dependability of network architecture, protocols and software.
6.1.22. Annex : list of participants
Partner Partner Name Contributor Name Contributor e-mail address
17 Uni Wue M. Menth firstname.lastname@example.org
30 Coritel P. Iovanna email@example.com
31 URM2 G. Oriolo firstname.lastname@example.org
32 NTNU B. E. Helvik email@example.com
32 NTNU O. Wittner firstname.lastname@example.org
32 NTNU A. Mykkeltveit email@example.com
32 NTNU Q. Gan firstname.lastname@example.org
33 Telenor A. Fosselie-Hansen email@example.com
34 AGH-DoT P. Chołda firstname.lastname@example.org
34 AGH-DoT Z. Hulicki email@example.com
34 AGH-DoT A. Jajszczyk firstname.lastname@example.org
36 WUT M. Pióro email@example.com
36 WUT M. Dzida firstname.lastname@example.org
48 LTH E. Kubilinskas email@example.com
1.12 WP.JRA.3.4 - Network Design tool for Next Generation Internet
Klaus D. Hackbarth and Alberto E. García
Core Network Fixed, Access Network, IP networking
6.1.25. Scope of the domain
The objective of this WP is the common development of a prototype of a computer based tool (called
Macro-Tool) for the Next Generation Internet design by the members of the consortium. Macro-Tool
will cover, in its horizontal dimension, three Architectural Domains (core, fixed and mobile access),
taking IP Networking (multi-operator environment) into account, as well as the demand resulting
from the traffic generated by different services and overlay networks. In its vertical dimension the
tool will cover both the logical and the physical layers. The architecture of Macro-Tool will be based
on modern software engineering methods such as object oriented approach and distributed
computation. Hence Macro-Tool will be composed of a large number of design procedures, each of
them implemented as an individual tool and distributed over the servers of the NoE participants. The
members of the NoE will have access to Macro-Tool via a Web-based interface (remote software
execution).
The development of the tool will be carried out in three phases where the objective of the
first phase is the collection and initial integration of individual tools delivered by the
members of the NoE and as a result of open source development by other groups. In the
second and third phase these tools and their integration will be improved due to new models
and methods developed in other WP’s of the NoE. The final objective of Macro Tool is to
provide a European reference tool for integrated network design covering most of the
horizontal and vertical aspects of the NGI under a distributed software environment. The
development of the tool will continue beyond the duration of this NoE project.
The development of the Macro-Tool considers different aspects resulting from European
collaboration, taking into account the following: the concept of European convergence includes the
definition of collaborative environments for technology development within the high-priority actions
included in the Sixth Framework Programme. Networks of excellence and integrated projects have to
interconnect laboratories with very different characteristics, with completely independent
developments and investigations. The exchange of knowledge may be difficult when the interests of
each partner can be in opposition. Intellectual property, commercial commitments and other factors
might block the free flow of information, limiting the collaborations to isolated and specific
developments. When the exchange of code or algorithms is impossible, only the results might be
interchanged. The different forms of sharing applications supported by the Macro-Tool facilitate the
interactive exchange of results without transgressing the intellectual property rights of algorithms,
simulation programs, or source codes.
6.1.26. Mid-term/long term evolution in this area
Sharing software applications is a widely-used method for the introduction and presentation of
applications and computer solutions. The programmer shows the characteristics and power of
his/her programs by distributing a limited part of the code. The user cannot access the complete
program until acquiring the license for the final version. There are two forms of sharing applications:
the first one uses a closed or limited code which the proprietor delivers to the interested users, while
the second one provides access to a remote and/or distributed execution of the full or an only
slightly limited version.
A freely distributed software application is named "freeware", in the case where an executable is
delivered, and "open source" for the case where the source code and the corresponding algorithm is
included. On the other hand, when the distributed applications are limited to a demonstration
version, they are called "shareware"; these demonstration versions generally include
limitations in their use. These limitations can be either a limitation in the number of
executions and/or a limitation in the calculation capacity of the algorithms included.
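Such demonstration-version limitations can be sketched as a small wrapper that caps both the number of executions and the input size. The wrapper and its limits are purely illustrative; real shareware enforces such restrictions inside the compiled program, not in a separate class like this.

```python
# Toy illustration of the two shareware limitations mentioned above: a demo
# wrapper that caps the number of executions and the size of the problem it
# will solve. Entirely hypothetical; limits and names are illustrative.

class DemoLimiter:
    def __init__(self, max_runs=5, max_input_size=100):
        self.max_runs = max_runs
        self.max_input_size = max_input_size
        self.runs = 0

    def run(self, func, data):
        if self.runs >= self.max_runs:
            raise RuntimeError("demo limit reached: please acquire a license")
        if len(data) > self.max_input_size:
            raise ValueError("input too large for the demonstration version")
        self.runs += 1
        return func(data)
```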
For a collaborative environment among research laboratories the "shareware" concept is sufficient
for tools and knowledge exchange. During the period of active collaboration the associated
limitations should be minimal or none. However, in some cases, previous developments carry
copyrights or exclusivity agreements with companies, collaborating organizations or financial
institutions. In these cases the exchange is strongly limited or even impossible.
In any case, when physically sharing the application is impossible, a time-limited remote use of
the application might be allowed. Under this circumstance the code or application resides at the
owner's site and its use is under the control of the programmer or proprietor of the license or
copyright. The user provides values for all the input parameters, which are transferred to the
execution location, where the software application runs the calculation with the parameters
supplied. After finishing the calculation the user receives only the corresponding results. The
corresponding method is generally named Remote Procedure Call (RPC).
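The RPC exchange just described can be sketched with the standard XML-RPC marshalling layer, here without an actual network hop: the caller serialises the input parameters, the "remote" side deserialises them, runs the procedure and serialises the result back. The function names and the procedure registry are assumptions of this example.

```python
import xmlrpc.client

# Minimal sketch of the RPC flow: only marshalled parameters travel to the
# owner's site, and only marshalled results come back; code never moves.
# The names remote_side/call and the procedures dict are illustrative.

def remote_side(request_xml, procedures):
    """Runs at the owner's site: decode the request, execute, encode the result."""
    params, method = xmlrpc.client.loads(request_xml)
    result = procedures[method](*params)
    return xmlrpc.client.dumps((result,), methodresponse=True)

def call(method, *params, procedures):
    """Runs at the user's site: marshal inputs, 'send' them, unmarshal the result."""
    request = xmlrpc.client.dumps(params, methodname=method)  # marshal inputs
    response = remote_side(request, procedures)               # stands in for the network hop
    return xmlrpc.client.loads(response)[0][0]                # unmarshal the result
```

In a real deployment the two halves would live in separate processes connected by HTTP (e.g. `xmlrpc.server.SimpleXMLRPCServer` on the owner's side); the marshalling shown here is exactly what crosses the wire.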
6.1.27. Major Open Problems
In recent years the techniques and protocols based on RPC have become more important due to the
increased development of services based on distributed applications, such as for example e-
commerce. Often the most important applications use distributed processing mainly for problems
requiring large amounts of processing time. The techniques of distributed processing consist of
fragmenting calculation-intensive problems and processing the fragments separately in different
locations. The systems used are usually multiprocessor-based systems or multiple computers. This
last case presents an additional problem: it requires establishing a specific communication system
which carries out the delivery of input parameters among the different systems. Moreover, the
collection of the corresponding results is carried out in the same way. In both systems, the main
problem is to calculate a specific application in a completely remote way without any physical
intervention by the user. Initially, RPC applied specific protocols for these tasks.
These protocols allowed carrying out calls to processes previously registered in the operating system
of each machine. Hence the calls to these processes require that the process is already activated.
Currently, distributed systems provide a prior communication process that sends a specialised
program to the client. When the user installs the program, it establishes a connection to obtain
blocks of tasks from the main server. This technique is called "Grid Computing" and it is
implemented in research projects such as the SETI project or the Distributed.Net project.
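The fragmentation idea behind these techniques can be sketched as follows: a calculation-intensive problem is split into independent blocks, each block is processed by a separate worker (standing in for a remote machine), and the partial results are collected and merged. The function names and the sample computation are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of grid-style task fragmentation: split the input into blocks,
# hand each block to a worker (a stand-in for a remote machine), then
# collect and merge the partial results. Names are illustrative.

def split(data, n_blocks):
    """Cut a list into at most n_blocks contiguous blocks of near-equal size."""
    size = -(-len(data) // n_blocks)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def grid_sum_of_squares(data, n_workers=4):
    blocks = split(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(lambda block: sum(x * x for x in block), blocks)
    return sum(partials)  # merge the partial results
```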
The importance of remote processing becomes clearer when observing the different solutions
adopted by most commercial programming platforms. There are specific solutions that include RPC
methods and extensions, such as for Java, .NET or CORBA, and even specific platforms, such as
GLOBUS or CONDOR.
Most of these systems create remote objects that implement methods for their execution
from other objects located in different machines. The communication among objects uses
ad-hoc interfaces. The whole system includes complex administration methods for the
objects and procedures and integrates the communication system using specific methods
associated with each object. In these cases, the process of developing remote/distributed
applications must be carried out according to the environment of the specific systems and
their programming philosophy. For this reason the application of these specific RPC
environments can require the partial or total re-encoding of the selected software application.
On the other hand, we find simplified methods for remote execution of applications or specific
commands of some programming languages. The former occurs, for example, in the use of the RPC
protocol. The latter occurs when "spawn" or "rexec" commands are used under Java encoding
schemes. These methods are very simple to implement but they do not incorporate control
over the execution memory space or the state of the processor. Also, the interruption management
and the communication support are treated as independent functionalities outside the scope of
these methods.
- ASP: "What is Shareware", Web site of the Association of Shareware Professionals,
http://www.asp-shareware.org/users/faq.asp, last update Jan 2005.
- Birrell, A.D., Nelson, B.J.: "Implementing Remote Procedure Calls". ACM Transactions on
Computer Systems 2, 1, 39-59, Feb. 1984.
- Barkley, J.: "Comparing Remote Procedure Calls", http://hissa.nist.gov/rbac/5277/, Oct 1993.
- http://www.gridcomputing.com/, last update Oct 2004.
- Seti@Home: Search for Extraterrestrial Intelligence, http://setiathome.ssl.berkeley.edu/, last
update Feb 2005.
- Distributed.net Project: http://www.distributed.net/, last update Jan 2005.
- Sun Microsystems: "Getting Started Using RMI".
- Liang-Jie Zhang, Yao Chung, Qun Zhou: "Developing Grid computing applications", IBM T.J.
Watson Research Center, http://www-106.ibm.com/developerworks/library/gr-grid1/, Nov 2002.
- "The Condor® Project Homepage", http://www.cs.wisc.edu/condor/, last update Feb 2005.
- RFC 1050: "RPC: Remote Procedure Call Protocol Specification", 1988.
6.1.28. Work in progress within the WP
A strong application sharing implementation needs a complex communication architecture.
Additionally, a simple procedure call without any further control could violate the integrity of the
applications. Hence the resulting environment needs to establish a proprietary architecture that
allows carrying out the application sharing in an agile and safe way. Two examples of this type of
implementation are an approach to load balancing for remote execution using Java modules, and
the work of -CNR to build Problem-Solving Environments (PSE) for the execution of complex
applications on a Web-based metacomputer using LDAP (Lightweight Directory Access Protocol)
services. The Network of Excellence EURO-NGI requires the implementation of various types of
common collaboration spaces among all the partners, facilitating the exchange of ideas and
knowledge at all levels. The members of the work package for the NGI network design tool
development (WP.JRA3.4) have developed an application sharing system based on Web services,
named WeBaSeRex (Web Based Service for Remote Execution). This system allows the
remote sharing of applications, without any necessity of code or executable exchanges, intermediate
services such as LDAP, or load balancing.
WeBaSeRex implements a framework to access, via Web forms, applications developed for the design
of new generation IP networks. The applications to be integrated are classified according to their
degree of readiness. A preliminary classification distinguishes among Freeware, Shareware and
Commercial applications.
Additionally, the readiness of the applications depends on the type of user that accesses the service, distinguishing between guests and full-right users. Full-right users have exclusive access to a fourth type of application, called Webware. These applications are, in most cases, commercial applications with strict licensing limitations. The laboratories of EuroNGI have developed these applications with different intentions. The partners cannot usually grant licenses for their programs (to respect commercial agreements with other institutions, to safeguard the corresponding know-how, to avoid indiscriminate use, illegal copies or license cracking, etc.). However, they want to show the functionalities of these applications without the limitations that a shareware version presents. The only way to keep a license intact is not to let it leave its proprietor's environment. The WeBaSeReX service allows remote execution of limited program versions in a specific and fully controlled location. The input parameter files and the result files make up the entire information exchanged between the users of the tools and the web service. Hence neither the code nor the executable has to be exchanged at any moment. The users do not even know the physical characteristics of the executed program, or the location of the server that carries out the execution.
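The exchange pattern described above (only input-parameter files travel to the tool server, and only result files come back) can be sketched as a toy Web service. This is an illustrative reconstruction, not WeBaSeReX code: the tool logic, the parameter names and the JSON encoding are all invented for the example.

```python
# Toy sketch of the WeBaSeReX exchange pattern: the client uploads an
# input-parameter file, the server runs the tool locally and returns
# only the result file; code and executables never leave the server.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def run_tool(params):
    """Stand-in for a licensed tool executed only on the owner's server."""
    return {"dimensioned_links": params["demands"] * 2}

class ToolHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        params = json.loads(self.rfile.read(length))   # input parameter file
        body = json.dumps(run_tool(params)).encode()   # result file
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ToolHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = Request(f"http://127.0.0.1:{server.server_port}/",
              data=json.dumps({"demands": 21}).encode(),
              headers={"Content-Type": "application/json"})
result = json.loads(urlopen(req).read())
server.shutdown()
print(result)
```

The user of the tool sees only the parameter and result files; the location and nature of the executing server remain hidden behind the Web interface.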
The Work Package currently counts on the active participation of four partners (34, 36, 38, 44), which contribute their own tools to the WeBaSeReX environment. The main objective of the WP was the implementation of a common tool framework, called WeBaSeReX. The remote execution environment is active, although delayed in its implementation. All the main functionalities are active, and some examples of remote tools are included. Currently the work is directed towards the inclusion of additional remote tools in the main environment. Another major problem being solved is the secure access to the environment, with digital certificate validation, which must be obtained directly through the common EuroNGI access procedures.
The WeBaSeReX environment was initially presented at the Barcelona WP meeting (2005), within the common plenary meeting. The main functionalities related to security and to access to the remote tools were explained. Following the conclusions of the first meeting in 2004, the collection of tool components and their integration is carried out on a corresponding platform implemented on a so-called tool server. This integration follows three phases:
1) Study of the proposed remote tools, which allows the hosting and integration of the partner tool under the common WeBaSeReX platform.
2) Collection of the different components and interfaces related to the remote tool, and their evaluation by the owner partners.
3) Design and implementation of an integration scheme, including the access forms and the remote hosting policies.
The first version of the WeBaSeReX portal was presented in April, at the 2004 EuroNGI Workshop. The collaboration between partners began then, with initial contacts with IT (partner 38) and AGH (partner 34) and their proposals of remote tools. In July, WeBaSeReX was presented internationally at the IEEE ISCC Congress in Orlando, USA. Additionally, some short (1-2 days) bilateral visits were made for research discussions between partners' representatives and PhD students.
G. Haring, G. Kotsis, A. Puliafito, O. Tomarchio: “REBELS: REmote Execution BasEd Load-balancing System”, 2nd European International Conference on Parallel and Distributed Systems (EURO-PDS’98), Vienna (Austria), July 1998.
“-based Metacomputing Problem-Solving Environment for building Complex Applications”, ERCIM News No. 45, http://www.ercim.org/publication/Ercim_News/enw45/baraglia.html , April 2001.
“WP.JRA.3.4 Development of a European Network Design Tool for NGI”, WP Description from EuroNGI, http://eurongi.enst.fr/archive/61/JRA.3.4.pdf , Dec. 2004.
A.E. García, K.D. Hackbarth, J.A. Portilla, R. Ortiz: “Collaborative Environment for Tool Sharing in the Framework of Euro-NGI Network of Excellence”, 2nd International Working Conference on Performance Modelling and Evaluation of Heterogeneous Networks (HET-NETs'04), 2004.
The work in the second year concentrated on finishing the first phase (Design and Implementation of the EURO-NGI TOOL PORTAL) and beginning the second phase (Collection of remote tools from the partners and study of their integration into a Macro-Tool). The main objectives and related works cover the following points:
- Security schemes adapted to a universal access environment: Following the work initiated in the first year, the access scheme to the WeBaSeReX portal should be centralized through the general EURO-NGI web, using the corresponding partner accounts without additional password overhead. Currently, some problems with the certification process have delayed all advances in this work, and access is made separately.
- Sharing application environment: WeBaSeReX establishes a common environment under a remote execution scheme to access different applications provided by the partners without any code interchange. This environment is divided into three main parts: shared tools, Remote Execution server and Communication server. The second deliverable explains in detail the EURO-NGI TOOL PORTAL implementation and its access by the EURO-NGI users.
- A first collection of possible tool components provided by the partners: The EURO-NGI TOOL PORTAL includes a database of partners and related network design and dimensioning tools. The third deliverable already identifies some tools which will be included into the sharing application environment in the next months.
In the design and implementation of the EURO-NGI TOOL PORTAL the following progress was achieved:
- Security schemes adapted to a universal access environment: Currently all the account translation schemes between the EuroNGI main server and the EuroNGI Tools Portal are implemented. Unfortunately, access to the WeBaSeReX environment suffers from some problems which must be solved in the next months.
- Sharing application environment: Currently the Remote Execution Server and the Communication Server are fully operative, with some remote sharing applications.
- The Web-based EURO-NGI TOOL PORTAL itself, called WeBaSeReX: the main structures are implemented, and basic as well as some advanced functionalities are included in the current version.
- A first collection of possible tool components: Two partners answered the request for remote sharing tools. Their applications were evaluated and integrated into the WeBaSeReX environment. Currently these remote tools are in a testing phase, as a first way to integrate new models of interfaces with the WeBaSeReX environment:
JASON Simulator (AGH, partner 34): an event-driven simulator of ASON (Automatically Switched Optical Networks) which provides the environment for simulation and evaluation of NG optical networks. The simulator allows the evaluation of algorithms for wavelength assignment and routing, and provides the capability to compose arbitrary physical topologies with bidirectional and/or unidirectional links. As a result of talks and discussions during the working visit of Z. Hulicki to UC, AGH-DoT has developed and provided UC with the user manual for the abovementioned network design tool.
NetDim (IT, partner 38): a generic network design tool appropriate for virtual circuit networks (e.g., MPLS networks). It accepts as input parameters (i) a traffic demand matrix, (ii) a network graph with links representing pairs of nodes between which transmission facilities can be set up, and (iii) the data characterising the transmission facilities. It determines a physical solution, together with the demand routing solution. NetDim considers both source-based routing (routing paths are independent) and single-path minimum-weight routing (the routing paths are the minimum weight paths defined by a set of link weights). In the second case, NetDim produces an additional file (in LP format) that can be used by an Integer Linear Programming tool to solve the link weight assignment problem. As a result of e-mail exchanges, IT provides different versions of NetDim, including some graphical interfaces.
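The single-path minimum-weight routing mode described above can be illustrated with a small sketch: given a set of link weights, each demand is routed on the minimum-weight path between its endpoints. This is not NetDim code, just a self-contained Dijkstra example on an invented toy topology.

```python
# Minimum-weight routing over a set of link weights (Dijkstra).
import heapq

def min_weight_path(links, src, dst):
    """links: {(u, v): weight} for an undirected graph; returns (path, cost)."""
    adj = {}
    for (u, v), w in links.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Toy topology: the weights make A-C-D cheaper than the direct A-D link.
links = {("A", "B"): 1, ("B", "D"): 5, ("A", "C"): 2,
         ("C", "D"): 2, ("A", "D"): 10}
path, cost = min_weight_path(links, "A", "D")
print(path, cost)
```

In NetDim's second mode, finding the link weights themselves (so that these minimum-weight paths realize a good overall routing) is the harder problem, which is why the tool exports it in LP format for an ILP solver.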
Partner 44, as WP leader, provides all the work related to the design and implementation of the EURO-NGI TOOL PORTAL. Collaborations with other partners were documented in the 3rd deliverable, and they allowed the identification of some interesting applications to be modified and adapted to the sharing application environment. Since then, collaborations have been allocated in accordance with the developments of the rest of the WPs of the JRA 3 section.
Additionally, JRA 3.4 generated the following papers and publications:
- A.E. García, J. Alvarez, K.D. Hackbarth, R. Ortiz: “Applications of security protocols for tele-education environments: virtual exams”, IADAT Journal of Advanced Technology on Education, ISSN 1696-1073, April 2005.
- A.E. García, K.D. Hackbarth, R. Ortiz: “Web-Based Service for Remote Execution: NGI Network Design Application”, 2005 NGI Conference, 18-20 April 2005, ISBN 0-7803-8901-8, IEEE Catalog Number 05EX998C.
- A.E. García, K.D. Hackbarth, R. Ortiz: “WeBaSeReX: Next Generation Internet Network Design Using Web Based Remote Execution Environments”, 2005 Proceedings of the IEEE International Conference on Services Computing, ISBN 0-7695-2408-7, pp. 200-209, 11-15 July 2005, Orlando, USA.
- A.E. García, K.D. Hackbarth, R. Ortiz: “WeBaSeReX (Web Based Service for Remote Execution): Implementación de Servicios de Acceso Seguro a Aplicaciones Compartidas”, TELECOM I+D 2005, ISBN 84-689-3794-0, 22-24 Nov. 2005, Madrid.
6.1.31. Annex : list of participants
ID Partner Name
18 Infosim GmbH & Co. S. Kohler
18 Infosim GmbH & Co. M. Schmit
34 AGH Krakow P. Cholda
34 AGH Z. Hulicki
34 AGH M. Kantor
38 Institute of Telecommunication A. de Sousa
38 Institute of Telecommunication C. Lopes
43 TID, Telefónica I+D (*) M. Villén
43 TID, Telefónica I+D S. Azcoitia
43 TID, Telefónica I+D G. Carrión
43 TID, Telefónica I+D C. Mora
44 UC-Spain K. Hackbarth
44 UC-Spain A.E. García
44 UC-Spain R. Ortiz
44 UC-Spain J.A. Portilla
44 UC-Spain C. Díaz
(*) Since April 2005 withdrawn from EURO-NGI
7. Experimentations and Validation Through Platforms
1.13 WP.JRA.4.3 - Measurement Platforms
Rui Valadas, Marco Mellia, Dario Rossi…
Traffic platforms, traffic measurements, statistical analysis, fixed core network, access network, IP networking, mobile access network, metro network.
7.1.3. Scope of the domain
The characteristics of Internet traffic are becoming more and more complex due to the large and
growing diversity of applications, the drastic differences in user behaviours, and the complexity of
traffic generation and control mechanisms. Therefore, traffic measurements and analysis play a critical role in successfully enabling the future Internet, from both the user and the provider perspectives. Indeed, traffic measurements and analysis provide the former with a means to check the negotiated service level agreements, and the latter with tools to both verify and improve the efficiency of network engineering.
From a very high-level perspective, measurement methods can be classified as either passive (i.e., inferring performance measures from the passive observation of live traffic) or active (i.e., injecting ad-hoc traffic into the network and collecting measures from this traffic).
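As a toy illustration of the active method, the following sketch injects timestamped UDP probes toward a local echo responder and derives round-trip-time samples from them. Real active measurement protocols such as OWAMP measure one-way delay and require synchronized clocks; everything here, including the local echo responder, is invented for the example.

```python
# Active measurement in miniature: send timestamped probe packets and
# compute RTT samples from the reflected copies.
import socket
import statistics
import struct
import threading
import time

def echo_server(sock):
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"stop":
            return
        sock.sendto(data, addr)  # reflect the probe unchanged

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtts = []
for seq in range(5):
    t0 = time.perf_counter()
    cli.sendto(struct.pack("!Id", seq, t0), srv.getsockname())  # probe: seq + timestamp
    cli.recvfrom(64)
    rtts.append(time.perf_counter() - t0)
cli.sendto(b"stop", srv.getsockname())
print(f"mean RTT: {statistics.mean(rtts) * 1e6:.0f} us over {len(rtts)} probes")
```

A passive method would instead observe live traffic at a tap and infer the same kind of delay statistics without injecting any probes.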
Presently, the main standardization efforts in the area of traffic measurements are being developed
under the scope of the IETF through the following working groups: Packet Sampling (psamp), IP Flow
Information Export (ipfix) and IP Performance Metrics (ippm). In particular, the ippm group is
developing the One-Way Active Measurement Protocol, in which EuroNGI has actively participated.
7.1.4. Mid-term/long-term evolution in this area
Given that network management will be increasingly based on network measurements, the issue of scalability will assume substantial importance in the next generation Internet. Recently, this issue has motivated strong research on sampling techniques and on alternative architectures for packet-level measurements.
Clearly centralized architectures are not appropriate for the next generation Internet measurement
platforms, due to the large amounts of data that need to be processed and stored. It is expected that
the architectures of measurement infrastructures will become more distributed, allowing for
distributed storage, replication and efficient retrieval of network measurements, possibly mimicking
the peer-to-peer paradigm.
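The flavour of the sampling techniques mentioned above can be shown with a minimal sketch of systematic 1-in-N packet sampling, in the spirit of the IETF psamp work: only every N-th packet is kept, and per-flow byte counts are scaled back up by N to estimate the unsampled totals. The traffic trace below is synthetic.

```python
# Systematic count-based 1-in-N packet sampling with inverse scaling.
import random

def sample_and_estimate(packets, n):
    """packets: list of (flow_id, size); returns estimated bytes per flow."""
    est = {}
    for i, (flow, size) in enumerate(packets):
        if i % n == 0:                       # keep every n-th packet only
            est[flow] = est.get(flow, 0) + size * n  # scale up by n
    return est

random.seed(1)
packets = [("f%d" % random.randrange(3), 1500) for _ in range(30000)]
true_total = sum(size for _, size in packets)
est = sample_and_estimate(packets, 100)
print(est, sum(est.values()), true_total)
```

The router only stores 1/100th of the records, yet the scaled estimates recover the aggregate volume; accuracy per flow degrades for small flows, which is exactly the trade-off the sampling literature studies.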
Passive and active measurements have mostly been used in isolation in the past. However, the joint use of both methods will enable a deeper understanding of the network and its users, and in addition allows cross-checking of the statistical results. Therefore, it is expected that next generation measurement platforms will integrate both active and passive measurements.
Traffic measurements have mostly been used to characterize the network state through simple statistics like averages or variances (e.g. average packet delay and packet delay variation). The ever-growing complexity of Internet traffic will require a more detailed description of its characteristics, through a larger number of summary statistics and eventually through traffic models. It is expected that measurement platforms will progressively incorporate the resources for computing more sophisticated statistics. Given the above-mentioned scalability issues, it is expected that these resources will be provided locally (close to the measurement points) and incorporated in a distributed architecture.
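A sketch of such locally computed summary statistics, using an invented list of per-packet one-way delays: beyond the mean, a measurement point can also keep the variance, the inter-packet delay variation (IPDV) and quantiles.

```python
# Summary statistics a measurement point could compute locally.
import statistics

delays_ms = [10.2, 10.8, 9.9, 12.5, 10.1, 10.4, 11.0, 10.0]  # invented samples

mean = statistics.mean(delays_ms)
var = statistics.pvariance(delays_ms)
# IPDV: delay difference between consecutive packets (RFC 3393 flavour)
ipdv = [b - a for a, b in zip(delays_ms, delays_ms[1:])]
p95 = sorted(delays_ms)[int(0.95 * (len(delays_ms) - 1))]

print(f"mean={mean:.2f} ms  var={var:.2f}  "
      f"max|IPDV|={max(map(abs, ipdv)):.2f} ms  p95={p95:.1f} ms")
```

Only these few numbers, not the raw per-packet records, would then travel to the central elements of a distributed architecture.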
The performance metrics derived from packet and flow level measurements are sometimes difficult to correlate with the actual quality of service perceived by the users of specific applications. It is expected that traffic measurements will be designed to address more directly the quality of service perceived by the users.
Due to the growing diversity of applications and the security problems of the Internet, it is becoming more and more difficult to extract information from packet headers for traffic analysis and classification. In fact, it is increasingly complicated to detect applications based on transport-level ports, since many of them do not use well-known ports, or run on non-standard ports. Peer-to-peer traffic, today the largest portion of Internet traffic, is probably the best example that can be considered. Moreover, network security is today one of the major problems, and is predicted to become one of the most challenging topics for the next generation Internet. With the continuous evolution of applications, it is expected that cryptographic techniques will be extensively used by developers to overcome rules imposed by network administrators, therefore making it more difficult to monitor network traffic. One possibility to circumvent these problems is to resort to advanced statistical techniques for traffic analysis and classification, and it is expected that in the near future these techniques will be researched and developed more extensively.
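As a hedged illustration of such statistical classification (all feature values and class names below are invented), a toy nearest-centroid classifier can separate flows by behavioural features such as mean packet size and mean inter-arrival time, without inspecting ports or payload:

```python
# Port- and payload-free classification from flow behaviour.
import math

training = {  # illustrative (mean packet bytes, mean inter-arrival s) samples
    "voip": [(160, 0.020), (172, 0.021), (150, 0.019)],
    "bulk": [(1480, 0.002), (1500, 0.001), (1400, 0.003)],
}

def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(flow, centroids, scale=(1500.0, 0.02)):
    """Assign the label of the nearest centroid (features rescaled first)."""
    def d(a, b):
        return math.dist([x / s for x, s in zip(a, scale)],
                         [x / s for x, s in zip(b, scale)])
    return min(centroids, key=lambda lab: d(flow, centroids[lab]))

print(classify((165, 0.018), centroids))   # small, regularly paced packets
print(classify((1450, 0.002), centroids))  # large, back-to-back packets
```

Real proposals use richer feature sets and proper statistical learning, but the principle is the same: the behaviour of the flow, not its header fields, carries the classification signal, and it survives encryption.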
7.1.5. Major Open Problems
Taking into account the mid-term/long-term evolution in this area, the major open problems are the following:
- Distributed architectures for measurement platforms.
- Accuracy of network measurements.
- Privacy issues regarding network measurements.
- Techniques for jointly exploring passive and active measurements.
- Advanced statistical analysis for traffic analysis and classification.
7.1.6. Work in progress within the WP
The WP JRA.4.3 – Measurement Platforms is addressing all the research topics mentioned in section 3. In addition, the WP is performing the first steps towards the specification and development of a distributed measurement platform. The WP has actively participated in the development of the OWAMP (One-Way Active Measurement Protocol) standard.
7.1.8. Annex : list of participants
António Nogueira, IT
Dario Rossi, TGN-Polito
Geraldine Texier, GET
Hélder Veiga, IT
Marco Mellia, TGN-Polito
Patrik Arlos, BTH
Paulo Salvador, IT
Rui Valadas, IT
1.14 WP.JRA.4.4 - Ultragigabit/s trials for the investigation of structured models
S. Giordano and F. Mustacchio
Metro Networks, Ultragigabit/s, Traffic characterization, Structured models.
7.1.12. Scope of the domain
Performance and functional validation of the technical solutions proposed for the Next Generation Internet are better obtained in a controlled network environment. The activity in this WP makes use of a multigigabit/s network infrastructure (realized on a private optical substrate completely owned by the University of Pisa, interconnected as an experimental platform to the national research network GARR-G and to GEANT) that will be adopted as an “in vitro” environment in which to gradually introduce critical multimedia interactive traffic, traditional applications and new GRID computing traffic. The network is so far implemented as a ring, but by the adoption of D-WDM O-ADMs other logical network topologies will be obtained over the same physical substrate. The measurement activities will also be carried out at Gigabit/s speeds, and comparison with traditional analysis at 10 or 100 Mbit/s speeds will be considered. The field trial will be realized to offer a DiffServ over MPLS environment composed of commercial (Cisco, Juniper) router devices and PC-based, Linux, open routers. The routers are equipped with Gigabit/s ports, and the O-ADMs multiplex information streams realizing superimposed lambda rings. The adoption of gigabit/s trials will permit a better understanding of the statistical nature of aggregated multimedia sources, which will be the most critical ones in guaranteeing end-to-end performance. In this case also, as much as possible, real sources will be adopted as the traffic load of the metro network (H.323 and SIP VoIP sources, and MPEG-2 and MPEG-4 video sources). This activity is strongly related to the issues of proper aggregation strategies, token bucket aggregate characterization (the inverse token bucket problem) and limit behaviours, which could be very relevant in dimensioning future very large bandwidth backbones offering differentiated services.
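The token bucket characterization mentioned above can be stated compactly: a flow conforms to a (rate r, bucket depth b) descriptor if a bucket that fills at rate r and is capped at b never lacks tokens when each arriving packet drains its size. The following self-contained sketch, with an invented packet trace, checks conformance:

```python
# Token-bucket conformance check for a timestamped packet trace.
def conforms(packets, r, b):
    """packets: list of (arrival_time_s, size_bytes), time-ordered; r in bytes/s."""
    tokens, last_t = b, 0.0
    for t, size in packets:
        tokens = min(b, tokens + (t - last_t) * r)  # refill since last arrival
        last_t = t
        if size > tokens:
            return False   # this packet exceeds the available tokens
        tokens -= size
    return True

# A back-to-back burst of three 1500 B packets needs b >= 4500 at r = 10 kB/s.
burst = [(0.0, 1500), (0.0, 1500), (0.0, 1500), (1.0, 1500)]
print(conforms(burst, 10_000, 4500), conforms(burst, 10_000, 3000))
```

The inverse token bucket problem referred to in the text goes the other way: given a measured aggregate, find the smallest (r, b) pairs for which such a check succeeds.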
The name "structured models" comes from the idea of not performing just "time series analysis" or "tractable model fitting" but, as much as possible, investigating a modelling process based on a functional/architectural motivation of the stochastic nature of single flows and flow aggregates. This activity will lead to a better understanding of a possible "convergence" of the traffic to some form of invariant nature in large scale, ultra gigabit/s backbones, where over-provisioning has the most severe consequences.
7.1.13. Mid-term/long term evolution in this area
The research in this area evolves quite slowly, and future results are difficult to predict. This is mainly due to the difficulties of building a gigabit/s network infrastructure that can be dedicated to this kind of experiment, and of realizing an operative environment in which to perform realistic experiments.
7.1.14. Major Open Problems
The traffic characteristics of ultra-gigabit/s networks have not yet been predicted. Current traffic models may not be applicable to this category of applications. Better knowledge of traffic in ultra-gigabit/s networks is needed to be able to predict network performance and to obtain a functional validation of the technical solutions proposed for the Next Generation Internet. For example, what are the expected characteristics of image retrieval, WWW server clustering, network caching for data-intensive computing applications, and high-definition video delivery on high capacity links?
Another key issue is to develop measurement systems that are able to capture traffic on high-speed links. This is a hot topic which requires further investigation.
7.1.15. Work in progress within the WP
This workpackage covers, in the framework of Activity 4, platform deployment and measurements related to ultra gigabit/s speeds, which will be common especially in a metro environment. The experiments on the field trial will be directed to analyzing the interaction of real best-effort and multimedia/interactive real-time sources in a controllable environment. It is well known that so far the Internet is mainly loaded by TCP traffic (around 95% of the traffic is TCP), but this is also due to the fact that real-time interactive sources were definitely discouraged, since today's Internet offers only a best-effort transport capability. Nevertheless, in a global environment, TCP traffic is also changing its nature due to the impressive increase of peer-to-peer application environments, completely absent just a few years ago. Grid computing and real VBR voice and video sources are also changing the nature of traditional UDP traffic. In the framework of this activity, a controlled environment composed of Voice over IP peripheral networks, MPEG-2 over IP real entertainment video traffic, GRID computing traffic, peer-to-peer communications and artificial sources will be analysed. The VoIP environment will be composed of H.323 and SIP trials, while real MPEG-2 over IP video streaming will be obtained by receiving DVB-SAT and mapping the MPEG-2 information units, without any decoding, onto unicast and multicast IP packets. GRID computing traffic will be obtained by extending over a metropolitan and geographical area the classical GLOBUS middleware environment, which is receiving the most promising technical consensus for clusters of computers usually interconnected on Local Area Networks. These experiments will give the opportunity to realize an "in-vitro" environment very different from the traditional "uncontrollable" network scenario, where there is no specific understanding of the use of the network and of its basic traffic engineering. This will give the opportunity to link the performance metrics related to network and packet level measurements with the perceptual quality observed end to end (using, if possible, subjective and objective PQoS measures).
The main activities comprise:
- Deploying a metropolitan area network based on monomode fibers and DiffServ over MPLS;
- Extending this simple ring topology by the use of D-WDM O-ADM nodes, realizing a Virtual Optical Interconnection Network;
- Deploying a real-time multimedia environment composed of VBR packet video streams, Voice over IP sources, GRID computing sources, peer-to-peer applications and artificial (computer generated or pre-recorded) traffic;
- Deploying an advanced measurement environment composed of RMON and SNMP probes, Gigabit/s-capable protocol analyzers, open source tools for traffic generation and measurement, and GPS synchronization devices (required, for instance, for one-way-delay measurements).
7.1.17. Annex : list of participants
29 Univ. of PISA Stefano Giordano
29 Univ. of PISA Michele Pagano
29 Univ. of PISA Franco Russo
29 Univ. of PISA Rosario G. Garroppo
29 Univ. of PISA Davide Adami
29 Univ. of PISA Fabio Mustacchio
29 Univ. of PISA Gregorio Procissi
8. Modelling and Measurements
1.15 WP.JRA.5.1 - IP Traffic Characterization, Traffic Estimation and Internet Data Mining
IP Networking, Traffic Characterization, Traffic Estimation, Internet Data Mining
8.1.4. Scope of the domain
IP traffic characterization, traffic estimation and Internet data mining by advanced statistical methods are a vital area of current Internet research. The field covers the following main topics, among others:
● traffic characterization of different service models and network environments
  ○ Internet and Web traffic modeling and analysis
    ■ client-server traffic analysis
    ■ peer-to-peer traffic analysis
  ○ traffic characterization of mobile users in current wireless networks and NGNs
  ○ modeling of user behavior
● characterization of traffic demands
  ○ estimation of traffic matrices
  ○ analysis of the structure of IP traffic
● modeling and analysis of the structure of the Internet and World Wide Web
  ○ graph-theoretical models of the Internet
  ○ modeling the connectivity of the Web
  ○ modeling the dynamics and growth of the Web
● traffic analysis, estimation and generation techniques
  ○ analysis and reconstruction techniques for long-range-dependent and self-similar traffic
  ○ statistical estimation of traffic characteristics
  ○ prediction of traffic streams
  ○ generation of traffic loads
  ○ mathematical and statistical tools and techniques
    ■ stochastics of marked point processes
    ■ theory of Semi-Markovian processes
    ■ time series analysis
    ■ theory of statistical classification
    ■ statistical learning theory
The field is characterized by a rich and diverse methodology for the different subjects. Currently, it is not possible to map these diverse subjects onto a common set of standards.
The topics are partly covered by EuroNGI WP.JRA.5.1 “Traffic Characterization, Measurements and Statistical Methods”.
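One classical technique behind the analysis of long-range-dependent and self-similar traffic listed above is the variance-time (aggregated-variance) estimator of the Hurst parameter H: for a self-similar series the variance of the m-aggregated means decays as m^(2H-2), so a log-log regression over m yields H. A minimal self-contained sketch, run on synthetic independent noise (for which H should come out near 0.5):

```python
# Variance-time estimate of the Hurst parameter H.
import math
import random

def aggregate_variance_hurst(x, ms=(1, 2, 4, 8, 16, 32)):
    """Estimate H from the decay of the variance of m-aggregated means."""
    pts = []
    for m in ms:
        blocks = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        mu = sum(blocks) / len(blocks)
        var = sum((b - mu) ** 2 for b in blocks) / len(blocks)
        pts.append((math.log(m), math.log(var)))
    n = len(pts)                      # least-squares slope of log-var vs log-m
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return 1 + slope / 2              # var(m) ~ m^(2H-2)  =>  slope = 2H - 2

random.seed(7)
iid = [random.gauss(0, 1) for _ in range(50000)]
h = aggregate_variance_hurst(iid)
print(f"estimated H for independent noise: {h:.2f}")
```

Measured LAN and WAN traces typically yield H clearly above 0.5, which is the signature of long-range dependence; more robust estimators (e.g. wavelet-based ones, as used by the partners) follow the same logic on a different decomposition.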
8.1.5. Mid-term/long term evolution in this area
Currently, traffic characterization, classification and estimation contributes to the mathematical basis
of network design, traffic engineering and control. The simplest related models, methods and
algorithms are partly implemented in network elements to support monitoring, management and
control tasks as well as in corresponding analysis and design tools. However, the richness of the
methodology is neither fully developed nor sufficiently applied yet. Compared to other engineering
areas of transport with deep economic and social impact, e.g., design and control of planes and air
traffic, both the theoretical basis and its implementation have to be consolidated.
Based on our current understanding, in the near future an improved instrumentation of monitoring
functions in network elements of high-speed backbone networks, e.g. Gigabit routers and optical
switches, and the availability of fine grained distributed measurements will provide the basis for a
more detailed traffic characterization on demand and the end-to-end QoS assessment. The latter
task is performed by means of marked point processes along the full set of hierarchically ordered
time scales of Internet traffic. The currently used mean-value analysis of traffic in terms of pure loads
(Mbps) may be enhanced on demand by zooming deeper into the loads and the related point
processes. It can be expected that a more detailed analysis including an identification and
classification of the traffic structure, may be triggered by network or system management activities
similar to the monitoring and control functionality established in flight systems.
A prerequisite for the improved QoS control, reliability and security support in backbone networks will be provided by the further development of network management functionality on top of the current signaling, transport and service infrastructure of the Internet and Web. In this context, the spin-off results of traffic classification and estimation will be implemented to support network management.
Regarding access networks, current traffic characterization, particularly of the growing portion of fixed and mobile peer-to-peer traffic, influences the design of network structures, e.g. the deployment of symmetric access links by associated xDSL technology, and stimulates the development of autonomic network solutions including self-healing and self-optimization based on a more detailed traffic characterization. From our current perspective, it is expected that this trend will prevail and create further demand for on-line traffic characterization and estimation procedures.
Considering security issues and the prevention of disastrous attacks, it is expected that on-line identification and classification schemes based on statistical learning, and self-healing procedures triggered by the gathered information, will play a more important role in network implementation than at the present stage.
Long term evolution
From a long-term perspective, the future development of the network and service infrastructure depends critically on the successful deployment of new services and business models. It is expected that high traffic volumes will be switched in fully optical backbones and that these streams will come closer to the homes than ever before, using high-speed optical and wireless access infrastructure. The corresponding traffic characterization depends severely on these as yet unknown service profiles. Its tasks, to identify invariants of these profiles and to classify and estimate them, will always exist and will always be an important part of network design, management and control.
It is expected that the short-term trend to provide the system and network elements with more autonomous intelligence will be maintained to cope with the growing complexity of the infrastructure. By further development of the autonomic networking and computing concepts, it is expected that automatic on-line traffic identification, classification and estimation processes will be established and be linked more tightly to system as well as network management and control than ever before. This development will constitute a prerequisite to balance the OPEX of future network and service infrastructures.
With the expected wider deployment of grid applications and the tighter and more robust coupling of computational power and intelligence at the edges of high-speed networks, it can be expected that the improved characterization of perceived end-to-end QoS objectives at the application layer will become more important than it is nowadays.
Provided that the investment into system and network management infrastructure can be afforded and robust traffic models can be developed, a thorough supervision of network and Web traffic as well as service overlays can be established by hierarchical distributed monitoring systems, analysis and control elements, in a way similar to the flight and space supervision environments and the monitoring and control of aircraft and space vehicles.
8.1.6. Major Open Problems
The major problems are related to the on-line processing of measurement, analysis, estimation and
classification tasks. Some important tasks comprise:
● efficient on-line characterization of IP traffic attributes and their changes on high-speed links,
e.g. by improved time-space analysis (filter banks, wavelets and ramifications)
● traffic prediction in wireless networks
● on-line classification of Web traffic
● on-line estimation and prediction of Internet/Web traffic
● modeling of user behavior, particularly in wireless networks
● on-line identification of malicious traffic patterns and attacks
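For the last item, a minimal on-line sketch (all thresholds and traffic values below are invented): an exponentially weighted moving average tracks the per-interval packet count and its mean deviation, intervals far above the smoothed baseline are flagged, and flagged spikes are not absorbed into the baseline:

```python
# On-line EWMA-based detection of traffic surges (toy illustration).
def ewma_detector(counts, alpha=0.2, k=5.0, warmup=5):
    """Flag interval indices whose count deviates > k * smoothed deviation."""
    mean, dev, alarms = counts[0], 1.0, []   # dev starts at an arbitrary prior
    for i, c in enumerate(counts[1:], start=1):
        if i >= warmup and abs(c - mean) > k * dev:
            alarms.append(i)                 # flag; do not absorb the spike
            continue
        dev = (1 - alpha) * dev + alpha * abs(c - mean)
        mean = (1 - alpha) * mean + alpha * c
    return alarms

# Steady background of about 100 pkts/interval with a burst (e.g. a scan).
traffic = [100, 104, 97, 101, 99, 103, 98, 900, 850, 102, 100]
print(ewma_detector(traffic))
```

The statistical-learning approaches discussed in this section replace such fixed thresholds with models of normal traffic learned from data, but the on-line, constant-memory structure is the same.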
The reasons for these problems arise mainly from
● the high dynamics and substantial changes in the Internet
● the complexity of the traffic issues, which cover multiple time scales
● a lack of a profound mathematical and statistical foundation of efficient and robust algorithms, since their development cannot keep pace with the high dynamics of the Internet
Since accurate and effective traffic characterization, classification and estimation provides the basis of any efficient network design, management and control, it is important to follow the dynamics of the Internet and to put substantial effort into its further development. Poor understanding will lead to a weak implementation of management and control algorithms, and generate poor network and service performance or its severe degradation under critical circumstances.
8.1.7. Work in progress within the WP
WP JRA.5.1 “Traffic Characterization, Measurements and Statistical Methods” deals with traffic
characterization of different service models and network environments and traffic analysis,
estimation and generation techniques.
The characterization of traffic demands is handled by WP 3. The development of a distributed measurement infrastructure is carried out by WP.JRA.4.3 (IT (38), BTH (49)), which closely cooperates with JRA.5.1.
Traffic characterization in wireless networks of type 2.5G and wireless LANs is handled by VTT (8)
and UBAM (21). Analysis algorithms based on 2-dimensional traffic attributes are developed.
Considering the analysis and estimation of Web traffic attributes with heavy-tailed features, new off-line and on-line approaches are developed by ICS (41).
Considering client-server architectures and the corresponding Internet traffic, TCP flow analysis is
performed and anomalies are identified and characterized by appropriate measurement and
corresponding analysis methods by partner TNG-Polito (27). Peer-to-peer traffic is studied by BTH
(49). Multimedia traffic and related QoS issues are studied by UBAM (21) and NTNU (32). Off-line
classification of Web traffic is investigated by CEMAT (39). Traffic characterization of self-similar
traffic by wavelet techniques is performed by UPC (42).
Mathematical and statistical tools for traffic modeling and estimation are developed by CEMAT (39)
and ICS (41). Traffic generation for high-precision measurements is developed by TLC-Pisa (29).
Regarding the requirements of high-speed networking in the NGN, new efficient on-line analysis,
traffic characterization and estimation methods are needed in addition to the off-line
characterization of Web and Internet traffic described above. In particular, automatic on-line
classification of traffic features is needed. A further integration of analysis tools into measurement
platforms also seems useful.
Furthermore, the on-line identification of malicious traffic patterns and attacks has to be addressed,
e.g. by applying the methodology of statistical learning and a systematic identification and
comparison with normal traffic patterns. The issues have been partly addressed by other EU actions
including MOME and its successors.
Load identification, traffic characterization and classification have to be further developed as an
integral part of the self-optimization and self-healing components of autonomic networking.
An open issue not covered by the workpackage concerns the modeling of user behavior in advanced
Web service or peer-to-peer contexts since it requires deeper psychological knowledge not provided
by members of the WP.
The sketched research issues are not yet mature enough to be considered in standardization activities.
8.1.9. Annex : list of participants
Partner Partner Name Contributor Name Contributor e-mail address
8 VTT J. Kilpi Jorma.Kilpi@vtt.fi
21 UniBamberg U. Krieger firstname.lastname@example.org
21 UniBamberg W. Sandmann email@example.com
27 TNG-Polito L. Muscariello firstname.lastname@example.org
27 TNG-Polito M. Mellia email@example.com
27 TNG-Polito D. Rossi firstname.lastname@example.org
29 TNG-Pisa M. Pagano email@example.com
32 NTNU P. Emstad peder@Q2S.ntnu.no
38 IT R. Valadas firstname.lastname@example.org
39 CEMAT A. Pacheco email@example.com
39 CEMAT N. Antunes firstname.lastname@example.org
39 CEMAT R. Oliveira email@example.com
39 CEMAT H. Ribeiro firstname.lastname@example.org
39 CEMAT F. Ferreira email@example.com
41 ICS N. Markovich firstname.lastname@example.org
42 UPC D. Rincon email@example.com
49 BTH M. Fiedler Markus.Fiedler@bth.se
49 BTH P. Arlos Patrick.Arlos@bth.se
49 BTH A. Popescu firstname.lastname@example.org
35 IITiS PAN K. Grochla email@example.com
1.16 WP.JRA.5.4 - Network optimisation and control
S. Zachary et al.
Core Network Fixed, Access Network, Mobile Access Network, Service Overlay
8.1.12. Scope of the domain
Digital communications networks, and in particular the Internet, present system designers and
engineers with a rich assortment of challenging optimisation and control problems. These vary from
problems of optimal resource allocation in well-characterised and cooperative environments to
problems of optimisation and control in complex stochastic environments in which both network
resources (in both fixed and wireless networks) and user demands evolve with uncertainty, states are
transient and conditions are adversarial. The following major topics have been identified as falling
particularly within the scope of this workpackage.
Fluid models, both deterministic and stochastic. Among the issues to study are continuous linear
programming techniques, modelling and optimisation of multiclass queueing networks, the
study of transient communication networks over finite time horizon and the study and
optimisation of network protocols such as TCP.
Achievable region. This approach relates to scheduling problems in networks and allows not only
optimisation with respect to a single criterion but also the solution of multi-criteria problems.
The achievable region approach is aimed at identifying tradeoffs between various criteria, such as
the expected delays at different queues served under a large class of possible priority rules.
Game theoretic models. These aim at studying optimisation problems that involve competition
between decision makers in networks. In particular, we aim at studying competition at the
access to a common channel, cooperation and pricing in Ad Hoc networks and competition
between service providers in terms of pricing strategies and quality of service they offer.
Optimal control and resource allocation in networks, with the aim of maximising network
usability and of providing the correct incentives (e.g. charging) to users so that individual
user optimisation leads to network optimisation.
Performance evaluation and dimensioning rules based on max-plus or min-plus algebra, and
more generally, on stochastic network calculus. Note in particular that the so-called Lindley
equations (which describe the workload process in many queueing systems) are special cases
of stochastic max-plus systems.
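By way of illustration, the Lindley recursion W(n+1) = max(0, W(n) + S(n) - T(n)), which describes the waiting-time process of a single-server queue, can be simulated directly. The following Python sketch is an illustrative example (the M/M/1 parameters are assumptions chosen for this sketch, not taken from the workpackage) that compares the long-run mean waiting time against the known M/M/1 value rho/(mu - lambda).

```python
import random

def lindley_waiting_times(service, interarrival):
    """Lindley recursion W_{n+1} = max(0, W_n + S_n - T_n),
    a special case of a stochastic max-plus linear system."""
    w = 0.0
    out = []
    for s, t in zip(service, interarrival):
        out.append(w)
        w = max(0.0, w + s - t)  # max-plus update of the workload
    return out

random.seed(1)
n = 10_000
# Illustrative M/M/1 example: arrival rate 0.8, service rate 1.0 (load 0.8).
inter = [random.expovariate(0.8) for _ in range(n)]
serv = [random.expovariate(1.0) for _ in range(n)]
w = lindley_waiting_times(serv, inter)
# Theoretical mean waiting time: rho / (mu - lambda) = 0.8 / 0.2 = 4.0
print(sum(w) / len(w))
```

The simulated mean fluctuates around the theoretical value; the point of the sketch is only that the workload process is a (max, +) recursion, as noted above.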
8.1.13. Mid-term/long term evolution in this area
Key technical evolutions in the area covered by the workpackage will include:
1. The development of control strategies for efficient resource usage in networks, both with
regard to capacity utilisation (e.g. the avoidance of entrainment in networks with
simultaneous resource requirements) and with regard to efficient resource distribution (e.g.
routing in fixed and mobile ad hoc networks, and frequency spectrum in cellular networks).
Such control strategies involve the development of algorithms for admission control,
scheduling, prioritisation, routing and bandwidth allocation. Of particular importance here
are questions of stability.
2. Methodology for network design and dimensioning. This is closely related to 1. above and is
of particular importance in mobile networks, both cellular and ad hoc.
3. The development of resource allocation strategies amongst competing classes of users.
These may be based on economic models involving “willingness to pay”, either for
bandwidth or for priority in congested networks.
Notably, fluid and diffusion models have been used for stability analysis and performance
evaluation of increasingly complex feedback policies in multiclass queueing networks. They
were also used to establish asymptotically optimal policies. Recently, there have been two
important contributions to these techniques in association with internet congestion control
and the control of multiclass queueing networks.
- Srikant and co-authors considered fluid models with delay to incorporate important
features of TCP traffic for internet congestion control problems. Rieder and Bauer
introduced fluid models for controlled stochastic networks with delayed dynamics
and studied asymptotically optimal policies.
- Veatch developed approximate linear programming based on fluid models to
compute nearly optimal average cost policies for multiclass queueing networks.
- Both approaches promise further insights into control and system performance
when extended to more complex and larger systems.
With regard to 3. above, there is a growing tendency to treat computing capacity as a
commodity that can be traded on the open market. Service provisioning systems are being
designed and implemented, and some are already operational (e.g., at Sun Microsystems).
Computing grids are being deployed in academic settings, and are being used for serious
scientific purposes (e.g., biology, particle physics and astronomy). These tendencies are likely
to continue and accelerate, provided that the problems associated with costs, efficiency,
dependability and performance can be solved.
8.1.14. Major Open Problems
This workpackage is concerned with mathematical research into optimisation and control of networks.
The complexity of the subject is such that only special cases of problems can ever be solved in their
entirety, and hence all the topics falling within the scope of the workpackage and identified in 1 above
constitute ongoing open problems. We note in particular the following:
Optimal control and resource allocation. Within this area there is a vast range of modelling and
optimisation problems. The solution to these is vital to the improvement of network performance, in
particular to the avoidance of congestion and unacceptable delays, and to network stability. Of
particular importance are problems of admission control, prioritisation, scheduling, routing and
bandwidth allocation. One notable and very important problem is that of networks carrying
heterogeneous traffic, e.g. file transfers versus streaming applications in the Internet, where the former
have elastic and the latter inelastic bandwidth requirements. Outside of EuroNGI, much of the
research in this area is concentrated in an extremely large number of North American universities and
research laboratories. There are also a number of teams at the University of Cambridge and some
other European institutions.
Fluid and diffusion models. With regard to EuroNGI research, optimisation of transient
communication networks over finite time horizons using online policies is a formidable task and
largely intractable. Therefore the approaches mentioned in 2 were initiated to find approximately
optimal policies to control complex systems. Even with these tools it is still challenging to solve
problems of realistic size and further research is needed to refine these methods.
Groups working on fluid models with delay:
- Prof. R. Srikant, University of Illinois at Urbana-Champaign
- Prof. S. Meyn, University of Illinois at Urbana-Champaign
- Prof. U. Rieder, University of Ulm
Groups working on approximate linear programming:
- Prof. B. van Roy, Stanford University
- Prof. D.P. de Farias, Massachusetts Institute of Technology
- Prof. M.H. Veatch, Gordon College
- Prof. U. Rieder, University of Ulm
Dynamic scheduling and control. The random nature of user demand (including changes of demand
patterns over time), possibly coupled with random periods of server unavailability, can lead to
temporary overloading of some servers, and underloading of others. In such situations it is
important, both to the users of the system and to the service provider, to have efficient policies for
job distribution and for allocation of servers to the various job types. Those policies should be
dynamic, so as to be able to react to changing demand conditions. However, transferring of jobs
among servers, as well as server reconfigurations from one type of service to another, incur costs.
Thus, any consideration of a dynamic policy must involve a careful calculation of the possible gains and costs.
Noncooperative behaviour. Among the major open problems in the control of networks is the problem
of selfishness of the different network actors. While it is quite easy to address some control issues in
networks, such as congestion, by suggesting protocols that each actor should follow, no assurance can
be given that everybody will obey those protocols. Indeed, it might be preferable
for some actors, with regard to their individual objectives, to cheat the system by adopting a malicious
behaviour. Such selfish behaviours may cause the overall network performance to collapse, and have
therefore to be taken into account. The framework of Game Theory is particularly well suited to the
study of such systems, in which actors (players) have different objectives. The open questions would then
become: how should we design the network control protocols so that Nash equilibria of the
induced game exist, and so that those equilibria are globally satisfactory for the network? Such problems occur
for example when considering peer-to-peer networks, ad-hoc networks or inter-domain routing, and
the tools available to the designer are the introduction of pricing, reputation schemes, security.
Determining the best use of those tools in terms of overall network performance remains a major open
problem in many types of networks.
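To make the notion of a Nash equilibrium concrete, consider a toy two-player congestion game (a hypothetical example constructed for this sketch, not drawn from the workpackage): each player routes one unit of traffic over one of two links whose delay grows with the number of users. The Python sketch below enumerates the pure-strategy Nash equilibria by checking all unilateral deviations.

```python
import itertools

# Hypothetical congestion game: delay of a link as a function of the
# number n of players using it (illustrative cost functions only).
delay = {
    "A": lambda n: 2 * n,   # link A: delay 2 per user
    "B": lambda n: n + 1,   # link B: delay n + 1
}

def cost(profile, player):
    """Delay experienced by `player` under the strategy profile."""
    link = profile[player]
    users = sum(1 for choice in profile if choice == link)
    return delay[link](users)

def is_nash(profile):
    """True if no player can strictly reduce its own delay by a
    unilateral change of link."""
    for player in range(len(profile)):
        for alt in delay:
            deviation = list(profile)
            deviation[player] = alt
            if cost(tuple(deviation), player) < cost(profile, player):
                return False
    return True

equilibria = [p for p in itertools.product(delay, repeat=2)
              if is_nash(p)]
print(equilibria)  # the players split over the two links
```

In this toy game the equilibria are the two profiles in which the players use different links; whether such equilibria are globally satisfactory for the network is exactly the design question raised above.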
8.1.15. Work in progress within the WP
Optimal control and resource allocation. All the teams listed in the Annex (INRIA, ULM, HAIFA, EUT,
UC3M, UCAM, NEWCASTLE and HWU) are working on problems in this area, notably on problems of
admission control, prioritisation, scheduling, routing and bandwidth allocation. Of particular interest
is network control in the face of uncertainty, the dynamic behaviour of networks, the use of control
theory, of game theory and of microeconomic theory in order to optimise resource allocation. This is
combined with more traditional optimisation and programming techniques. These many different
mathematical tools combine to provide powerful techniques for solving the practical problems.
Fluid and diffusion models. Again these models form part of the research of all the teams listed in the
Annex, but notably at HAIFA, INRIA, ULM and HWU. The latter team are particularly concerned with
models for optimising resource allocation in networks with simultaneous resource (e.g. bandwidth)
requirements from different parts of the network, e.g. streaming applications in the Internet. The
INRIA team also works on the relation between two fluid type approaches for the modelling of general
flow control mechanisms on the Internet, in which the transmission rate increases in time at a rate that
depends on the current transmission rate, as long as there is no congestion. When congestion is
detected, the transmission rate decreases by multiplication by some (rate-dependent) factor. This type
of behaviour is compared with various averaged dynamics (such as the well-known formulation by Kelly) in
which the increase and decrease parts of the dynamics are replaced by their average value so that the
TCP dynamics is described using a differential equation where an increase term and a decrease term
always appear. Fluid and diffusion models also form the basis of much research on admission control
(HAIFA, INRIA and other teams).
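The two fluid viewpoints can be contrasted numerically. The sketch below is an illustrative caricature (all parameters INCR, BETA, P, DT are assumptions made for this example, not values from the text): a stochastic additive-increase/multiplicative-decrease rate process is simulated alongside the corresponding averaged dynamics dx/dt = INCR - P(1 - BETA)x^2, in which the increase and decrease terms always appear.

```python
import random

# Illustrative parameters only (not taken from the deliverable).
INCR, BETA, P, DT = 1.0, 0.5, 0.1, 0.01

def aimd(steps):
    """Sawtooth caricature: additive increase at rate INCR; a
    congestion signal arrives with intensity P*x and multiplies the
    rate by BETA (multiplicative decrease)."""
    random.seed(0)
    x, path = 1.0, []
    for _ in range(steps):
        if random.random() < P * x * DT:
            x *= BETA                 # congestion detected: decrease
        else:
            x += INCR * DT            # no congestion: increase
        path.append(x)
    return path

def averaged(steps):
    """Averaged (Kelly-style) dynamics: both terms always present,
    dx/dt = INCR - P * (1 - BETA) * x**2, integrated by Euler steps."""
    x, path = 1.0, []
    for _ in range(steps):
        x += DT * (INCR - P * (1 - BETA) * x * x)
        path.append(x)
    return path

n = 200_000
m_aimd = sum(aimd(n)[n // 2:]) / (n // 2)
m_avg = sum(averaged(n)[n // 2:]) / (n // 2)
# The averaged model settles at sqrt(INCR / (P * (1 - BETA))) ~ 4.47;
# the sawtooth fluctuates around a comparable mean.
print(m_aimd, m_avg)
```

The comparison of the event-driven and averaged descriptions of the same mechanism is precisely the relation between the two fluid-type approaches studied by the INRIA team.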
Dynamic scheduling and control. All the teams listed in the Annex are working on aspects of this. In
particular the work of the HWU team is concerned with dynamic resource allocation. The UC3M
team continues the development of new, effective dynamic index policies for resource allocation in
communication network models. The work of the Newcastle team concentrates on the design,
analysis, evaluation and optimization of dynamic operating policies for multi-server and multi-class
systems. Among the problems being tackled are:
- Optimization of routing and load balancing among multiple heterogeneous servers.
- Optimal allocation of resources in static and dynamic networks.
- New methodology for deriving simple, robust and accurate solutions for heavily loaded
Markov modulated queues.
Noncooperative behaviour. The INRIA team is working on the definition and study of pricing
mechanisms for communication networks. After a deep study of some existing congestion control
protocols like Random Early Discard (RED) and TCP, the current work focuses more precisely on the
introduction of pricing in those protocols, in order to provide quality of service differentiation among
users, while still controlling the reactions of selfish users when faced with such a system.
The INRIA team is also working on the following problems in the application of game theory to networks:
- Routing games in which several service providers have to determine the paths that
their traffic will follow in a common network. The decisions are taken so as to minimize
delays or other costs. The contribution is to allow for multicast, for which the standard
flow conservation rules in routing do not hold: the sum of output flows from a node can be larger
than the sum of input flows, since packets may be replicated in a node so as to send them to
several destinations.
- Game theoretical aspects related to non-cooperative sensors or mobiles which cause
interference to one another, in particular (i) the forwarding problem, in which a packet may
need to be forwarded by several mobiles in order to reach its destination; in that case
mobiles acting selfishly may prevent connectivity, and (ii) the problem of access control.
As previously noted, this workpackage is concerned with mathematical research into optimisation and
control of networks. It is impossible to predict with accuracy just which mathematical advances will
prove to be of greatest practical use. However, progress in the areas of optimisation of admission
control, dynamic resource allocation, control, optimisation and (micro)economic theory has a very high
level of interdependency and simultaneous progress in all these areas is a priority.
8.1.18. Annex : list of participants
University of Ulm (ULM)
University of Haifa (HAIFA)
Eindhoven University of Technology (EUT)
J van Leeuwaarden
University Carlos III of Madrid (UC3M)
University of Cambridge (UCAM)
University of Newcastle (NEWCASTLE)
Heriot-Watt University (HWU)
1.17 WP.JRA.5.5 - Numerical, Simulation, and Analytic Methodologies
Demetres Kouvatsos and Tadeusz Czachórski
Core Network Fixed, Access Network, IP Networking, Mobile Access Network, Metro Network,
Queueing Theory, Queueing Network Models, Performance Evaluation, Simulation, Rare Events,
Markov Chains, Diffusion Approximation, Maximum Entropy, Minimum Relative Entropy, Fluid Flow
Approximation, TCP/IP Models, Petri Nets, Wavelet Transform.
8.1.21. Scope of the Domain
Over recent years a considerable amount of effort has been devoted, both in industry and academia,
towards the performance modelling, evaluation and convergence of multi-service networks of
diverse technology, such as IP, ATM, MPLS, D-WDM, IPO, WLL, xDSL, Metro-WDM, Gigabit Ethernet,
WLAN, all-optical networks, ad-hoc wireless networks as well as GSM, GPRS, UMTS & 4G mobile
systems and beyond. However, many interesting and important performance engineering issues
towards the design and optimisation of convergent network architectures, such as those involving
technology integration, traffic engineering, management and congestion control, need to be
addressed and resolved before a global and wide-scale integrated broadband network infrastructure
can be established for the efficient support of multimedia applications with different quality-of-
service (QoS) guarantees. Of crucial importance is the design, dimensioning and engineering of the
next generation (NG) Internet including the creation of generic evaluation platforms capable of
measuring and validating the performance of heterogeneous networks and multi-services’
interoperability. In this context, robust quantitative models and efficient methodologies are needed,
leading to both credible and cost-effective exact and approximate algorithms towards the theoretical
underpinning of the performance prediction of heterogeneous networks and protocols.
Queueing network models (QNMs) with finite capacity and, thus, blocking are robust evaluation tools
for representing switch architectures of networks of diverse technology and optimising their
performance. Exact and approximate solutions of these models can be obtained by making use of
numerical, simulation and analytic methodologies. These methodologies reflect the special features
of the traffic, the size of packets and the characteristics of associated control mechanisms.
The growing speed of hardware and the development of new programming techniques facilitate the
numerical solution of complex Markov chains (MCs). These are flexible enough to represent
synchronisation constraints, concurrent execution, blocking, pre-emption, state-dependent routing
and complex traffic patterns such as burstiness, short-range dependence, long-range dependence
and self-similarity. However, implementing the related software may suffer from several drawbacks
such as restrictive traffic assumptions, state space explosion, and stiffness and ill-conditioning of the
systems of generated equations. Moreover, the effectiveness of accelerated simulation techniques
for rare events (e.g., packet losses), such as those based on importance sampling (SAMPL) and
restart (RESTART) formalisms, may be limited due to the complexity of the analysis required for the
choice of the importance function. On the other hand, analytic methodologies, such as those based
on diffusion approximation (DA), fluid flow (FF) approximation, maximum entropy (ME) and its
generalisation minimum relative entropy (MRE), as well as the generating function (GF) approach,
based on queueing, asymptotic and information theoretic concepts, offer alternative tools for the
investigation of QNMs at both steady and transient states. In particular, DA generally describes the
dynamics of queues by a system of differential equations with special boundary conditions and with
parameters depending on the time and value of the process. Moreover, ME and its generalisations
capture the varying behaviour of queues and networks by making use of non-linear optimisation
subject to prior state probability distribution estimates and fully decomposable subset and
aggregate mean value constraints.
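The numerical solution of Markov chains can be illustrated on a minimal example. The sketch below (an illustrative exercise, with the M/M/1/K queue and its parameters chosen as assumptions for this example) builds the generator matrix of a small birth-death chain and solves the global balance equations by plain Gaussian elimination, checking the result against the known product-form solution.

```python
def mm1k_generator(lam, mu, K):
    """Generator matrix Q of the M/M/1/K birth-death chain, states 0..K."""
    n = K + 1
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i < K:
            Q[i][i + 1] = lam      # arrival
        if i > 0:
            Q[i][i - 1] = mu       # service completion
        Q[i][i] = -sum(Q[i])       # diagonal: minus the row sum
    return Q

def stationary(Q):
    """Solve pi Q = 0 with sum(pi) = 1 by replacing one balance
    equation with the normalisation condition."""
    n = len(Q)
    A = [[Q[j][i] for j in range(n)] for i in range(n)]  # transpose
    A[-1] = [1.0] * n                   # normalisation row
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting.
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    pi = [0.0] * n                      # back substitution
    for r in range(n - 1, -1, -1):
        s = sum(A[r][k] * pi[k] for k in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

pi = stationary(mm1k_generator(lam=0.8, mu=1.0, K=10))
# Check against the product form pi_k proportional to rho**k.
rho = 0.8
norm = sum(rho ** k for k in range(11))
print(max(abs(pi[k] - rho ** k / norm) for k in range(11)))
```

The state space explosion mentioned above is visible even here: the same direct approach becomes infeasible once the chain has millions of states, which is what motivates the structured and iterative solvers developed in the workpackage.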
The generic scope of the work-package is to devise robust numerical, simulation and analytic
methods for building and solving queueing network models (QNMs) and complex Markov chains
(MCs) with multiple classes and finite capacity. Of major importance are the evolving critical
investigation into validation and cost of numerical, simulation and analytic methodologies for
complex QNMs and the progressive development of software tools for the performance modelling,
design, prediction and congestion control of heterogeneous networks, such as those based on all-IP,
mobile, wireless and all-optical switch architectures.
8.1.22. Mid-term/long Term Evolution in this Area
Trends and key technical evolutions in performance modelling, evaluation, congestion control and
QoS of convergent multi-service networks of diverse technology, such as 3G and 4G mobile
architectures, ad hoc wireless networks and all-optical networks, have a significant impact on current
and future evolution of numerical, simulation and analytic methodologies for QNMs. In particular,
there is a need for further advances into the approximate analysis of arbitrary QNMs, based on
queueing theoretic concepts, advanced numerical methods and simulation techniques, information
theoretic principles of ME and MRE and DA and FF approximations.
Performance Modelling of Wireless 3G and 4G Cell Architectures under a GPS Scheme with Hand-offs
The performance modelling of wireless 3G and 4G cell architectures with new multimedia and hand-off
flows, consisting of IP voice calls, streaming media and data packets, subject to a buffer threshold-
based traffic handling generalized partial sharing (GPS) scheme, requires the development of novel
analytic frameworks for complex QNMs with interacting multiple priority-based traffic classes, finite
buffers and queue time-out periods.
Reservation based MAC Protocols for the Performance Modelling of Wired Ad-Hoc Networks
The design and development of novel MAC-based protocols controlling the orderly and
efficient use of the shared medium in wired ad hoc networks necessitate the development
and quantitative analysis of novel QNMs with server vacations and/or polling.
DOCSIS Protocol and Performance Models
The study of the performance impact of DOCSIS on TCP applications requires the development of
new simulation and analytic solutions of open multiple-class QNMs with upstream contention.
Mobility Management and Optimal Broadcasting Schemes in MANETS
The role of information theory, graph theory and queueing network concepts for location and hand
off management methods as well as optimal broadcasting/multicasting schemes is of vital
importance towards the mobility provision of an Always Best Connected (ABC) Solution in MANETS.
Performance Modelling of Optical Networks
The performance optimisation of optical networks is motivated by the demand for more bandwidth
to support Internet applications such as peer-to-peer (P2P) networking applications and the increase
of users and services. In this context, novel QNMs with multiple classes under different scheduling
disciplines are needed for buffer dimensioning and capturing the impact of congestion at the edges
of the network.
8.1.23. Major Open Problems
Evaluation of the Complexity of Underlying Traffic Processes for QNMs
- The assessment of the impact of more complex bursty and correlated traffic flows on the
performance analysis of QNMs. These include batch renewal, batch Bernoulli, Markov
modulated, short/long range dependence, self-similar, and transient traffic processes for
QNMs. Central to the tractability of the analysis is the knowledge of the circumstances under
which a simpler traffic process may be used to approximate, with a tolerable accuracy, a
more complex process, and thus, facilitate understanding of how the superposition of arrival
processes is shaped deep into the network.
Enhancement of Numerical and Simulation Methods
- The extension of parallel and distributed numerical methods, such as those based on
Arnoldi’s projection and conjugate and bi-conjugate gradients;
- The creation of solvers for transient and steady states, supplemented by a library of C++
classes for an efficient generation of the transition matrices of the underlying MC and the
solution of very large (but possibly structured) MC models with tens of millions of states
using a standard workstation;
- The enhancement of accelerated simulation techniques, such as those based on importance
sampling and restart formalisms, for the efficient investigation of rare events (e.g., packet
losses) in heterogeneous networks.
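As a minimal illustration of importance sampling for rare events, the following Python sketch (an illustrative example with assumed parameters, not a technique description taken from the workpackage) estimates the small tail probability P(X > 4) for a standard normal variable by sampling from a tilted density centred on the event and reweighting with the likelihood ratio.

```python
import math
import random

N = 100_000

# Crude Monte Carlo: the event {X > 4} is so rare (~3.2e-5) that
# almost no samples hit it.
random.seed(42)
crude = sum(random.gauss(0, 1) > 4 for _ in range(N)) / N

# Importance sampling: draw from N(4, 1), where the event is likely,
# and correct with the likelihood ratio f(x)/g(x) = exp(-4x + 8).
random.seed(42)
acc = 0.0
for _ in range(N):
    x = random.gauss(4, 1)              # proposal centred on the event
    if x > 4:
        acc += math.exp(-4 * x + 8)     # likelihood ratio weight
est = acc / N

exact = 0.5 * math.erfc(4 / math.sqrt(2))   # P(N(0,1) > 4)
print(crude, est, exact)
```

With the same number of samples the importance-sampling estimate lies within a few per cent of the exact value, whereas the crude estimate is dominated by noise; the difficulty noted above is that for realistic network models the analogue of the tilted density (the importance function) is hard to choose.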
Generalisation of Analytic Methodologies for Arbitrary QNMs
- The development of more generic computational methods for queueing models based on
branching type processes with or without migration
- The extension of analytic methodologies utilizing DA and FF approximations, the principles of
ME and MRE and the generating functions (GF) approach, based on asymptotic, information
and queueing theoretic concepts, respectively, for the study of both steady state and
transient behaviour of arbitrary QNMs with blocking, service / space priority classes and
buffer management schemes.
- The characterisation of extended product-form approximations, stochastic theoretical /
experimental bounds and efficient algorithms for complex QNMs, subject to prior state
probability estimates and fully decomposable subset and aggregate mean value constraints;
Validation and Applications
- The validation of analytic approximations and related algorithms for complex QNMs against
numerical solutions and simulation results, as well as via actual network measurements.
- The development of software tools and their utility towards innovative applications into the
performance modelling, design, prediction and congestion control of multi-service networks
of diverse technology such as those based on all-IP networks, 3G and 4G mobile cell
architectures, ad hoc wireless networks, and all-optical network switch architectures.
8.1.24. Work in Progress within the WP
Modelling bursty and correlated traffic streams by using indices of dispersion and the
concept of continuous time batch renewal processes (UNIS #55 and UniBrad #53).
Generalised exponential (GE-type) interacting multiple-class queueing and delay models
with multiple servers and queue time-out periods for the performance analysis of hand-off
schemes in 4G cells (UniBrad #53).
IP fragmentation and framing over optical networks and buffer dimensioning of a SONET
over a WDM optical edge node (UniBrad #53).
Performance evaluation (queueing analysis and simulation) of a new distributed reservation
scheme for ad hoc wireless networks, based on the PRMA protocol with random polling (UPC
#42 and UniBrad #53).
The role of information theory in mobility management / location-based services involving
ME / MRE analysis of QNMs subject to graph theoretic decomposition criteria (UPV #45,
UniPassau #56, and UniBrad#53).
Heterogeneous Networks with LRD Traffic Flows: QoS, Analysis and Congestion Control (A
collaborative research project involving UGent #6, UNIS #55, WUT #36, Coritel #30 and
Wireless Ad Hoc Networks: Performance Modelling and Efficient Group Communication (UPV
#45, BTH #49, UPC #42 and UniBrad #53).
Routing Optimisation in Overlay Networks (BTH #49), UPC #42, TLC Group-Pisa #29 and
Characterisation of Internet Traffic in Wireless Networks (UNIS #55, GET #2, UNI-LIN #4 and
Performance analysis of optical packet switching in ring and mesh networks using unslotted
CSMA/CA protocol: studies of burst mode and packet scheduling algorithms, optical packet
format versus traffic and queueing delay characteristics, load balancing versus self-similar
traffic (GET, INT #2).
Queueing models for systems with impatient customers and service times depending on
interarrival times and their applications for optical networks (Univ. of Antwerp, #5).
Analysis techniques, based on probability generating functions, for the analysis of a variety
of discrete-time queueing models (UGent #6).
Bounding techniques in case of very large models, as an alternative to Monte Carlo methods
(INRIA # 10).
Stochastic comparison methods, especially for the transient analysis of Markov chains and
the related performance measures (PRiSM #12 and GET, INT #2, CEMAT #39).
Analytical and simulation models for performance evaluation of communication protocols at
all-optical networks (PRiSM #12, IITiS #35).
Large scale Internet simulation models for evaluation of stability of BGP protocol and for
testing its new versions (PRiSM #12).
Rare event simulation with Monte Carlo methods (TLC #29, INRIA #10, UBAM #21, URM2#31,
TCP flow models including VoIP multiplexed traffic and analysis of the effect of unresponsive
traffic on elastic flows, Markovian characterization of connections (TLC #29, CEMAT #39).
Markov models: methods and software for numerical solution of very large Markov systems
(IITiS #35, UP#56), a new methodology for description of heavily loaded Markov modulated
queues (Newcastle Univ. #57), level crossing ordering of Markov and semi-Markov processes
Wavelet-based modelling of self-similar and long-range dependent traffic (joint research of
UPC-DAC #42, AGH #34, TCL#29, IITiS #35).
Optimization of routing and load balancing among multiple heterogeneous servers, optimal
allocation of resources (Newcastle Univ. #57, UEDIN #59).
Tool Support for Performance Modeling: The Modeling, Specification, and Evaluation
Language (MOSEL-2) (UniPassau #56).
Exact analysis of a G/G/infty queue with a non-Markovian arrival process (INRIA#10).
Study of symmetric polling systems with an arbitrary number of stations (INRIA #10,
Ugent#6). Determining waiting times in 2-station polling systems with correlated arrivals
Performance modeling of self-organizing systems (UniPassau #56).
The assessment of the impact of simpler approximations of more complex bursty and
correlated traffic flows on the performance analysis of QNMs.
The extension of parallel and distributed numerical methods.
The enhancement of accelerated simulation techniques for rare events.
The extension of analytic methodologies and the characterisation of credible product form
approximations for arbitrary QNMs.
Performance modelling applications for convergent multi-service heterogeneous networks
such as those based on all-IP, mobile, wireless and all-optical switch architectures.
8.1.27. Annex: List of Participants
Partner Partner Name Participant Name Participant’s E-mail Address
2 GET, INT Tulin Atmaca firstname.lastname@example.org
2 GET, INT Daniel Popa Daniel.Popa@int-evry.fr
2 GET, INT Gerard Hebuterne Gerard.Hebuterne@int-evry.fr
2 GET, INT Viêt Hùng Nguyen Viet_Hung.Nguyen@int-evry.fr
2 GET, INT Fatih Haciomeroglu Fatih.Haciomeroglu@int-evry.fr
2 GET, INT Mohamad Chaitou Mohamad.Chaitou@int-evry.fr
2 GET, INT Hind Castel Hind.Castel@int-evry.fr
5 University of Antwerp Kathleen Spaey email@example.com
6 Universiteit Gent Sabine Wittevrongel sw@telin.UGent.be
6 Universiteit Gent Herwig Bruneel firstname.lastname@example.org
6 Universiteit Gent Koenraad Laevens email@example.com
6 Universiteit Gent Joris Walraevens firstname.lastname@example.org
6 Universiteit Gent Dieter Fiems email@example.com
7 Technical Univ. of Denmark Vilius Benetis firstname.lastname@example.org
10 INRIA Bruno Tuffin email@example.com
10 INRIA Gerardo Rubino Gerardo.Rubino@irisa.fr
10 INRIA Eitan Altman Eitan.Altman@sophia.inria.fr
12 University of Versailles St Quentin en Yvelines Nihal Pekergin firstname.lastname@example.org
12 University of Versailles St Quentin en Yvelines Anna Busic Ana.Busic@prism.uvsq.fr
20 Universität Ulm Harald Bauer email@example.com
21 University of Bamberg Werner Sandman firstname.lastname@example.org
29 Università di Pisa Michele Pagano email@example.com
29 Università di Pisa Stefano Giordano firstname.lastname@example.org
29 Università di Pisa Davide Adami email@example.com
29 Università di Pisa Raffaello Secchi firstname.lastname@example.org
30 Coritel Roberto Sabella email@example.com
35 IITiS PAN Tadeusz Czachórski firstname.lastname@example.org
35 IITiS PAN Krzysztof Grochla email@example.com
35 IITiS PAN Piotr Pecka firstname.lastname@example.org
35 IITiS PAN Przemyslaw Glomb email@example.com
35 IITiS PAN Joanna Domanska firstname.lastname@example.org
36 WUT Michal Pioro email@example.com
36 WUT Andrzei Bak firstname.lastname@example.org
39 CEMAT Antonio Pacheco email@example.com
39 CEMAT Fátima Ferreira firstname.lastname@example.org
39 CEMAT Helena Ribeiro email@example.com
42 Universitat Politècnica de Catalunya David Rincón Rivera firstname.lastname@example.org
42 Universitat Politècnica de Catalunya David Remondo email@example.com
42 Universitat Politècnica de Catalunya Mari Carmen Doming firstname.lastname@example.org
42 Universitat Politècnica de Catalunya Jordi Martinez Martos email@example.com
43 Telefónica Investigación y Desarrollo SA Manuel Villen-Altamirano firstname.lastname@example.org
49 BTH Adrian Popescu email@example.com
49 BTH Dragos Ilie firstname.lastname@example.org
49 BTH Doru Constantinescu email@example.com
49 BTH David Erman firstname.lastname@example.org
51 Eindhoven University of Technology Jacques Resing email@example.com
53 UniBrad Demetres Kouvatsos D.D.Kouvatsos@scm.brad.ac.uk
53 UniBrad Geyong Min g.min@Bradford.ac.uk
53 UniBrad Rod Fretwell R.J.Fretwell@scm.brad.ac.uk
53 UniBrad Salam Adli Assi S.A.Assi1@Bradford.ac.uk
53 UniBrad Is-Haka Mkwawa I.M.Mkwawa@Bradford.ac.uk
53 UniBrad A.S. Tsokanos A.Tsokanos@Bradford.ac.uk
53 UniBrad S. Kannan S.Kannan@Bradford.ac.uk
53 UniBrad C. Mouchos C.Mouchos@Bradford.ac.uk
53 UniBrad S. Tantos S.Tantos@Bradford.ac.uk
53 UniBrad Y. Li Y.Li5@Bradford.ac.uk
55 UNIS Bao Zhou firstname.lastname@example.org
55 UNIS Dan He email@example.com
55 UNIS Lei Liang firstname.lastname@example.org
55 UNIS Zhili Sun email@example.com
56 University of Passau Patrick Wüchner firstname.lastname@example.org
56 University of Passau Amine Houyou email@example.com
57 University of Newcastle Isi Mitrani firstname.lastname@example.org
9. Socio-Economic Aspects of NGI
1.18 WP.JRA.6.1 - Quality of Service from the user’s perspective
Markus Fiedler and Manos Dramitinos
Access Network, IP networking, Mobile Access Network, Service Overlay
9.1.3. Scope of the domain
This domain focuses on service quality from the user’s point of view. It deals with the matching
between subjective, user-perceived and objective, measurable quality and the feedback of quality
evaluation towards different stakeholders such as users, providers and operators.
Recently, this topic has gained importance due to the emergence of new, experience-oriented
services (such as TV over IP, gaming, etc.) in both wired and wireless environments and the growing
competition between different service providers and network operators. Equipment
manufacturers and providers also pay special attention to Quality of Experience, as reflected,
among other things, by a recent Nokia White Paper, “Quality of Experience (QoE) of mobile
services: Can it be measured and improved?”. QoE captures end-to-end Quality of Service (QoS);
technical factors such as coverage and level of support; and subjective factors such as user
experience and requirements. QoE is directly linked to both user satisfaction and user loyalty. Bad
experience with service availability, service accessibility, service access time, service continuity,
session quality, ease of use and service support leads to churn, i.e. a user leaves a provider and
probably even makes others follow by telling them about the bad experience. Service quality has
thus become a major economic enabler for the wealth of a provider, which is increasingly
interested in Key Performance Indicators (KPIs) reflecting user perception.
The well-recognised problem, however, is to determine those KPIs – after all, they have to be
observable parameters. The translation between user perception and technical QoS parameters
(such as throughput variations, delays, delay jitter, loss) is one major scope of the activities of this
work package. The other scope relates to the generation of feedback towards service providers,
network operators and perhaps even users in case of problems, e.g. when deviations from “normal”
QoS (QoE) values occur. In the classical Internet, this feedback is rather implicit, through problematic
packet delivery; only major incidents such as “link down” are reported by default. Typically available
network management information can hardly be related to end-user perception. Feedback is an
important sensor part of quality assurance: depending on the nature and severity of the –
discovered and quantified – problem, appropriate countermeasures can be taken (even if only a
trust-building notification towards the user).
Standards within the domain cover only a part of this upcoming scenario. ITU-T standards (especially
series E, G and X) address QoS definitions (e.g. E.800, E.860, G.1000), QoS categories (e.g. G.1010)
and the Mean Opinion Score (MOS; e.g. P.800.1) as a means of describing user rating. Most
standards do not quantify QoS, which is left to the user of the standard (e.g. manufacturer or
provider). Exceptions are the E-Model for voice (G.107), the most widely used model (for voice
communication) incorporating user perception, and the Perceptual Evaluation of Speech Quality
(PESQ) (P.862). The ETSI/3GPP standards, on the other hand, cover the mobile world pretty well.
Traffic classes (conversational, streaming, interactive, background) are defined (3GPP TS23.207), and
QoS aspects for popular services in GSM and 3G networks are addressed (ETSI TS 102 250), covering
QoS indicators; QoS parameters and their computation; procedures for QoS measurements;
minimum requirements of QoS measurements; test profiles for benchmarking; and procedures for
statistical calculations. Finally, the IETF working group IPPM (IP Performance Metrics) works on
generic metrics addressing quality, performance and reliability of Internet data delivery services.
These metrics can be evaluated by operators and end users and are targeted at providing unbiased
quantitative measures of performance that are not necessarily related to subjective parameters. In
general, one can say that standards quantitatively addressing end-user QoS are mostly limited to
voice and mobile services.
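The E-Model mentioned above condenses all impairments into a single rating factor R, which is then mapped to an estimated Mean Opinion Score. The conversion below is the standard G.107 formula; the function and variable names are our own, and the sketch omits the computation of R itself.

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 mapping from E-model rating factor R to estimated MOS.

    R aggregates impairments (delay, loss, codec distortion, ...);
    the cubic term models the saturation of user ratings at both ends
    of the scale.
    """
    if r <= 0:
        return 1.0            # worst possible rating
    if r >= 100:
        return 4.5            # MOS saturates below the scale maximum of 5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)
```

For the default configuration (R about 93.2, i.e. no impairments beyond the base assumptions), this yields a MOS of roughly 4.4, the best rating achievable for narrowband telephony.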
9.1.4. Mid-term/long term evolution in this area
In the future, best-effort – a basic principle since the Internet's inception – will not be
good enough from the viewpoint of the end user. The effort should rather be
differentiated according to the task the user is performing and to its valuation. Some
applications (e.g. Skype or videoconferencing) measure and evaluate the end-to-end
performance and adapt themselves to network conditions. However, with the advent of new,
demanding, experience-oriented (and charged) services, warranting a certain subjective
service quality will become more important than it was before. However, this quality needs
to be translated into observable parameters, e.g. through Pseudo-Subjective Quality
Assessment (PSQA), image/video-related quality metrics, or (network) utility functions. Such
translations will gain importance even for non-voice/video-types of applications.
Many customers will be able to choose from different providers. QoS from the user’s point of view
will become more important, which also puts more pressure onto the operators and service
providers. Therefore, Key Performance Indicators will gain more attention for improved service
delivery, but also as a means of competition between providers.
Perceived bandwidth/throughput has become an issue of increasing importance. Better connectivity
(in terms of bandwidth) gives shorter download times or simply access to new services such as
(HD)TVoIP, VoD, etc. The effects of bandwidth sharing, upon which Internet connectivity is built, can
disturb or even disable delay-jitter-critical services in a very efficient way. Intelligent capacity sharing
and allocation in resource-limited environments will be of increased importance.
Even though the Digital Divide w.r.t. broadband access (baseline: 1 Mbps) is likely to diminish in
Europe during the coming years/decade, there will still be a differentiation regarding the usability of
bandwidth-demanding services depending on the available access technology. The 1 Mbps
connectivity, for instance, is far from sufficient for the new type of streaming services mentioned
above. Users should have a clear picture of which services they can use; today’s mostly implicit
feedback (of the type “I clicked on it, but nothing happens”) is far from sufficient. Moreover, users
need to be aware of the trade-off between performance, cost and security which applies to their
service. Clear and easy-to-understand indicators (e.g. traffic-light or progress indicators) are needed.
Services will need to adapt themselves to the perceived network quality. This is taken care of by
certain transport protocols (TCP) or certain applications (Skype), but will need to be developed much
further, amongst others through explicit feedback of network offerings and conditions.
Alternatively, applications (or application portals) will be enabled to select suitable network
connectivity in the sense of “Always Best Connected”. This happens preferably in a seamless way, i.e.
the underlying network can be changed during ongoing transmission.
The user will be relieved of configuration and troubleshooting issues. In general, user interfaces to
(future networked) services will have to be addressed and streamlined. Services should more or less
handle themselves (following the Autonomic Computing principle). A “one-stop” service concept is
envisaged. In general, Network Management will have to take the user perspective into account, i.e.
utilise sensor / monitoring points close to the user.
9.1.5. Major Open Problems
1. Preferably simple formulae to relate key network-level performance parameters and end-user
experience of specific services are needed, beyond what is standardised today. Thus,
special effort needs to be put into upcoming applications such as gaming. These formulae
signal (non-)suitability of a certain network connectivity for a certain service and enable the
choice of suitable network connectivity in case several possibilities exist. They represent a
rather preventive part of quality control.
2. End-to-end QoS / QoE monitoring of parameters that can be related to end-user perception
needs to be considered as a reactive part of quality control. Particular attention should be
put to lightweight solutions that can be used even in resource-scarce environments, e.g. for
mobile services. Another interesting option is self-organised monitoring and quality control,
which simplifies the deployment of the management and assures its scalability.
3. In general, better interplay between the network stack and the application has to be
implemented, as implicit feedback through late or non-delivery of IP packets is no longer
sufficient. Cross-layer design (e.g. the possibility to adapt TCP parameters and to monitor TCP
buffers) might need to be realised through local management interfaces. Such a
management interface, whose use is optional rather than mandatory, can
answer polling requests (i.e. looking up those Key Performance Indicators that are relevant
for an application);
send notifications (interrupts; traps) in case of perceived problems (e.g. if a Key
Performance Indicator crosses a pre-defined threshold).
Requests, responses and notifications can be evaluated both by the local application and by
remote management entities at the provider or operator in order to get a view of user-
perceived service quality (cf. also item 6).
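The polling/notification behaviour of such a local management interface can be sketched as follows. This is a hypothetical illustration only; the class and method names are ours, not drawn from any standard or from the work package itself.

```python
class KpiMonitor:
    """Minimal sketch of a local management interface for KPIs:
    answers polls (request/response) and emits notifications (traps)
    when a watched KPI crosses a pre-defined threshold."""

    def __init__(self):
        self.kpis = {}          # KPI name -> latest observed value
        self.thresholds = {}    # KPI name -> (limit, notification callback)

    def watch(self, name, limit, callback):
        """Register a threshold; callback is invoked on crossing."""
        self.thresholds[name] = (limit, callback)

    def update(self, name, value):
        """Record a new measurement and notify if the threshold is crossed."""
        self.kpis[name] = value
        entry = self.thresholds.get(name)
        if entry is not None and value > entry[0]:
            entry[1](name, value)   # notification / trap

    def poll(self, name):
        """Answer a polling request for the latest KPI value."""
        return self.kpis.get(name)
```

Both the local application and a remote management entity could attach callbacks or poll values through such an interface, which is the dual use described in item 3.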
4. Resource allocation: It is important to study the user-perceived QoS and the user resource
allocation patterns attained in auction-based resource reservation (i.e., the use of the
DiffServ principle close to the user). Indeed, in recent years auctions have been proposed by
many researchers as a solution to the problem of efficiently allocating network resources to
the users who value them the most. Studying the effect of bidding functions in such schemes
and evaluating whether users indeed get service that is in good accordance with their
respective preferences is of prominent importance. To this end, both simulation software
and theoretical models have been developed that depict the impact of bidding functions in
auction-based resource allocation. This is an open problem, since auctions have only recently
received attention regarding network resource allocation. Moreover, outcomes of auctions
deliver valuable feedback for operators and providers (cf. item 6).
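To illustrate the kind of scheme studied in this item (not the ATHENA/HERA mechanisms themselves), the following toy sealed-bid auction allocates a fixed capacity to the highest bidders at a uniform clearing price. The greedy fill rule and the pricing rule are our own deliberate simplifications.

```python
def allocate(capacity, bids):
    """Toy multi-unit sealed-bid auction.

    bids: list of (user, demand, unit_price) tuples.
    Highest unit prices are served first while their demand still fits;
    all winners pay the highest losing bid (uniform clearing price).
    Returns (list of winning users, clearing price).
    """
    order = sorted(bids, key=lambda b: b[2], reverse=True)
    winners, used = [], 0
    for user, demand, price in order:
        if used + demand <= capacity:     # skip bids that no longer fit
            winners.append(user)
            used += demand
    losing = [p for u, d, p in order if u not in winners]
    clearing = max(losing) if losing else 0.0
    return winners, clearing
```

Running such a mechanism with different bidding functions and observing the resulting allocation patterns is exactly the style of simulation study this item refers to.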
5. Seamless communications: An automated network selection aiming at “Always Best
Connectivity” would relieve the user from caring about the compromise between pricing,
performance and security. This function is especially interesting in wireless and mobile
environments due to their shifting transmission conditions.
6. Explicit quality feedback towards provider and operator. The operator needs to know
whether the customer has problems before (or at latest when) the customer calls the
support in order not to compromise the customer’s trust. This can be achieved through
efficient end-to-end monitoring and corresponding on-line analysis, which is directly linked
to the providers’ and operators’ management systems.
7. Intelligent use of COTS technology, as far as available, in order to improve quality – e.g. by
using the Point Coordination Function instead of the Distributed Coordination Function in the
802.11e QoS extensions, or different codecs at the application level.
This also includes optimisation of settings in network elements and network stacks in order
to improve user perception.
9.1.6. Work in progress within the WP
Partners UniVie (3) and JKU (4) work on perceptual QoS issues for new types of
services, with a focus on wireless and mobile networks (relates to item 1 in the list above).
Partner DTU (7) works on valuation of perceived quality calculations for Mobile
Multiservice networks (relates to item 1).
Partner IRISA (10) has specialized in Pseudo-Subjective Quality Assessment, which
is a promising method combining user-perceived service quality and network-level
parameters (item 1), and in the intelligent use of commercial-off-the-shelf techniques (item 7).
Partners UniWue, infosim and BTH (17, 18 and 49) carried out the Specific Joint
Research Project “AutoMon” (http://www3.informatik.uni-wuerzburg.de/research/
automon) during 2005 and are still cooperating on these issues today. AutoMon
denotes a self-organising monitoring architecture, including investigations on user-
perceived QoS measures. In particular the Network/Throughput Utility Function
concept is investigated in collaboration between UniWue (17) and BTH (49) and
implemented by infosim (18). The demonstration tool “QJudge” is available, offering
on-line evaluation of end-to-end throughput reflecting user-perceived QoS, packet and
bit rate traces and possibilities for the customer to rate the perceived quality online
(traffic-lights approach). Feedback happens through a standard Network Management
Interface. Recent dissemination activities include a Dagstuhl seminar on Autonomic
Networking and 19th IRTF NMRG meeting in Stockholm, including a demonstration.
The AutoMon activity relates to items 1, 2, 3, 6 and partly 7.
Partner RC-AUEB (22) focuses on further studying the user-perceived QoS attained
for users who are allocated resources by bidding in a series of auctions. In particular
more utility functions that are also used as bidding functions in the auction
mechanisms ATHENA and HERA (proposed for network resource reservation in 3G
networks) and their properties were studied by means of a theoretical model and
simulations (relates to item 4).
Partner Telenor (33) focuses on Key Performance Indicators (item 1).
Partner BTH (49): Within the national project PIITSA
(http://www.aerotechtelub.se/piitsa), considerations and measurements of application-
/user-perceived QoS have been carried out and reported in various EuroNGI contexts.
The basic idea consists in selecting the right network for the right task. An extension
of this work is currently implemented in collaboration with Partner 51: Two students
from CWI will join BTH during spring in order to work on measurements and analysis
of application-perceived throughput and delays. Other activities include investigations
of the applicability of perceptual image quality metrics for real-time quality
assessment of Motion JPEG2000 (MJ2) video streams over wireless channels. In
particular, a reduced-reference hybrid image quality metric (HIQM) is identified as
suitable for an extension to video applications. It outperforms other known metrics in
terms of required overhead and prediction performance. These activities relate to items
1, 2 and 5.
9.1.7. Planned activities within the WP
Thinking about new types or extensions of transport and/or application-level protocols that
interface directly with the application (combining quality evaluation, shaping,
pricing/auctioning and security negotiations). Such protocols might apply self-organisation,
resource allocation and enable seamless communications. Obviously, they will be a matter
for standardisation efforts. They relate to items 2—6 in the list presented above.
Pricing of bandwidth (projection), taking statistics on different time intervals into account.
Such an activity would concatenate the QoE-related WP.JRA.6.1 to the economy-related
WP.JRA.6.2. It would also provide important input to items 1, 4 and 5.
Joint interdisciplinary projects that can provide interesting solutions to the resource
allocation problem. An option would be to focus on the design of innovative trading
mechanism that by means of pricing provide proper incentives for network usage and result
in good QoS for the users. In case of congestion, a prioritization of these users according to
their willingness to pay, thus expressing their urgency to receive and valuation from receiving
service is needed. There is strong correlation with research on both WP.JRA.6.1 and
WP.JRA.6.2. The aspect of security (WP.JRA.6.3) may also affect the users’ willingness to pay
for the service, as well as the impact of admitting them to the network. (This activity relates
to item 4.)
9.1.8. Annex : list of participants
Explicit contribution by
Manos Dramitinos (RC-AUEB, partner 22)
Indirect contributors (through the work within WP.JRA.6.1)
Helmut Hlavacs (UniVie, partner 3)
Gabriele Kotsis (JKU, partner 4)
Vilius Benetis (DTU, partner 7)
Gerardo Rubino (IRISA, partner 10)
Kurt Tutschku (UniWue, partner 17)
Terje Jensen (Telenor, partner 33)
1.19 WP.JRA.6.2 - Cost- and pricing models for Next Generation Internet
Klaus-D. Hackbarth, Laura Rodriguez. University of Cantabria
IP networking, Service Overlay, cost models, price models, auction methods
9.1.11. Scope of the domain
Current cost and price models in IP-based telecommunications are applied mainly to
determine prices for transit and peering, i.e. the interconnection between
telecommunication service and network providers. They are based mainly on two cost
models, Fully Accounted Cost (FCA) and Long-Run Incremental Cost (LRIC). Additionally,
pricing schemes for IP services under auction mechanisms are gaining importance, and
results from current studies already indicate the richness and flexibility of these
auction models: they incorporate QoS parameters, adapt to different network
environments (the NGI backbone or its different access possibilities, e.g. wireless) and can be
applied to various management tasks such as bandwidth management, scheduling, etc.
The difficulty of NGI service pricing and cost modelling arises from the fact that pricing
schemes and QoS parameters have to be determined on an end-to-end basis, using a
transport tube over different network types (a heterogeneous transport tube). First applications
are currently proposed by the European Regulators Group for Wholesale Internet Bit-Stream
Access services over xDSL, and some National Regulatory Authorities are going to study
corresponding cost models based on a Total Element LRIC (TELRIC) approach.
In addition to xDSL costing and pricing, models for WiMAX, LMDS and UMTS as
alternative or complementary access are studied. In any case, price setting is studied under
both a regulated environment and a free-market one.
The work outlined in the current EURO-NGI is already providing initial results for the
implementation, integration and application of the cost and pricing models in real NGI
network configurations obtained by a corresponding network emulator. This activity will
continue, considering extended models studied under the development of improved network
There are strong relations with fields studied in other work packages of EURO-NGI, which gives
rise to broad integrated and interdisciplinary work. First of all, this work package has strong relations
with the other two work packages inside EURO-NGI action line six, ”Socio-Economical Aspects
of Next Generation Internet”. Additional relations result from the methods and models of network
design and dimensioning treated in action line three, “Optimisation of Protected Multi Layer Next
Generation Internet Networks”, mainly for the purpose of real NGI network emulation. The
current EURO-NGI is going to provide a tool for this emulation, forming a test bed for the evaluation
of different cost and pricing models, e.g. auction models. The next step is the integration and
implementation of the costing and/or pricing rules into the network design and dimensioning process,
to move NGI network design tools towards optimality.
9.1.12. Mid-term/long term evolution in this area
Service costing and pricing in service-integrated networks has to consider the complete connection,
which ranges over different network parts. From this perspective there are, in general, three
(broadband) network parts: the user access (BSAN), the access to the core (BAN) and one or more
core network parts (BCoN). With current technology the BSAN is based mainly on xDSL or cable
modem technology, the BAN takes the form of a layer-2 ATM network, and the core applies
IP/MPLS/SDH-SONET architectures over the three layers. Interconnection between IP networks is
provided, from the technical point of view, by border gateway routers implementing a corresponding
external routing protocol, e.g. BGP, and, from the economic point of view, by pure peering schemes
(either transit or private peering). In contrast, interconnection with legacy networks such as
PSTN/ISDN or 2nd generation mobile networks (GSM) is provided by a scheme of call termination
costs which may be regulated ex ante or ex post.
Due to the change in architecture, current cost models and current price cap calculations are no
longer valid; new cost and pricing schemes have to be considered. An important area is the use of
auction models, which can be widely applied to different cost and pricing schemes. On the other
hand, these models turn out to be hard (NP-complete) in their calculation schemes, and hence
corresponding heuristics must be designed and studied.
Network evolution will bring differentiated BSAN and BAN networks and a more or less
uniform network architecture in the core. The differentiation of technologies in the
subscriber access and backhaul part of the network implies costing problems for which
corresponding models are required. These models have to determine in which areas the use of
one of the different technologies is competitive against one or more others, or serves only as a
complement. A current example is the use of WiMAX for access and backhaul, which may
in future be competitive in cities and complementary in rural zones.
9.1.13. Major Open Problems
Cost and pricing models might apply, in the easiest case, to only a single network element (a link or
part of the equipment of a network node), in a more general way to a network section, and in the
most general case to a complete connection between two entities outside the network (e.g. a host
and a server). In the case of multicast or conference services, even multiple connections have to be
considered.
Even in the easiest cases, cost and pricing models have to consider that network elements are highly
shared by multiple services, and that the cost drivers are composed of a strong fixed-cost part and a
traffic-dependent one, where each service requires proper bandwidth and QoS values at the
application layer and corresponding GoS figures in the Connection Admission Control, resulting in
individual blocking values for each service.
One possible solution is based on determining the cost by considering an equivalent capacity for each
service inside a network element. However, determining this equivalent capacity requires the
solution of a combined loss and waiting model for multiple services, and might result in different
values on distinct network elements due to different stochastic multiplexing effects.
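For a single service with Poisson arrivals offered to a pure-loss network element, the blocking side of such a model reduces to the classical Erlang-B formula. The sketch below derives a per-service "equivalent capacity" as the smallest number of capacity units meeting a target blocking probability; this is our own simplified reading of the approach, not the combined loss-and-waiting model the text calls for.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang-B blocking probability via the numerically stable recursion
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1)),  a = offered load in Erlangs."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def equivalent_capacity(offered_load: float, target_blocking: float) -> int:
    """Smallest integer capacity keeping blocking at or below the GoS target."""
    c = 1
    while erlang_b(c, offered_load) > target_blocking:
        c += 1
    return c
```

For example, one Erlang of offered traffic needs 5 capacity units to keep blocking below 1%; the fixed-cost and traffic-dependent cost parts would then be apportioned per service in proportion to such equivalent capacities.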
Solutions of cost and pricing problems over a network section, or even an end-to-end
connection, can apply the TELRIC approach, which means that the cost of each network
element on the section or connection is studied using the models mentioned before. However,
the hard problem remains of how to account for the fixed-cost part in costing and pricing.
A quite different approach has to be considered when costing and pricing have to be applied to a
predetermined set of service demands which compete, during a certain time interval, for a common
set of network resources. The easiest case results from one type of service competing for the
capacity of one network element (e.g. IP packets for the capacity of an interconnection link).
Corresponding auction models can be used for pricing. However, things get complicated when the
traffic from multiple services competes for the limited capacity of a network element, which is
mainly the case in wireless and mobile access. In a more general case the service requires capacity
not only from one element but over a complete network section (e.g. an interconnection path in
a Tier-2 network). All these cases might be solved by an extension and generalisation of
corresponding auction models.
9.1.14. Work in progress within the WP
The work in progress is based on different types of cost and pricing models, where the partners
consider different network types and parts in combination with different service classes. Concerning
network types, mainly IP-based fixed networks are considered, but also UMTS mobile networks and
wireless networks such as WiMAX; concerning network parts, both the aggregation and the core part
are considered. Additionally, interconnection between different core networks is taken into account.
Regarding service types, both unicast and multicast are considered. A more detailed description from
the different partners follows.
INRIA (partner 10)
Pricing access networks, where congestion is likely to occur, is of particular importance in order to
provide proper QoS characteristics and control demand. This does not seem to be the case for the
backbone, but many ISPs are willing to differentiate services there too, in order to induce
fairness among users but also to ensure better revenue. Partner 10 is pursuing its work on pricing
models in several directions. It has a particular focus on wireless access networks, where congestion
is becoming a problem. CDMA, 802.11 and HSDPA are of particular interest. WiMAX (802.16) is
naturally gaining interest too, since it will probably be a major element very soon. Another class of
applications is inter-domain pricing between ISPs that need to exchange traffic in order to deliver
their customers’ traffic. In this context, some rules have to be defined, and their economic properties
analysed. Additionally, partner 10 is becoming more interested in particular classes of models such as
stochastic games, evolutionary games – which deal with socially conditioned individuals instead
of fully rational ones (a much more valid assumption) – and learning in games.
PRiSM (partner 12)
Today the inter-domain market plays out on two different time scales: a long-term time scale (in
terms of months or even years) where economic contracts are negotiated, and a short-term one (at a
timescale of seconds) where routing decisions are made based on the concluded business relationships. Some
recent works propose to couple those two processes more tightly by enabling a more dynamic
interaction between transit price propositions and routing decisions. In order to capture the dynamic
aspect of such interaction, a repeated game approach seems appropriate. Indeed, the
autonomous systems which propose to route the traffic to a common destination are modelled as
players that compete for the traffic by adopting different price-fixing strategies. The repeated game
framework makes it possible to capture how the threat of future behaviour can impact the current actions of
players. In our proposal, we still consider an interaction between transit price negotiation and
routing decision process. The bilateral economic nature of the Internet is still maintained by a
cascade-like pricing where each agent negotiates low prices only with its immediate neighbour. The
objective of each provider is clearly to maximize its own benefit by proposing attractive transit
prices, but also by itself choosing the cheapest providers. We focus on an adequate repeated game
approach to analyse the problem.
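The threat mechanism at the heart of such a repeated game can be sketched with a toy Bertrand-style model under a grim-trigger strategy: both providers quote a tacitly agreed high transit price until one undercuts, after which the other reverts to the competitive low price for good. All names and rules here are our own simplification, not the actual PRiSM model.

```python
def repeated_price_game(rounds, high, low, deviate_at=None):
    """Toy repeated transit-pricing game between two providers A and B.

    A quotes 'high' until round `deviate_at`, then undercuts with 'low'.
    B quotes 'high' until it observes an undercut, then punishes by
    quoting 'low' forever (grim trigger). Returns both price histories.
    """
    prices_a, prices_b, punished = [], [], False
    for t in range(rounds):
        a = low if (deviate_at is not None and t >= deviate_at) else high
        b = low if punished else high
        prices_a.append(a)
        prices_b.append(b)
        if a < b:
            punished = True   # the threat of future punishment shapes current actions
    return prices_a, prices_b
```

With no deviation both providers sustain the high price indefinitely; a deviation at round 2 buys the deviator one round of cheaper traffic before the market collapses to the competitive price, which is exactly the trade-off the repeated game framework captures.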
RC-AUEB (partner 22)
An issue of great importance that is expected to affect both pricing and user-perceived QoS is
multicast. This technology enables the efficient use of the various networks, since only one copy of
data traverses the network towards the recipients. This is extremely important for wireless networks
where the spectrum of the radio access network is scarce. Therefore, when multicast is enabled,
more users can be admitted to the network and receive high-quality service. This is why there has
been high interest over the past years in developing multicast/broadcast technologies, such as WiMAX
for wireless and IMS/MBMS for UMTS networks. An interesting area of research is therefore the design
of auction-based resource allocation schemes that take into account the fact that both unicast and
multicast sessions compete for service.
In addition, as many alternative network technologies emerge, it is interesting to study the behaviour
of users who can receive service over at least two alternative network paths. These paths may
belong to different network technologies, e.g. fibre or wireless access, and thus be valued
differently by users. It is interesting to study how users would express their demand in such a
context. A typical case would be participating in two auctions, one for each path.
URM2 (Partner 31)
The introduction of competition in telecommunications and the evolution of a number of
technological platforms have increased the number of operators offering telecommunications
services. Due to the limited geographical reach of each of them, a significant portion of the traffic
spans multiple operators and therefore requires interconnection agreements and the ensuing money
transfers associated with the traffic exchange, subject to the pricing rules defined in the
interconnection agreements. In most cases the interconnecting operators have more than one choice
of interconnection partner. Interconnection pricing schemes therefore play a very relevant role in
the economic equilibrium of an operator. Partner 31 is pursuing a research line on this theme,
investigating the feasibility and performance of dynamic pricing schemes in this context, namely
through auctions. The aim is to arrive at a pricing scheme that, in addition to the desired economic
properties, is also computationally feasible for real-time operation. The class of applications, at first
limited to the interconnection of ISPs, could extend to a number of other contexts where new
operators are entering specific market segments, e.g. WiFi operators or MVNOs.
University of Cantabria UC (partner 44)
Partner 44 is currently working on cost and pricing models for broadband access and core networks
considering different technologies. The main application motivating this work results from cost
studies for national and European telecommunication regulation for the so-called (Broadband)
Bitstream Access Services (BAS), defined by the European Regulators Group, which brings together
the different national regulators of the EC. These BAS are defined as wholesale services provided by
a dominant network operator to an ISP with limited infrastructure of its own. The main motivation
is to ensure open market access for the different players in the production and transport of IP-based
broadband services. Partner 44 is currently studying the influence of changes in the network
architecture, mainly in the BSAN and the BAS, and is going to provide a model and a corresponding
tool which allows emulating the BSAN, the BAS and even the overlying core part of a national
broadband network forming the kernel of the future NGI. This network emulator allows
incorporating different cost and pricing models, e.g. auction-based ones, and studying their
influence on the corresponding cost and pricing schemes. Open problems to be solved result from
the relation between GoS and QoS parameter values for the different types of services and the
corresponding cost models, considering the integration benefit resulting from the common
production of different types of services in the network elements.
From the description of the current work it follows that, in the next step, a stronger differentiation
of services should be taken into account, with corresponding QoS parameters. A common
classification yields real-time, streaming, data and best-effort classes.
Service performance parameters are normally required on an end-to-end basis. Hence, in the next
step, the correlation between costs and service classes in the different network parts has to be
studied, with the objective of providing an optimal QoS budget distribution over the different
network parts and of calculating end-to-end cost and pricing schemes.
Finally, the integration of different network parts with different network types has to be studied,
mainly considering the QoS budget and corresponding models for the determination of cost and
pricing schemes. Mainly fixed and fixed-wireless integration has to be considered.
The overall objective resulting from these steps is to obtain cost and pricing models for different
classes and types of services which use heterogeneous connections, where the corresponding traffic
is routed over different network parts and different network architectures. These models have clear
applications in determining costs and prices for services within a network domain and between
different network domains. Examples are ex ante and ex post regulation policies, wholesale pricing
schemes applied by network providers to service providers, etc.
9.1.17. Annex: list of participants
Partner Nr. Partner Inst. Name e-mail
10 INRIA Patrick Maillé email@example.com
10 INRIA Bruno Tuffin firstname.lastname@example.org
10 INRIA Yezekael Hayel email@example.com
12 PRISM Dominique Barth firstname.lastname@example.org
12 PRISM Loubna Echabbi Loubna.Echabbi@loria.fr
22 AUEB Marina Bitsaki email@example.com
22 AUEB George D. Stamoulis firstname.lastname@example.org
22 AUEB Costas Courcoubetis email@example.com
31 URM2 Maurizio Naldi firstname.lastname@example.org
31 URM2 Giuseppe D’Acquisto email@example.com
44 UC-Spain Klaus D. Hackbarth firstname.lastname@example.org
44 UC-Spain Laura Rodríguez de Lope email@example.com
1.20 WP.JRA.6.3 - Security Issues
Access Network, IP Networking, Mobile Access Network, Service Overlay
9.1.20. Scope of the domain
An increasing number of users are able to connect to the global Internet. Thus, they can access a
huge variety of services, but at the same time they are exposed to global security risks in an
increasingly hostile environment. There is therefore an obvious risk in using the common network
for increasingly critical processes (e-commerce, e-health, etc.), while the ever-increasing
possibilities of accessing the Internet, e.g. through wireless or mobile networks, pose new challenges:
keeping the level of security high enough and justifying the trust that users put in ICT systems.
Security breaches – whether real or just rumoured – might keep people from using the Internet and
its services. In the short run, this implies reduced income for application, service and network
providers; in the long run, the Internet's basic reputation might be severely damaged. Protection
against misuse or attacks is therefore a vital objective in order to establish, maintain and strengthen
the trust of users and service providers in IT technology. Against this background, strong, sustainable
and individualized network and computer security architectures have to be developed and deployed.
Security services and protection levels are to be defined, and (adaptive) multi-lateral security
concepts are to be designed.
Security aspects are manifold; they depend on the role of a communication participant (user, service
provider, network operator, equipment vendor, “third parties” such as banks, or trusted authorities), on
the management and organization of security services, and on the technical methods used to support
security objectives (technical security). Security levels are relative and depend both on the strength
of the measures protecting the information assets and on the strength of the attacker.
This work package aims at establishing security (linked to quality and pricing) as an integral part of
Next Generation Internet services (A4C = authentication, authorization, accounting, auditing and
charging). Users should become more aware of risks and should be provided with integrated,
adequate, negotiable and easy-to-manage security facilities. In order to decrease the risk that
security is dropped by users struggling with tiresome security settings, the whole security process
should be streamlined and integrated, with security measures being adequate in terms of resource
usage.
Standards in this domain mainly refer to basic security concepts and have been issued by all major
standardization organizations. A basic security architecture for distributed systems is found in the
international standard ISO 7498-2:1989 Information processing systems – Open Systems
Interconnection – Basic Reference Model – Part 2: Security Architecture. The standard refers to the
OSI-model and aims at establishing “a framework for coordinating the development of existing and
future standards for the interconnection of systems”. The ITU X.509 certificates are used in a wide
variety of applications, providing a means to assure identity, secure communications, encrypt data,
sign data (such as other certificates), and deliver authorization information based on these capabilities.
The standard is also available as Internet RFC 3280. Other IETF standards refer to AAA
(authentication, authorization and accounting), site security and Mobile IP. The 3GPP organization
specifies mutual authentication (user, provider) and network security, with strong encryption and
authentication of user data, voice and signalling data (3GPP TS 21.133), with a focus on mobile
networks. The Public Key Cryptography Standards (PKCS) are specifications developed by RSA
Security in conjunction with vendors worldwide for the purpose of accelerating the deployment of
public-key cryptography.
9.1.21. Mid-term/long term evolution in this area
The trend towards universal connectivity in terms of “anything-over-IP-over-anything” will most
likely continue. Depending on the access scenario, different levels of risk, cost and configuration
effort incurred by end users will exist in the mid-term.
For instance, in IP Multimedia Subsystem (IMS)-based Next Generation Networks (NGN),
particular efforts to provide security are undertaken. In such scenarios, TriplePlay services such as
telephony and TV are provided within a “walled garden” (i.e. the domain of the network provider),
which significantly lowers the risk of malicious access from outside. Through the third TriplePlay
service, traditional Internet access, users may nevertheless be exposed to attacks from outside. Such
risks can be reduced by operating firewalls and anti-virus software; these extra services are usually
charged to the end user.
Wireless LAN hotspots in public areas (e.g. airports, major hotels, conference centres) have
become very popular. Still, the majority of those access points provide AAA mainly for charging
purposes, while forgoing encryption, as the configuration procedure would become even more
tedious if network keys had to be distributed. Virtual Private Network (VPN) connectivity also
usually requires a lot of (manual) configuration effort. There is an obvious risk that impatient users
would rather forgo security measures than forgo network connectivity. Yet another problem
domain is the unconscious usage of wireless devices, especially in ad-hoc communication
scenarios. For instance, a Bluetooth-equipped phone or computer can easily be infected when it is
discovered and sent a file containing a virus that is not detected by the locally installed antivirus
software. In general, users have to be prepared for someone else finding a way to “enter” their
systems, which makes systems that protect access to stored information, and Digital Rights
Management strategies, unavoidable in some cases.
During recent years, peer-to-peer applications have attracted an immense number of users. While
mostly used to distribute files (music, movies, software, etc.), these self-organizing applications also
have the potential to distribute malicious software. Owners of precious content need to protect it,
for instance by using Digital Rights Management solutions. On the other hand, their self-organization
potential might also be used to implement security measures and services in a distributed and
scalable manner.
In order to provide for future secure communications granting confidentiality, integrity,
accountability, availability, accessibility and anonymity, Internet-based services, their users and their
devices will need to be sheltered much better in the future than is the case today. Consciousness of
the risks is important, but knowledge alone cannot prevent users from choosing inappropriate levels
of security. Users, their processes and their devices need to be sheltered by trustworthy, provable
security services to a much larger extent than is the case today. This holds in particular for mission-
critical services, e.g. e-health, e-voting, remote control, e-business, etc., as security problems might
turn into trust problems between the corresponding parties (e.g. citizens and authorities). Such a
security service will need to address A5 (= authentication, authorization, accounting, anonymity
and auditing). Basically, security has to become yet another Quality of Service (QoS) parameter. As
security measures impact performance and can be costly in terms of resource usage and of both
capital and operational expenditures, the type of security measures and their corresponding
strengths have to be carefully chosen. This imposes the need for quantitative assessment of risks
and security offerings in order to tailor the security service to the task to be performed. Given the
multitude of access possibilities, seamless security across different domains will become vital for
successful security provisioning.
The concept of “Always Best Security” will become very important, amongst others offering adaptive
and in particular Lightweight Security solutions to be used when both risks and resources are limited.
Much work has been put into the proposal and implementation of new and stronger security
solutions. However, less effort has been directed towards the actual act of determining a sufficient
security level with respect to different criteria. There are concerns when strong security is
unnecessarily used, since the load on the processor and the power consumption are increased (even
though we believe that the processing overhead is of greater concern than the power consumption).
By offering security based on a need determined by a decision model, it is possible to optimize
security so as to reduce the cost. Performance and efficiency issues are particularly important in
environments with constrained capacity. A further step would involve the development of a type of
cognitive security, not in the psychological sense, but as a notion of cognitive technology. Cognition
refers to the act of processing or knowing, including awareness, recognition, judgment, and
reasoning. In this specific case, the envisioned system would be able to sense the environment, learn
how to handle threats and then take countermeasures to improve the overall security.
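A minimal sketch of the kind of decision model described here, with invented levels, scores and thresholds (nothing below is a concrete proposal from the WP): pick the cheapest security level whose protection covers the assessed risk, subject to the device's resource budget.

```python
# Illustration only: "Always Best Security" as a simple decision rule.
# Levels, protection scores and resource costs are made up.

# (name, protection score 0-10, relative CPU/energy cost 0-10)
LEVELS = [
    ("none",        0,  0),
    ("lightweight", 4,  2),   # e.g. per-packet MACs, short keys
    ("standard",    7,  5),   # e.g. default TLS ciphers
    ("strong",     10,  9),   # e.g. mutual auth + full encryption
]

def select_level(risk, resource_budget):
    """risk: assessed threat level (0-10); resource_budget: maximum
    acceptable cost (0-10). Returns the cheapest level whose protection
    covers the risk, or the best affordable level otherwise."""
    affordable = [l for l in LEVELS if l[2] <= resource_budget]
    adequate = [l for l in affordable if l[1] >= risk]
    if adequate:
        return min(adequate, key=lambda l: l[2])[0]   # cheapest adequate
    return max(affordable, key=lambda l: l[1])[0]     # best we can afford

print(select_level(risk=3, resource_budget=10))  # lightweight suffices
print(select_level(risk=9, resource_budget=4))   # strong is unaffordable
```

A real model would of course formalize many more objective and subjective criteria, as the text notes; a reactive variant would re-run the selection whenever monitoring reports a changed threat level.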
Besides preventive methods such as pre-selecting security measures and levels for certain types of
task, reactive methods need to be enabled by continuous monitoring of the security conditions and – if
necessary – adapting the security level (e.g. by switching from lightweight to more advanced access
control) and/or issuing alarms.
Last but not least, the user will have to be relieved from the need to “puzzle” together a security
solution balancing risk level and price-worthiness. This will decrease the “human risk factor”. On the
other hand, the user should also be warned about risks and selected countermeasures to a much
broader extent than is the case today. The common goal of all these approaches is to provide the
user with trust in the communication, such that the service of interest is felt to be a utility and not a
burden.
9.1.22. Major Open Problems
Given the broad realm of usage, networks and risk scenarios and the fact that security is an essential
part of network design and management, a huge field of potential activities opens up. In the
following, a collection of relevant research topics is presented.
1. Quantitative security models are important to assess levels of security and to relate them to
perceived service quality (WP.JRA.6.1) and pricing (WP.JRA.6.2) aspects. Quantitative
measures of security constitute an important aspect of Quality of Service (QoS) for end users,
and additional work in this field is needed. The challenge is to define useful methods of
determining quantitative values in the security field, which are valuable and serve as an
acceptable QoS parameter. Quantitative security is also related to the decision making area
and it is of importance when selecting the adequate authentication level. Performance
evaluation of security protocols is required in order to find suitable compromises between
performance (WP.JRA.6.1) and security.
2. Provable secure protocols and services need to be designed especially for mission-critical
services and for hostile environments such as wireless networks. To this end, the
vulnerabilities of services need to be assessed. Security needs to be evaluated, amongst
others by using formal methods. Access to anonymity (see below) must be either controlled
or revocation of anonymous communications must be provided. The integration of
anonymity, pseudonymity, authentication, integrity, accountability, legitimate usage, access
control, key management, etc. represents an important task.
3. Lightweight Security: Due to the nature of small and constrained mobile devices (such as
PDAs and cellular phones) it is necessary to consider the development of lightweight security
mechanisms. These mechanisms should contribute to the understanding of the trade-off
between performance and security. The developed mechanisms have to be resource
efficient, able to handle per-packet security, robust in terms of handling packet loss, payload-
independent and adjustable to a changing environment. Even though the mechanisms
are lightweight, the probability of detecting an ongoing attack needs to be high, which makes
it possible to take countermeasures based on a predefined security policy.
4. Always Best Security. The complexity of several authentication mechanisms, coupled with the
need to integrate authentication into environments with limited resources, can make the
selection of an adequate authentication level difficult. Therefore, it is necessary to develop a
decision-making model which takes into account a wide range of factors, including objective
and subjective considerations, before selecting an adequate authentication level. It has to be
a flexible model capable of formalizing quantitative and qualitative considerations of defined
criteria with regard to QoS and security.
5. Monitoring (e.g. Intrusion Detection) needs to be improved. The sensor part of these
activities is related to performance monitoring (cf. WP.JRA.6.1).
6. The management of security policies and services needs to be streamlined. For instance,
secure user mobility in multi-provider and multiple-terminal environments featuring
changing points of attachment in unsafe environments is an important issue today. This
comprises also security of mobile devices.
7. Due to the existence of universal access possibilities and well-developed services for
spreading content, the protection of (distributedly) stored data as well as Digital Rights
Management needs to be addressed.
8. Anonymity aims at hiding the user’s identity completely, which is e.g. an issue for electronic
voting. Pseudonymity hides the user’s real identity behind some virtual identity, which can be
important when storing sensitive information regarding specific users. Anonymity is fragile in
the (mobile) Internet society, as it can easily be abused; this implies the need for research on
revocable anonymity and group authentication.
9. Cryptography is the main security mechanism for the implementation of network security
services. As a prerequisite for cryptography, the Key Management System (KMS) becomes
for many system designers and deployers the major challenge on the way to pervasive
security applications over the Internet. More specifically, assuming the security of the
algorithms depends only on the secrecy of the keys (a.k.a. Kerckhoffs' principle), the
underlying key management system itself must be protected by highly secure services in a
distributed environment.
10. Security in mobile systems: There is a need to study and identify authentication and data
protection mechanisms for mobile multimedia users in the framework of beyond-3G mobile
networks.
11. Security and P2P networks. P2P introduces a whole new class of security issues: (a) security
within the virtual P2P networks themselves such as trust management due to lack of
centralized trusted entity and (b) security with respect to the environment they ride on such
as viruses, trojans, and backdoor access. On the other hand, these decentralized and scalable
algorithms can as well help to establish new types of security solutions.
12. Accounting. Authenticating and counting packets is critical to guarantee accounting
correctness and prevent resource stealing. Usage-based accounting requires that the packets
are examined and that the cost in terms of accounting overhead is reduced. Therefore,
lightweight authentication protocols are effective for secure usage-based accounting, which
provides a link towards WP.JRA.6.2.
13. Economy of security. The Internet is a large-scale medium, where every security project
requires an enormous global outlay and even the smallest individual threat can result in
giant total losses. Thus, when planning any security project, this scale effect must be taken
into account.
14. Scalable security. This approach unifies Always Best Security and the economy of security
with the performance of the network (expressed, e.g., as QoS). It makes it possible to deal,
in a homogeneous way, with problems of small and large scale, finding the balance between
the protection level, the system performance and the required costs.
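As one concrete illustration of the lightweight authentication mentioned in problem 12, the following sketch (our own, with a hypothetical session key) shows how truncated per-packet HMACs let a provider bill only authentic packets, preventing resource stealing at low per-packet overhead.

```python
# Illustration only: per-packet HMAC tags for usage-based accounting.
# The key name and tag length are assumptions, not a proposed standard.
import hmac
import hashlib

KEY = b"shared-secret-from-AAA-exchange"   # hypothetical session key

def tag(packet: bytes) -> bytes:
    # A truncated HMAC keeps per-packet overhead low (8 bytes here).
    return hmac.new(KEY, packet, hashlib.sha256).digest()[:8]

def account(packets_with_tags):
    """Count only authentic packets towards the user's bill."""
    billed = 0
    for pkt, t in packets_with_tags:
        if hmac.compare_digest(tag(pkt), t):
            billed += len(pkt)
    return billed

good = (b"payload-1", tag(b"payload-1"))
forged = (b"payload-2", b"\x00" * 8)       # attacker-injected packet
print(account([good, forged]))             # only the authentic 9 bytes
```

The trade-off problems 3 and 12 describe show up directly in the tag length: shorter tags reduce the accounting overhead per packet but raise the probability that a forged packet is accepted.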
9.1.23. Work in progress within the WP
Partner CoRiTel (30) is leading a consortium that amongst others covers partners 32, 36, 38,
49 and 56. This consortium currently prepares a project proposal regarding NGI Security
Aspects by Integrating Peer-to-Peer into Beyond-3G IP Multimedia Subsystem (IMS),
amongst others for mission-critical systems.
Partner NTNU (32) addresses the following items
o Network monitoring
o Cryptographic primitives
o Secure protocols
o Security for mobile users
o LAN security
o Security evaluation
o Formal methods
o Quantitative modeling
Partner WUT (36) addresses the following items
o Cryptographic protocols
o Scalable security with elements of economy of security
o Anonymous authentication of mobile agents
o Anonymous group authentication
o Authentication in P2P networks and mobile ad-hoc networks (MANETs)
o Protection of mobile agents against traffic analysis
o Protection of mobile agents integrity by a method using zero knowledge protocols
o A new protocol of data integrity and authenticity for VoIP, using digital
watermarking, was proposed
Partner IT (38) focuses on four main areas:
o Electronic voting protocols for the Internet
o Lightweight PKI infrastructures for supporting authentication of people in multi-
authority roaming environments
o Security architectures for LANs
o Distributed intrusion detection systems
Partner UPB (40) focuses on
o Key distribution in mobile systems
Partner BTH (49) deals with
o Tradeoff between performance and security
o Lightweight security
o Always Best Security and related decision making
o Security in wireless and mobile systems
o Security in peer-to-peer systems.
Partner UP (56) focuses on
o Security in mobile systems
o Trust and reputation mechanisms for overlay networks
o The effect of reputation in managed networks
o Controlled anonymity
o Security and decentralisation (P2P)
o Reputation Management for Critical Infrastructures
A consortium currently prepares a project proposal regarding NGI Security Aspects by
Integrating Peer-to-Peer into Beyond-3G IP Multimedia Subsystem (IMS), amongst others for
mission-critical systems.
Choice of security levels for different services and security-related QoS in combination with
Security and self-organisation
Security – QoS – bandwidth – costing. Concatenates all WP.JRA.6.*
Economy and security. Concatenates WP.JRA.6.2 and WP.JRA.6.3.
Always Best Security / Adaptive security (WP.JRA.6.*)
9.1.25. Annex: list of participants
Direct contributions to this roadmap:
Svein Knapskog, NTNU (32)
Zbigniew Kotulski (36)
Henric Johnson, BTH (49)
Jens Oberender (56)
Indirect contributions (through reports, deliverables):
Andreas Gutscher (15)
Sebastian Kiesel (15)
Cristiano Paris (31)
André Zúquete (36)
Aneta Zwierko (36)
Radu Lupu (40)
Hermann de Meer (56)