
                        UNIVERSITÀ DEGLI STUDI DI ROMA
                        “TOR VERGATA”




                       FACULTY OF ENGINEERING
             Degree Course in Telecommunications Engineering

                       MASTER'S DEGREE THESIS

                     SOLUTIONS TO ENHANCE
                      THE INTERNET WITH AN
                  INFORMATION CENTRIC MODEL,
                     EXPLOITING OPENFLOW

Candidate:
Giorgio Mazza
Student ID 0151841
                                                                       Supervisor:
                                                  Prof. Nicola Blefari Melazzi
                                                                      Co-supervisor:
                                                       Prof. Stefano Salsano


                             28 February 2012
Solutions to enhance the Internet with an information centric model, exploiting Openflow




                                                                                           To You,
                                                                      who, more than anyone else,
                                                                          would have wanted to be here.








Contents
Introduction                                                                                                              1

1 The Information Centric Networking                                                                                      3
     1.1 A shift of paradigm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      4
     1.2 Principal aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      6
            1.2.1 Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      6
            1.2.2 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .      7
            1.2.3 Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     9
     1.3 Main advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        11
     1.4 Development solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            14
     1.5 The need for experimental facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               16

2 OpenFlow and the OFELIA project                                                                                         18
     2.1 The OFELIA project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         19
     2.2 Experimenting with OFELIA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              20
     2.3 OpenFlow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .    22
     2.4 OpenFlow working mechanisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     24
            2.4.1 The Data Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          24
            2.4.2 The Control Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           28
     2.5 The NOX controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        29
     2.6 Slicing the network with FlowVisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 31

3 Enhancing OpenFlow to support Information Centric Networking 34
     3.1 The EXOTIC proposal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
     3.2 The evolutionary approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
     3.3 The CONET framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
           3.3.1 Model of operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
           3.3.2 Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
           3.3.3 Packet structure and protocol stack . . . . . . . . . . . . . . . . . . . . . . . . . 43
           3.3.4 IPv4 and IPv6 CONET Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
     3.4 Analysis of CONET functionalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

4 Analysis for the realization of a real CONET scenario                                                             49
     4.1 Analysis of operations performed by Border Nodes and Internal Nodes . . . . . 50
           4.1.1 Border Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
           4.1.2 Internal Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
     4.2 Analysis of OpenFlow features for CONET support . . . . . . . . . . . . . . . . . . . . 54
           4.2.1 A long term approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
     4.3 Short term approach for CONET support in OpenFlow . . . . . . . . . . . . . . . . . . 57
           4.3.1 Analysis of MPLS tag solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59


              4.3.2 Analysis of IP protocol and Ports tag solution . . . . . . . . . . . . . . . . . . 62
              4.3.3 Analysis of MAC address tag solution . . . . . . . . . . . . . . . . . . . . . . . . 62

5 The implementation of a CONET solution                                                                                    63
     5.1 Basic settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
     5.2 The network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
     5.3 The Open vSwitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
     5.4 NOX settings and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
           5.4.1 The first controller implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
           5.4.2 The handling of ICMP messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
     5.5 NOX enhancements to support the CONET architecture . . . . . . . . . . . . . . . . 81
           5.5.1 The handling of CONET traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
           5.5.2 Example of usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
     5.6 The communication with the Cache Server . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
           5.6.1 The Cache Server application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
           5.6.2 The handling of JSON messages . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
           5.6.3 Example of usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
           5.6.4 Applications to improve the CONET . . . . . . . . . . . . . . . . . . . . . . . . . 103

Final considerations                                                                                                    108

Grazie a...                                                                                                             109

List of Figures                                                                                                         111

Listings                                                                                                                112

Bibliography                                                                                                            113








Introduction

In recent years, there have been several proposals and different solutions related to the
evolution of the existing Internet architecture.
Among them, the most widely accepted and extensively debated is the concept of
Information Centric Networking (ICN), which foresees the transition from the existing
host-centric model to a solution based on information objects and their properties. Since
the majority of Internet usage nowadays consists of content retrieval and service access,
ICN holds that the current Internet architecture hardly fits this use and needs to evolve
towards a solution that puts contents and information objects at the centre, decoupling
them from their location. Although this shift is conceptually clear and simple, it raises
several questions and opens many aspects that have to be taken into account while
developing this new architectural solution.
        Most of the advantages and features provided by ICN, in fact, result from the shift
from the existing host-centric networking paradigm to a Content Centric one, which needs
to be developed and extensively tested, even though existing network facilities are not
suitable for this purpose. Because of this, Information Centric Networking is strictly
related to the topics of network virtualization and programmability and to the concept of
Software Defined Networks.
In recent years, some relevant projects have been launched within the United States and
the European Union with the goal of realizing network research infrastructures dedicated
to experimentation. Among them, the European OFELIA project aims to realize a Europe-
wide research facility, based on OpenFlow technology, for experimenting with new network
architectures and distributed systems. Therefore, OFELIA represents the ideal testbed in
which to experiment with new networking solutions able to offer content centric
functionalities, with the purpose of moving the Internet to a new architecture.
        Although this new model may seem realizable only by starting from scratch, it is
worth underlining that the existing IP model, on which the Internet is based, is too widely
deployed to be abandoned at once in order to migrate to a new content centric solution.
The EXOTIC proposal, of which this work is part, takes these considerations into account
and proposes a new evolutionary approach, with the goal of developing Content Centric
functionalities while keeping the existing networking structure. In particular, it proposes to
adopt a framework for Information Centric Networking called CONET (COntent NETwork),
whose characteristics make it suitable for deployment in accordance with the Software
Defined Networking approach.

My thesis work goes in that direction, with the purpose of providing a virtualized solution
for Information Centric Networking, following CONET principles. In particular, exploiting
OpenFlow technology, I realized the control logic and the data plane of a small network,
able to support Content Centric features and to provide caching functionalities.
In the first chapter I present in detail the Information Centric Networking paradigm,
highlighting its key functionalities and outlining the main advantages that the adoption of
this architecture could offer.
The second chapter is focused on presenting the OFELIA project and the OpenFlow
switching technology, which are fundamental for network experimentation and
programmability and which have been extensively used during this work.
In the third chapter, I describe the evolutionary approach on which the CONET
architecture is based, and then I present all the characteristics of the project, showing the
solutions it provides in response to the main ICN questions.
After that, I analyse possible solutions and enhancements for the project, investigating all
the operations and functionalities that need to be performed in order to support Content
Centric features within an OpenFlow network.
Besides this long term view, I also suggest some solutions that can be implemented in
order to realize a working prototype of a CONET network in the short term and, in the
fourth chapter, I present a detailed analysis of the operations that would be needed.
In the final chapter, I show how it is possible to realize the envisaged scenario, presenting
the OpenFlow network I created and the enhancements needed to realize a networking
solution able to support CONET functionalities.
Moreover, at the end of this work, I also describe and implement a communication model
that involves a Cache Server, with the purpose of enhancing the previously presented
CONET network with caching functionalities.








Chapter 1

The Information Centric Networking
In the 50 years since the creation of packet networking, computers and their attachments
have become cheap, ubiquitous commodities, causing the Internet to evolve greatly from
its original incarnation. The engineering principles and architecture of today’s Internet were
created in the 1960s and ’70s, when the problem networking aimed to solve was resource
sharing. The communication model that resulted is a conversation between exactly two
machines, one wishing to use a resource and the other providing access to it. Because of
this, the Internet architecture was designed around host-to-host applications, and the IP
packet itself contains two identifiers (addresses), one for the source and one for the
destination of a conversation between peers.
So, in classic Internet applications, the user explicitly indicates the destination address to
communicate with another host and the network’s only role is to carry packets to that
address, written in the packet header.
        On the contrary, today the Internet is used principally for data retrieval and service
access: in this situation the user cares about contents and is oblivious to their location. In
fact, the user knows that she wants to visit a web page, download a video from YouTube,
or access her bank account, without knowing or caring on which machine the desired data
or service actually resides [1].
Conversely, the underlying IP communication model is still address centric (or host-
centric); that is, the network layer has to be fed by IP addresses, which are used to
ascertain from “where” contents have to be taken. Therefore, there is a mismatch between
the content-centric usage model of the Internet and the address-centric service model
offered by the IP layer. Such a mismatch gives rise to several problems that would not
exist if the network layer were a content-centric one [2].








1.1 A shift of paradigm

There is a growing consensus in the recent literature that the central role of the IP address
poorly fits the current form of Internet usage. In particular, data retrieval and service
access, which are among the most requested functionalities, can obviously be supported
by the current Internet architecture, but they do not fit comfortably within the host-to-host
model [1].
So, since contents and information have become key elements of today’s communications
and networking, there is a need for a more suitable and efficient way of accessing,
retrieving and delivering them.
        Information (or Content) Centric Networking is a new logical and architectural
solution that responds to the fact that people value the Internet for the content it contains
and not for where that content comes from. Van Jacobson was one of the first to propose
replacing where with what, arguing that “named data is a better abstraction for today’s
communication problems than named hosts” [3].
Naming data means that a user can ask for a specific content without knowing where it is
located and, specifically, requires an architecture and a packet-exchange model that
replaces packet addresses with content names. There have been several proposals for
developing this new model: some of them foresee a “clean slate” approach, building a new
architecture based on names, while others aim to integrate Content Centric Networking
into today’s communication structure, arguing that replacing the existing IP layer could not
achieve deployment and commercial success.
Among the former is the Content Centric Network (CCN) solution proposed by Van
Jacobson, which suggests introducing a CCN network layer that can be layered over
anything, including IP itself.




              Figure 1.1: CCN moves network stack from IP to chunks of named content.





While preserving the design decisions that make TCP/IP simple, robust and scalable, this
solution also introduces two new layers, the strategy layer and the security layer, as shown
in Figure 1.1.
CCN is designed to take maximum advantage of multiple simultaneous connectivities (e.g.
Ethernet, 3G, Bluetooth) thanks to its simple relationship with layer 2; the strategy layer
makes the fine-grained, dynamic optimization choices needed to best exploit multiple
connectivities under changing conditions. Since CCN talks about data, not to nodes, it
does not need to obtain or bind a layer 3 identity (IP address) to a layer 2 identity such as
a MAC address. Because of that, even when connectivity is rapidly changing, CCN can
always exchange data as soon as it is physically possible to do so.
        One of the key features of Content Centric Networking is that it is based on a simple
communication mechanism, driven by the consumers of data, that natively offers support
for mobility. In fact, for every kind of communication, there are only two CCN packet
types, Interest and Data.
A consumer asks for content by broadcasting its Interest over all available connectivity.
Any node hearing the Interest and having Data that satisfies it can respond with a Data
packet. Data is transmitted only in response to an Interest and consumes that
Interest [3].
Since both Interest and Data identify the content being exchanged by name, multiple
nodes interested in the same content can share transmissions over a broadcast medium
using standard multicast suppression techniques [4].
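The Interest/Data exchange described above can be sketched in a few lines of Python. This is an illustrative model only: the class and function names (`Node`, `express_interest`, and so on) are hypothetical and do not belong to any real CCN implementation.

```python
# Minimal sketch of the CCN Interest/Data exchange: a consumer broadcasts an
# Interest; any node holding matching Data answers, and the Data consumes the
# Interest. All names here are illustrative, not a real CCN API.

class Node:
    def __init__(self, name):
        self.name = name
        self.content_store = {}          # content name -> data bytes
        self.pending_interests = set()   # names this node has asked for

    def publish(self, content_name, data):
        """Place a content object in this node's local store."""
        self.content_store[content_name] = data

    def hear_interest(self, content_name):
        """A node hearing an Interest answers if it holds matching Data."""
        return self.content_store.get(content_name)


def express_interest(consumer, neighbours, content_name):
    """Broadcast an Interest over all connectivity; the first Data consumes it."""
    consumer.pending_interests.add(content_name)
    for node in neighbours:
        data = node.hear_interest(content_name)
        if data is not None:
            # Data is sent only in response to an Interest and consumes it.
            consumer.pending_interests.discard(content_name)
            return data
    return None   # Interest remains pending until some node can satisfy it
```

Note that the exchange is entirely consumer-driven: no Data moves unless an Interest asked for it, which is what makes the model robust to mobility and changing connectivity.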
The central role played by names is therefore evident in a model where everything that
goes through the network needs a specific, unique, valid identifier. Moreover, it is
necessary to implement several of the most natural features one would want for service
access and data retrieval, such as persistence, availability, and authentication.
These requirements impose architectural choices and raise problems that I will outline in
the next sections.








1.2 Principal aspects

There have been several recent proposals for content-oriented network architectures,
whose underlying protocols are often similar in spirit, but which differ in many details,
leading to slightly different solutions regarding some key aspects.
In particular, I will highlight the following ones: naming, security and routing.



1.2.1 Naming

In a content-centric network, routing is based on the name of the content; therefore,
choosing the structure of names is an important aspect of the overall architecture.
In fact, a content is named directly, placed in the network by “publishers”, replicated in
caches by network elements and requested through a find or fetch primitive based on the
name itself. A copy of the desired data object could be found through the equivalent of a
name-based broadcast distribution of the request, and the object can be returned from any
host (or network element) holding a copy. The validity of the object must therefore be
ascertained directly, not by verifying which host delivered the object [5].
There are also some other properties that contents should have and that directly affect the
choice of the naming scheme:

      persistence

      hierarchical structure

      memorability

      security

Persistence means that, once a name is given to some data or service, the user would like
that name to remain valid as long as the underlying data or service is available. This calls
for a naming scheme not related to location, so as to avoid the equivalent of today’s
“broken links” when data is moved to another site.
A hierarchical structure for content names could be useful in order to combine contents
and allows efficient, distributed, hierarchical aggregation of routing and forwarding
information while allowing for fast lookups. Moreover, an object like com.CNN.headlines
can fall under the aggregates com and com.CNN and also has the attractive property of
being simple to remember [5].
The memorability property is desirable for contents that need to be advertised, like the
home page of a business company or of a newspaper. Conversely, memorability may not
be desirable for contents that have a very narrow interest, like a content describing the
status of a DHL shipment, or the value measured by a sensor. All these properties lead to
a name whose structure is quite similar to that of URLs.
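The aggregation property mentioned above can be illustrated with a small sketch: hierarchical names allow a forwarding table to hold one entry per aggregate instead of one per object, looked up by longest prefix match. The FIB contents and "face" labels below are made up for illustration.

```python
# Sketch of aggregated forwarding over hierarchical names, in the spirit of
# the com.CNN.headlines example above (names and entries are illustrative).

def components(name):
    """Split a hierarchical name like 'com.CNN.headlines' into its levels."""
    return name.split(".")

def longest_prefix_match(fib, name):
    """Return the forwarding entry with the longest matching name prefix."""
    parts = components(name)
    for length in range(len(parts), 0, -1):
        prefix = ".".join(parts[:length])
        if prefix in fib:
            return fib[prefix]
    return None   # no entry covers this name

# A tiny FIB: a single aggregate entry covers every name under com.CNN,
# so individual objects like com.CNN.headlines need no entry of their own.
fib = {"com": "face-1", "com.CNN": "face-2"}
```

This is exactly the property that a flat, self-certifying namespace loses, which is why the next section discusses how to recover hierarchy for such names.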



On the other hand, authenticity implies that protection and trust travel with the content
itself, rather than being a property of the connections over which it travels. Therefore, even
if content is provided by an untrusted server or is transferred over an untrusted network, it
can be validated by the consumer. Such content-based security also enables the
replication of secure contents by any network node, which is very important, as users can
get contents not only from the original content creator or source node but also from any
node or user that has already downloaded that content [2].
A naming solution that embodies these security properties is the usage of self-certifying
names, which are the cryptographic hash of the content (and, possibly, some metadata)
signed with the publisher’s private key. Obviously, these cryptographic operations result in
a name that is not human-readable and is extremely difficult to remember. Moreover, it
seems difficult to aggregate objects named with a self-certifying name into a hierarchical
structure in order to simplify forwarding operations. In spite of appearances, while
maintaining their unreadable structure, self-certifying names can also be organized in a
hierarchical way, so that they satisfy the need for hierarchy and persistence while offering
an intrinsic and strong binding between the name and the key of the entity that is actually
publishing a content.
Because of this strong, built-in trustworthiness, they have to be considered a must and are
widely recognized as the best naming solution for Content Centric Networking.

1.2.2 Security

Security is widely known to be one of the main problems bound to a connection-focused
communication model such as today’s Internet. In fact, the security of a content is tied to
the reliability of the host that stores it and to the connection over which it was retrieved.
Users would like to know that the data comes from the appropriate source, rather than
from some spoofing adversary, and that the content retrieved is valid, namely an
authenticated, unmodified copy of the desired content.
Because of this, Internet security has to be understood as a combination of three
aspects:

      integrity of the content

      trustworthiness of the source

      security of the connection


In particular, the last point is achieved through the combined action of DNS and some
cryptographic protocol, such as TLS. The former provides a reliable indicator of where to
find a network entity authorized to deliver the content a host is interested in, while the
latter secures the channel to the source, ensuring that the content is also protected from
eavesdropping by encryption.
        CCN aims to decouple where from what, securing the content itself rather than the
connections over which it travels, in order to avoid many of the host-based vulnerabilities
that plague today’s IP networking.
Content-based, rather than connection-based, security is therefore designed to allow users
to securely retrieve desired content by name, and to authenticate the result regardless of
where it comes from (the original source or a copy on their local disk) or how it was
obtained (via insecure HTTP, or new content-based mechanisms) [6].
Among these mechanisms, the solution proposed in [6] stands out: the authors argue for
authenticating neither names nor contents, but the link between them. In that way, a
publisher decides a name N for a content C and signs this pair with her key.
While this proposal has some advantages, such as location independence and the use of
plain names, the solution proposed in this thesis work is quite different and basically
aligned with [7].
In particular, this solution foresees the presence of different namespaces, each one
coordinated by a Name Routing System Node, responsible for assisting routing operations
and for managing name distribution.
In fact, these entities are responsible for distributing unique names, called principals, to
hosts that want to publish a content within a namespace. Each namespace follows its own
governance rules to release unique principal identifiers, including possible interactions of
the principal with a naming authority in order to obtain the principal identifier.
Every publisher with a valid principal can name her content without restrictions, using a
name called a label.
It is worth pointing out that both principals and labels are plain names, which are generally
human-readable. For instance, a namespace named www could support the current Web
names: principal identifiers could be DNS domains, like cnn.com, and a typical label would
be news.html [7].
The complete name is a combination of namespace, principal and label, joined to form a
self-certifying name with the following structure:

                          < namespace;Hash(principal);Hash(label) >

where Hash() denotes a cryptographic hash function used to transform principals and
labels into a fixed number of bytes.
This kind of name has the advantages of being unique, hierarchical and protected by
cryptography. Moreover, it grants a strict binding between a publisher and the content
itself, since the principal is unique and serves as a public key.
In addition, the trustworthiness of publishers and the validity of names within a namespace
are managed by a specific network entity, namely the Name Routing System Node, at the
cost of more complexity, compensated by stronger overall security.








1.2.3 Routing

Routing is a ticklish subject in CCN, as one of the key aspects of Content Centric
Networking is to ask for contents independently of where they reside. Therefore, with
object names that are location-independent, it is not possible to use common
topology-based routing and forwarding algorithms on these names.
In the literature it is possible to find several solutions for content forwarding but, even if
they differ in implementation, they are all based on a publish/subscribe or query/response
model, exploiting Interest and Data packets.




                                    Figure 1.2: CCN routing scheme [8]

The basic idea behind this picture is that a request for a content is routed towards the
network entity where that content was originally published. Along this path, the caches of
the traversed nodes are checked for copies of the requested content. As soon as the
requested object or a cached copy is found, it is returned to the requester within a Data
packet, along the path the Interest came from. On this reverse path, all the nodes cache a
copy of the content, according to some policy, in order to respond faster to further
requests for it.
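The request/response flow just described can be sketched as follows: the Interest walks the path towards the publisher, each on-path cache is checked, and the returning Data is cached by every node on the reverse path. The "cache everything" policy and the representation of caches as plain dictionaries are simplifying assumptions for this sketch.

```python
# Sketch of on-path caching along an Interest/Data exchange.
# path_to_source: the node caches (dicts) between requester and publisher,
# in traversal order; origin_store: the publisher's own content store.

def fetch(path_to_source, content_name, origin_store):
    """Route the request towards the source, checking each cache on the way."""
    for hops, cache in enumerate(path_to_source, start=1):
        if content_name in cache:
            # A cached copy short-circuits the path; nodes before it
            # (the reverse path) store a copy as the Data travels back.
            data = cache[content_name]
            _cache_reverse_path(path_to_source[: hops - 1], content_name, data)
            return data, hops
    # No cached copy found: the original publisher serves the content.
    data = origin_store[content_name]
    _cache_reverse_path(path_to_source, content_name, data)
    return data, len(path_to_source) + 1

def _cache_reverse_path(caches, content_name, data):
    """On the way back, every traversed node keeps a copy (cache-everything)."""
    for cache in caches:
        cache[content_name] = data
```

A second request for the same content is then satisfied by the first on-path cache, in fewer hops, which is the benefit the text describes.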
As pointed out in [8], there are two general approaches to handling routing in CCN
networks, both depending on the properties of the naming scheme adopted.
The first one is basically the one outlined above: it is based on the direct routing of the
Interest messages from the requester to one or multiple data sources. Since this approach
rests on the name of the desired content, in the literature it is known as Name-Based
Routing, and it generally presents algorithmic solutions heavily dependent on the
namespace adopted.
The second approach, instead, does not route requests directly to the publisher, but uses
a Name Routing System responsible for binding object names to topology-based locators.
The Name Routing System has a fundamental role in this solution, since it performs the
logical association between content names and the addresses needed to reach them, as the



DNS does today. Besides being in charge of unique name distribution, the NRS is involved
in content forwarding procedures, during which it can also perform security checks and
name validation.
In this approach, the first operation performed is the routing of the request message to the
nearest NRS, which checks the validity of the requested content name and verifies that it
was published by a reliable entity. After this check, the Name Routing System answers
back to the requester with one or more source addresses where it is possible to find the
desired content.
The requester can then direct its Interest packet towards the source and get the selected
object back, simply following normal routing mechanisms. It is worth pointing out that this
second solution could be loosely or tightly integrated with CCN architectures and that a
Name Routing System could perform name-location binding even before new Name-Based
Routing algorithms take place.
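The two-step resolution just described can be sketched as follows: the NRS first resolves a validated content name into one or more locators, and only then is the request sent directly to a source. The class and method names, and the use of plain dictionaries for stores, are illustrative assumptions, not an actual NRS interface.

```python
# Sketch of the Name Routing System approach: resolve first, then fetch.
# All names and data structures here are illustrative.

class NameRoutingSystem:
    def __init__(self):
        self.bindings = {}   # validated content name -> list of locators

    def register(self, content_name, locator):
        """Performed at publication time, after name/publisher validation."""
        self.bindings.setdefault(content_name, []).append(locator)

    def resolve(self, content_name):
        """Check the validity of the name and answer with source addresses."""
        if content_name not in self.bindings:
            raise LookupError("unknown or invalid content name")
        return self.bindings[content_name]


def retrieve(nrs, sources, content_name):
    """Two steps: resolve via the NRS, then fetch from the first source."""
    locators = nrs.resolve(content_name)
    return sources[locators[0]][content_name]
```

The resolve step is also the natural place for the security checks mentioned above, since an unregistered or invalid name never reaches the forwarding stage.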




1.3 Main advantages
As I already said, there is a growing consensus and an increasing development of new
theories and solutions regarding the “Internet of Things” and, therefore, Information
Centric Networking. This tendency is due not only to the soaring diffusion of multimedia
applications and data retrieval, but also to the fact that CCN offers advantages and
development possibilities unreachable with today’s Internet.
It is worth pointing out five main advantages that, in my opinion, are the driving factors of
the Internet’s evolution towards an Information Centric model:

      • integration with Real World Objects

      • in-network caching

      • content-oriented security model

      • per-content QoS

      • mobility

The first one is probably the most important, because it would affect our everyday life, as it
foresees the digitalization and availability in networks of every kind of object, from a car to a
dishwasher timer. In fact, the name Real World Object denotes any kind of object that can
be interacted with through a network connection and that can be tracked and located by its
owner.
The European project CONVERGENCE goes in that direction, defining the concept of
Versatile Digital Item, a very general data format holding information related to any
virtual or physical item and binding together meta-information and resources [9].
Thus, it is evident how easily these Real World Objects could fit into an Information Centric
model, which is based on the concept of addressing things and aims to identify contents
instead of locations. Conversely, an evolution towards the “Internet of Things” is difficult to
imagine within a host-centric model such as today’s Internet.


        The second feature listed above is another advantage that comes natively with
CCN and that could bring several benefits, in particular regarding content distribution.
The growing request for multimedia content and the quick development of video suppliers
have resulted, in recent years, in a wide diffusion of Content Delivery Networks. These
networks are systems of computers containing copies of data placed at various nodes of a
network with the goal of improving access to cached data (usually web objects, media files
or applications), by increasing access bandwidth and redundancy and reducing access
latency. This kind of service is nowadays highly demanded and requires extensive analysis
in order to understand which contents are the most popular and where caches should be
placed in the network in order to store them. Conversely, in CCN contents are requested
by name, making it easier to understand which objects are worth storing in
caches. Moreover, contents can be cached “in path”, that is, they can be stored in the
nearest cache server while data packets are travelling back to the requester.
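The “in path” caching idea can be sketched as a bounded content store at an intermediate node, which caches Data packets on the way back and serves later Interests locally. The class, the LRU eviction policy and the cache size are illustrative assumptions, not a specification of any CCN node.

```python
# Sketch of in-path caching at an intermediate CCN node: Data packets
# travelling back to the requester are stored in a bounded cache so
# that later Interests for the same name can be answered locally.

from collections import OrderedDict

class ContentStore:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self._store = OrderedDict()  # name -> data, in LRU order

    def on_data(self, name, data):
        # cache the content while it flows back along the reverse path
        self._store[name] = data
        self._store.move_to_end(name)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

    def on_interest(self, name):
        # a cache hit satisfies the Interest without reaching the source
        if name in self._store:
            self._store.move_to_end(name)
            return self._store[name]
        return None  # miss: forward the Interest upstream
```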
For what concerns security, Information Centric Networking foresees a location-
independent security model, designed around contents instead of connections.
As highlighted above, IP security is difficult to achieve nowadays, as it has to solve, at the
same time, the problems of the integrity of the content, the trustworthiness of the source
and the security of the connection the desired object travels through.
On the contrary, CCN decouples location from identity and adopts a naming scheme that
simply offers cryptographic protection from eavesdropping. Therefore, CCN is exempt from
the problem of securing the connection between source and destination, and contents
can be securely retrieved regardless of where they come from. Furthermore, validation of
content and publisher identity are built-in properties, as they are strictly bound to the
naming scheme adopted and, because of this, are the result of extensive discussion aimed
at avoiding the later patches usually needed to grant security.
        The naming scheme also plays a crucial role when it comes to providing various
levels of service and differentiating traffic classes. Several solutions, like DiffServ or
MPLS, are currently used to split traffic into various classes in order to offer different
qualities of service. Although they work fine and are highly demanded, all these
solutions rely on a tag or a label to recognize which class a certain packet belongs
to. This introduces overhead and requires additional protocols, such as LDP, to manage
the correspondence between labels and traffic classes.
In Content Centric Networks, instead, these possibilities come natively, provided that an
appropriate naming scheme is chosen. In fact, it is possible to define different priorities for
different contents simply by binding traffic classes to labels within a namespace.
Thanks to this, the number and the granularity of the traffic classes can be chosen simply
by changing the set of labels a quality of service is bound to. That is a real
advantage for service providers, since they no longer have to divide traffic into flows and
then assign tags.
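The binding between namespace labels and traffic classes can be sketched as follows. The namespace layout, the class names and the priority values are illustrative assumptions, not part of any standardized CCN naming scheme.

```python
# Sketch of per-content QoS derived from the naming scheme: a label
# inside the hierarchical name selects the traffic class directly,
# with no separate tag or label-distribution protocol involved.

TRAFFIC_CLASSES = {"gold": 0, "silver": 1, "best-effort": 2}

def traffic_class(content_name):
    # e.g. "/provider/gold/video/clip1" carries its own priority
    for component in content_name.strip("/").split("/"):
        if component in TRAFFIC_CLASSES:
            return TRAFFIC_CLASSES[component]
    return TRAFFIC_CLASSES["best-effort"]
```

Changing the set or granularity of classes then only requires changing the label dictionary, not re-tagging flows.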
        The last advantage that I want to point out is mobility. Today we are witnessing
an unprecedented diffusion of mobile devices, tablets and smartphones that are gradually
allowing access to the Internet from almost everywhere.
In fact, there is a growing request for connectivity: people want to access their e-mail or
web pages out of their homes and be able to work during journeys. Unfortunately, IP does
not support mobility in its original structure, as it does not decouple host names from
addresses, i.e. locations.
Information Centric Networking, instead, takes maximum advantage of this separation,
offering support for mobility and being able to provide content continuously even to a
travelling requester.



In fact, contents are provided back, in answer to an Interest packet, along the same path
followed by the request. So, if the requester is moving, at a certain time Interest packets
would start to follow a different route and, after having reached the desired object's source,
the corresponding Data packets would follow this new path too, reaching the requester
without problems.
Mobility is supported not only in this case, but also if the publisher of a content is moving
through the network.
In fact, depending on the routing approach used, the moving entity may or may not notify
the network that it is registering to another namespace or that it is connected to another
switch. In the first case Interest packets can be routed to the publishing entity, as the
network routing engine has been updated thanks to the notification, while in the second
case requests can be broadcast. Therefore, in both cases the source of the content
would be reached by the request. Because an Interest packet is consumed by a Data
packet with the same name, the first Data packet received by the requester would give the
network a piece of information about the new location of an entity storing that content,
and the following Interest packets would be routed there.
Thus, independently of the approach chosen and regardless of which entity is moving
through the network, Information Centric Networking can support content delivery in a mobility
scenario.
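The reverse-path delivery underlying this mobility support can be sketched with a Pending Interest Table (PIT): each node records the face an Interest arrived on, and the matching Data consumes that entry and goes back on the recorded face. The class and the plain face identifiers are simplifications for illustration.

```python
# Sketch of why Data follows the requester even across a handover:
# Interests leave per-node state (the PIT), Data retraces it, and a
# moving requester simply rebuilds the state with new Interests.

class PendingInterestTable:
    def __init__(self):
        self._pending = {}  # name -> set of faces awaiting the Data

    def on_interest(self, name, in_face):
        self._pending.setdefault(name, set()).add(in_face)

    def on_data(self, name):
        # the Data packet consumes the Interest with the same name
        # and is forwarded back on all the recorded faces
        return self._pending.pop(name, set())
```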




1.4 Development solutions
As pointed out before, Content Centric Networking could offer several benefits to a lot of
services, since it is able to natively provide features like content distribution or
mobility. Unfortunately, these advantages come at a price: Information Centric
Networking requires big changes to the Internet architecture in order to fully exploit
them.
In particular, addressing contents instead of hosts could require significant modifications to
the network and transport layers and, to introduce new features, could even foresee
completely new layers, such as the security one proposed in [3] and shown in Figure 1.1.
In the literature, it is possible to find three main approaches to the problem of how to deploy
CCN, quite different in spirit and technology:

      • clean slate approach

      • overlay approach

      • integration approach

        The first one, as the name suggests, foresees a complete change of the current
Internet architecture, stating that it is possible to fully develop Content Centric Networking
functionalities only by rebuilding the packet structure and layer organization from scratch.
Clean slate supporters, in fact, argue that it would be better to re-build part of the Internet
architecture around objects, creating new layers on top of the connectivity ones, namely
Layer 2, able to exploit every kind of underlying connection and dedicated to solving
problems like reordering or security.


Not being based on the current architecture, according to this approach, the packet
structure also needs to be reconstructed around the new layers. So, a plausible structure
for packets could be the one shown in Figure 1.3, taken from [3].




                                        Figure 1.3: CCN packet types



        At the completely opposite side, there is the overlay approach, which aims to offer
CCN functionalities on top of existing technologies.
While acknowledging all the benefits that could come with a completely rebuilt
architecture, researchers following this solution believe that IP is too widely deployed to
be suddenly abandoned. According to this approach, the clean slate one requires too
many changes to gain economic relevance, as it would be too expensive to replace the global
Internet architecture and infrastructure. Because of this, in order to develop a solution
that can be easily deployed, there is the need to keep the existing architecture and packet
structure, trying to offer CCN features inside them.
Therefore, this approach foresees leaving the IP layer unchanged and tries to offer Content
Centric functionalities on overlay layers, inserting Interest and Data packets into
UDP/IP tunnels.
The integration approach proposed in [7] and adopted in this work, instead, takes an
intermediate position, trying to take the major advantages of both solutions described
above.
In agreement with the overlay solution, integration supporters believe that it would be extremely
difficult to replace the existing IP infrastructure, since it would cause relevant costs for
operators. On the other hand, this approach agrees with the clean slate one about the
potential of new built-in features, difficult to develop efficiently in an overlay
structure due to the significant overhead required to perform tunneling and to the inefficiencies
related to a model that is still based on host-centric addresses. Because of that, the integration
approach not only tries to develop natively all the key features listed above, but also
foresees an integration with the existing network layer, mainly IP, with the purpose of
allowing simple deployment, without requiring substantial changes in the architecture.




This work is based on this approach and in particular follows the CONET solution for
Information Centric Networking [10], since it is part of this project: thus, in the following, the
integration approach will be embraced in the form given in the CONET proposal [11].
This solution, which I will expand on in the following, integrates Content Centric functionalities in
the IP protocol by means of a novel IPv4 option or an IPv6 extension header.



1.5 The need for experimental facilities
Whichever of the three approaches listed above is chosen to deploy Information
Centric Networking, the effort needed to realize CCN functionalities is remarkable. This
effort becomes even more significant considering that almost all the advantages and
features provided by ICN have in common the shift from the existing host-centric paradigm
to a content-centric one. Therefore, researchers are currently facing the problem of
developing and testing new technologies on networks and testbeds that are not suitable
for their purposes.
Because of this asymmetry between network facilities and research needs, there is
growing attention towards software solutions that allow portions of networks to be virtualized
in a very modular way. In fact, this possibility grants researchers the ability to conduct
several different experiments with the same software running on their machines.
Although this could be a solution for preliminary simulations, in order to realize a working
experiment with real performance constraints and benchmarks, these first results have to be
transferred to a physical network capable of supporting the experiment.




                          Figure 1.4: From computer to network virtualization [13]



In order to answer this need, in recent years, within the United States and the European
Union, some relevant projects have been launched with the goal of realizing network
research infrastructures dedicated to experimentation. These projects are planned to



support a wide range of experimental protocols and data dissemination techniques running
over facilities such as fiber optics or city-wide urban radio networks, shared among a large
number of individual and simultaneous experiments.
In order to do so, these projects rely heavily on virtualization techniques, which allow
multiple researchers to simultaneously share the infrastructure in a slice-based,
programmable way. In particular, the GENI project envisions an experiment as an
interconnected set of reserved resources on platforms in diverse locations, that can be
reserved, programmed and debugged, able to run on its own, without
interacting with other experiments and offering a view of the reserved resources only
[12].
        With the same spirit of resource slicing, there is also the European project OFELIA,
of which this work is part, which aims to “create a unique experimental facility that allows
researchers to not only experiment on a test network but to control the network itself,
precisely and dynamically” [13]. In particular, OFELIA is based on eight main experimental
centers, called islands, placed all over Europe and interconnected with a physical
infrastructure based on OpenFlow. Because this work is highly based on OpenFlow and is
realized within the OFELIA project, technology details as well as project objectives and
results deserve a more extensive exposition, which will be given in the following chapter.








Chapter 2

OpenFlow and the OFELIA
Project

As pointed out at the end of the last chapter, today there is almost no practical way to
experiment with new Internet architectures or network protocols with real traffic and in a
sufficiently lifelike scenario. The enormous amount of equipment and protocols already
installed and the reluctance to allow experimenters to exploit real traffic have led to the
common belief that the network infrastructure is “ossified” [14], often leaving new
network research and ideas untried and untested.
With the purpose to lower the barrier to entry for new ideas and solutions, the OFELIA
project aims to realize a Europe-wide research facility for experimenting with new network
architectures and distributed systems.
In order to provide an investigation space which allows for flexible integration of test and
production traffic, this project exploits virtualized programmable networks as well as
programmable switches and routers that, using virtualization, can process simultaneously
packets for multiple isolated experimental networks.
The OFELIA facility is based on OpenFlow, a currently emerging networking technology
that allows researchers to run experiments on heterogeneous switches in a uniform way at
line rate and with high port density. OpenFlow offers virtualization and control of the
network environment through secure and standardized interfaces, offering a set of
functions that run in many switches and routers, so that vendors do not need to expose the
internal workings of their facilities [14].
The following sections will outline the OFELIA project's organization and objectives; then
OpenFlow technology and features will be exposed in greater detail, because of the
central role they play in the project architecture and since they are extensively used in this
thesis work.








2.1 The OFELIA project

OFELIA, acronym for “OpenFlow in Europe: Linking Infrastructure and Applications”, is a
collaborative project within the European Commission’s FP7 ICT Work Programme.
Started in 2010, this three-year project aims to create multi-layer and multi-technology
experimentation networks by interconnecting eight main islands based on
OpenFlow infrastructure. These islands are placed at academic institutions in Europe and
interconnected through OpenFlow-enabled network equipment in order to provide
realistic test scenarios for experimenting with new technologies.
The project is organized around three main phases, for a gradual deployment of facilities
and improvement of features, incorporating the feedback of the user community and
extending its reach to other test frameworks.
In fact, at the starting date, the islands were only five, and the first phase was about
placing switches and OpenFlow controllers in these islands, in order to be able to
conduct first local experiments. Then, it was planned to interconnect the existing islands and to
extend project partners and facilities, by means of two open calls. Through one of these,
won by the EXOTIC proposal, which I will present in the next chapter, three new islands
are about to be added, with the result of extending the geographical and research impact
of the overall project.
The resulting island disposition is shown in the following figure, where it is
possible to see the three new Italian islands and, in particular, the “CNIT - Tor Vergata”
one, where this work, joined with other significant contributions, is going to be tested.




                       Figure 2.1: Geographic distribution of OFELIA islands [15]



After that, during the third phase, all the facilities and institutions added through the open
calls will be started and interconnected to the existing islands.
There will also be another open call, to further extend the project partners, who will be
invited to implement specific use cases in their islands.


Finally, the integration of the federated facilities with the European project FIRE is
scheduled, together with the offer of modular automated provisioning of OpenFlow slices
across multiple islands.




2.2 Experimenting with OFELIA
As mentioned before, one of the objectives of the project is to allow researchers to conduct
their experiments simultaneously, sharing the underlying infrastructure but running each
experiment within its own, isolated set of facilities.
In particular, OFELIA offers the possibility to create one's own slice, namely an empty
container into which experiments can be instantiated and to which researchers and
resources may be bound. More precisely, for conducting experiments, users receive a
network slice consisting of:

      • a number of Xen-based virtual machines, as end points

      • a virtual machine to deploy their OpenFlow-capable network controller and its
        applications

      • parts of switches that connect to the user's OpenFlow controller and to the end
        nodes

      • best effort links between end points and switch ports

      • control of a subset of the flowspace in the selected parts of the switches

In particular, the selectable parts of the switches are, generally, a set of physical ports
where the booked virtual machines are connected. The term flowspace, instead, means a set of
properties and features, defined by the OpenFlow protocol, through which it is possible to
define flows and, thus, to identify and distinguish one's own traffic.
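A flowspace can be pictured as a set of OpenFlow match fields that isolates one experimenter's traffic inside the shared switches. The field names below follow OpenFlow 1.0 match terminology, but the values and the matching helper are illustrative assumptions, not the actual OFELIA configuration format.

```python
# Sketch of a flowspace: the packet fields an experimenter reserves
# in order to carve her own traffic out of the shared infrastructure.

my_flowspace = {
    "in_port": 3,             # physical port the booked VM hangs off
    "dl_type": 0x0800,        # IPv4 traffic only
    "nw_src": "10.216.12.5",  # the slice's source address
    "tp_dst": 80,             # restrict the slice to HTTP flows
}

def belongs_to_slice(packet_fields, flowspace):
    # fields absent from the flowspace act as wildcards and match anything
    return all(packet_fields.get(f) == v for f, v in flowspace.items())
```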
        The connection to the facility happens via OpenVPN connections through the
central island, placed in Ghent, Belgium. Before users can set up an OpenVPN connection
to enter the OFELIA facility, they need to acquire an OFELIA user account [16].
Through “Expedient”, the web user interface of the OFELIA Control Framework, users can
create and run experiments within the OFELIA autonomous and federated facilities. Each
island runs the Expedient web-based user interface, so that a user can reserve resources
and simply start to experiment, while the control framework handles the separation of the
experiments and the monitoring procedures.








                   Figure 2.2: Creation of a project through “Expedient” user interface


After having created a project, it is possible to set permissions for other experimenters and
then to set up a slice, choosing where to create the desired virtual machines within the
island topology presented by Expedient.
Figure 2.2, for instance, shows the topology of the i2CAT island, in Barcelona, where I
ran the first part of the experiments presented in this work.
After these topology considerations, a user can decide the kind of virtual machine she is
going to use and select peculiar features of her traffic. These characteristics define the
flowspace, which is, basically, a list of packet fields that OpenFlow can recognize and the
user intends to exploit in order to separate the different flows over which to conduct experiments.




                       Figure 2.3: Creation of a VM and definition of the flowspace








2.3 OpenFlow
OpenFlow is an open standard that enables researchers to deploy experimental protocols
in production networks [17]. Essentially, OpenFlow is an API added as a feature to
commercial Ethernet switches, routers and wireless access points that provides
standardized secure interfaces and functions, with the purpose of making already
deployed networks programmable and hardware-independent.
        In a classical router or switch, the fast packet forwarding (the data path) and the
high-level routing decisions (the control path) occur on the same device. An OpenFlow
switch, instead, separates these two functions, giving a remote controller the power to
modify the behaviour of network devices. The data path portion still resides on the switch,
while high-level routing decisions are moved to a separate controller, typically a standard
server [17].
Exploiting the fact that most modern Ethernet switches and routers contain flow tables,
running at line rate to implement firewalls, NAT, etc., OpenFlow is designed around a
common set of functions, actions and fields, typically implemented in every network entity.
Thanks to this abstraction, the data path of an OpenFlow switch simply consists of a
pipeline containing one or more flow tables (see Figure 2.4), which has a clean
definition. Every table, in fact, is populated by several entries with different priorities, and
each flow table entry contains a set of packet fields to match and an action to apply to the
matching packets [14].




                    Figure 2.4: Flow tables concatenation forms OpenFlow data path


Obviously, the behaviour of the data path is determined by the control plane, which decides
how to separate different flows and which kinds of actions to apply to different kinds of traffic.
Therefore a controller, which is a logically separated entity, adds and removes flow entries
from the flow table on behalf of experimenters. For example, a static controller might be a
simple application running on a PC that statically establishes flows to interconnect a set of
test virtual machines, as could happen within OFELIA. But controllers can also be
proactive and dynamically add or remove flows under different conditions, as an
experiment progresses.
        As outlined before, OpenFlow offers a standardized interface that allows commands
and packets to be sent between a controller and the switch, avoiding the need for researchers
to directly program the switch itself. This happens by means of the OpenFlow Protocol,
which provides an open and standard way for a controller to communicate with a switch,
exploiting a Secure Channel connection.


Figure 2.5 sums up the idealized OpenFlow Switch structure, which is made up of an
external Controller, talking the OpenFlow Protocol with the switch through a Secure
Channel and populating the switch’s Flow Table.




                               Figure 2.5: The OpenFlow basic architecture


Originally developed at Stanford University in 2008, OpenFlow was initially intended for
deployment in campus and college networks. Since then, OpenFlow has greatly evolved and
is gaining more and more relevance and users. At the moment, it is available for almost
all Linux distributions, and plugins have also been developed for common traffic
analyzers, such as Wireshark, to make them recognize the OpenFlow protocol.
Currently the newest version of OpenFlow is 1.1.0, but in this work I used the previous
one, namely 1.0.0, as it is the one installed within the OFELIA islands.




2.4 OpenFlow working mechanisms
In this section I will outline the way OpenFlow works within a switch and highlight
the most important features useful to implement the control logic in the external controller.
In order to do that, I will focus on the way logical operations are performed along the data
path and on the OpenFlow protocol messages exchanged between the switch and the
controller.
First of all, the following terminal instructions are needed to install OpenFlow on a Ubuntu
machine:

$   wget http://openflow.org/downloads/openflow-1.0.0.tar.gz
$   tar xzf openflow-1.0.0.tar.gz
$   cd openflow-1.0.0
$   ./configure
$   make



$ sudo make install



2.4.1 The Data Path

The data path of an OpenFlow switch consists of one or more flow tables, as shown in
Figure 2.4, through which packets pass until they find a flow entry that matches some of
their properties and whose actions are performed.
Matching starts at the first flow table and may continue to additional ones.
Within each flow table there can be several flow entries, which match packets in priority
order, with the first matching entry in each table being used [18]. If a matching entry is
found, the instructions associated with the specific flow entry are executed. On the
contrary, if there is no match, the outcome depends on the switch configuration: the
packet can be forwarded to the controller over the OpenFlow channel, dropped, or
passed to the next flow table.
Therefore, the main instrument for researchers to decide whether an action has to be performed
on a packet is to write a flow entry, which, logically, has the following structure:

                            Match Fields | Counters | Instructions

                        Table 2.1: Main components of a flow entry in a flow table


where Match Fields, Counters and Instructions are, respectively, a set of packet fields
used to recognize different flows, a counter for the number of matching packets and a set
of actions to be performed on those matching packets.
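The flow entry structure and the priority-ordered lookup can be sketched as follows. This is a conceptual model under stated assumptions: the class, the dictionary-based match and the single-table lookup are simplifications of the real switch pipeline, and fields absent from the match play the role of the "any" wildcard.

```python
# Sketch of a flow entry (match fields, counters, instructions) and of
# a priority-ordered lookup over one flow table.

class FlowEntry:
    def __init__(self, priority, match, instructions):
        self.priority = priority
        self.match = match            # dict of field -> required value
        self.counters = 0             # matched-packet counter
        self.instructions = instructions

    def matches(self, packet):
        # absent fields act as the "any" wildcard and match everything
        return all(packet.get(f) == v for f, v in self.match.items())

def lookup(flow_table, packet):
    # entries are tried in priority order; the first match wins
    for entry in sorted(flow_table, key=lambda e: -e.priority):
        if entry.matches(packet):
            entry.counters += 1
            return entry.instructions
    return None  # table miss: drop or send to the controller
```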
OpenFlow is based on a standard packet structure, which is exploited to distinguish traffic
and to identify flows by comparing the incoming packet against these Match
Fields. As shown in Table 2.2, other than standard fields within packet headers, matches
can also be performed against the ingress port and metadata fields, which may be used to
pass information between tables in a switch.




                                   Table 2.2: OpenFlow matching fields


Each entry usually contains a specific value, but it can also be wildcarded with the value
any, which matches any value. Furthermore, if the switch supports arbitrary bitmasks, for
instance on the Ethernet source/destination or on the IP source/destination fields, these
masks can specify matches more precisely [18].



On receipt of a packet, an OpenFlow switch starts by performing a table lookup in the first
flow table and, based on pipeline processing, may perform table lookups in the following flow
tables.




                Figure 2.6: Flowchart detailing packet flow through an OpenFlow switch



A packet matches a flow table entry if the values in the match fields used for the lookup
(as defined in Table 2.2) match those defined in the flow table.
If that happens, or if a flow table field has a value of “any”, which matches all possible values
in the header, the consequent set of Instructions is executed.
In particular, the most important Instructions that can be performed on a matching
packet are the following:

APPLY-ACTIONS that may be used to modify the packet between two tables or to
     execute multiple actions of the same type. The actions are specified as an action
     set;
WRITE-METADATA that writes the, optionally masked, metadata value into the
     metadata field;
GOTO-TABLE that indicates the next table in the processing pipeline. The table-id must
     be greater than the current table-id.

More precisely, the Apply-Actions instruction takes as an argument a particular type of
action or a set of them, which are applied in the order specified below, regardless of the
order in which they were added to the set.

   1. copy TTL inwards: apply copy TTL inward actions to the packet;

   2. pop: apply all tag pop actions to the packet;

   3. push: apply all tag push actions to the packet;




   4. copy TTL outwards: apply copy TTL outwards action to the packet;

   5. decrement TTL: apply decrement TTL action to the packet;

   6. set: apply all set-field actions to the packet;

   7. qos: apply all QoS actions, such as set queue to the packet;

   8. group: if a group action is specified, apply the actions of the relevant group
      bucket(s) in the order specified by this list;

   9. output: if no group action is specified, forward the packet on the port specified by
      the output action.
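The fixed execution order above can be sketched as follows: whatever order actions were added to the set, the switch runs them in the mandated sequence. The simplified action names and the callable-based representation are illustrative, not the switch's internal data structures.

```python
# Sketch of the Apply-Actions ordering: the action set is executed in a
# fixed sequence, not in insertion order.

ACTION_ORDER = ["copy_ttl_in", "pop", "push", "copy_ttl_out",
                "dec_ttl", "set", "qos", "group", "output"]

def apply_action_set(packet, action_set):
    """Run the actions of a set in the mandated order and report it."""
    applied = []
    for kind in ACTION_ORDER:
        if kind in action_set:
            action_set[kind](packet)   # each action mutates the packet
            applied.append(kind)
    return applied
```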

While the functioning of some of the previous actions is simple, the output, set and
push/pop ones deserve particular attention.
The last one, in particular, enables OpenFlow switches to add or remove VLAN or MPLS tags,
allowing experimenters to change the flow a packet belongs to simply by popping a tag.
Another remarkable and powerful action provided by OpenFlow is the set instruction. In
fact, it allows modifying the value of almost all fields recognized by the switch, with the
result of offering great modularity and power to the control logic, since it is possible to
heavily modify a matching packet. This action is able to access and modify Ethernet
source and destination addresses, VLAN and MPLS tags, IPv4 addresses as well as TOS
and TTL, and also transport protocol source and destination ports.




                             Figure 2.7: Example of an OpenFlow flow entry


The output action also plays an important role in the functioning of a switch, as it is
responsible for forwarding a matching packet out of the correct port. OpenFlow switches
support forwarding to physical ports and to switch-defined virtual ports, generally used to
perform actions other than normal forwarding. So, besides sending a packet out of a
physical port, OpenFlow switches also support the Flood, In_Port, Local and Controller
ports.



As their names suggest, the first two are used, respectively, to send the packet out of all
ports except the ingress one and to send the packet back through the port it arrived on.
The Local port, instead, is used to send the packet to the switch’s local networking stack,
enabling remote entities to interact with the switch via the OpenFlow network, rather than
via a separate control network.
The Controller port, on the other hand, is used to encapsulate a packet into an OpenFlow
protocol packet and send it towards the external controller. This port is not only used
within flow entries, but also in the default behaviour of a switch, when a packet does not
match any flow entry. In this case, often referred to as a table miss, depending on the
configuration of the table, packets can be dropped or encapsulated and sent to the
controller, which is then responsible for handling the situation and deciding what to
do.
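The lookup and table-miss behaviour just described can be sketched as follows. The entry format, the `process` helper and the miss-policy flag are hypothetical simplifications of what a real OpenFlow data path does:

```python
# Hypothetical sketch of flow-table lookup with table-miss handling.
# Entry layout, match fields and the controller hook are illustrative.

def lookup(flow_table, packet_fields):
    """Return the actions of the highest-priority matching entry,
    or None on a table miss."""
    best = None
    for entry in flow_table:
        if all(packet_fields.get(k) == v for k, v in entry["match"].items()):
            if best is None or entry["priority"] > best["priority"]:
                best = entry
    return best["actions"] if best else None

def process(flow_table, packet_fields, miss_policy="controller"):
    actions = lookup(flow_table, packet_fields)
    if actions is not None:
        return ("apply", actions)
    # Table miss: depending on the table configuration, drop the packet
    # or encapsulate it in a Packet_in message for the controller.
    if miss_policy == "controller":
        return ("packet_in", packet_fields)
    return ("drop", None)

table = [{"match": {"ipv4_dst": "10.0.0.1"}, "priority": 10,
          "actions": [("output", 3)]}]
assert process(table, {"ipv4_dst": "10.0.0.1"}) == ("apply", [("output", 3)])
assert process(table, {"ipv4_dst": "10.0.0.9"})[0] == "packet_in"
```

The second assertion is the table-miss case: no entry matches, so the packet is handed to the control path.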

2.4.2 The Control Path

The control path of an OpenFlow enabled switch consists of the communication, that uses
the OpenFlow Protocol, between the switch and the controller. The OpenFlow channel is
the interface through which the controller configures and manages the switch, receives
events from it, and sends packets out to the switch. This channel is usually encrypted
using TLS, but may be run directly over TCP and all the messages exchanged over this
channel must be formatted according to the OpenFlow protocol.
       The OpenFlow protocol supports three message types, controller-to-switch,
asynchronous, and symmetric, each with multiple sub-types.
The first type is largely used by the controller to collect information about the switch’s
state and features, as well as to modify the behaviour of a switch by adding or removing
entries in its flow tables.
Besides accessing or modifying the switch’s internal structure and functioning, controller-to-
switch messages also include the Packet_out message, used to send back to the switch data
packets that were received after a table miss and that need to return to the data path.
       Asynchronous messages, instead, are sent without solicitation from the controller
and are typically used by the switch to inform the control logic about a packet arrival, a
switch state change, or an error.
Among them stands out the Packet_in message, always sent to the Controller port as a
response to a table miss in the data path. Its task is to solicit the control path, which, after
this Packet_in_event, has to decide whether the received packet is valid and which actions
have to be performed on that kind of packet.
Therefore, after receiving a Packet_in message, a controller usually sends a controller-to-
switch Modify_State message, inserting a flow entry into the switch’s flow table, and then
answers back with the received packet, encapsulated within the OpenFlow protocol
Packet_out message.
Another message belonging to this category is Flow_Removed, used by the switch to notify
the controller that a flow entry has timed out and been removed due to a lack of activity.
Error messages, used to notify a problem, are also considered asynchronous, since they are
meant to be handled by the control logic, namely the controller.
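The reactive Packet_in → Modify_State → Packet_out sequence described above can be sketched as follows. The `Switch` model, the string message names and the `decide_actions` policy are illustrative stand-ins, not the actual OpenFlow or NOX APIs:

```python
# Illustrative reactive control loop. The Switch model and
# decide_actions() are hypothetical stand-ins for the real
# OpenFlow messages and controller API.

class Switch:
    def __init__(self):
        self.flow_table = []   # entries installed via Modify_State
        self.sent = []         # messages delivered back to the switch

    def receive(self, msg_type, payload):
        self.sent.append((msg_type, payload))

def handle_packet_in(switch, packet, decide_actions):
    """Typical controller reaction to a Packet_in message."""
    actions = decide_actions(packet)
    if actions is None:
        return False  # packet judged invalid: the controller drops it
    # 1. Modify_State: install a flow entry so that subsequent packets
    #    of this flow are handled entirely in the data path.
    switch.flow_table.append({"match": {"ipv4_dst": packet["ipv4_dst"]},
                              "actions": actions})
    # 2. Packet_out: return the buffered packet to the data path.
    switch.receive("Packet_out", packet)
    return True

sw = Switch()
ok = handle_packet_in(sw, {"ipv4_dst": "10.0.0.1"},
                      decide_actions=lambda pkt: [("output", 1)])
```

After this exchange, the switch holds one new flow entry and the original packet has been returned to the data path.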
       Last but not least, there are the symmetric messages, exchanged in both directions
and essentially used to set up the connection between the switch and the controller.
OpenFlow offers the switch the ability to establish communication with a controller at a


user-configurable (but otherwise fixed) IP address, using a user-specified port. If the
switch knows the IP address of the controller, it initiates a standard TLS or TCP
connection which, by default, runs on port 6633.




2.5 The NOX controller
In recent literature, there is a growing interest in virtualization and abstraction solutions
that would provide a uniform and centralized programmatic interface to the entire network.
OpenFlow itself aims at this goal, separating the traffic pipeline from the control plane
and leaving to an external controller the ability to manage switch functioning, decide how
packets are forwarded and processed, and handle errors.
The answer to this need can be summarized as an “operating system” for networks [19],
which would provide the ability to observe and control a network through a programmatic
interface, on top of which it would be possible to write simple applications responsible for
the actual management tasks.
        NOX is a network control platform and, basically, an OpenFlow controller whose
purpose is to provide a high-level standard interface for achieving flow-level control
of the network.
Designed to support both large enterprise networks and smaller networks of a few hosts,
NOX’s core provides applications with an abstracted view of the network resources,
including the network topology and the location of all detected hosts [20]. NOX, which has
been used in this thesis work, offers a high-level API for OpenFlow as well as for other
network management functions, and controls the switches in the network through the
OpenFlow protocol. To do so, it provides an abstracted interface to OpenFlow, making it
possible to exploit and perform all the actions listed above, such as determining whether
(and how) to forward a flow on the network, collecting statistics, or modifying the packets
in a flow.
        Therefore it is possible to implement the entire control logic of an OpenFlow
switch completely from scratch, simply by adding a C++ or Python application on top of
NOX. In particular, NOX applications are built around an event-handling mechanism,
achieved through two main types of structures: Events and Components.
After connecting to a switch, the NOX controller usually listens for asynchronous
OpenFlow messages, which are translated into Events that need to be handled by a
Component, properly designed to manage the situation and to instruct the switch to follow
the desired behaviour.
It is worth pointing out that NOX offers developers the possibility of defining new Events
and Components, which could follow a different way of working and, for instance, insert
entries into a flow table without handling an Event.
In this work, instead, I did not create new Events, but exploited the existing ones, listed
below, handling them with my own control logic. As explained on the NOX website [20],
these are the core events that a controller should be able to support and manage:

      •  Datapath_join_event: issued whenever a new switch is detected on the network.

      •  Datapath_leave_event: issued whenever a switch has left the network.

      •  Packet_in_event: called for each new packet received by NOX. The event includes
         the switch ID, the incoming port, and the packet buffer.

      •  Flow_mod_event: issued when a flow has been added or modified by NOX.

      •  Flow_removed_event: raised when a flow in the network expires or has been
         removed.

      •  Port_status_event: indicates a change in port status. It contains the current port
         state, including whether the port is disabled, its speed, and the port name.

      •  Port_stats_in: raised when a Port_stats message is received from a controlled
         switch in response to a Port_stats_request message; it includes the current
         counter values for a given port (such as rx, tx, and errors).
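The Event/Component interaction can be sketched with a minimal dispatcher. The `Dispatcher` class and the handler signatures below are illustrative and deliberately much simpler than NOX's actual C++/Python API:

```python
# Minimal event-dispatch sketch in the spirit of NOX's Events and
# Components; class and method names are illustrative, not NOX's API.

class Dispatcher:
    def __init__(self):
        self.handlers = {}

    def register(self, event_name, handler):
        """A Component registers one handler per event it manages."""
        self.handlers.setdefault(event_name, []).append(handler)

    def post(self, event_name, **event):
        """An incoming OpenFlow message is translated into an event
        and delivered to every registered handler."""
        for handler in self.handlers.get(event_name, []):
            handler(event)

seen = []
d = Dispatcher()
d.register("Datapath_join_event", lambda ev: seen.append(("join", ev["dpid"])))
d.register("Packet_in_event", lambda ev: seen.append(("pkt", ev["in_port"])))

d.post("Datapath_join_event", dpid=1)
d.post("Packet_in_event", dpid=1, in_port=3, buffer=b"...")
```

Each `post` call models the controller translating an asynchronous switch message into an Event and handing it to the interested Components.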

All these events have to be handled by a Component, which is just an encapsulation of
functionality with some additional frills to allow the declaration of dependencies. Not many
Components are provided with the NOX distribution, because the NOX core is intended to
be only a platform for programming network behaviour and, because of that, is focused on
extensibility rather than on built-in functionality.
Thus, researchers can experiment with new protocols and features by writing their own
Components, able to control the operations performed by one or more switches and the
general behaviour of the network.
Currently, it is possible to develop Components and Events either in C++ or in Python.
During this work, I chose Python and wrote my own control logic able to support,
under some assumptions, Information Centric Networking.

2.6 Slicing the network with FlowVisor
While OpenFlow allows experimenters to conduct their tests on a real network
infrastructure by separating the data path from the control logic, it does nothing about
virtualization and resource sharing. Thus, it leaves open the problem of running multiple
experiments on the same production network.
As outlined above, achieving this goal requires a slicing layer especially designed to
divide network resources and separate traffic belonging to different users.
FlowVisor slices the network hardware by placing a software layer between the control
plane and the data plane, enabling a single data path to be controlled by multiple control
planes, each belonging to a separate experiment. While, in principle, FlowVisor could slice
any control message format, it has been developed with particular attention to OpenFlow,
making it the default slicing software for OpenFlow networks, so much so that it is also
used in the OFELIA project to separate different experiments.



Architecturally, FlowVisor acts as a transparent proxy. Network devices generate
OpenFlow protocol messages, which go to the FlowVisor and are then routed by network
slice to the appropriate researcher(s), as shown in Figure 2.8. OpenFlow messages from
researcher controllers are vetted by the FlowVisor to ensure that the isolation between
slices is maintained and then are forwarded to switches [21].




                      Figure 2.8: FlowVisor can recursively slice network resources


Thus, FlowVisor enforces transparency and isolation between slices by inspecting,
rewriting, and policing OpenFlow messages as they pass. Depending on the resource
allocation policy, message type, destination and content, FlowVisor will forward a given
message unchanged, translate it to a suitable message type and forward it, or reject the
message, sending it back to the sender in the form of an OpenFlow error message [22].
        In order to successfully realize this isolation mechanism, a clear separation
between slices is needed. This is achieved thanks to a slice policy that defines the network
resources (the flowspace) and the OpenFlow controller allocated to each slice. Each policy
is described by a text configuration file that defines the fraction of the total link bandwidth
available, the budget for switch CPU and forwarding table entries, as well as the actual
network topology, which is specified as a list of network nodes and ports. Besides these
physical requirements, each slice policy must indicate its flowspace, as previously shown in
Figure 2.3, which is defined by an ordered list of tuples similar to firewall rules. Each rule
has an associated action, such as allow, read-only, or deny, and permits FlowVisor to
simply check whether a received message belongs to a particular slice.
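Such an ordered list of firewall-like rules can be sketched as follows. The field names and the `classify` helper are illustrative, not FlowVisor's actual configuration syntax:

```python
# Illustrative flowspace check: the first matching rule in the ordered
# list decides how a message is treated. Field names and the rule
# representation are hypothetical, loosely modeled on firewall rules.

def classify(flowspace, msg_fields):
    for match, action in flowspace:     # rules are evaluated in order
        if all(msg_fields.get(k) == v for k, v in match.items()):
            return action
    return "deny"                       # default: not in this slice

slice_flowspace = [
    ({"tcp_dst": 80}, "allow"),         # this slice owns web traffic
    ({"ip_proto": 6}, "read-only"),     # other TCP traffic: observe only
]

assert classify(slice_flowspace, {"tcp_dst": 80, "ip_proto": 6}) == "allow"
assert classify(slice_flowspace, {"tcp_dst": 22, "ip_proto": 6}) == "read-only"
assert classify(slice_flowspace, {"ip_proto": 17}) == "deny"
```

The ordering matters: the more specific web-traffic rule must precede the generic TCP rule, exactly as in a firewall rule chain.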
        FlowVisor carefully rewrites messages from the OpenFlow switch to the slice
controller and sends messages to the control plane only if the source switch is actually in
the slice’s topology. Moreover, FlowVisor rewrites OpenFlow feature negotiation
messages so that the slice controller only sees the physical switch ports that appear in the
slice policy. FlowVisor strongly exploits message rewriting techniques, so that it can easily
simulate network events, such as link or node failures.
In the opposite direction, FlowVisor also rewrites messages from the slice controller to the
OpenFlow switch and, in particular, the ones in charge of adding or deleting rules in the
flow tables. In fact, given a forwarding rule modification message, FlowVisor has to check
that it does not violate the list of tuples indicated in the slice’s flowspace.
To avoid such violations, FlowVisor rewrites both the flow definition and the set of
actions, intersecting them with the local flowspace and thus respecting the slice policy
and isolation.
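This rewriting step can be sketched as an intersection between the requested flow match and the slice's flowspace. The exact-match representation below is a deliberate simplification of FlowVisor's masked tuples:

```python
# Illustrative sketch of FlowVisor-style rewriting of a flow_mod match
# so that it cannot exceed the slice's flowspace. Fields are treated
# as exact matches; real flowspaces use masked/wildcarded tuples.

def intersect(requested, flowspace_match):
    """Return the requested match narrowed to the flowspace,
    or None if the two are incompatible."""
    merged = dict(requested)
    for field, value in flowspace_match.items():
        if field in merged and merged[field] != value:
            return None              # violates the slice policy: reject
        merged[field] = value        # narrow the wildcarded field
    return merged

flowspace = {"vlan_id": 10}          # this slice owns VLAN 10 only
print(intersect({"ipv4_dst": "10.0.0.1"}, flowspace))
# -> {'ipv4_dst': '10.0.0.1', 'vlan_id': 10}  (rule narrowed to VLAN 10)
print(intersect({"vlan_id": 20}, flowspace))
# -> None  (rejected, e.g. answered with an OpenFlow error message)
```

A rule the controller left wildcarded is silently narrowed to the slice's VLAN, while a rule that explicitly claims another VLAN is rejected.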


        Since FlowVisor ensures complete isolation between slices, it turns out to be a
valuable solution for allowing researchers to experiment on production networks, as research
traffic can be efficiently separated from production traffic. Moreover, as outlined in [22], it
also ensures a high degree of scalability, meaning that FlowVisor allows a network to be
sliced repeatedly with almost no impact on performance. In addition, it is worth
noting that further scaling can be achieved by using a multi-threaded implementation
or multiple FlowVisor instances, as shown in Figure 2.8.
Thanks to all these features, FlowVisor is used as the “slicing layer” also within the OFELIA
project. In fact, it is responsible for separating different experiments, acting in the background
while a user reserves resources through the Expedient GUI, described above.

Hence, the OFELIA project combines almost all the features listed above, providing a
logical architecture that can be summed up as in Figure 2.9. In fact, OFELIA offers line-rate
packet forwarding on commodity hardware, decoupled from the control logic
that determines its behaviour. Between the control plane and the data path sits the slicing
layer, namely FlowVisor, in charge of dividing network resources as well as the various
sets of flows, and of controlling their isolation. On top of that, there are several NOX
controllers, which act as network operating systems, offering extensible APIs to
experimenters, allowing them to implement their own control logic.




                                   Figure 2.9: OFELIA logic components








Chapter 3

Enhancing OpenFlow to support
Information Centric Networking

As previously said, one of the most discussed topics about the Future Internet is the transition
from the current architecture to an Information Centric model, based on contents rather
than hosts. Since this transition requires remarkable changes to the Internet structure,
Information Centric Networking solutions need to be extensively developed and tested in
realistic scenarios in order to be correctly evaluated.
The European project OFELIA aims to provide pan-European network facilities dedicated
to research, with the purpose of helping experimenters to test their ideas and solutions, in
order to overcome the so-called “network ossification”.
Because of this, OFELIA turns out to be the ideal testbed to try and develop new solutions
and theories related to Information Centric Networking.
Therefore, OFELIA is strongly appreciated within the research community and is receiving a
growing number of experimentation proposals answering its open calls.
       One of the two winners of the first call was the EXOTIC proposal, which this
work is part of, submitted by the Department of Electrical Engineering of the University of
Rome “Tor Vergata” and CNIT.
EXOTIC, which will be extensively described in this chapter, suggests a solution to develop
Information Centric Networking on the OFELIA facilities by extending existing
OpenFlow features in order to support a content-oriented architecture.
In the following, I will outline the most important properties of the proposal, as well as the
requirements that need to be met in order to successfully develop the suggested
Information Centric Networking solution.







3.1 The EXOTIC proposal

EXOTIC, an acronym for “EXtending OpenFlow To Support a Future Internet with a Content
Centric Model”, is one of the two proposals that the OFELIA community accepted to develop
within the project’s facilities.
As its name suggests, it foresees the possibility of applying the separation between control
logic and data path, on which OpenFlow is based, to Information Centric Networking. In
doing so, EXOTIC proposes to add an OpenFlow interface to CCN routers, so that the CCN
router can be controlled and managed by an external controller [7].
         This approach is completely new and the EXOTIC team is currently working to
define the functionality that a CCN router can offer on the OpenFlow interface. To
this end, EXOTIC will take into account the CCN functionalities that are under definition in
the CONVERGENCE project.
However, a first set of functionalities that a CCN switch needs to offer are the
basic functions of a traditional IP router, such as taking a routing decision and forwarding or
rewriting an IP packet. It is worth pointing out that a complete list of properties is currently
under development but, besides traditional IP functionalities, the main content centric
features that EXOTIC aims to satisfy are the following:

      •  address contents, adopting an addressing scheme based on names, not including
         references to their location;

      •  route a user request, which includes a “destination” content-name, toward the
         closest copy of the content and deliver the content back to the requesting host;

      •  provide a native, in-network caching functionality for efficient content delivery in
         fixed and mobile environments;

      •  exploit security information embedded in the name, to protect the content and avoid
         the diffusion of fake versions of it.

The controller/switch separation enforced in the OpenFlow architecture can be exploited
very well in the context of a Content Centric Network and has been adopted exactly to
take advantage of this isolation of the control logic from the data plane.
For example, the handling of the cache in CCN switches could be autonomously managed
by each node, which can cache some content and provide it upon request from end-nodes.
If, instead, this procedure is performed according to the OpenFlow approach, nodes could
rely on a logically centralized controller to decide which content is to be cached and when
a new content is requested, making it simple to build ordered lists of the most accessed
contents.
Moreover, the controller can proactively push some contents into a CCN node, therefore
anticipating the “automatic” caching procedures and offering a very useful and well-performing
functionality for the distribution of live contents.


       Unfortunately, it is clear that the current OpenFlow specification [18]
does not support the CCN features needed in the envisaged scenario.
Therefore EXOTIC will propose enhancements to the OpenFlow specification
in order to suit the CCN requirements.
On the other hand, it will also provide its own solution to Information Centric Networking,
offering an evolutionary approach based on the extension of the current IP networking
architecture.
The proposed framework, called CONET (COntent NETwork), is modular and open, and
lends itself to supporting different solutions for specific issues like naming, name-based
routing, forwarding and transport mechanisms, as outlined in the next sections.



3.2 The evolutionary approach
Information Centric Networking has often been presented as a revolutionary approach to
networking that proposes to replace the current forwarding mechanism of TCP/IP
networks, based on host addresses, with a mechanism based on the name of the
information to be transported. This radical approach, described as a clean slate solution,
foresees the complete replacement of the existing IP networking layer by a new information
centric networking layer, so that also non-content-based communication patterns, such as
conversational communications, would be supported by content-oriented networking
mechanisms.
         EXOTIC, instead, proposes a different approach towards Information Centric
Networking, based on extending the current IP networking architecture rather than
considering its replacement.
The proposed framework, called CONET (COntent NETwork), argues (and demonstrates)
that it is possible to add ICN functionality to IP in a backward compatible way, so that most
advantages of ICN can be exploited without dropping IP.
In this “evolutionary” scenario, content-oriented communication patterns can be supported
through extensions to the IP protocol, while conversational communications can
continue to run over traditional IP.
The integration of content-oriented features and services into the existing IP protocol will be
achieved, as explained later, by means of a newly defined IP Option, usable both
with IPv4 and IPv6 [23].
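As a purely illustrative sketch of how a content identifier could travel inside a type-length-value IP option, consider the following; the option type code and payload layout are invented for the example and do not reflect the actual CONET IP Option defined in [23]:

```python
import struct

# Purely illustrative type-length-value encoding of a content name in
# an IPv4-style option. The option type code below is an invented
# value; the real CONET IP Option layout is defined in the cited work.

HYPOTHETICAL_OPTION_TYPE = 0x9E  # assumption, not an assigned number

def encode_content_option(name: str) -> bytes:
    data = name.encode("ascii")
    length = 2 + len(data)       # type byte + length byte + value
    if length > 255:
        raise ValueError("content name too long for a single option")
    return struct.pack("!BB", HYPOTHETICAL_OPTION_TYPE, length) + data

def decode_content_option(option: bytes) -> str:
    opt_type, length = struct.unpack("!BB", option[:2])
    if opt_type != HYPOTHETICAL_OPTION_TYPE or length != len(option):
        raise ValueError("malformed option")
    return option[2:].decode("ascii")

opt = encode_content_option("provider.net/videos/clip1")
assert decode_content_option(opt) == "provider.net/videos/clip1"
```

The point of the sketch is backward compatibility: routers unaware of the option can ignore it and forward the packet as plain IP, while CONET-aware nodes can parse the content identifier.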
        The principal motivations that led the authors not to fully embrace a clean slate
solution, and that resulted in this new evolutionary approach, can be summarized in the
two following statements:

The content-oriented communication pattern is overestimated; the conversational
     communication pattern still remains very important.

Having an evolutionary path from existing IP networks could be the only way for a
     real deployment of ICN.


The second one has already been discussed and, essentially, argues that IP is too widely
deployed and currently exploited to be simply dismantled at once in order to migrate to ICN.
Furthermore, the evolutionary approach not only tries to develop natively all the content-oriented
key features, but also foresees an integration with the existing network layer, with
the purpose of allowing a simple deployment of ICN, since it does not require substantial
changes to the architecture.
The first statement, instead, deserves particular attention because, in a certain way, it
seems to go against Information Centric Networking principles, stating that there would be
no particular benefit in changing the way the conversational communication pattern is
supported by current IP-based networks.
The conversational communication pattern applies every time a user or an application
wants to exchange information bidirectionally with a remote entity. This pattern
clearly applies to voice and audio/video calls, but also to most of the current typical
services on the web. For example, most social networking applications require a
continuous exchange of information from the client (the browser) to the server. These
applications are not only about downloading or playing a content, but also about
sending lots of information from the client to the server and, therefore, require a
conversational communication pattern [23].
More in general, most web applications can be seen as composed of content-oriented
communication patterns mixed with conversational patterns.
        Therefore, it is true that the Internet is strongly evolving towards a content-oriented
model, with a growing number of data retrieval applications, but this is not completely
replacing conversational communications, which still remain a substantial part of the
overall traffic. Because of that, a clean slate approach that aims at fully replacing the
existing IP layer would be forced, in order to support this kind of communication, to rebuild
the socket abstractions which are currently used in TCP/IP networks and which already
work without significant problems.
This necessity lowers the benefits of building new CCN functionalities from scratch,
since they would also have to take into account the need for realizing conversational
features, which are known to work fine when structured in a host-oriented way.
Considering also the difficulty of deploying a completely new architecture, the clean slate
solution seems to present too many obstacles, which are, on the contrary, quite simply
overcome by adopting the evolutionary approach.

3.3 The CONET framework
The CONET architecture is composed of a set of CONET nodes interconnected through
CONET Sub Systems (CSS), each defined as a generic network with a homogeneous
networking technology and a homogeneous native addressing space.
CONET nodes exchange CONET Information Units (CIUs), which are used to convey both
requests for named resources, called Interest CIUs, and chunks of the named resources
themselves, such as parts of files or videos, called Named-Data CIUs.
A CONET Sub System contains a set of nodes and exploits an under-CONET technology
to transfer requests and data among the nodes. Therefore, CONET can be defined as an
inter-network that interconnects several CSSs.
A given CSS can be for example:

      •  two nodes connected by a point-to-point link;

      •  a layer 2 network like Ethernet;

      •  a layer 3 network, e.g. a private IPv4 or IPv6 network or a whole Autonomous
         System.

The basic elements of the proposed CONET framework have been inspired by the CCN
definition [3] and share some design principles with it. One of the main differences is
exactly this integration with the existing underlying IP technology, which is exploited to carry
information about content centric features.
To best fit the transfer units of an under-CONET technology, both Interest CIUs and
Named-Data CIUs are carried in small CONET data units named Carrier Packets, which
also include routing information of the end-to-end communication session they serve.
The different types of CONET nodes are shown in Figure 3.1, which also highlights the
various types of CONET Sub Systems that can be traversed in the process of
sending an Interest packet and receiving back a Named-Data CIU.
In particular, within the CONET architecture depicted in the figure, it is possible to outline
the following basic network entities, with their main functionalities:




                                      Figure 3.1: CONET Architecture




END NODES: are user devices that request named-resources by issuing Interest CIUs.

SERVING NODES: store, advertise and provide named-resources by splitting the related
     sequence of bytes in one or more Named-Data CIUs.

BORDER NODES: interconnect different CONET Sub Systems. They forward Carrier
    Packets by using CONET routing mechanisms, may reassemble Carrier Packets
    and cache the related Named-Data CIU, and may send back cached Named-Data
    CIUs.

NAME ROUTING SYSTEM (NRS) NODES: optional within an IP-CSS, may be used to
    assist the CONET routing operations.




INTERNAL NODES: could be deployed inside an IP-CSS to provide additional in-
    network caches.

As shown in Figure 3.1, Border Nodes interconnect different CSSs and, therefore, the end-
to-end forward-by-name process can be seen as the process of finding a sequence of
Border Nodes from the End Node up to the Serving Node.
When the CSS is an IP network (IP-CSS), the internal devices use an autonomous IP
addressing space and an interior gateway protocol to set up IP reachability among them.
Within an IP-CSS, optional Name Routing System Nodes may be used to assist the CONET
routing operations.
Moreover, optional CONET Internal Nodes could be deployed inside an IP-CSS to provide
additional in-network caches. In fact, albeit optional, Internal Nodes are the only ones that
can provide caching functionality when the End Node and the Serving Node belong to
the same IP-CSS; indeed, no Border Node is traversed in this case.
It is also worth noting that the IP-CSS between a downstream and an upstream Border Node
can be composed of an arbitrary number of plain IP routers and Internal Nodes. The
latter can be seen as “enhanced” IP routers, in the sense that they can perform content
caching but are not able to perform forward-by-name, being only able to route packets at
the IP level.


3.3.1 Model of operations

Having presented the most important network entities that form the CONET architecture,
it is worth pointing out the different functionalities performed by these nodes and the basic
mechanisms needed to successfully request a content and receive it back.
In particular, for the sake of clarity, it is important to outline the differences between the
forward-by-name, content routing and data forwarding operations.

FORWARD-BY-NAME is the mechanism used by ICN nodes to relay an incoming
    content request to an output interface. The output interface is chosen by looking up
    a “name-based” forwarding table. The size of the forwarding tables is critical when
    high speed packet forwarding is needed.

CONTENT ROUTING is the mechanism used to disseminate information about location
    of contents, so as to properly set up the name-based forwarding tables. For
    instance, content routing could use plain IP distance vector mechanisms, where
    name prefixes are distributed instead of IP prefixes. Content routing is one of the
    assets of ICN, as a provider could use content routing to improve the efficiency and
    reliability of content access in its network.

DATA FORWARDING is the mechanism that allows contents to be sent back to the
    device that issued a content request. Data forwarding cannot use the forward-by-
    name mechanisms, because the devices are not addressed by the content routing
    plane of an ICN.
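Forward-by-name over a name-based forwarding table can be sketched as a longest-prefix match on hierarchical names. The table contents, the '/' separator and the `forward_by_name` helper are illustrative assumptions, not part of the CONET specification:

```python
# Illustrative forward-by-name: longest-prefix match on '/'-separated
# content names. Table entries and next-hop values are hypothetical.

def forward_by_name(fib, content_name):
    """Return the next hop for the longest matching name prefix,
    or None if no prefix matches (no route to the content)."""
    components = content_name.split("/")
    for plen in range(len(components), 0, -1):
        prefix = "/".join(components[:plen])
        if prefix in fib:
            return fib[prefix]
    return None

fib = {
    "provider.net": "if1",
    "provider.net/videos": "if2",   # the more specific entry wins
}
assert forward_by_name(fib, "provider.net/videos/clip1") == "if2"
assert forward_by_name(fib, "provider.net/docs/a") == "if1"
assert forward_by_name(fib, "other.org/x") is None
```

As the text notes, the size of such tables is the critical factor when high-speed packet forwarding is needed, since every lookup may probe several prefix lengths.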

Therefore, CONET requires two different forwarding strategies to forward content requests
and to deliver the data.


Thus, when a CSS is an IP network, the downstream Border Node performs a forward-by-
name procedure to resolve a content name and obtains as a result the IP address of the
upstream Border Node. It then sends the content-request Interest packet to the upstream
Border Node over IP, and the upstream node answers back with the requested content
using IP forwarding mechanisms [23].
Taking into account the differences between these routing mechanisms and the
architecture depicted in Figure 3.1, the operations performed to provide an End Node with
a chunk of a Named Resource can be described as follows:

      An End Node requests a chunk of a Named Resource by issuing an Interest CIU,
       which is encapsulated in a Carrier Packet, named XXX.

      Name-based forwarding engines in the End Node and in the intermediate Border
       Nodes forward-by-name the packet XXX upward to the proper Serving Node.
       Forward-by-name means that, on the basis of the network identifier contained in
       the Carrier Packet XXX, a name-based routing engine singles out the CSS address
       of the next upward Border Node towards the Serving Node.

      Since a CSS address is an interface address, consistent with the CSS technology
       (e.g., an IPv4 address in the case of an IP-CSS), the name-based routing engine
       encapsulates the Carrier Packet XXX in a data unit of the underlying CSS
       technology and uses the CSS address as the destination address.

      The CSS address of the End Node and the set of CSS addresses of the traversed
       Border Nodes in the “upward” path towards the Serving Node are appended to the
       Carrier Packet XXX, within a control field named path-state.

      If present, the Internal Nodes forward the Carrier Packet XXX by using the
       underlying routing engine (e.g., the IP RIB), but they are able to parse the Carrier
       Packet XXX.

      The first in-path CONET node (Border, Internal or Serving Node) that is able to
       provide the chunk of the Named Data requested within XXX will send back the
       Named Data CIU, without further propagating the Interest CIU.



      The Named Data CIU is encapsulated in a Carrier Packet, named YYY. The
       Carrier Packet YYY follows the same path as the Carrier Packet XXX, but in the
       downward direction, and will reach the requesting End Node.

      The reverse-path routing is carried out in a source-routing fashion by using the
       path-state control information appended to the Carrier Packet YYY. This path-state
       is a copy of the one set up in the Interest CIU during the upward routing.

      All the Border Nodes and Internal Nodes in the downward path may cache the
       Named Data CIU, according to their policies and available resources [24].
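The upward path-state accumulation and the downward source routing described above can be sketched in a few lines of Python. This is a toy model, not the CONET wire format: the dictionary fields and the CSS addresses are illustrative.

```python
def forward_interest(carrier, hops):
    """Upward path: each traversed node appends its own CSS address
    to the carrier packet's path-state field."""
    for node_addr in hops:
        carrier["path_state"].append(node_addr)
    return carrier

def route_data_back(path_state):
    """Downward path: the Named Data carrier packet copies the
    path-state and visits the recorded hops in reverse order."""
    return list(reversed(path_state))

# An Interest carrier packet traverses three hypothetical nodes upward.
interest = {"name": "example-content", "path_state": []}
forward_interest(interest, ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
downward = route_data_back(interest["path_state"])
```

After the upward pass, `downward` lists the hops from the Serving Node back to the requesting End Node, which is exactly the order in which the Named Data CIU must be delivered.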



3.3.2 Naming

The issue of naming in an Information Centric network is a hot topic and different
proposals have been put forward, as outlined in the first chapter. In particular, the two
most agreed-upon solutions offer different implementations: one suggests that names
should be “human readable”, while the other states that names need not be human
readable and may therefore have other interesting security properties, such as being
“self-certifying”.
        The CONET options proposed in [25] can support any choice even in parallel, so
that different types of naming can be supported. The content name is transported into a
field called ICN-ID, which is a tuple with this structure:

                                       < namespaceID; name >

The namespace ID determines the format of the name field and, therefore, the name field
is a namespace-specific string. Each namespace follows its own rules to release unique
names with its own format, so that, in principle, there could be different contents with the
same name, provided that they have been named within different namespaces.
Going into detail, the ICN-ID starts with a two-byte field called ICN-ID namespace ID,
which determines the structure of the rest of the ICN-ID.
ICN-ID namespace values to be used in the future public Internet need to be assigned by
a registration authority such as IANA.
        Within CONET, there is a specific proposal for a naming format. The content name
is the composition of two hash values, which are juxtaposed in this structure:

                                  < Hash(Principal);Hash(Label) >

Principal and Label [1] are flat names that can be human readable; a hash function
transforms them into a fixed number of bytes. A Principal is the owner of the address
space identified by its Principal identifier. The hash of the Principal identifier must be
unique in the namespace (therefore, an entity is needed to assign Principal identifiers).
Label is an identifier chosen by the Principal to uniquely differentiate each content.
The hash function is applied to provide cryptographic protection to the name and also
offers the advantage of transforming a variable-length Principal identifier and Label into a
fixed-size structure. For instance, to support WEB resources it is possible to define the
namespace www, which follows the current domain name assignment rules and uses a
domain name, for instance www.cnn.com, as Principal identifier and a URL path, such as
/news/index.html, as Label.
        An ICN-ID, also called Network Identifier, is the complete name of a resource in
the whole CONET and has the following structure:

                          < namespace;Hash(Principal);Hash(Label) >

It is worth noting that this structure guarantees the uniqueness of a resource, since every
part of this name is assigned in a controlled and unique way by a higher-level entity.
Therefore, the same Label can be assigned by different Principals, and even two identical
Principals could exist at the same time, but within different namespaces.
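The construction of such a Network Identifier can be sketched as follows. This is a minimal illustration, not the CONET specification: the choice of SHA-256 and the 8-byte truncation are assumptions made here only to show the fixed-size property.

```python
import hashlib

def icn_id(namespace_id, principal, label, hash_len=8):
    """Build a fixed-size ICN-ID tuple <namespace; H(Principal); H(Label)>.
    The hash function and truncation length are illustrative choices."""
    def h(s):
        return hashlib.sha256(s.encode()).digest()[:hash_len]
    return (namespace_id, h(principal), h(label))

# Same namespace, Principal and Label always yield the same identifier.
a = icn_id(0x0001, "www.cnn.com", "/news/index.html")
b = icn_id(0x0001, "www.cnn.com", "/news/index.html")
```

Whatever the length of the Principal identifier and of the Label, the resulting name components have a constant size, which is what makes fast, fixed-width lookups possible.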


3.3.3 Packet structure and protocol stack

As outlined before, in the CONET architecture there are two main types of packets,
Interest packets and Data packets, responsible for transporting the request for a content
and for carrying the content itself.
Besides these two principal packet types, the EXOTIC proposal also introduces the
concept of Carrier packets, which have the goal of improving the forwarding speed of
CONET.
       In fact, Carrier packets have been introduced because a Named Data CIU could be
too large to be transported in a single under-CONET data unit and thus needs to be
segmented. A Named Data CIU can correspond to a chunk of 256–512 kbytes, while the
maximum data unit size of under-CONET technologies can be smaller (e.g., 1500 bytes
for Ethernet, 64 kbytes for IP).
The optimal chunk size results from several tradeoffs; EXOTIC argues that the chunk
size should be in the order of the one currently used in P2P systems, e.g. 256–512 kbytes;
nevertheless, the CONET architecture can support variable chunk sizes.
Contents to be transported over an Information Centric Network can, in fact, vary greatly
in size, from a few bytes to hundreds of gigabytes, since it is not easy to set an upper
bound on content size. Therefore, in order to be handled by CCN nodes, contents need to
be segmented into smaller data units, typically called chunks, which are inserted into
Carrier packets.
Those chunks, contained in Carrier packets, are the basic data units to which caching and
security are applied. This means that:

      a single chunk out of a larger content can be requested by an End Node;

      single chunks will be signed by the origin Serving Node for security reasons;

      Border and Internal Nodes will authenticate the chunks and store them in their
       caches as needed.

Thus, a named resource is split into different chunks, each of which is inserted into a
Named Data CIU. Then, in order to meet under-CONET technical requirements, these
Named Data CIUs can be further split into smaller Carrier packets. Therefore, as shown in
Figure 3.2, the CONET framework proposes to handle the segmentation of a content at
two levels. At the first level the content is segmented into chunks, while, at the second
level, chunks are segmented into smaller data units that are handled by an ICN-specific
transport protocol.
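The two-level segmentation can be illustrated with a short sketch. The function below is a toy model under assumed parameters (the chunk size and the per-Carrier-packet payload limit are inputs, not CONET-mandated values); it returns, for every chunk, the byte boundaries of each segment, i.e. the kind of information the “segment info” field of a Carrier packet would convey.

```python
def segment(content, chunk_size, mtu_payload):
    """Two-level segmentation sketch: content -> chunks -> carrier segments.
    Returns a list of (chunk_number, [(start, end), ...]) entries, where each
    (start, end) pair gives the byte boundaries of one carried segment."""
    chunks = [content[i:i + chunk_size] for i in range(0, len(content), chunk_size)]
    plan = []
    for n, chunk in enumerate(chunks):
        segs = [(off, min(off + mtu_payload, len(chunk)))
                for off in range(0, len(chunk), mtu_payload)]
        plan.append((n, segs))
    return plan

# 10 bytes of content, 4-byte chunks, 3-byte carrier payloads.
plan = segment(b"0123456789", chunk_size=4, mtu_payload=3)
```

In a real deployment the chunk size would be in the 256–512 kbyte range discussed above and the payload limit would follow from the under-CONET MTU.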




                    Figure 3.2: CONET Information Units (CIUs) and Carrier packets



As it is possible to see in the right part of the previous figure, a chunk of content is
inserted in a CONET Information Unit, either an Interest or a Data packet, which, in order
to identify the content, contains the Network Identifier (the complete name of the carried
content) and the chunk number. This field is useful for reassembling the original content
and is also exploited by Interest packets, which can request a set of bytes of a particular
chunk of a named resource, e.g. from byte 1000 to byte 2000 (segment info field) of
chunk no. 3 (chunk number field). The Data packet, moreover, also contains the Security
Data, which makes it possible to validate a Named Data CIU before caching it or, in the
End Node, delivering it to the API.
         Carrier packets are low-level carriers of CIUs. Since they are the data units of the
forwarding process, Carrier packets need to carry all the information needed to identify,
validate, successfully deliver and reassemble the chunk of content they are transporting.
Because of that, their header is composed of:





      a minimal set of control information of the CIUs, i.e. the network identifier, the
       chunk number and a field indicating the CIU type (Interest or Named Data);

      a payload-header, which identifies the byte boundaries of the carried segment
       (segment info);

      the payload (present only in the case of a Named Data CIU), which embodies a
       part of the sequence of bytes contained in the Security Data and Data Chunk fields
       of a Named Data CIU;

      the Path State field.

The Path State field includes control information strictly related to the specific
communication session between an End Node and a Serving Node or a cache.
In fact, it carries the information needed for a content to pass through the same set of
nodes traversed by the corresponding Interest packet and, thereby, to reach the network
entity that issued the request.
        This session-like mechanism, besides allowing proper content delivery, can also
help the transport protocol to perform reliability and congestion control, much like TCP
does for the transfer of a byte stream in current TCP/IP networks.
The proposed transport protocol is based on the principles described in [1] and follows a
receiver-driven approach implementing the same algorithms as TCP (slow start,
congestion avoidance, fast retransmit, fast recovery). The transport algorithm issues a
sequence of Interest CIUs, each of which requests only a fraction of a Named Data CIU.
By controlling the sending of these Interest CIUs, it is possible to obtain a transport
protocol that is TCP friendly and can achieve fairness both among multiple competing
ICN content downloads and between ICN content downloads and regular TCP
connections. In some CCN proposals, these mechanisms are implemented at the
application level, above the API towards the ICN layer. CONET developers, instead,
believe that this is not efficient and preferred to embed the transport protocol below the
API towards the application, just above the networking layer. This mirrors the approach
used in TCP/IP networks, where TCP is implemented in the kernel and the applications
are provided with an API that hides the details of the transport protocol [23].
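One step of such a receiver-driven window evolution can be sketched as follows. This is a simplified model of the TCP-like behaviour described above (window in Interests per RTT; the halving-on-loss and threshold rules follow standard TCP reasoning, not a CONET-specific formula):

```python
def next_window(cwnd, ssthresh, loss):
    """One update step of a receiver-driven congestion window, counted in
    outstanding Interest CIUs per RTT. Returns (new_cwnd, new_ssthresh).
    Slow start below ssthresh, congestion avoidance above, halve on loss."""
    if loss:
        half = cwnd // 2
        return max(1, half), max(2, half)   # fast-recovery-style backoff
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh           # slow start: exponential growth
    return cwnd + 1, ssthresh               # congestion avoidance: linear growth
```

The receiver would issue `cwnd` Interests per round trip, so pacing the Interests directly paces the Named Data flow, which is what makes the scheme TCP friendly.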


3.3.4 IPv4 and IPv6 CONET Options

As mentioned above, the packet field responsible for distinguishing CONET packets from
normal traffic is the CONET Option, a newly defined option of the IPv4 and IPv6 protocols.
The full details on the definition and usage of the proposed IPv4 and IPv6 CONET
Options can be found in [26]. The CONET IP Options have the same content for IPv4 and
IPv6, except for the initial byte, which differs according to the different rules for coding
options.





                                Figure 3.3: IPv4 and IPv6 CONET Options


In IPv6, the CONET Option is transported in the Hop-by-Hop Options header, which is
meant to be analysed by all routers in the path.
The CONET IP Option includes the content name, called ICN-ID, which can be of variable
or fixed size according to the selected naming approach.
In fact, the Namespace ID is designed to support different types of naming schemes. As
shown in Figure 3.3, a Chunk Sequence Number (CSN) of variable length can optionally
be present, making it possible to offer the chunk number information to the ICN layer in a
“standard” way. The alternative approach would be to define a naming scheme that
includes the chunk number, so that this information is transported in the ICN-ID.
The CONET IP Option also includes a “Diffserv and Type” field: this one-byte field is used
to differentiate the quality of service that the network can provide to the delivered content
and also to identify the content type.
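A byte-level sketch of such an IPv4 option can help fix ideas. The layout below follows the standard IPv4 option-type/option-length convention; the option-type value and the field sizes beyond the two-byte Namespace ID are assumptions made here for illustration, not the values defined in [26].

```python
import struct

CONET_OPT_TYPE = 0x9E  # illustrative option-type byte, not a registered value

def build_conet_option(namespace_id, name, csn, ds_type):
    """Sketch of an IPv4 CONET option: option type, option length,
    2-byte Namespace ID, name bytes, a 4-byte Chunk Sequence Number
    (the spec allows a variable-length CSN) and the one-byte
    'Diffserv and Type' field."""
    body = (struct.pack("!H", namespace_id)   # Namespace ID
            + name                            # namespace-specific name
            + struct.pack("!I", csn)          # Chunk Sequence Number
            + bytes([ds_type]))               # Diffserv and Type
    # IPv4 options start with option-type and total option-length bytes.
    return bytes([CONET_OPT_TYPE, 2 + len(body)]) + body

opt = build_conet_option(0x0001, b"\xaa" * 16, csn=3, ds_type=0)
```

The self-describing length byte is what allows a parser to skip an unrecognized option, which is exactly the router behaviour the EXOTIC authors rely on.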
        It is worth pointing out that the EXOTIC authors demonstrated, in [27], that the
addition of a new, unrecognized option is handled by current routers in the Internet.
Therefore, inserting all the CONET information needed for a Content Centric Networking
solution into a new option not only offers the possibility to maintain the existing IP
architecture, but also, barring a few exceptions, preserves end-to-end connectivity.




3.4 Analysis of CONET functionalities
Focusing on the internal structure, both Border and Internal Nodes can be represented as
an OpenFlow switch, responsible for fast packet forwarding, connected through an SSL
channel to a Host PC running the controller, which is in charge of managing the switch’s
flow tables.
In the CONET architecture, in fact, only Border and Internal Nodes inspect packets and
are therefore able to recognize whether a datagram is a CONET Interest or Named Data
packet or whether, instead, it belongs to traditional traffic. Because of this, OpenFlow
switches play a fundamental role in the CONET framework, as they are meant to
recognize whether an incoming packet is a CONET one and to perform the consequent
actions.
       While leaving to the next chapter all the considerations about how to recognize
CONET packets with OpenFlow, in this section I want to provide an identification of the
Content Centric (CC) functionalities [24] that need to be implemented either in the switch
or in the controller within the CONET architecture.
In particular, as shown in Figure 3.4, these are the key functionalities that must be
developed in order to obtain a working platform:

LOOKUP AND CACHE: responsible for forwarding packets towards the next hop
     and/or the cache. It deals with flows and is realized through OpenFlow’s flow
     tables.

CC ROUTING: responsible for disseminating the information related to the location
      of content; it is present only in BNs. Thanks to this functionality it is possible to set
      up forward-by-name tables. If a Name Routing System is present, CC Routing is
      performed in the NRS, and Border Nodes and End Nodes perform DNS queries to
      the NRS.




                             Figure 3.4: Content Centric main functionalities


CC ROUTING CLIENT: it queries the NRS to obtain the resolution of a content name.

CC CACHING: responsible for the management of content caches.

CC POLICIES: responsible for converting the indication of the CC Routing and CC
     Caching into policies which will be used to create the flow tables managed by the
     Lookup and Cache.

CC DATA PLANE: responsible for the encapsulation and decapsulation of the
     packets generated by or directed to the application. It is present only in End
     Nodes and Serving Nodes.








Chapter 4

Analysis for the realization of a real
CONET scenario
In this chapter, I want to focus in more detail on my work, presenting all the
considerations and analyses needed in order to proceed towards the realization of a
working CONET Sub System.
My work, in fact, concentrates on one Sub System, delimited by two Border Nodes.
Because of that, it analyses all the functionalities and problems that need to be solved
within these nodes, but it does not deal with naming or forward-by-name issues, which
are assumed to have the characteristics explained above.
In particular, in the next sections I will outline the chain of operations that has to be
implemented in Border Nodes as well as in Internal Nodes, and I will then concentrate
on the problem of integrating CONET with the current OpenFlow specifications.
In fact, while the choice of identifying CONET packets by means of an IP option has
several advantages, it makes the use of OpenFlow as the switching technology difficult.
In the following, I will expose two different approaches, bound to different time horizons,
to overcome this problem, and I will point out a solution that integrates OpenFlow
functionalities within a CONET scenario, especially in the short term.
After that, in the next chapter a practical realization of this solution is given, showing how
it is possible to obtain a working Internal Node, able to recognize and separate CONET
packets from normal traffic and to switch them across a CONET Sub System.



4.1 Analysis of operations performed by Border
    Nodes and Internal Nodes
In order to better understand how all the functionalities listed in Section 3.4 can be
implemented, it is worth outlining the sequence of operations that has to be performed in
Internal and Border Nodes, differentiating between them, since they perform different
operations.



        In the following I will assume, for the sake of simplicity, an underlying IP-CSS
technology, as it is the most widely deployed and the one CONET relies on.
Moreover, it is worth noting that this is only an expository simplification: IP could easily
be replaced by any other underlying technology.
The first distinction needed in this exposition is between Border Nodes and Internal
Nodes, since INs perform only classic routing and cache policies, while BNs, in addition
to that, also perform forward-by-name operations.
Another necessary separation is between Interest and Named Data packets, since the
operations performed on them are different and involve different functionalities.
Therefore, in the following I will list the operations that need to be performed within the
CONET network entities, and I will be concerned with two different types of caches:

      a cache for the contents, i.e., for the chunks of Data packets;

      a cache for the routing information, as routing tables are handled with the
       “lookup-and-cache” approach.

I will call the former the “content-cache” and the latter the “route-cache”.


4.1.1 Border Nodes

Arrival of an Interest

       1. Inspection of the packet:

              if it is not a CONET packet, perform traditional routing;

              if it is a CONET packet

                                                         ↓

       2. Perform content-cache lookup:

              if the packet belongs to a chunk that is in the content-cache, answer back
               providing the requested content and satisfy the Interest;

              if the content-cache does not contain the requested content

                                                         ↓






       3. Lookup in the forward-by-name table (route-cache):

              if there is a correspondence between the requested content and a
               destination IP address, forward the packet to that node, inserting its own
               IP address into the “Path state” field, and refresh the route-cache;

              if there is no correspondence between the requested content and a
               destination IP address,

                                                        ↓

       4. Query the Name Resolution Server (NRS) node.

       5. Update the forward-by-name table (route-cache) according to the NRS
       answer.

       6. Forward the packet, inserting its own IP address into the “Path state” field.

The previous functions have to be intended as logical functions and entities.
In practice, they can be realized as a single architectural entity (e.g. the forwarding table
and the cache-lookup table can be merged into a unique table).
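The Border Node pipeline for an incoming Interest can be condensed into one function. This is a sketch of the logical steps above, not an implementation: the packet representation, the helper names and the returned action tuples are all illustrative.

```python
def handle_interest(pkt, content_cache, route_cache, query_nrs, my_ip):
    """Border Node Interest pipeline (sketch). Returns an (action, arg)
    tuple describing what the node would do with the packet."""
    if "conet_option" not in pkt:
        return ("ip_forward", None)              # step 1: not a CONET packet
    name = pkt["conet_option"]["name"]
    if name in content_cache:
        return ("answer", content_cache[name])   # step 2: cache hit
    if name not in route_cache:
        route_cache[name] = query_nrs(name)      # steps 4-5: NRS resolution
    pkt["path_state"].append(my_ip)              # step 6: record own address
    return ("forward", route_cache[name])        # steps 3/6: forward-by-name

routes, contents = {}, {"chunk-A": b"data"}
hit = handle_interest({"conet_option": {"name": "chunk-A"}, "path_state": []},
                      contents, routes, lambda n: "10.0.0.9", "10.0.0.1")
miss = handle_interest({"conet_option": {"name": "chunk-B"}, "path_state": []},
                       contents, routes, lambda n: "10.0.0.9", "10.0.0.1")
```

Note how the route-cache is populated lazily, which is precisely the “lookup-and-cache” approach described for routing information.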


Arrival of a Data Unit

       1. Inspection of the packet:

              if it is not a CONET packet, perform traditional routing;

              if it is a CONET packet

                                                         ↓

       2. Forward the packet according to the normal routing table.
          In parallel, the node should decide whether to cache the data in the
          content-cache.

       3. The node checks if the data packet belongs to a chunk that is already in the
       content-cache:

              if yes → just refresh the timer in the content-cache for the given content
               (do Content-Cache Timer Refresh)

              if not → try to insert the packet in the cache (doPrecache)


The term “Precache” denotes a system that stores data waiting to be reassembled into a
chunk. The decision to hold a data packet in the Precache system may depend on the
current processing and memory load of the Precache itself, as well as on the number of
Interest and Data packets received for that chunk.
At the end of the precaching phase, when a full data chunk is available, security checks
should be performed before storing the content in the cache.
Obviously, storage in the cache will be subject to cache policy algorithms, such as LRU,
especially when there are cache size restrictions.
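The timer-refresh and LRU behaviours just mentioned can be sketched together. The class below is a toy content-cache (the capacity and the method name are illustrative); on a hit it refreshes the entry, and when full it evicts the least recently used chunk, as an LRU policy would.

```python
from collections import OrderedDict

class ContentCache:
    """LRU content-cache sketch: refresh an entry on hit, evict the
    least recently used chunk when capacity is exceeded."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self.store = OrderedDict()

    def refresh_or_insert(self, chunk_id, data):
        if chunk_id in self.store:
            self.store.move_to_end(chunk_id)   # Content-Cache Timer Refresh
            return "refreshed"
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)     # LRU eviction
        self.store[chunk_id] = data            # doPrecache outcome
        return "inserted"

cache = ContentCache(capacity=2)
cache.refresh_or_insert("a", b"...")
cache.refresh_or_insert("b", b"...")
cache.refresh_or_insert("a", b"...")   # refreshes "a", so "b" is now oldest
cache.refresh_or_insert("c", b"...")   # evicts "b"
```

In a real node the insertion would additionally be gated by the security checks and the Precache load criteria described above.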


4.1.2 Internal Nodes


Arrival of an Interest

       1. Inspection of the packet:

              if it is not a CONET packet, perform traditional routing;

              if it is a CONET packet

                                                         ↓

       2. Perform content-cache lookup:

              if the packet belongs to a chunk that is in the content-cache, answer back
               providing the requested content and satisfy the Interest;

              if the content-cache does not contain the chunk to which the packet
               belongs

                                                         ↓

       3. Perform a “normal” lookup in the IP routing table, as the IP destination
       address is the IP of the Border Node.



Arrival of a Data Unit

       1. Inspection of the packet:

              if it is not a CONET packet, perform traditional routing;

              if it is a CONET packet

                                                         ↓

       2. Forward the packet according to the normal routing table.
          In parallel, the node should decide whether to cache the data in the
          content-cache.

       3. The node checks if the data packet belongs to a chunk that is already in the
       content-cache:

              if yes → just refresh the timer in the content-cache for the given content
               (do Content-Cache Timer Refresh)

              if not → try to insert the packet in the cache (doPrecache)


Focusing on Internal Nodes, it is particularly evident that the first operation to be realized
is the separation between CONET packets and normal traffic, the latter being forwarded
by the OpenFlow switch following traditional routing.
Since all the information related to CONET is carried within the header of a Carrier
Packet and stored in an IPv4 or IPv6 option, the only difference between CONET traffic
and normal packets is this IP CONET Option.
In particular, this option carries two pieces of information for an Internal Node: it identifies
whether the packet is a CONET one, and whether it is an Interest or a Data Unit.
An Internal Node can thus exploit this information to:

       1. decide whether the incoming packet is a CONET packet;
       2. distinguish between Interest and Data Unit.

Thus, by means of the IP CONET Option, the first part of the above scheme can be
combined as follows:

       1. Arrival of a packet.
       2. Inspection of the IP CONET Option. Is it a CONET packet?

               NO → perform normal routing
               YES → is it an Interest or a Data Unit?

and then the above procedures follow, depending on the type of packet.
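This first-stage decision is small enough to express directly. The function below is a sketch of the classification step (the packet representation and field names are illustrative, standing in for the parsed IP CONET Option):

```python
def classify(pkt):
    """First-stage Internal Node decision, driven by the IP CONET Option:
    no option means normal routing; otherwise the option's CIU-type field
    tells Interest and Data Unit apart."""
    opt = pkt.get("conet_option")
    if opt is None:
        return "normal_routing"
    return "interest" if opt["ciu_type"] == "interest" else "data"
```

Each returned label selects one of the processing chains listed above.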







4.2 Analysis of OpenFlow features for CONET
Support
As outlined before, the CONET Option plays a crucial role in the routing process, both in
Internal Nodes and in Border Nodes.
In addition, the first part of this option, namely Network Identifier + Chunk Number, offers
a simple and clean possibility to perform a fast cache lookup, thanks to its uniqueness
and hierarchical structure.
Because of that, this option could be used to address content in the cache as well as to
recognize packets and decide whether it is necessary to perform a cache lookup or
whether the packet needs normal routing.
         The downside of putting all the information related to CONET, and thus the
possibility of distinguishing a CONET Information Unit from standard traffic, into one
single field is that this field has to be perfectly handled by all the applications and
technologies that interact with it.
As previously highlighted in Table 2.2, and here recalled for the sake of clarity, OpenFlow
supports only the following fields in its matching entries:




                                   Table 4.1: OpenFlow matching fields


Unfortunately, at the moment OpenFlow cannot match against the IPv4 Options field, so
a node cannot recognize CONET traffic through OpenFlow rules. Moreover, it is worth
pointing out that the IPv4 Options field does not have a fixed length, so the CONET
Option will not appear at a stable position within this field.
Therefore, even if this seems to be the cleanest and most efficient solution, at the
moment the use of the IP CONET Option to differentiate CCN traffic is not possible in a
native way and could be realized only by means of some adjustments.
        Thus, the complete deployment of the CONET Option within the IP layer has to be
considered a long-term target, to be achieved when a new OpenFlow 2+ version, able to
match against the whole IP header, is deployed.







4.2.1 A long term approach

As previously said, EXOTIC’s ultimate goal is to exploit the IPv4 CONET Option (or the
IPv6 CONET Option in the case of IPv6 traffic) to distinguish CCN traffic and to decide
whether an incoming packet is an Interest or a Data Unit.
In order to do that, the CONET architecture foresees mapping the Carrier Packet header
into the IPv4 (or IPv6) CONET Option, as shown in the following figure.




                 Figure 4.1: Mapping between CONET Header and IP CONET Option



It is evident that in order to successfully develop such an architecture, there is the need
to define new flows based directly on the CONET Option, so that the Lookup and Cache
functionality becomes simpler and faster. Because of that, EXOTIC developers envisage
the deployment of a new OpenFlow 2+, able to match against the whole IP header and,
thus, able to identify the length of the IP Options field and the position of the CONET
Option inside it. Reaching this target would make it possible to exploit OpenFlow as a
fast and modular switching technology while deploying Content Centric functionalities in
existing IP networks.
        Besides the CCN enhancement of OpenFlow, for the long term EXOTIC also
foresees an efficient distribution of the functionalities listed in Section 3.4, in order to
achieve the best possible performance.
This may result in the following disposition of functionalities:








                          Figure 4.2: Functionalities disposition in the long term


In the long term, in fact, the forward-by-name cache and the normal layer-3 routing table
have to be located on the switch, in order to improve performance. Moreover, it seems
reasonable to put caches physically on the switch, so that the Lookup and Cache
functionality becomes faster, as does content distribution, since data need not be fetched
from an external cache.
It is worth pointing out that, contrary to what one might think, adding several levels of
caches does not help improve performance once the population behind the first cache
grows past a small threshold [28]. Therefore, even if in the previous figure the cache is
integrated into the switch, it is also possible, and probably useful, to foresee only some
Cache Servers per CSS, shared among all the Internal Nodes.
Using an internal data cache, there is no need for communication between the switch
and a Cache Server, since contents are stored directly on the switch.
There is still, however, the need for a script that advertises which contents have been
stored, so that Interests can be satisfied locally, without any further routing. This could be
realized as an OpenFlow API that locally modifies the flow tables, routing Interests
towards the cache, or that gives the information as input to the external NOX controller.
The CC Routing client module may run directly on the switch, querying, in the case of a
BN, the NRS for the resolution of content names.



4.3 Short term approach for CONET support
in OpenFlow
As previously mentioned, this work is focused on a short term solution, regarding, with
particular attention, the operations needed to implement CONET functionalities within an
IP-CSS. Being part of the OFELIA project and due to the lack of an OpenFlow version able
to recognize the IP option field, this work, as the entire project, relies on the OpenFlow

                                                       54
             Solutions to enhance the Internet with an information centric model, exploiting Openflow


1.0.0 specification and, therefore, tries to develop a solution that can run on the existing
switches and API.
In order to do so, there is the need to realize a correspondence between the Carrier
Packet header, which identifies CONET traffic, and another field currently supported by
OpenFlow 1.0.0. Thus, taking into account the fields listed in Table 2.2, the following
possibilities are envisaged:

      • an MPLS tag;

      • a new CONET code for the “IP Protocol” field, together with the “Ports” fields of the IPv4 header;

      • locally administered MAC addresses.

Since it is not possible to define flows exploiting the IPv4 CONET Option, for a short term
implementation I decided to use the existing ability of OpenFlow 1.0.0 to match a tag in
one of the three ways listed above and to define flows on this basis.
To achieve these solutions, there is the need for an application that, on the edge of a
Border Node, is able to recognize CONET traffic and to map it into one of these three
fields. This operation has to be considered mandatory before allowing the ingress of an ICN
packet into an IP-CSS based on OpenFlow and could be realized in cooperation with the
Name Routing System. In fact, as pointed out in the next sections, these local tags must
also be unique, raising the need for an entity that controls their distribution.
It is also worth pointing out that, at the moment, CONET developers are not working
extensively on the realization of this “mapping application”, mainly because it would not
be part of the long term solution, where this mapping would no longer be necessary.
However, assuming a fixed position of the CONET Option within the packet header, it
would be quite simple to realize an application that matches a string of bits and carefully
maps the Network Identifier just extracted to a new, unused tag.
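The mapping application described above can be sketched as a small bookkeeping structure: given a Network Identifier extracted from the CONET Option, it returns the existing tag or allocates a new, unused one. The class name, the sequential allocation policy and the exhaustion behaviour are my assumptions, not part of CONET.

```python
# Hypothetical sketch of the Border Node "mapping application":
# it keeps a Network Identifier -> tag table and hands out unused tags.
class TagMapper:
    """Maps CONET Network Identifiers to locally unique tags."""

    def __init__(self, tag_space_size):
        self.tag_space_size = tag_space_size
        self.next_tag = 0
        self.mapping = {}            # network_identifier -> tag

    def tag_for(self, network_identifier):
        # Reuse the tag if this name has already been seen.
        if network_identifier in self.mapping:
            return self.mapping[network_identifier]
        if self.next_tag >= self.tag_space_size:
            raise RuntimeError("tag space exhausted; wait for timeouts")
        tag = self.next_tag
        self.next_tag += 1
        self.mapping[network_identifier] = tag
        return tag
```

In a real deployment the free-tag pool would be handed out by the Name Routing System and tags would be returned according to the timeout policies discussed below.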
In the case of MPLS, for example, within a Border Node it is necessary to realize a mapping
between the IPv4 CONET Option and a fake MPLS tag, used not to offer label based
forwarding, but to differentiate CCN packets, as shown in the following figure.




                      Figure 4.3: Mapping between CONET Header and MPLS tag




As highlighted by the previous figure, the CONET Option field is also kept within the IP
Option, even if the matching is realized with the MPLS tag. I suggest doing this in order to
have a structure that is “forward compatible” and will fit exactly into the long term solution,
by simply removing the MPLS matching.
       The disposition of functionalities would also be a little different from the one envisaged
for the long term.
In particular, CC Routing and CC Caching do not run on the switch, but on the same
separate server where the controller runs, as depicted in the following figure. Considering
the study in [28], it is also possible to envisage a scenario in which not all Internal Nodes
have their own cache, but in which there are some Cache Servers that could be
shared among several nodes.
These considerations can find a realization in a solution where caches run
on the separate server hosting CC Caching. Moreover, those distributed caches can
communicate with CC Policies, so that an Interest packet is routed to the
cache only if the corresponding Data Unit is present.
IP Routing, instead, is realized through virtualization using components like RouteFlow
and Quagga and could easily run on the separate server.
More precisely, Quagga and RouteFlow provide layer 3 functionalities, offering their results
to the NOX Controller, which runs on the same server and performs CC Policies, updating
the switch’s flow tables according to that information.
Lookup and Cache is thus realized through switch entries, decided by the controller and
pushed from the server to the switch.




                          Figure 4.4: Functionalities disposition in the short term







4.3.1 Analysis of MPLS tag solution

The use of the MPLS tag as matching field has some disadvantages, but could turn out to
be a good solution if it is realized in a way that answers the following questions:

   1. Is it possible to keep offering label switching with MPLS in CONET architecture?

   2. Is there enough space to realize this correspondence without tag collisions?

   3. Is it still possible to preserve a hierarchical structure that identifies in a simple way

       all the chunks of a content and enables quick cache lookup?

   4. Are tags re-used after some time? If so, what is a correct reuse interval?

   5. Who is meant to realize this matching?

A first rough implementation could be done by exploiting 31 of the 32 bits of the MPLS header
to identify CCN packets. Doing so, it is possible to leave the highest-value bit to normal
MPLS usage, which means offering the possibility for 2^31 normal MPLS flows to go
through the network and granting CONET another 2^31 tags to identify CCN traffic.
It is important to remember that the Carrier Packet header has the structure “Network Identifier
+ Chunk Number + Payload Type”, so that every chunk of a content is requested by an
Interest with the same name, namely “Network Identifier + Chunk Number”, as the content
itself.
This means that for one logical basic entity (the chunk) travelling along the network, two
different tags are needed.
Considering also that the differentiation between Interest and Data Unit is one of the first
operations performed in a node, in order to decide whether to perform cache lookup or to
forward the packet and refresh the timer, it could be useful to place that information
somewhere easy to find. Because of that, I suggest using the least significant bit to
differentiate between Interest and Data Unit, so that the identifying process in an Internal
Node becomes as simple as this:

Arrival of a packet:

      • Is this an MPLS packet?

          – If NOT, perform normal forwarding.

          – If YES, check the first bit.

      • Is it a CONET packet?

          – If NOT, perform normal MPLS forwarding.

          – If YES, check the last bit. Is it an Interest or a Data Unit?

And so on, as described in Section 4.1.
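The decision chain above amounts to a few bit tests on the 32-bit fake MPLS header. A minimal sketch follows, assuming (purely for illustration) that a set highest bit marks CONET traffic and a set lowest bit marks an Interest; the thesis fixes only which bits are used, not their polarity.

```python
# Sketch of the Internal Node decision chain over the 32-bit fake MPLS
# header. The bit polarity (MSB set = CONET, LSB set = Interest) is an
# illustrative assumption.
CONET_BIT = 1 << 31      # highest-value bit: CONET vs normal MPLS
INTEREST_BIT = 1         # least significant bit: Interest vs Data Unit

def classify(is_mpls, label):
    """Return the forwarding decision for an incoming packet."""
    if not is_mpls:
        return "normal forwarding"
    if not (label & CONET_BIT):
        return "normal MPLS forwarding"
    return "Interest" if (label & INTEREST_BIT) else "Data Unit"
```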


Going in this direction, it is possible to have 2^30 tags to be used to identify
CONET chunks of content through the network.
Considering the fact that CONET plans to have chunks of about 128-256 KB, and
supposing an average content size of about 32 MB, there are roughly 128
chunks per content, so that about 2^7 tags will be used for every content.
This results in 2^30 / 2^7 = 2^23, i.e. more than 8 million, contents that could go through the
network at the same time.
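The tag-space arithmetic above can be checked in a few lines, under the stated assumptions of 256 KB chunks and 32 MB contents:

```python
# Worked check of the MPLS tag-space arithmetic. Chunk and content sizes
# are the round figures assumed in the text.
MPLS_HEADER_BITS = 32
conet_tag_bits = MPLS_HEADER_BITS - 1     # MSB reserved for normal MPLS
chunk_id_bits = conet_tag_bits - 1        # LSB marks Interest vs Data Unit

chunk_tags = 2 ** chunk_id_bits           # chunk identifiers: 2^30
chunks_per_content = (32 * 1024) // 256   # 32 MB content / 256 KB chunks = 128
simultaneous_contents = chunk_tags // chunks_per_content   # 2^23
```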
         It is worth underlining the temporal aspect of this solution: an MPLS tag
assigned to a chunk will surely be re-used after a certain time, as happens in normal
MPLS. Moreover, it would be smart to use the MPLS tag assigned to every chunk also to
address it in the Internal Nodes’ caches, so that cache/precache lookup becomes quick and
simple.
The downside of this solution is that it is necessary to ensure that, when a tag is re-
assigned (so that it refers to a second content), there are no valid copies of the first
content in network caches, so that the correspondence between tag and content remains
effective and there are no errors in delivering contents.
A solution for this issue could be the following: when a Border Node assigns a new MPLS
tag (I will expand this point later), it sets a timeout timer, refreshed every time a packet with
that tag passes through it. Granting that this timer is longer than the Internal Nodes’ cache
timeout would be sufficient to prevent association mistakes.
In fact, assuming that Border Nodes are the entities that assign tags, in the
process of sending an Interest and receiving a Data Unit they are certainly the last to
encounter a packet with a certain tag. So, if a BN’s timer expires and it decides to re-use
the tag, there will definitely not be any copies of previous content bound to that tag, since
the cache timeout used was shorter than the Border Node’s one.
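The timer rule just described can be sketched as follows; the names and the plain-timestamp bookkeeping are assumptions, the only essential point being the invariant that the BN timeout exceeds the cache timeout:

```python
# Minimal sketch of the Border Node tag-reuse rule: a tag becomes free for
# re-assignment only when its BN timer, longer than any cache timeout,
# has expired. Class and method names are hypothetical.
class TagTimer:
    def __init__(self, bn_timeout, cache_timeout):
        # The invariant required by the text: BN timer > cache timeout.
        assert bn_timeout > cache_timeout
        self.bn_timeout = bn_timeout
        self.cache_timeout = cache_timeout
        self.last_seen = {}               # tag -> time of last packet

    def refresh(self, tag, now):
        """Called whenever a packet with this tag passes through the BN."""
        self.last_seen[tag] = now

    def reusable(self, tag, now):
        # Safe to reuse: no cached copy can still be bound to this tag.
        return now - self.last_seen.get(tag, float("-inf")) > self.bn_timeout
```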
         Going back to the question of who is meant to assign tags, my suggestion is the
Border Nodes, possibly in cooperation with the Name Routing System. I suggest
this because the Border Node is also the entity that performs routing by name, providing the
so-called “Integration approach”, as it allows portions of the network to act simply and
unaware of content networking.
Therefore, there would be no logical differences with the long-term architecture, except that
a fake MPLS tag, instead of the CONET Option, is now exploited to recognize CCN traffic.
Moreover, Border Nodes are the only network entities that communicate with the Name
Routing System, which is centralized and could talk with all nodes simultaneously,
preventing tag conflicts. In my opinion, in fact, there is the need for a centralized entity that
allows BNs to assign those MPLS tags to packets, preventing two different
nodes from assigning the same tag to two different packets.
Therefore, I suggest that the Name Routing System manages those 2^31 tags, dividing them into
sets to be assigned to Border Nodes, which have to manage them and re-use them
according to timeout policies.
If this solution is adopted, considering a network made of 32 Border Nodes, there would be
2^26 tags per BN, resulting in about 2^21 contents addressable at the same time by
every BN.
Considering that this mapping is meant to be realized as a short term solution, to be
tested in small environments, the ability to address more than 2 million contents
simultaneously is fully acceptable for short term trials on a testbed.
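The division of the tag space among Border Nodes suggested above can be sketched as a simple contiguous partition; the function name and the range policy are hypothetical.

```python
# Sketch of how the Name Routing System could split the 2^31 CONET tags
# into disjoint, equally sized sets for the Border Nodes.
def tag_range_for_bn(bn_index, n_border_nodes=32, tag_bits=31):
    """Return the half-open [start, end) tag range owned by one BN."""
    total_tags = 2 ** tag_bits
    per_bn = total_tags // n_border_nodes    # 2^26 tags each for 32 BNs
    start = bn_index * per_bn
    return start, start + per_bn
```

Because the ranges are disjoint, two Border Nodes can never assign the same tag to two different packets, which is exactly the conflict the centralized entity is meant to prevent.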







4.3.2 Analysis of IP protocol and Ports tag solution

It is worth highlighting that the use of the MPLS tag is just one possible
implementation of a short term solution, which could also be realized by exploiting another
kind of mapping between the IPv4 CONET Option and one or more fields already present
in the IP header which OpenFlow is aware of.
        In particular, another solution that I consider worth investigating, and that I will use in
my work, foresees the use of a new IP Protocol Type to identify CCN traffic, with the 32
bits of the “Ports” field used as a tag for chunks.
The basic idea is to exploit one of the unassigned codes for the Protocol Type, in order to
avoid conflicts with TCP or UDP datagrams and to easily identify CCN traffic. Once that
distinction is made, the 32 bits originally assigned to the Ports field lose their original meaning
and can be exploited (together or separately) as a tag for Interest and Data chunks.
It is necessary to underline that all the considerations about tag re-use, conflicts and
dissemination are still valid and have to be solved in a smart way in this solution as well as
in the MPLS one. Moreover, this solution could be more efficient in solving tag conflicts,
since it foresees the complete use of the 32 bits originally assigned to layer 4 ports.
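A sketch of this encoding follows. The choice of protocol number 253 (one of the values reserved for experimentation by RFC 3692) is my assumption; the thesis only requires an unassigned code. The 32-bit tag is split across the two former 16-bit port fields:

```python
# Sketch of the short-term tagging scheme: an experimental IP protocol
# number marks CCN traffic and the 32 bits of the former source and
# destination port fields carry the chunk tag. Value 253 is an assumption.
CONET_PROTO = 253   # reserved for experimentation and testing (RFC 3692)

def encode_ports_tag(tag32):
    """Split a 32-bit chunk tag into the two 16-bit 'port' fields."""
    assert 0 <= tag32 < 2 ** 32
    src_port = (tag32 >> 16) & 0xFFFF
    dst_port = tag32 & 0xFFFF
    return src_port, dst_port

def decode_ports_tag(src_port, dst_port):
    """Reassemble the 32-bit chunk tag from the two 'port' fields."""
    return (src_port << 16) | dst_port
```

Since OpenFlow 1.0.0 can match the IP protocol field and both port fields, flows for individual chunks can be defined on these values without touching the CONET Option itself.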




                Figure 4.5: Mapping between CONET Header and IP Protocol and ports



Because of that, there are 2^31 tags available, which would result, according to the previous
assumptions, in 2^24 contents available at the same time in the network. Thus, compared
to the MPLS solution, this one offers a doubled number of addressable contents,
resulting in an even more scalable and acceptable implementation of a CONET
scenario.




       The downside of this solution is that layer 4 header fields would be exploited in
order to distinguish and separate CCN traffic. According to the “Integration approach”,
instead, the recognition of CONET packets among normal traffic is meant to be an
enhancement of the existing layer 3, not involving the analysis of upper layers’ headers.


4.3.3 Analysis of MAC address tag solution
Instead of this layer-mixing solution, another suggestion could be the use of MAC
addresses to realize such a mapping. This solution is still under investigation, but a hint
could be to exploit the so-called “locally administered addresses”, leaving 46 freely usable
bits available as CCN chunk tags (the unicast and locally-administered flag bits of the first
octet being fixed).
Although this solution could offer an even larger number of addressable contents, it still
needs further analysis, especially regarding packet forwarding, before being successfully
developed.
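A possible packing of a chunk tag into a locally administered unicast MAC address is sketched below; the byte layout is an assumption. The two flag bits of the first octet are fixed (locally administered set, multicast clear), leaving 46 bits for the tag:

```python
# Sketch of tagging via locally administered MAC addresses. The packing
# layout (top 6 tag bits in the first octet, flag bits fixed) is an
# assumption for illustration.
def tag_to_mac(tag46):
    """Pack a 46-bit tag into a locally administered unicast MAC."""
    assert 0 <= tag46 < 2 ** 46
    first = (((tag46 >> 40) & 0x3F) << 2) | 0x02   # U/L = 1, I/G = 0
    rest = [(tag46 >> s) & 0xFF for s in (32, 24, 16, 8, 0)]
    return ":".join("%02x" % o for o in [first] + rest)

def mac_to_tag(mac):
    """Recover the 46-bit tag from such a MAC address."""
    octets = [int(x, 16) for x in mac.split(":")]
    tag = octets[0] >> 2                           # drop the two flag bits
    for o in octets[1:]:
        tag = (tag << 8) | o
    return tag
```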








Chapter 5

The implementation of a CONET
Solution



In this chapter I will outline in detail all the operations and procedures that I followed in
order to realize a virtualized CONET Sub System.
Firstly, I will describe the scenario I decided to realize and the assumptions on which it is
based.
Then, after giving the instructions to create such a testbed in a virtualized environment, I
will explore in greater detail the control logic of the OpenFlow switch I realized during this
work. Obviously, the implementation solutions described in the next sections take into account
all the analysis and considerations previously given and rest on them for their realization.
Afterwards, I will outline the operations needed to extend OpenFlow’s controller in order to
make it able to recognize CONET packets carrying a new IP protocol field. In fact, I
chose to exploit this approach, since it was the one that presented the most advantages and
scalability properties.
Therefore, I will explain how to extend the NOX controller and I will show how to make it able
to recognize CONET packets and properly install new flow entries when it receives one.
After that, I will add a new virtual machine to my scenario, making it act as a Cache Server,
able to advertise the contents it has stored or deleted and to provide Data Units back
in response to Interest packets.
In the end, I will focus my attention on the communication between the Internal Node and
this Cache Server, responsible for storing and providing chunks of contents.
In particular, an example of communication between these two entities will be presented,
showing how to properly reroute a CONET packet towards a closer Cache Server able to
distribute a desired content.








5.1 Basic settings
During this work, I initially started experimenting with the OFELIA Control Framework,
aligning all the specifications and versions of the programs and tools I was going to
exploit with the ones used within the project. Because of that, in the following I will refer to
the OpenFlow 1.0.0 and NOX 0.9.0 (Zaku) specifications.
After some problems with the FlowVisor in the i2CAT island in Barcelona, since
solved, I decided to develop my work locally on my machine, exploiting a virtualization tool.
In fact, keeping the same specifications used in the project makes it simple to
migrate my experiment to the OFELIA facilities for more extensive and severe tests.
         Therefore, I developed my work on my Ubuntu 11.04 machine using Virtualbox [29],
a general-purpose full virtualizer for x86 hardware, targeted at server, desktop and
embedded use.
Virtualbox makes it easy to create complete virtual machines running the desired Operating
System and can assign them a variable amount of resources, such as RAM or hard disk
capacity. Moreover, it can provide virtual machines with up to four network interfaces, so
that it is also possible to create machines that act as a switch.




                                 Figure 5.1: Screenshot of Virtualbox usage


This property makes the tool really versatile and, because it made it possible to create a VM
acting as a switch, it has been extensively used during this work. Figure 5.1 shows an


example of usage, with the Virtualbox interface and, in the background, two different Linux
virtual machines, both running Ubuntu 10.04 LTS.
        Before proceeding with the description of the network topology I realized, it is worth
pointing out the way in which Virtualbox manages networks and, in particular, how it
handles a connection between two virtual machines. In fact, Virtualbox offers the
possibility to connect a network interface to the Internet as well as to an internal network,
allowing offline experiments.
Therefore, putting two network interfaces of different machines on the same internal
network makes them communicate as if they were on the same Ethernet network,
so that, when valid addresses are assigned, they can easily ping each other.
        In my scenario, I wanted to connect two virtual machines, acting as simple peers, to
another virtual machine acting as an OpenFlow switch. My first goal was to make the two
peers communicate, putting them on two separate Ethernet networks connected by the
OpenFlow switch, while assigning them two addresses belonging to the same IP network.
During these preliminary steps, I noticed that there were some problems with the ARP
protocol, as if something were blocking traffic between the two peers and the switch. In fact,
ARP requests were not handled and forwarded properly by the switch, as if they had
already passed through a switch. Thus, I discovered that, when Virtualbox creates an
internal network between two machines, it handles all the packets going through that
network acting as a switch, so that the two machines are not connected by a
cable, but are seen as two machines in the same switched network.
Since I wanted to perform these switching operations by controlling the OpenFlow switch
placed between the two peers, it has been necessary to modify Virtualbox’s behaviour,
making it act as a simple repeater between the two machines.
This adjustment has been possible by exploiting VDE [30], a Virtual Distributed Ethernet,
which is an Ethernet compliant virtual network that can be spawned over a set of physical
computers over the Internet.
After some difficulties, I succeeded in integrating VDE with Virtualbox [31] through the
following instructions, which need to be launched on my machine:

$ cd vde/vde-2/
$ vde_switch -x -d -s /tmp/my_vswitch1 -M /tmp/switch_mgmt1
$ vde_switch -x -d -s /tmp/my_vswitch2 -M /tmp/switch_mgmt2

In fact, after having installed VDE, the above instructions are necessary to create two
virtual networks between the OpenFlow switch and the peers. It is worth highlighting the -x
option, which, according to the specifications given in [32], is responsible for setting the
behaviour of the virtual network to that of a transparent hub.
        After doing that, in order to integrate Virtualbox, it is necessary to pass the just
created virtual network as the network to which the virtual machines’ interfaces are
connected.
That can simply be done by selecting “Generic Driver” as network type and then
specifying the use of VDE and the name of the proper virtual network, as shown in the
following figure.








                             Figure 5.2: Setting network options through VDE

Doing so for all the virtual machines that are to exchange traffic by means of the
OpenFlow switch should solve all the connectivity issues, leaving the switch free
to perform all the routing and switching operations allowed by the OpenFlow protocol.







5.2 The network topology
In this work, all the simulations and experiments have been conducted in a network
made up of virtual machines, arranged and connected as in the topology depicted in the
following figure.




                        Figure 5.3: Network topology and entities of the experiments



In particular, it is worth outlining the operations and logical functions that will be performed
by each entity:

PC1 AND PC2: are two simple peers that exchange different kinds of traffic in order to test
     the switch behaviour. They reside within an IP-CSS and, in the following, they will
     also generate CONET packets, using the tag mechanism described in Section
     4.3.2.
     Even if lacking some functionalities, they can be considered Border Nodes as far
     as operations oriented towards the CSS are concerned.







OPENFLOW SWITCH: is the basic entity of the network and is instructed by the
     controller through the SSL channel. It represents the Data plane and the forwarding
     operations of the IP-CSS Internal Node.

NOX CONTROLLER: is the brain of the whole network and constitutes the control logic of
     the Internal Node. It is responsible for all the operations performed by the switch. In
     the following it will be extensively described, especially as regards the
     extensions needed to recognize and handle CONET traffic.

CACHE SERVER: provides the caching functionalities for the Internal Nodes. It is an
     external entity, so that it can be shared among several nodes. It is responsible for
     storing and providing contents according to a policy, as well as for informing the
     controller about the contents present in the cache.

After this brief description of the most important operations performed by the network entities, I
want to outline in detail their disposition and IP addresses, in order to clarify the static
situation I worked on in my experiments.
As shown in Figure 5.4, the OpenFlow switch is connected to all the other entities by
means of its four network devices. Since this virtual machine is a switch and forwards
packets at layer 2, there is no need to assign addresses to the interfaces used for packet
forwarding.




                         Figure 5.4: Network topology and addressing schemes






On the contrary, the interface eth0 is responsible for managing the communication between
the switch and the external NOX controller; it must be able to handle an SSL connection
on port 6633 and, therefore, needs an IP address.
Although there is no specific need to separate control traffic, namely the messages
exchanged by the controller and the switch, from normal data packets, I decided to
separate them and chose to use a different network, namely 10.0.0.0/24.
I also chose to place the communications between the Cache Server and the controller in a
different network, in order to isolate the messages sent from the cache to the controller,
whose purpose is to advertise that new contents are present in the cache or that a content
has been deleted due to timeout policies.
It is worth remembering that caches can be placed within the Internal Node or can be
distributed and even shared among different nodes.
Thus, in my scenario, contents are stored externally and all the datagrams carrying
information about these communication mechanisms have to pass through the network.
While the “control messages” between the Cache Server and the controller travel on a
separate network, the interface through which Interest and Data packets are sent and
received belongs to the addressing space of the whole IP-CSS, which is 10.0.1.0/24. It is
possible to see that this IP network is the same for all the other machines, so that
PC1, PC2 and the Cache Server all have an IP address belonging to that network: therefore
these machines are placed in the same layer 3 network.
As far as layer 2 is concerned, instead, the two peers and the Cache Server are placed in
three different Virtual Distributed Ethernet networks and are unreachable to each
other without forwarding mechanisms.
         This solution was not chosen randomly; on the contrary, it is the one that allows the
OpenFlow switch to forward packets from/to all network entities without requiring layer 3
routing. In fact, if the machines were placed within the same Ethernet domain, they would
communicate directly, thanks to the ARP protocol, without the need for the switch.
Otherwise, if they were on different layer 2 domains and also on different IP networks, it
would be impossible to send packets from one peer to the other without using layer 3
routing mechanisms, which, in this work, are not implemented on top of the OpenFlow
switch.



5.3 The Open vSwitch
As pointed out before, the key element of the experiments is the Internal Node, which is
made up of the virtual machine running the NOX controller and the OpenFlow switch, which
has to be created starting from the virtual machine with four network interfaces.
In fact, until now this machine cannot act as a switch, being a simple computer, and
needs to be converted into one.
In order to do so, at the moment there are two different OpenFlow 1.0.0 switch
implementations available. One is Stanford’s software reference design and the other is
the Open vSwitch [33] implementation. While the former is a user-space implementation, the
latter runs in kernel-space and, because of this, offers better forwarding performance and
has been used in this work.
         Open vSwitch is a production quality open source software switch designed to be
used in virtualized server environments. Open vSwitch can forward traffic
between different VMs on the same physical host and also allows traffic to be forwarded


between VMs and the physical network. It supports standard management interfaces and
is open to programmatic extension and control using OpenFlow.
Therefore, it is the ideal tool to realize an OpenFlow compliant switch and, in the following,
I will describe the procedures I followed to successfully install Open vSwitch and the
simple script I created in order to obtain a working switch and a datapath able to connect
with the NOX controller.
After having solved all the needed dependencies, I used the following instructions,
necessary to install version 1.1.0 of Open vSwitch:

$   wget http://openvswitch.org/releases/openvswitch-1.1.0pre2.tar.gz
$   tar zxvf openvswitch-1.1.0pre2.tar.gz
$   cd openvswitch-1.1.0pre2
$   ./boot.sh
$   ./configure --with-l26=/lib/modules/`uname -r`/build
$   make
$   sudo make install

After that, I loaded the kernel modules with:

$ sudo insmod datapath/linux-2.6/openvswitch_mod.ko

It is possible to verify that the module has been loaded properly simply by typing:

$ lsmod | grep openvswitch_mod
  openvswitch_mod        61304 0

At this point, it has been necessary to make the OVS kernel module load automatically at
every boot of the system.

$   sudo   mkdir /lib/modules/`uname -r`/kernel/net/ovs
$   sudo   cp datapath/linux-2.6/*.ko /lib/modules/`uname -r`/kernel/net/ovs/
$   sudo   depmod -a
$   sudo   modprobe openvswitch_mod

Afterwards, it has been necessary to edit the /etc/modules file as follows and then to
reboot the system:

$ cd /etc
$ sudo vi modules

At the end of the file, I inserted the kernel module as follows, and then saved the file.

openvswitch_mod

The previous steps were necessary for a correct installation and configuration of the switch.
Once installed, Open vSwitch is quite simple to use and, essentially, requires creating a


datapath, which will be the identifier of the switch in an OpenFlow network, and configuring it.
This can be done by adding, with the ovs-dpctl add-if command, all the network
interfaces through which traffic is meant to pass, leaving out of this list the interface
connected to the controller. In fact, this one is handled by the switch when configuring the
connection with the controller through the ovs-openflowd command.
The following are the instructions I use when I want to configure the switch in the network
previously described; since they are always the same, I put them into a script, in order to
speed up operations.

$ cd openvswitch-1.1.0pre2/
$ sudo ./utilities/ovs-dpctl add-dp dp0
$ sudo ./utilities/ovs-dpctl add-if dp0 eth3
$ sudo ./utilities/ovs-dpctl add-if dp0 eth6
$ sudo ./utilities/ovs-dpctl add-if dp0 eth7
$ sudo ./utilities/ovs-openflowd dp0 --datapath-id=0000000000000001 \
  tcp:10.0.0.10:6633 --out-of-band

It is also worth highlighting another useful Open vSwitch command, which allows access to
the switch’s flow table and prints all the inserted entries on the screen.
This command has been extensively used in this work in order to verify the correct
behaviour of the controller.

$ sudo ./utilities/ovs-dpctl dump-flows dp0




5.4 NOX settings and usage
After the previous steps, the switch is up and running and tries, repeatedly, to connect to the
controller located at the IP address specified above.
In this section I will deal with a first implementation of the NOX controller, explaining how
to install NOX [34] and then describing how I realized a control logic that makes the
OpenFlow switch act as a learning switch and that is able to recognize and handle normal
IP, ICMP, TCP and UDP traffic.
First of all, it has been necessary to solve the dependencies needed by NOX, updating the
repositories and letting apt-get resolve them automatically:

$   cd /etc/apt/sources.list.d
$   sudo wget http://openflowswitch.org/downloads/debian/nox.list
$   sudo apt-get update
$   sudo apt-get install nox-dependencies

After that I successfully downloaded, compiled and installed the stable branch of NOX by
typing:

$ git clone git://noxrepo.org/nox


$   cd nox
$   ./boot.sh
$   mkdir build/
$   cd build/
$   ../configure
$   make -j 5

After these steps, NOX is successfully installed and can launch all the basic components
provided within the package, as well as the ones created by users. To
launch the controller, it is sufficient to invoke the following command in the build/src
directory:
                $ ./nox_core -v -i ptcp:6633 name_of_your_controller

It is worth noting that the -v option sets NOX to verbose behaviour, while
ptcp:6633 instructs NOX and the controller to listen for incoming connections from the
OpenFlow switches on port 6633, which is the OpenFlow protocol port and which has
also been specified in the Open vSwitch settings.
Obviously, name_of_your_controller has to be replaced by the name of my controller.
I chose to develop my control logic in Python and, in order to make it work properly, I had
to follow the basic structure needed for pure Python components and all the steps listed in
[35].
        After having written my controller, that will be described in detail in the following, I
had to add my file, named “controllore.py”, to src/nox/coreapps/ examples/. Then I had to
add my Python file to NOX_RUNTIMEFILES in src/nox/apps/examples/Makefile.am and to
update the src/nox/apps/examples/meta.json file to include my component.
While doing this operation, It is important to be sure that “python” is a dependency, as well
as eventual other components exploited by the controller.
After that it has been necessary to recompile NOX, following the same three steps listed
above.
        As said before, there is a basic structure that needs to be respected in order to write
a working controller. The following is the one I used and within which I implemented all the
desired operations and functionalities.
________________________________________________________________________

from nox.lib.core import *

class controllore(Component):

    def __init__(self, ctxt):
        Component.__init__(self, ctxt)

    def install(self):
        # register for events here
        pass

    def getInterface(self):
        return str(controllore)


def getFactory():
    class Factory:
        def instance(self, ctxt):
            return controllore(ctxt)

    return Factory()
______________________________________________________________________________________

                      Listing 5.1: Mandatory structure for Python NOX components

As one can guess, the above code does not perform any action and represents only
the structure within which operations have to be implemented.
In particular, within the install() method, NOX events have to be registered,
according to the logic described in Section 2.5, and handled by their callback functions.

5.4.1 The first controller implementation
As previously said, the first implementation of my control logic does not deal with CONET
packets, but is just a learning layer-2 switch, able to recognize ARP, IP, ICMP, TCP and UDP
packets, since these are the default packet types that NOX is able to manage.
      In particular, within the install() function, I register three basic events, each one
handled by the corresponding callback function, as shown in the following snippet:

________________________________________________________________________
def install(self):
    self.register_for_datapath_join(self.datapath_join_callback)
    self.register_for_datapath_leave(self.datapath_leave_callback)
    self.register_for_packet_in(self.packet_in_callback)
______________________________________________________________________________________

                          Listing 5.2: First realization of the install() function

Therefore, the three events listed above are the only ones handled by my control logic at
this point. Two of them are fairly similar, as they represent the connection (the
datapath_join event) and the disconnection (the datapath_leave event) of a switch
to/from the controller.
These events are handled by their callback functions that, essentially, print out what
happened, notifying that a switch has joined or left the controller.
       The packet_in event, instead, deserves more attention, since it occurs when an
incoming packet does not match any entry in the flow table and is sent to the controller
in order to be managed.
Thus, the packet_in_callback() function that I created is responsible for recognizing
incoming packets and has the task of inserting into the switch's flow table an appropriate flow
entry, so that subsequent packets of that type will be handled and properly forwarded by the
switch, without asking the controller.
In the following snippet, in particular, it is possible to see the list of parameters that must
be passed to the function and the management of ARP packets. In fact, after checking
the reason why the packet was sent to the controller, the learn_l2() function is invoked,
which is responsible for learning the layer-2 topology of the network and which will be
described in the following.
After that, the callback reads the fields of the received packet and, through an if
clause, handles the different situations. In particular, the attrs{} dictionary is created and
populated with the most relevant fields, such as the source and destination addresses.
This dictionary plays a crucial role, as it is the collection of attributes that will be inserted in
the flow entry and, therefore, against which the following packets will be filtered and
forwarded.
Because of that, for ARP packets, the only characteristics that I inserted in the attrs{}
dictionary are the addresses, the packet type and the switch's input port.
________________________________________________________________________
def packet_in_callback(self, dp_id, inport, ofp_reason,
                       total_frame_len, buffer_id, packet):
    if ofp_reason == openflow.OFPR_NO_MATCH:
        print "No matching rule in the flow table."

        learn_l2(dp_id, inport, packet)

        ethr_pkt = packet.find('ethernet')
        if ethr_pkt.type == ethernet.ARP_TYPE:
            m_arp = ethr_pkt.find('arp')
            src_ip = ip_to_str(m_arp.protosrc)
            dst_ip = ip_to_str(m_arp.protodst)
            attrs = {}
            attrs[core.IN_PORT] = inport
            attrs[core.DL_TYPE] = ethr_pkt.type
            attrs[core.DL_SRC] = ethr_pkt.src
            attrs[core.DL_DST] = ethr_pkt.dst
            idle_timeout = openflow.OFP_FLOW_PERMANENT
            hard_timeout = openflow.OFP_FLOW_PERMANENT
            priority = 5

            forward_l2(self, dp_id, inport, packet, buffer_id,
                       attrs, idle_timeout, hard_timeout, priority)
______________________________________________________________________________
                           Listing 5.3: Snippet of packet_in_callback() function


Moreover, I set the priority of this entry to the minimum value, so that if there were other
entries with the same attributes, they would be executed instead of the one intended to
manage only ARP packets.
At the end of this snippet, the forward_l2() function is invoked, which takes as input the
attributes previously set and, according to the results of the learning function, installs the
flow entry.
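
To make the priority semantics concrete, the following toy lookup (a sketch, not the actual OpenFlow implementation; the entry and field layout is hypothetical) shows that among all entries matching a packet, the highest-priority one wins, so a priority-5 ARP entry yields to any more specific, higher-priority rule:

```python
def select_entry(entries, packet):
    """Toy flow-table lookup: return the highest-priority entry matching the packet."""
    hits = [e for e in entries
            if all(packet.get(k) == v for k, v in e["match"].items())]
    return max(hits, key=lambda e: e["priority"]) if hits else None

# Hypothetical table: a generic ARP entry (priority 5) and a more specific rule.
table = [
    {"match": {"dl_type": 0x0806}, "priority": 5, "action": "arp-forward"},
    {"match": {"dl_type": 0x0806, "in_port": 1}, "priority": 101, "action": "special"},
]

# A packet matching both entries is handled by the higher-priority one.
pkt = {"dl_type": 0x0806, "in_port": 1}
assert select_entry(table, pkt)["action"] == "special"
```
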




In fact, the controller is designed to insert a flow entry according to the attributes passed
only if it has previously learned the port through which the packet has to be sent out in
order to be properly forwarded towards its destination.
Because of the importance of the learn_l2() and forward_l2() functions in the
management of the switch, it is worth showing them and describing how I realized the
learning and forwarding process.
The first function, in particular, has the purpose of making the controller learn where the
machines connected to the switch are located and of binding their MAC addresses to the
physical port of the switch they are connected to.
This association is realized by filling the global nested dictionary arp_table, which
uses as keys the datapath_ids of all the switches that connect to the
controller. The value associated to each key is in turn another dictionary, which binds each
MAC address to a port, saving the former as key and the latter as value.
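
The nested mapping just described can be sketched in isolation as follows (datapath ids, MAC strings and port numbers are purely illustrative):

```python
# arp_table maps datapath_id -> {mac_address: input_port}
arp_table = {}

def learn(dp_id, mac, port):
    """Record (or update) on which port of switch dp_id the MAC was seen."""
    arp_table.setdefault(dp_id, {})[mac] = port

def lookup(dp_id, mac):
    """Return the learned output port, or None if the MAC is unknown."""
    return arp_table.get(dp_id, {}).get(mac)

# Illustrative values only
learn(1, "00:11:22:33:44:55", 3)
assert lookup(1, "00:11:22:33:44:55") == 3
assert lookup(1, "aa:bb:cc:dd:ee:ff") is None
```
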
________________________________________________________________________
def learn_l2(dp_id, inport, packet):
    global arp_table
    srcaddr = packet.src.tostring()
    if arp_table.has_key(dp_id):
        if arp_table[dp_id].has_key(srcaddr):
            dst = arp_table[dp_id][srcaddr]
            if inport != arp_table[dp_id][srcaddr]:
                arp_table[dp_id][srcaddr] = inport
                print 'MAC has moved from ' + str(dst) + ' to ' + str(inport)
        else:
            arp_table[dp_id][srcaddr] = inport
            print 'learned MAC', mac_to_str(packet.src), \
                'on datapath', dp_id, 'and port', inport
    else:
        arp_table[dp_id] = {srcaddr: inport}
        print 'learned MAC', mac_to_str(packet.src), \
            'on datapath', dp_id, 'and port', inport
        print arp_table
________________________________________________________________________________
                                       Listing 5.4: learn_l2() function


The function initially checks whether the key dp_id is already in the dictionary and, if this is
the first packet coming from that switch, stores the datapath value as well as the
address/source-port pair. Instead, if other packets had previously been received from that
switch, the second if clause checks whether the source MAC address of the incoming
packet has already been stored or not.
If the address was already in the dictionary, the function checks whether the associated port
has remained the same and, if needed, updates the source port, notifying that the machine
with that address has changed its position.




Instead, if it is the first time that a packet with that source address reaches the controller,
the function updates the dictionary, adding both the MAC address and the port and printing
out a message that reports the learning operation.
Therefore, this function essentially populates a dictionary which, being a global variable,
is visible in all functions, so that it can be directly exploited by the functions that realize
the switching operations by inserting flow entries.
The function in charge of doing that is the forward_l2() method, which is invoked in the
callback after the analysis of a packet and takes as input all the parameters used to
describe and filter packets through flow entries.
In detail, this function checks whether the destination MAC address of the packet is already in
the arp_table dictionary and, depending on this, inserts a flow entry or floods the packet
out of all the switch's ports, except for the input one. In fact, if the MAC address is not
there, it means that learn_l2() has not yet seen a packet from that source and,
therefore, does not know on which port it could be connected. Therefore, in order to
discover the position of this entity, it floods the packet. This function performs flooding also
if the port bound to the destination is the same from which the packet arrived, printing
out a warning. In fact, this would happen only if an error occurs or if the arp_table
dictionary is not up to date, so that it is necessary to execute the learn_l2() function again.
Instead, if the MAC address is known and bound to a correct port, this function executes
the self.install_datapath_flow() command, which is responsible for inserting an entry in
the switch's flow table. This command takes as input all the parameters previously
described and, notably, the actions parameter, which indicates the kind of actions that have to
be performed on the packets matching this flow's characteristics.
In this function, the only action performed, and therefore passed as input, is the forwarding
of the packet, as shown in the following code.
However, it is possible to instruct the switch to perform more than one of the actions
previously listed in 2.4.1.
________________________________________________________________________

def forward_l2(self, dp_id, inport, packet, buffer_id, attrs,
               idle_timeout, hard_timeout, priority):
    global arp_table
    dstaddr = packet.dst.tostring()
    if arp_table.has_key(dp_id):
        if arp_table[dp_id].has_key(dstaddr):
            outport = arp_table[dp_id][dstaddr]
            if outport == inport:
                print "WARNING! learned port = inport"
                self.send_openflow(dp_id, buffer_id, packet,
                                   openflow.OFPP_FLOOD, inport)
            else:
                # We know the outport, set up a flow
                actions = [[openflow.OFPAT_OUTPUT, [0, outport]]]

                self.install_datapath_flow(dp_id, attrs,
                    idle_timeout, hard_timeout, actions, None,
                    priority, inport, None)
        else:
            # haven't learned destination MAC. Flood
            print "FLOOD!"
            self.send_openflow(dp_id, buffer_id, packet,
                               openflow.OFPP_FLOOD, inport)
________________________________________________________________________________
                                     Listing 5.5: forward_l2() function


Thanks to the two functions just described, it is possible to learn the position of the other
network entities and to perform forwarding operations, according to the policies decided
when populating the attrs{} dictionary and the priority variable.
        In the first controller implementation, in particular, in addition to ARP packets,
ICMP, TCP and UDP traffic is also handled, by code similar to the one in Listing 5.3. In fact,
as it is possible to see in the following snippet of the packet_in_callback() function, input
packets are inspected and, if they are recognized as valid IPv4 datagrams, they are
classified by means of their protocol field.
It is worth noting that this snippet follows the previous one and, because of that, the
learn_l2() function has already been invoked, so that the destination address has already
been bound to an output port. Thus, in order to manage ICMP packets, an entry with
maximum priority and layer-3 fields is inserted into the flow table by means of the
forward_l2() method.
_______________________________________________________________________

elif ethr_pkt.type == 0x0800:

    if ip_pkt.protocol == 1:
        attrs = {}
        attrs[core.IN_PORT] = inport
        attrs[core.DL_TYPE] = ethr_pkt.type
        attrs[core.NW_SRC] = ip_pkt.srcip
        attrs[core.NW_DST] = ip_pkt.dstip
        attrs[core.NW_PROTO] = ip_pkt.protocol
        attrs[core.NW_TOS] = ip_pkt.tos
        idle_timeout = openflow.OFP_FLOW_PERMANENT
        hard_timeout = openflow.OFP_FLOW_PERMANENT
        priority = 101
        forward_l2(self, dp_id, inport, packet, buffer_id,
                   attrs, idle_timeout, hard_timeout, priority)
_____________________________________________________________________________
                                    Listing 5.6: Handling of ICMP traffic








5.4.2 The handling of ICMP messages
The previous code has been tested and works properly, resulting in the following
described generation of messages.
After having assigned network addresses to all machines as discussed before, I started
the Open vSwitch and my NOX controller, that connected without errors nor warnings.
Then, within PC1 console, I started the ping 10.0.1.129 instruction, in order to generate
ICMP traffic towards PC2.
The expected behaviour was that firstly the exchange of ARP messages started, in order
to resolve the MAC address associated to the IP one I was pinging. During these
messages, the controller was meant to learn the position of the two PC1 and PC2
machines, so that, when the actual generation of ICMP traffic started, it already knew the
output port and could handle that situation. In fact, if the output port was known, the first
two ICMP messages should have been sent to the controller that would install proper
entries within the switch’s flow table. Then, after the installation of these entries, the ping
should work without invoking the controller.
It is worth recalling that OpenFlow rules are one-way, so that, in order to perform a
bidirectional communication, such as ping, two flow entries are necessary and, therefore,
packet_in_callback() function has to run at least two times, identifying ICMP traffic.
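
A minimal sketch of this point (field names and PC1's address are hypothetical, standing in for the NOX core.* match keys): a bidirectional ICMP exchange needs one match per direction, obtained by swapping source and destination.

```python
def make_match(src_ip, dst_ip):
    """Build a one-way match on the IP source/destination pair."""
    return {"nw_src": src_ip, "nw_dst": dst_ip}

# A ping between PC1 (hypothetical address) and PC2 needs both one-way entries:
forward = make_match("10.0.1.1", "10.0.1.129")    # echo request direction
reverse = make_match("10.0.1.129", "10.0.1.1")    # echo reply direction

# The two matches differ, so a single flow entry cannot cover both directions.
assert forward != reverse
assert forward["nw_src"] == reverse["nw_dst"]
```
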




                                 Figure 5.5: Handling of ICMP messages

The previous screenshot has been taken during the operations described above and
confirms the expected behaviour.
In the background, in fact, it is possible to see the controller console, which prints out
warning messages, notifying that a table miss has occurred and, therefore, that the controller
is handling the situation. In the console, it is possible to see the messages about three
ICMP packets that have been sent to the controller. The first one is flooded out of all the
ports, since the controller has not yet learned the location of all the MAC addresses in the
network.
After this one, however, it is possible to see the two expected ICMP packets: the first
comes from PC2 and is the answer to the message previously flooded, while the second
represents the following ping and is forwarded properly, through the installation of the
corresponding flow entry.
In fact, after these packets, there are no more messages in the controller console, indicating
that no further packets are sent to the control logic as a result of a table miss. Therefore, the
two flow entries have been successfully installed and all the following ping messages
match them exactly.
The result of all these operations is that in the PC1 console, after a while, the ping starts to
work, receiving ICMP answers back with a decreasing round trip time.

As mentioned before, the first implementation of the controller also handles TCP and UDP
traffic, showing exactly the same behaviour it has when managing ICMP packets. In
fact, in order to be consistent with the other packet types, and because of the simplicity and
efficiency of the solution exposed for ICMP traffic, both UDP and TCP packets are handled
by a similar if clause and, after having filled the attrs{} dictionary, the callback invokes
the forward_l2() method.



5.5 NOX enhancements to support the CONET
architecture
The first implementation of the controller, as shown before, works properly and manages
all the standard packets defined within NOX and supported by OpenFlow. It thus offers the
logical structure necessary to handle packets, but it does not support CONET traffic, which
is, instead, the goal of this work.
Therefore, with the purpose of supporting this kind of traffic within a node, I extended
NOX, in order to make it able to recognize CONET packets and filter on them.
Since the scenario I am presenting is an IP-CSS that uses OpenFlow, in order to identify
CONET traffic there is the need to use one of the three solutions described in Section 4.3.
In particular, I chose the IP protocol and Ports tag solution, since it is the most
scalable one and could be implemented by defining a new IP protocol type reserved to
CONET traffic, while exploiting the UDP packet structure to carry the tag information.
Since the packet fields representing the ports can be accessed only if the IP
protocol type is UDP, in order to realize CONET support in NOX I had to define a new
CONET class with the same structure as the UDP one, but associated to the IP protocol
number 199, currently unassigned, which I chose to identify CCN traffic.
Therefore, I looked into nox/src/nox/lib/packet/udp.py in order to find the UDP protocol
definition and representation, which are shown in the following snippet, taken from the
udp.py file.








________________________________________________________________________
################################################################################
#                            UDP Header Format                                 #
#                                                                              #
#    0      7 8     15 16    23 24    31                                       #
#   +--------+--------+--------+--------+                                      #
#   |   Source Port   | Destination Port|                                      #
#   +--------+--------+--------+--------+                                      #
#   |     Length      |    Checksum     |                                      #
#   +--------+--------+--------+--------+                                      #
#   |          data octets ...                                                 #
#   +---------------- ...                                                      #
################################################################################

class udp(packet_base):
    "UDP packet struct"
    MIN_LEN = 8

    def __init__(self, arr=None, prev=None):
        self.prev = prev
        if type(arr) == type(''):
            arr = array('B', arr)
        self.srcport = 0
        self.dstport = 0
        self.len = 8
        self.csum = 0
        self.payload = ''
________________________________________________________________________________
                               Listing 5.7: UDP packets format and definition

Since I decided to use the source and destination port fields as a tag for CONET packets, the
best way to implement this solution was to create a new class, called “conet.py”, with the
same structure as the “udp.py” one, so that the source and destination ports could be
exploited to filter traffic in the switch's flow table.
In order not to alter NOX source code too much, I kept the names srcport and dstport for
the conet class subfields but, as I already said, these two fields lose their original meaning,
becoming 32 plain bits used as the tag of CONET packets. Thus, in the same folder, I
created a new “conet.py” file defining the CONET_PROTOCOL type, which follows the exact
structure of the udp class. After that, I had to solve the dependencies needed to
successfully use my new class and, in particular, I needed to update the
nox/src/nox/lib/packet/ipv4.py file, which defines the IPv4 layer as shown below, in order to
make it support this new layer-4 protocol as well.
To do that, I added the import line and the protocol definition, as well as the last two
lines in the following snippet, needed to correctly access the CONET header and fields.
Since my class derives from the udp one, for all the modifications in NOX files I followed
the same structure in which the udp-related commands are given.




________________________________________________________________________

from tcp import *
from udp import *
from icmp import *
from conet import *

class ipv4(packet_base):
    "IP packet struct"
    MIN_LEN = 20
    IPv4 = 4
    ICMP_PROTOCOL = 1
    TCP_PROTOCOL = 6
    UDP_PROTOCOL = 17
    CONET_PROTOCOL = 199
    .
    .
    .
    length = self.iplen
    if length > dlen:
        length = dlen  # Clamp to what we've got
    if self.protocol == ipv4.UDP_PROTOCOL:
        self.next = udp(arr=self.arr[self.hl*4:length], prev=self)
    elif self.protocol == ipv4.TCP_PROTOCOL:
        self.next = tcp(arr=self.arr[self.hl*4:length], prev=self)
    elif self.protocol == ipv4.ICMP_PROTOCOL:
        self.next = icmp(arr=self.arr[self.hl*4:length], prev=self)
    elif self.protocol == ipv4.CONET_PROTOCOL:
        self.next = conet(arr=self.arr[self.hl*4:length], prev=self)
________________________________________________________________________________
                        Listing 5.8: CONET support within the IP packet structure
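
The port-field reuse just described can be sketched with plain struct packing (standard library only; the tag and length values are illustrative), mirroring the four 16-bit fields of the UDP header shown above:

```python
import struct

def pack_conet_header(tag, length, checksum=0):
    """Pack a 32-bit tag into the two 16-bit 'port' fields of a UDP-like header."""
    first = (tag >> 16) & 0xFFFF    # occupies the source-port position
    second = tag & 0xFFFF           # occupies the destination-port position
    return struct.pack('!HHHH', first, second, length, checksum)

def unpack_tag(header):
    """Recover the 32-bit tag from the first four bytes of the header."""
    first, second = struct.unpack('!HH', header[:4])
    return (first << 16) | second

# Illustrative tag value: round-trips through the UDP-shaped header.
hdr = pack_conet_header(0x0001ABCD, 8)
assert len(hdr) == 8                     # same 8-byte layout as a UDP header
assert unpack_tag(hdr) == 0x0001ABCD
```
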


At this point, the basic definition of CONET_PROTOCOL was complete, but there were many
dependencies to solve in order to make NOX compile and run successfully. In particular,
what I had to do was to replicate the code that uses and exploits the udp class and functions
and to adapt it to the CONET class. Furthermore, I had to do this operation for all the
relevant files in the NOX folder, keeping the conet class aligned with the structure of the udp one.
In order to solve all these issues, I looked for all the files containing the “udp” word with the
grep command and saved the output into a file.





$ grep -r "udp" /home/ofelia/nox/ >>grep-result.txt

At first I tried to modify all the files where the string “udp” appeared, adding the corresponding
CONET instruction, but this did not work, resulting in NOX compilation errors.
In fact, there was no need to modify variables and functions describing OpenFlow, nor
NOX core files. Therefore, after having created my “conet.py” file, I just solved the
dependencies in the related files, such as “ipv4.py” and “vlan.py”.
The following is the complete list of the files that it is necessary to modify in order to make
NOX successfully compile and recognize CONET traffic:

-   nox/src/nox/lib/util.py
-   nox/src/nox/lib/packet/packet_utils.py
-   nox/src/nox/lib/packet/ipv4.py
-   nox/src/nox/lib/packet/udp.py
-   nox/src/nox/lib/packet/vlan.py
-   nox/build/src/nox/lib/util.py
-   nox/build/src/nox/lib/packet/packet_utils.py
-   nox/build/src/nox/lib/packet/ipv4.py
-   nox/build/src/nox/lib/packet/udp.py
-   nox/build/src/nox/lib/packet/vlan.py
-   nox/src/utilities/nox-flow-xlate
-   nox/src/nox/lib/packet/Makefile.am
-   nox/build/src/nox/lib/packet/Makefile
-   nox/src/include/packets.h

Among the files listed above, the two Makefiles deserve particular attention, because it is
necessary to add “conet.py” to EXTRA_DIST and to NOX_RUNTIMEFILES in order to
successfully recompile NOX.
The packets.h file also needs to be modified, adding the following lines to define the CONET
header:

#define CONET_HEADER_LEN 8
struct conet_header {
      uint16_t conet_src;
      uint16_t conet_dst;
      uint16_t conet_len;
      uint16_t conet_csum;
};
BOOST_STATIC_ASSERT(CONET_HEADER_LEN == sizeof(struct conet_header));

After that, it is necessary to recompile NOX, following the simple four steps specified
above and recalled here, starting inside the nox/ folder:

$ ./boot.sh
$ cd build/

$ ../configure
$ make -j 5

The previous operations allow supporting Content Centric functionalities within a network
made up of OpenFlow switches that run NOX controllers. In fact, after these modifications
to the NOX code, it has been possible to extend the previously described
packet_in_callback() function, making it able to manage CONET packets.
These operations are described in detail in the following sections.


5.5.1 The handling of CONET traffic
Thanks to the just described operations necessary to enhance NOX and make it recognize
CONET traffic, in this section I will show another piece of the packet_in_callback()
function and, in particular, the code used to distinguish and manage CCN packets.
Since I chose to use number 199 as IP protocol type, after the if clause that distinguish
ICMP, UDP and TCP packets, I added another elif where I put the code responsible for
managing CONET packets. As it is possible to see in the following snippet, firstly the
function identifies the CONET header and then it starts to access the fields that in UDP
identified the ports and, now, are meant to made up the local name of CCN packets.
________________________________________________________________________

elif ip_pkt.protocol == 199:
    conet_pkt = packet.find('conet')
    first_tag = conet_pkt.srcport
    second_tag = conet_pkt.dstport
    print "This is a CONET packet with the following tag:", \
        first_tag, second_tag
    first_tag_bin = int2bin(first_tag, 16)
    second_tag_bin = int2bin(second_tag, 16)
    print "The first binary tag is:", first_tag_bin, \
        "The second binary tag is:", second_tag_bin
    tag = "".join([first_tag_bin, second_tag_bin])
    print "Therefore the complete binary tag is:", tag
    a = int(tag, 2)
    c = a % 2
________________________________________________________________________________
                            Listing 5.9: Handling of CONET packets - first part

Since an Interest packet and the corresponding Data Unit have the same name as far as the
first 31 bits are concerned, and differ only in the last one, in the function I chose to work on
the binary representation of the name.
That is achieved by converting both ports to a string of 16 bits, by means of the int2bin()
method, and then by joining them into the complete binary representation of the name,
exploiting the string built-in function join().
After that, the function casts the result to the int type and then calculates and stores
the remainder of the division by 2 (modulo operation %) in the support variable c.



This operation is useful to distinguish Interest packets from Data Units, since they are
identified by the same string of bits, except for the last one. In fact, when these two tags are
converted to integers, they turn out to be two consecutive numbers. I chose to identify as
Interests the packets presenting a tag that ends with the value 1 and as Data the ones
whose tag ends with 0.
Therefore, with the integer conversion just described, all the odd tags represent
Interest packets, while the even ones identify Data Units.
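
A minimal standalone sketch of this parity test (int2bin is re-implemented here as a hypothetical stand-in for the NOX helper; the tag values are illustrative):

```python
def int2bin(n, width):
    """Hypothetical stand-in for NOX's int2bin: n as a zero-padded binary string."""
    return format(n, '0%db' % width)

def classify(first_tag, second_tag):
    """Rebuild the 32-bit name from the two port fields and classify by its last bit."""
    tag = "".join([int2bin(first_tag, 16), int2bin(second_tag, 16)])
    return "INTEREST" if int(tag, 2) % 2 == 1 else "DATA"

# Two consecutive tags sharing the first 31 bits of the name:
assert classify(0x0001, 0xABCD) == "INTEREST"   # odd tag  -> Interest
assert classify(0x0001, 0xABCC) == "DATA"       # even tag -> Data Unit
```
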
The next snippet, which immediately follows the previous one, shows the management of
Interest packets, which are processed in the same way as other traffic. It is worth
underlining that the output port is the one that leads to the following Border Node, according
to normal CSS forwarding, and that Interests are not sent towards the Cache Server. That
is because, in the communication model I decided to realize and will describe later,
Interests are forwarded to the Cache Server only if the cache has previously informed the
controller that the requested content is present.
________________________________________________________________________

if (c == 1):
      print "This is an INTEREST packet"
      attrs = {}
      attrs[core.IN_PORT] = inport
      attrs[core.DL_TYPE] = ethr_pkt.type
      attrs[core.NW_SRC] = ip_pkt.srcip
      attrs[core.NW_DST] = ip_pkt.dstip
      attrs[core.NW_PROTO] = nw_proto
      attrs[core.NW_TOS] = tos
      attrs[core.TP_SRC] = first_tag
      attrs[core.TP_DST] = second_tag
      idle_timeout = openflow.OFP_FLOW_PERMANENT
      hard_timeout = openflow.OFP_FLOW_PERMANENT
      priority = 101
      forward_l2(self, dp_id, inport, packet, buffer_id,
            attrs, idle_timeout, hard_timeout, priority)
________________________________________________________________________________
                         Listing 5.10: Handling of CONET packets - second part

If the incoming packet is a Data Unit, instead, it needs a different treatment.
I wanted the caching policy to be as effective as if the cache were inside the Internal
Node, where it would see all the data packets going through the CSS. To make the Cache
Server able to see all the Data Units, so that it can realize the appropriate caching
policy, I decided to apply two actions to data packets. Exploiting OpenFlow's ability to
duplicate packets and send them out of different ports, I defined a double forwarding
action: one output towards the proper MAC address and another that sends the packet
out of the port connected to the Cache Server.
As the following code shows, the dictionary that describes flow entries is populated first
and all the parameters needed by the forward_l2() function are set. After that, an if
statement checks whether the source MAC address of the incoming packet is the Cache
Server's one. In fact, if the Data Unit comes from the cache, it means that the content is
already there and its timer has just been refreshed by the corresponding Interest, so
there is no need to send it back to the cache.
The if clause therefore separates these two cases before the corresponding action is
executed. In the first case the flow entry is installed directly, since it has to perform the
double output, while in the second case the forward_l2() method is invoked, as the
packet needs normal forwarding.
________________________________________________________________________

elif (c == 0):
      print "This is a DATA packet"
      attrs = {}
      attrs[core.IN_PORT] = inport
      attrs[core.DL_TYPE] = ethr_pkt.type
      attrs[core.NW_SRC] = ip_pkt.srcip
      attrs[core.NW_DST] = ip_pkt.dstip
      attrs[core.NW_PROTO] = nw_proto
      attrs[core.NW_TOS] = tos
      attrs[core.TP_SRC] = first_tag
      attrs[core.TP_DST] = second_tag
      idle_timeout = openflow.OFP_FLOW_PERMANENT
      hard_timeout = openflow.OFP_FLOW_PERMANENT
      priority = 101
      # Check if the packet is coming from the cache server
      if packet.src.tostring() != '\x08\x00\x27\xcc\x77\x1c':
            outport = arp_table[dp_id][packet.dst.tostring()]
            actions = [[openflow.OFPAT_OUTPUT, [0, outport]],
                  [openflow.OFPAT_OUTPUT, [0, 2]]]
            self.install_datapath_flow(dp_id, attrs,
                  idle_timeout, hard_timeout, actions, None,
                  priority, inport, None)
      else:
            forward_l2(self, dp_id, inport, packet, buffer_id,
                  attrs, idle_timeout, hard_timeout, priority)
________________________________________________________________________________
                           Listing 5.11: Handling of CONET packets - third part




5.5.2 Example of usage
To test this code and verify the correct handling of both Interests and Data Units, it was
necessary to generate CONET traffic from one peer to another, Cache Server included. In
particular, exercising the Listing 5.11 snippet required generating traffic tagged as
CONET data packets from the Cache Server as well.
For this purpose I decided to use Scapy, a powerful interactive packet manipulation
program written in Python. I made this choice because it is very simple to use and offers
many functionalities: it can forge or decode packets of a wide number of protocols, send
them on the wire, capture them and even send invalid frames [36].
The following figure is a screenshot of the instructions I typed in a terminal after
installing the program.




                                   Figure 5.6: Example of Scapy usage


After launching the program with the sudo ./run_scapy instruction, the user gets the
program console, where traffic can be created and sent in an arbitrary way.
In the example presented here, I created a packet made up of an Ethernet header
containing an IP packet. It is worth noting Scapy's extensive possibilities to manipulate
fields within a packet header, as I did while setting the source and destination addresses
and, above all, the IP protocol type.
I also decided to set the IP options, which need to be supported, even if not accessible,
by the NOX controller, since they would carry the CONET name information in a
long-term scenario. The packet just described has its IP protocol set to 199, but it
contains a standard UDP header as well as some randomly chosen data.
After that, I accessed the source and destination port fields, setting them to 1234 and
2002 respectively. It is important to recall that, in the scenario described so far, the
destination port plays a crucial role, since it discriminates between Interests,
represented by odd numbers, and Data Units, bound to even numbers. So, in Figure 5.6, I
created a Data packet and then, with the sendp() instruction, sent it repeatedly
(loop=1) out of the interface bound to the source address, within PC1.
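Figure 5.6 is only a screenshot; as a rough byte-level stand-in for the packet it builds, the same datagram can be sketched with the standard struct module. Checksums are left at zero for brevity (Scapy computes them automatically), and the field values are the ones quoted above:

```python
import struct

def conet_packet(src, dst, sport, dport, payload=b''):
    # IPv4 header (20 bytes) with the CONET protocol value 199,
    # followed by a standard UDP header, as in the Scapy example.
    version_ihl = (4 << 4) | 5
    total_len = 20 + 8 + len(payload)
    ip = struct.pack('!BBHHHBBH4s4s',
                     version_ihl, 0, total_len, 0, 0,
                     64, 199, 0,                      # TTL, protocol 199, checksum 0
                     bytes(map(int, src.split('.'))),
                     bytes(map(int, dst.split('.'))))
    udp = struct.pack('!HHHH', sport, dport, 8 + len(payload), 0)
    return ip + udp + payload
```

A packet built with destination port 2002 carries protocol 199 and an even destination port, i.e. it is recognized as a Data Unit by the controller logic described above.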



      While this traffic is produced on PC1, the controller is meant to handle it through
the packet_in_callback() code previously described: since there are not yet entries
matching CONET packets, each packet generates a packet_in event.




                                  Figure 5.7: Handling of CONET packets


The expected behaviour is the same as for normal traffic, since NOX has been extended
and enhanced exactly for this purpose. Therefore, after the first packet, all the following
CONET Data Units generated by Scapy's sendp() instruction should find their
corresponding entry in the switch's flow table and be forwarded according to it.
Figure 5.7 shows precisely the desired handling of CONET packets, with one small
exception due to timing. In fact, the controller console shows two, not one, messages
concerning received CONET packets. That is because of the fast transmission rate of the
source, which sends the second packet just before the flow entry is installed, so that even
the second CONET datagram does not find any matching entry in the flow table. It is
worth noting that this situation happens just once and that it could simply be avoided by
testing the scenario on real computers, instead of running everything in virtual machines
on a single laptop.
All the following packets, however, match the entry inserted by the callback function, so
that no other packet is sent to the controller.
Moreover, looking at the PC2 console, it is possible to see tcpdump in action, printing out
a large number of received-packet messages. Those are exactly the datagrams generated
by Scapy, as can be verified by looking at the IP protocol, which is set to 199.
Therefore, as far as the short-term approach is concerned, the integration of CONET
traffic within an OpenFlow-based network works without problems, making it possible to
recognize and manipulate CCN traffic in the same way as standard packets.







5.6 The communication with the Cache Server
The previous sections described all the enhancements needed to support CONET within
an OpenFlow network, showing how it is possible to realize a communication between
two virtual machines. Up to this point, however, this communication is a simple
forwarding of packets between a source and a destination, even if both PC1 and PC2 can
be considered Border Nodes.
In this section I therefore outline how to implement, and how I realized, a communication
scheme that also involves the Cache Server, making it possible to offer caching
functionalities, one of the most important advantages of Information Centric
Networking.
First of all, it is worth recalling the network topology this work is based on, depicted in
Figure 5.3, which presents a Cache Server connected to both the OpenFlow switch and
the controller. The former is a normal link, necessary to send packets to the server and
to retrieve the desired contents when present.
The connection between the Cache Server and the controller, instead, lets the cache
notify the controller when a content has been stored or, conversely, deleted due to
timeout policies.
The fundamental idea of this approach is to send Interest packets to the cache only if the
corresponding data are certainly present, because the controller has been informed of
that beforehand.




               Figure 5.8: Communication between Cache Server, Controller and Switch

I suggest implementing this solution, especially in distributed-cache environments,
because, in my opinion, it is much more efficient than sending all the packets, both
Interests and Data, to the Cache Servers. In that case, for every packet received, the
cache would have to look up in a table whether the desired content is present and, if
necessary, re-route the packet towards the next Border Node.
With my approach, instead, a copy of every Data Unit travelling through the network is
sent to the Cache Server even if it is not directed to the cache, so that the caching
algorithm can decide whether to store the content and, consequently, inform the
controller. This is mandatory in networks where the cache is separate from the Internal
Node because, otherwise, the Cache Server would not receive any data packet and could
not realize active caching policies.
      Realizing that behaviour requires several operations, both on the controller side
and, above all, within the Cache Server. For instance, the cache needs an application
able to intercept all the packets that arrive at its interfaces, even those that would
normally be discarded. In fact, if all Data Units are sent also to the Cache Server, this
one would receive, due to the layer 2 forwarding performed by the switch, packets with
an IP destination address different from the one associated with its interfaces. Therefore,
to prevent an early discard, it is necessary to open a raw socket, which first receives all
the packets and then filters them, recognizing CONET traffic.
Such an application has been realized by the University of Catania within the EXOTIC
project and will be used for the last test in this work.
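The filtering step of such a raw-socket application can be sketched as follows; the function name and the assumption that CONET traffic is identified purely by the IPv4 protocol value 199 are mine, not the EXOTIC implementation:

```python
import struct

ETH_P_IP = 0x0800   # EtherType for IPv4
CONET_PROTO = 199   # IP protocol value used for CONET in this work

def is_conet_frame(frame):
    # On Linux, frames would be read from a raw socket such as
    # socket.socket(socket.AF_PACKET, socket.SOCK_RAW), which delivers
    # every frame regardless of its destination address (root required).
    if len(frame) < 14 + 20:          # Ethernet header + minimal IPv4 header
        return False
    ethertype = struct.unpack('!H', frame[12:14])[0]
    return ethertype == ETH_P_IP and frame[23] == CONET_PROTO
```

Frames that pass this check would then be handed to the caching logic; everything else is dropped instead of being discarded by the kernel's normal IP processing.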
Besides this raw socket application, the fundamental part of the communication that is
missing at the moment is the exchange of messages between the Cache Server and the
controller, which I realized through the two interfaces on network 10.0.10.0/24 depicted
in Figure 5.4.
This communication is achieved through the exchange of JSON messages, sent almost
exclusively from the cache to the controller, used to communicate the action performed
on a certain content, as will be explained in the next section.
I chose JSON messages because JSON is a lightweight data-interchange format, easy to
read and write and, therefore, suitable for the simple information that needs to be
exchanged. In fact, after two connection setup messages, the only communication that, in
my opinion, was necessary to implement consists of simple messages sent from the cache
to the controller. In particular, the only information carried by these messages is a pair
of key:value entries of the form:

                             {"CONTENT NAME": "name", "type": "action"}

Another reason for choosing JSON has been NOX's ability to handle this kind of message
by means of the JSONMsg_event, already present and written in C++. Therefore, after some
modifications to NOX, needed to make it raise this event within my Python controller as
well, I only had to write a callback function to handle the event and make the controller
react to the received messages.
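A minimal round trip of such a notification, using the same keys as Listing 5.12, looks like this:

```python
import json

# What the Sender on the Cache Server writes on the socket...
store = {"CONTENT NAME": "34371", "type": "stored"}
wire = json.dumps(store)

# ...and what json_message_callback() recovers on the controller side.
received = json.loads(wire)
```

The controller only ever inspects the two keys, so the serialization and parsing remain trivial.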
In the next sections, I will describe the application residing on the Cache Server that
sends JSON messages and, then, the controller function that receives this information
and properly instructs the switch. Finally, I will merge my work with the application
realized by the University of Catania, showing an almost complete and working scenario.







5.6.1 The Cache Server application
The first realization of the application running on the Cache Server is written in Python
and, essentially, establishes a connection with the controller, sending messages about
the three types of actions I envisaged performing on contents.
There is no caching policy at the moment, so messages are sent to the controller without
a precise logic; their purpose is to test the controller's ability to react to different
situations. Another feature not present in the application I am presenting is the raw
socket that captures all the packets and filters them, recognizing CONET traffic.
This characteristic will appear when my work is merged with the one realized by the
EXOTIC partners.
In fact, from my point of view, the Cache Server should be organized as in the following
figure, with three main applications: a Raw Socket application, a Cache Manager and a
Sender, which communicates with the controller.




                                 Figure 5.9: Cache Server main modules

This structure is modular and easily extensible. At the moment I have realized the
Sender application, which works without any input, while the raw socket is implemented
so that it only recognizes CONET packets and answers an Interest with the
corresponding data.
In the last test I will present, these two applications communicate successfully, even if
they exchange randomly generated messages. It has therefore been possible to realize a
working testbed even without the caching logic, which could be added later and easily
placed between the sender and the raw socket.
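The modular structure of Figure 5.9 can be sketched with three stub classes; all names and interfaces here are illustrative assumptions of mine, not the EXOTIC code:

```python
class RawSocketApp:
    """Captures every frame and keeps only CONET traffic (protocol 199)."""
    def capture(self, frames):
        return [f for f in frames if f.get('proto') == 199]

class CacheManager:
    """Placeholder caching policy: store every content it is handed."""
    def __init__(self):
        self.contents = set()
    def decide(self, packet):
        self.contents.add(packet['name'])
        return ('stored', packet['name'])

class Sender:
    """Relays the Cache Manager's decisions to the controller as JSON-ready dicts."""
    def notify(self, action, name):
        return {"CONTENT NAME": name, "type": action}
```

Placing a real caching algorithm between the raw socket and the sender then amounts to replacing CacheManager.decide() without touching the other two modules.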
      The first implementation of the sender application is written in Python and, as
shown in the following code, essentially opens a client socket towards the controller on
port 2703, which is the default one used by the NOX JSONMsg_event. After defining three
different messages, each related to the same content but to a different action, it
transforms these dictionaries into JSON messages by means of the json.dumps()
function.
I want to point out that the only three messages I decided to implement in the
communication between the cache and the controller are the following: stored, refreshed
and deleted.
In my opinion, the controller needs to modify the behaviour of the switch only if a new
content is stored within the cache or if a content has been considered old and, therefore,
deleted. I also chose to add the refreshed message, used to refresh flow entry timers, in
order to avoid the timeout deletion of a switch entry responsible for sending an Interest
to the cache. In fact, a content that is not very popular, and therefore does not frequently
hit the switch entries, could still be on the Cache Server, so its Interests must be
forwarded there. Thus, I chose to delete a flow entry only if the cache explicitly
communicates that it no longer has the content.
________________________________________________________________________

HOST = '10.0.10.1'   # The remote host
PORT = 2703
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(None)
s.connect((HOST, PORT))
print "Connected to server: " + str(s.getpeername())
# Your MAC and IP addresses
first_message = {"type": "Connection setup",
      "MAC": "08:00:27:cc:77:1c", "IP": "10.0.10.2"}
store = {"CONTENT NAME": "34371", "type": "stored"}
refresh = {"CONTENT NAME": "34371", "type": "refreshed"}
delete = {"CONTENT NAME": "34371", "type": "deleted"}
json_message = json.dumps(first_message)
json_store = json.dumps(store)
json_refresh = json.dumps(refresh)
json_delete = json.dumps(delete)

s.send(json_message)
while True:
      data = s.recv(10240)
      if not data: break
      print 'Received', repr(data)
      time.sleep(5.0)
      s.send(json_store)
      time.sleep(5.0)
      s.send(json_refresh)
      time.sleep(5.0)
      s.send(json_delete)
________________________________________________________________________________
                          Listing 5.12: Application that sends JSON messages



After obtaining the JSON messages, this client socket application starts the
communication by sending the “Connection setup” message to the controller. This
message also contains the IP address of the sending interface and the MAC address
towards which contents should be forwarded (eth3 in Figure 5.4).
The socket then remains open for the whole execution, thanks to the while True:
statement, and listens for incoming data from the controller. In fact, before sending any
information, it is better to receive an acknowledgement from the controller that the
connection has been successfully established and that the JSONMsg_event raised by NOX
has been handled correctly.
After that, the application simply sends the previously defined messages to the
controller, with a pause of 5 seconds between one and the next.
      It is worth recalling that this first implementation of the sender application has the
sole purpose of testing the communication between the cache and the controller and of
verifying the correctness of the reactive behaviour imposed by the NOX controller on the
OpenFlow switch.
Because of that, the messages sent cover all the possible actions, but they do not
correspond to any particular logic.
When the Cache Manager, responsible for caching algorithms and policies, is deployed,
this Sender application will be modified to take as input the actions performed within the
cache and simply relay them to the controller.


5.6.2 The handling of JSON messages
As previously said, the management of JSON messages is performed through a callback
function that handles the NOX built-in JSONMsg_event. In the NOX version I used, this kind
of event is implemented only in C++, so it could be used only within controllers written
in C++.
Fortunately, in the “Destiny” branch of NOX, which is the developers' untested version, it
is possible to handle this event in Python too and, therefore, it can be used even within
the controller described so far.
To make this event work, I downloaded the Destiny branch messenger folder, substituted
the original folder with the downloaded one and then recompiled NOX. These operations
did not raise any errors, and the controller turned out to be installed correctly,
registering the JSONMsg_event and its handler with the following instructions:
________________________________________________________________________

JSONMsg_event.register_event_converter(self.ctxt)
self.register_handler(JSONMsg_event.static_get_name(),
      self.json_message_callback)
________________________________________________________________________________
                    Listing 5.13: Installation of JSONMsg_event in Python components



It is worth noting that the event is handled by the json_message_callback() function,
while the socket needed to communicate with the Cache Server is created in the
background by NOX, so there is no need for the user to create one. This server socket
listens for incoming connections on all addresses of the controller and is bound to port
2703, which is why using that port in Listing 5.12 is mandatory.
With the server socket up and running, the first thing I realized in my callback function
was the exchange of a pair of connection setup messages, in which the controller accepts
the connection coming from the cache and answers with a welcome message.
________________________________________________________________________

def json_message_callback(self, e):
      import json
      global cache_server_table
      global cache_server_MAC
      global datapath
      messaggio = json.loads(e.jsonstring)
      cache_server_table.update(messaggio)
      print messaggio
      print cache_server_table
      if cache_server_table.has_key("type"):
            if cache_server_table["type"] == "Connection setup":
                  cache_server_MAC = cache_server_table["MAC"]
                  print cache_server_MAC
                  e.reply(json.dumps(
                        {"type": "Welcome! I am the Controller"}))
 _______________________________________________________________________________
               Listing 5.14: Connection acceptance in json_message_callback() function

After the import instructions, the previous code stores the received message in a
dictionary and checks whether the received JSON message has a valid syntax, i.e.
contains the key “type”. The dictionary holding the received information is continuously
updated through the update() method, invoked for every received message. If the
message is valid, the callback function saves the MAC address of the Cache Server into
the global variable cache_server_MAC and answers back with a welcome message.
The MAC address sent to the controller is important for the next operations realized
within this function, as it belongs to the interface connected to the switch and is,
therefore, the address towards which packets must be forwarded to reach the cache.
Once the connection has been successfully established, the callback function manages
the reception of the three different types of messages listed above. In particular, when a
“stored” message is received, it prints a message and immediately accesses the
“CONTENT NAME” field. As a working hypothesis, the received name is the integer
conversion of the 32-bit string obtained by joining the binary representations of the two
port fields, and it has to be a Data Unit name. Therefore, the function first converts the
received name into binary and then splits it into two equal pieces, which are converted
back into integers, yielding the decimal representation of the two port fields. These
actions are needed because flows within the flow table are represented according to port
fields, which have to be specified as integers.
Then, with an if clause, the function checks whether the received name really belongs to
a Data packet and populates a dictionary that initially contains basic CONET information.
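The name-to-ports conversion just described can be sketched as a small helper; the function name is hypothetical, but the logic mirrors Listing 5.15:

```python
def name_to_ports(content_name):
    # Reverse of the tag construction in Listing 5.9: turn the decimal
    # content name back into the two 16-bit port fields, then derive the
    # destination port of the matching Interest (odd last bit).
    bits = format(int(content_name), '032b')
    source_port = int(bits[0:16], 2)
    dst_port = int(bits[16:32], 2)
    dst_interest_port = dst_port + 1 if dst_port % 2 == 0 else dst_port
    return source_port, dst_port, dst_interest_port
```

For example, a Data Unit name built from ports 1234 and 2002 maps back to those two ports, with 2003 as the destination port of the corresponding Interest.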


       In fact, if a content is stored within the Cache Server, the controller has to delete
all the switch entries that forward the Interests corresponding to that Data Unit and
replace them with an entry that forwards those Interests towards the cache.
Furthermore, before forwarding the packet, the new entry has to set its MAC destination
address to the Cache Server's one, previously received during the connection setup. This
operation is fundamental for packets to reach the cache in distributed-cache scenarios.
These operations are performed by populating the attrs{} dictionary and issuing the
delete_datapath_flow() instruction, which actually removes the existing entry. It is
worth noting that only the entries matching the characteristics listed in attrs{} are
deleted; therefore, a deletion happens only if an entry with IP protocol 199 and the same
ports already existed.
It is also important to point out that there is no need to delete entries concerning Data
Units, since by default they are already routed to their destination as well as to the
Cache Server, so this behaviour does not have to change.
_______________________________________________________________________

            elif cache_server_table["type"] == "stored":
                  print 'Content named', cache_server_table[
                      "CONTENT NAME"], 'has been stored'
                  name = int2bin(int(cache_server_table[
                      "CONTENT NAME"]), 32)
                  first_tag = name[0:16]
                  second_tag = name[16:32]
                  source_port = int(first_tag, 2)
                  dst_port = int(second_tag, 2)
                  dst_interest_port = dst_port
                  if dst_port % 2 == 0:
                      dst_interest_port = dst_port + 1
                  attrs = {}
                  attrs[core.NW_PROTO] = 199
                  attrs[core.TP_SRC] = source_port
                  attrs[core.TP_DST] = dst_interest_port
                  self.delete_datapath_flow(datapath, attrs)
                  idle_timeout = openflow.OFP_FLOW_PERMANENT
                  hard_timeout = openflow.OFP_FLOW_PERMANENT
                  priority = 101
                  actions = [[openflow.OFPAT_SET_DL_DST,
                       '08:00:27:cc:77:1c'],
                       [openflow.OFPAT_OUTPUT, [0, 2]]]
                  self.install_datapath_flow(datapath, attrs,
                       idle_timeout, hard_timeout, actions, None,
                       priority, None, None)
________________________________________________________________________________
                           Listing 5.15: Handling of JSONMsg_event - first part





At the end of the previous listing it is possible to see that, after the deletion of the
previous entries, the dictionary is populated with timing and priority characteristics and
then the actions list is set. The first action sets the destination MAC address to the
cache's one; Interest packets are then forwarded out of port 2, which is the one
connected to the Cache Server.
As before, the entry is inserted with the install_datapath_flow() instruction.
After the previous snippet, the callback function continues checking the received JSON
messages and, in particular, examines whether the message informs that a content has
been refreshed. This action is not particularly relevant in my function, since I chose to
use permanent entries when setting idle_timeout and hard_timeout. The purpose of the
“refreshed” message is to renew the timer of flow entries, in order to avoid a situation in
which a content is currently present within the cache, but Interests are routed towards
the exit Border Node because the flow entries have been deleted. In fact, Interests are
explicitly directed to the Cache Server only after the “stored” JSON message has been
received; if that entry is deleted, they follow the ordinary packet_in_callback and do
not reach the cache.
Therefore, in handling the “refreshed” message, the callback needs to use a flow-modify
command or, as I did in my function, to exploit the same structure used for the “stored”
message, removing the entry and installing a new one that directs Interests towards the
cache immediately afterwards.
The result is correct, since the newly installed entry has a renewed timer.
      The last kind of message that json_message_callback() is designed to receive is
the “deleted” one, which refers to the deletion of a content from the cache due to
inactivity or caching policies. This event is managed by the following code, which has the
same structure described above.
________________________________________________________________________

              elif cache_server_table["type"] == "deleted":
                    print 'Content named', cache_server_table[
                        "CONTENT NAME"], 'has been deleted'
                    name = int2bin(int(cache_server_table[
                        "CONTENT NAME"]), 32)
                    first_tag = name[0:16]
                    second_tag = name[16:32]
                    source_port = int(first_tag, 2)
                    dst_port = int(second_tag, 2)
                    dst_interest_port = dst_port
                    dst_data_port = dst_port
                    if dst_port % 2 == 1:
                        dst_data_port = dst_port - 1
                    else:
                        dst_interest_port = dst_port + 1
                    attrs = {}
                    attrs[core.NW_PROTO] = 199
                    attrs[core.TP_SRC] = source_port
                    attrs[core.TP_DST] = dst_interest_port
                    self.delete_datapath_flow(datapath, attrs)
 _______________________________________________________________________________
                           Listing 5.16: Handling of JSONMsg_event - second part


After a check on the ports and, thus, on the content name, this function populates the
dictionary and then deletes all the entries matching the passed parameters, namely the
IP protocol and the CONET ports. Therefore, every flow entry relating to CONET packets
identified by the name made up of the two specified ports is deleted. It is worth noting
that, again, only flow entries concerning Interest packets need to be removed, because
Data Units do not have to change their forwarding policy, being already sent to the correct
destination as well as to the cache.
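The port arithmetic that recurs in these callbacks can be captured in a small helper. The following is a minimal sketch in plain Python 3 (the function name and the returned dictionary keys are mine, not part of the NOX controller), deriving the flow-match fields from a 32-bit content tag according to the even/odd convention used in Listing 5.16:

```python
def match_fields(tag):
    """Derive the CONET flow-match fields from a 32-bit content tag.

    The upper 16 bits become the source port and the lower 16 bits the
    destination port; within each even/odd port pair, the odd port
    identifies Interests and the even one Data Units.
    """
    src_port = (tag >> 16) & 0xFFFF
    dst_port = tag & 0xFFFF
    if dst_port % 2 == 1:            # odd: already the Interest port
        interest_port, data_port = dst_port, dst_port - 1
    else:                            # even: Data port; Interest is its pair
        interest_port, data_port = dst_port + 1, dst_port
    return {"nw_proto": 199, "tp_src": src_port,
            "tp_dst_interest": interest_port, "tp_dst_data": data_port}
```

For example, the tag 0x00010003 yields source port 1 and destination port 3; since 3 is odd it is the Interest port, and the paired Data port is 2.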


5.6.3 Example of usage

In this section I want to show the correct handling of JSON messages as well as the
correct updating of the switch's forwarding behaviour. In order to test the previously
presented program, it is necessary to install the controller in the same way as before
and to connect it to the Cache Server, launching the sending application that runs there.
Moreover, in the test I am presenting, I decided to generate CONET traffic with Scapy, in
the way described in Section 5.5.2, creating in particular Interest packets.
In fact, as described above, Data Units are not subject to any change, whether or not the
cache has decided to store a copy of them, while Interests need to be forwarded in a
different way.
        To demonstrate the correct operation of the above presented controller, I
generated Interest packets on PC1 and sent them towards PC2. After that, I started the
sending application on the Cache Server and observed the behaviour of packets both on
PC2 and on the interface of the switch connected to the cache. In fact, it was not
possible to capture packets directly on the Cache Server: since the raw socket application
was not present during this test, the received Interest packets, whose IP destination
address differed from that of the cache, would have been discarded by the Operating
System.
The resulting behaviour of this simulation was that packets passed through the switch,
reaching their correct destination, namely PC2, until the sender application started. At that
moment, when a “stored” JSON message arrived at the controller, all the flow entries
concerning those Interests were deleted and replaced by new ones, in charge of
forwarding those packets to the cache. In fact, it was possible to see how the capture
made with tcpdump on PC2 suddenly stopped, while the one launched on the switch
interface connected to the cache immediately started to receive packets.
In the following screenshot, it is possible to see the exact moment in which the
“stored” message is received and several flow_expired events are raised in response to
the deletion of some entries.
In the right part of the figure, instead, there are the two captures, and it is clear how the
one running on PC2 stopped while the other one started to receive packets with IP
protocol 199.




                                                       94
             Solutions to enhance the Internet with an information centric model, exploiting Openflow




               Figure 5.10: JSON “stored” message receipt and Switch’s table updating


Then, when a “refreshed” message was received, nothing changed, as expected, because
it means that the content was still present in the cache and Interests had to go there,
as they were already doing.
On the contrary, when a “deleted” message was received, the controller deleted all the
flow entries concerning those Interest packets, causing the capture issued on the switch
to stop. Instead, the one launched on PC2 started again to capture packets, as it was
supposed to do.
The described situation is shown in Figure 5.11, where it is possible to see the receipt of
the “deleted” message followed by a flow_expired event. In the right part of the figure, as
expected, checking the times of the input packets in the two captures shows that
PC2 restarted to receive CONET traffic while no packets passed through the interface
connected to the Cache Server.
        The deletion of all entries in the switch flow table brings the system back to a
situation where a packet_in event is raised and, therefore, the standard callback code is
executed. Because of that, packets were again managed exactly in the same way as they
were forwarded before the sender application had started.
This situation becomes clear when looking at the lower part of the controller console,
where a normal packet_in event was raised.








               Figure 5.11: JSON “deleted” message receipt and Switch’s table updating


Therefore, this section has shown how it is possible to realize a successful connection
between the control logic of an Internal Node and its Cache Server. Moreover, a working
mechanism has been implemented, able to handle the messages sent from the cache to
the controller and, consequently, to update the switch forwarding behaviour, as needed to
perform correct packet forwarding and to offer caching functionalities.



5.6.4 Applications to improve the CONET
The architectural solution provided so far works properly and offers an extensible platform
to develop other CCN solutions, since it integrates CONET within OpenFlow and NOX. In
order to obtain a complete working CSS, however, there is the need to add other
functionalities, such as the Cache Manager and the Raw Socket, within the Cache Server.
Other operations that need to be performed are the ones that should take place within
Border Nodes, such as the communication with the Name Routing System and the
association between the IP CONET Option and an internal tag, to be used until OpenFlow
is enhanced to recognize the whole IP header.
       While I was developing this work, EXOTIC partners from University of Catania
realized some applications useful to complete the previously given scenario. In this section
I will describe briefly their work, showing how a preliminary integration of all the
applications has been possible and I will present the results obtained with this merging.
Their work focused on realizing a couple of applications able to create and receive CONET
packets with the correct structure, namely carrying the IPv4 CONET Option, starting from a
fictitious content. They also realized the Raw Socket application for the Cache Server, able
to receive all packets and filter the CONET ones, providing a content back and
communicating the receipt of an Interest or a Data Unit to the Cache Manager.



It is worth noting that, at the moment, there is no Cache Manager application, so the
Raw Socket communicates directly with my Sender application, which has been slightly
modified in order to interoperate.
All these applications are based on some Java classes that allow packet crafting and the
construction of CONET Information Units and Carrier Packets starting from a Network
Identifier and a Chunk Sequence Number.
Therefore, once the NID and the CSN are known, these applications make it possible to
construct an IP packet whose IPv4 CONET Option stores all the information concerning
the content in the correct way.
More precisely, EXOTIC partners developed two applications to be launched on PC1 or
PC2, one creating Interests and the other Data Units, starting from a NID-CSN pair. In
order to make these packets pass through the switch and be recognized and correctly
handled by the control logic I have written, they also implemented a dummy hash function,
responsible for mapping the NID-CSN pair to a 31-bit string.
In this way, it is possible to append a 0 or a 1, depending on the packet being created, and
to obtain the 32-bit string that identifies contents within the CONET Sub System. Thus, the
packet carries the correct IP CONET Option and is realized as an IP datagram with the
IP protocol field set to 199 and the following bits structured as the UDP header, with the
first 32 bits, which would have been the ports, set to the value obtained as a result of
the hash function.
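As a rough illustration of this tagging scheme (the actual dummy hash used by the EXOTIC partners is not specified here, so the digest below is a stand-in of my own), a 31-bit value can be derived from the NID-CSN pair and the packet-type bit appended, producing the 32-bit tag that replaces the UDP ports:

```python
import hashlib
import struct

def conet_tag(nid, csn, is_interest):
    """Map a NID-CSN pair to the 32-bit local tag: 31 hash bits plus a
    final bit distinguishing Interests (1) from Data Units (0).
    NOTE: illustrative stand-in for the partners' dummy hash function."""
    digest = hashlib.sha1(("%s/%d" % (nid, csn)).encode()).digest()
    h31 = int.from_bytes(digest[:4], "big") >> 1   # keep 31 bits
    return (h31 << 1) | (1 if is_interest else 0)

def pseudo_udp_header(tag):
    """First 32 bits of the UDP-like header: the tag split into the two
    16-bit fields that OpenFlow matches as source and destination port."""
    return struct.pack("!HH", (tag >> 16) & 0xFFFF, tag & 0xFFFF)
```

With this convention, the Interest and Data tags for the same content differ only in the last bit, which is why the two packet types carry two consecutive local tags.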
It is important to underline that this solution has been chosen in order to meet NOX and
OpenFlow requirements and to realize a working test within the virtualized scenario I had
previously set up. Furthermore, the use of the hash function to map the IP CONET
Option to the local tag has to be considered a temporary solution, since this operation
should be performed by the Name Routing System, queried by the Border Node.
Other than the two sending applications, a simple receiver has also been realized, able to
recognize the incoming packet and to decode the IP CONET Option field, printing out the
information about the content. This application is meant to run on the other machine
acting as peer and, in the test I conducted, I launched it on PC2, since on PC1 I created
a script that runs one of the two sending programs several times.
        The other application realized by the University of Catania is the one in
charge of opening a raw socket and, therefore, is meant to run on the Cache Server. This
program listens for all packets and then filters them on the basis of their IP protocol,
discarding all packets except those with the value 199. Since these packets have been
created by the two other applications, this program expects to receive packets with a valid
IP CONET Option. In fact, it is designed to decode this option field, to recognize the
content name and the sequence number, and to distinguish Interests from Data Units. Once
these operations are performed, if the incoming packet was an Interest, it answers back to
the switch, creating the corresponding Data Unit. In fact, this application relies on the
same classes used to craft and send the packets and, therefore, it is also able to perform
the correct binding between the CONET name and the local tag.
Instead, if the received packet was a Data Unit, it should communicate the receipt of the
packet to the Cache Manager, so that it could decide whether it is necessary to store it or
not. Therefore, while opening the raw socket, this application also creates an internal
socket to communicate this kind of information. Since, at this moment, the Cache Manager
has not been realized yet, the internal socket connects directly to my Sender application
and communicates by sending JSON messages. In fact, when a Data Unit is received, this
application randomly chooses an action among “stored”, “refreshed” and “deleted” and




then sends a JSON message to the Sender application, containing the local name of the
content and the action applied.
Thus, I had to modify the previously described Sender application, in order to make it able
to receive JSON messages from the Raw Socket and then to send them towards the
controller. In particular, as shown in the following snippet, the new version of this
application opens two sockets, one connected to the controller and acting as a client, and
the other connected, internally, to the Raw Socket application and working as a server.
While the connection with the controller has not been modified, except for the messages
sent, the internal socket waits for JSON messages, which are decoded and stored in
the data_local variable. After that, the same message is sent towards the controller
by the s.send() instruction, which sends the message on the socket named s, the one
opened towards the controller.
________________________________________________________________________

HOST_LOCAL = '127.0.0.1'
PORT_LOCAL = 9999
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind((HOST_LOCAL, PORT_LOCAL))
server_socket.listen(1)
conn, addr = server_socket.accept()
print "Connected to hookup client"
            .
            .
            .
while True:
    data_local = conn.recv(10240)
    if not data_local:
        break
    print 'Received', repr(data_local)
    messaggio = json.loads(data_local)
    messaggio_json = json.dumps(messaggio)
    s.send(messaggio_json)
    print 'Sent', messaggio, 'to the controller'
________________________________________________________________________
                 Listing 5.17: New version of Sender application


The last simulation I am presenting is the union of the previously described scenario with
the applications presented in this section.
In particular, the controller remained exactly the same, since it was already able to handle
JSON messages and to properly forward CONET packets.
So, in this simulation, Scapy has been replaced by the sending applications realized by
EXOTIC partners, which have been launched on PC1.
PC2, instead, continued to act as a receiving peer, so the only application launched there
has been the receiver, whose purpose is to show the kind of packet received and the
name of the content it refers to.


The network entity that underwent the most important changes was the Cache Server,
which has been enhanced to intercept packets and to provide the corresponding content
back.
        On PC1, I created a script that invokes the two sender applications several
times, so that it has been possible to generate two kinds of packets, Interests and
Data, related to the same content and, therefore, presenting two consecutive local tags.
On PC2, instead, I launched only the receiving application while, on the Cache Server, I
had to run two programs.
The first one has been the new version of the Sender, which connected to the controller
and opened a server socket waiting for data from the Raw Socket application, launched
just after.
The following screenshot shows the described situation, with the Sender application that,
in the lower right part of the figure, sends data packets.
In the upper part, instead, it is possible to see the Raw Socket, which recognizes
the incoming data packet, inspects the CONET Option field printing out the
string NID + CSN = [buonasera, 170] and then sends a message to the Sender
application with an action.
It is worth underlining the correctness of the whole simulation, which is clear when looking
at the controller console, which receives two messages that turned out to be exactly the
ones generated by the Raw Socket, i.e. deleted and stored. After the receipt of these
messages it is possible to see all the actions described in previous sections that make the
controller react and, consequently, update the switch flow table.




                       Figure 5.12: Screenshot of the complete working simulation


Therefore, even the integration with the Raw Socket application and with programs that
send packets following a logic coherent with the project has been realized and tested
successfully.




It is worth noting that this test does not offer significant improvements in the handling of
CONET traffic by the switch or by the control logic, but it reflects an integration effort and
shows an almost complete Information Centric Networking scenario, where proper
applications ask for contents following the CONET architecture and, thanks to the control
plane of the switch and to the Cache Server, receive back the desired data.
At the moment, the crafting of Data Units made by the Raw Socket application is still
under test, since it has some small bugs in the building of the packet, discovered because
the switch and the controller were not able to handle it, while they had previously
managed without problems the traffic generated both by Scapy and by the two sending
applications.
Except for this minor issue, the presented scenario has been successfully tested and
proved to work properly.
Therefore, it can be considered a first CONET networking prototype, to be used as a
starting point for developing other functionalities and building all the applications that are
not present at the moment but represent a necessary enhancement for CONET.








Final considerations
As outlined before, and especially in the last chapter, this thesis work has provided a
virtualized solution to support Information Centric Networking, exploiting OpenFlow
technology, inside the existing Internet architecture.
In particular, it has focused on the realization of the control logic of an OpenFlow
switch acting as Internal Node inside the CONET architecture.
The controller presented here, in fact, successfully performed layer 2 learning and
forwarding operations, and realized all the control features and functionalities
needed to install several flow entries, which make an OpenFlow switch able to distinguish
packets and filter traffic.
Afterwards, the controller has been enhanced in order to support Information Centric
Networking mechanisms, and a short-term CONET solution has been successfully
designed and implemented.
Furthermore, the presented solution also provided caching functionalities, realized through
the communication between a Cache Server and the controller, suitably extended to
receive JSON messages and made able to reactively update the switch's flow tables and
forwarding behaviour.

Therefore, this thesis work has to be considered a first, working realization of a CONET
networking solution, especially in the last version presented, where this work has been
merged with the programs developed by EXOTIC partners.
In fact, even if it does not present all the foreseen CONET functionalities, this thesis
properly developed some key aspects, such as the network control plane and the
interaction with the cache, that can be used and integrated in further tests and
demonstrations.
Moreover, the functionalities that were not developed in this work, and that are left as
future development tasks, have been extensively analysed and discussed.








Thanks to...
Thanks to Professor Blefari Melazzi who, with his availability, teachings and sensitivity,
guided me throughout this whole journey, contributing to my cultural and personal growth.

Thanks to Professor Salsano who, always ready to clear up my every doubt, helped me
during these months with precious advice and suggestions.

Thanks to Felipe, Mohamed and Saverio, who, with professionalism and kindness, helped
me with ideas and brilliant suggestions during my internship at NEC Laboratories.

Thanks to my parents, to Ilaria and to my aunts and uncles who, with their affection, their
encouragement and their involvement, have always been close to me, in joys and in difficulties.

Thanks to Mattia, Annalisa, Tiziana and Loris, irreplaceable companions on this journey, for
having shared every adventure of these five years with friendliness and team spirit.

Thanks to Matteo, Martina, Vittorio, Miriam, Chiara and Andrea, special friends who stood
by me at every moment, giving me unique emotions.

Thanks to Marco, whose friendship goes beyond any distance.








List of Figures
1.1 CCN moves network stack from IP to chunks of named content . . . . . . . . . . . . . .                               4
1.2 CCN routing scheme [8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       9
1.3 CCN packet types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   14
1.4 From computer to network virtualization [13] . . . . . . . . . . . . . . . . . . . . . . . . . . . .                 16

2.1 Geographic distribution of OFELIA islands [15] . . . . . . . . . . . . . . . . . . . . . . . . . .                   19
2.2 Creation of a project through “Expedient” user interface . . . . . . . . . . . . . . . . . . .                       21
2.3 Creation of a VM and definition of the flowspace . . . . . . . . . . . . . . . . . . . . . . . . .                   21
2.4 Flow tables concatenation forms OpenFlow data path . . . . . . . . . . . . . . . . . . . . .                         22
2.5 The OpenFlow basic architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            23
2.6 Flowchart detailing packet flow through an OpenFlow switch . . . . . . . . . . . . . . . .                           25
2.7 Example of an OpenFlow flow entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .              27
2.8 FlowVisor can recursively slice network resources . . . . . . . . . . . . . . . . . . . . . . . .                    31
2.9 OFELIA logic components . . . . . . . . . . . . . . . . . . . .                                                      33

3.1 CONET Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     39
3.2 CONET Information Units (CIUs) and Carrier packets . . . . . . . . . . . . . . . . . . . . . .                       44
3.3 IPv4 and IPv6 CONET Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            46
3.4 Content Centric main functionalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         48

4.1 Mapping between CONET Header and IP CONET Option . . . . . . . . . . . . . . . . . .                                 55
4.2 Functionalities disposition in the long term . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .             56
4.3 Mapping between CONET Header and MPLS tag . . . . . . . . . . . . . . . . . . . . . . . .                            58
4.4 Functionalities disposition in the short term . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            59
4.5 Mapping between CONET Header and IP Protocol and ports. . . . . . . . . . . . . . . . .                              59

5.1 Screenshot of Virtualbox usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .         64
5.2 Setting network options through VDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .            66
5.3 Network topology and entities of the experiments . . . . . . . . . . . . . . . . . . . . . . . . .                   67
5.4 Network topology and addressing schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                    68
5.5 Handling of ICMP messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .          79
5.6 Example of Scapy usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .       88
5.7 Handling of CONET packets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        89
5.8 Communication between Cache Server, Controller and Switch. . . . . . . . . . . . . . . .                             91
5.9 Cache Server main modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        93
5.10 JSON “stored” message receipt and Switch’s table updating . . . . . . . . . . . . . . . .                           101
5.11 JSON “deleted” message receipt and Switch’s table updating . . . . . . . . . . . . . . .                            102
5.12 Screenshot of the complete working simulation . . . . . . . . . . . . . . . . . . . . . . . . . .                   106





Listings
5.1 Mandatory structure for Python NOX components . . . . . . . . . . . . . . . . . . . . . . . .                         73
5.2 First realization of the install() function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.3 Snippet of packet_in_callback() function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.4 learn_l2() function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.5 forward_l2() function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.6 Handling of ICMP traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .    78
5.7 UDP packets format and definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .           81
5.8 CONET support within the IP packet structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.9 Handling of CONET packets - first part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.10 Handling of CONET packets - second part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.11 Handling of CONET packets - third part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .               87
5.12 Application that sends JSON messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.13 Installation of JSONMsg_event in Python components . . . . . . . . . . . . . . . . . . . . . . 96
5.14 Connection acceptance in json_message_callback() function . . . . . . . . . . . . . . . 96
5.15 Handling of JSONMsg_event - first part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.16 Handling of JSONMsg_event - second part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.17 New version of Sender application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105








Bibliography
[1]    T. Koponen, M. Chawla, B. Chun, A. Ermolinskiy, K.H. Kim, S. Shenker, I. Stoica, A
       Data-Oriented (and Beyond) Network Architecture, (2007)

[2]    A. Detti, N. Blefari Melazzi, Network layer solutions for a content-centric Internet,
       (2010)

[3]    V. Jacobson, D. Smetters, J.D. Thornton, M.F. Plass, N.H. Briggs, R.L. Braynard,
       Networking Named Content, (2009)

[4]    B. Adamson, C. Bormann, M. Handley, J. Macker, Multicast Negative-Acknowledgement
       (NACK) Building Blocks, (2008)

[5]    A. Ghodsi , T. Koponen, J. Rajahalme, P. Sarolahti, S. Shenker, Naming in Content-
       Oriented Architectures, (2011)

[6]    D. Smetters, V. Jacobson, Securing Network Content, (2009)

[7]    N. Blefari Melazzi, G. Bianchi, S. Salsano, S. Shenker, EXtending OpenFlow to
       Support a future Internet with a Content-Centric model (EXOTIC), OFELIA proposal,
       (2011)

[8]    D. Kutscher, P. Aranda, B. Levin et al., The Network of Information: Architecture
       and Applications, (2011)

[9]    N. Blefari Melazzi, CONVERGENCE: extending the media concept to include
       representations of Real World Objects, (2009)

[10]   N. Blefari Melazzi, M. Cancellieri, A. Detti, M. Pomposini, S. Salsano, The CONET
       solution for Information Centric Networking, Technical Report, (2012)

[11]   A. Detti, N. Blefari Melazzi, S. Salsano, M. Pomposini, CONET: A Content Centric
       Inter-Networking Architecture, (2011)

[12]   The GENI Project Office, GENI System Overview, (2008)

[13]   M. Suñé, The FP7 Ofelia project, presentation (2011)



[14]    N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford,
       S. Shenker, J. Turner, OpenFlow: Enabling Innovation in Campus Networks, (2008)
[15]   http://www.fp7-ofelia.eu/ofelia-facility-and-islands/

[16]   http://www.fp7-ofelia.eu/

[17]   http://www.openflow.org/

[18]   G. Gibb, KK Yap, M. Casado, M. Kobayashi, R. Sherwood et al., OpenFlow Switch
       Specification - Version 1.1.0 Implemented, (2011)

[19]   N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, S. Shenker,
       NOX: Towards an Operating System for Networks, (2008)

[20]   http://noxrepo.org/noxwiki/index.php/Main_Page

[21]   R. Sherwood, M. Chan, G. Gibb, N. Handigol, T. Huang, P. Kazemian, M.
       Kobayashi, D. Underhill, KK Yap, G. Appenzeller, N. McKeown, Carving Research
       Slices Out of Your Production Networks with OpenFlow, (2009)

[22]   R.Sherwood, G. Gibb, KK Yap, G. Appenzeller, M. Casado, N. McKeown, G.
       Parulkar, Can the Production Network Be the Testbed?, (2010)

[23]   N. Blefari Melazzi, A. Detti, M. Pomposini, S. Salsano, CONET: an Evolutionary
       Path to Information Centric Networking, (2011)

[24]   G. Mazza, G. Morabito, S. Salsano, Supporting COntent NETworking in OpenFlow,
       (2012)

[25]   A. Detti, S. Salsano, N. Blefari Melazzi, An IPv4 Option to support Content
       Networking, draft-detti-conet-ip-option-00, (2011)

[26]   L. Zhang, D. Estrin, J. Burke, V. Jacobson, J. Thornton, D. Smetters, B. Zhang, G.
       Tsudik, D. Krioukov, D. Massey, C. Papadopoulos, T. Abdelzaher, L. Wang, P.
       Crowley, E. Yeh, Named Data Networking (NDN) Project, (2010)

[27]   A. Detti, N. Blefari Melazzi, S. Salsano, M. Pomposini, CONET: A Content Centric
       Inter-Networking Architecture, (2012)

[28]   A. Ghodsi, T. Koponen, B. Raghavan, S. Shenker, A. Singla, J. Wilcox, Information-
       Centric Networking: Seeing the Forest for the Trees, (2011)

[29]   https://www.virtualbox.org/

[30]   http://vde.sourceforge.net/

[31]   https://forums.virtualbox.org/viewtopic.php?f=1&t=36492&p=201890#p201890




[32]   http://wiki.virtualsquare.org/wiki/

[33]   http://openvswitch.org/

[34]   http://noxrepo.org/noxwiki/index.php/Installation/DebianUbuntu

[35]   http://noxrepo.org/noxwiki/index.php/Developing_in_NOX

[36]   http://www.secdev.org/projects/scapy/



