OSI Reference Model
Layer 1 Hardware
The Open Systems Interconnection (OSI) reference model, developed by the International
Organization for Standardization (ISO), describes how data from an application on one computer can be
transferred to an application on another computer. The OSI reference model consists of seven
conceptual layers, each of which specifies different network functions. Each function of a network can be
assigned to one of these seven layers (or perhaps a couple of adjacent layers) and is relatively
independent of the other layers. This independence means that one layer does not need to know how
an adjacent layer is implemented, merely how to communicate with it. This is a major
advantage of the OSI reference model and one of the main reasons it has become one of the
most widely used architectural models for inter-computer communications.
The seven layers of the OSI reference model, as shown in Figure 1, are: the Physical layer (layer 1), the
Data Link layer (layer 2), the Network layer (layer 3), the Transport layer (layer 4), the Session layer
(layer 5), the Presentation layer (layer 6), and the Application layer (layer 7).
Diagram of the OSI Reference Model Layers
Revised June 4, 2008 Page 1 of 19
Over the next few articles I will be discussing each layer of the model and the networking hardware
which relates to that layer. This article, as you have probably guessed from the title, will discuss layer
1; the physical layer.
While many people may simply state that all networking hardware belongs exclusively in the physical
layer, they are wrong. Many networking hardware devices can perform functions belonging to the
higher layers as well. For example, a network router performs routing functions which belong in the
network layer (layer 3).
What does the physical layer include? Well, the physical layer involves the actual transmission of
signals over a medium from one computer to another. This layer includes specifications for the
electrical and mechanical characteristics of networking equipment, such as voltage levels, signal
timing, data rate, maximum transmission length, and physical connectors. For a device to operate solely
in the physical layer, it will not have any knowledge of the data which it transmits. A physical layer
device simply transmits or receives data.
There are four general functions which the physical layer is responsible for. These functions are:
Definitions of hardware specifications
Encoding and signaling
Data transmission and reception
Topology and physical network design
Definitions of Hardware Specifications
Each piece of hardware in a network will have numerous specifications. If you read my previous
article, Copper and Glass: A Guide to Network Cables, you will learn about some of the more
common specifications which apply to network cables.
These specifications include things like the maximum length of a cable, the width of the cable, the
protection from electromagnetic interference, and even the flexibility.
Another area of hardware specifications is the physical connectors. This includes both the shape
and size of the connectors as well as the pin count and layout, if appropriate.
Encoding and Signaling
Encoding and signaling is a very important part of the physical layer, and the process can get quite
complicated. For example, let's look at Ethernet. Most people learn that signals are sent as '1's and
'0's using a high voltage level and a low voltage level to represent the two states. While this is useful
for some teaching purposes, it is not correct. Signals over classic (10 Mbps) Ethernet are sent using
Manchester encoding. This means that '1's and '0's are transmitted as rises and falls in the signal.
Let me explain.
If you were to send signals over a cable where a high voltage level represents a '1' and a low voltage
signal represents a '0' the receiver would also need to know when to sample that signal. This is
usually done with a separate clock signal being transmitted. This method is called Non-Return-to-Zero
(NRZ) encoding, and it has some serious drawbacks. First, if you include a separate clock
signal you are basically transmitting two signals and doubling the work. If you don't want to transmit
the clock signal, you could include an internal clock in the receiver, but this must be in near perfect
synchronization with the transmitter clock. Even if you can synchronize the clocks, which
becomes much harder as the transmission speed increases, there is still the problem of keeping this
synchronization during a long stretch of the same bit being transmitted; it is the transitions
which help synchronize the clocks.
The limitations of NRZ encoding can be overcome by technology developed in the 1940s at the
University of Manchester (http://www.manchester.ac.uk/) in Manchester, UK.
Manchester encoding combines the clock signal with the data signal. While this does
increase the bandwidth required, it also makes the successful transmission of the data much
easier and more reliable.
A Manchester encoded signal transmits data as a rising or falling edge. Which edge represents the '1'
and which represents the '0' must be decided first, but both conventions are considered Manchester
encoded signals. Ethernet and the IEEE standards use the rising edge as a logical '1'; the original
Manchester encoding used the falling edge as a '1'.
You may be wondering what happens if you need to transmit two '1's in a row, since the signal will
already be high when the second '1' must be sent. This is not a problem, because the rising or falling
edge which represents data is transmitted in the middle of the bit boundaries; the edges of the bit
boundaries either contain a transition or do not, which puts the signal in the right position for
the next bit to be transmitted. The end result is that at the center of every bit there is a transition:
the direction of the transition represents either a '1' or a '0', and the timing of the transition is the clock.
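To make the mid-bit transition idea concrete, here is a minimal sketch in Python (my own illustration, not part of any standard implementation). It follows the IEEE convention described above: each bit becomes a pair of half-bit signal levels, and the rising or falling edge between them carries both the data and the clock.

```python
# Sketch of Manchester encoding (IEEE convention: rising edge = '1',
# falling edge = '0'). Each bit maps to two half-bit signal levels.

def manchester_encode(bits):
    """Map each bit to a (first-half, second-half) pair of levels."""
    out = []
    for b in bits:
        if b == 1:
            out += [0, 1]   # low-to-high = rising edge = '1'
        else:
            out += [1, 0]   # high-to-low = falling edge = '0'
    return out

def manchester_decode(levels):
    """Read the direction of the mid-bit transition in each pair."""
    return [1 if levels[i] < levels[i + 1] else 0
            for i in range(0, len(levels), 2)]

data = [1, 1, 0, 1, 0, 0]
signal = manchester_encode(data)
assert manchester_decode(signal) == data
```

Note how the two consecutive '1's at the start pose no problem: the bit-boundary between them simply contains an extra transition that puts the signal back in position for the next rising edge.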
While there are many other encoding schemes, many of which are much more advanced than NRZ or
Manchester encoding, the simplicity and reliability of Manchester encoding has kept it a valuable
standard still widely in use.
Data Transmission and Reception
Whether the network medium is an electrical cable, an optical cable, or radio frequency, there needs
to be equipment that physically transmits the signal. Likewise, there also needs to be equipment that
receives the signal. In the case of a wireless network, this transmission and reception is done by
highly engineered antennas which transmit, or receive, signals at predefined frequencies.
Optical transmission lines use equipment which can produce and receive pulses of light, the
presence, absence, or intensity of which is used to determine the logical value of the bit. Equipment
such as amplifiers and
repeaters, which are commonly employed in long-haul optical transmissions, are also included in the
physical layer of the OSI reference model.
Topology and Physical Network Design
The topology and design of your network is also included in the physical layer. Whether your network
is a token ring, star, mesh, or hybrid topology, the choice of which topology to use was made
with the physical layer in mind. Also included in the physical layer is the layout of a high availability
network.
In general, all you need to remember is that if a piece of hardware is not aware of the data being
transmitted, then it operates in the physical layer.
Layer 2 Hardware
In my last article, I introduced the Open Systems Interconnection (OSI) reference model and discussed its
first layer, the Physical layer. In this article I will discuss the second layer, the Data Link layer, from a
hardware perspective.
The data link layer provides functional and procedural methods of transferring data between two
points. There are five general functions which the Data Link layer is responsible for. These functions
are:
Logical Link Control
Media Access Control
Framing
Addressing
Error Detection and Handling
Logical Link Control
The Logical Link Control (LLC) is usually considered a sublayer of the Data Link layer (DLL), as
opposed to a function of the DLL. This LLC sublayer is primarily concerned with multiplexing protocols
to be sent over the Media Access Control (MAC) sublayer. The LLC does this by splitting up the data to
be sent into smaller frames and adding descriptive information, called headers, to these frames.
Media Access Control
Like LLC, the Media Access Control (MAC) is considered a sublayer of the DLL, as opposed to a
function of the DLL. Included in this sublayer is what is known as the MAC address. The MAC
address provides this sublayer with a unique identifier so that each network access point can
communicate with the network. The MAC sublayer is also responsible for the actual access to the
network cable, or communication medium.
Framing
If one were to simply send data out onto the network medium not much would happen. The receiver
has to know how, and when, to read the data. This can happen in a number of ways and is the sole
purpose of framing. In general terms, framing organizes the data to be transferred and surrounds this
data with descriptive information, called headers. What, and how much, information these headers
contain is determined by the protocol used on the network, like Ethernet.
The structure of a frame adhering to the Ethernet protocol is shown below in Figure 1.
Structure of an Ethernet Frame
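As an illustration of how framing surrounds data with headers, the sketch below builds a simplified Ethernet II style frame in Python. The addresses and payload are made-up examples, zlib's CRC-32 stands in for the frame check sequence, and real-world details such as the preamble and the 64-byte minimum frame padding are omitted.

```python
# Illustrative sketch of building a simplified Ethernet II frame:
# destination MAC, source MAC, EtherType, payload, then a CRC-32
# frame check sequence. Not a production implementation.
import struct
import zlib

def build_frame(dst_mac, src_mac, ethertype, payload):
    header = dst_mac + src_mac + struct.pack('!H', ethertype)
    body = header + payload
    fcs = struct.pack('<I', zlib.crc32(body))  # frame check sequence
    return body + fcs

dst = bytes.fromhex('ffffffffffff')              # broadcast address
src = bytes.fromhex('001122334455')              # example source MAC
frame = build_frame(dst, src, 0x0800, b'hello')  # 0x0800 = IPv4
assert len(frame) == 6 + 6 + 2 + 5 + 4
```

The point to take away is structural: the payload is untouched, and everything the receiver needs to interpret it is wrapped around it as headers and a trailer.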
Addressing
Addressing in layer 2 happens, as I mentioned earlier, with the MAC address of the MAC sublayer. It
is very important not to confuse this with network or IP addressing. It can be helpful to associate the
MAC address with a specific network access point and the network or IP address associated with an
entire device (i.e. a computer, server, or router).
Speaking of routers, keep in mind that routers operate in layer 3, not layer 2. Switches do operate in
layer 2, and therefore direct data based on layer 2 addressing (MAC addresses) while remaining
unaware of IP or network addressing. Hubs, by contrast, are layer 1 devices: they simply repeat
signals to every port and do not examine addresses at all. And, just so that I don't get an inbox filled
with complaints ... yes I know ... some routers also include layer 2 functionality. I will discuss routers
with layer 2 functionality in a future article.
Error Detection and Handling
Whenever data is sent over any kind of transmission medium, there exists a chance that the data will
not be received exactly as it was sent. This can be due to many factors including interference and, in
the case of long transmissions, signal attenuation. So, how can a receiver know if the data received is
error free? There are several methods that can be implemented to accomplish this. Some of these
methods are simple and somewhat effective – others are complicated and very effective.
Parity bits are an example of an error detection method that is simple and, despite its limited
effectiveness, widely used. A parity bit, simply put, is an extra bit added to a message. There
are two options for the value of this bit. Which value is chosen depends on the flavor of parity
detection in use. These two flavors are even and odd parity detection. If even parity is in use,
then the parity bit is set to the value ('1' or '0') to make the number of '1's in the message even.
Likewise, if odd parity is in use the parity bit is set to the value needed to make the number of '1's in
the message odd.
When using parity bit error detection the receiver will count the '1's in the frame, including the parity bit.
The receiver will have a setting for even or odd parity; if the number of '1's in the frame does not
match this setting, an error is detected. This is great, but as I mentioned earlier the effectiveness
of this error detection method is limited: if there is an even number of errors in the
frame then the evenness or oddness of the number of '1's will be maintained and this method will fail
to detect any errors; thus the need for a more rigorous error detection method.
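A quick Python sketch (illustrative only) shows even parity at work, and also demonstrates the two-error blind spot just described.

```python
# Sketch of even-parity generation and checking. The parity bit is
# chosen so the total number of '1's (data plus parity) is even.

def add_even_parity(bits):
    parity = sum(bits) % 2          # 1 if the count of '1's is odd
    return bits + [parity]

def check_even_parity(frame):
    return sum(frame) % 2 == 0      # True if no error detected

frame = add_even_parity([1, 0, 1, 1])   # three '1's -> parity bit is 1
assert check_even_parity(frame)

frame[1] ^= 1                           # single-bit error: detected
assert not check_even_parity(frame)

frame[2] ^= 1                           # second error: goes unnoticed
assert check_even_parity(frame)
```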
A checksum error detection method can give us more rigor, especially if used with a parity bit method.
A checksum method, as its name suggests, sums the contents of a message (typically byte by byte)
and checks that value against the checksum value added to the message by the sender. While a
checksum method can provide more rigor to your error detection efforts, there are still limitations. For
example, a simple checksum cannot detect errors which cancel out in the sum, an insertion
of bytes which sum to zero, or even the re-ordering of bytes in the message. While there are some
more advanced implementations of the checksum method, including Fletcher's checksum, I
will discuss an even more rigorous method here.
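Here is a minimal sketch of a simple bytewise checksum in Python; the modulo-256 sum is my own simplified choice rather than any particular protocol's checksum, and the last assertion demonstrates the re-ordering weakness mentioned above.

```python
# Sketch of a simple bytewise checksum: the sender appends the sum of
# all data bytes modulo 256, and the receiver recomputes it.

def checksum(data):
    return sum(data) % 256

message = b'network'
sent = message + bytes([checksum(message)])

received_data, received_sum = sent[:-1], sent[-1]
assert checksum(received_data) == received_sum   # passes: no error

reordered = b'krowten'                           # same bytes, reordered
assert checksum(reordered) == received_sum       # passes: error missed!
```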
One of the most rigorous methods of error detection is the cyclic redundancy check (CRC). A CRC
converts the message to a polynomial whose coefficients correspond to the bits in the message and
then divides that polynomial (modulo 2) by a predetermined, or standard, polynomial called the
generator (sometimes referred to as a key). The remainder of this division is what is sent along
with the message to the receiver. The receiver performs the same polynomial division with the same
generator and then checks the result. If the results match, then the chances are pretty good that there
were no errors. I say pretty good because there are many possible polynomials one could use for a
generator, and not all polynomials provide equally good error detection. As a general rule, longer
polynomials provide better error detection, but the mathematics involved is quite complex
and, as with many aspects of technology, there is some debate as to which implementations of this
method provide the best error detection.
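The division process can be sketched in a few lines of Python. I use CRC-8 with the generator 0x07 purely as an example; real networks use standardized generators, such as the CRC-32 polynomial in Ethernet's frame check sequence.

```python
# Bitwise sketch of a cyclic redundancy check (CRC-8, generator 0x07,
# i.e. x^8 + x^2 + x + 1). The message is divided modulo 2 by the
# generator polynomial; the remainder travels with the message.

def crc8(data, poly=0x07):
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:                       # high bit set: divide
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

message = b'hello'
remainder = crc8(message)

# The receiver divides the message plus the appended remainder by the
# same generator; a zero result means the data is (very likely) intact.
assert crc8(message + bytes([remainder])) == 0
```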
Lastly, I would like to point out that these error detection methods are not limited to transmissions of
data over a network medium; they can be used equally well in a data storage scenario where one
wants to check that the data has not been corrupted.
In my next article I will discuss layer 3 of the OSI model. I will also explain in a little more detail why
routers (mostly) belong in the 3rd layer and not the 2nd. And as always, if you have any questions
about this or any previous article, please do not hesitate to email me and I will do my best to answer
any and all questions.
Layer 3 Hardware
In my last two articles I discussed the Open Systems Interconnection (OSI) reference model and its first
two layers. In this article I will discuss the third layer, the network layer. The network layer is
concerned with getting data from one computer to another. This is different from the data link layer
(layer 2) because the data link layer is concerned with moving data from one device to another
directly connected device. For example, the data link layer is responsible for getting data from the
computer to the hub it is connected to, while the network layer is concerned with getting that same
data all the way to another computer, possibly on the other side of the world.
The network layer moves data from one end point to another by implementing the following functions:
Addressing
Routing
Encapsulation
Fragmentation
Error handling
Congestion control
Those who have read my previous article may be curious why layer 3 implements addressing when I
also said that layer 2 implements addressing. To cure your curiosity, remember that I wrote that the
layer 2 address (the MAC address) corresponds to a specific network access point as opposed to an
address for an entire device like a computer. Something else to consider is that the layer 3 address is
purely a logical address which is independent of any particular hardware; a MAC address is
associated with particular hardware and hardware manufacturers.
An example of layer 3 addressing is Internet Protocol (IP) addressing. An illustration of an IP
address can be seen in Figure 1.
Illustration of an IP Address
It is the job of the network layer to move data from one point to its destination. To accomplish this, the
network layer must be able to plan a route for the data to traverse. A combination of hardware and
software routines accomplishes this task, which is known as routing. When a router receives a packet from a
source it first needs to determine the destination address. It does this by removing the headers
previously added by the data link layer and reading the address from the predetermined location
within the packet as defined by the standard in use (for example, the IP standard).
Once the destination address is determined the router will check to see if the address is within its own
network. If the address is within its own network the router will then send the packet down to the data
link layer (conceptually speaking, that is), which will add headers as I described in my previous article
and will send the packet to its destination. If the
address is not within the router's own network, the router will look up the address in a routing table. If
the address is found within this routing table the router will read the corresponding destination network
from the table and send the packet down to the data link layer and on to that destination network. If
the address is not found in this routing table the packet will be sent for error handling. This is one
source of errors which can be seen in data transmission across networks, and is an excellent example
of why error checking and handling is required.
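The lookup logic just described can be sketched as follows. The routing table, network prefixes, and next-hop addresses here are made up for illustration, and real routers perform longest-prefix matching on binary addresses rather than the simple /24 string comparison assumed in this sketch.

```python
# Simplified sketch of a router's forwarding decision: deliver locally,
# forward via a known next hop, or fall through to error handling.
# Hypothetical table; real routers use longest-prefix matching.

ROUTING_TABLE = {
    '10.1.2': 'eth1',        # directly connected network
    '10.9.0': '10.1.2.254',  # reachable via a next-hop router
}
OWN_NETWORK = '10.1.2'

def route(destination_ip):
    prefix = '.'.join(destination_ip.split('.')[:3])  # crude /24 prefix
    if prefix == OWN_NETWORK:
        return 'deliver locally'              # hand to the data link layer
    if prefix in ROUTING_TABLE:
        return 'forward via ' + ROUTING_TABLE[prefix]
    return 'ICMP destination unreachable'     # error handling path

assert route('10.1.2.7') == 'deliver locally'
assert route('10.9.0.1') == 'forward via 10.1.2.254'
assert route('8.8.8.8') == 'ICMP destination unreachable'
```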
When a router sends a packet down to the data link layer which then adds headers before
transmitting the packet to its next point, this is an example of encapsulation for the data link layer.
Like the data link layer, the network layer is also responsible for encapsulating data it receives from
the layer above it. In this case it would be from the data received from layer 4, the transport layer.
Actually, every layer is responsible for encapsulating data it receives from the layer above it. Even the
seventh and last layer, the application layer, does this, because an application encapsulates data it
receives from the user.
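The nesting of headers can be sketched very simply; the header contents below are placeholder strings rather than real protocol headers.

```python
# Sketch of encapsulation: each layer wraps the data from the layer
# above with its own header. Placeholder headers, not real protocols.

def encapsulate(payload, header):
    return header + payload

app_data = b'GET /index.html'                  # application layer data
segment  = encapsulate(app_data, b'[TCP]')     # transport layer
packet   = encapsulate(segment, b'[IP]')       # network layer
frame    = encapsulate(packet, b'[ETH]')       # data link layer

assert frame == b'[ETH][IP][TCP]GET /index.html'
```

On the receiving side the process runs in reverse: each layer strips its own header and passes the remainder up.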
When the network layer sends data down to the data link layer it can sometimes run into trouble. That
is, depending on what type of data link layer technology is in use, the data may be too large. This
requires that the network layer have the ability to split the data up into smaller chunks which can each
be sent to the data link layer in turn. This process is known as fragmentation.
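Here is a minimal sketch of fragmentation, assuming a maximum transmission unit (MTU) dictated by the data link technology below; real IP fragments also carry offset fields in their headers so the receiver can reassemble them in order.

```python
# Sketch of fragmentation: split data into MTU-sized chunks that the
# data link layer can carry. Offset headers are omitted here.

def fragment(data, mtu):
    return [data[i:i + mtu] for i in range(0, len(data), mtu)]

packet = b'a packet too large for the link below'
fragments = fragment(packet, 10)

assert all(len(f) <= 10 for f in fragments)
assert b''.join(fragments) == packet   # reassembly restores the data
```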
Error handling is an important aspect of the network layer. As I mentioned earlier, one source of errors
is when routers do not find the destination address in their routing table. In that case, the router needs
to generate a destination unreachable error. Another possible source of errors is the TTL (time to live)
value of the packet. If the network layer determines that the TTL has reached a zero value, a time
exceeded error is generated. Both the destination unreachable error and the time exceeded error
messages conform to specific standards as defined in the Internet Control Message Protocol (ICMP).
Fragmentation can also cause errors. If reassembly of the fragments at the destination takes too long,
the device can throw an ICMP time exceeded error.
Another responsibility of the network layer is congestion control. As I am sure you know, any given
network device has an upper limit as to the amount of throughput the device can handle. This upper
limit is always creeping upward but there are still times when there is just too much data for the device
to handle. This is the motivation for congestion control.
There are many theories for how to best accomplish this, most of which are quite complicated and
beyond the scope of this article. The basic idea of all of these methods is that you want to make the
data senders compete for their messages to be the ones to get accepted into the throughput. The
congested device wants to do this in a way that lowers the overall amount of data it is receiving. This
can be accomplished by 'punishing' the senders which are sending the most data, which causes those
senders to 'slow' their sending activity to avoid the punishment, thereby reducing the amount of
data seen by the congested device (which at this point is no longer congested).
Author's rant: The congestion control algorithms are quite complex for various reasons. Firstly, the
mathematics involved is intense. So, for all of you who have ever wondered why people study
mathematics in university and what job they could possibly get with that education.... this is an
important one, and one that pays well with networking companies such as Cisco and Nortel.
Secondly, after you have determined the proper mathematics to accomplish this task, how can it be
implemented in an efficient and fast manner? This is the domain of engineers, who need to understand
the mathematics, possible software implementation strategies, possible hardware implementation
strategies, and design methodologies. Many people, including those who work in the tech industry, do
not really understand what these, and other, professions bring to the table: they should. It is important.
Layer 4 Hardware
The Transport layer provides the functionality to transfer data from one end point to another across a
network. The Transport layer is responsible for flow control and error recovery. The upper layers of
the OSI Reference Model see the Transport Layers as a reliable, network independent, end-to-end
service. An end-to-end service within the transport layer is classified in one of five different levels of
service; Transport Protocol (TP) class 0 through TP class 5.
TP class 0
TP class 0 is the most basic of the five classification levels. Services classified at this level perform
segmentation and reassembly.
TP class 1
TP class 1 services perform all of the functions of those services classified at TP class 0 as well as
error recovery. A service at this level will retransmit data units if they were not received by the
destination.
TP class 2
TP class 2 services perform all of the functions of those services classified at TP class 1 as well as
multiplexing and demultiplexing (more on this below).
TP class 3
TP class 3 services perform all of the functions of those services classified at TP class 2 as well as
sequencing of the data units to be sent.
TP class 4
TP class 4 services perform all of the functions of those services classified at TP class 3 as well as
the ability to provide its services over either a connection oriented or connectionless network. This
class of Transport Protocols is the most common and is very similar to the Transmission Control
Protocol (TCP) of the Internet Protocol (IP) suite.
I say that TP class 4 is only very similar to TCP, and not identical, because there are some key
differences. TP class 4 uses 10 data unit types while TCP uses only one. This means that TCP is
simpler, but it also means that TCP must contain many headers. TP class 4, while more complicated,
can contain a quarter of the headers that TCP contains, which obviously reduces a lot of overhead.
Connection Oriented Networks
Connection oriented networks are like your telephone. A connection is made before data is sent and
is maintained throughout the entire process of sending data. With this type of network, routing
information only needs to be sent while setting up the connection and not during data transmission.
This reduces a lot of overhead which improves communication speed. This type of communication is
also very good for applications, like voice or video communications, where the order of the data
received is especially important.
Connectionless Networks
Connectionless networks are the opposite of connection oriented networks, in that they do not set up
a connection prior to sending data. Nor do they maintain any connection between two end points. This
requires that routing information is sent with each packet, which therefore increases the overhead.
Keep in mind that just because data is being sent in packets does not mean that it is a connectionless
network; virtual circuits are an example of a connection oriented network that use packets.
Since, in my previous articles, I have already covered aspects of error detection and recovery, and
since this article is focused on hardware, I am going to give a basic introduction to a widely known (yet
poorly understood) aspect of the Transport layer: multiplexing and demultiplexing.
Multiplexing (or muxing as it is often referred to) is one of those words that people often hear while not
really understanding what it means. Many people may know that muxing is the process of combining
two or more signals into one signal, but how exactly is that done? Well, there are multiple ways in
which this can be done. Digital signals can be muxed in one of two ways: time-division multiplexing
(TDM) and frequency-division multiplexing (FDM). Optical signals use a method called wavelength-
division multiplexing (WDM), although this is essentially the same thing as FDM (wavelength of course
being inversely proportional to frequency).
To demonstrate how muxing works, let's take a simple case of TDM. In this example let's assume a
two signal input. A two input muxing device will require three inputs; one for each of the signals and
one for the control signal. A two input muxing device will also have one output. This device will
alternate between the two input signals putting the resulting signal onto its output.
Logic Gate Schematic of a Two Input Mux
Figure 1, above, shows a two input mux. The two signals are represented as d0 and d1 while the
control signal is represented as c. The output, which is a function of the two inputs, is represented as f.
The symbols in this figure are standard symbols for representing logic gates. Figure 2, shows the
meaning of these three gates.
Basic Logic Gates
The mux works by receiving a digital signal on the c input. This c signal goes directly to one input of
the first 'AND' gate, and to the 'NOT' gate. The 'NOT' gate inverts the signal and then sends it to one
input of the second 'AND' gate. The output of an 'AND' gate will only be high when both the control
signal and the input signal (d0 or d1) are high. Since the control signal is sent through a 'NOT' gate
prior to reaching the second 'AND' gate, only one of the two 'AND' gates will see a high control signal
at any one instant in time. The result is that f will alternate between being equal to d0 and being equal
to d1 at the frequency of c.
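The gate network just described boils down to the Boolean expression f = (d0 AND NOT c) OR (d1 AND c); which data input pairs with the inverted control signal is simply a labeling choice. A quick sketch in Python makes the selecting behavior visible.

```python
# Sketch of the two-input mux from Figure 1 as a Boolean function:
# f = (d0 AND NOT c) OR (d1 AND c). When the control signal c is low
# the output follows d0; when c is high it follows d1.

def mux(d0, d1, c):
    return (d0 and not c) or (d1 and c)

# Exhaustively check: f equals d0 when c is low, d1 when c is high.
for d0 in (False, True):
    for d1 in (False, True):
        assert mux(d0, d1, c=False) == d0
        assert mux(d0, d1, c=True) == d1
```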
Now you might be thinking "that's great, but who cares about getting half of each signal?" Well, that
does not necessarily have to be the case. If the frequency of the control signal is at least twice the
highest frequency of the input signals, then the output f will contain enough information about both d0
and d1 that a demuxer will be able to reconstruct the original input signals. This is the core idea of the
Nyquist-Shannon sampling theorem.
Looking at the logic gates in Figures 1 and 2, those of you with programming or scripting experience
will recognize these logic functions as common tools in a programmer's repertoire. Keep in mind that
while these functions are found in software programs, I am strictly talking about hardware functions
which are carried out with a series of transistors, acting as switches, arranged in clever ways to
achieve these logic functions.
A demuxer is basically the opposite of a muxer. A demuxer will have one input signal, and in the case
described above will have two output signals. A demuxer, of course, also has a control signal,
although with demuxers it is often called the address signal. It is called an address signal
because the demuxing circuit can also be used to simply choose which output pin to
put the input signal on.
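The demuxer can be sketched the same way: the input is routed to the output selected by the address signal, and the other output stays low.

```python
# Sketch of a 1-to-2 demux: the address (control) signal selects which
# output carries the input; the unselected output is held low.

def demux(d, addr):
    out0 = d and not addr    # selected when addr is low
    out1 = d and addr        # selected when addr is high
    return out0, out1

assert demux(True, addr=False) == (True, False)
assert demux(True, addr=True) == (False, True)
assert demux(False, addr=True) == (False, False)
```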
Layer 5 Hardware
In the previous few articles I have discussed the first four layers of the Open Systems Interconnection
(OSI) Reference Model. In this article I will discuss the fifth. The fifth layer of the OSI Reference Model
is called the Session layer. This layer, as you might imagine, is responsible for the management of
sessions between two communicating end points. This includes authentication, set up, termination,
and reconnection if needed.
One of the more interesting aspects of the session layer, or rather of the protocols that implement its
functionality, is the duplex level. When two end points are communicating with each other they can
communicate in simplex mode, full-duplex mode, half-duplex mode, or a full-duplex emulation mode.
Simplex communication is a one way only type of communication that flows from the designated
transmitter to the designated receiver. The radio in your car is an example of this: the station transmits
the played-too-often music along with the occasional not-funny joke, which is received by the radio in
your dashboard; the car radio does not communicate back to the station in any way. Figure 1 shows a
diagram of a simplex communication.
A simplex Communication Diagram
Full-duplex means that communication can occur in both directions at the same time. Ethernets are
examples of a full-duplex medium; with twisted pair cables one pair of twisted wires can be used for
transmitting and another pair used for receiving. This of course refers to newer Ethernets as older
ones employing coax cabling are strictly half-duplex (is anyone still using these networks?). Ethernets
using fiber optics are also full-duplex. Figure 2 shows a diagram of a full-duplex communication.
Half-duplex communication means that communication between two end points can only occur in one
direction at a time. Thinnet and thicknet Ethernets are an example of half-duplex systems as are
many walkie-talkie devices. Figure 2 also shows a diagram of a half-duplex communication system.
Half-duplex and Full-duplex Communication Diagram
Half-duplex systems may seem somewhat antiquated to many readers of this website as modern
computing networks are built for full-duplex communication which typically provides better
performance for the users. However, there are many situations where simplex or half-duplex are
desirable. Networks designed to feed information from one source to many different end points may
not have use for the capability to receive messages from the end points. RSS feeds are an example
of a system which sends information out to end users but does not receive information even though it
is communicating over a medium which is capable of full-duplex.
Many industrial networks also have no need for full-duplex communication. Think of a widget
manufacturer with a widget factory which has a widget conveyor belt. If a big order for widgets comes
in, the supervisor may want to speed up the conveyor belt. Perhaps the factory has a Programmable
Logic Controller (PLC) which the supervisor could use to increase the speed of the conveyor belt: the
conveyor belt controller would receive a signal to increase the speed, then send back an acknowledge
signal with the current speed, which is in turn displayed on the PLC. In this example there is absolutely
no need for full-duplex communication; in fact, full-duplex would only make the system more costly
and, as always, more components means more things to maintain. This is why many industrial
networks, such as this widget factory, choose half-duplex communication systems for many of their
needs.
For many applications full-duplex is desired even though the medium only supports half-duplex. In
these situations there are ways to emulate full-duplex communication, which may be more attractive
than upgrading the network to true full-duplex.
TIME DIVISION DUPLEXING
Time division duplexing (TDD) is much like time division multiplexing in that it uses the same
medium to both send and receive signals under the control of a clock. In TDD, the forward and
reverse signals share the same medium and are each assigned time slots. One big advantage of this
method of full-duplex emulation is that if the amount of data flowing in each direction is quite
variable, the time slot allocation can be optimized, even dynamically, to favor one direction and
suit the needs of the application. The IEEE 802.16 standard for WiMAX (Worldwide Interoperability
for Microwave Access) allows the use of either TDD or Frequency Division Duplexing. Because of this
dynamic time slot allocation capability, TDD is best suited to asymmetric data communication such
as that found on the Internet.
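The dynamic slot allocation described above can be sketched in a few lines. This is a hypothetical illustration of the principle, not the actual IEEE 802.16 scheduler; the function name and the simple proportional policy are my own assumptions.

```python
def allocate_slots(frame_slots, down_demand, up_demand):
    """Split a TDD frame's time slots between downlink and uplink in
    proportion to current traffic demand (illustrative sketch only)."""
    total = down_demand + up_demand
    if total == 0:
        # Idle link: split the frame evenly.
        return frame_slots // 2, frame_slots - frame_slots // 2
    down = round(frame_slots * down_demand / total)
    # Keep at least one slot in each direction so the link stays duplex.
    down = max(1, min(frame_slots - 1, down))
    return down, frame_slots - down

# Asymmetric web traffic: far more data flows downstream.
print(allocate_slots(10, down_demand=900, up_demand=100))  # (9, 1)
```

Re-running the allocation every frame as demand changes is what makes the allocation dynamic.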
FREQUENCY DIVISION DUPLEXING
Frequency division duplexing (FDD) is a term used for a communication system which assigns one
frequency for uploading and another for downloading. In this type of full-duplex emulation, both
transmitting and receiving can happen over the same medium, but there must be a frequency offset
(the bandwidth between the upload and download frequencies) so that the two data streams do not
interfere with each other. This frequency offset can be a major disadvantage for some systems. Take
WiMAX for example: FDD is supported, although it means that the communication between two end
points takes up more of the available frequency spectrum. On the other hand, TDD can have greater
inherent latency and requires more complex circuitry which may be more power hungry, as well as a
time offset between the allocated time slots.
FDD systems are typically more advantageous for communication applications which require equal
upload and download bandwidths, which would make the dynamic time slot allocation of TDD
unnecessary. Most cellular systems work on FDD.
ECHO CANCELLATION
Echo cancellation is another method of full-duplex emulation. In a communication mode which uses
echo cancellation, both end points put data onto the same medium at the same frequency at the same
time, and each end point receives all data put onto the medium, including the data it sent itself.
Each end point must then isolate the data it sent itself and read all other data. Telephone
networks use echo cancellation, which can be implemented in either hardware or software. Some forms
of echo are desirable, however. For example, when you speak into a phone your voice is fed to the
ear piece even before it is transmitted to the person you are calling; this is desirable because if
you couldn't hear yourself speak as you are speaking, you would think the phone wasn't working.
Other applications, such as a dial-up modem, are more sensitive to this echo and need to cancel it
in order to work properly.
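The core idea, subtracting what you transmitted from what you received, can be sketched as follows. A real canceller estimates the echo path (gain and delay) adaptively, for instance with an LMS filter; here the gain is assumed known purely for clarity.

```python
def cancel_echo(received, sent, echo_gain):
    """Remove the locally transmitted signal from the received mix.
    echo_gain is assumed known; real systems must estimate it."""
    return [r - echo_gain * s for r, s in zip(received, sent)]

far = [0.1, -0.2, 0.3]        # signal from the other end point
sent = [1.0, 0.5, -1.0]       # what this end point transmitted
line = [f + 0.4 * s for f, s in zip(far, sent)]  # medium carries both
recovered = cancel_echo(line, sent, echo_gain=0.4)
# recovered is (up to rounding) the far-end signal alone
```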
In my next article I will discuss the sixth layer of the OSI Reference Model: the Presentation
Layer. As always, if there are any questions or comments on this article please feel free to send
me an email and I will do my best to get back to you promptly.
Layer 6 Hardware
In my last five articles I have written about the lower five layers of the Open Systems
Interconnect (OSI) Reference Model. In this article I will discuss the sixth. Layer 6, the
Presentation Layer, is the first layer concerned with transmitting data across a network at a more
abstract level than just ones and zeros; for instance, when transmitting letters, how are they
represented as ones and zeros (or rather, how are they 'presented' to the lower layers of the OSI
Reference Model)?
This functionality is referred to as translation, and allows different applications (often on different
computing hardware) to communicate using commonly known standards of translation, called transfer
syntax. Besides transfer syntaxes which can represent strings as ones and zeros, there are others
which can transfer more complex data, like objects in Object Oriented Programming languages.
Extensible Markup Language (XML) is an example of this.
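As a sketch of translation into a transfer syntax, the following hypothetical example encodes a simple record as XML and decodes it back; the element and field names are illustrative, not from any particular application.

```python
import xml.etree.ElementTree as ET

# A structured record to transmit; the field names are made up.
message = {"user": "alice", "id": "42"}

# Encode ("present") the record in a transfer syntax both ends agree
# on, independent of how each host represents the data internally.
root = ET.Element("message")
for key, value in message.items():
    ET.SubElement(root, key).text = value
wire_bytes = ET.tostring(root, encoding="utf-8")

# The receiving application decodes the same bytes back into a record.
decoded = {child.tag: child.text for child in ET.fromstring(wire_bytes)}
```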
Another important function of the presentation layer is compression. Compression is often used to
maximize the use of bandwidth across a network, or to optimize disk space when saving data.
Revised June 4, 2008 Page 13 of 19
OSI Reference Model
There are two general types of compression: lossless and lossy. Lossless compression, as its name
suggests, will compress data in such a way that when decompressed the data will be exactly the
same as before it was compressed; there is no loss of data. Lossless compression will typically
not compress a file as much as lossy compression techniques, and may take more processing power to
accomplish the compression; these are the trade-offs one must consider when choosing a compression
method.
One common way to implement lossless data compression is to use a dictionary. This method, often
called a substitution coder, searches for matches between the message to be sent and entries in
the dictionary. For example, you could use a complete English dictionary and, when compressing the
contents of a book, simply replace each word with that word's location in the dictionary.
Decompressing works in the opposite way: the locations are replaced by the words found at those
locations.
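A minimal sketch of such a substitution coder, assuming both ends hold the same word list (here a tiny made-up dictionary rather than a complete English one):

```python
def dict_compress(text, dictionary):
    """Replace each word with its index in a shared dictionary."""
    return [dictionary.index(word) for word in text.split()]

def dict_decompress(indices, dictionary):
    """Reverse the substitution: indices back to words."""
    return " ".join(dictionary[i] for i in indices)

shared = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
packed = dict_compress("the quick brown fox", shared)
print(packed)  # [0, 1, 2, 3]
```

The compressed form is only smaller if an index takes fewer bits to store than the word it replaces, which is why real substitution coders use carefully constructed dictionaries.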
Substitution coders can also be much more complex than the example above. For instance, the LZ77
and LZ78 algorithms work with a dictionary referred to as a sliding window. A sliding window
dictionary is a dictionary which changes throughout the compression process. Basically, a sliding
window dictionary contains every sub-string seen in the last N bytes of data already compressed.
When using a sliding window dictionary, the compressed data will require two values to identify the
string instead of just one. The two values are the location of the start of the sub-string, which states
that the sub-string is found in the sliding window starting X number of bytes before the current location,
and the length of the sub-string.
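The sliding-window idea can be sketched as a toy encoder and decoder that emit (offset, length, next-character) tokens. This illustrates the principle only; it is not a production LZ77 codec, and the tiny window size is an arbitrary choice.

```python
def lz77_tokens(data, window=16):
    """Emit (offset, length, next_char) tokens: offset counts bytes back
    from the current position to the start of the matching sub-string."""
    i, tokens = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        # Search the sliding window (the last `window` bytes) for the
        # longest match with the upcoming data.
        for j in range(max(0, i - window), i):
            length = 0
            while (i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        tokens.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

def lz77_decode(tokens):
    """Rebuild the data by copying from `offset` bytes back."""
    out = []
    for off, length, nxt in tokens:
        for _ in range(length):
            out.append(out[-off])
        out.append(nxt)
    return "".join(out)

tokens = lz77_tokens("abcabcabc")
print(tokens)  # three literals, then one back-reference covering the repeats
```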
Another basic example of lossless compression is run-length encoding. A run-length encoding
algorithm replaces a subset of data which is repeated many times with the data subset and a number
representing the number of repetitions. A real-life example where run-length encoding is quite
effective is the fax machine. Most faxes are white sheets with occasional black text, so a run-
length encoding scheme can take each line and transmit a code for white and the number of pixels,
then the code for black and the number of pixels, and so on. Because most of the fax is white, the
length of the transmitted message is greatly reduced.
One must use this method of compression carefully. If there is not a lot of repetition in the data then it
is possible that the run-length encoding scheme would actually increase the size of a file.
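Both the fax example and the expansion caveat show up in a short run-length encoding sketch (the scan-line strings below are made up for illustration):

```python
def rle_encode(line):
    """Encode a scan line as (symbol, run_length) pairs."""
    runs, i = [], 0
    while i < len(line):
        j = i
        while j < len(line) and line[j] == line[i]:
            j += 1
        runs.append((line[i], j - i))
        i = j
    return runs

# A mostly-white fax line collapses to three runs...
print(rle_encode("W" * 12 + "BB" + "W" * 6))  # [('W', 12), ('B', 2), ('W', 6)]

# ...but data with no repetition produces one run per symbol,
# which takes more space than the original.
print(rle_encode("WBWBWB"))  # six runs of length 1
```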
Of course it is not always feasible or desirable to use a lossless compression technique. In many
instances lossless compression methods will simply not compress the data enough to be useful. In
other instances, lossless compression will take too much processing power to compress and/or
decompress, and in many situations lossy compression methods can give results virtually
indistinguishable to humans. Figure 1 shows a graph of relative compression speeds.
Graph of Compression Speeds
DIGITAL IMAGE COMPRESSION
Compressing digital images is a situation where one should be careful when choosing between a
lossless and a lossy compression method. Often the choice depends on the image being compressed.
Images where fine details are critically important, like medical images, will most likely require
lossless compression, while photos of your family vacation could probably benefit from the reduced
file sizes provided by lossy compression methods.
In the case of your family vacation photos, choosing a lossy compression method does not mean you
will end up with poor quality photos. In fact many lossy compression methods for digital images can
take advantage of the fact that the human eye is more sensitive to brightness than to slight changes in
color. This means that the compression method will save very similar colors as the same color while
saving the brightness data in a lossless fashion. This is called chroma sub-sampling.
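A toy sketch of chroma sub-sampling: each 2x2 block of a colour (chroma) plane is averaged into one value, while the brightness plane would be kept at full resolution. The plain-list representation is an assumption for illustration; real codecs operate on YCbCr image planes.

```python
def subsample_chroma(chroma):
    """Average each 2x2 block of a chroma plane into a single value,
    quartering the amount of colour data stored."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x + 1]
              + chroma[y + 1][x] + chroma[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

cb = [[100, 102, 200, 202],
      [101, 101, 201, 201]]
print(subsample_chroma(cb))  # [[101, 201]]
```

Because neighbouring chroma values are usually very similar, the averaged plane looks almost identical to the eye despite holding a quarter of the data.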
DIGITAL AUDIO COMPRESSION
Another example where lossy compression methods may be a good choice is with digital audio
compression. Lossy digital audio compression techniques take advantage of a field of study known as
psychoacoustics. Basically, psychoacoustics is the study of how humans hear and perceive sound.
One aspect of psychoacoustics relevant to digital audio compression is the fact that humans can
only hear sounds at frequencies between 20 Hz and 20 kHz. Many lossy digital audio compression
techniques take advantage of this and do not save any information related to frequencies outside of
this range.
Also related to the frequency range which humans can hear, is the fact that sounds must be louder to
be heard at all at higher frequencies. This means that lossy compression techniques can sample
sounds of low intensity at these frequencies much less rigorously, or not at all. This also means that
designers of these compression techniques can 'hide' any noise artifacts (as a result of the
compression) in these high frequencies where they will not be perceived.
Another aspect of psychoacoustics used extensively in lossy digital audio compression is a
phenomenon called masking, where a loud sound causes a quieter sound occurring at the same time to
be inaudible. This is frequency dependent, but it is nevertheless widely exploited by engineers in
the audio compression field. Basically, when there is a loud sound on an audio track, the
compressed file will not save data related to other sounds at that instant. The result, if done
carefully, is that the human ear will perceive all sounds the same as in the original recording.
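As a rough sketch of masking, the following hypothetical filter discards spectral components far quieter than the loudest one present at the same instant. The flat threshold ratio is an assumption for illustration; real psychoacoustic models use frequency-dependent masking curves.

```python
def apply_masking(components, mask_ratio=0.05):
    """Drop (frequency, amplitude) components too quiet to be heard
    alongside the loudest component at this instant."""
    loudest = max(amp for _, amp in components)
    return [(f, amp) for f, amp in components if amp >= loudest * mask_ratio]

# A loud 440 Hz tone masks a faint 450 Hz tone; the 880 Hz tone survives.
spectrum = [(440, 1.0), (450, 0.01), (880, 0.3)]
kept = apply_masking(spectrum)
print(kept)  # [(440, 1.0), (880, 0.3)]
```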
One area where lossless digital audio compression is gaining popularity is digital archiving. Audio
engineers and consumers who want to save an exact copy of their audio files are increasingly
turning to lossless digital audio compression, in part because the cost of digital storage is
dropping and people can afford to use the extra storage space for this purpose.
Despite the low cost of digital storage, lossy digital audio compression is still king when it comes to
portable storage of music. For example, your iPod will use lossy digital audio compression because
when you want to carry the storage around with you, there is only so much data you can store; using
lossy compression will allow you to carry more songs with you. Another area where lossy
compression is king is in audio streaming. Even though the cost of bandwidth has decreased
significantly in recent years, there is still the need to reduce the bandwidth used by many applications.
So, everything from online radio to VoIP applications tend to use lossy compression techniques.
Layer 7 Hardware
The seventh and final layer of the OSI Reference Model is the Application Layer. The Application
Layer is arguably the most important layer of the OSI Reference Model, because without interesting
network applications there would be no need to have a network. All of the ways that we interact
with the network are through network applications. That is, web browsers, email programs, instant
messaging applications, Voice over Internet Protocol (VoIP) applications, and many more are all
network applications that interact with the lower layers of the OSI Reference Model.
There are three general functions provided by implementations of the Application Layer:
Ensuring that all necessary system resources are available.
Matching the application to the appropriate application protocol.
Synchronizing the transmission of data from the application to the application protocol.
Application Layer Protocols
The Application Layer contains both network applications and application protocols. Application
protocols are basically rules for how to communicate with that application. Many Application Layer
protocols are publicly available, such as the Hyper Text Transfer Protocol (HTTP). This means that
any web browser which adheres to the HTTP protocol can transfer any file from a web server which
also implements the HTTP protocol. This web browser, the web server, and the HTTP protocol
together make up the network application.
Some Application Layer protocols are proprietary and therefore not available to the public; VoIP
protocols are an example of this. This is why you cannot use a generic user interface to access your
Skype account; you must use Skype's user interface.
Software vs Hardware
When most people think of Application Layer protocols like HTTP, SMTP, or POP3, they also think of
the software applications which act as the interface for these protocols. But this is not always
the case. With a little thought we can easily think of examples where the interface is implemented
in hardware. For example, take many of today's cordless phones which are capable of connecting to
one's VoIP account. While there is software on these phones, it is easy to imagine that the
majority of the work is done by hardware: your voice is collected by a microphone and processed by
hardware inside the phone so that it is compatible with the proprietary VoIP application protocol.
This hardware can be either an Application Specific Integrated Circuit (ASIC) or a Field
Programmable Gate Array (FPGA).
Another example of a hardware implementation of an Application Layer protocol is found within
Bluetooth. Bluetooth, in its entirety, covers many layers of the OSI Reference Model, but we will
focus on its Application Layer implementations. Within Bluetooth devices you can find many
applications falling within the Application Layer. One such application allows a wireless ear
piece, like the one shown in Figure 1, to communicate with a cell phone in your pocket. In this
case, the ear piece, which has a Bluetooth chip inside, converts the signal it receives from the
phone to a form acceptable to the speaker entirely through hardware. Likewise, the ear piece
receives the signal of your voice from the microphone and converts it to a form acceptable to the
Bluetooth chip, which then sends the signal to your phone. This is all done through hardware.
File Transfer Protocol
One of the most common software applications which fall within the OSI Application Layer is the
File Transfer Protocol (FTP); or rather, software applications which implement FTP fall within the
Application Layer.
FTP allows for the transfer of files over a network. FTP requires two end points, one which acts as an
FTP server and one which acts as the FTP client. FTP also requires two ports, one for data and one
for control. The FTP control port is port number 21 and the FTP data port is port number 20. The
FTP client's own ports, of course, are randomized and are not well-known ports.
There are two general types of FTP: active and passive. In active FTP, the client sends a request
to the control port of the FTP server and specifies, over the control connection, a port on the
client to receive the data. The server then sends the requested data from its data port to that
client port. This was the way FTP was originally designed, but it can lead to problems: when the
server starts sending data from its data port to a port on the client, it looks very much like an
intruder is uploading data onto the client. For this reason many firewalls will not allow this type
of data transfer.
Passive FTP was developed to meet the security needs of the client. Passive FTP does not use the
standard FTP data port. The FTP server, upon receiving a request from an FTP client, replies with a
non-well-known port on which the data will be sent. The FTP client then sends a request to this
port, which replies with the requested data.
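In passive mode the server's reply carries the data address in the form defined by RFC 959, "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)", where the data port is p1 x 256 + p2. A client-side sketch of parsing that reply follows; the reply string is a made-up example and no network access is involved.

```python
import re

def parse_pasv(reply):
    """Extract (host, port) from an FTP 227 reply (RFC 959)."""
    nums = re.search(r"\((\d+,\d+,\d+,\d+,\d+,\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, nums.group(1).split(","))
    # The port is transmitted as two bytes: high byte, then low byte.
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

host, port = parse_pasv("227 Entering Passive Mode (192,168,1,20,78,52)")
print(host, port)  # 192.168.1.20 20020
```

The client would then open its data connection to that host and port, which is what lets the transfer pass through firewalls that block inbound connections.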
Voice Over Internet Protocol
As I mentioned at the beginning of this article, VoIP is also a network application which falls
within the Application Layer. VoIP, by definition, is a protocol optimized for transmitting voice
over packet-based networks, although here, as is quite common, I am referring to the entirety of
the applications which implement such protocols.
VoIP is an excellent example of a family of applications with many different implementations.
Figure 2 illustrates different forms of VoIP implementations. These implementations are able to
communicate with each other because they all rely on the layered structure of the OSI Reference
Model: even though each may implement functions in the other layers differently, they remain
compatible with one another.
Many Different Implementations of VoIP
In general the OSI Reference Model is an abstract model which should be used as a guide for both
understanding how networks function as well as for developing network applications. By separating
elements of a design into layers of the OSI Reference Model a designer will increase the usability of
the application as well as make the application easier to maintain and update over time.
Elements of a design do not have to adhere strictly to the OSI Reference Model layers. In fact, there
is often some debate about which functions belong in which layer. One such debate centers around
Application Service Elements (ASEs). Many people consider ASEs part of the Presentation Layer,
while others consider them part of the Application Layer, as explained in a document by Cisco. In
practice it really does not matter which layer you place them in, because their function is to work
between the layers. Other functions provided by the layers are not even necessary and will
sometimes not be present in a design; encryption is an example of this.