Complete Cisco - Semester 1

Module 1: Introduction to Networking
The Internet is a valuable resource, and connection to it is essential for business, industry, and
education. Building a network that will connect to the Internet requires careful planning. Even for
the individual user some planning and decisions are necessary. The computer itself must be
considered, as well as the device that makes the connection to the local-area network (LAN),
such as the network interface card or modem. The correct protocol must be configured so that the
computer can connect to the Internet. Proper selection of a web browser is also important.
1.1.1 Requirements for Internet connection

The Internet is the largest data network on earth. The Internet consists of a multitude of
interconnected networks both large and small. At the edge of this giant network is the individual
consumer computer. Connection to the Internet can be broken down into the physical connection,
the logical connection, and the application.
A physical connection is made by connecting a specialized expansion card such as a modem or a
network interface card (NIC) from a computer (PC) to a network. The physical connection is used
to transfer signals between PCs within the local network and to remote devices on the Internet.
The logical connection uses standards called protocols. A protocol is a formal description of a set of
rules and conventions that govern how devices on a network communicate. Connections to the
Internet may use multiple protocols. The Transmission Control Protocol/Internet Protocol (TCP/IP)
suite is the primary protocol used on the Internet. TCP/IP is a suite of protocols that work together
to transmit data.
The application that interprets the data and displays the information in an understandable form is
the last part of the connection. Applications work with protocols to send and receive data across the
Internet. A web browser displays Hypertext Markup Language (HTML) as a web page. File
Transfer Protocol (FTP) is used to download files and programs from the Internet. Web browsers
also use proprietary plug-in applications to display special data types such as movies or Flash
animations.
This is an introductory view of the Internet, and it may seem an overly simple process. As the topic
is explored in greater depth, it will become apparent that sending data across the Internet is a
complicated task.
1.1.2 PC basics
Because computers are important building blocks in a network, it is important to be able to
recognize and name the major components of a PC. Many networking devices are themselves
special purpose computers, with many of the same components as normal PCs.
In order to use a computer as a reliable means of obtaining information, such as accessing Web-
based curriculum, it must be in good working order. To keep a PC in good working order will
require occasional troubleshooting of simple problems with the computer hardware and software.
Therefore, it is necessary to be able to recognize the names and purposes of the following PC
components.
Small, Discrete Components
Transistor – Device that amplifies a signal or opens and closes a circuit.
Integrated circuit (IC) – Device made of semiconductor material that contains many transistors
and performs a specific task.
Resistor – Device made of material that opposes the flow of electric current.
Capacitor – Electronic component that stores energy in the form of an electrostatic field that
consists of two conducting metal plates separated by an insulating material.
Connector – The part of a cable that plugs into a port or interface.
Light emitting diode (LED) – Semiconductor device that emits light when a current passes
through it.
Personal Computer Subsystems
Printed circuit board (PCB) – A thin plate on which chips or integrated circuits and other
electronic components are placed.
CD-ROM drive – Compact disk read-only memory drive, which is a device that can read
information from a CD-ROM.
Central processing unit (CPU) – The brains of the computer where most calculations take place.
Floppy disk drive – A disk drive that can read and write to floppy disks.
Hard disk drive – The device that reads and writes data on a hard disk.
Microprocessor – A silicon chip that contains a CPU.
Motherboard – The main circuit board of a microcomputer.
Bus – A collection of wires through which data is transmitted from one part of a computer to another.
Random-access memory (RAM) – Also known as Read-Write memory, new data can be written
to it and stored data can be read from it. RAM requires electrical power to maintain data storage. If
the computer is turned off or loses power, all data stored in RAM is lost.
Read-only memory (ROM) – Computer memory on which data has been prerecorded. Once data
has been written onto a ROM chip, it cannot be removed and can only be read.
System unit – The main part of a PC, which includes the chassis, microprocessor, main memory,
bus, and ports. The system unit does not include the keyboard, monitor, or any external devices
connected to the computer.
Expansion slot – A socket on the motherboard where a circuit board can be inserted to add new
capabilities to the computer.
Power supply – The component that supplies power to a computer.
Backplane Components
Backplane – The large circuit board that contains sockets for expansion cards.
Network interface card (NIC) – An expansion board inserted into a computer so that the computer
can be connected to a network.
Video card – A board that plugs into a PC to give it display capabilities.
Audio card – An expansion board that enables a computer to manipulate and output sounds.
Parallel port – An interface capable of transferring more than one bit simultaneously that is used to
connect external devices such as printers.
Serial port – An interface that can be used for serial communication, in which only 1 bit is
transmitted at a time.
Mouse port – A port designed for connecting a mouse to a PC.
Power cord – A cord used to connect an electrical device to an electrical outlet that provides power
to the device.
Think of the internal components of a PC as a network of devices, which are all attached to the
system bus. In a sense, a PC is a small computer network.
1.1.3 Network interface card
A network interface card (NIC) is a printed circuit board that provides network communication
capabilities to and from a personal computer. Also called a LAN adapter, it resides in a slot on
the motherboard and provides an interface connection to the network media. The type of NIC must
match the media and protocol used on the local network.
The NIC communicates with the network through a serial connection and with the computer
through a parallel connection. The NIC uses an Interrupt Request (IRQ), an I/O address, and upper
memory space to work with the operating system. An IRQ is a signal informing the CPU that an
event needing attention has occurred. An IRQ is sent over a hardware line to the microprocessor
when a key is pressed on the keyboard. Then the CPU enables transmission of the character from
the keyboard to RAM. An I/O address is a location in the memory used to enter data or retrieve data
from a computer by an auxiliary device. Upper memory refers to the memory area between the first
640 kilobytes (KB) and 1 megabyte (MB) of RAM.
When selecting a NIC, consider the following factors:
Protocols – Ethernet, Token Ring, or FDDI
Types of media – Twisted-pair, coaxial, wireless, or fiber-optic
Type of system bus – PCI or ISA
 1.1.4 NIC and modem installation
Connectivity to the Internet requires an adapter card, which may be a modem or NIC.
A modem, or modulator-demodulator, is a device that provides the computer with connectivity to a
telephone line. The modem converts (modulates) the data from a digital signal to an analog signal
that is compatible with a standard phone line. The modem at the receiving end demodulates the
signal, which converts it back to digital. Modems may be installed internally or attached externally
to the computer using a serial or USB interface.
The installation of a NIC, which provides the interface for a host to the network, is required for each
device on the network. NICs are available in different types depending on the individual device
configuration. Notebook computers may have a built-in interface or use a PCMCIA card. Figure
shows PCMCIA wired and wireless NICs. Desktop systems may use an internal or external NIC.

Situations that require NIC installation include the following:
Adding a NIC to a PC that does not already have one
Replacing a bad or damaged NIC
Upgrading from a 10-Mbps NIC to a 10/100-Mbps NIC
To perform the installation of a NIC or modem the following resources may be required:
Knowledge of how the adapter is configured, including jumpers and plug-and-play software
Availability of diagnostic tools
Ability to resolve hardware resource conflicts
1.1.5 Overview of high-speed and dial-up connectivity
In the early 1960s, modems were introduced to provide connectivity for dumb terminals to a
centrally based computer. Many companies rented computer time because owning an on-site
system was cost prohibitive. The connection rate was very slow, 300 bits
per second (bps), translating to about 30 characters per second.
As PCs became more affordable in the 1970s, Bulletin Board Systems (BBS) appeared. These
BBSs allowed users to connect and post or read messages on a discussion board. Running at 300
bps was acceptable, as this exceeds the speed at which most people can read or type. In the early
1980s, use of bulletin boards increased exponentially and the 300 bps speed quickly became too
slow for the transfer of large files and graphics. By the 1990s modems were running at 9600 bps
and reached the current standard of 56 kbps (56,000 bps) by 1998.
Inevitably the high-speed services used in the corporate environment, such as Digital Subscriber
Line (DSL) and cable modem access, moved to the consumer market. These services no longer
required expensive equipment or a second phone line. These are "always on" services that provide
instant access and do not require a connection to be established for each session. This gives greater
reliability and flexibility, and has led to the ease of Internet connection sharing by small office and
home networks.
 1.1.6 TCP/IP description and configuration
Transmission Control Protocol/Internet Protocol (TCP/IP) is a set of protocols or rules developed to
allow cooperating computers to share resources across a network. To enable TCP/IP on the
workstation, it must be configured using the operating system tools. The process is very similar
whether using a Windows or Mac operating system.
 1.1.7 Testing connectivity with ping
Ping is a utility used to verify Internet connectivity. It is named after the sonar operation used to
locate and determine the distance to an underwater object.
The ping command works by sending multiple IP packets to a specified destination. Each packet
sent is a request for a reply. The output response for a ping contains the success ratio and round-trip
time to the destination. From this information, it is possible to determine if there is connectivity to a
destination. The ping command is used to test the NIC transmit/receive function, the TCP/IP
configuration, and network connectivity. The following examples describe the types of ping tests
that are commonly used in a network:
ping 127.0.0.1 - This ping is unique and is called an internal loopback test. It verifies the operation
of the TCP/IP stack and NIC transmit/receive function.
ping IP address of host computer - A ping to a host PC verifies the TCP/IP address configuration
for the local host and connectivity to the host.
ping default-gateway IP address - A ping to the default gateway verifies whether the router that
connects the local network to other networks can be reached.
ping remote destination IP address - A ping to a remote destination verifies connectivity to a
remote host.
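The ping tests above can also be scripted. The sketch below, in Python, wraps the system ping command; `ping_command` and `ping` are hypothetical helper names, not part of any standard library, and the count flag differs by operating system:

```python
import platform
import subprocess

def ping_command(host, count=2, system=None):
    """Build the ping invocation; the count flag is -n on Windows, -c elsewhere."""
    system = system or platform.system()
    flag = "-n" if system == "Windows" else "-c"
    return ["ping", flag, str(count), host]

def ping(host, count=2):
    """Return True if `host` answered; ping exits with code 0 on success."""
    result = subprocess.run(ping_command(host, count), capture_output=True)
    return result.returncode == 0

# ping("127.0.0.1")     # internal loopback test: verifies the local TCP/IP stack
# ping("192.168.1.1")   # default gateway test (address is illustrative)
```

The success ratio and round-trip times reported by the real command appear in the captured output; this sketch only checks reachability via the exit code.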
1.1.8 Web browser and plug-ins
A web browser performs the following functions:
Contacts a web server
Requests information
Receives information
Displays the results on the screen
A web browser is software that interprets hypertext markup language (HTML), one of the
languages used to code web page content. Other markup languages with more advanced features are
part of the emerging technology. HTML, the most common markup language, can display graphics and
play sound, movies, and other multimedia files. Hyperlinks are embedded in a web page providing a
quick link to another location on the same or an entirely different web page.
Two of the most popular web browsers are Internet Explorer (IE) and Netscape Communicator.
While nearly identical in the tasks they perform, there are differences between these two browsers. Some
websites may not support the use of one or the other, and it can be beneficial to have both programs
installed on the computer.
Netscape Navigator:
The first popular browser
Takes less disk space
Displays HTML files, performs e-mail and file transfers, and other functions
Internet Explorer (IE):
Powerfully integrated with other Microsoft products
Takes more disk space
Displays HTML files, performs e-mail and file transfers, and other functions
There are also many special, or proprietary, file types that standard web browsers are not able to
display. To view these files the browser must be configured to use the plug-in applications. These
applications work in conjunction with the browser to launch the program required to view the
following special files:
Flash – plays multimedia files; created by Macromedia
Quicktime – plays video files; created by Apple
Real Player – plays audio files
In order to install the Flash plug-in, do the following:
Go to the Macromedia website.
Download .exe file. (flash32.exe)
Run and install in Netscape or IE
Verify installation and proper operation by accessing the Cisco Academy website
Beyond getting the computer configured to view the Cisco Academy curriculum, computers
perform many other useful tasks. In business, employees regularly use a set of applications that
come in the form of an office suite, such as Microsoft Office. Office applications typically include
the following:
Spreadsheet software contains tables consisting of columns and rows, and it is often used with
formulas to process and analyze data.
A word processor is an application used to create and edit text documents. Modern word processors
allow the user to create sophisticated documents, which include graphics and richly formatted text.
Database management software is used to store, maintain, organize, sort, and filter records. A
record is a collection of information identified by some common theme such as customer name.
Presentation software is used to design and develop presentations to deliver at meetings, classes, or
sales presentations.
A personal information manager includes an e-mail utility, contact lists, a calendar, and a to-do list.
Office applications are now a part of everyday work, as typewriters were before the personal computer.
1.1.9 Troubleshooting Internet connection problems
In this troubleshooting lab, problems exist in the hardware, software, and network configurations.
The goal, in a pre-determined length of time, is to locate and repair the problems, which will
eventually allow access to the curriculum. This lab will demonstrate the complexity in configuring
even the simple process of accessing the web. This includes the processes and procedures involved
with troubleshooting computer hardware, software, and network systems.
1.2.1 Binary presentation of data
Computers work with and store data using electronic switches that are either ON or OFF.
Computers can only understand and use data that is in this two-state or binary format. 1 is
represented by an ON state, and 0 is represented by an OFF state. The ones and zeros are used to
represent the two possible states of an electronic component in a computer. They are referred to as
binary digits or bits.
The American Standard Code for Information Interchange (ASCII) is the most commonly used
code for representing alpha-numeric data in a computer. ASCII uses binary digits to represent the
symbols typed on the keyboard. When computers send ON/OFF states over a network, electrical,
light, or radio waves are used to represent the 1s and 0s. Notice that each character has a unique
pattern of eight binary digits assigned to represent the character.
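The mapping from keyboard characters to eight-bit patterns can be inspected directly. A short Python sketch (Python is not part of the curriculum and is used here only for illustration):

```python
# Each keyboard character has a unique ASCII code, which the computer
# stores and transmits as a pattern of eight binary digits.
for ch in "Hi":
    code = ord(ch)               # the character's ASCII value
    bits = format(code, "08b")   # the same value as eight bits
    print(ch, code, bits)        # e.g. H 72 01001000
```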
Because computers are designed to work with ON/OFF switches, binary digits and binary numbers
are natural to them. Humans use the decimal number system, which is relatively simple when
compared to the long series of 1s and 0s used by computers. So the computer binary numbers need
to be converted to decimal numbers.
Sometimes binary numbers need to be converted to Hexadecimal (hex) numbers which reduces a
long string of binary digits to a few hexadecimal characters. This makes it easier to remember and
to work with the numbers.

 1.2.2 Bits and bytes
A binary 0 might be represented by 0 volts of electricity (0 = 0 volts).
A binary 1 might be represented by +5 volts of electricity (1 = +5 volts).
Computers are designed to use groupings of eight bits. This grouping of eight bits is referred to as a
byte. In a computer, one byte represents a single addressable storage location. These storage
locations represent a value or single character of data, such as an ASCII code. The total number of
combinations of the eight switches being turned on and off is 256. The value range of a byte is from
0 to 255. So a byte is an important concept to understand when working with computers and networks.
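The 256-combination count follows directly from the eight two-state switches, as a quick Python check shows:

```python
# Eight ON/OFF switches give 2^8 distinct combinations,
# so one byte can hold any value from 0 through 255.
combinations = 2 ** 8
print(combinations)        # 256
print(combinations - 1)    # 255, the largest value a byte can store
```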
 1.2.3 Base 10 number system
Numbering systems consist of symbols and rules for using those symbols. The most commonly
used numbering system is the decimal, or Base 10, numbering system. Base 10 uses the ten symbols
0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. These symbols can be combined to represent all possible numeric values.
The decimal number system is based on powers of 10. Each column position of a value, from right
to left, is multiplied by the number 10, which is the base number, raised to a power, which is the
exponent. The power that 10 is raised to depends on its position to the left of the decimal point.
When a decimal number is read from right to left, the first or rightmost position represents 10^0 (1),
the second position represents 10^1 (10 x 1 = 10), and the third position represents 10^2 (10 x 10 = 100).
The seventh position to the left represents 10^6 (10 x 10 x 10 x 10 x 10 x 10 = 1,000,000). This is true
no matter how many columns the number has.
2134 = (2 x 10^3) + (1 x 10^2) + (3 x 10^1) + (4 x 10^0)
There is a 4 in the ones position, a 3 in the tens position, a 1 in the hundreds position, and a 2 in the
thousands position. This example seems obvious when the decimal number system is used. Seeing
exactly how the decimal system works is important because it is needed to understand two other
numbering systems, Base 2 and hexadecimal Base 16. These systems use the same methods as the
decimal system.
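The positional expansion of 2134 can be verified in Python by weighting each digit by its power of 10:

```python
# 2134 = (2 x 10^3) + (1 x 10^2) + (3 x 10^1) + (4 x 10^0)
digits = [2, 1, 3, 4]
value = sum(d * 10 ** p for p, d in enumerate(reversed(digits)))
print(value)   # 2134
```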
 1.2.4 Base 2 number system
Computers recognize and process data using the binary, or Base 2, numbering system. The binary
system uses only two symbols, 0 and 1, instead of the ten symbols used in the decimal numbering
system. The position, or place, of each digit from right to left in a binary number represents 2, the
base number, raised to a power or exponent, starting from 0. These place values are, from right to
left, 2^0, 2^1, 2^2, 2^3, 2^4, 2^5, 2^6, and 2^7, or 1, 2, 4, 8, 16, 32, 64, and 128 respectively.
10110 (binary) = (1 x 2^4 = 16) + (0 x 2^3 = 0) + (1 x 2^2 = 4) + (1 x 2^1 = 2) + (0 x 2^0 = 0)
= 22 (16 + 0 + 4 + 2 + 0)
If the binary number 10110 is read left to right, there is a 1 in the 16s position, a 0 in the 8s
position, a 1 in the 4s position, a 1 in the 2s position, and a 0 in the 1s position, which adds up to
decimal number 22.
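The same positional weighting can be expressed in Python, and checked against the language's built-in base-2 conversion:

```python
def binary_to_decimal(bits):
    """Weight each bit by its power of 2, counting positions from the right."""
    return sum(int(b) * 2 ** p for p, b in enumerate(reversed(bits)))

print(binary_to_decimal("10110"))   # 22
print(int("10110", 2))              # Python's built-in conversion agrees: 22
```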
1.2.5 Converting decimal numbers to 8-bit binary numbers
There are several ways to convert decimal numbers to binary numbers. The flowchart in Figure
describes one method. The process determines which powers of 2 add together to equal the
decimal number being converted to binary. This method is one of
several methods that can be used. It is best to select one method and practice with it until it always
produces the correct answer.
Conversion exercise
Use the example below to convert the decimal number 168 to a binary number:
128 fits into 168. So the left most bit in the binary number is a 1. 168 - 128 leaves 40.
64 does not fit into 40. So the second bit in from the left is a 0.
32 fits into 40. So the third bit in from the left is a 1. 40 - 32 leaves 8.
16 does not fit into 8 so the fourth bit in from the left is a 0.
8 fits into 8. So the fifth bit in from the left is a 1. 8 - 8 leaves 0. So, the remaining bits to the right
are all 0.
Result: Decimal 168 = 10101000
For more practice, try converting decimal 255 to binary. The answer should be 11111111.
The number converter activity in Figure will provide more practice.
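The subtraction method worked through above can be sketched in Python; `decimal_to_binary` is an illustrative helper name:

```python
def decimal_to_binary(n):
    """Subtraction method: test each power of 2 from 128 down to 1."""
    bits = ""
    for weight in (128, 64, 32, 16, 8, 4, 2, 1):
        if n >= weight:      # the weight "fits" into what is left
            bits += "1"
            n -= weight
        else:
            bits += "0"
    return bits

print(decimal_to_binary(168))   # 10101000
print(decimal_to_binary(255))   # 11111111
```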
1.2.6 Converting 8-bit binary numbers to decimal numbers
There are two basic ways to convert binary numbers to decimal numbers. The flowchart in Figure
shows one example.
Binary numbers can also be converted to decimal numbers by multiplying the binary digits by the
base number of the system, which is Base 2, and raised to the exponent of its position.
Convert the binary number 01110000 to a decimal number.
Note: Work from right to left. Remember that anything raised to the 0 power is 1. Therefore 2^0 = 1.
             0 x 2^0 = 0
             0 x 2^1 = 0
             0 x 2^2 = 0
             0 x 2^3 = 0
             1 x 2^4 = 16
             1 x 2^5 = 32
             1 x 2^6 = 64
             0 x 2^7 = 0
          + –––––––––––
               112
Note: The sum of the powers of 2 that have a 1 in their position is the decimal value, 112.
The number converter activity will provide more practice.
 1.2.7 Four-octet dotted decimal representation of 32-bit binary numbers
Currently, addresses assigned to computers on the Internet are 32-bit binary numbers. To make it
easier to work with these addresses, the 32-bit binary number is broken into a series of decimal
numbers. To do this, split the binary number into four groups of eight binary digits. Then convert
each group of eight bits, also known as an octet, into its decimal equivalent. Do this conversion
exactly as was shown in the binary-to-decimal conversion topic on the previous page.
When written, the complete binary number is represented as four groups of decimal digits separated
by periods. This is referred to as dotted decimal notation and provides a compact, easy to remember
way of referring to the 32 bit addresses. This representation is used frequently later in this course,
so it is necessary to understand it. When converting to binary from dotted decimal, remember that
each group, which consists of one to three decimal digits represents a group of eight binary digits. If
the decimal number that is being converted is less than 128, zeros will need to be added to the
left of the equivalent binary number until there are a total of eight bits.
Convert a dotted decimal address to its 32-bit binary equivalent.
Convert 10000000 01011101 00001111 10101010 to its dotted decimal equivalent.
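Both directions of the exercise can be sketched in Python; `to_dotted_decimal` and `to_binary` are illustrative helper names:

```python
def to_dotted_decimal(bits32):
    """Split 32 bits into four octets and convert each to decimal."""
    octets = [bits32[i:i + 8] for i in range(0, 32, 8)]
    return ".".join(str(int(octet, 2)) for octet in octets)

def to_binary(dotted):
    """Convert each decimal octet back to eight bits, zero-padded on the left."""
    return " ".join(format(int(octet), "08b") for octet in dotted.split("."))

print(to_dotted_decimal("10000000010111010000111110101010"))   # 128.93.15.170
print(to_binary("128.93.15.170"))   # 10000000 01011101 00001111 10101010
```

Note how `format(..., "08b")` supplies the leading zeros required when an octet's value is less than 128.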
1.2.8 Hexadecimal
Hexadecimal (hex) is used frequently when working with computers since it can be used to
represent binary numbers in a more readable form. The computer performs computations in
binary, but there are several instances when the binary output of a computer is expressed in
hexadecimal to make it easier to read.
Converting a hexadecimal number to binary, and a binary number to hexadecimal, is a common
task when dealing with the configuration register in Cisco routers. Cisco routers have a
configuration register that is 16 bits long. The 16-bit binary number can be represented as a four-
digit hexadecimal number. For example, 0010000100000010 in binary equals 2102 in hex. The
word hexadecimal is often indicated with the prefix 0x, so this value would be written 0x2102.
Like the binary and decimal systems, the hexadecimal system is based on the use of symbols,
powers, and positions. The symbols that hex uses are 0 - 9, and A, B, C, D, E, and F.
Notice that each possible combination of four binary digits maps to a single hexadecimal symbol,
where decimal may require two digits. Hex is used because two hexadecimal digits, as
opposed to the up to four decimal digits otherwise required, can efficiently represent any combination
of eight binary digits. Allowing two decimal digits to represent four bits each could also
cause confusion in reading a value. For example, the eight-bit binary number 01110011 would be
115 if converted to decimal digits. Is that 11-5 or 1-15? If 11-5 is used, the binary number would be
1011 0101, which is not the number originally converted. Using hexadecimal, the conversion is 73,
which always converts back to 01110011.
Hexadecimal reduces an eight bit number to just two hex digits. This reduces the confusion of
reading long strings of binary numbers and the amount of space it takes to write binary numbers.
Remember that hexadecimal is sometimes abbreviated 0x so hex 5D might be written as "0x5D".
To convert from hex to binary, simply expand each hex digit into its four bit binary equivalent.
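The configuration register example and the digit-expansion rule can be checked in Python:

```python
# The 16-bit configuration register value from the text:
bits = "0010000100000010"
print(hex(int(bits, 2)))     # 0x2102

# Expanding a pair of hex digits back into eight binary digits:
print(format(0x5D, "08b"))   # 01011101
```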
 1.2.9 Boolean or binary logic
Boolean logic is based on digital circuitry that accepts one or two incoming voltages. Based on
the input voltages, output voltage is generated. For the purpose of computers the voltage difference
is associated as two states, on or off. These two states are in turn associated as a 1 or a 0, which are
the two digits in the binary numbering system.
Boolean logic is a binary logic that allows two numbers to be compared and a choice generated
based on the two numbers. These choices are the logical AND, OR, and NOT. With the exception of
NOT, Boolean operations have the same form. They accept two numbers, each a 1 or 0,
and generate a result based on the logic rule.
The NOT operation takes whatever value is presented, 0 or 1, and inverts it. A one becomes a zero
and a zero becomes a one. Remember that the logic gates are electronic devices built specifically
for this purpose. The logic rule that they follow is whatever the input is, the output is the opposite.
The AND operation takes two input values. If both are 1, the logic gate generates a 1 output.
Otherwise it outputs a 0. There are four combinations of input values. Three of these combinations
generate a 0, and one combination generates a 1.
The OR operation also takes two input values. If at least one of the input values is 1, the output
value is 1. Again there are four combinations of input values. This time three combinations generate
a 1 output and the fourth generates a 0 output.
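The three truth tables described above can be printed with Python's bitwise operators (`^ 1` inverts a single bit, standing in for NOT):

```python
# Truth tables for the three operations, using bitwise operators on single bits.
print("NOT:", [(a, a ^ 1) for a in (0, 1)])
print("AND:", [(a, b, a & b) for a in (0, 1) for b in (0, 1)])
print("OR: ", [(a, b, a | b) for a in (0, 1) for b in (0, 1)])
# AND outputs 1 only for input (1, 1); OR outputs 0 only for (0, 0).
```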
The two networking operations that use Boolean logic are subnetwork and wildcard masking. The
masking operations provide a way of filtering addresses. The addresses identify the devices on the
network and allow the addresses to be grouped together or controlled by other network operations.
These functions will be explained in depth later in the curriculum.
 1.2.10 IP addresses and network masks
The 32-bit binary addresses used on the Internet are referred to as Internet Protocol (IP) addresses.
  The relationship between IP addresses and network masks will be addressed in this section.
When IP addresses are assigned to computers, some of the bits on the left side of the 32-bit IP
number represent a network. The number of bits designated depends on the address class. The bits
left over in the 32-bit IP address identify a particular computer on the network. A computer is
referred to as the host. The IP address of a computer consists of a network and a host part that
represents a particular computer on a particular network.
To inform a computer how the 32-bit IP address has been split, a second 32-bit number called a
subnetwork mask is used. This mask is a guide that indicates how the IP address should be
interpreted by identifying how many of the bits are used to identify the network of the computer.
The subnetwork mask sequentially fills in the 1s from the left side of the mask. A subnet mask will
always be all 1s until the network address is identified and then be all 0s from there to the right
most bit of the mask. The bits in the subnet mask that are 0 identify the computer or host on that
network. Some examples of subnet masks are:
11111111000000000000000000000000 written in dotted decimal as 255.0.0.0
11111111111111110000000000000000 written in dotted decimal as 255.255.0.0
In the first example, the first eight bits from the left represent the network portion of the address,
and the last 24 bits represent the host portion of the address. In the second example the first 16 bits
represent the network portion of the address, and the last 16 bits represent the host portion of the address.
Converting the IP address to binary and performing a Boolean AND of the IP address and the
subnet mask produces the network address of that host. Converting the result back to dotted
decimal gives the network portion of the IP address under that mask. Repeating the AND with a
different subnet mask on the same IP address produces a different network portion.
This is a brief illustration of the effect that a network mask has on an IP address. The importance of
masking will become much clearer as more work with IP addresses is done. For right now it is only
important that the concept of the mask is understood.
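The masking operation can be sketched in Python; 192.168.10.5 is an illustrative address chosen for this example, not one taken from the original figures:

```python
def network_address(ip, mask):
    """Boolean AND each octet of the IP address with the matching mask octet."""
    return ".".join(str(int(i) & int(m))
                    for i, m in zip(ip.split("."), mask.split(".")))

print(network_address("192.168.10.5", "255.255.0.0"))   # 192.168.0.0
print(network_address("192.168.10.5", "255.0.0.0"))     # 192.0.0.0
```

Note how the same host address yields a different network address under each mask, which is the point of the illustration above.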
Module 2: Networking Fundamentals
Bandwidth is a crucial component in networking. Bandwidth decisions are among the most
important when a network is designed. This module discusses the importance of bandwidth,
explains how it is calculated, and how it is measured.
Functions of networking are described using layered models. This module covers the two most
important models, which are the Open System Interconnection (OSI) model and the Transmission
Control Protocol/Internet Protocol (TCP/IP) model. The module also presents the differences and
similarities between the two models.
In addition, this module presents a brief history of networking. It also describes network devices, as
well as cabling, physical, and logical layouts. This module also defines and compares LANs,
MANs, WANs, SANs, and VPNs.
 2.1.1 Data networks
Data networks developed as a result of business applications that were written for microcomputers.
At that time microcomputers were not connected as mainframe computer terminals were, so there
was no efficient way of sharing data among multiple microcomputers. It became apparent that
sharing data through the use of floppy disks was not an efficient or cost-effective manner in which
to operate businesses. Sneakernet created multiple copies of the data. Each time a file was modified
it would have to be shared again with all other people who needed that file. If two people modified
the file and then tried to share it, one of the sets of changes would be lost. Businesses needed a
solution that would successfully address the following three problems:
How to avoid duplication of equipment and resources
How to communicate efficiently
How to set up and manage a network
Businesses realized that networking technology could increase productivity while saving money.
Networks were added and expanded almost as rapidly as new network technologies and products
were introduced. In the early 1980s networking saw a tremendous expansion, even though the early
development of networking was disorganized.
In the mid-1980s, the network technologies that had emerged had been created with a variety of
different hardware and software implementations. Each company that created network hardware
and software used its own company standards. These individual standards were developed because
of competition with other companies. Consequently, many of the new network technologies were
incompatible with each other. It became increasingly difficult for networks that used different
specifications to communicate with each other. This often required the old network equipment to be
removed to implement the new equipment.
One early solution was the creation of local-area network (LAN) standards. Because LAN
standards provided an open set of guidelines for creating network hardware and software, the
equipment from different companies could then become compatible. This allowed for stability in
LAN implementation.
In a LAN system, each department of the company is a kind of electronic island. As the use of
computers in businesses grew, it soon became obvious that even LANs were not sufficient.
What was needed was a way for information to move efficiently and quickly, not only within a
company, but also from one business to another. The solution was the creation of metropolitan-
area networks (MANs) and wide-area networks (WANs). Because WANs could connect user
networks over large geographic areas, it was possible for businesses to communicate with each
other across great distances. Figure summarizes the relative sizes of LANs and WANs.
 2.1.2 Network history
The history of computer networking is complex. It has involved many people from all over the
world over the past 35 years. Presented here is a simplified view of how the Internet evolved. The
processes of invention and commercialization are far more complicated, but it is helpful to look at
the fundamental development.
In the 1940s computers were large electromechanical devices that were prone to failure. In 1947 the
invention of a semiconductor transistor opened up many possibilities for making smaller, more
reliable computers. In the 1950s mainframe computers, which were run by punched card programs,
began to be used by large institutions. In the late 1950s the integrated circuit that combined several,
then many, and now millions, of transistors on one small piece of semiconductor was invented.
Through the 1960s mainframes with terminals were commonplace, and integrated circuits were
widely used.
In the late 1960s and 1970s, smaller computers, called minicomputers, came into existence.
However, these minicomputers were still very large by modern standards. In 1977 the Apple
Computer Company introduced the microcomputer, also known as the personal computer. In 1981
IBM introduced its first personal computer. The user-friendly Mac, the open-architecture IBM PC,
and the further micro-miniaturization of integrated circuits led to widespread use of personal
computers in homes and businesses.
In the mid-1980s users with stand-alone computers started to share files using modems to connect
to other computers. This was referred to as point-to-point, or dial-up communication. This concept
was expanded by the use of computers that were the central point of communication in a dial-up
connection. These computers were called bulletin boards. Users would connect to the bulletin
boards, leave and pick up messages, as well as upload and download files. The drawback to this
type of system was that there was very little direct communication and then only with those who
knew about the bulletin board. Another limitation was that the bulletin board computer required one
modem per connection. If five people connected simultaneously it would require five modems
connected to five separate phone lines. As the number of people who wanted to use the system
grew, the system was not able to handle the demand. For example, imagine if 500 people wanted to
connect at the same time.
Starting in the 1960s and continuing through the 1970s, 80s, and 90s, the
Department of Defense (DoD) developed large, reliable, wide-area networks (WANs) for military
and scientific reasons. This technology was different from the point-to-point communication used in
bulletin boards. It allowed multiple computers to be connected together using many different paths.
The network itself would determine how to move data from one computer to another. Instead of
only being able to communicate with one other computer at a time, many computers could be
reached using the same connection. The DoD's WAN eventually became the Internet.
 2.1.3 Networking devices
Equipment that connects directly to a network segment is referred to as a device. These devices are
broken up into two classifications. The first classification is end-user devices. End-user devices
include computers, printers, scanners, and other devices that provide services directly to the user.
The second classification is network devices. Network devices include all the devices that connect
the end-user devices together to allow them to communicate.
End-user devices that provide users with a connection to the network are also referred to as hosts.
These devices allow users to share, create, and obtain information. The host devices can exist
without a network, but without the network the host capabilities are greatly reduced. Host devices
are physically connected to the network media using a network interface card (NIC). They use this
connection to perform the tasks of sending e-mails, printing reports, scanning pictures, or accessing
databases. A NIC is a printed circuit board that fits into the expansion slot of a bus on a computer
motherboard, or it can be a peripheral device. It is also called a network adapter. Laptop or
notebook computer NICs are usually the size of a PCMCIA card. Each individual NIC carries a
unique code, called a Media Access Control (MAC) address. This address is used to control data
communication for the host on the network. More about the MAC address will be covered later. As
the name implies, the NIC controls host access to the medium.
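Since a MAC address is a 48-bit number conventionally written as six colon-separated hexadecimal bytes, its structure can be illustrated with a short sketch. The sample address below is made up for illustration:

```python
def format_mac(addr48):
    """Render a 48-bit MAC address as six colon-separated hex bytes."""
    return ":".join(f"{(addr48 >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

mac = format_mac(0x00163E5E6B7A)   # '00:16:3e:5e:6b:7a'
oui = mac[:8]      # first three bytes: the vendor's Organizationally Unique Identifier
nic_id = mac[9:]   # last three bytes: assigned by the vendor to the individual card
```

The split matters: the first half identifies the manufacturer, and the second half is unique per card, which is what makes each NIC's address globally unique.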
There are no standardized symbols for end-user devices in the networking industry. They appear
similar to the real devices to allow for quick recognition.
Network devices provide transport for the data that needs to be transferred between end-user
devices. Network devices provide extension of cable connections, concentration of connections,
conversion of data formats, and management of data transfers. Examples of devices that perform
these functions are repeaters, hubs, bridges, switches, and routers. All of the network devices
mentioned here are covered in depth later in the course. For now, a brief overview of networking
devices will be provided.
A repeater is a network device used to regenerate a signal. Repeaters regenerate analog or digital
signals distorted by transmission loss due to attenuation. A repeater does not perform intelligent
routing like a bridge or router.
Hubs concentrate connections. In other words, they take a group of hosts and allow the network to
see them as a single unit. This is done passively, without any other effect on the data transmission.
Active hubs not only concentrate hosts, but they also regenerate signals.
Bridges convert network transmission data formats as well as perform basic data transmission
management. Bridges, as the name implies, provide connections between LANs. Not only do
bridges connect LANs, but they also perform a check on the data to determine whether it should
cross the bridge or not. This makes each part of the network more efficient.
Workgroup switches add more intelligence to data transfer management. Not only can they
determine whether data should remain on a LAN or not, but they can transfer the data only to the
connection that needs that data. Another difference between a bridge and switch is that a switch
does not convert data transmission formats.
Routers have all the capabilities listed above. Routers can regenerate signals, concentrate multiple
connections, convert data transmission formats, and manage data transfers. They can also connect to
a WAN, which allows them to connect LANs that are separated by great distances. None of the
other devices can provide this type of connection.
 2.1.4 Network topology
Network topology defines the structure of the network. One part of the topology definition is the
physical topology, which is the actual layout of the wire or media. The other part is the logical
topology, which defines how the media is accessed by the hosts for sending data. The physical
topologies that are commonly used are as follows:
A bus topology uses a single backbone cable that is terminated at both ends. All the hosts connect
directly to this backbone.
A ring topology connects one host to the next and the last host to the first. This creates a physical
ring of cable.
A star topology connects all cables to a central point of concentration.
An extended star topology links individual stars together by connecting the hubs and/or switches.
This topology can extend the scope and coverage of the network.
A hierarchical topology is similar to an extended star. However, instead of linking the hubs and/or
switches together, the system is linked to a computer that controls the traffic on the topology.
A mesh topology is implemented to provide as much protection as possible from interruption of
service. The use of a mesh topology in the networked control systems of a nuclear power plant
would be an excellent example. As seen in the graphic, each host has its own connections to all
other hosts. Although the Internet has multiple paths to any one location, it does not adopt the full
mesh topology.
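The cost of a full mesh grows quickly, which is why the Internet does not adopt one. The link count follows directly from the definition, since every host must pair with every other host:

```python
def full_mesh_links(n_hosts):
    """Links needed when every host has a direct connection to every other host."""
    return n_hosts * (n_hosts - 1) // 2

full_mesh_links(5)    # 10 links
full_mesh_links(100)  # 4950 links
```

Adding a single host to a 100-host full mesh means running 100 new connections, which makes the topology practical only for small, critical cores such as the power-plant example above.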
The logical topology of a network is how the hosts communicate across the medium. The two most
common types of logical topologies are broadcast and token passing.
Broadcast topology simply means that each host sends its data to all other hosts on the network
medium. There is no order that the stations must follow to use the network. It is first come, first
served. Ethernet works this way, as will be explained later in the course.
The second logical topology is token passing. Token passing controls network access by passing an
electronic token sequentially to each host. When a host receives the token, that host can send data
on the network. If the host has no data to send, it passes the token to the next host and the process
repeats itself. Two examples of networks that use token passing are Token Ring and Fiber
Distributed Data Interface (FDDI). A variation of Token Ring and FDDI is Arcnet. Arcnet is token
passing on a bus topology.
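The token-passing rule described above can be sketched as a small simulation. This is an illustrative model only, not a real Token Ring implementation; it assumes each host may send exactly one frame per token possession:

```python
from itertools import cycle

def token_pass(hosts, frames_to_send):
    """Circulate the token; a host transmits one frame only while holding it."""
    remaining = dict(frames_to_send)   # host -> frames still queued
    ring = cycle(hosts)
    transmissions = []
    while any(remaining.values()):
        holder = next(ring)            # token moves to the next host
        if remaining.get(holder, 0):
            transmissions.append(holder)
            remaining[holder] -= 1     # send one frame, then pass the token on
    return transmissions

token_pass(["A", "B", "C"], {"A": 1, "C": 2})  # ['A', 'C', 'C']
```

Note how access is deterministic: unlike the first-come, first-served broadcast topology, no host can monopolize the medium while the others wait indefinitely.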
The diagram in Figure shows many different topologies connected by network devices. It shows a
network of moderate complexity that is typical of a school or a small business. It has many
symbols, and it depicts many networking concepts that will take time to learn.
 2.1.5 Network protocols
Protocol suites are collections of protocols that enable network communication from one host
through the network to another host. A protocol is a formal description of a set of rules and
conventions that govern a particular aspect of how devices on a network communicate. Protocols
determine the format, timing, sequencing, and error control in data communication. Without
protocols, the computer cannot make or rebuild the stream of incoming bits from another computer
into the original format.
Protocols control all aspects of data communication, which include the following:
How the physical network is built
How computers connect to the network
How the data is formatted for transmission
How that data is sent
How to deal with errors
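To make the list above concrete, here is a toy framing protocol. It is entirely hypothetical, no real protocol uses this exact layout, but it shows how a protocol fixes the data format and provides basic error control:

```python
def frame(payload: bytes) -> bytes:
    """Toy protocol: a 1-byte length field, the payload, then a 1-byte checksum."""
    checksum = sum(payload) % 256
    return bytes([len(payload)]) + payload + bytes([checksum])

def parse(wire: bytes) -> bytes:
    """Rebuild the payload, rejecting frames damaged in transit."""
    length = wire[0]
    payload = wire[1:1 + length]
    if sum(payload) % 256 != wire[1 + length]:
        raise ValueError("checksum mismatch: frame corrupted")
    return payload

parse(frame(b"hello"))  # b'hello'
```

Both ends must agree on every detail of this layout; that shared agreement is exactly what the text means by a protocol.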
These network rules are created and maintained by many different organizations and committees.
Included in these groups are the Institute of Electrical and Electronic Engineers (IEEE), American
National Standards Institute (ANSI), Telecommunications Industry Association (TIA), Electronic
Industries Alliance (EIA) and the International Telecommunications Union (ITU), formerly known
as the Comité Consultatif International Téléphonique et Télégraphique (CCITT).
 2.1.6 Local-area networks (LANs)
LANs consist of the following components:
Network interface cards
Peripheral devices
Networking media
Network devices
LANs make it possible for businesses that use computer technology to locally share files and
printers efficiently, and make internal communications possible. A good example of this technology
is e-mail. They tie data, local communications, and computing equipment together.
Some common LAN technologies are:
Ethernet
Token Ring
FDDI
 2.1.7 Wide-area networks (WANs)
WANs interconnect LANs, which then provide access to computers or file servers in other
locations. Because WANs connect user networks over a large geographical area, they make it
possible for businesses to communicate across great distances. Using WANs allows computers,
printers, and other devices on a LAN to share and be shared with distant locations. WANs provide
instant communications across large geographic areas. The ability to send an instant message (IM)
to someone anywhere in the world provides the same communication capabilities that used to be
only possible if people were in the same physical office. Collaboration software provides access to
real-time information and resources that allows meetings to be held remotely, instead of in person.
Wide-area networking has also created a new class of workers called telecommuters, people who
never have to leave their homes to go to work.
WANs are designed to do the following:
Operate over large, geographically separated areas
Allow users to have real-time communication capabilities with other users
Provide full-time remote resources connected to local services
Provide e-mail, World Wide Web, file transfer, and e-commerce services
Some common WAN technologies are:
Integrated Services Digital Network (ISDN)
Digital Subscriber Line (DSL)
Frame Relay
US (T) and Europe (E) Carrier Series – T1, E1, T3, E3
Synchronous Optical Network (SONET)
 2.1.8 Metropolitan-area networks (MANs)
A MAN is a network that spans a metropolitan area such as a city or suburban area. A MAN usually
consists of two or more LANs in a common geographic area. For example, a bank with multiple
branches may utilize a MAN. Typically, a service provider is used to connect two or more LAN
sites using private communication lines or optical services. A MAN can also be created using
wireless bridge technology by beaming signals across public areas.
 2.1.9 Storage-area networks (SANs)
A SAN is a dedicated, high-performance network used to move data between servers and storage
resources. Because it is a separate, dedicated network, it avoids any traffic conflict between clients
and servers.
SAN technology allows high-speed server-to-storage, storage-to-storage, or server-to-server
connectivity. This method uses a separate network infrastructure that relieves any problems
associated with existing network connectivity.
SANs offer the following features:
Performance – SANs enable concurrent access of disk or tape arrays by two or more servers at
high speeds, providing enhanced system performance.
Availability – SANs have disaster tolerance built in, because data can be mirrored using a SAN up
to 10 kilometers (km) or 6.2 miles away.
Scalability – Like a LAN/WAN, it can use a variety of technologies. This allows easy relocation of
backup data, operations, file migration, and data replication between systems.
 2.1.10 Virtual private network (VPN)
A VPN is a private network that is constructed within a public network infrastructure such as the
global Internet. Using VPN, a telecommuter can access the network of the company headquarters
through the Internet by building a secure tunnel between the telecommuter’s PC and a VPN router
in the headquarters.
 2.1.11 Benefits of VPNs
Cisco products support the latest in VPN technology. A VPN is a service that offers secure, reliable
connectivity over a shared public network infrastructure such as the Internet. VPNs maintain the
same security and management policies as a private network. They are the most cost-effective
method of establishing a point-to-point connection between remote users and an enterprise
customer's network.
The following are the three main types of VPNs:
Access VPNs – Access VPNs provide mobile workers and small office/home office (SOHO) users
with remote access to the headquarters Intranet or Extranet over a shared infrastructure. Access
VPNs use analog, dialup, ISDN, digital subscriber line (DSL), mobile IP, and cable technologies to
securely connect mobile users, telecommuters, and branch offices.
Intranet VPNs – Intranet VPNs link regional and remote offices to the headquarters of the internal
network over a shared infrastructure using dedicated connections. Intranet VPNs differ from
Extranet VPNs in that they allow access only to the employees of the enterprise.
Extranet VPNs – Extranet VPNs link business partners to the headquarters of the network over a
shared infrastructure using dedicated connections. Extranet VPNs differ from Intranet VPNs in that
they allow access to users outside the enterprise.
 2.1.12 Intranets and extranets
One common configuration of a LAN is an Intranet. Intranet Web servers differ from public Web
servers in that users must have the proper permissions and passwords to access the Intranet of
an organization. Intranets are designed to permit access by users who have access privileges to the
internal LAN of the organization. Within an Intranet, Web servers are installed in the network.
Browser technology is used as the common front end to access information such as financial data or
graphical, text-based data stored on those servers.
Extranets refer to applications and services that are Intranet based, and use extended, secure access
to external users or enterprises. This access is usually accomplished through passwords, user IDs,
and other application-level security. Therefore, an Extranet is the extension of two or more Intranet
strategies with a secure interaction between participant enterprises and their respective intranets.
 2.2.1 Importance of bandwidth
Bandwidth is defined as the amount of information that can flow through a network connection in a
given period of time. It is essential to understand the concept of bandwidth when studying
networking for the following four reasons:
Bandwidth is finite.
In other words, regardless of the media used to build the network, there are limits on the capacity of
that network to carry information. Bandwidth is limited by the laws of physics and by the
technologies used to place information on the media. For example, the bandwidth of a conventional
modem is limited to about 56 kbps by both the physical properties of twisted-pair phone wires and
by modem technology. However, the technologies employed by DSL also use the same twisted-pair
phone wires, yet DSL provides much greater bandwidth than is available with conventional
modems. So, even the limits imposed by the laws of physics are sometimes difficult to define.
Optical fiber has the physical potential to provide virtually limitless bandwidth. Even so, the
bandwidth of optical fiber cannot be fully realized until technologies are developed to take full
advantage of its potential.
Bandwidth is not free.
It is possible to buy equipment for a local-area network (LAN) that will provide nearly unlimited
bandwidth over a long period of time. For wide-area network (WAN) connections, it is almost
always necessary to buy bandwidth from a service provider. In either case, an understanding of
bandwidth and changes in demand for bandwidth over a given time can save an individual or a
business a significant amount of money. A network manager needs to make the right decisions
about the kinds of equipment and services to buy.
Bandwidth is a key factor in analyzing network performance, designing new networks, and
understanding the Internet.
A networking professional must understand the tremendous impact of bandwidth and throughput on
network performance and design. Information flows as a string of bits from computer to computer
throughout the world. These bits represent massive amounts of information flowing back and forth
across the globe in seconds or less. In a sense, it may be appropriate to say that the Internet is
bandwidth.
The demand for bandwidth is ever increasing.
As soon as new network technologies and infrastructures are built to provide greater bandwidth,
new applications are created to take advantage of the greater capacity. The delivery over the
network of rich media content, including streaming video and audio, requires tremendous amounts
of bandwidth. IP telephony systems are now commonly installed in place of traditional voice
systems, which further adds to the need for bandwidth. The successful networking professional
must anticipate the need for increased bandwidth and act accordingly.
 2.2.2 Analogies
Bandwidth has been defined as the amount of information that can flow through a network in a
given time. The idea that information flows suggests two analogies that may make it easier to
visualize bandwidth in a network. Since both water and traffic are said to flow, consider the
following analogies:
Bandwidth is like the width of a pipe.
A network of pipes brings fresh water to homes and businesses and carries waste water away. This
water network is made up of pipes of different diameters. The main water pipes of a city may be
two meters in diameter, while the pipe to a kitchen faucet may have a diameter of only two
centimeters. The width of the pipe determines the water-carrying capacity of the pipe. Therefore,
the water is like the data, and the pipe width is like the bandwidth. Many networking experts say
that they need to put in bigger pipes when they wish to add more information-carrying capacity.
Bandwidth is like the number of lanes on a highway.
A network of roads serves every city or town. Large highways with many traffic lanes are joined by
smaller roads with fewer traffic lanes. These roads lead to even smaller, narrower roads, which
eventually go to the driveways of homes and businesses. When very few automobiles use the
highway system, each vehicle is able to move freely. When more traffic is added, each vehicle
moves more slowly. This is especially true on roads with fewer lanes for the cars to occupy.
Eventually, as even more traffic enters the highway system, even multi-lane highways become
congested and slow. A data network is much like the highway system. The data packets are
comparable to automobiles, and the bandwidth is comparable to the number of lanes on the
highway. When a data network is viewed as a system of highways, it is easy to see how low
bandwidth connections can cause traffic to become congested all over the network.
 2.2.3 Measurement
In digital systems, the basic unit of bandwidth is bits per second (bps). Bandwidth is the measure of
how much information, or bits, can flow from one place to another in a given amount of time, or
seconds. Although bandwidth can be described in bits per second, usually some multiple of bits per
second is used. In other words, network bandwidth is typically described as thousands of bits per
second (kbps), millions of bits per second (Mbps), and billions of bits per second (Gbps) and
trillions of bits per second (Tbps).
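Since these unit multiples are simply powers of ten, conversions between them reduce to one multiplication. A small illustrative helper:

```python
UNITS = {"bps": 1, "kbps": 1e3, "Mbps": 1e6, "Gbps": 1e9, "Tbps": 1e12}

def to_bps(value, unit):
    """Convert a bandwidth figure in any common unit to plain bits per second."""
    return value * UNITS[unit]

to_bps(1.544, "Mbps")  # about 1.5 million bps, a T1 line
to_bps(2.488, "Gbps")  # about 2.5 billion bps, roughly an OC-48 line
```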
Although the terms bandwidth and speed are often used interchangeably, they are not exactly the
same thing. One may say, for example, that a T3 connection at 45 Mbps operates at a higher speed
than a T1 connection at 1.544 Mbps. However, if only a small amount of their data-carrying capacity
is being used, each of these connection types will carry data at roughly the same speed. For
example, a small amount of water will flow at the same rate through a small pipe as through a large
pipe. Therefore, it is usually more accurate to say that a T3 connection has greater bandwidth than a
T1 connection. This is because the T3 connection is able to carry more information in the same
period of time, not because it has a higher speed.
 2.2.4 Limitations
Bandwidth varies depending upon the type of media as well as the LAN and WAN technologies
used. The physics of the media account for some of the difference. Signals travel through twisted-
pair copper wire, coaxial cable, optical fiber, and air. The physical differences in the ways signals
travel result in fundamental limitations on the information-carrying capacity of a given medium.
However, the actual bandwidth of a network is determined by a combination of the physical media
and the technologies chosen for signaling and detecting network signals.
For example, current understanding of the physics of unshielded twisted-pair (UTP) copper cable
puts the theoretical bandwidth limit at over one gigabit per second (Gbps). However, in actual
practice, the bandwidth is determined by the use of 10BASE-T, 100BASE-TX, or 1000BASE-T
Ethernet. In other words, the actual bandwidth is determined by the signaling methods, network
interface cards (NICs), and other items of network equipment that are chosen. Therefore, the
bandwidth is not determined solely by the limitations of the medium.
Figure shows some common networking media types along with the limits on distance and
bandwidth when using the indicated networking technology.
Figure summarizes common WAN services and the bandwidth associated with each service.
 2.2.5 Throughput
Bandwidth is the measure of the amount of information that can move through the network in a
given period of time. Therefore, the amount of available bandwidth is a critical part of the
specification of the network. A typical LAN might be built to provide 100 Mbps to every desktop
workstation, but this does not mean that each user is actually able to move one hundred megabits of
data through the network for every second of use. This would be true only under the most ideal
circumstances. The concept of throughput can help explain why this is so.
Throughput refers to actual measured bandwidth, at a specific time of day, using specific Internet
routes, and while a specific set of data is transmitted on the network. Unfortunately, for many
reasons, throughput is often far less than the maximum possible digital bandwidth of the medium
that is being used. The following are some of the factors that determine throughput:
Internetworking devices
Type of data being transferred
Network topology
Number of users on the network
User computer
Server computer
Power conditions
The theoretical bandwidth of a network is an important consideration in network design, because
the network bandwidth will never be greater than the limits imposed by the chosen media and
networking technologies. However, it is just as important for a network designer and administrator
to consider the factors that may affect actual throughput. By measuring throughput on a regular
basis, a network administrator will be aware of changes in network performance and changes in the
needs of network users. The network can then be adjusted accordingly.
 2.2.6 Data transfer calculation
Network designers and administrators are often called upon to make decisions regarding bandwidth.
One decision might be whether to increase the size of the WAN connection to accommodate a new
database. Another decision might be whether the current LAN backbone is of sufficient bandwidth
for a streaming-video training program. The answers to problems like these are not always easy to
find, but one place to start is with a simple data transfer calculation.
Using the formula transfer time = size of file / bandwidth (T=S/BW) allows a network administrator
to estimate several of the important components of network performance. If the typical file size for
a given application is known, dividing the file size by the network bandwidth yields an estimate of
the fastest time that the file can be transferred.
Two important points should be considered when doing this calculation.
The result is an estimate only, because the file size does not include any overhead added by
encapsulation.
The result is likely to be a best-case transfer time, because available bandwidth is almost never at
the theoretical maximum for the network type. A more accurate estimate can be attained if
throughput is substituted for bandwidth in the equation.
Although the data transfer calculation is quite simple, one must be careful to use the same units
throughout the equation. In other words, if the bandwidth is measured in megabits per second
(Mbps), the file size must be in megabits (Mb), not megabytes (MB). Since file sizes are typically
given in megabytes, it may be necessary to multiply the number of megabytes by eight to convert to
megabits.
Try to answer the following question, using the formula T=S/BW. Be sure to convert units of
measurement as necessary.
Would it take less time to send the contents of a floppy disk full of data (1.44 MB) over an ISDN
line, or to send the contents of a ten GB hard drive full of data over an OC-48 line?
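Working the question with T = S/BW, and assuming a 128 kbps ISDN BRI line, a 2.48832 Gbps OC-48 line, and decimal MB/GB units for simplicity:

```python
def transfer_time(size_bits, bandwidth_bps):
    """Best-case transfer time T = S / BW; both arguments in consistent units."""
    return size_bits / bandwidth_bps

floppy_bits = 1.44e6 * 8   # 1.44 MB converted to bits
t_floppy = transfer_time(floppy_bits, 128e3)      # 90.0 seconds over ISDN

drive_bits = 10e9 * 8      # 10 GB converted to bits
t_drive = transfer_time(drive_bits, 2.48832e9)    # about 32.2 seconds over OC-48
```

So, perhaps surprisingly, the ten-gigabyte drive finishes first: OC-48 has so much more bandwidth that it moves roughly 7,000 times the data in about a third of the time.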
2.2.7 Digital versus analog
Radio, television, and telephone transmissions have, until recently, been sent through the air and
over wires using electromagnetic waves. These waves are called analog because they have the same
shapes as the light and sound waves produced by the transmitters. As light and sound waves change
size and shape, the electrical signal that carries the transmission changes proportionately. In other
words, the electromagnetic waves are analogous to the light and sound waves.
Analog bandwidth is measured by how much of the electromagnetic spectrum is occupied by each
signal. The basic unit of analog bandwidth is hertz (Hz), or cycles per second. Typically, multiples
of this basic unit of analog bandwidth are used, just as with digital bandwidth. Units of
measurement that are commonly seen are kilohertz (kHz), megahertz (MHz), and gigahertz (GHz).
These are the units used to describe the bandwidths of cordless telephones, which usually operate at
either 900 MHz or 2.4 GHz. These are also the units used to describe the bandwidths of 802.11a
and 802.11b wireless networks, which operate at 5 GHz and 2.4 GHz.
While analog signals are capable of carrying a variety of information, they have some significant
disadvantages in comparison to digital transmissions. The analog video signal that requires a wide
frequency range for transmission cannot be squeezed into a smaller band. Therefore, if the
necessary analog bandwidth is not available, the signal cannot be sent.
In digital signaling all information is sent as bits, regardless of the kind of information it is. Voice,
video, and data all become streams of bits when they are prepared for transmission over digital
media. This type of transmission gives digital bandwidth an important advantage over analog
bandwidth. Unlimited amounts of information can be sent over the smallest or lowest bandwidth
digital channel. Regardless of how long it takes for the digital information to arrive at its destination
and be reassembled, it can be viewed, listened to, read, or processed in its original form.
It is important to understand the differences and similarities between digital and analog bandwidth.
Both types of bandwidth are regularly encountered in the field of information technology. However,
because this course is concerned primarily with digital networking, the term ‘bandwidth’ will refer
to digital bandwidth.
 2.3.1 Using layers to analyze problems in a flow of materials
The concept of layers is used to describe communication from one computer to another. Figure
shows a set of questions that are related to flow, which is defined as the motion through a system of
either physical or logical objects. These questions show how the concept of layers helps describe
the details of the flow process. This process could be any kind of flow, from the flow of traffic on a
highway system to the flow of data through a network. Figure shows several examples of flow
and ways that the flow process can be broken down into details or layers.
A conversation between two people provides a good opportunity to use a layered approach to
analyze information flow. In a conversation, each person wishing to communicate begins by
creating an idea. Then a decision is made on how to properly communicate the idea. For example, a
person could decide to speak, sing or shout, and what language to use. Finally the idea is delivered.
For example, the person creates the sound which carries the message.
This process can be broken into separate layers that may be applied to all conversations. The top
layer is the idea that will be communicated. The middle layer is the decision on how the idea is to
be communicated. The bottom layer is the creation of sound to carry the communication.
The same method of layering explains how a computer network distributes information from a
source to a destination. When computers send information through a network, all communications
originate at a source then travel to a destination.
The information that travels on a network is generally referred to as data or a packet. A packet is a
logically grouped unit of information that moves between computer systems. As the data passes
between layers, each layer adds additional information that enables effective communication with
the corresponding layer on the other computer.
The OSI and TCP/IP models have layers that explain how data is communicated from one computer
to another. The models differ in the number and function of the layers. However, each model can be
used to help describe and provide details about the flow of information from a source to a destination.
2.3.2 Using layers to describe data communication
In order for data packets to travel from a source to a destination on a network, it is important that all
the devices on the network speak the same language or protocol. A protocol is a set of rules that
make communication on a network more efficient. For example, while flying an airplane, pilots
obey very specific rules for communication with other airplanes and with air traffic control.
A data communications protocol is a set of rules or an agreement that determines the format and
transmission of data.
Layer 4 on the source computer communicates with Layer 4 on the destination computer. The
rules and conventions used for this layer are known as Layer 4 protocols. It is important to
remember that protocols prepare data in a linear fashion. A protocol in one layer performs a certain
set of operations on data as it prepares the data to be sent over the network. The data is then passed
to the next layer where another protocol performs a different set of operations.
Once the packet has been sent to the destination, the protocols undo the construction of the packet
that was done on the source side. This is done in reverse order. The protocols for each layer on the
destination return the information to its original form, so the application can properly read the data.

 2.3.3 OSI model
The early development of networks was disorganized in many ways. The early 1980s saw
tremendous increases in the number and size of networks. As companies realized the advantages of
using networking technology, networks were added or expanded almost as rapidly as new network
technologies were introduced.
By the mid-1980s, these companies began to experience problems from the rapid expansion. Just as
people who do not speak the same language have difficulty communicating with each other, it was
difficult for networks that used different specifications and implementations to exchange
information. The same problem occurred with the companies that developed private or proprietary
networking technologies. Proprietary means that one or a small group of companies controls all
usage of the technology. Networking technologies strictly following proprietary rules could not
communicate with technologies that followed different proprietary rules.
To address the problem of network incompatibility, the International Organization for
Standardization (ISO) researched networking models like Digital Equipment Corporation net
(DECnet), Systems Network Architecture (SNA), and TCP/IP in order to find a generally applicable
set of rules for all networks. Using this research, the ISO created a network model that helps
vendors create networks that are compatible with other networks.
The Open System Interconnection (OSI) reference model released in 1984 was the descriptive
network model that the ISO created. It provided vendors with a set of standards that ensured greater
compatibility and interoperability among various network technologies produced by companies
around the world.
The OSI reference model has become the primary model for network communications. Although
there are other models in existence, most network vendors relate their products to the OSI reference
model. This is especially true when they want to educate users on the use of their products. It is
considered the best tool available for teaching people about sending and receiving data on a network.
 2.3.4 OSI layers
The OSI reference model is a framework that is used to understand how information travels
throughout a network. The OSI reference model explains how packets travel through the various
layers to another device on a network, even if the sender and destination have different types of
network media.
In the OSI reference model, there are seven numbered layers, each of which illustrates a particular
network function. Dividing the network into seven layers provides the following advantages:
It breaks network communication into smaller, more manageable parts.
It standardizes network components to allow multiple vendor development and support.
It allows different types of network hardware and software to communicate with each other.
It prevents changes in one layer from affecting other layers.
It divides network communication into smaller parts to make learning it easier to understand.
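For reference, the seven layers themselves (named in the figure that accompanies this section) can be written down as a simple lookup table. This is only a mnemonic sketch, not part of any protocol implementation:

```python
# The seven layers of the OSI reference model, numbered from top to bottom.
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",
    2: "Data Link",
    1: "Physical",
}

def layer_name(number: int) -> str:
    """Return the OSI layer name for a layer number from 1 to 7."""
    return OSI_LAYERS[number]
```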
 2.3.5 Peer-to-peer communications
In order for data to travel from the source to the destination, each layer of the OSI model at the
source must communicate with its peer layer at the destination. This form of communication is
referred to as peer-to-peer. During this process, the protocols of each layer exchange information,
called protocol data units (PDUs). Each layer of communication on the source computer
communicates with a layer-specific PDU, and with its peer layer on the destination computer, as
illustrated in the figure.
Data packets on a network originate at a source and then travel to a destination. Each layer depends
on the service function of the OSI layer below it. To provide this service, the lower layer uses
encapsulation to put the PDU from the upper layer into its data field; then it adds whatever headers
and trailers the layer needs to perform its function. Next, as the data moves down through the layers
of the OSI model, additional headers and trailers are added. After Layers 7, 6, and 5 have added
their information, Layer 4 adds more information. This grouping of data, the Layer 4 PDU, is called
a segment.
The network layer provides a service to the transport layer, and the transport layer presents data to
the internetwork subsystem. The network layer has the task of moving the data through the
internetwork. It accomplishes this task by encapsulating the data and attaching a header creating a
packet (the Layer 3 PDU). The header contains information required to complete the transfer, such
as source and destination logical addresses.
The data link layer provides a service to the network layer. It encapsulates the network layer
information in a frame (the Layer 2 PDU). The frame header contains information (for example,
physical addresses) required to complete the data link functions. The data link layer provides a
service to the network layer by encapsulating the network layer information in a frame.
The physical layer also provides a service to the data link layer. The physical layer encodes the data
link frame into a pattern of 1s and 0s (bits) for transmission on the medium (usually a wire) at Layer 1.
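The PDU names introduced in this section can be summarized in a small table. This is a minimal review sketch, not protocol code; at Layers 7, 6, and 5 the PDU is simply referred to as data:

```python
# PDU name produced at each encapsulation stage of the OSI model.
PDU_BY_LAYER = {
    4: "segment",  # transport layer
    3: "packet",   # network layer
    2: "frame",    # data link layer
    1: "bits",     # physical layer
}
```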
 2.3.6 TCP/IP model
The historical and technical standard of the Internet is the TCP/IP model. The U.S. Department of
Defense (DoD) created the TCP/IP reference model, because it wanted to design a network that
could survive any conditions, including a nuclear war. In a world connected by different types of
communication media such as copper wires, microwaves, optical fibers and satellite links, the DoD
wanted transmission of packets every time and under any conditions. This very difficult design
problem brought about the creation of the TCP/IP model.
Unlike the proprietary networking technologies mentioned earlier, TCP/IP was developed as an
open standard. This meant that anyone was free to use TCP/IP. This helped speed up the
development of TCP/IP as a standard.
The TCP/IP model has the following four layers:
Application layer
Transport layer
Internet layer
Network access layer
Although some of the layers in the TCP/IP model have the same name as layers in the OSI model,
the layers of the two models do not correspond exactly. Most notably, the application layer has
different functions in each model.
The designers of TCP/IP felt that the application layer should include the OSI session and
presentation layer details. They created an application layer that handles issues of representation,
encoding, and dialog control.
The transport layer deals with the quality of service issues of reliability, flow control, and error
correction. One of its protocols, the Transmission Control Protocol (TCP), provides excellent and
flexible ways to create reliable, well-flowing, low-error network communications.
TCP is a connection-oriented protocol. It maintains a dialogue between source and destination while
packaging application layer information into units called segments. Connection-oriented does not
mean that a circuit exists between the communicating computers. It does mean that Layer 4
segments travel back and forth between two hosts to acknowledge the connection exists logically
for some period.
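The idea that a logical connection exists before any application data flows can be demonstrated with a short, self-contained sketch using the operating system's TCP sockets on the loopback interface. The echo behavior is purely illustrative, and the port number is chosen by the OS:

```python
import socket
import threading

def echo_once(server_sock: socket.socket) -> None:
    """Accept one connection and echo back whatever the peer sends."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def tcp_round_trip(message: bytes) -> bytes:
    """Send a message over a local TCP connection and return the echoed reply."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    server.listen(1)
    threading.Thread(target=echo_once, args=(server,), daemon=True).start()

    # connect() performs the TCP handshake: the logical connection exists
    # before any application data is exchanged.
    with socket.create_connection(server.getsockname()) as client:
        client.sendall(message)
        reply = client.recv(1024)
    server.close()
    return reply
```

Here the connection setup happens first; only afterward do segments carrying application data travel back and forth between the two hosts.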
The purpose of the Internet layer is to divide TCP segments into packets and send them from any
network. The packets arrive at the destination network independent of the path they took to get
there. The specific protocol that governs this layer is called the Internet Protocol (IP). Best path
determination and packet switching occur at this layer.
The relationship between IP and TCP is an important one. IP can be thought to point the way for the
packets, while TCP provides a reliable transport.
The name of the network access layer is very broad and somewhat confusing. It is also known as
the host-to-network layer. This layer is concerned with all of the components, both physical and
logical, that are required to make a physical link. It includes the networking technology details,
including all the details in the OSI physical and data link layers.
The figure illustrates some of the common protocols specified by the TCP/IP reference model layers.
Some of the most commonly used application layer protocols include the following:
File Transfer Protocol (FTP)
Hypertext Transfer Protocol (HTTP)
Simple Mail Transfer Protocol (SMTP)
Domain Name System (DNS)
Trivial File Transfer Protocol (TFTP)
The common transport layer protocols include:
Transmission Control Protocol (TCP)
User Datagram Protocol (UDP)
The primary protocol of the Internet layer is:
Internet Protocol (IP)
The network access layer refers to any particular technology used on a specific network.
Regardless of which network application services are provided and which transport protocol is used,
there is only one Internet protocol, IP. This is a deliberate design decision. IP serves as a universal
protocol that allows any computer anywhere to communicate at any time.
A comparison of the OSI model and the TCP/IP model will point out some similarities and differences.
Similarities include:
Both have layers.
Both have application layers, though they include very different services.
Both have comparable transport and network layers.
Both models need to be known by networking professionals.
Both assume packets are switched. This means that individual packets may take different paths to
reach the same destination. This is contrasted with circuit-switched networks where all the packets
take the same path.
Differences include:
TCP/IP combines the presentation and session layer issues into its application layer.
TCP/IP combines the OSI data link and physical layers into the network access layer.
TCP/IP appears simpler because it has fewer layers.
TCP/IP protocols are the standards around which the Internet developed, so the TCP/IP model gains
credibility just because of its protocols. In contrast, networks are not usually built on the OSI
protocol, even though the OSI model is used as a guide.
Although TCP/IP protocols are the standards with which the Internet has grown, this curriculum
will use the OSI model for the following reasons:
It is a generic, protocol-independent standard.
It has more details, which make it more helpful for teaching and learning.
It has more details, which can be helpful when troubleshooting.
Networking professionals differ in their opinions on which model to use. Due to the nature of the
industry it is necessary to become familiar with both. Both the OSI and TCP/IP models will be
referred to throughout the curriculum. The focus will be on the following:
TCP as an OSI Layer 4 protocol
IP as an OSI Layer 3 protocol
Ethernet as a Layer 2 and Layer 1 technology
Remember that there is a difference between a model and an actual protocol that is used in
networking. The OSI model will be used to describe TCP/IP protocols.
 2.3.7 Detailed encapsulation process
All communications on a network originate at a source, and are sent to a destination. The
information sent on a network is referred to as data or data packets. If one computer (host A) wants
to send data to another computer (host B), the data must first be packaged through a process called
encapsulation. Encapsulation wraps data with the necessary protocol information before network
transit.
Therefore, as the data packet moves down through the layers of the OSI model, it receives headers,
trailers, and other information.
To see how encapsulation occurs, examine the manner in which data travels through the layers as
illustrated in the figure. Once the data is sent from the source, it travels through the application layer
down through the other layers. The packaging and flow of the data that is exchanged goes through
changes as the layers perform their services for end users. As illustrated in the figure, networks must
perform the following five conversion steps in order to encapsulate data:
Build the data.
As a user sends an e-mail message, its alphanumeric characters are converted to data that can travel
across the internetwork.
Package the data for end-to-end transport.
The data is packaged for internetwork transport. By using segments, the transport function ensures
that the message hosts at both ends of the e-mail system can reliably communicate.
Add the network IP address to the header.
The data is put into a packet or datagram that contains a packet header with source and destination
logical addresses. These addresses help network devices send the packets across the network along
a chosen path.
Add the data link layer header and trailer.
Each network device must put the packet into a frame. The frame allows connection to the next
directly-connected network device on the link. Each device in the chosen network path requires
framing in order for it to connect to the next device.
Convert to bits for transmission.
The frame must be converted into a pattern of 1s and 0s (bits) for transmission on the medium. A
clocking function enables the devices to distinguish these bits as they travel across the medium. The
medium on the physical internetwork can vary along the path used. For example, the e-mail
message can originate on a LAN, cross a campus backbone, and go out a WAN link until it reaches
its destination on another remote LAN.
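The five steps can be mimicked with placeholder strings. The header and trailer labels below (TCP_HDR, IP_HDR, and so on) are invented stand-ins, not real protocol fields; the point is the order in which the layers wrap the data:

```python
def encapsulate(user_data: str) -> str:
    """Walk the five conversion steps using illustrative placeholder labels."""
    data = user_data                                  # 1. build the data
    segment = "TCP_HDR|" + data                       # 2. package for end-to-end transport
    packet = "IP_HDR|" + segment                      # 3. add network (logical) addresses
    frame = "MAC_HDR|" + packet + "|MAC_TRL"          # 4. add data link header and trailer
    bits = "".join(f"{byte:08b}" for byte in frame.encode())  # 5. convert to bits
    return bits
```

At the destination the same wrapping is removed in reverse order: the bits are grouped back into a frame, the frame header and trailer are stripped to recover the packet, and so on up the stack until the original data reaches the application.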

Module 3: Networking Media
Copper cable is used in almost every LAN. Many different types of copper cable are available, with
each type having advantages and disadvantages. Proper selection of cabling is key to efficient
network operation. Because copper carries information using electrical current, it is important to
understand some basics of electricity when planning and installing a network.
Optical fiber is the most frequently used medium for the longer, high bandwidth, point-to-point
transmissions required on LAN backbones and on WANs. Using optical media, light is used to
transmit data through thin glass or plastic fiber. Electrical signals cause a fiber-optic transmitter to
generate the light signals sent down the fiber. The receiving host receives the light signals and
converts them to electrical signals at the far end of the fiber. However, there is no electricity in the
fiber-optic cable itself. In fact, the glass used in fiber-optic cable is a very good electrical insulator.
Physical connectivity allowed an increase in productivity by allowing the sharing of printers,
servers, and software. Traditional networked systems require that the workstation remain
stationary, permitting moves only within the limits of the media and the office area.
The introduction of wireless technology removes these restraints and brings true portability to the
computing world. Currently, wireless technology does not provide the high-speed transfers,
security, or uptime reliability of cabled networks. However, the flexibility of wireless has justified
the trade-off.
Administrators often consider wireless when installing a new network or when upgrading an
existing network. A simple wireless network could be working just a few minutes after the
workstations are turned on. Connectivity to the Internet is provided through a wired connection,
router, cable or DSL modem and a wireless access point that acts as a hub for the wireless nodes. In
a residential or small office environment these devices may be combined into a single unit.
3.1.1 Atoms and electrons
All matter is composed of atoms. The Periodic Table of Elements lists all known types of atoms and
their properties. The atom is comprised of:
Electrons – Particles with a negative charge that orbit the nucleus
Nucleus – The center part of the atom, composed of protons and neutrons
Protons – Particles with a positive charge
Neutrons – Particles with no charge (neutral)
To help explain the electrical properties of elements/materials, locate helium (He) on the periodic
table. Helium has an atomic number of 2, which means that helium has 2 protons and 2 electrons.
It has an atomic weight of 4. By subtracting the atomic number (2) from the atomic weight (4), it is
learned that helium also has 2 neutrons.
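The subtraction used for helium generalizes to any element, as a one-line sketch shows:

```python
def neutron_count(atomic_weight: int, atomic_number: int) -> int:
    """Neutrons = atomic weight (protons + neutrons) minus atomic number (protons)."""
    return atomic_weight - atomic_number

helium_neutrons = neutron_count(4, 2)  # helium: 4 - 2 = 2 neutrons
```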
The Danish physicist, Niels Bohr, developed a simplified model to illustrate the atom. This
illustration shows the model for a helium atom. If the protons and neutrons of an atom were the size
of an adult (#5) soccer ball in the middle of a soccer field, the only thing smaller than the ball would
be the electrons. The electrons would be the size of cherries and would be orbiting near the outer-
most seats of the stadium. In other words, the overall volume of this atom, including the electron
path, would be about the size of the stadium. The nucleus of the atom where the protons and
neutrons exist would be the size of the soccer ball.
One of the laws of nature, called Coulomb's Electric Force Law, states that opposite charges react to
each other with a force that causes them to be attracted to each other. Like charges react to each
other with a force that causes them to repel each other. In the case of opposite and like charges, the
force increases as the charges move closer to each other. The force is inversely proportional to the
square of the separation distance. When particles get extremely close together, nuclear force
overrides the repulsive electrical force and keeps the nucleus together. That is why a nucleus does
not fly apart.
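Coulomb's force law can be written out directly. The charges and distances below are arbitrary illustration values; note how halving the separation quadruples the force, which is the inverse-square behavior described above:

```python
def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Coulomb's law: F = k * q1 * q2 / r**2 (charges in coulombs, distance in meters).
    A positive result means repulsion (like charges); negative means attraction."""
    k = 8.99e9  # Coulomb constant, in N*m^2/C^2
    return k * q1 * q2 / r ** 2

near = coulomb_force(1e-6, 1e-6, 0.5)  # same charges at half the distance...
far = coulomb_force(1e-6, 1e-6, 1.0)   # ...feel four times the force
```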
Examine Bohr's model of the helium atom. If Coulomb's law is true, and if Bohr's model describes
helium atoms as stable, then there must be other laws of nature at work. How can they both be true?
Coulomb's Law – Opposite charges attract and like charges repel.
Bohr’s model – Protons are positive charges and electrons are negative charges. There is more than
1 proton in the nucleus.
Electrons stay in orbit, even though the protons attract the electrons. The electrons have just enough
velocity to keep orbiting and not be pulled into the nucleus, just like the moon around the Earth.
Protons do not fly apart from each other because of a nuclear force that is associated with neutrons.
The nuclear force is an incredibly strong force that acts as a kind of glue to hold the protons together.
The protons and neutrons are bound together by a very powerful force. However, the electrons are
bound to their orbit around the nucleus by a weaker force. Electrons in certain atoms, such as
metals, can be pulled free from the atom and made to flow. This sea of electrons, loosely bound to
the atoms, is what makes electricity possible. Electricity is a free flow of electrons.
Loosened electrons that stay in one place, without moving, and with a negative charge, are called
static electricity. If these static electrons have an opportunity to jump to a conductor, this can lead
to electrostatic discharge (ESD). A discussion on conductors follows later in this chapter.
ESD, though usually harmless to people, can create serious problems for sensitive electronic
equipment. A static discharge can randomly damage computer chips, data, or both. The logical
circuitry of computer chips is extremely sensitive to electrostatic discharge. Use caution when
working inside a computer, router, and so on.
Atoms, or groups of atoms called molecules, can be referred to as materials. Materials are classified
as belonging to one of three groups depending on how easily electricity, or free electrons, flows
through them.
The basis for all electronic devices is the knowledge of how insulators, conductors and
semiconductors control the flow of electrons and work together in various combinations.
 3.1.2 Voltage
Voltage is sometimes referred to as electromotive force (EMF). EMF is related to an electrical
force, or pressure, that occurs when electrons and protons are separated. The force that is created
pushes toward the opposite charge and away from the like charge. This process occurs in a battery,
where chemical action causes electrons to be freed from the negative terminal of the battery. The
electrons then travel to the opposite, or positive, terminal through an EXTERNAL circuit. The
electrons do not travel through the battery itself. Remember that the flow of electricity is really the
flow of electrons. Voltage can also be created in three other ways. The first is by friction, or static
electricity. The second way is by magnetism, or electric generator. The last way that voltage can be
created is by light, or solar cell.
Voltage is related to the electrical fields emanating from the charges associated with particles such
as protons, electrons, etc. Voltage is represented by the letter V, and sometimes by the letter E, for
electromotive force. The unit of measurement for voltage is the volt (V). A volt is defined as the
amount of work, per unit charge, needed to separate the charges.
 3.1.3 Resistance and impedance
The materials through which current flows offer varying amounts of opposition, or resistance to the
movement of the electrons. The materials that offer very little, or no, resistance, are called
conductors. Those materials that do not allow the current to flow, or severely restrict its flow, are
called insulators. The amount of resistance depends on the chemical composition of the materials.
All materials that conduct electricity have a measure of resistance to the flow of electrons through
them. These materials also have other effects called capacitance and inductance associated with the
flow of electrons. These three characteristics together comprise impedance, which is similar to, and
includes, resistance.
The term attenuation is important when learning about networks. Attenuation refers to the resistance
to the flow of electrons, and it explains why a signal becomes degraded as it travels along the conduit.
The letter R represents resistance. The unit of measurement for resistance is the ohm (Ω). The
symbol comes from the Greek letter omega (Ω).
Electrical insulators, or insulators, are materials that allow electrons to flow through them with great
difficulty, or not at all. Examples of electrical insulators include plastic, glass, air, dry wood, paper,
rubber, and helium gas. These materials have very stable chemical structures, with orbiting
electrons tightly bound within the atoms.
Electrical conductors, usually just called conductors, are materials that allow electrons to flow
through them with great ease. They flow easily because the outermost electrons are bound very
loosely to the nucleus, and are easily freed. At room temperature, these materials have a large
number of free electrons that can provide conduction. The introduction of voltage causes the free
electrons to move, resulting in a current flow.
The periodic table categorizes some groups of atoms by listing them in the form of columns. The
atoms in each column belong to particular chemical families. Although they may have different
numbers of protons, neutrons, and electrons, their outermost electrons have similar orbits and
behave similarly when interacting with other atoms and molecules. The best conductors are metals,
such as copper (Cu), silver (Ag), and gold (Au), because they have electrons that are easily freed.
Other conductors include solder, a mixture of lead (Pb) and tin (Sn), and water with ions. An ion is
an atom that has more electrons, or fewer electrons, than the number of protons in the nucleus of the
atom. The human body is made of approximately 70% water with ions, which means that the human
body is a conductor.
Semiconductors are materials where the amount of electricity they conduct can be precisely
controlled. These materials are listed together in one column of the periodic chart. Examples
include carbon (C), germanium (Ge), and the compound gallium arsenide (GaAs). The most important
semiconductor which makes the best microscopic-sized electronic circuits is silicon (Si).
Silicon is very common and can be found in sand, glass, and many types of rocks. The region
around San Jose, California is known as Silicon Valley because the computer industry, which
depends on silicon microchips, started in that area.
 3.1.4 Current
Electrical current is the flow of charges created when electrons move. In electrical circuits, the
current is caused by a flow of free electrons. When voltage, or electrical pressure, is applied and
there is a path for the current, electrons move from the negative terminal along the path to the
positive terminal. The negative terminal repels the electrons and the positive terminal attracts the
electrons. The letter “I” represents current. The unit of measurement for current is the ampere
(amp). An ampere is defined as the amount of charge, one coulomb, that passes a point along a path
in one second.
If amperage, or current, can be thought of as the amount or volume of electron traffic that is flowing,
then voltage can be thought of as the electrical pressure that pushes the traffic along. Voltage
multiplied by amperage equals wattage. Electrical devices such as light bulbs, motors, and computer
power supplies are rated in terms of watts. A watt measures how much power a device consumes or
produces.
It is the current or amperage in an electrical circuit that really does the work. As an example, static
electricity has very high voltage, so much that it can jump a gap of an inch or more. However, it has
very low amperage and as a result can create a shock but not permanent injury. The starter motor in
an automobile operates at a relatively low 12 volts but requires very high amperage to generate
enough energy to turn over the engine. Lightning has very high voltage and high amperage and can
do severe damage or injury.
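The voltage and amperage figures in the examples above combine into power as P = V * I. The 150 A starter-motor draw and the static-spark numbers below are assumed, typical magnitudes, not measured values:

```python
def power_watts(volts: float, amps: float) -> float:
    """Electrical power in watts: P = V * I."""
    return volts * amps

starter_motor = power_watts(12, 150)        # low voltage, high amperage: 1800 W
static_spark = power_watts(20_000, 0.0005)  # high voltage, tiny amperage: 10 W
```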
 3.1.5 Circuits
Current flows in closed loops called circuits. These circuits must be composed of conducting
materials, and must have sources of voltage. Voltage causes current to flow, while resistance and
impedance oppose it. Current consists of electrons flowing away from negative terminals and
towards positive terminals. Knowing these facts allows people to control a flow of current.
Electricity will naturally flow to the earth if there is a path. Current also flows along the path of
least resistance. If a human body provides the path of least resistance, the current will flow through
it. When an electric appliance has a plug with three prongs, one of the three prongs serves as the
ground, or zero volts. The ground provides a conducting path for the electrons to flow to the earth
because the resistance traveling through the body would be greater than the resistance flowing
directly to the ground.
Ground typically means the zero volts level, when making electrical measurements. Voltage is
created by the separation of charges, which means that voltage measurements must be made
between two points.
A water analogy helps to explain concepts of electricity. The higher the water and the greater the
pressure, the more the water will flow. The water current also depends on the size of the space it
must flow through. Similarly, the higher the voltage and the greater the electrical pressure, the more
current will be produced. The electric current then encounters resistance that, like the water tap,
reduces the flow. If the electric current is in an AC circuit, then the amount of current will depend
on how much impedance is present. If the electric current is in a DC circuit, then the amount of
current will depend on how much resistance is present. The pump is like a battery. It provides
pressure to keep the flow moving.
The relationship among voltage, resistance, and current is voltage (V) = current (I) multiplied by
resistance (R). In other words, V=I*R. This is Ohm’s law, named after the scientist who explored
these issues.
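Ohm's law and its rearranged form can be sketched in a few lines; the 12 V and 4 Ω figures are arbitrary example values:

```python
def voltage(current_amps: float, resistance_ohms: float) -> float:
    """Ohm's law: V = I * R."""
    return current_amps * resistance_ohms

def current(volts: float, resistance_ohms: float) -> float:
    """Rearranged form: I = V / R."""
    return volts / resistance_ohms

# A 12 V source across a 4-ohm resistance drives 3 A of current.
i = current(12, 4)   # 3.0
v = voltage(i, 4)    # back to 12.0
```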
Two ways in which current flows are Alternating Current (AC) and Direct Current (DC).
Alternating current (AC) and voltages vary over time by changing their polarity, or direction. AC
flows in one direction, then reverses its direction and flows in the other direction, and then repeats
the process. AC voltage is positive at one terminal, and negative at the other. Then the AC voltage
reverses its polarity, so that the positive terminal becomes negative, and the negative terminal
becomes positive. This process repeats itself continuously.
DC always flows in the same direction, and DC voltages always have the same polarity. One
terminal is always positive, and the other is always negative. They do not change or reverse.
An oscilloscope is an electronic device used to measure electrical signals relative to time. An
oscilloscope graphs the electrical waves, pulses, and patterns. An oscilloscope has an x-axis that
represents time, and a y-axis that represents voltage. There are usually two y-axis voltage inputs so
that two waves can be observed and measured at the same time.
Power lines carry electricity in the form of AC because it can be delivered efficiently over large
distances. DC can be found in flashlight batteries, car batteries, and as power for the microchips on
the motherboard of a computer, where it only needs to go a short distance.
Electrons flow in closed circuits, or complete loops. The figure shows a simple circuit. The chemical
processes in the battery cause charges to build up. This provides a voltage, or electrical pressure,
that enables electrons to flow through various devices. The lines represent a conductor, which is
usually copper wire. Think of a switch as two ends of a single wire that can be opened or broken to
prevent electrons from flowing. When the two ends are closed, fixed, or shorted, electrons are
allowed to flow. Finally, a light bulb provides resistance to the flow of electrons, causing the
electrons to release energy in the form of light. The circuits involved in networking use a much
more complex version of this very simple circuit.
For AC and DC electrical systems, the flow of electrons is always from a negatively charged source
to a positively charged source. However, for the controlled flow of electrons to occur, a complete
circuit is required. Remember, electrical current follows the path of least resistance. Figure shows
part of the electrical circuit that brings power to a home or office.
3.1.6 Cable specifications
Cables have different specifications and expectations pertaining to performance:
What speeds for data transmission can be achieved using a particular type of cable? The speed of bit
transmission through the cable is extremely important. The speed of transmission is affected by the
kind of conduit used.
What kind of transmission is being considered? Will the transmissions be digital or will they be
analog-based? Digital or baseband transmission and analog-based or broadband transmission are the
two choices.
How far can a signal travel through a particular type of cable before attenuation of that signal
becomes a concern? In other words, will the signal become so degraded that the recipient device
might not be able to accurately receive and interpret the signal by the time the signal reaches that
device? The distance the signal travels through the cable directly affects attenuation of the signal.
Degradation of the signal is directly related to the distance the signal travels and the type of cable used.
Some examples of Ethernet specifications which relate to cable type include:
10BASE-T refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or
digitally interpreted. The T stands for twisted pair.
10BASE5 refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or
digitally interpreted. The 5 represents the capability of the cable to allow the signal to travel for
approximately 500 meters before attenuation could disrupt the ability of the receiver to
appropriately interpret the signal being received. 10BASE5 is often referred to as Thicknet.
Thicknet is actually a type of network, while 10BASE5 is the cabling used in that network.
10BASE2 refers to the speed of transmission at 10 Mbps. The type of transmission is baseband, or
digitally interpreted. The 2, in 10BASE2, represents the capability of the cable to allow the signal to
travel for approximately 200 meters, before attenuation could disrupt the ability of the receiver to
appropriately interpret the signal being received. 10BASE2 is often referred to as Thinnet. Thinnet
is actually a type of network, while 10BASE2 is the cabling used in that network.
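The naming pattern described above can be sketched as a small parser. This helper is purely illustrative and not part of any standard; note that the digit codes give only approximate segment lengths (10BASE2 segments are about 185 m in practice):

```python
# Illustrative decoder for the <speed>BASE<medium/length-code> pattern.
def decode(spec: str) -> dict:
    speed, medium = spec.upper().replace("-", "").split("BASE")
    info = {"speed_mbps": int(speed), "signaling": "baseband"}
    if medium == "T":
        info["medium"] = "twisted pair"
    elif medium.isdigit():
        # Digit codes indicate roughly the segment length in hundreds of meters.
        info["segment_m"] = int(medium) * 100
    return info

print(decode("10BASE-T"))  # 10 Mbps, baseband, twisted pair
print(decode("10BASE5"))   # 10 Mbps, baseband, ~500 m segments (Thicknet)
print(decode("10BASE2"))   # 10 Mbps, baseband, ~200 m segments (Thinnet)
```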
3.1.7 Coaxial cable
Coaxial cable is made of two conducting elements: a hollow outer cylindrical conductor that surrounds a single inner wire. The element located in the center of the cable is a copper conductor. Surrounding the copper conductor is a layer of flexible insulation. Over this
insulating material is a woven copper braid or metallic foil that acts as the second wire in the circuit
and as a shield for the inner conductor. This second layer, or shield, reduces the amount of outside electromagnetic interference. Covering this shield is the cable jacket.
For LANs, coaxial cable offers several advantages. It can be run longer distances than shielded
twisted pair, STP, and unshielded twisted pair, UTP, cable without the need for repeaters. Repeaters
regenerate the signals in a network so that they can cover greater distances. Coaxial cable is less
expensive than fiber-optic cable, and the technology is well known. It has been used for many years
for many types of data communication, including cable television.
When working with cable, it is important to consider its size. As the thickness of the cable
increases, so does the difficulty in working with it. Remember that cable must be pulled through
existing conduits and troughs that are limited in size. Coaxial cable comes in a variety of sizes. The
largest diameter was specified for use as Ethernet backbone cable, because it has a greater
transmission length and noise rejection characteristics. This type of coaxial cable is frequently
referred to as thicknet. As its nickname suggests, this type of cable can be too rigid to install easily
in some situations. Generally, the more difficult the network media is to install, the more expensive
it is to install. Coaxial cable is more expensive to install than twisted-pair cable. Thicknet cable is
almost never used anymore, except for special purpose installations.
In the past, ‘thinnet’ coaxial cable with an outside diameter of only 0.35 cm was used in Ethernet
networks. It was especially useful for cable installations that required the cable to make many twists
and turns. Since thinnet was easier to install, it was also cheaper to install. This led some people to
refer to it as cheapernet. The outer copper or metallic braid in coaxial cable comprises half the
electric circuit and special care must be taken to ensure a solid electrical connection at both ends
resulting in proper grounding. Poor shield connection is one of the biggest sources of connection
problems in the installation of coaxial cable. Connection problems result in electrical noise that
interferes with signal transmission on the networking media. For this reason, thinnet is no longer commonly used nor supported by the latest Ethernet standards (100 Mbps and higher).
3.1.8 STP cable
Shielded twisted-pair cable (STP) combines the techniques of shielding, cancellation, and twisting
of wires. Each pair of wires is wrapped in metallic foil. The four pairs of wires are wrapped in an
overall metallic braid or foil. It is usually 150-Ohm cable. As specified for use in Ethernet network
installations, STP reduces electrical noise within the cable such as pair to pair coupling and
crosstalk. STP also reduces electronic noise from outside the cable, for example electromagnetic
interference (EMI) and radio frequency interference (RFI). Shielded twisted-pair cable shares many
of the advantages and disadvantages of unshielded twisted-pair cable (UTP). STP affords greater
protection from all types of external interference, but is more expensive and difficult to install than UTP.
A new hybrid of UTP with traditional STP is Screened UTP (ScTP), also known as Foil Twisted
Pair (FTP). ScTP is essentially UTP wrapped in a metallic foil shield, or screen. It is usually 100-
Ohm or 120-Ohm cable.
The metallic shielding materials in STP and ScTP need to be grounded at both ends. If improperly
grounded or if there are any discontinuities in the entire length of the shielding material, STP and
ScTP become susceptible to major noise problems. They are susceptible because they allow the
shield to act like an antenna picking up unwanted signals. However, this effect works both ways.
Not only does the shield prevent incoming electromagnetic waves from causing noise on data wires,
but it also minimizes the outgoing radiated electromagnetic waves. These waves could cause noise
in other devices. STP and ScTP cable cannot be run as far as other networking media, such as
coaxial cable or optical fiber, without the signal being repeated. More insulation and shielding
combine to considerably increase the size, weight, and cost of the cable. The shielding materials
make terminations more difficult and susceptible to poor workmanship. However, STP and ScTP
still have a role, especially in Europe.
3.1.9 UTP cable
Unshielded twisted-pair cable (UTP) is a four-pair wire medium used in a variety of networks.
Each of the 8 individual copper wires in the UTP cable is covered by insulating material. In
addition, each pair of wires is twisted around each other. This type of cable relies solely on the
cancellation effect produced by the twisted wire pairs, to limit signal degradation caused by EMI
and RFI. To further reduce crosstalk between the pairs in UTP cable, the number of twists in the
wire pairs varies. Like STP cable, UTP cable must follow precise specifications as to how many
twists or braids are permitted per foot of cable.
TIA/EIA-568-A contains specifications governing cable performance. It calls for running two
cables, one for voice and one for data, to each outlet. Of the two cables, the one for voice must be
four-pair UTP. CAT 5 is the one most frequently recommended and implemented in installations today.
Unshielded twisted-pair cable has many advantages. It is easy to install and is less expensive than
other types of networking media. In fact, UTP costs less per meter than any other type of LAN
cabling. However, the real advantage is the size. Since it has such a small external diameter, UTP
does not fill up wiring ducts as rapidly as other types of cable. This can be an extremely important
factor to consider, particularly when installing a network in an older building. In addition, when
UTP cable is installed using an RJ-45 connector, potential sources of network noise are greatly
reduced and a good solid connection is practically guaranteed. There are disadvantages in using
twisted-pair cabling. UTP cable is more prone to electrical noise and interference than other types
of networking media, and the distance between signal boosts is shorter for UTP than it is for coaxial
and fiber optic cables.
UTP was once considered slower at transmitting data than other types of cable. This is no longer
true. In fact, today, UTP is considered the fastest copper-based media.
When communication occurs, the signal that is transmitted by the source needs to be understood by
the destination. This is true from both a software and physical perspective. The transmitted signal
needs to be properly received by the circuit connection designed to receive signals. The transmit pin
of the source needs to ultimately connect to the receiving pin of the destination. The following are
the types of cable connections used between internetwork devices.
In Figure , a LAN switch is connected to a computer. The cable that connects from the switch port
to the computer NIC port is called a straight-through cable.
In Figure , two switches are connected together. The cable that connects from one switch port to
another switch port is called a crossover cable.
In Figure , the cable that connects the RJ-45 adapter on the com port of the computer to the
console port of the router or switch is called a rollover cable.
The cables are defined by the type of connections, or pinouts, from one end to the other end of the
cable. See images two, four, and six. A technician can compare both ends of the same cable by
placing them next to each other, provided the cable has not yet been placed in a wall. The technician
observes the colors of the two RJ-45 connections by placing both ends with the clip placed into the
hand and the top of both ends of the cable pointing away from the technician. A straight through
cable should have both ends with identical color patterns. While comparing the ends of a cross-over
cable, the color of pins #1 and #2 will appear on the other end at pins #3 and #6, and vice-versa.
This occurs because the transmit and receive pins are in different locations. On a rollover cable, the
color combination from left to right on one end should be exactly opposite to the color combination
on the other end.
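The end-to-end comparison described above can be sketched in code. The color codes below are shorthand for the T568B/T568A wire colors, and the function is an illustration of the comparison procedure, not a wiring standard:

```python
# Classify a cable by comparing the color order at its two RJ-45 ends,
# as described in the text: identical = straight-through, pins 1<->3 and
# 2<->6 swapped = crossover, fully reversed = rollover.

def classify(end_a: list, end_b: list) -> str:
    if end_a == end_b:
        return "straight-through"
    if end_b == list(reversed(end_a)):
        return "rollover"
    swapped = end_a[:]
    swapped[0], swapped[2] = swapped[2], swapped[0]  # pins 1 <-> 3
    swapped[1], swapped[5] = swapped[5], swapped[1]  # pins 2 <-> 6
    if end_b == swapped:
        return "crossover"
    return "unknown"

# Shorthand for the T568B color order (white-orange, orange, white-green, ...).
t568b = ["wo", "o", "wg", "bl", "wbl", "g", "wbr", "br"]
t568a = ["wg", "g", "wo", "bl", "wbl", "o", "wbr", "br"]

print(classify(t568b, t568b))                  # straight-through
print(classify(t568b, t568a))                  # crossover
print(classify(t568b, list(reversed(t568b))))  # rollover
```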
3.2.1 The electromagnetic spectrum
The light used in optical fiber networks is one type of electromagnetic energy. When an electric
charge moves back and forth, or accelerates, a type of energy called electromagnetic energy is
produced. This energy in the form of waves can travel through a vacuum, the air, and through some
materials like glass. An important property of any energy wave is the wavelength.
Radio, microwaves, radar, visible light, x-rays, and gamma rays seem to be very different things.
However, they are all types of electromagnetic energy. If all the types of electromagnetic waves are
arranged in order from the longest wavelength down to the shortest wavelength, a continuum called
the electromagnetic spectrum is created.
The wavelength of an electromagnetic wave is determined by how frequently the electric charge
that generates the wave moves back and forth. If the charge moves back and forth slowly, the
wavelength it generates is a long wavelength. Visualize the movement of the electric charge as like
that of a stick in a pool of water. If the stick is moved back and forth slowly, it will generate ripples
in the water with a long wavelength between the tops of the ripples. If the stick is moved back and
forth more rapidly, the ripples will have a shorter wavelength.
Because electromagnetic waves are all generated in the same way, they share many of the same
properties. They all travel at a rate of 300,000 kilometers per second (186,283 miles per second)
through a vacuum.
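Because all electromagnetic waves travel at the same speed in a vacuum, wavelength and frequency are linked by the standard relation c = f × λ (this formula is not stated in the text above, but follows directly from it). A small sketch, using the fiber wavelengths discussed later in this chapter:

```python
# c = f * wavelength for any electromagnetic wave in a vacuum.
C = 300_000_000  # speed of light, meters per second (300,000 km/s)

def frequency_hz(wavelength_m: float) -> float:
    return C / wavelength_m

# The three wavelengths used in optical fiber (in nanometers):
for nm in (850, 1310, 1550):
    thz = frequency_hz(nm * 1e-9) / 1e12
    print(f"{nm} nm -> about {thz:.0f} THz")  # shorter wavelength, higher frequency
```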
The human eye senses only electromagnetic energy with wavelengths between 700
nanometers and 400 nanometers (nm). A nanometer is one billionth of a meter (0.000000001 meter)
in length. Electromagnetic energy with wavelengths between 700 and 400 nm is called visible light.
The longer wavelengths of light that are around 700 nm are seen as the color red. The shortest
wavelengths that are around 400 nm appear as the color violet. This part of the electromagnetic
spectrum is seen as the colors in a rainbow.
Wavelengths that are not visible to the human eye are used to transmit data over optical fiber. These
wavelengths are slightly longer than red light and are called infrared light. Infrared light is used in
TV remote controls. The wavelength of the light in optical fiber is either 850 nm, 1310 nm, or 1550
nm. These wavelengths were selected because they travel through optical fiber better than other wavelengths.
3.2.2 Ray model of light
When electromagnetic waves travel out from a source, they travel in straight lines. These straight
lines pointing out from the source are called rays.
Think of light rays as narrow beams of light like those produced by lasers. In the vacuum of empty
space, light travels continuously in a straight line at 300,000 kilometers per second. However, light
travels at different, slower speeds through other materials like air, water, and glass. When a light ray
called the incident ray, crosses the boundary from one material to another, some of the light energy
in the ray will be reflected back. That is why you can see yourself in window glass. The light that is
reflected back is called the reflected ray.
The light energy in the incident ray that is not reflected will enter the glass. The entering ray will be
bent at an angle from its original path. This ray is called the refracted ray. How much the incident
light ray is bent depends on the angle at which the incident ray strikes the surface of the glass and
the different rates of speed at which light travels through the two substances.
The bending of light rays at the boundary of two substances is the reason why light rays are able to
travel through an optical fiber even if the fiber curves in a circle.
The optical density of the glass determines how much the rays of light bend inside the glass. Optical
density refers to how much a light ray slows down when it passes through a substance. The greater
the optical density of a material, the more it slows light down from its speed in a vacuum. The ratio of the speed of light in a vacuum to the speed of light in a material is called the Index of Refraction.
Therefore, the measure of the optical density of a material is the index of refraction of that material.
A material with a large index of refraction is more optically dense and slows light down more than a material with a smaller index of refraction.
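The standard definition n = c / v (speed of light in a vacuum divided by its speed in the material) makes this concrete. The indices below are indicative values only; real indices vary with wavelength and composition:

```python
# Index of refraction: n = (speed in vacuum) / (speed in material),
# so the speed inside a material is c / n.
C = 300_000  # speed of light in a vacuum, km/s

def speed_in(material_index: float) -> float:
    """Speed of light in a material with the given index of refraction, km/s."""
    return C / material_index

print(speed_in(1.000))  # air: ~300000 km/s
print(speed_in(1.523))  # glass: ~196980 km/s (slower -> more optically dense)
print(speed_in(2.419))  # diamond: slower still
```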
For a substance like glass, the Index of Refraction, or the optical density, can be made larger by
adding chemicals to the glass. Making the glass very pure can make the index of refraction smaller.
The next lessons will provide further information about reflection and refraction, and their relation
to the design and function of optical fiber.
3.2.3 Reflection
When a ray of light (the incident ray) strikes the shiny surface of a flat piece of glass, some of the
light energy in the ray is reflected. The angle between the incident ray and a line perpendicular to
the surface of the glass at the point where the incident ray strikes the glass is called the angle of
incidence. The perpendicular line is called the normal. It is not a light ray but a tool to allow the
measurement of angles. The angle between the reflected ray and the normal is called the angle of
reflection. The Law of Reflection states that the angle of reflection of a light ray is equal to the
angle of incidence. In other words, the angle at which a light ray strikes a reflective surface
determines the angle that the ray will reflect off the surface.
 3.2.4 Refraction
When a light ray strikes the interface between two transparent materials, it divides into two parts.
Part of the light ray is reflected back into the first substance, with the angle of reflection equaling
the angle of incidence. The remaining energy in the light ray crosses the interface and enters into
the second substance.
If the incident ray strikes the glass surface at an exact 90-degree angle, the ray goes straight into the
glass. The ray is not bent. However, if the incident ray is not at an exact 90-degree angle to the
surface, then the transmitted ray that enters the glass is bent. The bending of the entering ray is
called refraction. How much the ray is refracted depends on the index of refraction of the two
transparent materials. If the light ray travels from a substance whose index of refraction is smaller,
into a substance where the index of refraction is larger, the refracted ray is bent towards the normal.
If the light ray travels from a substance where the index of refraction is larger into a substance
where the index of refraction is smaller, the refracted ray is bent away from the normal.
Consider a light ray moving at an angle other than 90 degrees through the boundary between glass
and a diamond. The glass has an index of refraction of about 1.523. The diamond has an index of
refraction of about 2.419. Therefore, the ray that continues into the diamond will be bent towards
the normal. When that light ray crosses the boundary between the diamond and the air at some
angle other than 90 degrees, it will be bent away from the normal. The reason for this is that air has a lower index of refraction, about 1.000, than the diamond.
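The amount of bending is quantified by Snell's law, n1·sin(θ1) = n2·sin(θ2), where the angles are measured from the normal. Snell's law itself is not stated in the text above, but it is the standard formula behind the glass-and-diamond example, using the indices just given:

```python
import math

def refracted_angle_deg(n1: float, n2: float, incident_deg: float) -> float:
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2); angles from the normal."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

# Glass (n ~ 1.523) into diamond (n ~ 2.419): the ray bends toward the normal.
print(refracted_angle_deg(1.523, 2.419, 30.0))  # ~18.3 degrees (less than 30)

# Diamond (n ~ 2.419) into air (n ~ 1.000): the ray bends away from the normal.
print(refracted_angle_deg(2.419, 1.000, 10.0))  # ~24.8 degrees (more than 10)
```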
3.2.5 Total internal reflection
A light ray that is being turned on and off to send data (1s and 0s) into an optical fiber must stay
inside the fiber until it reaches the far end. The ray must not refract into the material wrapped
around the outside of the fiber. The refraction would cause the loss of part of the light energy of the
ray. A design must be achieved for the fiber that will make the outside surface of the fiber act like a
mirror to the light ray moving through the fiber. If any light ray that tries to move out through the
side of the fiber were reflected back into the fiber at an angle that sends it towards the far end of the
fiber, this would be a good “pipe” or “wave guide” for the light waves.
The laws of reflection and refraction illustrate how to design a fiber that guides the light waves
through the fiber with a minimum energy loss. The following two conditions must be met for the
light rays in a fiber to be reflected back into the fiber without any loss due to refraction:
The core of the optical fiber has to have a larger index of refraction (n) than the material that
surrounds it. The material that surrounds the core of the fiber is called the cladding.
The angle of incidence of the light ray is greater than the critical angle for the core and its cladding.

When both of these conditions are met, the entire incident light in the fiber is reflected back inside
the fiber. This is called total internal reflection, which is the foundation upon which optical fiber is
constructed. Total internal reflection causes the light rays in the fiber to bounce off the core-cladding boundary and continue their journey towards the far end of the fiber. The light follows a zigzag path through the core of the fiber.
A fiber that meets the first condition can be easily created. In addition, the angle of incidence of the
light rays that enter the core can be controlled. Restricting the following two factors controls the
angle of incidence:
The numerical aperture of the fiber – The numerical aperture of a core is the range of angles of
incident light rays entering the fiber that will be completely reflected.
Modes – The paths which a light ray can follow when traveling down a fiber.
By controlling both conditions, the fiber run will have total internal reflection. This gives a light
wave guide that can be used for data communications.
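The critical angle mentioned in the second condition follows from Snell's law: θc = arcsin(n_cladding / n_core). The indices below are illustrative values for a core denser than its cladding, not figures from any particular fiber:

```python
import math

def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    """Angle of incidence above which total internal reflection occurs."""
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative indices: the core must be more optically dense than the cladding.
theta_c = critical_angle_deg(1.50, 1.46)
print(round(theta_c, 1))  # ~76.7 degrees

# Rays striking the core-cladding boundary at an angle of incidence greater
# than theta_c are reflected back into the core and stay inside the fiber.
```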
3.2.6 Multimode fiber
The part of an optical fiber through which light rays travel is called the core of the fiber. Light
rays can only enter the core if their angle is inside the numerical aperture of the fiber. Likewise,
once the rays have entered the core of the fiber, there are a limited number of optical paths that a
light ray can follow through the fiber. These optical paths are called modes. If the diameter of the
core of the fiber is large enough so that there are many paths that light can take through the fiber,
the fiber is called “multimode” fiber. Single-mode fiber has a much smaller core that only allows
light rays to travel along one mode inside the fiber.
Every fiber-optic cable used for networking consists of two glass fibers encased in separate sheaths.
One fiber carries transmitted data from device A to device B. The second fiber carries data from
device B to device A. The fibers are similar to two one-way streets going in opposite directions.
This provides a full-duplex communication link. Just as copper twisted-pair uses separate wire pairs
to transmit and receive, fiber-optic circuits use one fiber strand to transmit and one to receive.
Typically, these two fiber cables will be in a single outer jacket until they reach the point at which
connectors are attached.
Until the connectors are attached, there is no need for twisting or shielding, because no light escapes
when it is inside a fiber. This means there are no crosstalk issues with fiber. It is very common to
see multiple fiber pairs encased in the same cable. This allows a single cable to be run between data
closets, floors, or buildings. One cable can contain 2 to 48 or more separate fibers. With copper, one
UTP cable would have to be pulled for each circuit. Fiber can carry many more bits per second and
carry them farther than copper can.
Usually, five parts make up each fiber-optic cable. The parts are the core, the cladding, a buffer, a
strength material, and an outer jacket.
The core is the light transmission element at the center of the optical fiber. All the light signals
travel through the core. A core is typically glass made from a combination of silicon dioxide (silica)
and other elements. Multimode uses a type of glass, called graded index glass for its core. This glass
has a lower index of refraction towards the outer edge of the core. Therefore, the outer area of the
core is less optically dense than the center and light can go faster in the outer part of the core. This
design is used because a light ray following a mode that goes straight down the center of the core
does not have as far to travel as a ray following a mode that bounces around in the fiber. All rays
should arrive at the end of the fiber together. Then the receiver at the end of the fiber receives a
strong flash of light rather than a long, dim pulse.
Surrounding the core is the cladding. Cladding is also made of silica but with a lower index of
refraction than the core. Light rays traveling through the fiber core reflect off this core-to-cladding
interface as they move through the fiber by total internal reflection. Standard multimode fiber-optic
cable is the most common type of fiber-optic cable used in LANs. A standard multimode fiber-optic
cable uses an optical fiber with either a 62.5 or a 50-micron core and a 125-micron diameter
cladding. This is commonly designated as 62.5/125 or 50/125 micron optical fiber. A micron is one
millionth of a meter (1 µm).
Surrounding the cladding is a buffer material that is usually plastic. The buffer material helps shield
the core and cladding from damage. There are two basic cable designs. They are the loose-tube and
the tight-buffered cable designs. Most of the fiber used in LANs is tight-buffered multimode
cable. Tight-buffered cables have the buffering material that surrounds the cladding in direct contact
with the cladding. The most practical difference between the two designs is the applications for
which they are used. Loose-tube cable is primarily used for outside-building installations, while
tight-buffered cable is used inside buildings.
The strength material surrounds the buffer, preventing the fiber cable from being stretched when
installers pull it. The material used is often Kevlar, the same material used to produce bulletproof vests.
The final element is the outer jacket. The outer jacket surrounds the cable to protect the fiber against
abrasion, solvents, and other contaminants. The color of the outer jacket of multimode fiber is
usually orange, but occasionally another color.
Infrared Light Emitting Diodes (LEDs) or Vertical Cavity Surface Emitting Lasers (VCSELs) are
two types of light source usually used with multimode fiber; either one may be used. LEDs are a little cheaper to build and raise fewer safety concerns than lasers. However, LEDs cannot transmit light as far through the cable as lasers can. Multimode fiber (62.5/125) can carry data distances of
up to 2000 meters (6,560 ft).
3.2.7 Single-mode fiber
Single-mode fiber consists of the same parts as multimode. The outer jacket of single-mode fiber is
usually yellow. The major difference between multimode and single-mode fiber is that single-mode
allows only one mode of light to propagate through the smaller, fiber-optic core. The single-mode
core is eight to ten microns in diameter. Nine-micron cores are the most common. A 9/125 marking
on the jacket of the single-mode fiber indicates that the core fiber has a diameter of 9 microns and
the surrounding cladding is 125 microns in diameter.
An infrared laser is used as the light source in single-mode fiber. The ray of light it generates enters
the core at a 90-degree angle. As a result, the data carrying light ray pulses in single-mode fiber are
essentially transmitted in a straight line right down the middle of the core. This greatly increases
both the speed and the distance that data can be transmitted.
Because of its design, single-mode fiber is capable of higher rates of data transmission (bandwidth)
and greater cable run distances than multimode fiber. Single-mode fiber can carry LAN data up to
3000 meters. Multimode is only capable of carrying up to 2000 meters. Lasers and single-mode
fibers are more expensive than LEDs and multimode fiber. Because of these characteristics, single-
mode fiber is often used for inter-building connectivity.
Warning: The laser light used with single-mode has a longer wavelength than can be seen. The
laser is so strong that it can seriously damage eyes. Never look at the near end of a fiber that is
connected to a device at the far end. Never look into the transmit port on a NIC, switch, or router.
Remember to keep protective covers over the ends of fiber and inserted into the fiber-optic ports of
switches and routers. Be very careful.
Figure compares the relative sizes of the core and cladding for both types of fiber optic in
different sectional views. The much smaller and more refined fiber core in single-mode fiber is the
reason single-mode has a higher bandwidth and a longer cable run distance than multimode fiber. However, it costs more to manufacture.
3.2.8 Other optical components
Most of the data sent over a LAN is in the form of electrical signals. However, optical fiber links
use light to send data. Something is needed to convert the electricity to light and, at the other end of the fiber, to convert the light back to electricity. This means that a transmitter and a receiver are required.
The transmitter receives data to be transmitted from switches and routers. This data is in the form of
electrical signals. The transmitter converts the electronic signals into their equivalent light pulses.
There are two types of light sources used to encode and transmit the data through the cable:
A light emitting diode (LED) producing infrared light with wavelengths of either 850 nm or 1310 nm. These are used with multimode fiber in LANs. Lenses are used to focus the infrared light on the end of the fiber.
A laser (light amplification by stimulated emission of radiation) producing a thin beam of intense infrared light, usually with wavelengths of 1310 nm or 1550 nm. Lasers are used with single-mode fiber over the longer distances involved in WANs or campus backbones. Extra care should be exercised to prevent eye injury.
Each of these light sources can be lighted and darkened very quickly to send data (1s and 0s) at a
high number of bits per second.
At the other end of the optical fiber from the transmitter is the receiver. The receiver functions
something like the photoelectric cell in a solar powered calculator. When light strikes the receiver,
it produces electricity. The first job of the receiver is to detect a light pulse that arrives from the
fiber. Then the receiver converts the light pulse back into the original electrical signal that first
entered the transmitter at the far end of the fiber. Now the signal is again in the form of voltage
changes. The signal is ready to be sent over copper wire into any receiving electronic device such as
a computer, switch, or router. The semiconductor devices that are usually used as receivers with
fiber-optic links are called p-intrinsic-n diodes (PIN photodiodes).
PIN photodiodes are manufactured to be sensitive to 850, 1310, or 1550 nm of light that are
generated by the transmitter at the far end of the fiber. When struck by a pulse of light at the proper
wavelength, the PIN photodiode quickly produces an electric current of the proper voltage for the
network. It instantly stops producing the voltage when no light strikes the PIN photodiode. This
generates the voltage changes that represent the data 1s and 0s on a copper cable.
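The receiver's job described above can be sketched as a toy model: sample the received optical power and threshold it back into 1s and 0s. The power values and threshold below are entirely hypothetical, chosen only to show the idea:

```python
# Toy model of a fiber receiver: light above a power threshold is a 1,
# light below it (or no light) is a 0. Values are hypothetical.
samples_mw = [0.9, 0.85, 0.05, 0.02, 0.88, 0.03]  # sampled received power, mW
THRESHOLD_MW = 0.4

bits = [1 if p > THRESHOLD_MW else 0 for p in samples_mw]
print(bits)  # [1, 1, 0, 0, 1, 0]
```

A real PIN photodiode produces a continuously varying current rather than discrete samples, but the thresholding step is the same in spirit.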
Connectors are attached to the fiber ends so that the fibers can be connected to the ports on the
transmitter and receiver. The type of connector most commonly used with multimode fiber is the
Subscriber Connector (SC connector). On single-mode fiber, the Straight Tip (ST) connector is
frequently used.
In addition to the transmitters, receivers, connectors, and fibers that are always required on an
optical network, repeaters and fiber patch panels are often seen.
Repeaters are optical amplifiers that receive attenuating light pulses traveling long distances and
restore them to their original shapes, strengths, and timings. The restored signals can then be sent on
along the journey to the receiver at the far end of the fiber.
Fiber patch panels are similar to the patch panels used with copper cable. These panels increase the
flexibility of an optical network by allowing quick changes to the connection of devices like
switches or routers with various available fiber runs, or cable links.
3.2.9 Signals and noise in optical fibers
Fiber-optic cable is not affected by the sources of external noise that cause problems on copper
media because external light cannot enter the fiber except at the transmitter end. The cladding is
covered by a buffer and an outer jacket that stop light from entering or leaving the cable.
Furthermore, the transmission of light on one fiber in a cable does not generate interference that
disturbs transmission on any other fiber. This means that fiber does not have the problem with
crosstalk that copper media does. In fact, the quality of fiber-optic links is so good that the recent
standards for gigabit and ten gigabit Ethernet specify transmission distances that far exceed the
traditional two-kilometer reach of the original Ethernet. Fiber-optic transmission allows the
Ethernet protocol to be used on Metropolitan Area Networks (MANs) and Wide Area Networks (WANs).
Although fiber is the best of all the transmission media at carrying large amounts of data over long
distances, fiber is not without problems. When light travels through fiber, some of the light energy
is lost. The farther a light signal travels through a fiber, the more the signal loses strength. This
attenuation of the signal is due to several factors involving the nature of fiber itself. The most
important factor is scattering. The scattering of light in a fiber is caused by microscopic non-
uniformities (distortions) in the fiber that reflect and scatter some of the light energy.
Absorption is another cause of light energy loss. When a light ray strikes some types of chemical
impurities in a fiber, the impurities absorb part of the energy. This light energy is converted to a
small amount of heat energy. Absorption makes the light signal a little dimmer.
Another factor that causes attenuation of the light signal is manufacturing irregularities or
roughness in the core-to-cladding boundary. Power is lost from the light signal because of the less
than perfect total internal reflection in that rough area of the fiber. Any microscopic imperfections
in the thickness or symmetry of the fiber will cut down on total internal reflection and the cladding
will absorb some light energy.
Dispersion of a light flash also limits transmission distances on a fiber. Dispersion is the technical
term for the spreading of pulses of light as they travel down the fiber.
Graded index multimode fiber is designed to compensate for the different distances the various
modes of light have to travel in the large diameter core. Single-mode fiber does not have the
problem of multiple paths that the light signal can follow. However, chromatic dispersion is a
characteristic of both multimode and single-mode fiber. Chromatic dispersion occurs because
different wavelengths of light travel through glass at slightly different speeds. That is why a prism
separates the wavelengths of light. Ideally, an LED or laser light source would emit light of just
one frequency, and chromatic dispersion would not be a problem.
Unfortunately, lasers, and especially LEDs, generate a range of wavelengths, so chromatic dispersion
limits the distance that can be transmitted on a fiber. If a signal is transmitted too far, what started
as a bright pulse of light energy will be spread out, separated, and dim when it reaches the receiver.
The receiver will not be able to distinguish a one from a zero.
3.2.10 Installation, care, and testing of optical fiber
A major cause of too much attenuation in fiber-optic cable is improper installation. If the fiber is
stretched or curved too tightly, it can cause tiny cracks in the core that will scatter the light rays.
Bending the fiber in too tight a curve can change the incident angle of light rays striking the core-to-
cladding boundary. Then the incident angle of the ray will become less than the critical angle for
total internal reflection. Instead of reflecting around the bend, some light rays will refract into the
cladding and be lost.
To prevent fiber bends that are too sharp, fiber is usually pulled through a type of installed pipe
called interducting. The interducting is much stiffer than fiber and cannot be bent so sharply that
the fiber inside the interducting has too tight a curve. The interducting protects the fiber, makes it
easier to pull the fiber, and ensures that the bending radius (curve limit) of the fiber is not exceeded.
When the fiber has been pulled, the ends of the fiber must be cleaved (cut) and properly polished to
ensure that the ends are smooth. A microscope or test instrument with a built in magnifier is used
to examine the end of the fiber and verify that it is properly polished and shaped. Then the
connector is carefully attached to the fiber end. Improperly installed connectors, improper splices,
or the splicing of two cables with different core sizes will dramatically reduce the strength of a light
signal.
Once the fiber-optic cable and connectors have been installed, the connectors and the ends of the
fibers must be kept spotlessly clean. The ends of the fibers should be covered with protective covers
to prevent damage to the fiber ends. When these covers are removed prior to connecting the fiber to
a port on a switch or a router, the fiber ends must be cleaned. Clean the fiber ends with lint-free lens
tissue moistened with pure isopropyl alcohol. The fiber ports on a switch or router should also be
kept covered when not in use and cleaned with lens tissue and isopropyl alcohol before a connection
is made. Dirty ends on a fiber will cause a big drop in the amount of light that reaches the receiver.
Scattering, absorption, dispersion, improper installation, and dirty fiber ends diminish the strength
of the light signal and are referred to as fiber noise. Before a fiber-optic cable is used, it must be
tested to ensure that enough light actually reaches the receiver for it to detect the zeros and ones in
the signal.
When a fiber-optic link is being planned, the amount of signal power loss that can be tolerated must
be calculated. This is referred to as the optical link loss budget. Imagine a monthly financial budget.
After all of the expenses are subtracted from initial income, enough money must be left to get
through the month.
The decibel (dB) is the unit used to measure the amount of power loss. It tells what percent of the
power that leaves the transmitter actually enters the receiver.
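The budget arithmetic above can be sketched in Python. The conversion from decibels to a power fraction follows directly from the definition of the decibel; the budget and loss figures in the example are made up for illustration.

```python
import math

# Sketch of optical link loss budget arithmetic. db_to_power_fraction converts
# a loss in dB into the fraction of transmitted power that reaches the receiver.

def db_to_power_fraction(db):
    """dB = 10 * log10(P_out / P_in), so P_out / P_in = 10 ** (dB / 10)."""
    return 10 ** (db / 10)

def link_margin(budget_db, measured_loss_db):
    """Positive result: the measured loss fits within the link loss budget."""
    return budget_db - measured_loss_db

print(db_to_power_fraction(-3))  # ~0.5: a 3 dB loss halves the power
print(link_margin(9.0, 6.5))     # 2.5 dB of margin remains (illustrative numbers)
```

Like a monthly financial budget, a positive margin means enough signal power is left over after all the losses are subtracted.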
Testing fiber links is extremely important and records of the results of these tests must be kept.
Several types of fiber-optic test equipment are used. Two of the most important instruments are
Optical Loss Meters and Optical Time Domain Reflectometers (OTDRs).
These meters both test optical cable to ensure that the cable meets the TIA standards for fiber. They
also test to verify that the link power loss does not fall below the optical link loss budget. OTDRs
can provide much additional detailed diagnostic information about a fiber link. They can be used to
troubleshoot a link when problems occur.
 3.3.1 Wireless LAN organizations and standards
An understanding of the regulations and standards that apply to wireless technology will ensure that
deployed networks will be interoperable and in compliance. Just as in cabled networks, IEEE is the
prime issuer of standards for wireless networks. The standards have been created within the
framework of the regulations created by the Federal Communications Commission (FCC).
A key technology contained within the 802.11 standard is Direct Sequence Spread Spectrum
(DSSS). DSSS applies to wireless devices operating within a 1 to 2 Mbps range. A DSSS system
may operate at up to 11 Mbps but will not be considered compliant above 2 Mbps. The next
standard approved was 802.11b, which increased transmission capabilities to 11 Mbps. Even though
DSSS WLANs were able to interoperate with the Frequency Hopping Spread Spectrum (FHSS)
WLANs, problems developed prompting design changes by the manufacturers. In this case, IEEE’s
task was simply to create a standard that matched the manufacturer’s solution.
802.11b may also be called Wi-Fi™ or high-speed wireless and refers to DSSS systems that operate
at 1, 2, 5.5, and 11 Mbps. All 802.11b systems are backward compatible in that they also support
802.11 for 1 and 2 Mbps data rates for DSSS only. This backward compatibility is extremely
important as it allows upgrading of the wireless network without replacing the NICs or access points.
802.11b devices achieve the higher data throughput rate by using a different coding technique from
802.11, allowing for a greater amount of data to be transferred in the same time frame. The majority
of 802.11b devices still fail to match the 11 Mbps throughput and generally function in the 2 to 4
Mbps range.
802.11a covers WLAN devices operating in the 5 GHz transmission band. Using the 5 GHz range
disallows interoperability with 802.11b devices, which operate in the 2.4 GHz band. 802.11a is capable of
supplying data throughput of 54 Mbps and with proprietary technology known as "rate doubling"
has achieved 108 Mbps. In production networks, a more standard rating is 20-26 Mbps.
802.11g provides the same throughput as 802.11a but with backwards compatibility for 802.11b
devices, using Orthogonal Frequency Division Multiplexing (OFDM) modulation technology. Cisco
has developed an access point that permits 802.11b and 802.11a devices to coexist on the same
WLAN. The access point supplies ‘gateway’ services allowing these otherwise incompatible
devices to communicate.
 3.3.2 Wireless devices and topologies
A wireless network may consist of as few as two devices. The nodes could simply be desktop
workstations or notebook computers. Equipped with wireless NICs, an ‘ad hoc’ network could be
established which compares to a peer-to-peer wired network. Both devices act as servers and clients
in this environment. Although it does provide connectivity, security is at a minimum along with
throughput. Another problem with this type of network is compatibility. Many times NICs from
different manufacturers are not compatible.
To solve the problem of compatibility, an access point (AP) is commonly installed to act as a
central hub for the WLAN, operating in "infrastructure mode". The AP is hard wired to the cabled LAN to
provide Internet access and connectivity to the wired network. APs are equipped with antennae and
provide wireless connectivity over a specified area referred to as a cell. Depending on the
structural composition of the location in which the AP is installed and the size and gain of the
antennae, the size of the cell could greatly vary. Most commonly, the range will be from 91.44 to
152.4 meters (300 to 500 feet). To service larger areas, multiple access points may be installed with
a degree of overlap. The overlap permits "roaming" between cells. This is very similar to the
services provided by cellular phone companies. Overlap, on multiple AP networks, is critical to
allow for movement of devices within the WLAN. Although not addressed in the IEEE standards, a
20-30% overlap is desirable. This rate of overlap will permit roaming between cells, allowing for
the disconnect and reconnect activity to occur seamlessly without service interruption.
When a client is activated within the WLAN, it will start "listening" for a compatible device with
which to "associate". This is referred to as "scanning" and may be active or passive.
Active scanning causes a probe request to be sent from the wireless node seeking to join the
network. The probe request will contain the Service Set Identifier (SSID) of the network it wishes
to join. When an AP with the same SSID is found, the AP will issue a probe response. The
authentication and association steps are completed.
Passive scanning nodes listen for beacon management frames (beacons), which are transmitted by
the AP (infrastructure mode) or peer nodes (ad hoc). When a node receives a beacon that contains
the SSID of the network it is trying to join, an attempt is made to join the network. Passive scanning
is a continuous process and nodes may associate or disassociate with APs as signal strength varies.
 3.3.3 How wireless LANs communicate
After establishing connectivity to the WLAN, a node will pass frames in the same manner as on any
other 802.x network. WLANs do not use a standard 802.3 frame. Therefore, using the term wireless
Ethernet is misleading. There are three types of frames: control, management, and data. Only the
data frame type is similar to 802.3 frames. The payload of wireless and 802.3 frames is 1500 bytes;
however, an Ethernet frame may not exceed 1518 bytes, whereas a wireless frame could be as large as
2346 bytes. Usually the WLAN frame size will be limited to 1518 bytes as it is most commonly
connected to a wired Ethernet network.
Since radio frequency (RF) is a shared medium, collisions can occur just as they do on wired shared
medium. The major difference is that there is no method by which the source node is able to detect
that a collision occurred. For that reason WLANs use Carrier Sense Multiple Access/Collision
Avoidance (CSMA/CA). This is somewhat like Ethernet CSMA/CD.
When a source node sends a frame, the receiving node returns a positive acknowledgment (ACK).
This can cause consumption of 50% of the available bandwidth. This overhead when combined
with the collision avoidance protocol overhead reduces the actual data throughput to a maximum of
5.0 to 5.5 Mbps on an 802.11b wireless LAN rated at 11 Mbps.
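The rough arithmetic behind the throughput figure above can be sketched as follows. The 50% overhead factor is the approximation given in the text, not a measured value, and real overhead varies with frame size and contention.

```python
# Sketch: ACK frames plus collision-avoidance overhead consume roughly half
# of the rated bandwidth, so an 11 Mbps 802.11b link delivers about 5.5 Mbps
# of actual user data.

def effective_throughput(rated_mbps, overhead_fraction=0.5):
    """Approximate user data rate after protocol overhead."""
    return rated_mbps * (1 - overhead_fraction)

print(effective_throughput(11))  # 5.5
```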
Performance of the network will also be affected by signal strength and degradation in signal
quality due to distance or interference. As the signal becomes weaker, Adaptive Rate Selection
(ARS) may be invoked. The transmitting unit will drop the data rate from 11 Mbps to 5.5 Mbps,
from 5.5 Mbps to 2 Mbps or 2 Mbps to 1 Mbps.
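The rate ladder described above can be expressed as a minimal sketch. The 11/5.5/2/1 Mbps steps come from the text; the function name and interface are illustrative, not a real driver API.

```python
# Sketch of Adaptive Rate Selection (ARS): as signal quality drops, the
# transmitter steps down through the 802.11b data rates.

RATES_MBPS = [11.0, 5.5, 2.0, 1.0]

def step_down(current_rate):
    """Return the next lower 802.11b rate, or the lowest if already there."""
    i = RATES_MBPS.index(current_rate)
    return RATES_MBPS[min(i + 1, len(RATES_MBPS) - 1)]

print(step_down(11.0))  # 5.5
print(step_down(1.0))   # 1.0 (cannot drop further)
```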
3.3.4 Authentication and association
WLAN authentication occurs at Layer 2. It is the process of authenticating the device not the user.
This is a critical point to remember when considering WLAN security, troubleshooting, and overall
management.
Authentication may be a null process, as in the case of a new AP and NIC with default
configurations in place. The client will send an authentication request frame to the AP and the
frame will be accepted or rejected by the AP. The client is notified of the response via an
authentication response frame. The AP may also be configured to hand off the authentication task to
an authentication server, which would perform a more thorough credentialing process.
Association, performed after authentication, is the state that permits a client to use the services of
the AP to transfer data.
Authentication and Association types
Unauthenticated and unassociated
The node is disconnected from the network and not associated to an access point.
Authenticated and unassociated
The node has been authenticated on the network but has not yet associated with the access point.
Authenticated and associated
The node is connected to the network and able to transmit and receive data through the access point.
Methods of authentication
IEEE 802.11 lists two types of authentication processes.
The first authentication process is the open system. This is an open connectivity standard in which
only the SSID must match. This may be used in a secure or non-secure environment although the
ability of low level network ‘sniffers’ to discover the SSID of the WLAN is high.
The second process is the shared key. This process requires the use of Wired Equivalent Privacy
(WEP) encryption. WEP is a fairly simple algorithm using 64 and 128 bit keys. The AP is
configured with an encrypted key and nodes attempting to access the network through the AP must
have a matching key. Statically assigned WEP keys provide a higher level of security than the open
system but are definitely not hack proof.
The problem of unauthorized entry into WLANs is being addressed by a number of new security
solution technologies.
3.3.5 The radio wave and microwave spectrums
Computers send data signals electronically. Radio transmitters convert these electrical signals to
radio waves. Changing electric currents in the antenna of a transmitter generates the radio waves.
These radio waves radiate out in straight lines from the antenna. However, radio waves attenuate
as they move out from the transmitting antenna. In a WLAN, a radio signal measured at a distance
of just 10 meters (30 feet) from the transmitting antenna would be only 1/100th of its original
strength. Like light, radio waves can be absorbed by some materials and reflected by others. When
passing from one material, like air, into another material, like a plaster wall, radio waves are
refracted. Radio waves are also scattered and absorbed by water droplets in the air.
These qualities of radio waves are important to remember when a WLAN is being planned for a
building or for a campus. The process of evaluating a location for the installation of a WLAN is
called making a Site Survey.
Because radio signals weaken as they travel away from the transmitter, the receiver must also be
equipped with an antenna. When radio waves hit the antenna of a receiver, weak electric currents
are generated in that antenna. These electric currents, caused by the received radio waves, are equal
to the currents that originally generated the radio waves in the antenna of the transmitter. The
receiver amplifies the strength of these weak electrical signals.
In a transmitter, the electrical (data) signals from a computer or a LAN are not sent directly into the
antenna of the transmitter. Rather, these data signals are used to alter a second, strong signal called
the carrier signal.
The process of altering the carrier signal that will enter the antenna of the transmitter is called
modulation. There are three basic ways in which a radio carrier signal can be modulated. For
example, Amplitude Modulated (AM) radio stations modulate the height (amplitude) of the carrier
signal. Frequency Modulated (FM) radio stations modulate the frequency of the carrier signal as
determined by the electrical signal from the microphone. In WLANs, a third type of modulation
called phase modulation is used to superimpose the data signal onto the carrier signal that is
broadcast by the transmitter.
In this type of modulation, the data bits in the electrical signal change the phase of the carrier signal.
A receiver demodulates the carrier signal that arrives from its antenna. The receiver interprets the
phase changes of the carrier signal and reconstructs from it the original electrical data signal.
 3.3.6 Signals and noise on a WLAN
On a wired Ethernet network, it is usually a simple process to diagnose the cause of interference.
When using RF technology, many kinds of interference must be taken into consideration.
Narrowband is the opposite of spread spectrum technology. As the name implies narrowband does
not affect the entire frequency spectrum of the wireless signal. One solution to a narrowband
interference problem could be simply changing the channel that the AP is using. Actually
diagnosing the cause of narrowband interference can be a costly and time-consuming experience.
To identify the source requires a spectrum analyzer, and even a low cost model is relatively expensive.
All band interference affects the entire spectrum range. Bluetooth™ technology hops across the
entire 2.4 GHz band many times per second and can cause significant interference on an 802.11b
network. It is not uncommon to see signs in facilities that use wireless networks requesting that all
Bluetooth™ devices be shut down before entering. In homes and offices, a device that is often
overlooked as causing interference is the standard microwave oven. Leakage from a microwave of
as little as one watt into the RF spectrum can cause major network disruption. Wireless phones
operating in the 2.4 GHz spectrum can also disrupt the network.
Generally the RF signal will not be affected by even the most extreme weather conditions.
However, fog or very high moisture conditions can and do affect wireless networks. Lightning can
also charge the atmosphere and alter the path of a transmitted signal.
The first and most obvious source of a signal problem is the transmitting station and antenna type.
A higher output station will transmit the signal further and a parabolic dish antenna that
concentrates the signal will increase the transmission range.
In a SOHO environment most access points will utilize twin omnidirectional antennae that transmit
the signal in all directions thereby reducing the range of communication.
 3.3.7 Wireless security
As previously discussed in this chapter, wireless security can be difficult to achieve. Where wireless
networks exist there is little security. This has been a problem from the earliest days of WLANs.
Currently, many administrators fail to implement effective security practices.
A number of new security solutions and protocols, such as Virtual Private Networking (VPN) and
Extensible Authentication Protocol (EAP) are emerging. With EAP, the access point does not
provide authentication to the client, but passes the duties to a more sophisticated device, possibly a
dedicated server, designed for that purpose. Using an integrated server VPN technology creates a
tunnel on top of an existing protocol such as IP. This is a Layer 3 connection as opposed to the
Layer 2 connection between the AP and the sending node.
EAP-MD5 Challenge – Extensible Authentication Protocol is the earliest authentication type,
which is very similar to CHAP password protection on a wired network.
LEAP (Cisco) – Lightweight Extensible Authentication Protocol is the type primarily used on
Cisco WLAN access points. LEAP provides security during credential exchange, encrypts using
dynamic WEP keys, and supports mutual authentication.
User authentication – Allows only authorized users to connect, send and receive data over the
wireless network.
Encryption – Provides encryption services further protecting the data from intruders.
Data authentication – Ensures the integrity of the data, authenticating source and destination
VPN technology effectively closes the wireless network since an unrestricted WLAN will
automatically forward traffic between nodes that appear to be on the same wireless network.
WLANs often extend outside the perimeter of the home or office in which they are installed and
without security intruders may infiltrate the network with little effort. Conversely it takes minimal
effort on the part of the network administrator to provide low-level security to the WLAN.

Module 4: Cable Testing
Networking media is literally and physically the backbone of a network. Inferior quality of network
cabling results in network failures and unreliable performance. Copper, optical fiber, and wireless
networking media all require testing to determine the quality. These tests involve certain electrical
and mathematical concepts and terms, such as signal, wave, frequency, and noise. Understanding
this vocabulary is helpful when learning about networking, cabling, and cable testing.
The goal of the first lesson in this module is to provide some basic definitions so that the cable
testing concepts presented in the second lesson will be better understood.
The second lesson of this module describes the issues relating to the testing of media used for
physical layer connectivity in local-area networks (LANs). In order for the LAN to function
properly, the physical layer medium must meet the industry standard specifications.
Attenuation (signal deterioration) and noise (signal interference) cause problems in networks
because the data is not recognizable when it is received. Proper attachment of cable connectors and
proper cable installation are important. If standards are followed in these areas, attenuation and
noise levels are minimized.
After cable has been installed, it must be tested with quality cable testers to verify that the
specifications of the TIA/EIA standards are met. This module also describes the various important
tests that are performed.

4.1.1 Waves
A wave is energy traveling from one place to another. There are many types of waves, but all can be
described with similar vocabulary.
It is helpful to think of waves as disturbances. A bucket of water that is completely still does not
have waves, because there are no disturbances. Conversely, the ocean always has some sort of
detectable waves due to disturbances such as wind and tide.
Ocean waves can be described in terms of their height, or amplitude, which could be measured in
meters. They can also be described in terms of how frequently the waves reach the shore, using
period and frequency. The period of the waves is the amount of time between each wave, measured
in seconds. The frequency is the number of waves that reach the shore each second, measured in
Hertz. One Hertz is equal to one wave per second, or one cycle per second.
Networking professionals are specifically interested in voltage waves on copper media, light waves
in optical fiber, and alternating electric and magnetic fields called electromagnetic waves. The
amplitude of an electrical signal still represents height, but it is measured in volts instead of meters.
The period is the amount of time to complete one cycle, measured in seconds. The frequency is the
number of complete cycles per second, measured in Hertz.
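The period and frequency defined above are reciprocals of each other: frequency in Hertz is 1 divided by the period in seconds. A quick illustration:

```python
# Frequency (Hz) is the reciprocal of period (seconds): f = 1 / T.

def frequency_hz(period_seconds):
    return 1.0 / period_seconds

print(frequency_hz(0.001))  # a 1 ms period corresponds to about 1000 Hz (1 kHz)
print(frequency_hz(0.5))    # a wave every half second is 2.0 Hz
```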
If a disturbance is deliberately caused, and involves a fixed, predictable duration, it is called a pulse.
Pulses are important in electrical signals because they determine the value of the data being transmitted.
 4.1.2 Sine waves and square waves
Sine waves, or sinusoids, are graphs of mathematical functions. Sine waves have certain
characteristics. Sine waves are periodic, which means that they repeat the same pattern at regular
intervals. Sine waves are continuously varying, which means that no two adjacent points on the
graph have the same value.
Sine waves are graphical representations of many natural occurrences that change regularly over
time. Some examples of these occurrences are the distance from the earth to the sun, the distance
from the ground while riding a Ferris wheel, and the time of day that the sun rises. Since sine waves
are continuously varying, they are examples of analog waves.
Square waves, like sine waves, are periodic. However, square wave graphs do not continuously
vary with time. The wave holds one value for some time, and then suddenly changes to a different
value. This value is held for some time, and then quickly changes back to the original value. Square
waves represent digital signals, or pulses. Like all waves, square waves can be described in terms of
amplitude, period, and frequency.
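The contrast between the two wave shapes can be sketched numerically. A sine wave varies continuously, while a square wave holds one of two values; taking the sign of a sine wave of the same period is one simple (illustrative) way to produce a square wave.

```python
import math

# A continuously varying sine wave and a two-level square wave of the same
# period. Both are periodic: values one full period apart are equal.

def sine(t, freq_hz=1.0, amplitude=1.0):
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

def square(t, freq_hz=1.0, amplitude=1.0):
    return amplitude if sine(t, freq_hz) >= 0 else -amplitude

print(round(sine(0.125), 3), round(sine(1.125), 3))  # equal: one period apart
print(square(0.25), square(0.75))  # 1.0 then -1.0: the square wave's two levels
```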
 4.1.3 Exponents and logarithms
In networking, there are three important number systems:
Base 2 – binary
Base 10 – decimal
Base 16 – hexadecimal
Recall that the base of a number system refers to the number of different symbols that can occupy
one position. For example, binary numbers have only two different placeholders, 0 and 1. Decimal
numbers have 10 different placeholders, the numbers 0-9. Hexadecimal numbers have 16 different
placeholders, the numbers 0-9 and the letters A-F.
Remember that 10 x 10 can be written as 10^2, which means ten squared or ten raised to the second
power. When written this way, 10 is the base of the number and 2 is the exponent. 10 x 10 x 10 can
be written as 10^3, which means ten cubed or ten raised to the third power. The base is still 10, but
the exponent is now 3.
The base of a number system also refers to the value of each digit. The least significant digit has a
value of base^0, or one. The next digit has a value of base^1. This is equal to 2 for binary numbers, 10
for decimal numbers, and 16 for hexadecimal numbers.
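The positional values just described can be confirmed in Python; the small helper function below is illustrative.

```python
# The digit in position n (counting from 0 at the least significant digit)
# is worth base raised to the power n.

def digit_values(base, positions=4):
    return [base ** n for n in range(positions)]

print(digit_values(2))   # [1, 2, 4, 8]        binary
print(digit_values(10))  # [1, 10, 100, 1000]  decimal
print(digit_values(16))  # [1, 16, 256, 4096]  hexadecimal

# Python's int() can confirm conversions directly:
print(int("1010", 2), int("A", 16))  # both equal decimal 10
```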
Numbers with exponents are used to easily represent very large or very small numbers. It is much
easier and less error-prone to represent one billion numerically as 10^9 than as 1000000000. Many
calculations involved in cable testing involve numbers that are very large, so exponents are the
preferred format.
One way to work with the very large and very small numbers that occur in networking is to
transform the numbers according to the rule, or mathematical function, known as the logarithm.
Logarithms are referenced to the base of the number system being used. For example, base 10
logarithms are often abbreviated log.
To take the “log” of a number, use a calculator. For example, log(10^9) = 9 and log(10^-3) = -3. The
logarithm of a number that is not a power of 10 can also be taken, but the logarithm of a negative
number cannot. While the study of logarithms is beyond the scope of this course, the terminology is
commonly used in calculating decibels, a way of measuring signals on copper, optical, and wireless
media.
 4.1.4 Decibels
The decibel (dB) is a measurement unit important in describing networking signals. The decibel is
related to the exponents and logarithms described in prior sections. There are two formulas for
calculating decibels:
dB = 10 log10 (Pfinal / Pref)
dB = 20 log10 (Vfinal / Vreference)
The variables represent the following values:
dB measures the loss or gain of the power of a wave. Decibels are usually negative numbers
representing a loss in power as the wave travels, but can also be positive values representing a gain
in power if the signal is amplified.
log10 implies that the number in parentheses will be transformed using the base 10 logarithm rule.
Pfinal is the delivered power measured in Watts
Pref is the original power measured in Watts
Vfinal is the delivered voltage measured in Volts
Vreference is the original voltage measured in Volts
The first formula describes decibels in terms of power (P), and the second in terms of voltage (V).
Typically, light waves on optical fiber and radio waves in the air are measured using the power
formula. Electromagnetic waves on copper cables are measured using the voltage formula. These
formulas have several things in common.
The power formula can be used to see how much power is left in a radio wave after it has traveled
over a distance through different materials, and through various stages of electronic systems such as
a radio. To explore decibels further, try the following examples:
If Pfinal is one microWatt (1 x 10^-6 Watts) and Pref is one milliWatt (1 x 10^-3 Watts), what is the gain
or loss in decibels? Is this value positive or negative? Does the value represent a gain or a loss in
power?
If the total loss of a fiber link is -84 dB, and the source power of the original laser (Pref) is one
milliWatt (1 x 10^-3 Watts), how much power is delivered?
If two microVolts (2 x 10^-6 Volts) are measured at the end of a cable and the source voltage was one
volt, what is the gain or loss in decibels? Is this value positive or negative? Does the value represent
a gain or a loss in voltage?
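The three exercises above can be worked directly with the two decibel formulas from this section.

```python
import math

# dB = 10 * log10(Pfinal / Pref) for power; dB = 20 * log10(Vfinal / Vref) for voltage.

def power_db(p_final, p_ref):
    return 10 * math.log10(p_final / p_ref)

def voltage_db(v_final, v_ref):
    return 20 * math.log10(v_final / v_ref)

# 1. Pfinal = 1 microWatt, Pref = 1 milliWatt: negative, so a loss in power.
print(round(power_db(1e-6, 1e-3), 1))   # -30.0 dB

# 2. Total loss of -84 dB from a 1 mW source: solve the power formula for Pfinal.
print(1e-3 * 10 ** (-84 / 10))          # about 4.0e-12 W (roughly 4 picowatts)

# 3. 2 microVolts delivered from a 1 V source: negative, so a loss in voltage.
print(round(voltage_db(2e-6, 1.0), 1))  # about -114.0 dB
```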
 4.1.5 Viewing signals in time and frequency
One of the most important facts of the information age is that data symbolizing characters, words,
pictures, video, or music can be represented electrically by voltage patterns on wires and in
electronic devices. The data represented by these voltage patterns can be converted to light waves or
radio waves, and then back to voltage waves. Consider the example of an analog telephone. The
sound waves of the caller’s voice enter a microphone in the telephone. The microphone converts the
patterns of sound energy into voltage patterns of electrical energy that represent the voice.
If the voltage patterns were graphed over time, the distinct patterns representing the voice would be
displayed. An oscilloscope is an important electronic device used to view electrical signals such
as voltage waves and pulses. The x-axis on the display represents time, and the y-axis represents
voltage or current. There are usually two y-axis inputs, so two waves can be observed and measured
at the same time.
Analyzing signals using an oscilloscope is called time-domain analysis, because the x-axis or
domain of the mathematical function represents time. Engineers also use frequency-domain analysis
to study signals. In frequency-domain analysis, the x-axis represents frequency. An electronic
device called a spectrum analyzer creates graphs for frequency-domain analysis. Experiment with
this graphic by adding several signals, and try to predict what the output will look like on both the
oscilloscope and the spectrum analyzer.
Electromagnetic signals use different frequencies for transmission so that different signals do not
interfere with each other. Frequency modulation (FM) radio signals use frequencies that are
different from television or satellite signals. When listeners change the station on a radio, they are
changing the frequency that the radio is receiving.
 4.1.6 Analog and digital signals in time and frequency
To understand the complexities of networking signals and cable testing, examine how analog
signals vary with time and with frequency. First, consider a single-frequency electrical sine wave,
whose frequency can be detected by the human ear. If this signal is transmitted to a speaker, a tone
can be heard. How would a spectrum analyzer display this pure tone?
Next, imagine the combination of several sine waves. The resulting wave is more complex than a
pure sine wave. Several tones would be heard. How would a spectrum analyzer display this? The
graph of several tones shows several individual lines corresponding to the frequency of each tone.
Finally, imagine a complex signal, like a voice or a musical instrument. What would its spectrum
analyzer graph look like? If many different tones are present, a continuous spectrum of individual
tones would be represented.
 4.1.7 Noise in time and frequency
Noise is an important concept in communications systems, including LANs. While noise usually
refers to undesirable sounds, noise related to communications refers to undesirable signals. Noise
can originate from natural and technological sources, and is added to the data signals in
communications systems.
All communications systems have some amount of noise. Even though noise cannot be eliminated,
its effects can be minimized if the sources of the noise are understood. There are many possible
sources of noise:
Nearby cables which carry data signals
Radio frequency interference (RFI), which is noise from other signals being transmitted nearby
Electromagnetic interference (EMI), which is noise from nearby sources such as motors and lights
Laser noise at the transmitter or receiver of an optical signal
Noise that affects all transmission frequencies equally is called white noise. Noise that only affects
small ranges of frequencies is called narrowband interference. When detected on a radio receiver,
white noise would interfere with all radio stations. Narrowband interference would affect only a few
stations whose frequencies are close together. When detected on a LAN, white noise would affect
all data transmissions, but narrowband interference might disrupt only certain signals. If the band of
frequencies affected by the narrowband interference included all frequencies transmitted on the
LAN, then the performance of the entire LAN would be compromised.
 4.1.8 Bandwidth
Bandwidth is an extremely important concept in communications systems. Two ways of
considering bandwidth that are important for the study of LANs are analog bandwidth and digital
bandwidth.
Analog bandwidth typically refers to the frequency range of an analog electronic system. Analog
bandwidth could be used to describe the range of frequencies transmitted by a radio station or an
electronic amplifier. The unit of measurement for analog bandwidth is hertz, the same as the unit
of frequency. Examples of analog bandwidth values are 3 kHz for telephony, 20 kHz for audible
signals, 5 kHz for AM radio stations, and 200 kHz for FM radio stations.
Digital bandwidth measures how much information can flow from one place to another in a given
amount of time. The fundamental unit of measurement for digital bandwidth is bits per second
(bps). Since LANs are capable of speeds of millions of bits per second, measurement is expressed in
kilobits per second (Kbps) or megabits per second (Mbps). Physical media, current technologies,
and the laws of physics limit bandwidth.
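The idea of digital bandwidth as bits per unit of time can be made concrete with a quick calculation. The sketch below estimates how long an ideal transfer of a one-megabyte file would take at common LAN rates; it ignores protocol overhead, so real transfers are somewhat slower.

```python
def transfer_seconds(size_bytes, rate_bps):
    """Ideal transfer time: 8 bits per byte, divided by the rate in bits per second."""
    return size_bytes * 8 / rate_bps

one_mb = 1_000_000  # one megabyte, using decimal units for simplicity
print(transfer_seconds(one_mb, 10_000_000))     # 10 Mbps Ethernet -> 0.8 s
print(transfer_seconds(one_mb, 100_000_000))    # 100 Mbps Fast Ethernet -> 0.08 s
print(transfer_seconds(one_mb, 1_000_000_000))  # 1 Gbps Gigabit Ethernet -> 0.008 s
```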
During cable testing, analog bandwidth is used to determine the digital bandwidth of a copper cable.
Analog frequencies are transmitted from one end and received on the opposite end. The two signals
are then compared, and the amount of attenuation of the signal is calculated. In general, media that
will support higher analog bandwidths without high degrees of attenuation will also support higher
digital bandwidths.
 4.2.1 Signaling over copper and fiber optic cabling
On copper cable, data signals are represented by voltage levels that represent binary ones and zeros.
The voltage levels are measured with respect to a reference level of zero volts at both the
transmitter and the receiver. This reference level is called the signal ground. It is important that both
transmitting and receiving devices refer to the same zero volt reference point. When they do, they
are said to be properly grounded.
In order for the LAN to operate properly, the receiving device must be able to accurately interpret
the binary ones and zeros transmitted as voltage levels. Since current Ethernet technology supports
data rates of billions of bits per second, each bit must be recognized, even though the duration of
each bit is very small. The voltage level cannot be amplified at the receiver, nor can the bit duration
be extended in order to recognize the data. This means that as much of the original signal strength
as possible must be retained as the signal moves through the cable and passes through the
connectors. In anticipation
of ever-faster Ethernet protocols, new cable installations should be made with the best available
cable, connectors, and interconnect devices such as punch-down blocks and patch panels.
There are two basic types of copper cable: shielded and unshielded. In shielded cable, shielding
material protects the data signal from external sources of noise and from noise generated by
electrical signals within the cable.
Coaxial cable is a type of shielded cable. It consists of a solid copper conductor surrounded by
insulating material, and then braided conductive shielding. In LAN applications, the braided
shielding is electrically grounded to protect the inner conductor from external electrical noise. The
shielding also helps eliminate signal loss by keeping the transmitted signal confined to the cable.
This helps make coaxial cable less noisy than other types of copper cabling, but also makes it more
expensive. The need to ground the shielding and the bulky size of coaxial cable make it more
difficult to install than other copper cabling.
There are two types of twisted-pair cable: shielded twisted-pair (STP) and unshielded twisted-pair (UTP).
STP cable contains an outer conductive shield that is electrically grounded to insulate the signals
from external electrical noise. STP also uses inner foil shields to protect each wire pair from noise
generated by the other pairs. STP cable is sometimes called screened twisted pair (ScTP). STP cable
is more expensive, more difficult to install, and less frequently used than UTP. UTP contains no
shielding and is more susceptible to external noise but is the most frequently used because it is
inexpensive and easier to install.
Fiber optic cable is used to transmit data signals by increasing and decreasing the intensity of light
to represent binary ones and zeros. The strength of a light signal does not diminish like the
strength of an electrical signal does over an identical run length. Optical signals are not affected by
electrical noise, and optical fiber does not need to be grounded. Therefore, optical fiber is often
used between buildings and between floors within the building. As costs decrease and demand for
speed increases, optical fiber may become a more commonly used LAN media.
 4.2.2 Attenuation and insertion loss on copper media
Attenuation is the decrease in signal amplitude over the length of a link. Long cable lengths and
high signal frequencies contribute to greater signal attenuation. For this reason, attenuation on a
cable is measured by a cable tester using the highest frequencies that the cable is rated to support.
Attenuation is expressed in decibels (dB) using negative numbers. Smaller negative dB values are
an indication of better link performance.
There are several factors that contribute to attenuation. The resistance of the copper cable converts
some of the electrical energy of the signal to heat. Signal energy is also lost when it leaks through
the insulation of the cable and by impedance caused by defective connectors.
Impedance is a measurement of the resistance of the cable to alternating current (AC) and is
measured in ohms. The normal, or characteristic, impedance of a Cat5 cable is 100 ohms. If a
connector is improperly installed on Cat5, it will have a different impedance value than the cable.
This is called an impedance discontinuity or an impedance mismatch.
Impedance discontinuities cause attenuation because a portion of a transmitted signal will be
reflected back to the transmitting device rather than continuing to the receiver, much like an echo.
This effect is compounded if there are multiple discontinuities causing additional portions of the
remaining signal to be reflected back to the transmitter. When this returning reflection strikes the
first discontinuity, some of the signal rebounds in the direction of the original signal, creating
multiple echo effects. The echoes strike the receiver at different intervals making it difficult for the
receiver to accurately detect data values on the signal. This is called jitter and results in data errors.
The combination of the effects of signal attenuation and impedance discontinuities on a
communications link is called insertion loss. Proper network operation depends on constant
characteristic impedance in all cables and connectors, with no impedance discontinuities in the
entire cable system.
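One convenient property of decibels, implicit in the discussion above, is that cascaded losses expressed in dB simply add along a link. The per-component figures in this sketch are invented purely for illustration:

```python
# Hypothetical per-component losses along one channel, in dB (negative = loss)
losses_db = {
    "patch cord A": -0.4,
    "connector 1": -0.3,
    "horizontal cable": -3.5,
    "connector 2": -0.3,
    "patch cord B": -0.4,
}

# Because dB values are logarithmic, losses from cascaded stages add directly
insertion_loss_db = sum(losses_db.values())
print(insertion_loss_db)  # about -4.9 dB total for this made-up channel
```

A tester measuring insertion loss sees this combined figure at the far end, not the individual contributions.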
 4.2.3 Sources of noise on copper media
Noise is any electrical energy on the transmission cable that makes it difficult for a receiver to
interpret the data sent from the transmitter. TIA/EIA-568-B certification of a cable now requires
testing for a variety of types of noise.
Crosstalk involves the transmission of signals from one wire to a nearby wire. When voltages
change on a wire, electromagnetic energy is generated. This energy radiates outward from the
transmitting wire like a radio signal from a transmitter. Adjacent wires in the cable act like
antennas, receiving the transmitted energy, which interferes with data on those wires. Crosstalk can
also be caused by signals on separate, nearby cables. When crosstalk is caused by a signal on
another cable, it is called alien crosstalk. Crosstalk is more destructive at higher transmission
frequencies.
Cable testing instruments measure crosstalk by applying a test signal to one wire pair. The cable
tester then measures the amplitude of the unwanted crosstalk signals induced on the other wire pairs
in the cable.
Twisted-pair cable is designed to take advantage of the effects of crosstalk in order to minimize
noise. In twisted-pair cable, a pair of wires is used to transmit one signal. The wire pair is twisted so
that each wire experiences similar crosstalk. Because a noise signal on one wire will appear
identically on the other wire, this noise can be easily detected and filtered at the receiver.
Twisting one pair of wires in a cable also helps to reduce crosstalk of data or noise signals from an
adjacent wire pair. Higher categories of UTP require more twists on each wire pair in the cable to
minimize crosstalk at high transmission frequencies. When attaching connectors to the ends of UTP
cable, untwisting of wire pairs must be kept to an absolute minimum to ensure reliable LAN
communication.
 4.2.4 Types of crosstalk
There are three distinct types of crosstalk:
Near-end Crosstalk (NEXT)
Far-end Crosstalk (FEXT)
Power Sum Near-end Crosstalk (PSNEXT)
Near-end crosstalk (NEXT) is computed as the ratio of voltage amplitude between the test signal
and the crosstalk signal when measured from the same end of the link. This difference is expressed
in a negative value of decibels (dB). Low negative numbers indicate more noise, just as low
negative temperatures indicate more heat. By tradition, cable testers do not show the minus sign
indicating the negative NEXT values. A NEXT reading of 30 dB (which actually indicates -30 dB)
indicates less NEXT noise and a better cable than does a NEXT reading of 10 dB.
NEXT needs to be measured from each pair to each other pair in a UTP link, and from both ends of
the link. To shorten test times, some cable test instruments allow the user to test the NEXT
performance of a link by using larger frequency step sizes than specified by the TIA/EIA standard.
The resulting measurements may not comply with TIA/EIA-568-B, and may overlook link faults.
To verify proper link performance, NEXT should be measured from both ends of the link with a
high-quality test instrument. This is also a requirement for complete compliance with high-speed
cable specifications.
Due to attenuation, crosstalk occurring further away from the transmitter creates less noise on a
cable than NEXT. This is called far-end crosstalk, or FEXT. The noise caused by FEXT still
travels back to the source, but it is attenuated as it returns. Thus, FEXT is not as significant a
problem as NEXT.
Power Sum NEXT (PSNEXT) measures the cumulative effect of NEXT from all wire pairs in the
cable. PSNEXT is computed for each wire pair based on the NEXT effects of the other three
pairs. The combined effect of crosstalk from multiple simultaneous transmission sources can be
very detrimental to the signal. TIA/EIA-568-B certification now requires this PSNEXT test.
Some Ethernet standards such as 10BASE-T and 100BASE-TX receive data from only one wire
pair in each direction. However, for newer technologies such as 1000BASE-T that receive data
simultaneously from multiple pairs in the same direction, power sum measurements are very
important tests.
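PSNEXT combines the pair-to-pair NEXT values on a power basis rather than by adding dB figures directly: each reading is converted back to a linear power ratio, the ratios are summed, and the sum is converted back to decibels. A minimal sketch of that calculation; the three NEXT readings are invented, and follow the tester convention of positive numbers meaning less noise:

```python
import math

def psnext_db(next_values_db):
    """Power-sum NEXT: convert each pair-to-pair NEXT reading to a linear
    power ratio, add the ratios, and convert the sum back to decibels."""
    linear_sum = sum(10 ** (-n / 10) for n in next_values_db)
    return -10 * math.log10(linear_sum)

# Hypothetical NEXT readings measured against one pair from the other three pairs
readings = [40.0, 42.0, 45.0]
print(psnext_db(readings))  # ~37.1 dB: a few dB worse than the worst single reading
```

Notice that the power-sum result is lower (noisier) than any individual reading, which is why PSNEXT matters for technologies that transmit on several pairs at once.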
 4.2.5 Cable testing standards
The TIA/EIA-568-B standard specifies ten tests that a copper cable must pass if it will be used for
modern, high-speed Ethernet LANs. All cable links should be tested to the maximum rating that
applies for the category of cable being installed.
The ten primary test parameters that must be verified for a cable link to meet TIA/EIA standards are:
Wire map
Insertion loss
Near-end crosstalk (NEXT)
Power sum near-end crosstalk (PSNEXT)
Equal-level far-end crosstalk (ELFEXT)
Power sum equal-level far-end crosstalk (PSELFEXT)
Return loss
Propagation delay
Cable length
Delay skew
The Ethernet standard specifies that each of the pins on an RJ-45 connector have a particular
purpose. A NIC transmits signals on pins 1 and 2, and it receives signals on pins 3 and 6. The
wires in UTP cable must be connected to the proper pins at each end of a cable. The wire map test
ensures that no open or short circuits exist on the cable. An open circuit occurs if the wire does not
attach properly at the connector. A short circuit occurs if two wires are connected to each other.
The wire map test also verifies that all eight wires are connected to the correct pins on both ends of
the cable. There are several different wiring faults that the wire map test can detect. The reversed-
pair fault occurs when a wire pair is correctly installed on one connector, but reversed on the other
connector. If the orange striped wire is on pin 1 and the orange wire on pin 2 at one end, but
reversed at the other end, then the cable has a reversed-pair fault. This example is shown in the
graphic.
A split-pair wiring fault occurs when two wires from different wire pairs are connected to the wrong
pins on both ends of the cable. Look carefully at the pin numbers in the graphic to detect the wiring
fault. A split pair creates two transmit or receive pairs, each with two wires that are not twisted
together.
Transposed-pair wiring faults occur when a wire pair is connected to completely different pins at
both ends. Contrast this with a reversed-pair, where the same pair of pins is used at both ends.
Transposed pairs also occur when two different color codes on punchdown blocks, representing
T568-A and T568-B, are used at different locations on the same link.
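The wiring faults described above can be expressed as simple comparisons of where a pair's two wires land at each end of the cable. This is a hypothetical sketch of that logic; real testers determine the map electrically, and the tuple data model here is invented for illustration:

```python
def classify_pair(end1, end2):
    """Classify how one wire pair is terminated at the two ends of a cable.
    end1 and end2 are (pin, pin) tuples giving where the pair's wires land."""
    if end1 == end2:
        return "straight-through"
    if end1 == (end2[1], end2[0]):
        return "reversed pair"    # same pins, but the two wires swap at one end
    if set(end1) != set(end2):
        return "transposed pair"  # the pair lands on entirely different pins
    return "unknown fault"

# Pair on pins 1 and 2 at both ends, same order: correct wiring
print(classify_pair((1, 2), (1, 2)))  # straight-through
# Same pins but swapped at the far end: reversed-pair fault
print(classify_pair((1, 2), (2, 1)))  # reversed pair
# Pair moved to pins 3 and 6 at the far end: transposed-pair fault
print(classify_pair((1, 2), (3, 6)))  # transposed pair
```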
 4.2.6 Other test parameters
The combination of the effects of signal attenuation and impedance discontinuities on a
communications link is called insertion loss. Insertion loss is measured in decibels at the far end of
the cable. The TIA/EIA standard requires that a cable and its connectors pass an insertion loss test
before the cable can be used as a communications link in a LAN.
Crosstalk is measured in four separate tests. A cable tester measures NEXT by applying a test signal
to one cable pair and measuring the amplitude of the crosstalk signals received by the other cable
pairs. The NEXT value, expressed in decibels, is computed as the difference in amplitude between
the test signal and the crosstalk signal measured at the same end of the cable. Remember, because
the number of decibels that the tester displays is a negative number, the larger the number, the
lower the NEXT on the wire pair. As previously mentioned, the PSNEXT test is actually a
calculation based on combined NEXT effects.
The equal-level far-end crosstalk (ELFEXT) test measures FEXT. Pair-to-pair ELFEXT is
expressed in dB as the difference between the measured FEXT and the insertion loss of the wire
pair whose signal is disturbed by the FEXT. ELFEXT is an important measurement in Ethernet
networks using 1000BASE-T technologies. Power sum equal-level far-end crosstalk (PSELFEXT)
is the combined effect of ELFEXT from all wire pairs.
Return loss is a measure in decibels of reflections that are caused by the impedance discontinuities
at all locations along the link. Recall that the main impact of return loss is not on loss of signal
strength. The significant problem is that signal echoes caused by the reflections from the impedance
discontinuities will strike the receiver at different intervals causing signal jitter.
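The reflections behind return loss come from impedance mismatches. In the standard transmission-line model (not spelled out in the text above, but consistent with it), the reflection coefficient at a discontinuity is (ZL - Z0)/(ZL + Z0), and return loss is -20 log10 of its magnitude. A sketch, using the 100 ohm characteristic impedance of Cat5 mentioned earlier:

```python
import math

def return_loss_db(z_load, z0=100.0):
    """Return loss in dB at a discontinuity where a cable of characteristic
    impedance z0 (100 ohms for Cat5) meets an impedance z_load."""
    gamma = abs((z_load - z0) / (z_load + z0))  # reflection coefficient
    return -20 * math.log10(gamma)

# A badly installed connector presenting 120 ohms on a 100 ohm link
print(return_loss_db(120.0))  # ~20.8 dB; a higher return loss means a smaller echo
```

A perfectly matched load would reflect nothing (infinite return loss), so the function is only meaningful for an actual mismatch.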
4.2.7 Time-based parameters
Propagation delay is a simple measurement of how long it takes for a signal to travel along the cable
being tested. The delay in a wire pair depends on its length, twist rate, and electrical properties.
Delays are measured in hundreds of nanoseconds. One nanosecond is one-billionth of a second, or
0.000000001 second. The TIA/EIA-568-B standard sets a limit for propagation delay for the
various categories of UTP.
Propagation delay measurements are the basis of the cable length measurement. TIA/EIA-568-B-1
specifies that the physical length of the link shall be calculated using the wire pair with the shortest
electrical delay. Testers measure the length of the wire based on the electrical delay as measured by
a Time Domain Reflectometry (TDR) test, not by the physical length of the cable jacket. Since the
wires inside the cable are twisted, signals actually travel farther than the physical length of the
cable. When a cable tester makes a TDR measurement, it sends a pulse signal down a wire pair and
measures the amount of time required for the pulse to return on the same wire pair.
The TDR test is used not only to determine length, but also to identify the distance to wiring faults
such as shorts and opens. When the pulse encounters an open, short, or poor connection, all or part
of the pulse energy is reflected back to the tester. The tester can then calculate the approximate distance to the
wiring fault. The approximate distance can be helpful in locating a faulty connection point along a
cable run, such as a wall jack.
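The TDR distance estimate follows from the round-trip time: the pulse travels at NVP times the speed of light, where NVP (nominal velocity of propagation) is a property of the cable, and the one-way distance is half the round trip. The NVP value in this sketch is an assumed, typical figure for UTP:

```python
C = 299_792_458.0  # speed of light in a vacuum, metres per second

def tdr_distance_m(round_trip_seconds, nvp=0.7):
    """Distance in metres to a reflection, from the measured round-trip time.
    nvp is an assumed nominal velocity of propagation for the cable."""
    return round_trip_seconds * nvp * C / 2

# A reflection returning after 500 ns on typical UTP sits roughly 52 m away
print(tdr_distance_m(500e-9))  # ~52.5
```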
The propagation delays of different wire pairs in a single cable can differ slightly because of
differences in the number of twists and electrical properties of each wire pair. The delay difference
between pairs is called delay skew. Delay skew is a critical parameter for high-speed networks in
which data is simultaneously transmitted over multiple wire pairs, such as 1000BASE-T Ethernet. If
the delay skew between the pairs is too great, the bits arrive at different times and the data cannot be
properly reassembled. Even though a cable link may not be intended for this type of data
transmission, testing for delay skew helps ensure that the link will support future upgrades to high-
speed networks.
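Delay skew is simply the spread between the fastest and the slowest wire pair. A sketch with invented per-pair delays, checked against an assumed pass limit (the TIA/EIA-568-B figure is on the order of tens of nanoseconds over a full link; 50 ns is used here for illustration only):

```python
def delay_skew_ns(pair_delays_ns):
    """Delay skew: difference between the slowest and fastest wire pair."""
    return max(pair_delays_ns) - min(pair_delays_ns)

# Hypothetical propagation delays for the four pairs of one link, in nanoseconds
delays = {"pair 1-2": 498.0, "pair 3-6": 512.0, "pair 4-5": 505.0, "pair 7-8": 501.0}
skew = delay_skew_ns(delays.values())
print(skew)  # 14.0 ns between the slowest and fastest pair

LIMIT_NS = 50.0  # assumed skew limit, for illustration only
print(skew <= LIMIT_NS)  # True: this link would pass the skew test
```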
All cable links in a LAN must pass all of the tests previously mentioned as specified in the
TIA/EIA-568-B standard. These tests ensure that the cable links will function reliably at high
speeds and frequencies. Cable tests should be performed when the cable is installed and afterward
on a regular basis to ensure that LAN cabling meets industry standards. High quality cable test
instruments should be correctly used to ensure that the tests are accurate. Test results should also be
carefully documented.
 4.2.8 Testing optical fiber
A fiber link consists of two separate glass fibers functioning as independent data pathways. One
fiber carries transmitted signals in one direction, while the second carries signals in the opposite
direction. Each glass fiber is surrounded by a sheath that light cannot pass through, so there are no
crosstalk problems on fiber optic cable. External electromagnetic interference or noise has no effect
on fiber cabling. Attenuation does occur on fiber links, but to a lesser extent than on copper cabling.
Fiber links are subject to the optical equivalent of UTP impedance discontinuities. When light
encounters an optical discontinuity, some of the light signal is reflected back in the opposite
direction with only a fraction of the original light signal continuing down the fiber towards the
receiver. This results in a reduced amount of light energy arriving at the receiver, making signal
recognition difficult. Just as with UTP cable, improperly installed connectors are the main cause of
light reflection and signal strength loss in optical fiber.
Because noise is not an issue when transmitting on optical fiber, the main concern with a fiber link
is the strength of the light signal that arrives at the receiver. If attenuation weakens the light signal
at the receiver, then data errors will result. Testing fiber optic cable primarily involves shining a
light down the fiber and measuring whether a sufficient amount of light reaches the receiver.
On a fiber optic link, the acceptable amount of signal power loss that can occur without dropping
below the requirements of the receiver must be calculated. This calculation is referred to as the
optical link loss budget. A fiber test instrument checks whether the optical link loss budget has been
exceeded. If the fiber fails the test, the cable test instrument should indicate where the optical
discontinuities occur along the length of the cable link. Usually, the problem is one or more
improperly attached connectors. The cable test instrument will indicate the location of the faulty
connections that must be replaced. When the faults are corrected, the cable must be retested.
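The loss-budget check compares the total link loss against the margin between the transmitter's launch power and the receiver's sensitivity, both expressed in dBm. A minimal sketch; the power figures are invented, loosely in the range of a short-wavelength gigabit optic:

```python
def within_loss_budget(tx_power_dbm, rx_sensitivity_dbm, link_loss_db):
    """True if the light arriving at the receiver stays above its sensitivity.
    link_loss_db is the total fiber plus connector loss, as a positive number."""
    received_dbm = tx_power_dbm - link_loss_db
    return received_dbm >= rx_sensitivity_dbm

# Hypothetical figures: -4 dBm source, receiver sensitive down to -17 dBm
print(within_loss_budget(-4.0, -17.0, 10.5))  # True: 2.5 dB of margin remains
print(within_loss_budget(-4.0, -17.0, 14.0))  # False: the budget is exceeded
```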
 4.2.9 A new standard
On June 20, 2002, the Category 6 (or Cat 6) addition to the TIA-568 standard was published. The
official title of the standard is ANSI/TIA/EIA-568-B.2-1. This new standard specifies the original
set of performance parameters that need to be tested for Ethernet cabling as well as the passing
scores for each of these tests. Cables certified as Cat 6 cable must pass all ten tests.
Although the Cat 6 tests are essentially the same as those specified by the Cat 5 standard, Cat 6
cable must pass the tests with higher scores to be certified. Cat 6 cable must be capable of carrying
frequencies up to 250 MHz and must have lower levels of crosstalk and return loss.
A quality cable tester similar to the Fluke DSP-4000 series or Fluke OMNIScanner2 can perform all
the test measurements required for Cat 5, Cat 5e, and Cat 6 cable certifications of both permanent
links and channel links. Figure shows the Fluke DSP-LIA013 Channel/Traffic Adapter for Cat 5e.

Module 5: Cabling LANs and WANs
Even though each local-area network is unique, there are many design aspects that are common to
all LANs. For example, most LANs follow the same standards and use the same components. This
module presents information on elements of Ethernet LANs and common LAN devices.
There are several types of wide-area network (WAN) connections available today. They range from dial-up
to broadband access, and differ in bandwidth, cost, and required equipment. This module presents
information on the various types of WAN connections.
 5.1.1 LAN physical layer
Various symbols are used to represent media types. Token Ring is represented by a circle. Fiber
Distributed Data Interface (FDDI) is represented by two concentric circles and the Ethernet symbol
is represented by a straight line. Serial connections are represented by a lightning bolt.
Each computer network can be built with many different media types. The function of media is to
carry a flow of information through a LAN. Wireless LANs use the atmosphere, or space, as the
medium. Other networking media confine network signals to a wire, cable, or fiber. Networking
media are considered Layer 1, or physical layer, components of LANs.
Each media type has advantages and disadvantages. Some of the advantage and disadvantage comparisons concern:
Cable length
Ease of installation
Susceptibility to interference
Coaxial cable, optical fiber, and even free space can carry network signals. However, the principal
medium that will be studied is Category 5 unshielded twisted-pair cable (Cat 5 UTP) which
includes the Cat 5e family of cables.
Many different topologies and physical media can support LANs. Figure shows a subset
of physical layer implementations that can be deployed to support Ethernet.
 5.1.2 Ethernet in the campus
Ethernet is the most widely used LAN technology. Ethernet was first implemented by the Digital,
Intel, and Xerox group, referred to as DIX. DIX created and implemented the first Ethernet LAN
specification, which was used as the basis for the Institute of Electrical and Electronics Engineers
(IEEE) 802.3 specification, released in 1980. Later, the IEEE extended 802.3 to three new
committees known as 802.3u (Fast Ethernet), 802.3z (Gigabit Ethernet over Fiber), and 802.3ab
(Gigabit Ethernet over UTP).
Network requirements might dictate that an upgrade to one of the faster Ethernet topologies be
used. Most Ethernet networks support speeds of 10 Mbps and 100 Mbps.
The new generation of multimedia, imaging, and database products can easily overwhelm a
network running at traditional Ethernet speeds of 10 and 100 Mbps. Network administrators may
consider providing Gigabit Ethernet from the backbone to the end user. Costs for installing new
cabling and adapters can make this prohibitive. Gigabit Ethernet to the desktop is not a standard
installation at this time.
In general, Ethernet technologies can be used in a campus network in several different ways:
An Ethernet speed of 10 Mbps can be used at the user level to provide good performance. Clients or
servers that require more bandwidth can use 100-Mbps Ethernet.
Fast Ethernet is used as the link between user and network devices. It can support the combination
of all traffic from each Ethernet segment.
To enhance client-server performance across the campus network and avoid bottlenecks, Fast
Ethernet can be used to connect enterprise servers.
Fast Ethernet or Gigabit Ethernet, where affordable, should be implemented between backbone devices.
 5.1.3 Ethernet media and connector requirements
Before selecting an Ethernet implementation, consider the media and connector requirements for
each implementation. Also, consider the level of performance needed by the network.
The cables and connector specifications used to support Ethernet implementations are derived from
the Electronic Industries Association and the Telecommunications Industry Association (EIA/TIA)
standards body. The categories of cabling defined for Ethernet are derived from the EIA/TIA-568
(SP-2840) Commercial Building Telecommunications Wiring Standards.
Figure compares the cable and connector specifications for the most popular Ethernet
implementations. It is important to note the difference in the media used for 10-Mbps Ethernet
versus 100-Mbps Ethernet. Networks with a combination of 10- and 100-Mbps traffic use UTP
Category 5 to support Fast Ethernet.
 5.1.4 Connection media
Figure illustrates the different connection types used by each physical layer implementation. The
registered jack (RJ-45) connector and jack are the most common. RJ-45 connectors are discussed in
more detail in the next section.
In some cases the type of connector on a network interface card (NIC) does not match the media
that it needs to connect to. As shown in Figure , an interface may exist for the 15-pin attachment
unit interface (AUI) connector. The AUI connector allows different media to connect when used
with the appropriate transceiver. A transceiver is an adapter that converts one type of connection to
another. Typically, a transceiver converts an AUI to RJ-45, coax, or fiber optic connector. On
10BASE5 Ethernet, or Thicknet, a short cable is used to connect the AUI with a transceiver on the
main cable.
 5.1.5 UTP implementation
EIA/TIA specifies an RJ-45 connector for UTP cable. The letters RJ stand for registered jack, and
the number 45 refers to a specific wiring sequence. The RJ-45 transparent end connector shows
eight colored wires. Four of the wires carry the voltage and are considered “tip” (T1 through T4).
The other four wires are grounded and are called “ring” (R1 through R4). Tip and ring are terms
that originated in the early days of the telephone. Today, these terms refer to the positive and the
negative wire in a pair. The wires in the first pair in a cable or a connector are designated as T1 and
R1. The second pair is T2 and R2, and so on.
The RJ-45 connector is the male component, crimped on the end of the cable. When looking at the
male connector from the front, the pin locations are numbered 8 on the left down to 1 on the right as
seen in Figure .
The jack is the female component in a network device, wall outlet, or patch panel as seen in
Figure . Figure shows the punch-down connections at the back of the jack where the Ethernet UTP
cable connects.
For electricity to run between the connector and the jack, the order of the wires must follow
EIA/TIA-T568-A or T568-B standards, as shown in Figure . Identify the correct EIA/TIA
category of cable to use for a connecting device by determining what standard is being used by the
jack on the network device. In addition to identifying the correct EIA/TIA category of cable,
determine whether to use a straight-through cable or a crossover cable.
If the two RJ-45 connectors of a cable are held side by side in the same orientation, the colored
wires will be seen in each. If the order of the colored wires is the same at each end, then the cable is
straight-through as seen in Figure .
With crossover, the RJ-45 connectors on both ends show that some of the wires on one side of the
cable are crossed to a different pin on the other side of the cable. Figure shows that pins 1 and 2
on one connector connect respectively to pins 3 and 6 on the other.
Figure shows the guidelines for what type of cable to use when interconnecting Cisco devices.
Use straight-through cables for the following cabling:
Switch to router
Switch to PC or server
Hub to PC or server
Use crossover cables for the following cabling:
Switch to switch
Switch to hub
Hub to hub
Router to router
PC to PC
Router to PC
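The guidelines above follow a simple pattern: devices that transmit on the same pin pair need a crossover cable, while unlike devices need a straight-through cable. The helper below is a hypothetical sketch of that rule (the function name and groupings are our own, not Cisco's):

```python
# Hypothetical helper encoding the cabling guidelines above.
# Routers, PCs, and servers transmit on one pin pair; switches and hubs on the other.
LIKE_DTE = {"pc", "server", "router"}
LIKE_DCE = {"switch", "hub"}

def cable_for(dev_a: str, dev_b: str) -> str:
    """Return the cable type for a link between two device types."""
    a, b = dev_a.lower(), dev_b.lower()
    same_group = ({a, b} <= LIKE_DTE) or ({a, b} <= LIKE_DCE)
    return "crossover" if same_group else "straight-through"

print(cable_for("switch", "router"))  # straight-through
print(cable_for("hub", "hub"))        # crossover
print(cable_for("router", "pc"))      # crossover
```

Every pairing in the two lists above falls out of this one rule.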
Figure illustrates how a variety of cable types may be required in a given network. The category
of UTP cable required is based on the type of Ethernet that is chosen.
 5.1.6 Repeaters
The term repeater comes from the early days of long distance communication. The term describes
the situation when a person on one hill would repeat the signal that was just received from the
person on the previous hill. The process would repeat until the message arrived at its destination.
Telegraph, telephone, microwave, and optical communications use repeaters to strengthen signals
sent over long distances.
A repeater receives a signal, regenerates it, and passes it on. It can regenerate and retime network
signals at the bit level to allow them to travel a longer distance on the media. The Four Repeater
Rule for 10-Mbps Ethernet should be used as a standard when extending LAN segments. This rule
states that no more than four repeaters can be used between hosts on a LAN. This rule is used to
limit latency added to frame travel by each repeater. Too much latency on the LAN increases the
number of late collisions and makes the LAN less efficient.
 5.1.7 Hubs
Hubs are actually multiport repeaters. In many cases, the difference between the two devices is the
number of ports that each provides. While a typical repeater has just two ports, a hub generally has
from four to twenty-four ports. Hubs are most commonly used in Ethernet 10BASE-T or
100BASE-T networks, although there are other network architectures that use them as well.
Using a hub changes the network topology from a linear bus, where each device plugs directly into
the wire, to a star. With hubs, data arriving over the cables to a hub port is electrically repeated on
all the other ports connected to the same network segment, except for the port on which the data
was sent.
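Hub behavior can be modeled in a few lines. This is a simplified sketch of our own, not a description of any particular product: a frame arriving on one port is electrically repeated out every other port, regardless of its destination.

```python
# Minimal model of hub repeating: flood to all ports except the ingress port.
def hub_repeat(ingress_port: int, all_ports: list[int]) -> list[int]:
    """Return the ports a frame is repeated out of."""
    return [p for p in all_ports if p != ingress_port]

print(hub_repeat(1, [1, 2, 3, 4]))  # [2, 3, 4]
```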
Hubs come in three basic types:
Passive – A passive hub serves as a physical connection point only. It does not manipulate or view
the traffic that crosses it. It does not boost or clean the signal. A passive hub is used only to share
the physical media. As such, the passive hub does not need electrical power.
Active – An active hub must be plugged into an electrical outlet because it needs power to amplify
the incoming signal before passing it out to the other ports.
Intelligent – Intelligent hubs are sometimes called smart hubs. These devices basically function as
active hubs, but also include a microprocessor chip and diagnostic capabilities. Intelligent hubs are
more expensive than active hubs, but are useful in troubleshooting situations.
Devices attached to a hub receive all traffic traveling through the hub. The more devices there are
attached to the hub, the more likely there will be collisions. A collision occurs when two or more
workstations send data over the network wire at the same time. All data is corrupted when that
occurs. Every device connected to the same network segment is said to be a member of a collision
domain. Sometimes hubs are called concentrators, because hubs serve as a central connection point for an
Ethernet LAN.
 5.1.8 Wireless
A wireless network can be created with much less cabling than other networks. Wireless signals are
electromagnetic waves that travel through the air. Wireless networks use Radio Frequency (RF),
laser, infrared (IR), or satellite/microwaves to carry signals from one computer to another without a
permanent cable connection. The only permanent cabling can be to the access points for the
network. Workstations within the range of the wireless network can be moved easily without
connecting and reconnecting network cabling.
A common application of wireless data communication is for mobile use. Some examples of mobile
use include commuters, airplanes, satellites, remote space probes, space shuttles, and space stations.
At the core of wireless communication are devices called transmitters and receivers. The transmitter
converts source data to electromagnetic (EM) waves that are passed to the receiver. The receiver
then converts these electromagnetic waves back into data for the destination. For two-way
communication, each device requires a transmitter and a receiver. Many networking device
manufacturers build the transmitter and receiver into a single unit called a transceiver or wireless
network card. All devices in wireless LANs (WLANs) must have the appropriate wireless network
card installed.
The two most common wireless technologies used for networking are IR and RF. IR technology has
its weaknesses. Workstations and digital devices must be in the line of sight of the transmitter in
order to operate. An infrared-based network suits environments where all the digital devices that
require network connectivity are in one room. IR networking technology can be installed quickly,
but the data signals can be weakened or obstructed by people walking across the room or by
moisture in the air. There are, however, new IR technologies being developed that can work out of
sight. Radio Frequency technology allows devices to be in different rooms or even buildings. The limited
range of radio signals restricts the use of this kind of network. RF technology can be on single or
multiple frequencies. A single radio frequency is subject to outside interference and geographic
obstructions. Furthermore, a single frequency is easily monitored by others, which makes the
transmissions of data insecure. Spread spectrum avoids the problem of insecure data transmission
by using multiple frequencies to increase the immunity to noise and to make it difficult for outsiders
to intercept data transmissions.
Two approaches currently being used to implement spread spectrum for WLAN transmissions are
Frequency Hopping Spread Spectrum (FHSS) and Direct Sequence Spread Spectrum (DSSS). The
technical details of how these technologies work are beyond the scope of this course.
 5.1.9 Bridges
There are times when it is necessary to break up a large LAN into smaller, more easily managed
segments. This decreases the amount of traffic on a single LAN and can extend the geographical
area past what a single LAN can support. The devices that are used to connect network segments
together include bridges, switches, routers, and gateways. Switches and bridges operate at the Data
Link layer of the OSI model. The function of the bridge is to make intelligent decisions about
whether or not to pass signals on to the next segment of a network.
When a bridge receives a frame on the network, the destination MAC address is looked up in the
bridge table to determine whether to filter, flood, or copy the frame onto another segment. This
decision process occurs as follows:
If the destination device is on the same segment as the frame, the bridge blocks the frame from
going on to other segments. This process is known as filtering.
If the destination device is on a different segment, the bridge forwards the frame to the appropriate segment.
If the destination address is unknown to the bridge, the bridge forwards the frame to all segments
except the one on which it was received. This process is known as flooding.
If placed strategically, a bridge can greatly improve network performance.
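The filter/forward/flood decision above can be sketched directly in code. This is an illustrative simplification of our own (real bridges also learn and age table entries):

```python
# Sketch of the bridge decision process: filter, forward, or flood a frame
# based on a lookup of the destination MAC address in the bridge table.
def bridge_decide(table: dict, dst_mac: str, ingress_segment: str,
                  segments: list[str]) -> list[str]:
    """Return the segments the frame is sent to (empty list = filtered)."""
    dst_segment = table.get(dst_mac)
    if dst_segment is None:                 # unknown destination: flood
        return [s for s in segments if s != ingress_segment]
    if dst_segment == ingress_segment:      # same segment: filter
        return []
    return [dst_segment]                    # different segment: forward

table = {"aa:aa:aa:aa:aa:aa": "seg1", "bb:bb:bb:bb:bb:bb": "seg2"}
segs = ["seg1", "seg2", "seg3"]
print(bridge_decide(table, "bb:bb:bb:bb:bb:bb", "seg1", segs))  # ['seg2']
print(bridge_decide(table, "aa:aa:aa:aa:aa:aa", "seg1", segs))  # []
print(bridge_decide(table, "cc:cc:cc:cc:cc:cc", "seg1", segs))  # ['seg2', 'seg3']
```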
 5.1.10 Switches
A switch is sometimes described as a multiport bridge. While a typical bridge may have just two
ports linking two network segments, the switch can have multiple ports depending on how many
network segments are to be linked. Like bridges, switches learn certain information about the data
packets that are received from various computers on the network. Switches use this information to
build forwarding tables to determine the destination of data being sent by one computer to another
computer on the network.
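The learning step mentioned above can be illustrated as follows. This is a hypothetical sketch, not Cisco switch code: the switch records the source MAC address of each received frame against the port it arrived on, building the forwarding table one frame at a time.

```python
# Illustrative sketch of forwarding-table learning on a switch.
def learn(table: dict, src_mac: str, port: int) -> None:
    """Associate the frame's source MAC address with its arrival port."""
    table[src_mac] = port

table = {}
learn(table, "aa:aa:aa:aa:aa:aa", 1)   # frame from host A arrives on port 1
learn(table, "bb:bb:bb:bb:bb:bb", 7)   # frame from host B arrives on port 7
print(table)  # {'aa:aa:aa:aa:aa:aa': 1, 'bb:bb:bb:bb:bb:bb': 7}
```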
Although there are some similarities between the two, a switch is a more sophisticated device than a
bridge. A bridge determines whether the frame should be forwarded to the other network segment
based on the destination MAC address. A switch has many ports with many network segments
connected to them. A switch chooses the port to which the destination device or workstation is
connected. Ethernet switches are becoming popular connectivity solutions because, like bridges,
switches improve network performance by improving speed and bandwidth.
Switching is a technology that alleviates congestion in Ethernet LANs by reducing the traffic and
increasing the bandwidth. Switches can easily replace hubs because switches work with existing
cable infrastructures. This improves performance with a minimum of intrusion into an existing
network. In data communications today, all switching equipment performs two basic operations. The first
operation is called switching data frames. Switching data frames is the process by which a frame is
received on an input medium and then transmitted to an output medium. The second is the
maintenance of switching operations where switches build and maintain switching tables and search
for loops.
Switches operate at much higher speeds than bridges and can support new functionality, such as
virtual LANs.
An Ethernet switch has many benefits. One benefit is that an Ethernet switch allows many users to
communicate in parallel through the use of virtual circuits and dedicated network segments in a
virtually collision-free environment. This maximizes the bandwidth available on the shared
medium. Another benefit is that moving to a switched LAN environment is very cost effective
because existing hardware and cabling can be reused.
 5.1.11 Host connectivity
The function of a NIC is to connect a host device to the network medium. A NIC is a printed
circuit board that fits into the expansion slot on the motherboard or peripheral device of a computer.
The NIC is also referred to as a network adapter. On laptop or notebook computers a NIC is the size
of a credit card.
NICs are considered Layer 2 devices because each NIC carries a unique code called a MAC
address. This address is used to control data communication for the host on the network. More will
be learned about the MAC address later. As the name implies, the network interface card controls
host access to the medium.
In some cases the type of connector on the NIC does not match the type of media that needs to be
connected to it. A good example is a Cisco 2500 router. On the router an AUI connector is seen.
That AUI connector needs to connect to a UTP Cat 5 Ethernet cable. To do this a
transmitter/receiver, also known as a transceiver, is used. A transceiver converts one type of signal
or connector to another. For example, a transceiver can connect a 15-pin AUI interface to an RJ-45
jack. It is considered a Layer 1 device because it only works with bits, and not with any address
information or higher-level protocols.
In diagrams, NICs have no standardized symbol. It is implied that, when networking devices are
attached to network media, there is a NIC or NIC-like device present. Wherever a dot is seen on a
topology map, it represents either a NIC, or an interface or port that acts like a NIC.
 5.1.12 Peer-to-peer
By using LAN and WAN technologies, many computers are interconnected to provide services to
their users. To accomplish this, networked computers take on different roles or functions in relation
to each other. Some types of applications require computers to function as equal partners. Other
types of applications distribute their work so that one computer functions to serve a number of
others in an unequal relationship. In either case, two computers typically communicate with each
other by using request/response protocols. One computer issues a request for a service, and a second
computer receives and responds to that request. The requestor takes on the role of a client, and the
responder takes on the role of a server.
In a peer-to-peer network, networked computers act as equal partners, or peers. As peers, each
computer can take on the client function or the server function. At one time, computer A may
make a request for a file from computer B, which responds by serving the file to computer A.
Computer A functions as client, while B functions as the server. At a later time, computers A and B
can reverse roles.
In a peer-to-peer network, individual users control their own resources. The users may decide to
share certain files with other users. The users may also require passwords before allowing others
to access their resources. Since individual users make these decisions, there is no central point of
control or administration in the network. In addition, individual users must back up their own
systems to be able to recover from data loss in case of failures. When a computer acts as a server,
the user of that machine may experience reduced performance as the machine serves the requests
made by other systems.
Peer-to-peer networks are relatively easy to install and operate. No additional equipment is
necessary beyond a suitable operating system installed on each computer. Since users control their
own resources, no dedicated administrators are needed.
As networks grow, peer-to-peer relationships become increasingly difficult to coordinate. A peer-
to-peer network works well with 10 or fewer computers. Since peer-to-peer networks do not scale
well, their efficiency decreases rapidly as the number of computers on the network increases. Also,
individual users control access to the resources on their computers, which means security may be
difficult to maintain. The client/server model of networking can be used to overcome the limitations
of the peer-to-peer network.
 5.1.13 Client/server
In a client/server arrangement, network services are located on a dedicated computer called a server.
The server responds to the requests of clients. The server is a central computer that is
continuously available to respond to requests from clients for file, print, application, and other
services. Most network operating systems adopt the form of a client/server relationship. Typically,
desktop computers function as clients and one or more computers with additional processing power,
memory, and specialized software function as servers.
Servers are designed to handle requests from many clients simultaneously. Before a client can
access the server resources, the client must be identified and be authorized to use the resource. This
is done by assigning each client an account name and password that is verified by an authentication
service. The authentication service acts as a sentry to guard access to the network. With the
centralization of user accounts, security, and access control, server-based networks simplify the
administration of large networks.
The concentration of network resources such as files, printers, and applications on servers also
makes the data generated easier to back-up and maintain. Rather than having these resources spread
around individual machines, resources can be located on specialized, dedicated servers for easier
access. Most client/server systems also include facilities for enhancing the network by adding new
services that extend the usefulness of the network.
The distribution of functions in the client/server networks brings substantial advantages, but it also
incurs some costs. Although the aggregation of resources on server systems brings greater security,
simpler access and coordinated control, the server introduces a single point of failure into the
network. Without an operational server, the network cannot function at all. Servers require a
trained, expert staff to administer and maintain. This increases the expense of running the network.
Server systems also require additional hardware and specialized software that add to the cost.
Figures and summarize the advantages and disadvantages of peer-to-peer vs. client-server.
 5.2.1 WAN physical layer
The physical layer implementations vary depending on the distance of the equipment from the
services, the speed, and the type of service itself. Serial connections are used to support WAN
services such as dedicated leased lines that run Point-to-Point Protocol (PPP) or Frame Relay. The
speed of these connections ranges from 2400 bits per second (bps) to T1 service at 1.544 megabits
per second (Mbps) and E1 service at 2.048 megabits per second (Mbps).
ISDN offers dial-on-demand connections or dial backup services. An ISDN Basic Rate Interface
(BRI) is composed of two 64 kbps bearer channels (B channels) for data, and one delta channel (D
channel) at 16 kbps used for signaling and other link-management tasks. PPP is typically used to
carry data over the B channels.
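The channel arithmetic for a BRI line is worth making explicit:

```python
# ISDN BRI capacity, from the channel structure described above.
b_channels = 2 * 64   # two 64-kbps bearer (B) channels for data
d_channel = 16        # one 16-kbps delta (D) channel for signaling

print(b_channels)               # 128 kbps usable for data
print(b_channels + d_channel)   # 144 kbps total BRI channel rate
```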
With the increasing demand for residential broadband high-speed services, DSL and cable modem
connections are becoming more popular. For example, typical residential DSL service can achieve
T1/E1 speeds over the existing telephone line. Cable services use the existing coaxial cable TV line.
A coaxial cable line provides high-speed connectivity matching or exceeding that of DSL. DSL and
cable modem service will be covered in more detail in a later module.
 5.2.2 WAN serial connections
For long distance communication, WANs use serial transmission. This is a process by which bits of
data are sent over a single channel. This process provides more reliable long distance
communication and the use of a specific electromagnetic or optical frequency range.
Frequencies are measured in terms of cycles per second and expressed in Hertz (Hz). Signals
transmitted over voice grade telephone lines use 4 kilohertz (kHz). The size of the frequency range
is referred to as bandwidth. In networking, bandwidth is a measure of the number of bits per second that are transmitted.
For a Cisco router, physical connectivity at the customer site is provided by one of two types of
serial connections. The first type of serial connection is a 60-pin connector. The second is a more
compact ‘smart serial’ connector. The provider connector will vary depending on the type of service.
If the connection is made directly to a service provider, or a device that provides signal clocking
such as a channel/data service unit (CSU/DSU), the router will be a data terminal equipment (DTE)
and use a DTE serial cable. Typically this is the case. However, there are occasions where the local
router is required to provide the clocking rate and therefore will use a data communications
equipment (DCE) cable. In the curriculum router labs one of the connected routers will need to
provide the clocking function. Therefore, the connection will consist of a DCE and a DTE cable.
 5.2.3 Routers and serial connections
Routers are responsible for routing data packets from source to destination within the LAN, and for
providing connectivity to the WAN. Within a LAN environment the router contains broadcasts,
provides local address resolution services, such as ARP and RARP, and may segment the network
using a subnetwork structure. In order to provide these services the router must be connected to the
LAN and WAN.
In addition to determining the cable type, it is necessary to determine whether DTE or DCE
connectors are required. The DTE is the endpoint of the user’s device on the WAN link. The DCE
is typically the point where responsibility for delivering data passes into the hands of the service
provider. When connecting directly to a service provider, or to a device such as a CSU/DSU that will perform
signal clocking, the router is a DTE and needs a DTE serial cable. This is typically the case for
routers. However, there are cases when the router will need to be the DCE. When performing a
back-to-back router scenario in a test environment, one of the routers will be a DTE and the other
will be a DCE.
When cabling routers for serial connectivity, the routers will either have fixed or modular ports. The
type of port being used will affect the syntax used later to configure each interface.
Interfaces on routers with fixed serial ports are labeled for port type and port number.
Interfaces on routers with modular serial ports are labeled for port type, slot, and port number.
The slot is the location of the module. To configure a port on a modular card, it is necessary to
specify the interface using the syntax “port type slot number/port number.” Use the label “serial
1/0,” when the interface is serial, the slot number where the module is installed is slot 1, and the
port that is being referenced is port 0.
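The "port type slot/port" syntax can be pulled apart mechanically. The parser below is a hypothetical illustration of our own (not a Cisco IOS utility):

```python
# Hypothetical parser for the modular-port interface syntax described above,
# e.g. "serial 1/0" -> port type "serial", slot 1, port 0.
def parse_interface(name: str) -> tuple[str, int, int]:
    """Split an interface label into (port type, slot number, port number)."""
    port_type, location = name.split()
    slot, port = location.split("/")
    return port_type, int(slot), int(port)

print(parse_interface("serial 1/0"))  # ('serial', 1, 0)
```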
 5.2.4 Routers and ISDN BRI connections
With ISDN BRI, two types of interfaces may be used, BRI S/T and BRI U. Determine who is
providing the Network Termination 1 (NT1) device in order to determine which interface type is needed.
An NT1 is an intermediate device located between the router and the service provider ISDN switch.
The NT1 is used to connect four-wire subscriber wiring to the conventional two-wire local loop. In
North America, the customer typically provides the NT1, while in the rest of the world the service
provider provides the NT1 device.
It may be necessary to provide an external NT1 if the device is not already integrated into the
router. Reviewing the labeling on the router interfaces is usually the easiest way to determine if the
router has an integrated NT1. A BRI interface with an integrated NT1 is labeled BRI U. A BRI
interface without an integrated NT1 is labeled BRI S/T. Because routers can have multiple ISDN
interface types, determine which interface is needed when the router is purchased. The type of BRI
interface may be determined by looking at the port label. To interconnect the ISDN BRI port to
the service-provider device, use a UTP Category 5 straight-through cable.
Caution: It is important to insert the cable running from an ISDN BRI port only to an ISDN jack or
an ISDN switch. ISDN BRI uses voltages that can seriously damage non-ISDN devices.
 5.2.5 Routers and DSL connections
The Cisco 827 ADSL router has one asymmetric digital subscriber line (ADSL) interface. To
connect an ADSL line to the ADSL port on a router, do the following:
Connect the phone cable to the ADSL port on the router.
Connect the other end of the phone cable to the phone jack.
To connect a router for DSL service, use a phone cable with RJ-11 connectors. DSL works over
standard telephone lines using pins 3 and 4 on a standard RJ-11 connector.
 5.2.6 Routers and cable connections
The Cisco uBR905 cable access router provides high-speed network access on the cable television
system to residential and small office, home office (SOHO) subscribers. The uBR905 router has a
coaxial cable, or F-connector, interface that connects directly to the cable system. Coaxial cable and
an F-type connector are used to connect the router and cable system.
Use the following steps to connect the Cisco uBR905 cable access router to the cable system:
Verify that the router is not connected to power.
Locate the RF coaxial cable coming from the coaxial cable (TV) wall outlet.
Install a cable splitter/directional coupler, if needed, to separate signals for TV and computer use. If
necessary, also install a high-pass filter to prevent interference between the TV and computer
signals. Connect the coaxial cable to the F connector of the router. Hand-tighten the connector, making
sure that it is finger-tight, and then give it a 1/6 turn with a wrench.
Make sure that all other coaxial cable connectors, all intermediate splitters, couplers, or ground
blocks, are securely tightened from the distribution tap to the Cisco uBR905 router.
Caution: Do not overtighten the connector. Overtightening may break off the connector. Do not
use a torque wrench because of the danger of tightening the connector more than the recommended
1/6 turn after it is finger-tight.
 5.2.7 Setting up console connections
To initially configure the Cisco device, a management connection must be directly connected to the
device. For Cisco equipment this management attachment is called a console port. The console port
allows monitoring and configuration of a Cisco hub, switch, or router.
The cable used between a terminal and a console port is a rollover cable, with RJ-45 connectors.
The rollover cable, also known as a console cable, has a different pinout than the straight-through or
crossover RJ-45 cables used with Ethernet or the ISDN BRI. The pinout for a rollover is as follows:
1 to 8
2 to 7
3 to 6
4 to 5
5 to 4
6 to 3
7 to 2
8 to 1
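The rollover pinout listed above is simply a reversal: pin N on one end maps to pin 9 − N on the other. A quick check of that property:

```python
# The rollover (console) cable pinout is a straight reversal of pins 1-8.
rollover = {n: 9 - n for n in range(1, 9)}
print(rollover)  # {1: 8, 2: 7, 3: 6, 4: 5, 5: 4, 6: 3, 7: 2, 8: 1}
```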
To set up a connection between the terminal and the Cisco console port, perform two steps. First,
connect the devices using a rollover cable from the router console port to the workstation serial
port. An RJ-45-to-DB-9 or an RJ-45-to-DB-25 adapter may be required for the PC or terminal.
Next, configure the terminal emulation application with the following communications (COM)
port settings: 9600 bps, 8 data bits, no parity, 1 stop bit, and no flow control.
The AUX port is used to provide out-of-band management through a modem. The AUX port must
be configured by way of the console port before it can be used. The AUX port also uses the settings
of 9600 bps, 8 data bits, no parity, 1 stop bit, and no flow control.
Module 6: Ethernet Fundamentals
Ethernet is now the dominant LAN technology in the world. Ethernet is not one technology but a
family of LAN technologies and may be best understood by using the OSI reference model. All
LANs must deal with the basic issue of how individual stations (nodes) are named, and Ethernet is
no exception. Ethernet specifications support different media, bandwidths, and other Layer 1 and 2
variations. However, the basic frame format and addressing scheme are the same for all varieties of Ethernet.
For multiple stations to access physical media and other networking devices, various media access
control strategies have been invented. Understanding how network devices gain access to the
network media is essential for understanding and troubleshooting the operation of the entire network.
 6.1.1 Introduction to Ethernet
Most of the traffic on the Internet originates and ends with Ethernet connections. From its beginning
in the 1970s, Ethernet has evolved to meet the increasing demand for high-speed LANs. When a
new medium was introduced, such as optical fiber, Ethernet adapted to take advantage of the superior
bandwidth and low error rate that fiber offers. Now, the same protocol that transported data at 3
Mbps in 1973 is carrying data at 10 Gbps.
The success of Ethernet is due to the following factors:
Simplicity and ease of maintenance
Ability to incorporate new technologies
Low cost of installation and upgrade
With the introduction of Gigabit Ethernet, what started as a LAN technology now extends out to
distances that make Ethernet a metropolitan-area network (MAN) and wide-area network (WAN) standard.
The original idea for Ethernet grew out of the problem of allowing two or more hosts to use the
same medium and prevent the signals from interfering with each other. This problem of multiple
user access to a shared medium was studied in the early 1970s at the University of Hawaii. A
system called Alohanet was developed to allow various stations on the Hawaiian Islands structured
access to the shared radio frequency band in the atmosphere. This work later formed the basis for
the Ethernet access method known as CSMA/CD.
The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his coworkers
at Xerox designed it more than thirty years ago. The first Ethernet standard was published in 1980
by a consortium of Digital Equipment Corporation, Intel, and Xerox (DIX). Metcalfe wanted Ethernet
to be a shared standard from which everyone could benefit, so it was released as an open standard.
The first products developed using the Ethernet standard were sold during the early 1980s. Ethernet
transmitted at up to 10 Mbps over thick coaxial cable up to a distance of two kilometers. This type
of coaxial cable was referred to as thicknet and was about the width of a small finger.
In 1985, the Institute of Electrical and Electronics Engineers (IEEE) standards committee for Local
and Metropolitan Networks published standards for LANs. These standards start with the number
802. The standard for Ethernet is 802.3. The IEEE wanted to make sure that its standards were
compatible with the International Organization for Standardization (ISO)/OSI model. To do this, the IEEE
802.3 standard had to address the needs of Layer 1 and the lower portion of Layer 2 of the OSI
model. As a result, some small modifications to the original Ethernet standard were made in 802.3.
The differences between the two standards were so minor that any Ethernet network interface card
(NIC) can transmit and receive both Ethernet and 802.3 frames. Essentially, Ethernet and IEEE
802.3 are the same standards.
The 10-Mbps bandwidth of Ethernet was more than enough for the slow personal computers (PCs)
of the 1980s. By the early 1990s PCs became much faster, file sizes increased, and data flow
bottlenecks were occurring. Most were caused by the low availability of bandwidth. In 1995, IEEE
announced a standard for a 100-Mbps Ethernet. This was followed by standards for gigabit per
second (Gbps, 1 billion bits per second) Ethernet in 1998 and 1999.
All the standards are essentially compatible with the original Ethernet standard. An Ethernet frame
could leave an older coax 10-Mbps NIC in a PC, be placed onto a 10-Gbps Ethernet fiber link, and
end up at a 100-Mbps NIC. As long as the packet stays on Ethernet networks it is not changed. For
this reason Ethernet is considered very scalable. The bandwidth of the network could be increased
many times without changing the underlying Ethernet technology.
The original Ethernet standard has been amended a number of times in order to manage new
transmission media and higher transmission rates. These amendments provide standards for the
emerging technologies and maintain compatibility between Ethernet variations.
 6.1.2 IEEE Ethernet naming rules
Ethernet is not one networking technology, but a family of networking technologies that includes
Legacy, Fast Ethernet, and Gigabit Ethernet. Ethernet speeds can be 10, 100, 1000, or 10,000 Mbps.
The basic frame format and the IEEE sublayers of OSI Layers 1 and 2 remain consistent across all
forms of Ethernet.
When Ethernet needs to be expanded to add a new medium or capability, the IEEE issues a new
supplement to the 802.3 standard. The new supplements are given a one or two letter designation
such as 802.3u. An abbreviated description (called an identifier) is also assigned to the supplement.
The abbreviated description consists of:
A number indicating the number of Mbps transmitted.
The word base, indicating that baseband signaling is used.
One or more letters of the alphabet indicating the type of medium used (F = fiber-optic cable, T =
copper unshielded twisted pair).
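The three-part identifier can be parsed mechanically. The function below is a hypothetical sketch of our own, covering identifiers such as 10BASE-T and 100BASE-FX:

```python
# Hypothetical parser for IEEE Ethernet identifiers, following the
# three-part description above: speed, "BASE" (baseband), medium code.
import re

def parse_identifier(ident: str) -> dict:
    """Split an identifier like '100BASE-T' into its three parts."""
    m = re.fullmatch(r"(\d+)BASE-?(\w+)", ident.upper())
    if not m:
        raise ValueError(f"not an Ethernet identifier: {ident}")
    return {"mbps": int(m.group(1)),
            "signaling": "baseband",
            "medium": m.group(2)}

print(parse_identifier("100BASE-T"))
# {'mbps': 100, 'signaling': 'baseband', 'medium': 'T'}
```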
Ethernet relies on baseband signaling, which uses the entire bandwidth of the transmission medium.
The data signal is transmitted directly over the transmission medium. In broadband signaling, not
used by Ethernet, the data signal is never placed directly on the transmission medium. An analog
signal (carrier signal) is modulated by the data signal and the modulated carrier signal is
transmitted. Radio broadcasts and cable TV use broadband signaling.
The IEEE cannot force manufacturers of networking equipment to fully comply with all the
particulars of any standard. The IEEE hopes to achieve the following:
Supply the engineering information necessary to build devices that comply with Ethernet standards.
Promote innovation by manufacturers.
6.1.3 Ethernet and the OSI model
Ethernet operates in two areas of the OSI model: the lower half of the data link layer, known as the
MAC sublayer, and the physical layer.
To move data between one Ethernet station and another, the data often passes through a repeater.
All other stations in the same collision domain see traffic that passes through a repeater. A
collision domain is then a shared resource. Problems originating in one part of the collision domain
will usually impact the entire collision domain.
A repeater is responsible for forwarding all traffic to all other ports. Traffic received by a repeater is
never sent out the originating port. Any signal detected by a repeater will be forwarded. If the signal
is degraded through attenuation or noise, the repeater will attempt to reconstruct and regenerate the
signal.
Standards guarantee minimum bandwidth and operability by specifying the maximum number of
stations per segment, maximum segment length, maximum number of repeaters between stations,
etc. Stations separated by repeaters are within the same collision domain. Stations separated by
bridges or routers are in different collision domains.
Figure maps a variety of Ethernet technologies to the lower half of OSI Layer 2 and all of Layer
1. Ethernet at Layer 1 involves interfacing with media, signals, bit streams that travel on the media,
components that put signals on media, and various topologies. Ethernet Layer 1 performs a key role
in the communication that takes place between devices, but each of its functions has limitations.
Layer 2 addresses these limitations.
Data link sublayers contribute significantly to technology compatibility and computer
communication. The MAC sublayer is concerned with the physical components that will be used to
communicate the information. The Logical Link Control (LLC) sublayer remains relatively
independent of the physical equipment that will be used for the communication process.
While there are other varieties of Ethernet, the ones shown in the figure are the most widely used.
 6.1.4 Naming
To allow for local delivery of frames on the Ethernet, there must be an addressing system, a way of
uniquely identifying computers and interfaces. Ethernet uses MAC addresses that are 48 bits in
length and expressed as twelve hexadecimal digits. The first six hexadecimal digits, which are
administered by the IEEE, identify the manufacturer or vendor. This portion of the MAC address is
known as the Organizational Unique Identifier (OUI). The remaining six hexadecimal digits
represent the interface serial number, or another value administered by the specific equipment
manufacturer. MAC addresses are sometimes referred to as burned-in addresses (BIA) because
they are burned into read-only memory (ROM) and are copied into random-access memory (RAM)
when the NIC initializes.
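The OUI split described above can be sketched in Python. The address used below is a made-up example for illustration, not a registered vendor assignment:

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a 48-bit MAC address into its OUI (vendor) portion and the
    vendor-administered portion, each 24 bits / 6 hexadecimal digits."""
    # Accept common notations: 00:0C:29..., 00-0C-29..., 000C.293E..., 000C29...
    digits = mac.replace(":", "").replace("-", "").replace(".", "").upper()
    if len(digits) != 12:
        raise ValueError("a MAC address is exactly 12 hexadecimal digits")
    int(digits, 16)  # raises ValueError if any character is not a hex digit
    return digits[:6], digits[6:]

oui, vendor_part = split_mac("00:0c:29:3e:1a:7f")  # hypothetical address
print(oui, vendor_part)  # first 6 digits are IEEE-administered, rest vendor-assigned
```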
At the data link layer MAC headers and trailers are added to upper layer data. The header and trailer
contain control information intended for the data link layer in the destination system. Data from
upper layer entities is encapsulated in the data link layer header and trailer.
The NIC uses the MAC address to assess whether the message should be passed onto the upper
layers of the OSI model. The NIC makes this assessment without using CPU processing time,
enabling better communication times on an Ethernet network.
On an Ethernet network, when one device sends data it can open a communication pathway to the
other device by using the destination MAC address. The source device attaches a header with the
MAC address of the intended destination and sends data onto the network. As this data propagates
along the network media the NIC in each device on the network checks to see if the MAC address
matches the physical destination address carried by the data frame. If there is no match, the NIC
discards the data frame. When the data reaches the destination node, the NIC makes a copy and
passes the frame up the OSI layers. On an Ethernet network, all nodes must examine the MAC
header even if the communicating nodes are side by side.
All devices connected to the Ethernet LAN have MAC-addressed interfaces, including
workstations, printers, routers, and switches.
 6.1.5 Layer 2 framing
Encoded bit streams (data) on physical media represent a tremendous technological
accomplishment, but they alone are not enough to make communication happen. Framing helps
obtain essential information that could not otherwise be obtained with coded bit streams alone.
Examples of such information are:
Which computers are communicating with one another
When communication between individual computers begins and when it terminates
A method for detecting errors that occurred during the communication
Whose turn it is to "talk" in a computer "conversation"
Framing is the Layer 2 encapsulation process. A frame is the Layer 2 protocol data unit.
A voltage vs. time graph could be used to visualize bits. However, when dealing with larger units of
data, addressing and control information, a voltage vs. time graph could become large and
confusing. Another type of diagram that could be used is the frame format diagram, which is based
on voltage versus time graphs. Frame format diagrams are read from left to right, just like an
oscilloscope graph. The frame format diagram shows the different groupings of bits (fields) that
perform specific functions.
There are many different types of frames described by various standards. A single generic frame has
sections called fields, and each field is composed of bytes. The names of the fields are as follows:
Start frame field
Address field
Length / type field
Data field
Frame check sequence field
When computers are connected to a physical medium, there must be a way they can grab the
attention of other computers to broadcast the message, "Here comes a frame!" Various technologies
have different ways of doing this process, but all frames, regardless of technology, have a beginning
signaling sequence of bytes.
All frames contain naming information, such as the name of the source node (MAC address) and
the name of the destination node (MAC address).
Most frames have some specialized fields. In some technologies, a length field specifies the exact
length of a frame in bytes. Some frames have a type field, which specifies the Layer 3 protocol
making the sending request.
The reason for sending frames is to get upper layer data, ultimately the user application data, from
the source to the destination. The data package has two parts, the user application data and the
encapsulated bytes to be sent to the destination computer. Padding bytes may be added so frames
have a minimum length for timing purposes. Logical link control (LLC) bytes are also included
with the data field in the IEEE standard frames. The LLC sub-layer takes the network protocol data,
an IP packet, and adds control information to help deliver that IP packet to the destination node.
Layer 2 communicates with the upper-level layers through LLC.
All frames and the bits, bytes, and fields contained within them, are susceptible to errors from a
variety of sources. The Frame Check Sequence (FCS) field contains a number that is calculated by
the source node based on the data in the frame. This FCS is then added to the end of the frame that
is being sent. When the destination node receives the frame the FCS number is recalculated and
compared with the FCS number included in the frame. If the two numbers are different, an error is
assumed, the frame is discarded, and the source is asked to retransmit.
There are three primary ways to calculate the Frame Check Sequence number:
Cyclic Redundancy Check (CRC) – performs calculations on the data.
Two-dimensional parity – adds an 8th bit that makes an 8-bit sequence have an odd or even
number of binary 1s.
Internet checksum – adds the values of all of the data bits to arrive at a sum.
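Two of the approaches above can be illustrated with simplified sketches. These are toy functions that only show the idea: the real internet checksum uses 16-bit one's-complement folding and inversion, and Ethernet itself uses a CRC:

```python
def internet_checksum(data: bytes) -> int:
    """Simplified illustration of a checksum: sum all byte values.
    (The real IP checksum sums 16-bit words in one's-complement
    arithmetic and inverts the result.)"""
    return sum(data) & 0xFFFF

def even_parity_bit(byte: int) -> int:
    """Parity illustration: the extra bit that makes the total number
    of binary 1s (7 data bits plus the parity bit) even."""
    return bin(byte & 0x7F).count("1") % 2

print(internet_checksum(b"frame"))
print(even_parity_bit(0b0110101))  # already four 1s, so the parity bit is 0
```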
The node that transmits data must get the attention of other devices, in order to start a frame, and to
end the frame. The length field implies the end, and the frame is considered ended after the FCS.
Sometimes there is a formal byte sequence referred to as an end-frame delimiter.
 6.1.6 Ethernet frame structure
At the data link layer the frame structure is nearly identical for all speeds of Ethernet from 10 Mbps
to 10,000 Mbps. However, at the physical layer almost all versions of Ethernet are substantially
different from one another with each speed having a distinct set of architecture design rules.
In the version of Ethernet that was developed by DIX prior to the adoption of the IEEE 802.3
version of Ethernet, the Preamble and Start Frame Delimiter (SFD) were combined into a single
field, though the binary pattern was identical. The field labeled Length/Type was only listed as
Length in the early IEEE versions and only as Type in the DIX version. These two uses of the field
were officially combined in a later IEEE version, as both uses of the field were common throughout
the industry. The Ethernet II Type field is incorporated into the current 802.3 frame definition. The receiving
node must determine which higher-layer protocol is present in an incoming frame by examining the
Length/Type field. If the two-octet value is equal to or greater than 0x600 (hexadecimal), then the
frame is interpreted according to the Ethernet II type code indicated.
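The 0x600 threshold rule can be expressed directly. The function name and return strings below are illustrative only:

```python
ETHERTYPE_THRESHOLD = 0x600  # 1536 decimal

def interpret_length_type(value: int) -> str:
    """Decode the two-octet Length/Type field per IEEE 802.3: values
    below 0x600 are a length; 0x600 and above are an EtherType code."""
    if value >= ETHERTYPE_THRESHOLD:
        return f"type: EtherType 0x{value:04X}"
    return f"length: {value} octets of LLC data"

print(interpret_length_type(0x0800))  # 0x0800 is the IPv4 EtherType
print(interpret_length_type(46))      # minimum data field length
```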
 6.1.7 Ethernet frame fields
Some of the fields permitted or required in an 802.3 Ethernet Frame are:
Preamble
Start Frame Delimiter
Destination Address
Source Address
Length/Type
Data and Pad
Frame Check Sequence
The Preamble is an alternating pattern of ones and zeroes used for timing synchronization in the
asynchronous 10 Mbps and slower implementations of Ethernet. Faster versions of Ethernet are
synchronous, and this timing information is redundant but retained for compatibility.
A Start Frame Delimiter consists of a one-octet field that marks the end of the timing information,
and contains the bit sequence 10101011.
The Destination Address field contains the MAC destination address. The destination address can
be unicast, multicast (group), or broadcast (all nodes).
The Source Address field contains the MAC source address. The source address is generally the
unicast address of the transmitting Ethernet node. There are, however, an increasing number of
virtual protocols in use that use and sometimes share a specific source MAC address to identify the
virtual entity.
The Length/Type field supports two different uses. If the value is less than 1536 decimal (0x600
hexadecimal), the value indicates length, the number of bytes of data that follows this field. The
length interpretation is used where the LLC layer provides the protocol identification. If the value is
equal to or greater than 1536 decimal (0x600 hexadecimal), the value is a type code that specifies
the upper-layer protocol to receive the data after Ethernet processing is completed, and the contents
of the Data field are decoded per the protocol indicated.
The Data and Pad field may be of any length that does not cause the frame to exceed the maximum
frame size. The maximum transmission unit (MTU) for Ethernet is 1500 octets, so the data should
not exceed that size. The content of this field is unspecified. An unspecified pad is inserted
immediately after the user data when there is not enough user data for the frame to meet the
minimum frame length. Ethernet requires that the data field be not less than 46 octets, which keeps
the complete frame between 64 and 1518 octets.
A FCS contains a four byte CRC value that is created by the sending device and is recalculated by
the receiving device to check for damaged frames. Since the corruption of a single bit anywhere
from the beginning of the Destination Address through the end of the FCS field will cause the
checksum to be different, the coverage of the FCS includes itself. It is not possible to distinguish
between corruption of the FCS itself and corruption of any preceding field used in the calculation.
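Putting the field rules together, a minimal framing sketch in Python. The Preamble and SFD are omitted, and `zlib.crc32` stands in for the Ethernet FCS, which additionally specifies bit ordering and complementation on the wire:

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, eth_type: int, payload: bytes) -> bytes:
    """Sketch of Ethernet framing: pad the data field to the 46-octet
    minimum, then append a 4-octet CRC-32 as the FCS. The FCS covers
    everything from the destination address through the data."""
    if len(payload) < 46:
        payload = payload + b"\x00" * (46 - len(payload))  # pad field
    header = dst + src + struct.pack("!H", eth_type)       # 6 + 6 + 2 octets
    fcs = zlib.crc32(header + payload)
    return header + payload + struct.pack("!I", fcs)

frame = build_frame(b"\xff" * 6, b"\x00\x0c\x29\x3e\x1a\x7f", 0x0800, b"hi")
print(len(frame))  # 6 + 6 + 2 + 46 + 4 = 64 octets, the Ethernet minimum
```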
 6.2.1 Media Access Control (MAC)
MAC refers to protocols that determine which computer on a shared-medium environment, or
collision domain, is allowed to transmit the data. MAC, with LLC, comprises the IEEE version of
the OSI Layer 2. MAC and LLC are sublayers of Layer 2. There are two broad categories of Media
Access Control, deterministic (taking turns) and non-deterministic (first come, first served).
Examples of deterministic protocols include Token Ring and FDDI. In a Token Ring network,
individual hosts are arranged in a ring and a special data token travels around the ring to each host
in sequence. When a host wants to transmit, it seizes the token, transmits the data for a limited time,
and then forwards the token to the next host in the ring. Token Ring is a collisionless environment
as only one host is able to transmit at any given time.
Non-deterministic MAC protocols use a first-come, first-served approach. CSMA/CD is a simple
system: the NIC listens for an absence of signal on the media and starts transmitting. If two nodes
transmit at the same time a collision occurs and neither transmission succeeds.
Three common Layer 2 technologies are Token Ring, FDDI, and Ethernet. All three specify Layer 2
issues, LLC, naming, framing, and MAC, as well as Layer 1 signaling components and media
issues. The specific technologies for each are as follows:
Ethernet – logical bus topology (information flow is on a linear bus) and physical star or extended
star (wired as a star)
Token Ring – logical ring topology (in other words, information flow is controlled in a ring) and a
physical star topology (in other words, it is wired as a star)
FDDI – logical ring topology (information flow is controlled in a ring) and physical dual-ring
topology (wired as a dual-ring)
 6.2.2 MAC rules and collision detection/backoff
Ethernet is a shared-media broadcast technology. The access method CSMA/CD used in Ethernet
performs three functions:
Transmitting and receiving data packets
Decoding data packets and checking them for valid addresses before passing them to the upper
layers of the OSI model
Detecting errors within data packets or on the network
In the CSMA/CD access method, networking devices with data to transmit work in a listen-before-
transmit mode. This means when a node wants to send data, it must first check to see whether the
networking media is busy. If the node determines the network is busy, the node will wait a random
amount of time before retrying. If the node determines the networking media is not busy, the node
will begin transmitting and listening. The node listens to ensure no other stations are transmitting at
the same time. After completing data transmission the device will return to listening mode.
Networking devices detect that a collision has occurred when the amplitude of the signal on the
networking media increases. When a collision occurs, each node that is transmitting will continue to
transmit for a short time to ensure that all devices see the collision. Once all the devices have
detected the collision a backoff algorithm is invoked and transmission is stopped. The nodes stop
transmitting for a random period of time, which is different for each device. When the delay period
expires, all devices on the network can attempt to gain access to the networking media. When data
transmission resumes on the network, the devices that were involved in the collision do not have
priority to transmit data.
 6.2.3 Ethernet timing
The basic rules and specifications for proper operation of Ethernet are not particularly complicated,
though some of the faster physical layer implementations are becoming so. Despite the basic
simplicity, when a problem occurs in Ethernet it is often quite difficult to isolate the source.
Because of the common bus architecture of Ethernet, also described as a distributed single point of
failure, the scope of the problem usually encompasses all devices within the domain. In situations
where repeaters are used, this can include devices up to four segments away.
Any station on an Ethernet network wishing to transmit a message first “listens” to ensure that no
other station is currently transmitting. If the cable is quiet, the station will begin transmitting
immediately. The electrical signal takes time to travel down the cable (delay), and each subsequent
repeater introduces a small amount of latency in forwarding the frame from one port to the next.
Because of the delay and latency, it is possible for more than one station to begin transmitting at or
near the same time. This results in a collision.
If the attached station is operating in full duplex then the station may send and receive
simultaneously and collisions should not occur. Full-duplex operation also changes the timing
considerations and eliminates the concept of slot time. Full-duplex operation allows for larger
network architecture designs since the timing restriction for collision detection is removed.
In half duplex, assuming that a collision does not occur, the sending station will transmit 64 bits of
timing synchronization information that is known as the preamble. The sending station will then
transmit the following information:
Destination and source MAC addressing information
Certain other header information
The actual data payload
Checksum (FCS) used to ensure that the message was not corrupted along the way
Stations receiving the frame recalculate the FCS to determine if the incoming message is valid and
then pass valid messages to the next higher layer in the protocol stack.
10 Mbps and slower versions of Ethernet are asynchronous. Asynchronous means that each
receiving station will use the eight octets of timing information to synchronize the receive circuit to
the incoming data, and then discard it. 100 Mbps and higher speed implementations of Ethernet are
synchronous. Synchronous means the timing information is not required, however for compatibility
reasons the Preamble and SFD are present.
For all speeds of Ethernet transmission at or below 1000 Mbps, the standard describes how a
transmission may be no smaller than the slot time. Slot time for 10 and 100-Mbps Ethernet is 512
bit-times, or 64 octets. Slot time for 1000-Mbps Ethernet is 4096 bit-times, or 512 octets. Slot time
is calculated assuming maximum cable lengths on the largest legal network architecture. All
hardware propagation delay times are at the legal maximum and the 32-bit jam signal is used when
collisions are detected.
The actual calculated slot time is just longer than the theoretical amount of time required to travel
between the furthest points of the collision domain, collide with another transmission at the last
possible instant, and then have the collision fragments return to the sending station and be detected.
For the system to work the first station must learn about the collision before it finishes sending the
smallest legal frame size. To allow 1000-Mbps Ethernet to operate in half duplex the extension field
was added when sending small frames purely to keep the transmitter busy long enough for a
collision fragment to return. This field is present only on 1000-Mbps, half-duplex links and allows
minimum-sized frames to be long enough to meet slot time requirements. Extension bits are
discarded by the receiving station.
On 10-Mbps Ethernet one bit at the MAC layer requires 100 nanoseconds (ns) to transmit. At 100
Mbps that same bit requires 10 ns to transmit and at 1000 Mbps only takes 1 ns. As a rough
estimate, 20.3 cm (8 in) per nanosecond is often used for calculating propagation delay down a UTP
cable. For 100 meters of UTP, this means that it takes just under 5 bit-times for a 10BASE-T signal
to travel the length the cable.
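The arithmetic in this paragraph can be checked directly:

```python
# Bit time per speed (nanoseconds per bit at the MAC layer) and
# propagation delay over 100 m of UTP at ~20.3 cm per nanosecond.
bit_time_ns = {10e6: 100.0, 100e6: 10.0, 1000e6: 1.0}

cable_m = 100
delay_ns = cable_m / 0.203  # ~493 ns to traverse the cable

for speed, bt in bit_time_ns.items():
    print(f"{speed / 1e6:.0f} Mbps: {delay_ns / bt:.1f} bit-times over {cable_m} m")
# At 10 Mbps the 100 m run costs just under 5 bit-times, as stated above.
# At 1000 Mbps the same run costs ~493 bit-times, close to an entire
# minimum-sized frame (512 bits), which is why special adjustments and
# an extended slot time were needed at gigabit speeds.
```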
For CSMA/CD Ethernet to operate, the sending station must become aware of a collision before it
has completed transmission of a minimum-sized frame. At 100 Mbps the system timing is barely
able to accommodate 100 meter cables. At 1000 Mbps special adjustments are required as nearly an
entire minimum-sized frame would be transmitted before the first bit reached the end of the first
100 meters of UTP cable. For this reason half duplex is not permitted in 10-Gigabit Ethernet.
 6.2.4 Interframe spacing and backoff
The minimum spacing between two non-colliding frames is also called the interframe spacing. This
is measured from the last bit of the FCS field of the first frame to the first bit of the preamble of the
second frame.
After a frame has been sent, all stations on a 10-Mbps Ethernet are required to wait a minimum of
96 bit-times (9.6 microseconds) before any station may legally transmit the next frame. On faster
versions of Ethernet the spacing remains the same, 96 bit-times, but the time required for that
interval grows correspondingly shorter. This interval is referred to as the spacing gap. The gap is
intended to allow slow stations time to process the previous frame and prepare for the next frame.
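The fixed 96-bit gap translates to wall-clock time as follows (a small sketch):

```python
def interframe_gap_us(bits_per_second: float, gap_bits: int = 96) -> float:
    """Duration of the 96-bit-time interframe spacing in microseconds.
    The gap is constant in bit-times, so its duration shrinks as the
    bit rate rises."""
    return gap_bits / bits_per_second * 1e6

for mbps in (10, 100, 1000):
    print(f"{mbps} Mbps: {interframe_gap_us(mbps * 1e6):.3f} microseconds")
# 10 Mbps gives 9.600 microseconds, matching the figure in the text.
```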
A repeater is expected to regenerate the full 64 bits of timing information, which is the preamble
and SFD, at the start of any frame. This is despite the potential loss of some of the beginning
preamble bits because of slow synchronization. Because of this forced reintroduction of timing bits,
some minor reduction of the interframe gap is not only possible but expected. Some Ethernet
chipsets are sensitive to a shortening of the interframe spacing, and will begin failing to see frames
as the gap is reduced. With the increase in processing power at the desktop, it would be very easy
for a personal computer to saturate an Ethernet segment with traffic and to begin transmitting again
before the interframe spacing delay time is satisfied.
After a collision occurs and all stations allow the cable to become idle (each waits the full
interframe spacing), then the stations that collided must wait an additional and potentially
progressively longer period of time before attempting to retransmit the collided frame. The waiting
period is intentionally designed to be random so that two stations do not delay for the same amount
of time before retransmitting, which would result in more collisions. This is accomplished in part by
expanding the interval from which the random retransmission time is selected on each
retransmission attempt. The waiting period is measured in increments of the parameter slot time.
If the MAC layer is unable to send the frame after sixteen attempts, it gives up and generates an
error to the network layer. Such an occurrence is fairly rare and would happen only under extremely
heavy network loads, or when a physical problem exists on the network.
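The retransmission scheme described in this section is truncated binary exponential backoff. A minimal sketch follows; the cap of 1024 slot times after the tenth attempt is part of the standard scheme:

```python
import random

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff: after collision number
    `attempt` (1-based), wait a random number of slot times drawn from
    0 .. 2**min(attempt, 10) - 1. After 16 failed attempts the MAC
    layer gives up and reports an error to the network layer."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: error reported to network layer")
    return random.randrange(2 ** min(attempt, 10))

# After the first collision a station waits 0 or 1 slot times; the
# interval doubles on each retry until it is capped at 0..1023 slots.
print([2 ** min(a, 10) for a in (1, 2, 3, 10, 16)])  # interval sizes
```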
 6.2.5 Error handling
The most common error condition on an Ethernet is the collision. Collisions are the mechanism
for resolving contention for network access. A few collisions provide a smooth, simple, low
overhead way for network nodes to arbitrate contention for the network resource. When network
contention becomes too great, collisions can become a significant impediment to useful network
operation.
Collisions result in network bandwidth loss that is equal to the initial transmission and the collision
jam signal. This is consumption delay and affects all network nodes possibly causing significant
reduction in network throughput.
The considerable majority of collisions occur very early in the frame, often before the SFD.
Collisions occurring before the SFD are usually not reported to the higher layers, as if the collision
did not occur. As soon as a collision is detected, the sending stations transmit a 32-bit “jam” signal
that will enforce the collision. This is done so that any data being transmitted is thoroughly
corrupted and all stations have a chance to detect the collision.
In Figure two stations listen to ensure that the cable is idle, then transmit. Station 1 was able to
transmit a significant percentage of the frame before the signal even reached the last cable segment.
Station 2 had not received the first bit of the transmission prior to beginning its own transmission
and was only able to send several bits before the NIC sensed the collision. Station 2 immediately
truncated the current transmission, substituted the 32-bit jam signal and ceased all transmissions.
During the collision and jam event that Station 2 was experiencing, the collision fragments were
working their way back through the repeated collision domain toward Station 1. Station 2
completed transmission of the 32-bit jam signal and became silent before the collision propagated
back to Station 1 which was still unaware of the collision and continued to transmit. When the
collision fragments finally reached Station 1, it also truncated the current transmission and
substituted a 32-bit jam signal in place of the remainder of the frame it was transmitting. Upon
sending the 32-bit jam signal Station 1 ceased all transmissions.
A jam signal may be composed of any binary data so long as it does not form a proper checksum for
the portion of the frame already transmitted. The most commonly observed data pattern for a jam
signal is simply a repeating one, zero, one, zero pattern, the same as Preamble. When viewed by a
protocol analyzer this pattern appears as either a repeating hexadecimal 5 or A sequence. The
corrupted, partially transmitted messages are often referred to as collision fragments or runts.
Normal collisions are less than 64 octets in length and therefore fail both the minimum length test
and the FCS checksum test.
 6.2.6 Types of collisions
Collisions typically take place when two or more Ethernet stations transmit simultaneously within a
collision domain. A single collision is a collision that was detected while trying to transmit a frame,
but on the next attempt the frame was transmitted successfully. Multiple collisions indicate that the
same frame collided repeatedly before being successfully transmitted. The results of collisions,
collision fragments, are partial or corrupted frames that are less than 64 octets and have an invalid
FCS. Three types of collisions are:
Local collisions
Remote collisions
Late collisions
To create a local collision on coax cable (10BASE2 and 10BASE5), the signal travels down the
cable until it encounters a signal from the other station. The waveforms then overlap, canceling
some parts of the signal out and reinforcing or doubling other parts. The doubling of the signal
pushes the voltage level of the signal beyond the allowed maximum. This over-voltage condition is
then sensed by all of the stations on the local cable segment as a collision.
In the beginning the waveform in Figure represents normal Manchester encoded data. A few
cycles into the sample the amplitude of the wave doubles. That is the beginning of the collision,
where the two waveforms are overlapping. Just prior to the end of the sample the amplitude returns
to normal. This happens when the first station to detect the collision quits transmitting, and the jam
signal from the second colliding station is still observed.
On UTP cable, such as 10BASE-T, 100BASE-TX and 1000BASE-T, a collision is detected on the
local segment only when a station detects a signal on the RX pair at the same time it is sending on
the TX pair. Since the two signals are on different pairs there is no characteristic change in the
signal. Collisions are only recognized on UTP when the station is operating in half duplex. The only
functional difference between half and full duplex operation in this regard is whether or not the
transmit and receive pairs are permitted to be used simultaneously. If the station is not engaged in
transmitting it cannot detect a local collision. Conversely, a cable fault such as excessive crosstalk
can cause a station to perceive its own transmission as a local collision.
The characteristics of a remote collision are a frame that is less than the minimum length, has an
invalid FCS checksum, but does not exhibit the local collision symptom of over-voltage or
simultaneous RX/TX activity. This sort of collision usually results from collisions occurring on the
far side of a repeated connection. A repeater will not forward an over-voltage state, and cannot
cause a station to have both the TX and RX pairs active at the same time. The station would have to
be transmitting to have both pairs active, and that would constitute a local collision. On UTP
networks this is the most common sort of collision observed.
There is no possibility remaining for a normal or legal collision after the first 64 octets of data have
been transmitted by the sending stations. Collisions occurring after the first 64 octets are called "late
collisions". The most significant difference between late collisions and collisions occurring before
the first 64 octets is that the Ethernet NIC will retransmit a normally collided frame automatically,
but will not automatically retransmit a frame that was collided late. As far as the NIC is concerned
everything went out fine, and the upper layers of the protocol stack must determine that the frame
was lost. Other than retransmission, a station detecting a late collision handles it in exactly the same
way as a normal collision.
 6.2.7 Ethernet errors
Knowledge of typical errors is invaluable for understanding both the operation and troubleshooting
of Ethernet networks.
The following are the sources of Ethernet error:
Collision or runt – Simultaneous transmission occurring before slot time has elapsed
Late collision – Simultaneous transmission occurring after slot time has elapsed
Jabber, long frame and range errors – Excessively or illegally long transmission
Short frame, collision fragment or runt – Illegally short transmission
FCS error – Corrupted transmission
Alignment error – Insufficient or excessive number of bits transmitted
Range error – Actual and reported number of octets in frame do not match
Ghost or jabber – Unusually long Preamble or Jam event
While local and remote collisions are considered to be a normal part of Ethernet operation, late
collisions are considered to be an error. The presence of errors on a network always suggests that
further investigation is warranted. The severity of the problem indicates the troubleshooting
urgency related to the detected errors. A handful of errors detected over many minutes or over hours
would be a low priority. Thousands detected over a few minutes suggest that urgent attention is
needed.
Jabber is defined in several places in the 802.3 standard as being a transmission of at least 20,000 to
50,000 bit times in duration. However, most diagnostic tools report jabber whenever a detected
transmission exceeds the maximum legal frame size, which is considerably smaller than 20,000 to
50,000 bit times. Most references to jabber are more properly called long frames.
A long frame is one that is longer than the maximum legal size, and takes into consideration
whether or not the frame was tagged. It does not consider whether or not the frame had a valid FCS
checksum. This error usually means that jabber was detected on the network.
A short frame is a frame smaller than the minimum legal size of 64 octets, with a good frame check
sequence. Some protocol analyzers and network monitors call these frames “runts". In general the
presence of short frames is not a guarantee that the network is failing.
The term runt is generally an imprecise slang term that means something less than a legal frame
size. It may refer to short frames with a valid FCS checksum although it usually refers to collision
fragments.
 6.2.8 FCS and beyond
A received frame that has a bad Frame Check Sequence, also referred to as a checksum or CRC
error, differs from the original transmission by at least one bit. In an FCS error frame the header
information is probably correct, but the checksum calculated by the receiving station does not
match the checksum appended to the end of the frame by the sending station. The frame is then
discarded.
High numbers of FCS errors from a single station usually indicate a faulty NIC, faulty or
corrupted software drivers, or a bad cable connecting that station to the network. If FCS errors are
associated with many stations, they are generally traceable to bad cabling, a faulty version of the
NIC driver, a faulty hub port, or induced noise in the cable system.
A message that does not end on an octet boundary is known as an alignment error. Instead of the
correct number of binary bits forming complete octet groupings, there are additional bits left over
(less than eight). Such a frame is truncated to the nearest octet boundary, and if the FCS checksum
fails, then an alignment error is reported. This is often caused by bad software drivers, or a
collision, and is frequently accompanied by a failure of the FCS checksum.
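The receiver-side FCS check described above can be sketched using Python's `zlib.crc32`, which uses the same CRC-32 polynomial as the Ethernet FCS. This is a simplified illustration; the bit-ordering and byte-ordering details of real Ethernet hardware are glossed over.

```python
# Minimal sketch of FCS generation and checking. Ethernet's FCS is a CRC-32
# with the same polynomial as zlib.crc32; hardware bit-ordering is ignored.
import zlib

def append_fcs(frame_bytes):
    """Sender side: compute the CRC over the frame and append 4 FCS octets."""
    fcs = zlib.crc32(frame_bytes) & 0xFFFFFFFF
    return frame_bytes + fcs.to_bytes(4, "little")

def fcs_ok(received):
    """Receiver side: recompute the checksum and compare with the trailer."""
    payload, trailer = received[:-4], received[-4:]
    return zlib.crc32(payload) & 0xFFFFFFFF == int.from_bytes(trailer, "little")

frame = append_fcs(b"example frame contents")
print(fcs_ok(frame))                              # True
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip one bit
print(fcs_ok(corrupted))                          # False
```

Flipping even a single bit anywhere in the frame causes the recomputed checksum to differ, which is exactly the FCS-error condition described above.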
A frame whose Length field contains a valid value that does not match the actual number of octets
counted in the data field is known as a range error. This error also appears when the
length field value is less than the minimum legal unpadded size of the data field. A similar error,
Out of Range, is reported when the value in the Length field indicates a data size that is too large to
be legal.
Fluke Networks has coined the term ghost to mean energy (noise) detected on the cable that appears
to be a frame, but is lacking a valid SFD. To qualify as a ghost, the frame must be at least 72 octets
long, including the preamble. Otherwise, it is classified as a remote collision. Because of the
peculiar nature of ghosts, it is important to note that test results are largely dependent upon where
on the segment the measurement is made.
Ground loops and other wiring problems are usually the cause of ghosting. Most network
monitoring tools do not recognize the existence of ghosts for the same reason that they do not
recognize preamble collisions. The tools rely entirely on what the chipset tells them. Software-only
protocol analyzers, many hardware-based protocol analyzers, hand held diagnostic tools, as well as
most remote monitoring (RMON) probes do not report these events.
6.2.9 Ethernet auto-negotiation
As Ethernet grew from 10 to 100 and 1000 Mbps, one requirement was to make each technology
interoperable, even to the point that 10, 100, and 1000 interfaces could be directly connected. A
process called Auto-Negotiation of speeds at half or full duplex was developed. Specifically, at the
time that Fast Ethernet was introduced, the standard included a method of automatically configuring
a given interface to match the speed and capabilities of the link partner. This process defines how
two link partners may automatically negotiate a configuration offering the best common
performance level. It has the additional advantage of only involving the lowest part of the physical layer.
10BASE-T required each station to transmit a link pulse about every 16 milliseconds, whenever the
station was not engaged in transmitting a message. Auto-Negotiation adopted this signal and
renamed it a Normal Link Pulse (NLP). When a series of NLPs are sent in a group for the purpose
of Auto-Negotiation, the group is called a Fast Link Pulse (FLP) burst. Each FLP burst is sent at the
same timing interval as an NLP, and is intended to allow older 10BASE-T devices to operate
normally in the event they should receive an FLP burst.
Auto-Negotiation is accomplished by transmitting a burst of 10BASE-T Link Pulses from each of
the two link partners. The burst communicates the capabilities of the transmitting station to its link
partner. After both stations have interpreted what the other partner is offering, both switch to the
highest performance common configuration and establish a link at that speed. If anything interrupts
communications and the link is lost, the two link partners first attempt to link again at the last
negotiated speed. If that fails, or if it has been too long since the link was lost, the Auto-Negotiation
process starts over. The link may be lost due to external influences, such as a cable fault, or due to
one of the partners issuing a reset.
 6.2.10 Link establishment and full and half duplex
Link partners are allowed to skip offering configurations of which they are capable. This allows the
network administrator to force ports to a selected speed and duplex setting, without disabling Auto-Negotiation.
Auto-Negotiation is optional for most Ethernet implementations. Gigabit Ethernet requires its
implementation, though the user may disable it. Auto-Negotiation was originally defined for UTP
implementations of Ethernet, and has been extended to work with fiber optic implementations.
When an Auto-Negotiating station first attempts to link it is supposed to enable 100BASE-TX to
attempt to immediately establish a link. If 100BASE-TX signaling is present, and the station
supports 100BASE-TX, it will attempt to establish a link without negotiating. If either signaling
produces a link or FLP bursts are received, the station will proceed with that technology. If a link
partner does not offer an FLP burst, but instead offers NLPs, then that device is automatically
assumed to be a 10BASE-T station. During this initial interval of testing for other technologies, the
transmit path is sending FLP bursts. The standard does not permit parallel detection of any other technologies.
If a link is established through parallel detection, it is required to be half duplex. There are only two
methods of achieving a full-duplex link. One method is through a completed cycle of Auto-
Negotiation, and the other is to administratively force both link partners to full duplex. If one link
partner is forced to full duplex, but the other partner attempts to Auto-Negotiate, then there is
certain to be a duplex mismatch. This will result in collisions and errors on that link. Additionally if
one end is forced to full duplex the other must also be forced. The exception to this is 10-Gigabit
Ethernet, which does not support half duplex.
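The duplex rules above can be summarized in a small hypothetical sketch. It assumes that two Auto-Negotiating partners both offer full duplex, and that an Auto-Negotiating side facing a forced-full-duplex partner falls back to half duplex through parallel detection, producing the mismatch described in the text.

```python
# Hypothetical sketch of the duplex rules above. Assumes both
# Auto-Negotiating partners offer full duplex.

def resolve_duplex(a, b):
    """a and b are each 'autoneg', 'forced-full', or 'forced-half'."""
    if a == "autoneg" and b == "autoneg":
        return "full (negotiated)"       # completed Auto-Negotiation cycle
    if a == "forced-full" and b == "forced-full":
        return "full (forced)"           # both ends administratively forced
    if "forced-full" in (a, b) and "autoneg" in (a, b):
        # the Auto-Negotiating side falls back to half duplex via parallel
        # detection, so the link runs mismatched: collisions and errors
        return "duplex mismatch"
    return "half"

print(resolve_duplex("autoneg", "autoneg"))      # full (negotiated)
print(resolve_duplex("forced-full", "autoneg"))  # duplex mismatch
```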
Many vendors implement hardware in such a way that it cycles through the various possible states.
It transmits FLP bursts to Auto-Negotiate for a while, then it configures for Fast Ethernet, attempts
to link for a while, and then just listens. Some vendors do not offer any transmitted attempt to link
until the interface first hears an FLP burst or some other signaling scheme.
There are two duplex modes, half and full. For shared media, the half-duplex mode is mandatory.
All coaxial implementations are half duplex in nature and cannot operate in full duplex. UTP and
fiber implementations may be operated in half duplex. 10-Gbps implementations are specified for
full duplex only.
In half duplex only one station may transmit at a time. For the coaxial implementations a second
station transmitting will cause the signals to overlap and become corrupted. Since UTP and fiber
generally transmit on separate pairs the signals have no opportunity to overlap and become
corrupted. Ethernet has established arbitration rules for resolving conflicts arising from instances
when more than one station attempts to transmit at the same time. Both stations in a point-to-point
full-duplex link are permitted to transmit at any time, regardless of whether the other station is transmitting.
Auto-Negotiation avoids most situations where one station in a point-to-point link is transmitting
under half-duplex rules and the other under full-duplex rules.
In the event that link partners are capable of sharing more than one common technology, refer to the
list in Figure . This list is used to determine which technology should be chosen from the offered configurations.
Fiber-optic Ethernet implementations are not included in this priority resolution list because the
interface electronics and optics do not permit easy reconfiguration between implementations. It is
assumed that the interface configuration is fixed. If the two interfaces are able to Auto-Negotiate
then they are already using the same Ethernet implementation. However, there remain a number of
configuration choices such as the duplex setting, or which station will act as the Master for clocking
purposes, that must be determined.
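The figure's list is not reproduced in this text, so the sketch below uses the priority ordering from IEEE 802.3 Annex 28B (highest performance first) to show how two partners pick the best common technology from their offered configurations.

```python
# Priority resolution sketch: choose the highest-performance technology
# common to both link partners. Ordering follows IEEE 802.3 Annex 28B.

PRIORITY = [
    "1000BASE-T full duplex", "1000BASE-T",
    "100BASE-T2 full duplex", "100BASE-TX full duplex",
    "100BASE-T2", "100BASE-T4", "100BASE-TX",
    "10BASE-T full duplex", "10BASE-T",
]

def best_common(local, partner):
    """Return the highest-priority technology offered by both partners."""
    common = set(local) & set(partner)
    for tech in PRIORITY:            # walk from best to worst
        if tech in common:
            return tech
    return None                      # no common technology: no link

a = {"100BASE-TX full duplex", "100BASE-TX", "10BASE-T"}
b = {"100BASE-TX", "10BASE-T full duplex", "10BASE-T"}
print(best_common(a, b))             # 100BASE-TX
```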

Module 7: Ethernet Technologies
Ethernet has been the most successful LAN technology largely because of its simplicity of
implementation compared to other technologies. Ethernet has also been successful because it has
been a flexible technology that has evolved to meet changing needs and media capabilities. This
module introduces the specifics of the most important varieties of Ethernet. The goal is not to
convey all the facts about each type of Ethernet, but rather to develop a sense of what is common to
all forms of Ethernet.
Changes in Ethernet have resulted in major improvements over the 10-Mbps Ethernet of the early
1980s. The 10-Mbps Ethernet standard remained virtually unchanged until 1995 when IEEE
announced a standard for a 100 Mbps Fast Ethernet. In recent years, an even more rapid growth in
media speed has moved the transition from Fast Ethernet to Gigabit Ethernet. The standards for
Gigabit Ethernet emerged in only three years. An even faster Ethernet version, 10 Gigabit Ethernet,
is now widely available and still faster versions are being developed.
In these faster versions of Ethernet, MAC addressing, CSMA/CD, and the frame format have not
been changed from earlier versions of Ethernet. However, other aspects of the MAC sublayer,
physical layer, and medium have changed. Copper-based network interface cards (NICs) capable of
10/100/1000 operation are now common. Gigabit switch and router ports are becoming the standard
for wiring closets. Optical fiber to support Gigabit Ethernet is considered a standard for backbone
cabling in most new installations.
 7.1.1 10-Mbps Ethernet
10BASE5, 10BASE2, and 10BASE-T Ethernet are considered Legacy Ethernet. The four
common features of Legacy Ethernet are timing parameters, frame format, transmission process,
and a basic design rule.
10BASE5, 10BASE2, and 10BASE-T all share the same timing parameters, as shown in Figure (1
bit time at 10 Mbps = 100 nsec = 0.1 µsec = 1 ten-millionth of a second.)
10BASE5, 10BASE2, and 10BASE-T also have a common frame format.
The Legacy Ethernet transmission process is identical down to the lower part of the OSI physical layer.
The Layer 2 frame data is converted from hex to binary. As the frame passes from the MAC
sublayer to the physical layer, further processes occur prior to the bits being placed from the
physical layer onto the medium. One important process is the signal quality error (SQE) signal.
SQE is always used in half-duplex. SQE can be used in full-duplex operation but is not required.
SQE is active:
Within 4 to 8 microseconds following a normal transmission to indicate that the outbound frame
was successfully transmitted
Whenever there is a collision on the medium
Whenever there is an improper signal on the medium. Improper signals might include jabber, or
reflections that result from a cable short.
Whenever a transmission has been interrupted
All 10 Mbps forms of Ethernet take octets received from the MAC sublayer and perform a process
called line encoding. Line encoding describes how the bits are actually signaled on the wire. The
simplest encodings have undesirable timing and electrical characteristics. So line codes have been
designed to have desirable transmission properties. This form of encoding used in 10 Mbps systems
is called “Manchester.”
Manchester encoding relies on the direction of the edge transition in the middle of the timing
window to determine the binary value for that bit period. The top waveform has a falling edge, so
it is interpreted as a binary 0. The second waveform shows a rising edge, which is interpreted as a
binary 1. In the third waveform, there is an alternating binary sequence. With alternating binary
data, there is no need to return to the previous voltage level. As can be seen from the third and
fourth wave forms in the graphic, the binary bit values are indicated by the direction of change
during any given bit period. The waveform voltage levels at the beginning or end of any bit period
are not factors when determining binary values.
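The Manchester rule described above can be sketched directly: each bit period is split into two half-intervals, and the direction of the mid-bit edge carries the value (rising edge for 1, falling edge for 0, per the 802.3 convention). This is an illustration; the function names are not from the curriculum.

```python
# Manchester encoding sketch: the mid-bit transition direction encodes the
# value, so the signal level at the start or end of a period is irrelevant.

def manchester_encode(bits):
    """Map each bit to a (first-half, second-half) signal-level pair."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]   # 1: low->high, 0: high->low
    return out

def manchester_decode(levels):
    """Recover bits from the edge direction in the middle of each period."""
    return [1 if levels[i] < levels[i + 1] else 0
            for i in range(0, len(levels), 2)]

signal = manchester_encode([1, 0, 1, 1])
print(signal)                     # [0, 1, 1, 0, 0, 1, 0, 1]
print(manchester_decode(signal))  # [1, 0, 1, 1]
```

Because every bit period contains a transition, the receiver can recover the sender's clock from the signal itself, which is the main attraction of Manchester encoding for 10 Mbps Ethernet.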
Legacy Ethernet has common architectural features. Networks usually contain multiple types of
media. The standard ensures that interoperability is maintained. The overall architectural design is
of the utmost importance when implementing a mixed-media network. It becomes easier to violate
maximum delay limits as the network grows. The timing limits are based on parameters such as:
Cable length and its propagation delay
Delay of repeaters
Delay of transceivers
Interframe gap shrinkage
Delays within the station
10-Mbps Ethernet operates within the timing limits offered by a series of not more than five
segments separated by no more than four repeaters. This is known as the 5-4-3 rule. No more than
four repeaters may be connected in series between any two distant stations. There can also be no
more than three populated segments between any two distant stations.
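The 5-4-3 rule above lends itself to a simple hypothetical checker: between any two distant stations, at most five segments, four repeaters, and three populated segments.

```python
# Hypothetical 5-4-3 rule checker for a path between two distant stations.

def path_is_legal(num_segments, num_repeaters, num_populated):
    return (num_segments <= 5 and
            num_repeaters <= 4 and
            num_populated <= 3 and
            num_repeaters == num_segments - 1)  # repeaters join the segments

print(path_is_legal(5, 4, 3))   # True:  the maximum legal configuration
print(path_is_legal(5, 4, 4))   # False: four populated segments
```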
 7.1.2 10BASE5
The original 1980 Ethernet product 10BASE5 transmitted 10 Mbps over a single thick coaxial cable
bus. 10BASE5 is important because it was the first medium used for Ethernet. 10BASE5 was part
of the original 802.3 standard. The primary benefit of 10BASE5 was length. Today it may be found
in legacy installations, but would not be recommended for new installations. 10BASE5 systems are
inexpensive and require no configuration, but basic components such as NICs are very difficult to
find, and the medium is sensitive to signal reflections on the cable. 10BASE5 systems also
represent a single point of failure.
10BASE5 uses Manchester encoding. It has a solid central conductor. Each of the maximum five
segments of thick coax may be up to 500 m (1640.4 ft) in length. The cable is large, heavy, and
difficult to install. However, the distance limitations were favorable and this prolonged its use in
certain applications.
Because the medium is a single coaxial cable, only one station can transmit at a time or else a
collision will occur. Therefore, 10BASE5 only runs in half-duplex resulting in a maximum of 10
Mbps of data transfer.
Figure illustrates one possible configuration for a maximum end-to-end collision domain.
Between any two distant stations only three repeated segments are permitted to have stations
connected to them, with the other two repeated segments used only as link segments to extend the network.
 7.1.3 10BASE2
10BASE2 was introduced in 1985. Installation was easier because of its smaller size, lighter weight,
and greater flexibility. It still exists in legacy networks. Like 10BASE5, it is not recommended for
installations in networks today. It has a low cost and a lack of need for hubs. Again, NICs are also
difficult to obtain for this medium.
10BASE2 also uses Manchester encoding. Computers on the LAN were linked together by an
unbroken series of coaxial cable lengths. These lengths were attached by BNC connectors to a T-
shaped connector on the NIC.
10BASE2 has a stranded central conductor. Each of the maximum five segments of thin coax may
be up to 185 meters long and each station is connected directly to the BNC “T” connector on the NIC.
Only one station can transmit at a time or else a collision will occur. 10BASE2 also uses half-
duplex. The maximum transmission rate of 10BASE2 is 10 Mbps.
There may be up to 30 stations on any individual 10BASE2 segment. Out of the five consecutive
segments in series between any two distant stations, only three may have stations attached.
 7.1.4 10BASE-T
10BASE-T was introduced in 1990. 10BASE-T used cheaper and easier to install Category 3
unshielded twisted pair (UTP) copper cable rather than coax cable. The cable plugged into a central
connection device that contained the shared bus. This device was a hub. It was at the center of a set
of cables that radiated out to the PCs like the spokes on a wheel. This is referred to as a star
topology. The distance a cable could extend from the hub was limited, and as networks grew,
installations increasingly used stars made up of stars, referred to as an extended star topology.
Originally 10BASE-T was a half-duplex protocol, but full-duplex features were added later. The
explosion in the popularity of Ethernet in the mid-to-late 1990s was when Ethernet came to
dominate LAN technology.
10BASE-T also uses Manchester encoding. A 10BASE-T UTP cable has a solid conductor for each
wire in the maximum 90 meter horizontal cable. UTP cable uses eight-pin RJ-45 connectors.
Though Category 3 cable is adequate for use on 10BASE-T networks, it is strongly recommended
that any new cable installations be made with Category 5e or better. All four pairs of wires should
be used either with the T568-A or T568-B cable pinout arrangement. This type of cable
installation supports the use of multiple protocols without rewiring. Figure shows the pinout
arrangement for a 10BASE-T connection. The transmitting pair on one side is connected
to the receiving pair on the attached device.
Half duplex or full duplex is a configuration choice. 10BASE-T carries 10 Mbps of traffic in half-
duplex mode and 20 Mbps in full-duplex mode.
7.1.5 10BASE-T wiring and architecture
10BASE-T links generally consist of a connection between the station and a hub or switch. Hubs
are multi-port repeaters and count toward the limit on repeaters between distant stations. Hubs do
not divide network segments into separate collision domains. Because hubs or repeaters merely
extend the length of a network segment within a single collision domain, there is a limit on how
many hubs may be used in that segment. Bridges and switches divide a segment into separate
collision domains, only leaving the media limitations to determine the distance between the
switches. 10BASE-T limits the distance between switches to 100 m (328 ft).
Although hubs may be linked, it is best to avoid this arrangement. This is to prevent exceeding the
limit for maximum delay between distant stations. When multiple hubs are required, it is best to
arrange them in hierarchical order as to create a tree structure. Performance will be improved if
fewer repeaters separate stations.
An architectural example is shown in Figure . All distances between stations are acceptable.
However, the total distance from one end of the network to the other, places the architecture at its
limit. The most important aspect to consider is how to keep the delay between distant stations to a
minimum, regardless of the architecture and media types involved. A shorter maximum delay will
provide better overall performance.
10BASE-T links can have unrepeated distances up to 100 m. While this may seem like a long
distance, it is typically “used up” when wiring an actual building. Hubs can solve the distance issue
but will allow collisions to propagate. The widespread introduction of switches has made the
distance limitation less important. As long as workstations are located within 100 m of a switch, the
100 m distance starts over at the switch.
 7.1.6 100-Mbps Ethernet
100-Mbps Ethernet is also known as Fast Ethernet. The two technologies that have become
important are 100BASE-TX, which is a copper UTP medium and 100BASE-FX, which is a
multimode optical fiber medium.
Three characteristics common to 100BASE-TX and 100BASE-FX are the timing parameters, the
frame format, and parts of the transmission process. 100BASE-TX and 100-BASE-FX both share
timing parameters. Note that one bit time in 100-Mbps Ethernet is 10nsec = .01 microseconds = 1
100-millionth of a second.
The 100-Mbps frame format is the same as the 10-Mbps frame.
Fast Ethernet represents a 10-fold increase in speed over 10BASE-T. Because of the increase in
speed, extra care must be taken because the bits being sent are getting shorter in duration and
occurring more frequently. These higher frequency signals are more susceptible to noise. In
response to these issues, two separate encoding steps are used by 100-Mbps Ethernet. The first part
of the encoding uses a technique called 4B/5B, the second part of the encoding is the actual line
encoding specific to copper or fiber.
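The 4B/5B step mentioned above maps each 4-bit nibble to a 5-bit code group chosen to guarantee enough signal transitions. The sketch below uses the standard 100BASE-X data-symbol table; the nibble ordering shown is illustrative rather than a claim about on-the-wire transmission order.

```python
# 4B/5B sketch using the standard 100BASE-X data-symbol table: each 4-bit
# nibble becomes a 5-bit code group with guaranteed transitions.

CODE_4B5B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data):
    """Encode bytes, high nibble first, into a 5-bits-per-nibble string."""
    out = []
    for byte in data:
        out.append(CODE_4B5B[byte >> 4])
        out.append(CODE_4B5B[byte & 0x0F])
    return "".join(out)

print(encode_4b5b(b"\x5E"))   # nibbles 0x5 then 0xE -> '0101111100'
```

The 25 percent overhead of sending 5 code bits for every 4 data bits is why 100 Mbps of data requires 125 Mbaud of signaling on the medium.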
 7.1.7 100BASE-TX
In 1995, 100BASE-TX became the standard. It used Cat 5 UTP cable and was commercially successful.
The original coaxial Ethernet used half-duplex transmission so only one device could transmit at a
time. However, in 1997, Ethernet was expanded to include a full duplex capability that allowed
more than one PC on a network to transmit at the same time. Switches increasingly replaced hubs.
These switches had the capability of full duplex and rapid handling of Ethernet frames.
100BASE-TX uses 4B/5B encoding, which is then scrambled and converted to multi-level transmit 3
(MLT-3) line encoding. In the example, the highlighted window shows four waveform examples. The
top waveform has no transition in the center of the timing window. No transition indicates that a
binary 0 is present. The second waveform shows a transition in the center of the timing window. A
binary 1 is represented by a transition. The third waveform shows an alternating binary sequence.
The absence of binary transition indicates a binary 0, and the presence of a transition indicates a
binary 1. Rising or falling edges, which appear as steep signal changes, indicate 1s. Any noticeable
horizontal line in the signal indicates a 0.
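The MLT-3 rule described above can be sketched as follows: the signal holds its level for a 0 and steps to the next state of the 0, +1, 0, -1 cycle for a 1, so a transition marks a binary 1 and a flat line marks a binary 0. This is an illustration, not a claim about any particular chipset.

```python
# MLT-3 sketch: hold the level for a 0, advance through the repeating
# 0, +1, 0, -1 cycle for a 1.

def mlt3_encode(bits):
    states = [0, 1, 0, -1]         # the repeating level cycle
    idx, out = 0, []
    for b in bits:
        if b:                      # a 1 advances to the next level
            idx = (idx + 1) % 4
        out.append(states[idx])    # a 0 repeats the current level
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1]))   # [1, 0, -1, 0, 0, 1]
```

Using three levels and at most one level-step per bit keeps the fundamental signal frequency low, which is what lets 100BASE-TX fit its 125 Mbaud stream onto Cat 5 copper.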
Figure shows the pinout for a 100BASE-TX connection. Notice that the two separate transmit-
receive paths exist. This is identical to the 10BASE-T configuration.
100BASE-TX carries 100 Mbps of traffic in half-duplex mode. In full-duplex mode, 100BASE-TX
can exchange 200 Mbps of traffic. The concept of full duplex will become increasingly important as
Ethernet speeds increase.
7.1.8 100BASE-FX
At the time copper-based Fast Ethernet was introduced, a fiber version was also desired. A fiber
version could be used for backbone applications, connections between floors and buildings where
copper is less desirable, and also in high noise environments. 100BASE-FX was introduced to
satisfy this desire. However, 100BASE-FX was never adopted successfully. This was due to the
timely introduction of Gigabit Ethernet copper and fiber standards. Gigabit Ethernet standards are
now the dominant technology for backbone installations, high-speed cross-connects, and general
infrastructure needs.
The timing, frame format, and transmission are all common to both versions of 100 Mbps Fast
Ethernet. 100BASE-FX also uses 4B/5B encoding. In Figure notice the highlighted waveform in
the example. The top waveform has no transition, which indicates that a binary 0 is present. In the
second waveform, a transition is in the center of the timing window. A binary 1 is represented by a
transition. In the third waveform, there is an alternating binary sequence. In this example it is more
obvious that no transition indicates a binary 0 and the presence of a transition is a binary 1.
Figure summarizes a 100BASE-FX link and pinouts. Fiber pair with either ST or SC connectors is
most commonly used.
200 Mbps transmission is possible because of the separate Transmit and Receive paths in
100BASE-FX optical fiber.
 7.1.9 Fast Ethernet architecture
Fast Ethernet links generally consist of a connection between a station and a hub or switch. Hubs
are considered multi-port repeaters and switches are considered multi-port bridges. These are
subject to the 100 m UTP media distance limitation.
A Class I repeater may introduce up to 140 bit-times of latency. Any repeater that changes between
one Ethernet implementation and another is a Class I repeater. A Class II repeater may only
introduce a maximum of 92 bit-times latency. Because of the reduced latency it is possible to have
two Class II repeaters in series, but only if the cable between them is very short.
As with 10 Mbps versions, it is possible to modify some of the architecture rules for 100 Mbps
versions. However there is virtually no allowance for additional delay. Modification of the
architecture rules is strongly discouraged for 100BASE-TX. 100BASE-TX cable between Class II
repeaters may not exceed 5 meters. Links operating in half duplex are not uncommon to find in Fast
Ethernet. However, half duplex is undesirable because the signaling scheme is inherently full duplex.
Figure shows architecture configuration cable distances. 100BASE-TX links can have unrepeated
distances up to 100 m. The widespread introduction of switches has made this distance limitation
less important. If workstations are located within 100 m of a switch, the 100 m distance starts over
at the switch. Since most Fast Ethernet is switched, these are the practical limits between devices.
 7.2.1 1000-Mbps Ethernet
The 1000-Mbps Ethernet or Gigabit Ethernet standards represent transmission using both fiber and
copper media. The 1000BASE-X standard, IEEE 802.3z, specifies 1 Gbps full duplex over optical
fiber, while the 1000BASE-T standard, IEEE 802.3ab, specifies 1 Gbps full duplex over copper UTP.
1000BASE-T, 1000BASE-SX, and 1000BASE-LX use the same timing parameters, as shown in
Figure . They use a 1 nanosecond (0.000000001 seconds) or 1 billionth of a second bit time. The
Gigabit Ethernet frame has the same format as is used for 10 and 100-Mbps Ethernet. Depending on
the implementation, Gigabit Ethernet may use different processes to convert frames to bits on the
cable. Figure shows the Ethernet frame formats.
The differences between standard Ethernet, Fast Ethernet and Gigabit Ethernet occur at the physical
layer. Due to the increased speeds of these newer standards, the shorter duration bit times require
special considerations. Since the bits are introduced on the medium for a shorter duration and more
often, timing is critical. This high-speed transmission requires frequencies closer to copper medium
bandwidth limitations. This causes the bits to be more susceptible to noise on copper media.
These issues require Gigabit Ethernet to use two separate encoding steps. Data transmission is made
more efficient by using codes to represent the binary bit stream. The encoded data provides
synchronization, efficient usage of bandwidth, and improved Signal-to-Noise Ratio characteristics.
At the physical layer, the bit patterns from the MAC layer are converted into symbols. The symbols
may also be control information such as start frame, end frame, and medium idle conditions. The frame
is coded into control symbols and data symbols to increase network throughput.
Fiber-based Gigabit Ethernet (1000BASE-X) uses 8B/10B encoding which is similar to the 4B/5B
concept. This is followed by the simple Non-Return to Zero (NRZ) line encoding of light on optical
fiber. This simpler encoding process is possible because the fiber medium can carry higher
bandwidth signals.
 7.2.2 1000BASE-T
As Fast Ethernet was installed to increase bandwidth to workstations, this began to create
bottlenecks upstream in the network. 1000BASE-T (IEEE 802.3ab) was developed to provide
additional bandwidth to help alleviate these bottlenecks. It provided more "speed" for applications
such as intra-building backbones, inter-switch links, server farms, and other wiring closet
applications as well as connections for high-end workstations. 1000BASE-T was designed to
function over existing Cat 5 copper cable, which necessitated that the cable pass the Cat 5e test.
Most installed Cat 5 cable can pass 5e certification if properly terminated. One of the most
important attributes of the 1000BASE-T standard is that it be interoperable with 10BASE-T and 100BASE-TX.
Because Cat 5e cable can reliably carry up to 125 Mbps of traffic, getting 1000 Mbps (Gigabit) of
bandwidth was a design challenge. The first step to accomplish 1000BASE-T is to use all four pairs
of wires instead of the traditional two pairs of wires used by 10BASE-T and 100BASE-TX. This is
done using complex circuitry to allow full duplex transmissions on the same wire pair. This
provides 250 Mbps per pair. With all four-wire pairs, this provides the desired 1000 Mbps. Since
the information travels simultaneously across the four paths, the circuitry has to divide frames at the
transmitter and reassemble them at the receiver.
1000BASE-T uses 4D-PAM5 line encoding on Cat 5e or better UTP.
Achieving the 1 Gbps rate requires use of all four pairs in full duplex simultaneously. That is, the
transmission and reception of data happen in both directions on the same wire at the same time. As
might be expected, this results in a permanent collision on the wire pairs. These collisions result in
complex voltage patterns. With the complex integrated circuits using techniques such as echo
cancellation, Layer 1 Forward Error Correction (FEC), and prudent selection of voltage levels, the
system achieves the 1 Gbps throughput.
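The arithmetic behind the paragraphs above can be sketched numerically: four pairs, each carrying 125 Mbaud of PAM-5 symbols worth 2 data bits per pair, used in both directions at once, yield 1000 Mbps in each direction.

```python
# 1000BASE-T throughput arithmetic, as described in the text.

pairs = 4
symbol_rate = 125_000_000     # symbols per second on each pair
bits_per_symbol = 2           # data bits carried per pair per symbol

per_pair = symbol_rate * bits_per_symbol   # 250 Mbps per pair
total = per_pair * pairs                   # 1000 Mbps in each direction
print(per_pair, total)        # 250000000 1000000000
```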
In idle periods there are nine voltage levels found on the cable, and during data transmission periods
there are 17 voltage levels found on the cable. With this large number of states and the effects of
noise, the signal on the wire looks more analog than digital. Like analog, the system is more
susceptible to noise due to cable and termination problems.
The data from the sending station is carefully divided into four parallel streams, encoded,
transmitted and detected in parallel, and then reassembled into one received bit stream. Figure
represents the simultaneous full duplex on four wire pairs. 1000BASE-T supports both half-
duplex as well as full-duplex operation. The use of full-duplex 1000BASE-T is widespread.
 7.2.3 1000BASE-SX and LX
The IEEE 802.3 standard recommends that Gigabit Ethernet over fiber be the preferred backbone technology.
The timing, frame format, and transmission are common to all versions of 1000 Mbps. Two signal-
encoding schemes are defined at the physical layer. The 8B/ 10B scheme is used for optical fiber
and shielded copper media, and the pulse amplitude modulation 5 (PAM5) is used for UTP.
1000BASE-X uses 8B/10B encoding converted to non-return to zero (NRZ) line encoding. NRZ
encoding relies on the signal level found in the timing window to determine the binary value for
that bit period. Unlike most of the other encoding schemes described, this encoding system is level
driven instead of edge driven. That is, the determination of whether a bit is a zero or a one is made
by the level of the signal rather than when the signal changes levels.
The NRZ signals are then pulsed into the fiber using either short-wavelength or long-wavelength
light sources. The short-wavelength uses an 850 nm laser or LED source in multimode optical fiber
(1000BASE-SX). It is the lower-cost of the options but has shorter distances. The long-wavelength
1310 nm laser source uses either single-mode or multimode optical fiber (1000BASE-LX). Laser
sources used with single-mode fiber can achieve distances of up to 5000 meters. Because of the
length of time to completely turn the LED or laser on and off each time, the light is pulsed using
low and high power. A logic zero is represented by low power, and a logic one by high power.
The Media Access Control method treats the link as point-to-point. Since separate fibers are used
for transmitting (Tx) and receiving (Rx) the connection is inherently full duplex. Gigabit Ethernet
permits only a single repeater between two stations. Figure is a 1000BASE Ethernet media
comparison chart.
 7.2.4 Gigabit Ethernet architecture
The distance limitations of full-duplex links are only limited by the medium, and not the round-trip
delay. Since most Gigabit Ethernet is switched, the values in Figures and are the practical limits
between devices. Daisy-chaining, star, and extended star topologies are all allowed. The issue then
becomes one of logical topology and data flow, not timing or distance limitations.
A 1000BASE-T UTP cable is the same as 10BASE-T and 100BASE-TX cable, except that link
performance must meet the higher quality Category 5e or ISO Class D (2000) requirements.
Modification of the architecture rules is strongly discouraged for 1000BASE-T. At 100 meters,
1000BASE-T is operating close to the edge of the ability of the hardware to recover the transmitted
signal. Any cabling problems or environmental noise could render an otherwise compliant cable
inoperable even at distances that are within the specification.
It is recommended that all links between a station and a hub or switch be configured for Auto-
Negotiation to permit the highest common performance. This will avoid accidental
misconfiguration of the other required parameters for proper Gigabit Ethernet operation.
7.2.5 10-Gigabit Ethernet
The IEEE 802.3ae supplement was adopted to include 10 Gbps full-duplex transmission over fiber
optic cable. The basic similarities between 802.3ae and 802.3, the original Ethernet standard, are
remarkable. This 10-Gigabit
Ethernet (10GbE) is evolving for not only LANs, but also MANs, and WANs.
With the frame format and other Ethernet Layer 2 specifications compatible with previous
standards, 10GbE can provide increased bandwidth needs that are interoperable with existing
network infrastructure.
A major conceptual change for Ethernet is emerging with 10GbE. Ethernet is traditionally thought
of as a LAN technology, but 10GbE physical layer standards allow both an extension in distance to
40 km over single-mode fiber and compatibility with synchronous optical network (SONET) and
synchronous digital hierarchy (SDH) networks. Operation at 40 km distance makes 10GbE a viable
MAN technology. Compatibility with SONET/SDH networks operating up to OC-192 speeds
(9.584640 Gbps) makes 10GbE a viable WAN technology. 10GbE may also compete with ATM for
certain applications.
To summarize, how does 10GbE compare to other varieties of Ethernet?
Frame format is the same, allowing interoperability between all varieties of legacy, fast, gigabit, and
10 Gigabit, with no reframing or protocol conversions.
Bit time is now 0.1 nanoseconds. All other time variables scale accordingly.
Since only full-duplex fiber connections are used, CSMA/CD is not necessary.
The IEEE 802.3 sublayers within OSI Layers 1 and 2 are mostly preserved, with a few additions to
accommodate 40 km fiber links and interoperability with SONET/SDH technologies.
Flexible, efficient, reliable, relatively low cost end-to-end Ethernet networks become possible.
TCP/IP can run over LANs, MANs, and WANs with one Layer 2 Transport method.
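The bit-time scaling mentioned above can be checked with a short calculation. The sketch below uses the standard Ethernet data rates; it is illustrative arithmetic, not part of any specification:

```python
# Bit time is the duration of one bit on the wire: the inverse of the
# data rate. At 10 Gbps this works out to 0.1 nanosecond.
rates_bps = {
    "10BASE-T": 10e6,
    "100BASE-TX": 100e6,
    "1000BASE-T": 1e9,
    "10GBASE-LR": 10e9,
}

for name, rate in rates_bps.items():
    bit_time_ns = (1 / rate) * 1e9  # one bit duration in nanoseconds
    print(f"{name}: {bit_time_ns} ns per bit")
```

Note how every factor-of-ten increase in rate divides the bit time by ten, which is why all other time variables scale accordingly.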
The basic standard governing CSMA/CD is IEEE 802.3. An IEEE 802.3 supplement, entitled
802.3ae, governs the 10GbE family. As is typical for new technologies, a variety of
implementations are being considered, including:
10GBASE-SR – Intended for short distances over already-installed multimode fiber, supports a
range of 26 m to 82 m
10GBASE-LX4 – Uses wavelength division multiplexing (WDM), supports 240 m to 300 m over
already-installed multimode fiber and 10 km over single-mode fiber
10GBASE-LR and 10GBASE-ER – Support 10 km and 40 km over single-mode fiber
10GBASE-SW, 10GBASE-LW, and 10GBASE-EW – Known collectively as 10GBASE-W are
intended to work with OC-192 synchronous transport module (STM) SONET/SDH WAN equipment
The IEEE 802.3ae Task Force and the 10-Gigabit Ethernet Alliance (10GEA) are working to
standardize these emerging technologies.
10-Gbps Ethernet (IEEE 802.3ae) was standardized in June 2002. It is a full-duplex protocol that
uses only optic fiber as a transmission medium. The maximum transmission distances depend on
the type of fiber being used. When using single-mode fiber as the transmission medium, the
maximum transmission distance is 40 kilometers (25 miles). Some discussions between IEEE
members have begun that suggest the possibility of standards for 40, 80, and even 100-Gbps
Ethernet.
 7.2.6 10-Gigabit Ethernet architectures
As with the development of Gigabit Ethernet, the increase in speed comes with extra requirements.
The shorter bit time duration because of increased speed requires special considerations. For 10
GbE transmissions, each data bit duration is 0.1 nanosecond. This means there would be 1000
10-GbE data bits in the same bit time as one data bit in a 10-Mbps Ethernet data stream. Because of the
short duration of the 10 GbE data bit, it is often difficult to separate a data bit from noise. 10 GbE
data transmissions rely on exact bit timing to separate the data from the effects of noise on the
physical layer. This is the purpose of synchronization.
In response to these issues of synchronization, bandwidth, and Signal-to-Noise Ratio, 10-Gigabit
Ethernet uses two separate encoding steps. By using codes to represent the user data, transmission is
made more efficient. The encoded data provides synchronization, efficient usage of bandwidth, and
improved Signal-to-Noise Ratio characteristics.
Complex serial bit streams are used for all versions of 10GbE except for 10GBASE-LX4, which
uses Wide Wavelength Division Multiplexing (WWDM) to multiplex four simultaneous bit streams
as four wavelengths of light launched into the fiber at one time.
Figure represents the particular case of four laser sources with slightly different wavelengths.
Upon receipt from the medium, the optical signal stream is demultiplexed into four separate optical
signal streams. The four optical signal streams are then converted back into four electronic bit
streams as they travel in approximately the reverse process back up through the sublayers to the
MAC layer.
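As a rough illustration of this multiplex/demultiplex round trip, the sketch below splits a serial bit sequence round-robin across four lanes and reassembles it on receipt. It is a simplified model only: the per-lane line coding, optics, and sublayer processing are omitted entirely.

```python
# Simplified model of 10GBASE-LX4 lane multiplexing: the serial stream
# is distributed across four parallel lanes (each lane carried on its
# own wavelength), then interleaved back together at the receiver.

def mux(bits, lanes=4):
    """Distribute a serial bit sequence across `lanes` parallel streams."""
    return [bits[i::lanes] for i in range(lanes)]

def demux(streams):
    """Interleave the parallel streams back into one serial sequence."""
    out = []
    for group in zip(*streams):  # assumes equal-length lanes
        out.extend(group)
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
lanes = mux(data)
assert demux(lanes) == data  # the round trip recovers the original stream
```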
Currently, most 10GbE products are in the form of modules, or line cards, for addition to high-end
switches and routers. As the 10GbE technologies evolve, an increasing diversity of signaling
components can be expected. As optical technologies evolve, improved transmitters and receivers
will be incorporated into these products, taking further advantage of modularity. All 10GbE
varieties use optical fiber media. Fiber types include 10 µm single-mode fiber, and 50 µm and
62.5 µm multimode fibers. A range of fiber attenuation and dispersion characteristics is supported, but they
limit operating distances.
Even though support is limited to fiber optic media, some of the maximum cable lengths are
surprisingly short. No repeater is defined for 10-Gigabit Ethernet since half duplex is explicitly
not supported.
As with 10 Mbps, 100 Mbps and 1000 Mbps versions, it is possible to modify some of the
architecture rules slightly. Possible architecture adjustments are related to signal loss and distortion
along the medium. Due to dispersion of the signal and other issues the light pulse becomes
undecipherable beyond certain distances.
 7.2.7 Future of Ethernet
Ethernet has gone through an evolution from Legacy → Fast → Gigabit → MultiGigabit
technologies. While other LAN technologies are still in place (legacy installations), Ethernet
dominates new LAN installations. So much so that some have referred to Ethernet as the LAN “dial
tone”. Ethernet is now the standard for horizontal, vertical, and inter-building connections. Recently
developing versions of Ethernet are blurring the distinction between LANs, MANs, and WANs.
While 1-Gigabit Ethernet is now widely available and 10-Gigabit products becoming more
available, the IEEE and the 10-Gigabit Ethernet Alliance are working on 40, 100, or even 160 Gbps
standards. The technologies that are adopted will depend on a number of factors, including the rate
of maturation of the technologies and standards, the rate of adoption in the market, and cost.
Proposals for Ethernet arbitration schemes other than CSMA/CD have been made. The problem of
collisions with the physical bus topologies of 10BASE5 and 10BASE2, and with 10BASE-T and
100BASE-TX hubs, is no longer common. The use of UTP and optical fiber with separate Tx and Rx
paths, and the decreasing cost of switches, makes single shared-media, half-duplex connections much
less common.
The future of networking media is three-fold:
Copper (up to 1000 Mbps, perhaps more)
Wireless (approaching 100 Mbps, perhaps more)
Optical fiber (currently at 10,000 Mbps and soon to be more)
Copper and wireless media have certain physical and practical limitations on the highest frequency
signals that can be transmitted. This is not a limiting factor for optical fiber in the foreseeable
future. The bandwidth limitations on optical fiber are extremely large and are not yet being
threatened. In fiber systems, it is the electronics technology (such as emitters and detectors) and
fiber manufacturing processes that most limit the speed. Upcoming developments in Ethernet are
likely to be heavily weighted towards Laser light sources and single-mode optical fiber.
When Ethernet was slower, half-duplex, subject to collisions, and governed by a "democratic"
process for prioritization, it was not considered to have the Quality of Service (QoS) capabilities
required to handle certain types of traffic, such as IP telephony and video multicast.
The full-duplex high-speed Ethernet technologies that now dominate the market are proving to be
sufficient at supporting even QoS-intensive applications. This makes the potential applications of
Ethernet even wider. Ironically, end-to-end QoS capability helped drive a push for ATM to the
desktop and to the WAN in the mid-1990s, but now it is Ethernet, not ATM, that is approaching this
goal.

Module 8: Ethernet Switching
Shared Ethernet works extremely well under ideal conditions. When the number of devices trying to
access the network is low, the number of collisions stays well within acceptable limits. However,
when the number of users on the network increases, the increased number of collisions can cause
intolerably bad performance. Bridging was developed to help ease performance problems that arose
from increased collisions. Switching evolved from bridging to become the key technology in
modern Ethernet LANs.
Collisions and broadcasts are expected events in modern networking. They are, in fact, engineered
into the design of Ethernet and higher layer technologies. However, when collisions and broadcasts
occur in numbers that are above the optimum, network performance suffers. The concept of
collision domains and broadcast domains is concerned with the ways that networks can be designed
to limit the negative effects of collisions and broadcasts. This module explores the effects of
collisions and broadcasts on network traffic and then describes how bridges and routers are used to
segment networks for improved performance.
 8.1.1 Layer 2 bridging
As more nodes are added to an Ethernet physical segment, contention for the media increases.
Ethernet is a shared media, which means only one node can transmit data at a time. The addition of
more nodes increases the demands on the available bandwidth and places additional loads on the
media. By increasing the number of nodes on a single segment, the probability of collisions
increases, resulting in more retransmissions. A solution to the problem is to break the large segment
into parts and separate it into isolated collision domains.
To accomplish this a bridge keeps a table of MAC addresses and the associated ports. The bridge
then forwards or discards frames based on the table entries. The following steps illustrate the
operation of a bridge:
The bridge has just been started so the bridge table is empty. The bridge just waits for traffic on the
segment. When traffic is detected, it is processed by the bridge.
Host A is pinging Host B. Since the data is transmitted on the entire collision domain segment, both
the bridge and Host B process the packet.
The bridge adds the source address of the frame to its bridge table. Since the address was in the
source address field and the frame was received on port 1, the frame must be associated with port 1
in the table.
The destination address of the frame is checked against the bridge table. Since the address is not in
the table, even though it is on the same collision domain, the frame is forwarded to the other
segment. The address of Host B has not been recorded yet, as only the source address of a frame is
recorded.
Host B processes the ping request and transmits a ping reply back to Host A. The data is transmitted
over the whole collision domain. Both Host A and the bridge receive the frame and process it.
The bridge adds the source address of the frame to its bridge table. Since the source address was not
in the bridge table and was received on port 1, the source address of the frame must be associated
with port 1 in the table. The destination address of the frame is checked against the bridge table to
see if its entry is there. Since the address is in the table, the port assignment is checked. The address
of Host A is associated with the port the frame came in on, so the frame is not forwarded.
Host A is now going to ping Host C. Since the data is transmitted on the entire collision domain
segment, both the bridge and Host B process the frame. Host B discards the frame as it was not the
intended destination.
The bridge adds the source address of the frame to its bridge table. Since the address is already
entered into the bridge table the entry is just renewed.
The destination address of the frame is checked against the bridge table to see if its entry is there.
Since the address is not in the table, the frame is forwarded to the other segment. The address of
Host C has not been recorded yet as only the source address of a frame is recorded.
Host C processes the ping request and transmits a ping reply back to Host A. The data is transmitted
over the whole collision domain. Both Host D and the bridge receive the frame and process it. Host
D discards the frame, as it was not the intended destination.
The bridge adds the source address of the frame to its bridge table. Since the address was in the
source address field and the frame was received on port 2, the frame must be associated with port 2
in the table.
The destination address of the frame is checked against the bridge table to see if its entry is present.
The address is in the table but it is associated with port 1, so the frame is forwarded to the other
segment.
When Host D transmits data, its MAC address will also be recorded in the bridge table. This is how
the bridge controls traffic between two collision domains.
These are the steps that a bridge uses to forward and discard frames that are received on any of its ports.
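The learning and forwarding behavior just described can be sketched in a few lines. The MAC strings and port numbers below are illustrative, and real bridges also age out stale table entries, which is omitted here:

```python
# Minimal sketch of transparent bridge learning and forwarding.

class Bridge:
    def __init__(self):
        self.table = {}  # learned MAC address -> port

    def receive(self, src, dst, in_port):
        # Learn: associate the frame's source address with the inbound port.
        self.table[src] = in_port
        # Forward or filter based on the destination address.
        if dst not in self.table:
            return "flood"                 # unknown destination: forward everywhere
        if self.table[dst] == in_port:
            return "filter"                # destination on same segment: discard
        return f"forward to port {self.table[dst]}"

bridge = Bridge()
print(bridge.receive("AA", "BB", in_port=1))  # unknown destination: flood
print(bridge.receive("BB", "AA", in_port=1))  # AA known on port 1: filter
print(bridge.receive("CC", "AA", in_port=2))  # AA on port 1: forward
```

The dictionary plays the role of the bridge table: only source addresses are ever written into it, exactly as in the ping walkthrough above.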
 8.1.2 Layer 2 switching
Generally, a bridge has only two ports and divides a collision domain into two parts. All decisions
made by a bridge are based on MAC or Layer 2 addressing and do not affect the logical or Layer 3
addressing. Thus, a bridge will divide a collision domain but has no effect on a logical or broadcast
domain. No matter how many bridges are in a network, unless there is a device such as a router that
works on Layer 3 addressing, the entire network will share the same logical broadcast address
space. A bridge will create more collision domains but will not add broadcast domains.
A switch is essentially a fast, multi-port bridge, which can contain dozens of ports. Rather than
creating two collision domains, each port creates its own collision domain. In a network of twenty
nodes, twenty collision domains exist if each node is plugged into its own switch port. If an uplink
port is included, one switch creates twenty-one single-node collision domains. A switch
dynamically builds and maintains a Content-Addressable Memory (CAM) table, holding all of the
necessary MAC information for each port.
 8.1.3 Switch operation
A switch is simply a bridge with many ports. When only one node is connected to a switch port, the
collision domain on the shared media contains only two nodes. The two nodes in this small
segment, or collision domain, consist of the switch port and the host connected to it. These small
physical segments are called microsegments. Another capability emerges when only two nodes
are connected. In a network that uses twisted-pair cabling, one pair is used to carry the transmitted
signal from one node to the other node. A separate pair is used for the return or received signal. It is
possible for signals to pass through both pairs simultaneously. The capability of communication in
both directions at once is known as full duplex. Most switches are capable of supporting full
duplex, as are most network interface cards (NICs). In full duplex mode, there is no contention for
the media. Thus, a collision domain no longer exists. Theoretically, the bandwidth is doubled when
using full duplex.
In addition to faster microprocessors and memory, two other technological advances made switches
possible. Content-addressable memory (CAM) is memory that essentially works backwards
compared to conventional memory. Entering data into the memory will return the associated
address. Using CAM allows a switch to directly find the port that is associated with a MAC address
without using search algorithms. An application-specific integrated circuit (ASIC) is a device
consisting of undedicated logic gates that can be programmed to perform functions at logic speeds.
Operations that might have been done in software can now be done in hardware using an ASIC. The
use of these technologies greatly reduced the delays caused by software processing and enabled a
switch to keep pace with the data demands of many microsegments and high bit rates.
 8.1.4 Latency
Latency is the delay between the time a frame first starts to leave the source device and the time the
first part of the frame reaches its destination. A wide variety of conditions can cause delays as a
frame travels from source to destination:
Media delays caused by the finite speed that signals can travel through the physical media.
Circuit delays caused by the electronics that process the signal along the path.
Software delays caused by the decisions that software must make to implement switching and routing.
Delays caused by the content of the frame and where in the frame switching decisions can be made.
For example, a device cannot route a frame to a destination until the destination MAC address has
been read.
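The first of these components, media delay, is easy to estimate. The velocity factor below is a typical value for twisted-pair copper, not a specification, so the result is a rough sketch:

```python
# Rough propagation-delay arithmetic for the media component of latency.
# Signals in copper travel at roughly two-thirds the speed of light;
# the exact velocity factor depends on the cable.

SPEED_OF_LIGHT = 3.0e8   # meters per second, approximate
VELOCITY_FACTOR = 0.66   # assumed typical value for twisted pair

def media_delay_ns(distance_m):
    """Propagation delay over `distance_m` meters, in nanoseconds."""
    return distance_m / (SPEED_OF_LIGHT * VELOCITY_FACTOR) * 1e9

print(f"{media_delay_ns(100):.0f} ns over 100 m")
```

Roughly half a microsecond over a full-length 100 m run; circuit and software delays add to this.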
8.1.5 Switch modes
How a frame is switched to the destination port is a trade off between latency and reliability. A
switch can start to transfer the frame as soon as the destination MAC address is received. Switching
at this point is called cut-through switching and results in the lowest latency through the switch.
However, no error checking is available. At the other extreme, the switch can receive the entire
frame before sending it out the destination port. This gives the switch software an opportunity to
verify the Frame Check Sum (FCS) to ensure that the frame was reliably received before sending it
to the destination. If the frame is found to be invalid, it is discarded at this switch rather than at the
ultimate destination. Since the entire frame is stored before being forwarded, this mode is called
store-and-forward. A compromise between the cut-through and store-and-forward modes is the
fragment-free mode. Fragment-free reads the first 64 bytes, which includes the frame header, and
switching begins before the entire data field and checksum are read. This mode verifies the
reliability of the addressing and Logical Link Control (LLC) protocol information to ensure the
destination and handling of the data will be correct.
When using cut-through methods of switching, both the source port and destination port must be
operating at the same bit rate in order to keep the frame intact. This is called synchronous
switching. If the bit rates are not the same, the frame must be stored at one bit rate before it is sent
out at the other bit rate. This is known as asynchronous switching. Store-and-forward mode must be
used for asynchronous switching.
Asymmetric switching provides switched connections between ports of unlike bandwidths, such as
a combination of 100 Mbps and 1000 Mbps. Asymmetric switching is optimized for client/server
traffic flows in which multiple clients simultaneously communicate with a server, requiring more
bandwidth dedicated to the server port to prevent a bottleneck at that port.
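The trade-off between the three modes can be summarized by how many bytes of a frame each must receive before forwarding can begin. The byte counts follow the description above; the function itself is only an illustrative sketch:

```python
# How much of a frame each switching mode buffers before forwarding.

def bytes_before_forwarding(mode, frame_len):
    if mode == "cut-through":
        return 6                # destination MAC address only: lowest latency
    if mode == "fragment-free":
        return 64               # header plus the minimum collision window
    if mode == "store-and-forward":
        return frame_len        # entire frame, so the FCS can be verified
    raise ValueError(f"unknown mode: {mode}")

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, bytes_before_forwarding(mode, frame_len=1518))
```

Latency grows with the buffered byte count, while error-checking ability grows with it too; store-and-forward is the only mode that can discard a frame with a bad checksum.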
 8.1.6 Spanning-Tree Protocol
When multiple switches are arranged in a simple hierarchical tree, switching loops are unlikely to
occur. However, switched networks are often designed with redundant paths to provide for
reliability and fault tolerance. While redundant paths are desirable, they can have undesirable side
effects. Switching loops are one such side effect. Switching loops can occur by design or by
accident, and they can lead to broadcast storms that will rapidly overwhelm a network. To
counteract the possibility of loops, switches are provided with a standards-based protocol called the
Spanning-Tree Protocol (STP). Each switch in a LAN using STP sends special messages called
Bridge Protocol Data Units (BPDUs) out all its ports to let other switches know of its existence and
to elect a root bridge for the network. The switches then use the Spanning-Tree Algorithm (STA) to
resolve and shut down the redundant paths.
Each port on a switch using Spanning-Tree Protocol exists in one of the following five states:
Blocking
Listening
Learning
Forwarding
Disabled
A port moves through these five states as follows:
From initialization to blocking
From blocking to listening or to disabled
From listening to learning or to disabled
From learning to forwarding or to disabled
From forwarding to disabled
The result of resolving and eliminating loops using STP is to create a logical hierarchical tree with
no loops. However, the alternate paths are still available should they be needed.
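The states and the legal transitions listed above can be written as a small state table. This is a sketch of the port state machine only, not of a full STP implementation:

```python
# STP port states and the legal transitions between them, as described
# above. "initialization" is the starting condition, not one of the
# five operational states.

TRANSITIONS = {
    "initialization": {"blocking"},
    "blocking":       {"listening", "disabled"},
    "listening":      {"learning", "disabled"},
    "learning":       {"forwarding", "disabled"},
    "forwarding":     {"disabled"},
    "disabled":       set(),
}

def can_move(current, target):
    """Return True if STP permits moving a port from current to target."""
    return target in TRANSITIONS[current]

assert can_move("blocking", "listening")
assert not can_move("blocking", "forwarding")  # must pass through listening/learning
```

The table makes the one-way nature of the progression visible: a port can always be disabled, but it can never jump straight from blocking to forwarding.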
 8.2.1 Shared media environments
Understanding collision domains requires understanding what collisions are and how they are
caused. To help explain collisions, Layer 1 media and topologies are reviewed here.
Some networks are directly connected and all hosts share Layer 1. Examples are listed in the figure:
Shared media environment – Occurs when multiple hosts have access to the same medium. For
example, if several PCs are attached to the same physical wire, optical fiber, or share the same
airspace, they all share the same media environment.
Extended shared media environment – Is a special type of shared media environment in which
networking devices can extend the environment so that it can accommodate multiple access or
longer cable distances.
Point-to-point network environment – Is widely used in dialup network connections and is the
most familiar to the home user. It is a shared networking environment in which one device is
connected to only one other device, such as connecting a computer to an Internet service provider
by modem and a phone line.
It is important to be able to identify a shared media environment, because collisions only occur in a
shared environment. A highway system is an example of a shared environment in which collisions
can occur because multiple vehicles are using the same roads. As more vehicles enter the system,
collisions become more likely. A shared data network is much like a highway. Rules exist to
determine who has access to the network medium, but sometimes the rules simply cannot handle the
traffic load and collisions occur.
8.2.2 Collision domains
Collision domains are the connected physical network segments where collisions can occur.
Collisions cause the network to be inefficient. Every time a collision happens on a network, all
transmission stops for a period of time. The length of this period of time without transmissions
varies and is determined by a backoff algorithm for each network device.
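For standard Ethernet, the backoff algorithm referred to here is truncated binary exponential backoff. The sketch below assumes the 10-Mbps slot time of 51.2 microseconds; other details (such as jam signals) are omitted:

```python
# Truncated binary exponential backoff: after the nth consecutive
# collision, a station waits a random number of slot times drawn from
# 0 .. 2^min(n, 10) - 1, and gives up after 16 attempts.

import random

SLOT_TIME_US = 51.2  # slot time for 10-Mbps Ethernet, in microseconds

def backoff_us(attempt):
    """Random wait (microseconds) after the given collision attempt."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)
    slots = random.randrange(2 ** k)
    return slots * SLOT_TIME_US

print(backoff_us(1))  # either 0.0 or 51.2
```

Because each device draws its wait independently, two colliding stations are unlikely to retransmit at the same instant again, which is what lets transmission resume.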
The types of devices that interconnect the media segments define collision domains. These
devices have been classified as OSI Layer 1, 2 or 3 devices. Layer 1 devices do not break up
collision domains, Layer 2 and Layer 3 devices do break up collision domains. Breaking up, or
increasing the number of, collision domains with Layer 2 and Layer 3 devices is also known as segmentation.
Layer 1 devices, such as repeaters and hubs, serve the primary function of extending the Ethernet
cable segments. By extending the network more hosts can be added. However, every host that is
added increases the amount of potential traffic on the network. Since Layer 1 devices pass on
everything that is sent on the media, the more traffic that is transmitted within a collision domain,
the greater the chances of collisions. The final result is diminished network performance, which will
be even more pronounced if all the computers on that network are demanding large amounts of
bandwidth. Simply put, Layer 1 devices extend collision domains, but the length of a LAN can also
be overextended and cause other collision issues.
The four repeater rule in Ethernet states that no more than four repeaters or repeating hubs can be
between any two computers on the network. To assure that a repeated 10BASE-T network will
function properly, the round-trip delay calculation must be within certain limits otherwise all the
workstations will not be able to hear all the collisions on the network. Repeater latency, propagation
delay, and NIC latency all contribute to the four repeater rule. Exceeding the four repeater rule
can lead to violating the maximum delay limit. When this delay limit is exceeded, the number of
late collisions dramatically increases. A late collision is when a collision happens after the first 64
bytes of the frame are transmitted. The chipsets in NICs are not required to retransmit automatically
when a late collision occurs. These late collision frames add delay that is referred to as consumption
delay. As consumption delay and latency increase, network performance decreases.
The 5-4-3-2-1 rule requires that the following guidelines should not be exceeded:
Five segments of network media
Four repeaters or hubs
Three host segments of the network
Two link sections (no hosts)
One large collision domain
 The 5-4-3-2-1 rule also provides guidelines to keep round-trip delay time in a shared network
within acceptable limits.
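These guidelines can be expressed as a simple check. The helper below is hypothetical and the message wording is illustrative:

```python
# Check a proposed repeated-Ethernet layout against the 5-4-3-2-1 rule.

def check_5_4_3_2_1(segments, repeaters, host_segments, link_segments):
    """Return a list of guideline violations, or a single OK message."""
    problems = []
    if segments > 5:
        problems.append("more than five media segments")
    if repeaters > 4:
        problems.append("more than four repeaters/hubs")
    if host_segments > 3:
        problems.append("more than three segments with hosts")
    if link_segments > 2:
        problems.append("more than two link (host-free) segments")
    return problems or ["within guidelines (one collision domain)"]

print(check_5_4_3_2_1(segments=5, repeaters=4, host_segments=3, link_segments=2))
```

A layout at exactly the limits passes; exceeding any single count is enough to risk late collisions from excessive round-trip delay.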
8.2.3 Segmentation
The history of how Ethernet handles collisions and collision domains dates back to research at the
University of Hawaii in 1970. In its attempts to develop a wireless communication system for the
islands of Hawaii, university researchers developed a protocol called Aloha. The Ethernet protocol
is actually based on the Aloha protocol.
One important skill for a networking professional is the ability to recognize collision domains.
Connecting several computers to a single shared-access medium that has no other networking
devices attached creates a collision domain. This situation limits the number of computers that can
use the medium, also called a segment. Layer 1 devices extend but do not control collision domains.
Layer 2 devices segment or divide collision domains. Controlling frame propagation using the
MAC address assigned to every Ethernet device performs this function. Layer 2 devices, bridges,
and switches, keep track of the MAC addresses and which segment they are on. By doing this these
devices can control the flow of traffic at the Layer 2 level. This function makes networks more
efficient by allowing data to be transmitted on different segments of the LAN at the same time
without the frames colliding. By using bridges and switches, the collision domain is effectively
broken up into smaller parts, each becoming its own collision domain.
These smaller collision domains will have fewer hosts and less traffic than the original domain.
The fewer hosts that exist in a collision domain, the more likely the media will be available. As long
as the traffic between bridged segments is not too heavy a bridged network works well. Otherwise,
the Layer 2 device can actually slow down communication and become a bottleneck itself.
Layer 3 devices, like Layer 2 devices, do not forward collisions. Because of this, the use of Layer 3
devices in a network has the effect of breaking up collision domains into smaller domains.
Layer 3 devices perform more functions than just breaking up a collision domain. Layer 3 devices
and their functions will be covered in more depth in the section on broadcast domains.
 8.2.4 Layer 2 broadcasts
To communicate with all collision domains, protocols use broadcast and multicast frames at Layer 2
of the OSI model. When a node needs to communicate with all hosts on the network, it sends a
broadcast frame with a destination MAC address 0xFFFFFFFFFFFF. This is an address to which
the network interface card (NIC) of every host must respond.
Layer 2 devices must flood all broadcast and multicast traffic. The accumulation of broadcast and
multicast traffic from each device in the network is referred to as broadcast radiation. In some cases,
the circulation of broadcast radiation can saturate the network so that there is no bandwidth left for
application data. In this case, new network connections cannot be established, and existing
connections may be dropped, a situation known as a broadcast storm. The probability of broadcast
storms increases as the switched network grows.
Because the NIC must interrupt the CPU to process each broadcast or multicast group it belongs to,
broadcast radiation affects the performance of hosts in the network. Figure shows the results of
tests that Cisco conducted on the effect of broadcast radiation on the CPU performance of a Sun
SPARCstation 2 with a standard built-in Ethernet card. As indicated by the results shown, an IP
workstation can be effectively shut down by broadcasts flooding the network. Although extreme,
broadcast peaks of thousands of broadcasts per second have been observed during broadcast storms.
Testing in a controlled environment with a range of broadcasts and multicasts on the network shows
measurable system degradation with as few as 100 broadcasts or multicasts per second.
Most often, the host does not benefit from processing the broadcast, as it is not the destination being
sought. The host does not care about the service that is being advertised, or it already knows about
the service. High levels of broadcast radiation can noticeably degrade host performance. The three
sources of broadcasts and multicasts in IP networks are workstations, routers, and multicast
applications.
Workstations broadcast an Address Resolution Protocol (ARP) request every time they need to
locate a MAC address that is not in the ARP table. Although the numbers in Figure might
appear low, they represent an average, well-designed IP network. When broadcast and multicast
traffic peak due to storm behavior, peak CPU loss can be orders of magnitude greater than average.
Broadcast storms can be caused by a device requesting information from a network that has grown
too large. So many responses are sent to the original request that the device cannot process them, or
the first request triggers similar requests from other devices that effectively block normal traffic
flow on the network.
As an example, a telnet command to a host name translates into an IP address through a Domain
Name System (DNS) search. To locate the corresponding MAC address, an ARP request is
broadcast. Generally, IP workstations cache 10 to 100 addresses in their ARP tables for about two
hours. The ARP rate for a typical workstation might be about 50 addresses every two hours or 0.007
ARPs per second. Thus, 2000 IP end stations produce about 14 ARPs per second.
The routing protocols that are configured on a network can increase broadcast traffic significantly.
Some administrators configure all workstations to run Routing Information Protocol (RIP) as a
redundancy and reachability policy. Every 30 seconds, RIPv1 uses broadcasts to retransmit the
entire RIP routing table to other RIP routers. If 2000 workstations were configured to run RIP and,
on average, 50 packets were required to transmit the routing table, the workstations would generate
3333 broadcasts per second. Most network administrators only configure a small number of routers,
usually five to ten, to run RIP. For a routing table that has a size of 50 packets, 10 RIP routers
would generate about 16 broadcasts per second.
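The broadcast-rate arithmetic in the last two paragraphs is easy to verify:

```python
# Checking the ARP and RIP broadcast rates quoted in the text.

# ARP: about 50 ARP requests per workstation every two hours.
arps_per_second = 2000 * (50 / (2 * 3600))
print(f"{arps_per_second:.0f} ARPs per second")          # about 14

# RIPv1: a 50-packet routing table broadcast every 30 seconds.
rip_workstations = 2000 * 50 / 30
rip_routers = 10 * 50 / 30
print(f"{rip_workstations:.0f} broadcasts per second")   # about 3333
print(f"{rip_routers:.1f} broadcasts per second")        # about 16.7
```

The contrast between 14 and 3333 broadcasts per second shows why running a routing protocol on every workstation is such a poor redundancy policy.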
IP multicast applications can adversely affect the performance of large, scaled, switched networks.
Although multicasting is an efficient way to send a stream of multimedia data to many users on a
shared-media hub, it affects every user on a flat switched network. A particular packet video
application can generate a seven megabyte (MB) stream of multicast data that, in a switched
network, would be sent to every segment, resulting in severe congestion.
 8.2.5 Broadcast domains
A broadcast domain is a grouping of collision domains that are connected by Layer 2 devices.
Breaking up a LAN into multiple collision domains increases the opportunity for each host in the
network to gain access to the media. This effectively reduces the chance of collisions and increases
available bandwidth for every host. But broadcasts are forwarded by Layer 2 devices and if
excessive, can reduce the efficiency of the entire LAN. Broadcasts have to be controlled at Layer 3,
as Layer 1 and Layer 2 devices have no way of controlling them. The total size of a broadcast
domain can be identified by looking at all of the collision domains that process the same broadcast
frame; in other words, all the nodes that are part of the network segment bounded by a
Layer 3 device. Broadcast domains are controlled at Layer 3 because routers do not forward
broadcasts. Routers actually work at Layers 1, 2, and 3. They, like all Layer 1 devices, have a
physical connection to, and transmit data onto, the media. They have a Layer 2 encapsulation on all
interfaces and perform just like any other Layer 2 device. It is Layer 3 that allows the router to
segment broadcast domains.
In order for a packet to be forwarded through a router it must have already been processed by a
Layer 2 device and the frame information stripped off. Layer 3 forwarding is based on the
destination IP address and not the MAC address. For a packet to be forwarded it must contain an IP
address that is outside of the range of addresses assigned to the LAN and the router must have a
destination to send the specific packet to in its routing table.
 8.2.6 Introduction to data flow
Data flow in the context of collision and broadcast domains focuses on how data frames propagate
through a network. It refers to the movement of data through Layer 1, 2 and 3 devices and how data
must be encapsulated to effectively make that journey. Remember that data is encapsulated at the
network layer with an IP source and destination address, and at the data-link layer with a MAC
source and destination address.
A good rule to follow is that a Layer 1 device always forwards the frame, while a Layer 2 device
wants to forward the frame. In other words, a Layer 2 device will forward the frame unless
something prevents it from doing so. A Layer 3 device will not forward the frame unless it has to.
Using this rule will help identify how data flows through a network.
Layer 1 devices do no filtering, so everything that is received is passed on to the next segment. The
frame is simply regenerated and retimed and thus returned to its original transmission quality. Any
segments connected by Layer 1 devices are part of the same domain, both collision and broadcast.
Layer 2 devices filter data frames based on the destination MAC address. A frame is forwarded if it
is going to an unknown destination outside the collision domain. The frame will also be forwarded
if it is a broadcast, multicast, or a unicast going outside of the local collision domain. The only time
that a frame is not forwarded is when the Layer 2 device finds that the sending host and the
receiving host are in the same collision domain. A Layer 2 device, such as a bridge, creates multiple
collision domains but maintains only one broadcast domain.
Layer 3 devices filter data packets based on IP destination address. The only way that a packet will
be forwarded is if its destination IP address is outside of the broadcast domain and the router has an
identified location to send the packet. A Layer 3 device creates multiple collision and broadcast domains.
Data flow through a routed, IP-based network involves data moving across traffic management
devices at Layers 1, 2, and 3 of the OSI model. Layer 1 is used for transmission across the physical
media, Layer 2 for collision domain management, and Layer 3 for broadcast domain management.
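The "always forwards / wants to forward / has to forward" rule can be sketched as a toy decision function. This is an illustration only; real devices consult MAC tables and routing tables rather than boolean flags:

```python
def forwards(layer, same_collision_domain=False, in_local_network=True,
             route_known=True):
    """Toy model of the data-flow rule described above.

    Layer 1: always forwards (a hub or repeater regenerates everything).
    Layer 2: forwards unless source and destination share a collision domain.
    Layer 3: forwards only when the destination is outside the local
             network AND the router has a route to it.
    """
    if layer == 1:
        return True
    if layer == 2:
        return not same_collision_domain
    if layer == 3:
        return (not in_local_network) and route_known
    raise ValueError("layer must be 1, 2, or 3")

print(forwards(1))                              # True: hubs repeat everything
print(forwards(2, same_collision_domain=True))  # False: bridge filters local traffic
print(forwards(3, in_local_network=True))       # False: router blocks local broadcasts
```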
 8.2.7 What is a network segment?
As with many terms and acronyms, segment has multiple meanings. The dictionary definition of the
term is as follows:
A separate piece of something
One of the parts into which an entity or quantity is divided or marked off, by or as if by natural boundaries
In the context of data communication, the following definitions are used:
Section of a network that is bounded by bridges, routers, or switches.
In a LAN using a bus topology, a segment is a continuous electrical circuit that is often connected
to other such segments with repeaters.
Term used in the TCP specification to describe a single transport layer unit of information. The
terms datagram, frame, message, and packet are also used to describe logical information groupings
at various layers of the OSI reference model and in various technology circles.
To properly define the term segment, the context of the usage must be presented with the word. If
segment is used in the context of TCP, it would be defined as a separate piece of the data. If
segment is being used in the context of physical networking media in a routed network, it would be
seen as one of the parts or sections of the total network.
Module 9: TCP/IP Protocol Suite and IP Addressing
The Internet was developed to provide a communication network that could continue to function in
wartime. Although the Internet has evolved in ways very different from those imagined by its
architects, it is still based on the TCP/IP protocol suite. The design of TCP/IP is ideal for the
decentralized and robust network that is the Internet. Many protocols used today were designed
using the four-layer TCP/IP model.
It is useful to know both the TCP/IP and OSI networking models. Each model offers its own
structure for explaining how a network works but there is much overlap between the two. Without
an understanding of both, a system administrator may not have sufficient insight into why a
network functions the way it does.
Any device on the Internet that wants to communicate with other Internet devices must have a
unique identifier. The identifier is known as the IP address because routers use a layer three
protocol, the IP protocol, to find the best route to that device. IPv4, the current version of IP, was
designed before there was a large demand for addresses. Explosive growth of the Internet has
threatened to deplete the supply of IP addresses. Subnetting, Network Address Translation (NAT)
and private addressing are used to extend IP addressing without exhausting the supply. Another
version of IP known as IPv6 improves on the current version providing a much larger address
space, integrating or eliminating the methods used to work with the shortcomings of IPv4.
In addition to the physical MAC address, each computer needs a unique IP address, sometimes
called logical address, to be part of the Internet. There are several methods of assigning an IP
address to a device. Some devices always have a static address, while others have a temporary
address assigned to them every time they connect to the network. When a dynamically assigned IP
address is needed, the device can obtain it using several methods.
For efficient routing to occur between devices, other issues must be resolved. For example,
duplicate IP addresses can stop efficient routing of data.
 9.1.1 History and future of TCP/IP
The U.S. Department of Defense (DoD) created the TCP/IP reference model because it wanted a
network that could survive any conditions. To illustrate further, imagine a world, crossed by
multiple cable runs, wires, microwaves, optical fibers, and satellite links. Then imagine a need for
data to be transmitted without regard for the condition of any particular node or network. The DoD
required reliable data transmission to any destination on the network under any circumstance. The
creation of the TCP/IP model helped to solve this difficult design problem. The TCP/IP model has
since become the standard on which the Internet is based.
In reading about the layers of the TCP/IP model, keep in mind the original intent of the
Internet. Remembering the intent will help reduce confusion. The TCP/IP model has four layers: the
application layer, transport layer, Internet layer, and the network access layer. Some of the layers in
the TCP/IP model have the same name as layers in the OSI model. It is critical not to confuse the
layer functions of the two models because the layers include different functions in each model.
The present version of TCP/IP was standardized in September of 1981. As shown in Figure , IPv4
addresses are 32 bits long, written in dotted decimal, and separated by periods. IPv6 addresses are
128 bits long, written in hexadecimal, and separated by colons. Colons separate 16-bit fields.
Leading zeros can be omitted in each field as can be seen in the Figure where the field :0003: is
written :3:. In 1992 the standardization of a new generation of IP, often called IPng, was supported
by the Internet Engineering Task Force (IETF). IPng is now known as IPv6. IPv6 has not gained
wide implementation, but it has been released by most vendors of networking equipment and will
eventually become the dominant standard.
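The leading-zero rule for IPv6 fields can be demonstrated with Python's standard ipaddress module. The address used here is an illustrative documentation-range value, not one from the figure:

```python
import ipaddress

# An illustrative IPv6 address written with full 16-bit fields.
full = "2001:0db8:0003:0000:0000:0000:0000:0001"
addr = ipaddress.IPv6Address(full)

# str() drops leading zeros in each field (so :0003: becomes :3:)
# and compresses the longest run of zero fields with "::".
print(str(addr))      # 2001:db8:3::1
print(addr.exploded)  # restores the full leading-zero form
```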
 9.1.2 Application layer
The application layer of the TCP/IP model handles high-level protocols, issues of representation,
encoding, and dialog control. The TCP/IP protocol suite combines all application related issues into
one layer and assures this data is properly packaged before passing it on to the next layer. TCP/IP
includes not only Internet and transport layer specifications, such as IP and TCP, but also
specifications for common applications. TCP/IP has protocols to support file transfer, e-mail, and
remote login, in addition to the following applications:
File Transfer Protocol (FTP) – FTP is a reliable, connection-oriented service that uses TCP to
transfer files between systems that support FTP. It supports bi-directional binary file and ASCII file transfers.
Trivial File Transfer Protocol (TFTP) – TFTP is a connectionless service that uses the User
Datagram Protocol (UDP). TFTP is used on the router to transfer configuration files and Cisco IOS
images, and to transfer files between systems that support TFTP. It is useful in some LANs because
it operates faster than FTP in a stable environment.
Network File System (NFS) – NFS is a distributed file system protocol suite developed by Sun
Microsystems that allows file access to a remote storage device, such as a hard disk, across a network.
Simple Mail Transfer Protocol (SMTP) – SMTP administers the transmission of e-mail over
computer networks. It does not provide support for transmission of data other than plaintext.
Terminal emulation (Telnet) – Telnet provides the capability to remotely access another
computer. It enables a user to log in to an Internet host and execute commands. A Telnet client is
referred to as a local host. A Telnet server is referred to as a remote host.
Simple Network Management Protocol (SNMP) – SNMP is a protocol that provides a way to
monitor and control network devices, and to manage configurations, statistics collection,
performance, and security.
Domain Name System (DNS) – DNS is a system used on the Internet for translating names of
domains and their publicly advertised network nodes into IP addresses.
 9.1.3 Transport layer
The transport layer provides transport services from the source host to the destination host. The
transport layer constitutes a logical connection between the endpoints of the network, the sending
host and the receiving host. Transport protocols segment and reassemble upper-layer applications
into the same data stream between endpoints. The transport layer data stream provides end-to-end
transport services.
The Internet is often represented by a cloud. The transport layer sends data packets from the
sending source to the receiving destination through the cloud. End-to-end control, provided by
sliding windows and reliability in sequencing numbers and acknowledgments, is the primary duty
of the transport layer when using TCP. The transport layer also defines end-to-end connectivity
between host applications. Transport services include all the following services:
Segmenting upper-layer application data
Sending segments from one end device to another end device
TCP only:
Establishing end-to-end operations
Flow control provided by sliding windows
Reliability provided by sequence numbers and acknowledgments
The cloud deals with issues such as “Which of several paths is best for a given route?”
 9.1.4 Internet layer
The purpose of the Internet layer is to select the best path through the network for packets to travel.
The main protocol that functions at this layer is the Internet Protocol (IP). Best path determination
and packet switching occur at this layer.
The following protocols operate at the TCP/IP Internet layer:
IP provides connectionless, best-effort delivery routing of packets. IP is not concerned with the
content of the packets but looks for a path to the destination.
Internet Control Message Protocol (ICMP) provides control and messaging capabilities.
Address Resolution Protocol (ARP) determines the data link layer address, MAC address, for
known IP addresses.
Reverse Address Resolution Protocol (RARP) determines an IP address when the MAC address is known.
IP performs the following operations:
Defines a packet and an addressing scheme
Transfers data between the Internet layer and network access layers
Routes packets to remote hosts
Finally, as a clarification of terminology, IP is sometimes referred to as an unreliable protocol. This
does not mean that IP will not accurately deliver data across a network. Calling IP an unreliable
protocol simply means that IP does not perform error checking and correction. That function is
handled by upper layer protocols from the transport or application layers.
 9.1.5 Network access layer
The network access layer is also called the host-to-network layer. The network access layer is the
layer that is concerned with all of the issues that an IP packet requires to actually make a physical
link to the network media. It includes the LAN and WAN technology details, and all the details
contained in the OSI physical and data-link layers.
Drivers for software applications, modem cards and other devices operate at the network access
layer. The network access layer defines the procedures for interfacing with the network hardware
and accessing the transmission medium. Modem protocol standards such as Serial Line Internet
Protocol (SLIP) and Point-to-Point Protocol (PPP) provide network access through a modem
connection. Because of an intricate interplay of hardware, software, and transmission-medium
specifications, there are many protocols operating at this layer. This can lead to confusion for users.
Most of the recognizable protocols operate at the transport and Internet layers of the TCP/IP model.
Network access layer functions include mapping IP addresses to physical hardware addresses and
encapsulation of IP packets into frames. Based upon the hardware type and the network interface,
the network access layer will define the connection with the physical network media.
A good example of network access layer configuration would be to set up a Windows system using
a third party NIC. Depending on the version of Windows, the NIC would automatically be detected
by the operating system and then the proper drivers would be installed. If this were an older version
of Windows, the user would have to specify the network card driver. The card manufacturer
supplies these drivers on disks or CD-ROMs.
 9.1.6 Comparing the OSI model and the TCP/IP model
The following is a comparison of the OSI model and the TCP/IP model, noting the similarities and differences.
Similarities of the OSI and TCP/IP models:
Both have layers
Both have application layers, though they include very different services
Both have comparable transport and network layers
Packet-switched, not circuit-switched, technology is assumed
Networking professionals need to know both models
Differences of the OSI and TCP/IP models:
TCP/IP combines the presentation and session layer into its application layer
TCP/IP combines the OSI data link and physical layers into one layer
TCP/IP appears simpler because it has fewer layers
TCP/IP transport layer using UDP does not always guarantee reliable delivery of packets as the
transport layer in the OSI model does
The Internet was developed based on the standards of the TCP/IP protocols. The TCP/IP model gains
credibility because of its protocols. In contrast, networks are typically not built on the OSI protocols.
The OSI model is used as a guide for understanding the communication process.
 9.1.7 Internet architecture
While the Internet is complex, there are some basic ideas in its operation. In this section the basic
architecture of the Internet will be examined. The Internet is a deceptively simple idea, that when
repeated on a large scale, enables nearly instantaneous worldwide data communications between
anyone, anywhere, at any time.
LANs are smaller networks limited in geographic area. Many LANs connected together allow the
Internet to function. But LANs have limitations in scale. Although there have been technological
advances to improve the speed of communications, such as Metro Optical, Gigabit, and 10-Gigabit
Ethernet, distance is still a problem.
Focusing on the communication between the source and destination computer and intermediate
computers at the application layer is one way to get an overview of the Internet architecture. Placing
identical instances of an application on all the computers in the network could ease the delivery of
messages across the large network. However, this does not scale well. For new software to function
properly, it would require new applications installed on every computer in the network. For new
hardware to function properly, it would require modifying the software. Any failure of an
intermediate computer or the application of the computer would cause a break in the chain of the
messages that are passed.
The Internet uses the principle of network layer interconnection. Using the OSI model as an
example, the goal is to build the functionality of the network in independent modules. This allows a
diversity of LAN technologies at Layers 1 and 2 and a diversity of applications functioning at
Layers 5, 6, and 7. The OSI model provides a mechanism where the details of the lower and the
upper layers are separated. This allows intermediate networking devices to “relay” traffic without
having to bother with the details of the LAN.
This leads to the concept of internetworking, or building networks of networks. A network of
networks is called an internet, indicated with the lowercase “i”. When referring to the networks that
developed from the DoD on which the World Wide Web (WWW) runs, the uppercase “I” is used and
is called the Internet. Internetworking must be scalable with regard to the number of networks and
computers attached. Internetworking must be able to handle the transport of data across vast
distances. It must be flexible to account for constant technological innovations. It must be able to
adjust to dynamic conditions on the network. And internetworks must be cost-effective.
Internetworks must be designed to permit anytime, anywhere, data communications to anyone.
Figure summarizes the connection of one physical network to another through a special purpose
computer called a router. These networks are described as directly connected to the router. The
router is needed to handle any path decisions required for the two networks to communicate. Many
routers are needed to handle large volumes of network traffic.
Figure extends the idea to three physical networks connected by two routers. Routers make
complex decisions to allow all the users on all the networks to communicate with each other. Not all
networks are directly connected to one another. The router must have some method to handle this situation.
One option is for a router to keep a list of all computers and all the paths to them. The router would
then decide how to forward data packets based on this reference table. The forwarding is based on
the IP address of the destination computer. This option would become difficult as the number of
users grows. Scalability is introduced when the router keeps a list of all networks, but leaves the
local delivery details to the local physical networks. In this situation, the routers pass messages to
other routers. Each router shares information about which networks it is connected to. This builds
the routing table.
Figure shows the transparency that users require. Yet, the physical and logical structures inside
the Internet cloud can be extremely complex as displayed in Figure . The Internet has grown
rapidly to allow more and more users. The fact that the Internet has grown so large with more than
90,000 core routes and 300,000,000 end users is proof of the soundness of the Internet architecture.
Two computers, anywhere in the world, following certain hardware, software, and protocol
specifications, can communicate reliably. Standardization of practices and procedures for moving
data across networks has made the Internet possible.
 9.2.1 IP addressing
For any two systems to communicate, they must be able to identify and locate each other. While
these addresses in Figure are not actual network addresses, they represent and show the concept of
address grouping. This uses the A or B to identify the network and the number sequence to identify
the individual host.
A computer may be connected to more than one network. In this situation, the system must be given
more than one address. Each address will identify the connection of the computer to a different
network. A device is not said to have an address; rather, each of the connection points, or
interfaces, on that device has an address on a network. This will allow other computers to locate the
device on that particular network. The combination of letter (network address) and the number (host
address) create a unique address for each device on the network. Each computer in a TCP/IP
network must be given a unique identifier, or IP address. This address, operating at Layer 3, allows
one computer to locate another computer on a network. All computers also have a unique physical
address, known as a MAC address. These are assigned by the manufacturer of the network interface
card. MAC addresses operate at Layer 2 of the OSI model.
An IP address is a 32-bit sequence of 1s and 0s. Figure shows a sample 32-bit number. To make
the IP address easier to use, the address is usually written as four decimal numbers separated by
periods. For example, an IP address of one computer is Another computer might have
the address This way of writing the address is called the dotted decimal format. In this
notation, each IP address is written as four parts separated by periods, or dots. Each part of the
address is called an octet because it is made up of eight binary digits. For example, the IP address would be 11000000.10101000.00000001.00001000 in binary notation. The dotted
decimal notation is an easier method to understand than the binary ones and zeros method. This
dotted decimal notation also prevents a large number of transposition errors that would result if only
the binary numbers were used.
Using dotted decimal allows number patterns to be more easily understood. Both the binary and
decimal numbers in Figure represent the same values, but it is easier to see in dotted decimal
notation. This is one of the common problems found when working directly with binary numbers. The
long strings of repeated ones and zeros make transposition and omission errors more likely.
It is easy to see the relationship between the numbers and, whereas
11000000.10101000.00000001.00001000 and 11000000.10101000.00000001.00001001 are not as
easy to recognize. Looking at the binary, it is almost impossible to see that they are consecutive numbers.
 9.2.2 Decimal and binary conversion
There are many ways to solve a problem. There are also several ways to convert decimal numbers
to binary numbers. One method is presented here, however it is not the only method. The student
may find other methods easier. It is a matter of personal preference.
When converting a decimal number to binary, the biggest power of two that will fit into the decimal
number must be determined. Because this process is used with computers, the most
logical place to start is with the largest values that will fit into a byte or two bytes. As mentioned
earlier, the most common grouping of bits is eight, which make up one byte. However, sometimes
the largest value that can be held in one byte is not large enough for the values needed. To
accommodate this, bytes are combined. Instead of having two eight-bit numbers, one 16-bit number
is created. Instead of three eight-bit numbers, one 24-bit number is created. The same rules apply as
they did for eight-bit numbers. Multiply the previous position value by two to get the present
column value.
Since computer values are often referenced in bytes, it is easiest to start at byte boundaries
and calculate from there. Start by calculating a couple of examples, the first being 6,783. Since
this number is greater than 255, the largest value possible in a single byte, two bytes will be used.
Start calculating from 2^15, the highest bit position in two bytes (32,768). The binary equivalent of 6,783 is 00011010 01111111.
The second example is 104. Since this number is less than 255, it can be represented by one byte.
The binary equivalent of 104 is 01101000.
This method works for any decimal number. Consider the decimal number one million. Since one
million is greater than the largest value that can be held in two bytes, 65535, at least three bytes will
be needed. Multiplying by two until 24 bits, or three bytes, is reached gives a highest bit position of
2^23, or 8,388,608. This means that the largest value that 24 bits can hold is 16,777,215. So starting at the
2^23 bit, follow the process until zero is reached. Continuing with the procedure described, it is determined
that the decimal number one million is equal to the binary number 00001111 01000010 01000000.
Figure includes some decimal to binary conversion exercises.
Binary to decimal conversion is just the opposite. Simply place the binary in the table and if there is
a one in a column position add that value into the total. Convert 00000100 00011101 to decimal.
The answer is 1053.
Figure includes some binary to decimal conversion exercises.
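The worked examples above can be checked programmatically. Python's format() does the same subtract-the-largest-power-of-two bookkeeping described in the text; the byte grouping follows the byte-boundary rule:

```python
def to_bytes_binary(value):
    """Convert a decimal number to binary, grouped into the smallest
    whole number of 8-bit bytes that can hold it."""
    bits = max(8, (value.bit_length() + 7) // 8 * 8)  # round up to a byte boundary
    padded = format(value, f"0{bits}b")
    return " ".join(padded[i:i + 8] for i in range(0, bits, 8))

print(to_bytes_binary(6783))       # 00011010 01111111  (two bytes)
print(to_bytes_binary(104))        # 01101000           (one byte)
print(to_bytes_binary(1_000_000))  # 00001111 01000010 01000000  (three bytes)

# Binary back to decimal: add the place value of each 1 bit.
print(int("0000010000011101", 2))  # 1053
```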
 9.2.3 IPv4 addressing
A router forwards packets from the originating network to the destination network using the IP
protocol. The packets must include an identifier for both the source and destination networks.
Using the IP address of destination network, a router can deliver a packet to the correct network.
When the packet arrives at a router connected to the destination network, the router uses the IP
address to locate the particular computer connected to that network. This system works in much the
same way as the national postal system. When the mail is routed, it must first be delivered to the
post office at the destination city using the zip code. That post office then must locate the final
destination in that city using the street address. This is a two-step process.
Accordingly, every IP address has two parts. One part identifies the network where the system is
connected, and a second part identifies that particular system on the network. As shown in Figure ,
each octet ranges from 0 to 255. Each one of the octets breaks down into 256 subgroups and they
break down into another 256 subgroups with 256 addresses in each. By referring to the group
address directly above a group in the hierarchy, all of the groups that branch from that address can
be referenced as a single unit.
This kind of address is called a hierarchical address, because it contains different levels. An IP
address combines these two identifiers into one number. This number must be a unique number,
because duplicate addresses would make routing impossible. The first part identifies the system's
network address. The second part, called the host part, identifies which particular machine it is on
the network.
IP addresses are divided into classes to define the large, medium, and small networks. Class A
addresses are assigned to larger networks. Class B addresses are used for medium-sized networks,
and Class C for small networks. The first step in determining which part of the address
identifies the network and which part identifies the host is identifying the class of an IP address.
 9.2.4 Class A, B, C, D, and E IP addresses
To accommodate different size networks and aid in classifying these networks, IP addresses are
divided into groups called classes. This is known as classful addressing. Each complete 32-bit IP
address is broken down into a network part and a host part. A bit or bit sequence at the start of
each address determines the class of the address. There are five IP address classes as shown in
Figure .
The Class A address was designed to support extremely large networks, with more than 16 million
host addresses available. Class A IP addresses use only the first octet to indicate the network
address. The remaining three octets provide for host addresses.
The first bit of a Class A address is always 0. With that first bit a 0, the lowest number that can be
represented is 00000000, decimal 0. The highest number that can be represented is 01111111,
decimal 127. The numbers 0 and 127 are reserved and cannot be used as network addresses. Any
address that starts with a value between 1 and 126 in the first octet is a Class A address.
The network is reserved for loopback testing. Routers or local machines can use this
address to send packets back to themselves. Therefore, this number cannot be assigned to a network.
The Class B address was designed to support the needs of moderate to large-sized networks. A
Class B IP address uses the first two of the four octets to indicate the network address. The other
two octets specify host addresses.
The first two bits of the first octet of a Class B address are always 10. The remaining six bits may
be populated with either 1s or 0s. Therefore, the lowest number that can be represented with a Class
B address is 10000000, decimal 128. The highest number that can be represented is 10111111,
decimal 191. Any address that starts with a value in the range of 128 to 191 in the first octet is a
Class B address.
The Class C address space is the most commonly used of the original address classes. This
address space was intended to support small networks with a maximum of 254 hosts.
A Class C address begins with binary 110. Therefore, the lowest number that can be represented is
11000000, decimal 192. The highest number that can be represented is 11011111, decimal 223. If
an address contains a number in the range of 192 to 223 in the first octet, it is a Class C address.
The Class D address class was created to enable multicasting in IP networks. A multicast address
is a unique network address that directs packets with that destination address to predefined groups
of IP addresses. Therefore, a single station can simultaneously transmit a single stream of data to
multiple recipients.
The Class D address space, much like the other address spaces, is mathematically constrained. The
first four bits of a Class D address must be 1110. Therefore, the first octet range for Class D
addresses is 11100000 to 11101111, or 224 to 239. An IP address that starts with a value in the
range of 224 to 239 in the first octet is a Class D address.
A Class E address has been defined. However, the Internet Engineering Task Force (IETF)
reserves these addresses for its own research. Therefore, no Class E addresses have been released
for use in the Internet. The first four bits of a Class E address are always set to 1s. Therefore, the
first octet range for Class E addresses is 11110000 to 11111111, or 240 to 255.
Figure shows the first-octet range, in both decimal and binary, for each IP address class.
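The first-octet ranges described above can be checked programmatically. The following Python sketch (the function name is purely illustrative) maps a dotted-decimal address to its class:

```python
def ip_class(address):
    """Classify an IPv4 address by the value of its first octet."""
    first = int(address.split(".")[0])
    if first <= 127:   # leading bit 0       -> Class A (0-127)
        return "A"
    if first <= 191:   # leading bits 10     -> Class B (128-191)
        return "B"
    if first <= 223:   # leading bits 110    -> Class C (192-223)
        return "C"
    if first <= 239:   # leading bits 1110   -> Class D (224-239)
        return "D"
    return "E"         # leading bits 1111   -> Class E (240-255)
```

For example, `ip_class("172.16.0.1")` returns `"B"` because 172 falls in the 128 to 191 range.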
9.2.5 Reserved IP addresses
Certain host addresses are reserved and cannot be assigned to devices on a network. These reserved
host addresses include the following:
Network address – Used to identify the network itself
In Figure , the section that is identified by the upper box represents the network. Data that is sent to any host on that network is seen outside of the local-area network by its network address alone. The host numbers matter only when the data is on the local-area network. The LAN that is contained in the lower box is treated the same as the upper LAN, except that it has its own network number.
Broadcast address – Used for broadcasting packets to all the devices on a network
In Figure , the section that is identified by the upper box represents the broadcast address. Data that is sent to the broadcast address will be read by all hosts on that network. The LAN that is contained in the lower box is treated the same as the upper LAN, except that it has its own broadcast address.
An IP address that has binary 0s in all host bit positions is reserved for the network address. In a Class A network, the address with 0s in all three host octets is the IP address of the network, known as the network ID, and it identifies all the hosts on that network. A router uses the network IP address when it forwards data on the Internet. In a Class B network example, the address 176.10.0.0 is a network address, as shown in Figure .
In a Class B network address, the first two octets are designated as the network portion. The last two octets contain 0s because those 16 bits are for host numbers and are used to identify devices that are attached to the network. The IP address 176.10.0.0 is an example of a network address. This address is never assigned as a host address. A host address for a device on the 176.10.0.0 network might be 176.10.16.1. In this example, “176.10” is the network portion and “16.1” is the host portion.
To send data to all the devices on a network, a broadcast address is needed. A broadcast occurs when a source sends data to all devices on a network. To ensure that all the other devices on the network process the broadcast, the sender must use a destination IP address that they can all recognize and process. Broadcast IP addresses end with binary 1s in the entire host part of the address.
In the 176.10.0.0 network example, the last 16 bits make up the host field, or host part, of the address. The broadcast that would be sent out to all devices on that network would include a destination address of 176.10.255.255. This is because 255 is the decimal value of an octet containing 11111111.
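The two reserved addresses follow directly from the bit patterns: host bits all 0 give the network address, and host bits all 1 give the broadcast address. A minimal Python sketch of that rule (function and variable names are illustrative):

```python
def network_and_broadcast(address, prefix_len):
    """Return (network, broadcast) for an IPv4 address.
    Host bits all 0 -> network address; host bits all 1 -> broadcast."""
    octets = [int(o) for o in address.split(".")]
    value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    host_bits = 32 - prefix_len
    network = value & ~((1 << host_bits) - 1) & 0xFFFFFFFF   # clear host bits
    broadcast = network | ((1 << host_bits) - 1)             # set host bits
    fmt = lambda v: ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))
    return fmt(network), fmt(broadcast)
```

For the Class B host 176.10.16.1 with a 16-bit network portion, this yields the network address 176.10.0.0 and the broadcast address 176.10.255.255, matching the examples above.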
9.2.6 Public and private IP addresses
The stability of the Internet depends directly on the uniqueness of publicly used network addresses.
In Figure , there is an issue with the network addressing scheme. Both networks have the same network address, so the router in this illustration will not be able to forward the data packets correctly. Duplicate network IP addresses prevent the router from performing its job of best-path selection. Unique addresses are required for each device on a network.
A procedure was needed to make sure that addresses were in fact unique. Originally, an
organization known as the Internet Network Information Center (InterNIC) handled this procedure.
InterNIC no longer exists and has been succeeded by the Internet Assigned Numbers Authority
(IANA). IANA carefully manages the remaining supply of IP addresses to ensure that duplication
of publicly used addresses does not occur. Duplication would cause instability in the Internet and
compromise its ability to deliver datagrams to networks.
Public IP addresses are unique. No two machines that connect to a public network can have the
same IP address because public IP addresses are global and standardized. All machines connected
to the Internet agree to conform to the system. Public IP addresses must be obtained from an
Internet service provider (ISP) or a registry at some expense.
With the rapid growth of the Internet, public IP addresses were beginning to run out. New
addressing schemes, such as classless interdomain routing (CIDR) and IPv6 were developed to help
solve the problem. CIDR and IPv6 are discussed later in the course.
Private IP addresses are another solution to the problem of the impending exhaustion of public IP
addresses. As mentioned, public networks require hosts to have unique IP addresses. However,
private networks that are not connected to the Internet may use any host addresses, as long as each
host within the private network is unique. Many private networks exist alongside public networks.
However, a private network using just any address is strongly discouraged because that network
might eventually be connected to the Internet. RFC 1918 sets aside three blocks of IP addresses for
private, internal use. These three blocks consist of one Class A, a range of Class B addresses, and a
range of Class C addresses. Addresses that fall within these ranges are not routed on the Internet
backbone. Internet routers immediately discard private addresses. If addressing a nonpublic intranet,
a test lab, or a home network, these private addresses can be used instead of globally unique
addresses. Private IP addresses can be intermixed, as shown in the graphic, with public IP
addresses. This will conserve the number of addresses used for internal connections.
Connecting a network using private addresses to the Internet requires translation of the private
addresses to public addresses. This translation process is referred to as Network Address
Translation (NAT). A router usually is the device that performs NAT. NAT, along with CIDR and
IPv6 are covered in more depth later in the curriculum.
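The three RFC 1918 blocks can be checked with Python's standard ipaddress module. A minimal sketch:

```python
import ipaddress

# The three RFC 1918 private blocks: one Class A network, a range of
# Class B networks, and a range of Class C networks.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address):
    """True if the address falls in a private, non-Internet-routable block."""
    ip = ipaddress.ip_address(address)
    return any(ip in block for block in PRIVATE_BLOCKS)
```

An address such as 192.168.1.10 is private and would be discarded by Internet routers, while 8.8.8.8 is public.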
9.2.7 Introduction to subnetting
Subnetting is another method of managing IP addresses. This method of dividing full network
address classes into smaller pieces has prevented complete IP address exhaustion. It is impossible to
cover TCP/IP without mentioning subnetting. As a system administrator it is important to
understand subnetting as a means of dividing and identifying separate networks throughout the
LAN. It is not always necessary to subnet a small network. However, for large or extremely large
networks, subnetting is required. Subnetting a network means to use the subnet mask to divide the
network and break a large network up into smaller, more efficient and manageable segments, or
subnets. An example would be the U.S. telephone system which is broken into area codes, exchange
codes, and local numbers.
The system administrator must resolve these issues when adding and expanding the network. It is
important to know how many subnets or networks are needed and how many hosts will be needed
on each network. With subnetting, the network is not limited to the default Class A, B, or C
network masks and there is more flexibility in the network design.
Subnet addresses include the network portion, plus a subnet field and a host field. The subnet field
and the host field are created from the original host portion for the entire network. The ability to
decide how to divide the original host portion into the new subnet and host fields provides
addressing flexibility for the network administrator.
To create a subnet address, a network administrator borrows bits from the host field and designates them as the subnet field. The minimum number of bits that can be borrowed is two. If only one bit were borrowed, the two resulting subnets would be the .0 network number and the .255 broadcast number, leaving no usable subnets. The maximum number of bits that can be borrowed is any number that leaves at least two bits remaining for the host number.
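The borrowing rules above can be expressed as simple arithmetic. The sketch below follows the classic convention of this curriculum, in which the all-0s and all-1s subnets, and the network and broadcast address within each subnet, are not counted as usable (the function name is illustrative):

```python
def subnet_counts(borrowed_bits, default_host_bits):
    """Usable subnets and hosts per subnet after borrowing host bits.
    Subtract 2 from each count: the all-0s and all-1s subnets are reserved,
    as are the network and broadcast addresses inside each subnet."""
    if borrowed_bits < 2 or default_host_bits - borrowed_bits < 2:
        raise ValueError("borrow at least 2 bits and leave at least 2 for hosts")
    subnets = 2 ** borrowed_bits - 2
    hosts = 2 ** (default_host_bits - borrowed_bits) - 2
    return subnets, hosts
```

For a Class C network (8 default host bits), borrowing 3 bits gives 6 usable subnets of 30 hosts each.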
9.2.8 IPv4 versus IPv6
When TCP/IP was adopted in the 1980s, it relied on a two-level addressing scheme. At the time this
offered adequate scalability. Unfortunately, the designers of TCP/IP could not have predicted that
their protocol would eventually sustain a global network of information, commerce, and
entertainment. Over twenty years ago, IP Version 4 (IPv4) offered an addressing strategy that,
although scalable for a time, resulted in an inefficient allocation of addresses.
The Class A and B addresses make up 75 percent of the IPv4 address space, yet fewer than 17,000 organizations can be assigned a Class A or B network number. Class C network addresses
are far more numerous than Class A and Class B addresses, although they account for only 12.5
percent of the possible four billion IP addresses.
Unfortunately, Class C addresses are limited to 254 usable hosts. This does not meet the needs of
larger organizations that cannot acquire a Class A or B address. Even if there were more Class A, B,
and C addresses, too many network addresses would cause Internet routers to come to a stop under the burden of the enormous routing tables required to store the routes to reach each of those networks.
As early as 1992, the Internet Engineering Task Force (IETF) identified the following two specific concerns:
Exhaustion of the remaining, unassigned IPv4 network addresses. At the time, the Class B space
was on the verge of depletion.
The rapid and large increase in the size of Internet routing tables occurred as more Class C
networks came online. The resulting flood of new network information threatened the ability of
Internet routers to cope effectively.
Over the past two decades, numerous extensions to IPv4 have been developed. These extensions are
specifically designed to improve the efficiency with which the 32-bit address space can be used.
Two of the more important of these are subnet masks and classless interdomain routing (CIDR),
which are discussed in more detail in later lessons.
Meanwhile, an even more extendible and scalable version of IP, IP Version 6 (IPv6), has been
defined and developed. IPv6 uses 128 bits rather than the 32 bits currently used in IPv4. IPv6 uses
hexadecimal numbers to represent the 128 bits. IPv6 provides 640 sextrillion addresses. This
version of IP should provide enough addresses for future communication needs. Figure shows
IPv4 addresses which are 32 bits long, written in decimal form, and separated by periods. IPv6
addresses are 128 bits long, written in hexadecimal form, and separated by colons. IPv6 fields are
16 bits long. To make the addresses easier to read, leading zeros can be omitted from each field.
The field :0003: is written :3:. The IPv6 shorthand representation of the 128 bits uses eight 16-bit numbers, each shown as four hexadecimal digits.
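The leading-zero rule can be seen with Python's standard ipaddress module. Note that the module also applies a further shorthand, writing a run of all-zero fields as "::", which goes beyond the per-field rule described above:

```python
import ipaddress

# Full form: eight 16-bit fields, four hex digits each.
addr = ipaddress.ip_address("2001:0db8:0003:0000:0000:0000:0000:0001")

# Leading zeros drop from each field (:0003: becomes :3:), and the
# run of zero fields collapses to "::".
print(addr.compressed)
```

The compressed form here is 2001:db8:3::1.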
After years of planning and development, IPv6 is slowly being implemented in select networks.
Eventually, IPv6 may replace IPv4 as the dominant Internet protocol.
9.3.1 Obtaining an Internet address
A network host needs to obtain a globally unique address in order to function on the Internet. The
physical or MAC address that a host has is only locally significant, identifying the host within the
local area network. Since this is a Layer 2 address, the router does not use it to forward traffic outside the LAN.
IP addresses are the most commonly used addresses for Internet communications. IP is a hierarchical addressing scheme that allows individual addresses to be associated together and
treated as groups. These groups of addresses allow efficient transfer of data across the Internet.
Network administrators use two methods to assign IP addresses. These methods are static and
dynamic. Later in this lesson, static addressing and three variations of dynamic addressing will be
covered. Regardless of which addressing scheme is chosen, no two interfaces can have the same IP
address. Two hosts that have the same IP address could create a conflict that might cause both of
the hosts involved not to operate properly. As shown in Figure , the hosts have a physical address
by having a network interface card that allows connection to the physical medium.
9.3.2 Static assignment of an IP address
Static assignment works best on small, infrequently changing networks. The system administrator
manually assigns and tracks IP addresses for each computer, printer, or server on the intranet. Good
recordkeeping is critical to prevent problems which occur with duplicate IP addresses. This is
possible only when there are a small number of devices to track.
Servers should be assigned a static IP address so workstations and other devices will always know
how to access needed services. Consider how difficult it would be to phone a business that changed
its phone number every day.
Other devices that should be assigned static IP addresses are network printers, application servers,
and routers.
9.3.3 RARP IP address assignment
Reverse Address Resolution Protocol (RARP) associates a known MAC address with an IP address. This association allows network devices to encapsulate data before sending the data out
on the network. A network device, such as a diskless workstation, might know its MAC address but
not its IP address. RARP allows the device to make a request to learn its IP address. Devices using
RARP require that a RARP server be present on the network to answer RARP requests.
Consider an example where a source device wants to send data to another device. In this example,
the source device knows its own MAC address but is unable to locate its own IP address in the ARP
table. The source device must include both its MAC address and IP address in order for the
destination device to retrieve data, pass it to higher layers of the OSI model, and respond to the
originating device. Therefore, the source initiates a process called a RARP request. This request
helps the source device detect its own IP address. RARP requests are broadcast onto the LAN and
are responded to by the RARP server which is usually a router.
RARP uses the same packet format as ARP. However, in a RARP request, the MAC headers and
"operation code" are different from an ARP request. The RARP packet format contains places
for MAC addresses of both the destination and source devices. The source IP address field is empty.
The broadcast goes to all devices on the network. Therefore, the destination MAC address will be
set to all binary 1s. Workstations running RARP have codes in ROM that direct them to start the
RARP process. A step-by-step layout of the RARP process is illustrated in the accompanying figures.
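Since RARP reuses the ARP packet format, the request can be sketched with a few struct fields. This is a simplified illustration of the payload layout (per the ARP/RARP packet format, RFC 903), not working network code; operation code 3 identifies a RARP request, and the IP address fields are left empty because the sender does not yet know its IP address:

```python
import struct

def build_rarp_request(src_mac):
    """Sketch of a RARP request payload (ARP packet format, operation 3).
    The sender asks for its own IP address, so the target hardware
    address is its own MAC and both IP address fields are zero."""
    htype, ptype, hlen, plen, oper = 1, 0x0800, 6, 4, 3
    empty_ip = b"\x00" * 4              # source/target IP unknown
    return (struct.pack("!HHBBH", htype, ptype, hlen, plen, oper)
            + src_mac + empty_ip        # sender hardware / protocol address
            + src_mac + empty_ip)       # target hardware / protocol address
```

On the wire this payload would be carried in an Ethernet frame whose destination MAC is all binary 1s (the broadcast address), so that the RARP server receives it.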
9.3.4 BOOTP IP address assignment
The bootstrap protocol (BOOTP) operates in a client-server environment and only requires a single
packet exchange to obtain IP information. However, unlike RARP, BOOTP packets can include the IP address, as well as the address of a router, the address of a server, and vendor-specific information.
One problem with BOOTP is that it was not designed to provide dynamic address
assignment. With BOOTP, a network administrator creates a configuration file that specifies the
parameters for each device. The administrator must add hosts and maintain the BOOTP database.
Even though addresses are assigned from a server, there is still a one-to-one relationship between the number of IP addresses and the number of hosts. This means that for every host on the network there must be a BOOTP profile with an IP address assignment in it. No two profiles can have the same IP address, because those profiles might be used at the same time, which would mean that two hosts had the same IP address.
A device uses BOOTP to obtain an IP address when starting up. BOOTP uses UDP to carry
messages. The UDP message is encapsulated in an IP packet. A computer uses BOOTP to send a broadcast IP packet with a destination IP address of all 1s, 255.255.255.255 in dotted decimal notation. A BOOTP server receives the broadcast and then sends back a broadcast. The client
receives a frame and checks the MAC address. If the client finds its own MAC address in the
destination address field and a broadcast in the IP destination field, it takes and stores the IP address
and other information supplied in the BOOTP reply message. A step-by-step description of the
process is shown in the accompanying figures.
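The fixed-format BOOTP message that the client broadcasts can be sketched with struct. This is a simplified illustration assuming the RFC 951 field layout; it builds the header only and is not working network code:

```python
import struct

def build_bootp_request(client_mac, xid):
    """Minimal sketch of a BOOTP request (RFC 951 fixed-format header).
    The client broadcasts this in a UDP datagram to 255.255.255.255."""
    op, htype, hlen, hops = 1, 1, 6, 0      # op 1 = BOOTREQUEST
    secs, unused = 0, 0
    zero_ip = b"\x00" * 4                   # ciaddr/yiaddr/siaddr/giaddr unknown
    chaddr = client_mac.ljust(16, b"\x00")  # client hardware address field
    header = struct.pack("!BBBBIHH", op, htype, hlen, hops, xid, secs, unused)
    return (header + zero_ip * 4 + chaddr
            + b"\x00" * 64                  # server host name field
            + b"\x00" * 128)                # boot file name field
```

The server's reply fills in the client's IP address (and optionally the router and server addresses) in the fields the client left as zeros.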
9.3.5 DHCP IP address management
Dynamic host configuration protocol (DHCP) is the successor to BOOTP. Unlike BOOTP, DHCP
allows a host to obtain an IP address dynamically without the network administrator having to set
up an individual profile for each device. All that is required when using DHCP is a defined range of
IP addresses on a DHCP server. As hosts come online, they contact the DHCP server and request an
address. The DHCP server chooses an address and leases it to that host. With DHCP, the entire
network configuration of a computer can be obtained in one message. This includes all of the data
supplied by the BOOTP message, plus a leased IP address and a subnet mask.
The major advantage that DHCP has over BOOTP is that it allows users to be mobile. This mobility
allows the users to freely change network connections from location to location. It is no longer
required to keep a fixed profile for every device attached to the network as was required with the
BOOTP system. The importance to this DHCP advancement is its ability to lease an IP address to a
device and then reclaim that IP address for another user after the first user releases it. This means that DHCP offers a one-to-many relationship between IP addresses and users, and that an address is available to anyone who
connects to the network.
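The lease-and-reclaim behavior that distinguishes DHCP from BOOTP can be modelled in a few lines. This is a toy sketch; the class name and addresses are purely illustrative and none of the real DHCP message exchange is shown:

```python
class DhcpPool:
    """Toy model of DHCP leasing: a defined range of addresses that can be
    leased to any client and reclaimed when released (one-to-many reuse)."""
    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}                    # client MAC -> leased address

    def request(self, mac):
        """Lease an address to the client, or return its existing lease."""
        if mac not in self.leases:
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]

    def release(self, mac):
        """Return the client's address to the pool for reuse."""
        self.free.append(self.leases.pop(mac))
```

Once a client releases its lease, the same address becomes available to the next client that connects, which is exactly the reuse that a fixed BOOTP profile cannot provide.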
9.3.6 Problems in address resolution
One of the major problems in networking is how to communicate with other network devices. In
TCP/IP communications, a datagram on a local-area network must contain both a destination MAC
address and a destination IP address. These addresses must be correct and match the destination
MAC and IP addresses of the host device. If they do not match, the datagram will be discarded by
the destination host. Communications within a LAN segment require two addresses. There needs to
be a way to automatically map IP to MAC addresses. It would be too time consuming for the user to
create the maps manually. The TCP/IP suite has a protocol, called Address Resolution Protocol
(ARP), which can automatically obtain MAC addresses for local transmission. Different issues are
raised when data is sent outside of the local area network.
Communications between two LAN segments have an additional task. Both the IP and MAC
addresses are needed for both the destination host and the intermediate routing device. TCP/IP has a
variation on ARP called Proxy ARP that will provide the MAC address of an intermediate device
for transmission outside the LAN to another network segment.
9.3.7 Address Resolution Protocol (ARP)
With TCP/IP networking, a data packet must contain both a destination MAC address and a
destination IP address. If the packet is missing either one, the data will not pass from Layer 3 to the
upper layers. In this way, MAC addresses and IP addresses act as checks and balances for each
other. After devices determine the IP addresses of the destination devices, they can add the
destination MAC addresses to the data packets.
Some devices will keep tables that contain MAC addresses and IP addresses of other devices that
are connected to the same LAN. These are called Address Resolution Protocol (ARP) tables. ARP
tables are stored in RAM, where the cached information is maintained automatically on
each of the devices. It is very unusual for a user to have to make an ARP table entry manually. Each
device on a network maintains its own ARP table. When a network device wants to send data across
the network, it uses information provided by the ARP table.
When a source determines the IP address for a destination, it then consults the ARP table in order to
locate the MAC address for the destination. If the source finds an entry in its table that maps the destination IP address to a destination MAC address, it uses that MAC address to encapsulate the data. The data packet is then sent out over the networking media to be
picked up by the destination device.
There are two ways that devices can gather MAC addresses that they need to add to the
encapsulated data. One way is to monitor the traffic that occurs on the local network segment. All
stations on an Ethernet network will analyze all traffic to determine if the data is for them. Part of
this process is to record the source IP and MAC address of the datagram to an ARP table. So as data
is transmitted on the network, the address pairs populate the ARP table. Another way to get an
address pair for data transmission is to broadcast an ARP request.
The computer that requires an IP and MAC address pair broadcasts an ARP request. All the other
devices on the local area network analyze this request. If one of the local devices matches the IP
address of the request, it sends back an ARP reply that contains its IP-MAC pair. If the IP address is
for the local area network and the computer does not exist or is turned off, there is no response to
the ARP request. In this situation, the source device reports an error. If the request is for a different
IP network, there is another process that can be used.
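The resolution flow above, consult the cached ARP table first, fall back to a broadcast that only the matching host answers, and report an error when no reply arrives, can be sketched as follows. All names are illustrative, and the broadcast is modelled as a simple dictionary lookup of the hosts on the segment:

```python
def resolve_mac(dest_ip, arp_table, local_hosts):
    """Sketch of ARP resolution on one LAN segment."""
    if dest_ip in arp_table:                # cached IP-to-MAC pair
        return arp_table[dest_ip]
    if dest_ip in local_hosts:              # the matching host answers
        arp_table[dest_ip] = local_hosts[dest_ip]   # cache the reply
        return arp_table[dest_ip]
    raise LookupError("no ARP reply for " + dest_ip)   # host absent or off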
Routers do not forward broadcast packets. If the feature is turned on, a router performs a proxy
ARP. Proxy ARP is a variation of the ARP protocol in which a router sends the requesting host an ARP response containing the MAC address of the interface on which the request was received. The router responds in this way only to requests whose target IP address is not in the range of addresses of the local subnet.
Another method to send data to the address of a device that is on another network segment is to set
up a default gateway. The default gateway is a host option where the IP address of the router
interface is stored in the network configuration of the host. The source host compares the
destination IP address and its own IP address to determine if the two IP addresses are located on the
same segment. If the receiving host is not on the same segment, the source host sends the data using
the actual IP address of the destination and the MAC address of the router. The MAC address for
the router was learned from the ARP table by using the IP address of that router.
If the default gateway on the host or the proxy ARP feature on the router is not configured, no
traffic can leave the local area network. One or the other is required to have a connection outside of
the local area network.
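The default gateway decision described above can be sketched as follows. The source host ANDs both addresses with the mask; matching network portions mean the destination is local and is ARPed for directly, otherwise the frame carries the gateway's MAC while the IP destination stays unchanged. All names are illustrative and the ARP lookups are modelled as a dictionary:

```python
def next_hop_mac(src_ip, dest_ip, mask, arp_table, gateway_ip):
    """Sketch of the source host's framing decision for an outgoing packet."""
    to_int = lambda a: int.from_bytes(bytes(int(o) for o in a.split(".")), "big")
    # Same segment if the ANDed network portions match.
    same_segment = (to_int(src_ip) & to_int(mask)) == (to_int(dest_ip) & to_int(mask))
    # Local destination: use its own MAC; remote: use the gateway's MAC.
    return arp_table[dest_ip] if same_segment else arp_table[gateway_ip]
```

For a host at 192.168.1.10 with mask 255.255.255.0, a packet to 192.168.1.20 is framed with that host's MAC, while a packet to 8.8.8.8 is framed with the gateway's MAC.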
Module 10: Routing Fundamentals and Subnets
Internet Protocol (IP) is the routed protocol of the Internet. IP addressing enables packets to be
routed from source to destination using the best available path. The propagation of packets,
encapsulation changes, and connection-oriented and connectionless protocols are also critical to
ensure that data is properly transmitted to its destination. This module provides an overview of these topics.
The difference between routing and routed protocols is a common source of confusion for students
learning networking. The two words sound similar but are quite different. This module also
introduces routing protocols which allow routers to build tables from which to determine the best
path to a host on the Internet.
No two organizations in the world are identical. In fact, not all organizations fit into the three-class system of A, B, and C addresses. However, flexibility does exist within the class
addressing system and it is called subnetting. Subnetting allows the network administrators to
determine the size of the pieces of the network they will be working with. Once they have
determined how to segment their network, they can then use the subnet mask to determine what part
of the network each device is on.
10.1.1 Routable and routed protocols
A protocol is a set of rules that determines how computers communicate with each other across
networks. Computers communicate with one another by exchanging data messages. To accept and
act on these messages, computers must have definitions of how a message is interpreted. Examples
of messages include those establishing a connection to a remote machine, e-mail messages, and files
transferred over a network.
A protocol describes the following:
The format that a message must conform to
The way in which computers must exchange a message within the context of a particular activity
A routed protocol allows the router to forward data between nodes on different networks. In order
for a protocol to be routable, it must provide the ability to assign a network number and a host
number to each individual device. Some protocols, such as IPX, require only a network number
because these protocols use the host's MAC address for the host number. Other protocols, such as
IP, require a complete address consisting of a network portion and a host portion. These protocols
also require a network mask in order to differentiate the two numbers. The network address is
obtained by ANDing the address with the network mask.
The reason that a network mask is used is to allow groups of sequential IP addresses to be treated as
a single unit. If this grouping were not allowed, each host would have to be mapped individually
for routing. According to the Internet Software Consortium, this would not be possible with the
162,128,000 hosts that are currently on the Internet.
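The ANDing operation described above is a simple octet-by-octet bitwise AND. A minimal Python sketch (the function name is illustrative):

```python
def network_of(address, mask):
    """The network address is obtained by ANDing the host address
    with the network mask, octet by octet."""
    return ".".join(str(int(a) & int(m))
                    for a, m in zip(address.split("."), mask.split(".")))
```

For example, ANDing 172.16.33.5 with the mask 255.255.0.0 yields the network address 172.16.0.0, so every 172.16.x.x host can be routed as a single unit.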
10.1.2 IP as a routed protocol
The Internet Protocol (IP) is the most widely used implementation of a hierarchical network-
addressing scheme. IP is a connectionless, unreliable, best-effort delivery protocol. The term
connectionless means that no dedicated circuit connection is established prior to transmission as
there is when placing a telephone call. IP determines the most efficient route for data based on the
routing protocol. The terms unreliable and best-effort do not imply that the system is unreliable and
does not work well, but that IP does not verify that the data reached its destination. This function is
handled by the upper layer protocols.
As information flows down the layers of the OSI model, the data is processed at each layer. At the
network layer, the data is encapsulated into packets, also known as datagrams. IP determines the
contents of the IP packet header, which includes addressing and other control information, but is not
concerned with the actual data. IP accepts whatever data is passed down to it from the upper layers.
10.1.3 Packet propagation and switching within a router
As a packet travels through an internetwork to its final destination, the Layer 2 frame headers and
trailers are removed and replaced at every Layer 3 device. This is because Layer 2 data units,
frames, are for local addressing. Layer 3 data units, packets, are for end-to-end addressing.
Layer 2 Ethernet frames are designed to operate within a broadcast domain using the MAC address
that is burned into the physical device. Other Layer 2 frame types include Point-to-Point Protocol
(PPP) serial links and Frame Relay connections, which use different Layer 2 addressing schemes.
Regardless of the type of Layer 2 addressing used, frames are designed to operate within a Layer 2 broadcast domain. As the data crosses a Layer 3 device, the Layer 2 information changes.
As a frame is received at a router interface, the destination MAC address is extracted. The address
is checked to see if the frame is directly addressed to the router interface, or if it is a broadcast. In
either of these two cases, the frame is accepted. Otherwise, the frame is discarded since it is
destined for another device on the collision domain. For an accepted frame, the Cyclic Redundancy Check (CRC) value is extracted from the frame trailer and recalculated to verify that the frame data is without error. If the check fails, the frame is discarded. If the check is valid, the frame
header and trailer are removed and the packet is passed up to Layer 3. The packet is then checked to
see if it is actually destined for the router, or if it is to be routed to another device in the
internetwork. If the destination IP address matches one of the router ports, the Layer 3 header is
removed and the data is passed up to Layer 4. If the packet is to be routed, the destination IP
address will be compared to the routing table. If a match is found or there is a default route, the
packet will be sent to the interface specified in the matched routing table statement. When the
packet is switched to the outgoing interface, a new CRC value is added as a frame trailer, and the
proper frame header is added to the packet. The frame is then transmitted to the next broadcast
domain on its trip to the final destination.
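The sequence of steps above can be modelled in miniature. The sketch below uses plain dictionaries for frames and packets and a stand-in checksum in place of a real CRC; none of the names reflect an actual router's API, and the routing table is simplified to map a destination IP directly to a next-hop MAC:

```python
def checksum(packet):
    """Stand-in for the CRC: any deterministic function of the packet."""
    return sum(ord(c) for c in repr(sorted(packet.items())))

def forward(frame, my_macs, my_ips, routing_table):
    """Sketch of the per-hop steps: accept, verify, de-encapsulate, route."""
    if frame["dst_mac"] not in my_macs and frame["dst_mac"] != "broadcast":
        return "discard"                    # for another device on the segment
    packet = frame["packet"]                # Layer 2 header/trailer removed
    if frame["crc"] != checksum(packet):
        return "discard"                    # trailer check failed
    if packet["dst_ip"] in my_ips:
        return "deliver to Layer 4"         # addressed to the router itself
    next_hop = routing_table[packet["dst_ip"]]
    return {"dst_mac": next_hop,            # new Layer 2 header...
            "packet": packet,               # ...Layer 3 packet unchanged...
            "crc": checksum(packet)}        # ...and new trailer computed
```

Note that the Layer 3 packet passes through unchanged while the Layer 2 header and trailer are rebuilt for the outgoing interface, which is exactly the behavior described above.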
10.1.4 Internet Protocol (IP)
Two types of delivery services are connectionless and connection-oriented. These two services
provide the actual end-to-end delivery of data in an internetwork.
Most network services use a connectionless delivery system. Different packets may take different
paths to get through the network, but are reassembled after arriving at the destination. In a
connectionless system, the destination is not contacted before a packet is sent. A good comparison
for a connectionless system is a postal system. The recipient is not contacted to see if they will
accept the letter before it is sent. Also, the sender never knows whether the letter arrived at the destination.
In connection-oriented systems, a connection is established between the sender and the recipient
before any data is transferred. An example of a connection-oriented network is the telephone
system. The caller places the call, a connection is established, and then communication occurs.
Connectionless network processes are often referred to as packet switched processes. As the packets
pass from source to destination, packets can switch to different paths, and possibly arrive out of
order. Devices make the path determination for each packet based on a variety of criteria. Some of
the criteria, such as available bandwidth, may differ from packet to packet.
Connection-oriented network processes are often referred to as circuit switched processes. A
connection with the recipient is first established, and then data transfer begins. All packets travel
sequentially across the same physical or virtual circuit.
The Internet is a gigantic, connectionless network in which all packet deliveries are handled by IP.
TCP adds Layer 4, connection-oriented reliability services to IP.
10.1.5 Anatomy of an IP packet
IP packets consist of the data from upper layers plus an IP header. The IP header consists of the following fields:
Version – Indicates the version of IP currently used, four bits. If the version field is different than the IP version of the receiving device, that device will reject the packets.
IP header length (HLEN) – Indicates the datagram header length in 32-bit words. This is the total
length of all header information, accounting for the two variable-length header fields.
Type-of-service (TOS) – Specifies the level of importance that has been assigned by a particular
upper-layer protocol, eight bits.
Total length – Specifies the length of the entire packet in bytes, including data and header, 16 bits. To get the length of the data payload, subtract the HLEN from the total length.
Identification – Contains an integer that identifies the current datagram, 16 bits. This is the
sequence number.
Flags – A three-bit field in which the two low-order bits control fragmentation. One bit specifies
whether the packet can be fragmented, and the other specifies whether the packet is the last
fragment in a series of fragmented packets.
Fragment offset – Used to help piece together datagram fragments, 13 bits. This field allows the
previous field to end on a 16-bit boundary.
Time-to-live (TTL) – A field that specifies the number of hops a packet may travel. This number is
decreased by one as the packet travels through a router. When the counter reaches zero the packet is
discarded. This prevents packets from looping endlessly.
Protocol – Indicates which upper-layer protocol, such as TCP or UDP, receives incoming packets after IP processing has been completed, eight bits.
Header checksum – Helps ensure IP header integrity, 16 bits.
Source address – Specifies the sending node IP address, 32 bits.
Destination address – Specifies the receiving node IP address, 32 bits.
Options – Allows IP to support various options, such as security, variable length.
Padding – Extra zeros are added to this field to ensure that the IP header is always a multiple of 32 bits.
Data – Contains upper-layer information, variable length up to 64 Kb.
While the IP source and destination addresses are important, the other header fields have made IP
very flexible. The header fields are the information that is provided to the upper layer protocols
defining the data in the packet.
10.2.1 Routing overview
Routing is an OSI Layer 3 function. Routing is a hierarchical organizational scheme that allows
individual addresses to be grouped together. These individual addresses are treated as a single unit
until the destination address is needed for final delivery of the data. Routing is the process of
finding the most efficient path from one device to another. The primary device that performs the
routing process is the router.
The following are the two key functions of a router:
Routers must maintain routing tables and make sure other routers know of changes in the network
topology. This function is performed using a routing protocol to communicate network information
with other routers.
When packets arrive at an interface, the router must use the routing table to determine where to send
them. The router switches the packets to the appropriate interface, adds the necessary framing
information for the interface, and then transmits the frame.
A router is a network layer device that uses one or more routing metrics to determine the optimal
path along which network traffic should be forwarded. Routing metrics are values used in
determining the advantage of one route over another. Routing protocols use various combinations
of metrics for determining the best path for data.
Routers interconnect network segments or entire networks. Routers pass data frames between
networks based on Layer 3 information. Routers make logical decisions regarding the best path for
the delivery of data. Routers then direct packets to the appropriate output port to be encapsulated for
transmission. The encapsulation and de-encapsulation process occurs each time a packet transfers
through a router. As shown in Figure 4, the process of sending data from one device to another
involves the process of encapsulation and de-encapsulation. This process breaks up the data stream
into segments, adds the appropriate headers and trailers then transmits the data. The de-
encapsulation process is the opposite process, removing the headers and trailers, then recombining
the data into a seamless stream.
This course focuses on the most common routable protocol, which is the Internet Protocol (IP).
Other examples of routable protocols include IPX/SPX and AppleTalk. These protocols provide
Layer 3 support. Non-routable protocols do not provide Layer 3 support. The most common non-
routable protocol is NetBEUI. NetBEUI is a small, fast, and efficient protocol that is limited to
frame delivery within one segment.
10.2.2 Routing versus switching
Routing is often contrasted with switching. Routing and switching might seem to perform the
same function to the inexperienced observer. The primary difference is that switching occurs at
Layer 2, the data link layer, of the OSI model and routing occurs at Layer 3. This distinction means
routing and switching use different information in the process of moving data from source to
destination.
The relationship between switching and routing parallels that of local and long distance telephone
calls. When a telephone call is made to a number within the same area code, a local switch handles
the call. However, the local switch can only keep track of its own local numbers. The local switch
cannot handle all the telephone numbers in the world. When the switch receives a request for a call
outside of its area code, it switches the call to a higher-level switch that recognizes area codes. The
higher-level switch then switches the call so that it eventually gets to the local switch for the area
code dialed.
The router performs a function similar to that of the higher-level switch in the telephone example.
Figure shows the ARP tables for Layer 2 addressing and routing tables for Layer 3 addressing.
Each computer and router interface maintains an ARP table for Layer 2 communication. The ARP
table is only effective for the broadcast domain (or LAN) that it is connected to. The router also
maintains a routing table that allows it to route data outside of the broadcast domain. Each ARP
table contains an IP-MAC address pair (the MAC addresses in the graphic are represented by the
acronym MAC, as the actual addresses are too long to fit in the graphic). The routing tables also
track how the route was learned (in this case either directly connected [C] or learned by RIP [R]),
the network IP address for reachable networks, the hop count or distance to those networks, and the
interface the data must be sent out to get to the destination network.
The Layer 2 switch can only recognize its own local MAC addresses and cannot handle Layer 3 IP
addresses. When a host has data for a non-local IP address, it sends the frame to the closest router,
also known as its default gateway. The host uses the MAC address of the router as the destination
MAC address.
A Layer 2 switch interconnects segments belonging to the same logical network or subnetwork. If
Host X needs to send a frame to a host on a different network or subnetwork, Host X sends the
frame to the router that is also connected to the switch. The switch forwards the frame to the router
based on the destination MAC address. The router examines the Layer 3 destination address of the
packet to make the forwarding decision. Host X knows the IP address of the router because the IP
configuration of Host X includes the IP address of the router as the default gateway.
Just as a Layer 2 switch keeps a table of known MAC addresses, the router keeps a table of IP
addresses known as a routing table. There is a difference between these two types of addresses.
MAC addresses are not logically organized, but IP addresses are organized in a hierarchical manner.
A Layer 2 device can handle a reasonable number of unorganized MAC addresses, because it will
only have to search its table for those addresses within its segment. Routers need to handle a greater
volume of addresses. Therefore, routers need an organized addressing system that can group similar
addresses together and treat them as a single network unit until the data reaches the destination
segment. If IP addresses were not organized, the Internet simply would not work. An example
would be like a library that contained millions of individual pages of printed material in a large pile.
This material is useless because it is impossible to locate an individual document. If the pages are
organized into books and each page is individually identified, and the books are also listed in a book
index, it becomes a lot easier to locate and use the data.
Another difference between switched and routed networks is switched networks do not block
broadcasts. As a result, switches can be overwhelmed by broadcast storms. Routers block LAN
broadcasts, so a broadcast storm only affects the broadcast domain from which it originated.
Because routers block broadcasts, routers also provide a higher level of security and bandwidth
control than switches.
10.2.3 Routed versus routing
Protocols used at the network layer that transfer data from one host to another across a router are
called routed or routable protocols. Routed protocols transport data across a network. Routing
protocols allow routers to choose the best path for data from source to destination.
The functions of a routed protocol include the following:
Includes any network protocol suite that provides enough information in its network layer address
to allow a router to forward it to the next device and ultimately to its destination.
Defines the format and use of the fields within a packet
The Internet Protocol (IP) and Novell's Internetwork Packet Exchange (IPX) are examples of routed
protocols. Other examples include DECnet, AppleTalk, Banyan VINES, and Xerox Network
Systems (XNS).
Routers use routing protocols to exchange routing tables and share routing information. In other
words, routing protocols enable routers to route routed protocols.
The functions of a routing protocol include the following:
Provides processes for sharing route information
Allows routers to communicate with other routers to update and maintain the routing tables
Examples of routing protocols that support the IP routed protocol include the Routing Information
Protocol (RIP), Interior Gateway Routing Protocol (IGRP), Open Shortest Path First (OSPF),
Border Gateway Protocol (BGP), and Enhanced IGRP (EIGRP).
10.2.4 Path determination
Path determination occurs at the network layer. Path determination enables a router to compare
the destination address to the available routes in its routing table, and to select the best path. The
routers learn of these available routes through static routing or dynamic routing. Routes configured
manually by the network administrator are static routes. Routes learned by other routers using a
routing protocol are dynamic routes.
The router uses path determination to decide which port an incoming packet should be sent out of to
travel on to its destination. This process is also referred to as routing the packet. Each router that
the packet encounters along the way is called a hop. The hop count is the distance traveled. Path
determination can be compared to a person driving a car from one location in a city to another. The
driver has a map that shows the streets that can be taken to get to the destination, just as a router has
a routing table. The driver travels from one intersection to another just as a packet travels from one
router to another in each hop. At any intersection, the driver can route himself by choosing to turn
left, turn right, or go straight ahead. In the same manner, a router decides which outbound port the
packet should be sent.
The decisions of a driver are influenced by factors such as traffic on a road, the speed limit of the
road, the number of lanes on the road, whether or not there is a toll on the road, and whether or not
the road is frequently closed. Sometimes it is faster to take a longer route on a smaller, less crowded
back street instead of a highway with a lot of traffic. Similarly, routers can make decisions based on
the load, bandwidth, delay, cost, and reliability of a network link.
The following process is used during path determination for every packet that is routed:
The destination address is obtained from the packet.
The mask of the first entry in the routing table is applied to the destination address.
The masked destination and the routing table entry are compared.
If there is a match, the packet is forwarded to the port that is associated with that table entry.
If there is not a match, the next entry in the table is checked.
If the packet does not match any entries in the table, the router checks to see if a default route has
been set.
If a default route has been set, the packet is forwarded to the associated port. A default route is a
route that is configured by the network administrator as the route to use if there are no matches in
the routing table.
If there is no default route, the packet is discarded. Usually a message is sent back to the sending
device indicating that the destination was unreachable.
10.2.5 Routing tables
Routers use routing protocols to build and maintain routing tables that contain route information.
This aids in the process of path determination. Routing protocols fill routing tables with a variety of
route information. This information varies depending on the routing protocol used. Routing tables
contain the information necessary to forward data packets across connected networks. Layer 3
devices interconnect broadcast domains or LANs. A hierarchical addressing scheme is required for
data transfer to occur.
Routers keep track of important information in their routing tables, including the following:
Protocol type – The type of routing protocol that created the routing table entry
Destination/next-hop associations – These associations tell a router that a particular destination is
either directly connected to the router, or that it can be reached using another router called the
“next-hop” on the way to the final destination. When a router receives an incoming packet, it checks
the destination address and attempts to match this address with a routing table entry.
Routing metric – Different routing protocols use different routing metrics. Routing metrics are
used to determine the desirability of a route. For example, the Routing Information Protocol (RIP)
uses hop count as its only routing metric. Interior Gateway Routing Protocol (IGRP) uses a
combination of bandwidth, load, delay, and reliability metrics to create a composite metric value.
Outbound interfaces – The interface that the data must be sent out on in order to reach the final
destination.
Routers communicate with one another to maintain their routing tables through the transmission of
routing update messages. Some routing protocols transmit update messages periodically, while
others send them only when there are changes in the network topology. Some protocols transmit the
entire routing table in each update message, and some transmit only routes that have changed. By
analyzing the routing updates from the neighboring routers, a router builds and maintains its routing
table.
10.2.6 Routing algorithms and metrics
An algorithm is a detailed solution to a problem. In the case of routing packets, different routing
protocols use different algorithms to decide which port an incoming packet should be sent to.
Routing algorithms depend on metrics to make these decisions.
Routing protocols often have one or more of the following design goals:
Optimization – Optimization describes the capability of the routing algorithm to select the best
route. The route will depend on the metrics and metric weightings used in the calculation. For
example, one algorithm may use both hop count and delay metrics, but may consider delay metrics
as more important in the calculation.
Simplicity and low overhead – The simpler the algorithm, the more efficiently it will be processed
by the CPU and memory in the router. This is important so that the network can scale to large
proportions, such as the Internet.
Robustness and stability – A routing algorithm should perform correctly when confronted by
unusual or unforeseen circumstances, such as hardware failures, high load conditions, and
implementation errors.
Flexibility – A routing algorithm should quickly adapt to a variety of network changes. These
changes include router availability, router memory, changes in bandwidth, and network delay.
Rapid convergence – Convergence is the process of agreement by all routers on available routes.
When a network event causes changes in router availability, updates are needed to reestablish
network connectivity. Routing algorithms that converge slowly can cause data to be undeliverable.
Routing algorithms use different metrics to determine the best route. Each routing algorithm
interprets what is best in its own way. The routing algorithm generates a number, called the metric
value, for each path through the network. Sophisticated routing algorithms base route selection on
multiple metrics, combining them in a single composite metric value. Typically, smaller metric
values indicate preferred paths.
Metrics can be based on a single characteristic of a path, or can be calculated based on several
characteristics. The following are the metrics that are most commonly used by routing protocols:
Bandwidth – The data capacity of a link. Normally, a 10-Mbps Ethernet link is preferable to a 64-
kbps leased line.
Delay – The length of time required to move a packet along each link from source to destination.
Delay depends on the bandwidth of intermediate links, the amount of data that can be temporarily
stored at each router, network congestion, and physical distance.
Load – The amount of activity on a network resource such as a router or a link.
Reliability – Usually a reference to the error rate of each network link.
Hop count – The number of routers that a packet must travel through before reaching its
destination. Each router the data must pass through is equal to one hop. A path that has a hop count
of four indicates that data traveling along that path would have to pass through four routers before
reaching its final destination. If multiple paths are available to a destination, the path with the least
number of hops is preferred.
Ticks – The delay on a data link using IBM PC clock ticks. One tick is approximately 1/18 second.
Cost – An arbitrary value, usually based on bandwidth, monetary expense, or other measurement,
that is assigned by a network administrator.
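The idea of combining several of the metrics above into a single composite value can be sketched as
follows. This is a simplified illustration loosely modeled on IGRP's default behavior (only the
bandwidth and delay terms, as when the load and reliability constants are zero); the constants and
scaling are illustrative, not the exact production formula.

```python
def composite_metric(bandwidth_kbps: int, delay_usec: int,
                     k1: int = 1, k3: int = 1) -> int:
    """Combine bandwidth and delay into one metric value.
    Smaller values indicate preferred paths."""
    bw = 10_000_000 // bandwidth_kbps   # inverse of the slowest link's bandwidth
    dly = delay_usec // 10              # cumulative delay in tens of microseconds
    return k1 * bw + k3 * dly

# A 10-Mbps Ethernet link versus a 64-kbps leased line (hypothetical delays):
ethernet = composite_metric(10_000, 1_000)
leased = composite_metric(64, 20_000)
```

Because smaller metric values are preferred, the 10-Mbps link wins, matching the bandwidth example
in the list above.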
10.2.7 IGP and EGP
An autonomous system is a network or set of networks under common administrative control, such
as the internetwork of a single organization. An autonomous system consists of routers that present a consistent view
of routing to the external world.
Two families of routing protocols are Interior Gateway Protocols (IGPs) and Exterior Gateway
Protocols (EGPs).
IGPs route data within an autonomous system. Examples of IGPs include the following:
Routing Information Protocol (RIP) and RIP version 2 (RIPv2)
Interior Gateway Routing Protocol (IGRP)
Enhanced Interior Gateway Routing Protocol (EIGRP)
Open Shortest Path First (OSPF)
Intermediate System-to-Intermediate System protocol (IS-IS)
EGPs route data between autonomous systems. An example of an EGP is Border Gateway Protocol
(BGP).
10.2.8 Link state and distance vector
Routing protocols can be classified as either IGPs or EGPs, which describes whether a group of
routers is under a single administration or not. IGPs can be further categorized as either distance-
vector or link-state protocols.
The distance-vector routing approach determines the distance and direction, or vector, to any link in
the internetwork. The distance may be the hop count to the link. Routers using distance-vector
algorithms send all or part of their routing table entries to adjacent routers on a periodic basis. This
happens even if there are no changes in the network. By receiving a routing update, a router can
verify all the known routes and make changes to its routing table. This process is also known as
“routing by rumor”. A router's understanding of the network is based upon the adjacent router's
perspective of the network topology.
Examples of distance-vector protocols include the following:
Routing Information Protocol (RIP) – The most common IGP in the Internet, RIP uses hop count
as its only routing metric.
Interior Gateway Routing Protocol (IGRP) – This IGP was developed by Cisco to address issues
associated with routing in large, heterogeneous networks.
Enhanced IGRP (EIGRP) – This Cisco-proprietary IGP includes many of the features of a link-
state routing protocol. Because of this, it has been called a balanced-hybrid protocol, but it is really
an advanced distance-vector routing protocol.
Link-state routing protocols were designed to overcome limitations of distance vector routing
protocols. Link-state routing protocols respond quickly to network changes, sending triggered
updates only when a network change has occurred. They also send periodic updates,
known as link-state refreshes, at longer time intervals, such as every 30 minutes.
When a route or link changes, the device that detected the change creates a link-state advertisement
(LSA) concerning that link. The LSA is then transmitted to all neighboring devices. Each routing
device takes a copy of the LSA, updates its link-state database, and forwards the LSA to all
neighboring devices. This flooding of LSAs is required to ensure that all routing devices create
databases that accurately reflect the network topology before updating their routing tables.
Link-state algorithms typically use their databases to create routing table entries that prefer the
shortest path. Examples of link-state protocols include Open Shortest Path First (OSPF) and
Intermediate System-to-Intermediate System (IS-IS).
10.2.9 Routing protocols
RIP is a distance vector routing protocol that uses hop count as its metric to determine the direction
and distance to any link in the internetwork. If there are multiple paths to a destination, RIP selects
the path with the least number of hops. However, because hop count is the only routing metric used
by RIP, it does not always select the fastest path to a destination. Also, RIP cannot route a packet
beyond 15 hops. RIP Version 1 (RIPv1) requires that all devices in the network use the same subnet
mask, because it does not include subnet mask information in routing updates. This is also known
as classful routing.
RIP Version 2 (RIPv2) provides prefix routing, and does send subnet mask information in routing
updates. This is also known as classless routing. With classless routing protocols, different subnets
within the same network can have different subnet masks. The use of different subnet masks within
the same network is referred to as variable-length subnet masking (VLSM).
IGRP is a distance-vector routing protocol developed by Cisco. IGRP was developed specifically to
address problems associated with routing in large networks that were beyond the range of protocols
such as RIP. IGRP can select the fastest available path based on delay, bandwidth, load, and
reliability. IGRP also has a much higher maximum hop count limit than RIP. IGRP uses only
classful routing.
OSPF is a link-state routing protocol developed by the Internet Engineering Task Force (IETF) in
1988. OSPF was written to address the needs of large, scalable internetworks that RIP could not.
Intermediate System-to-Intermediate System (IS-IS) is a link-state routing protocol used for routed
protocols other than IP. Integrated IS-IS is an expanded implementation of IS-IS that supports
multiple routed protocols including IP.
Like IGRP, EIGRP is a proprietary Cisco protocol. EIGRP is an advanced version of IGRP.
Specifically, EIGRP provides superior operating efficiency such as fast convergence and low
overhead bandwidth. EIGRP is an advanced distance-vector protocol that also uses some link-state
protocol functions. Therefore, EIGRP is sometimes categorized as a hybrid routing protocol.
Border Gateway Protocol (BGP) is an example of an Exterior Gateway Protocol (EGP). BGP
exchanges routing information between autonomous systems while guaranteeing loop-free path
selection. BGP is the principal route advertising protocol used by major companies and ISPs on the
Internet. BGP4 is the first version of BGP that supports classless interdomain routing (CIDR) and
route aggregation. Unlike common Interior Gateway Protocols (IGPs), such as RIP, OSPF, and
EIGRP, BGP does not use metrics like hop count, bandwidth, or delay. Instead, BGP makes routing
decisions based on network policies, or rules using various BGP path attributes.
10.3.1 Classes of network IP addresses
Classes of IP addresses offer a range from 256 to 16.8 million hosts, as discussed previously in this
module. To efficiently manage a limited supply of IP addresses, all classes can be subdivided into
smaller subnetworks. Figure provides an overview of the division between networks and hosts.
10.3.2 Introduction to and reason for subnetting
To create the subnetwork structure, host bits must be reassigned as network bits. This is often
referred to as ‘borrowing’ bits. However, a more accurate term would be ‘lending’ bits. The starting
point for this process is always the leftmost host bit, the one closest to the last network octet.
Subnet addresses include the Class A, Class B, and Class C network portion, plus a subnet field and
a host field. The subnet field and the host field are created from the original host portion of the
major IP address. This is done by assigning bits from the host portion to the original network
portion of the address. The ability to divide the original host portion of the address into the
new subnet and host fields provides addressing flexibility for the network administrator.
In addition to the need for manageability, subnetting enables the network administrator to provide
broadcast containment and low-level security on the LAN. Subnetting provides some security since
access to other subnets is only available through the services of a router. Further, access security
may be provided through the use of access lists. These lists can permit or deny access to a subnet,
based on a variety of criteria, thereby providing more security. Access lists will be studied later in
the curriculum. Some owners of Class A and B networks have also discovered that subnetting
creates a revenue source for the organization through the leasing or sale of previously unused IP
addresses.
A LAN is seen as a single network with no knowledge of the internal network structure. This view
of the network keeps the routing tables small and efficient. The world outside the LAN sees only the
advertised major network number, because a subnetted local node address is only valid within the
LAN and cannot function anywhere else.
10.3.3 Establishing the subnet mask address
Selecting the number of bits to use in the subnet process will depend on the maximum number of
hosts required per subnet. An understanding of basic binary math and the position value of the bits
in each octet is necessary when calculating the number of subnetworks and hosts created when bits
were borrowed.
The last two bits in the last octet, regardless of the IP address class, may never be assigned to the
subnetwork. These bits are referred to as the last two significant bits. Use of all the available bits to
create subnets, except these last two, will result in subnets with only two usable hosts. This is a
practical address conservation method for addressing serial router links. However, for a working
LAN this would result in prohibitive equipment costs.
The subnet mask gives the router the information required to determine in which network and
subnet a particular host resides. The subnet mask is created by placing binary ones in the network
and subnet bit positions. The value of the subnet octet is determined by adding the position values
of the bits that were borrowed. If three bits were borrowed, the mask for a Class C address would
be This mask may also be represented, in the slash format, as /27. The number
following the slash is the total number of bits that were used for the network and subnetwork
portions of the address.
To determine the number of bits to be used, the network designer needs to calculate how many
hosts the largest subnetwork requires and the number of subnetworks needed. As an example, the
network requires 30 hosts and five subnetworks. A shortcut to determine how many bits to reassign
is by using the subnetting chart. By consulting the row titled "Usable hosts", the chart indicates
that for 30 usable hosts per subnet, three borrowed bits are required. The chart also shows that this creates six usable
subnetworks, which will satisfy the requirements of this scheme. The difference between usable
hosts and total hosts is a result of using the first available address as the ID and the last available
address as the broadcast for each subnetwork. The ability to use these subnetworks is not provided
with classful routing. However, classless routing, which will be covered later in the course can
recover many of these lost addresses.
The method that was used to create the subnet chart can be used to solve all subnetting problems.
This method uses the following formula:
Number of usable subnets = two to the power of the assigned subnet bits, or borrowed bits, minus
two (the all-zeros and all-ones subnets are reserved under classful rules):
        2^(borrowed bits) – 2 = usable subnets
        2^3 – 2 = 6
Number of usable hosts = two to the power of the remaining host bits, minus two (the reserved
addresses for the subnet ID and subnet broadcast):
        2^(remaining host bits) – 2 = usable hosts
        2^5 – 2 = 30
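The two formulas can be expressed directly in code. This is a minimal sketch; it simply restates the
arithmetic for the worked example of three borrowed bits in a Class C octet.

```python
def usable_subnets(borrowed_bits: int) -> int:
    """2^n - 2: the all-zeros and all-ones subnets are reserved (classful rules)."""
    return 2 ** borrowed_bits - 2

def usable_hosts(remaining_host_bits: int) -> int:
    """2^n - 2: the subnet ID and subnet broadcast addresses are reserved."""
    return 2 ** remaining_host_bits - 2

# Borrowing 3 bits from a Class C host octet leaves 5 host bits:
# 6 usable subnetworks, each with 30 usable hosts.
```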
10.3.4 Applying the subnet mask
Once the subnet mask has been established it then can be used to create the subnet scheme. The
chart in the Figure is an example of the subnets and addresses created by assigning three bits to the
subnet field. This will create eight subnets with 32 hosts per subnet. Start with zero (0) when
numbering subnets. The first subnet is always referenced as the zero subnet.
When filling in the subnet chart, three of the fields are automatic while the others require some
calculation. The subnetwork ID of subnet zero is the same as the major network number. The
broadcast ID for the whole network is the largest number possible, with all host bits set to one. The
third number that is given is the subnetwork ID for subnet number seven. This number is the three
network octets with the subnet mask value inserted in the fourth octet position. Three bits were
assigned to the subnet field with a cumulative value of 224, so the ID for subnet seven ends in .224.
By inserting these numbers, checkpoints have been established that
When consulting the subnetting chart or using the formula, the three bits assigned to the subnet field
will result in 32 total hosts assigned to each subnet. This information provides the step count for
each subnetwork ID. Adding 32 to each preceding number, starting with subnet zero, the ID for
each subnet is established. Notice that the subnet ID has all binary 0s in the host portion.
The broadcast field is the last number in each subnetwork, and has all binary ones in the host
portion. This address has the ability to broadcast only to the members of a single subnet. Since each
subnet contains 32 total hosts, the broadcast ID of each subnet is its subnetwork ID plus 31. Starting
at zero, the 32nd sequential number is 31. It is important to remember
that zero (0) is a real number in the world of networking.
The balance of the broadcast ID column can be filled in using the same process that was used in the
subnetwork ID column. Simply add 32 to the preceding broadcast ID of the subnet. Another option
is to start at the bottom of this column and work up to the top by subtracting one from the preceding
subnetwork ID.
10.3.5 Subnetting Class A and B networks
The Class A and B subnetting procedure is identical to the process for Class C, except there may be
significantly more bits involved. The available bits for assignment to the subnet field in a Class A
address is 22 bits while a Class B address has 14 bits.
Assigning 12 bits of a Class B address to the subnet field creates a subnet mask of,
or /28. All eight bits were assigned in the third octet resulting in 255, the total value of all eight bits.
Four bits were assigned in the fourth octet resulting in 240. Recall that the slash mask is the sum
total of all bits assigned to the subnet field plus the fixed network bits.
Assigning 20 bits of a Class A address to the subnet field creates a subnet mask of,
or /28. All eight bits of the second and third octets were assigned to the subnet field and four bits
from the fourth octet.
In this situation, it is apparent that the subnet mask for the Class A and Class B addresses appear
identical. Unless the mask is related to a network address it is not possible to decipher how many
bits were assigned to the subnet field.
Whichever class of address needs to be subnetted, the following rules are the same:
Total subnets = 2 to the power of the bits borrowed
Total hosts = 2 to the power of the bits remaining
Usable subnets = 2 to the power of the bits borrowed, minus 2
Usable hosts = 2 to the power of the bits remaining, minus 2
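The four formulas above can be captured directly in code. A minimal sketch:

```python
# The four subnetting formulas above. bits_borrowed is the number of
# bits assigned to the subnet field; bits_remaining is what is left
# for the host field.
def subnet_counts(bits_borrowed, bits_remaining):
    return {
        "total_subnets": 2 ** bits_borrowed,
        "total_hosts": 2 ** bits_remaining,
        "usable_subnets": 2 ** bits_borrowed - 2,
        "usable_hosts": 2 ** bits_remaining - 2,
    }

# Example: borrowing 3 bits from a Class C host field leaves 5 host bits.
print(subnet_counts(3, 5))
```

For the Class C example, borrowing 3 bits yields 8 total subnets (6 usable) and 32 total hosts per subnet (30 usable).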
10.3.6 Calculating the resident subnetwork through ANDing
Routers use subnet masks to determine the home subnetwork for individual nodes. This process is
referred to as logical ANDing. ANDing is a binary process by which the router calculates the
subnetwork ID for an incoming packet. ANDing is similar to multiplication.
This process is handled at the binary level. Therefore, it is necessary to view the IP address and
mask in binary. The IP address and the subnetwork address are ANDed with the result being the
subnetwork ID. The router then uses that information to forward the packet toward the correct
subnetwork.
Subnetting is a learned skill. It will take many hours of practice exercises to develop
flexible and workable schemes. A variety of subnet calculators are available on the
web. However, a network administrator must know how to manually calculate subnets in order to
effectively design the network scheme and assure the validity of the results from a subnet
calculator. The subnet calculator will not provide the initial scheme, only the final addressing. Also,
no calculators, of any kind, are permitted during the certification exam.
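The ANDing calculation described in section 10.3.6 can be sketched as follows, using a hypothetical host address and a /27 mask:

```python
# Sketch of logical ANDing: the host address is ANDed bit-for-bit with
# the subnet mask to produce the resident subnetwork ID. The address
# and mask below are hypothetical examples.
def to_int(dotted):
    """Convert a dotted-decimal address to a 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(value):
    """Convert a 32-bit integer back to dotted-decimal notation."""
    return ".".join(str((value >> s) & 0xFF) for s in (24, 16, 8, 0))

def resident_subnet(ip, mask):
    # ANDing each bit of the address with the mask keeps the network
    # and subnet bits and zeros out the host bits.
    return to_dotted(to_int(ip) & to_int(mask))

print(resident_subnet("192.168.1.70", "255.255.255.224"))
```

Since 70 is 01000110 in binary and 224 is 11100000, ANDing the last octets gives 01000000, or 64, so the host resides on subnetwork 192.168.1.64.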
Module 11: TCP/IP Transport and Application Layer
As its name implies, the TCP/IP transport layer does the work of transporting data between
applications on source and destination devices. A thorough understanding of the operation of the
transport layer is essential to understanding modern data networking. This module will describe the
functions and services of this critical layer of the TCP/IP network model.
Many of the network applications that are found at the TCP/IP application layer are familiar to even
casual network users. HTTP, FTP and SMTP, for example, are acronyms that are commonly seen
by users of Web browsers and e-mail clients. This module also describes the function of these and
other applications from the TCP/IP networking model.
 11.1.1 Introduction to transport layer
The primary duties of the transport layer, Layer 4 of the OSI model, are to transport and regulate the
flow of information from the source to the destination, reliably and accurately. End-to-end control
and reliability are provided by sliding windows, sequencing numbers, and acknowledgments.
To understand reliability and flow control, think of someone who studies a foreign language for one
year and then visits the country where that language is spoken. In conversation, words must be
repeated for reliability, and the speakers must talk slowly so that the meaning of the conversation is
not lost, which is flow control.
The transport layer provides transport services from the source host to the destination host. It
establishes a logical connection between the endpoints of the network. Transport services segment
and reassemble several upper-layer applications onto the same transport layer data stream. This
transport layer data stream provides end-to-end transport services.
The transport layer data stream is a logical connection between the endpoints of a network. Its
primary duties are to transport and regulate the flow of information from source to destination
reliably and accurately. The primary duty of Layer 4 is to provide end-to-end control using sliding
windows and to provide reliability in sequencing numbers and acknowledgments. The transport
layer defines end-to-end connectivity between host applications. Transport services include the
following basic services:
Segmentation of upper-layer application data
Establishment of end-to-end operations
Transport of segments from one end host to another end host
Flow control provided by sliding windows
Reliability provided by sequence numbers and acknowledgments
TCP/IP is a combination of two individual protocols. IP operates at Layer 3, and is a connectionless
protocol that provides best-effort delivery across a network. TCP operates at Layer 4, and is a
connection-oriented service that provides flow control as well as reliability. By pairing these
protocols, a wider range of services is provided. Together, they are the basis for an entire suite of
protocols called the TCP/IP protocol suite. The Internet is built upon this TCP/IP protocol suite.
 11.1.2 Flow control
As the transport layer sends data segments, it tries to ensure that data is not lost. A receiving host
that is unable to process data as quickly as it arrives is one cause of data loss. The receiving
host is then forced to discard the data. Flow control avoids the problem of a transmitting host overflowing
the buffers in the receiving host. TCP provides the mechanism for flow control by allowing the
sending and receiving host to communicate. The two hosts then establish a data-transfer rate that is
agreeable to both.
11.1.3 Session establishment, maintenance, and termination overview
Multiple applications can share the same transport connection in the OSI reference model.
Transport functionality is accomplished on a segment-by-segment basis. In other words, different
applications can send data segments on a first-come, first-served basis. The segments that arrive
first will be taken care of first. These segments can be routed to the same or different destinations.
This is referred to as the multiplexing of upper-layer conversations.
One function of the transport layer is to establish a connection-oriented session between similar
devices at the application layer. For data transfer to begin, both the sending and receiving
applications inform the respective operating systems that a connection will be initiated. One node
initiates a connection that must be accepted by the other. Protocol software modules in the two
operating systems communicate with each other by sending messages across the network to verify
that the transfer is authorized and that both sides are ready.
The connection is established and the transfer of data begins after all synchronization has occurred.
During transfer, the two machines continue to communicate with their protocol software to verify
that data is received correctly.
Figure shows a typical connection between the sending and receiving systems. The first
handshake requests synchronization. The second and third handshakes acknowledge the initial
synchronization request, as well as synchronizing connection parameters in the opposite direction.
The final handshake segment is an acknowledgment used to inform the destination that both sides
agree that a connection has been established. After the connection has been established, data
transfer begins.
Congestion can occur during data transfer for two reasons. First, a high-speed computer might be
capable of generating traffic faster than a network can transfer it. Second, if many computers
simultaneously need to send datagrams to a single destination, that destination can experience
congestion, although no single source caused the problem.
When datagrams arrive too quickly for a host or gateway to process, they are temporarily stored in
memory. If the traffic continues, the host or gateway eventually exhausts its memory and must
discard additional datagrams that arrive.
Instead of allowing data to be lost, the transport function can issue a “not ready” indicator to the
sender. Acting like a stop sign, this indicator signals the sender to stop sending data. When the
receiver can handle additional data, the receiver sends a “ready” transport indicator. When this
indicator is received, the sender can resume the segment transmission.
At the end of data transfer, the sending host sends a signal that indicates the end of the transmission.
The receiving host at the end of the data sequence acknowledges the end of transmission and the
connection is terminated.
 11.1.4 Three-way handshake
TCP is a connection-oriented protocol. TCP requires connection establishment before data transfer
begins. For a connection to be established or initialized, the two hosts must synchronize their Initial
Sequence Numbers (ISNs). Synchronization is done through an exchange of connection establishing
segments that carry a control bit called SYN, for synchronize, and the ISNs. Segments that carry the
SYN bit are also called “SYNs”. This solution requires a suitable mechanism for picking an initial
sequence number and a slightly involved handshake to exchange the ISNs.
The synchronization requires each side to send its own initial sequence number and to receive a
confirmation of exchange in an acknowledgment (ACK) from the other side. Each side must also
receive the ISN from the other side and send a confirming ACK. The sequence is as follows:
A→B SYN—(A) initial sequence number is X, ACK number is 0, SYN bit is set, but ACK bit is
not set.
B→A ACK—(A) sequence number is X + 1, (B) initial sequence number is Y, and SYN and ACK
bit are set.
A→B ACK—(B) sequence number is Y + 1, (A) sequence number is X + 1, the ACK bit is set, but
the SYN bit is not set.
This exchange is called the three-way handshake.
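The three-step exchange above can be sketched as a short simulation. The values 100 and 300 stand in for the ISNs X and Y; real TCP stacks choose their ISNs by other means:

```python
# Sketch of the three-way handshake described above. isn_a and isn_b
# stand for the Initial Sequence Numbers X and Y; the values used in
# the example call are hypothetical.
def three_way_handshake(isn_a, isn_b):
    """Return the three segments as (direction, flags, seq, ack) tuples."""
    return [
        ("A->B", "SYN",     isn_a,     0),          # ACK bit not set
        ("B->A", "SYN+ACK", isn_b,     isn_a + 1),  # acknowledges A's ISN
        ("A->B", "ACK",     isn_a + 1, isn_b + 1),  # acknowledges B's ISN
    ]

for segment in three_way_handshake(100, 300):
    print(segment)
```

Note how each side acknowledges the other's ISN plus one, confirming the exchange in both directions.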
A three-way handshake is necessary because sequence numbers are not tied to a global clock in the
network and TCP protocols may have different mechanisms for picking the ISN. The receiver of the
first SYN has no way of knowing whether the segment was an old delayed one, unless it remembers
the last sequence number used on the connection. Recalling that number is not always possible.
Therefore, the receiver must ask the sender to verify this SYN.
 11.1.5 Windowing
Data packets must be delivered to the recipient in the same order in which they were transmitted to
have a reliable, connection-oriented data transfer. The protocol fails if any data packets are lost,
damaged, duplicated, or received in a different order. An easy solution is to have a recipient
acknowledge the receipt of each packet before the next packet is sent.
If the sender must wait for an acknowledgment after sending each packet, throughput would be low.
Therefore, most connection-oriented, reliable protocols allow more than one packet to be
outstanding on the network at one time. Because time is available after the sender finishes
transmitting the data packet and before the sender finishes processing any received
acknowledgment, this interval is used for transmitting more data. The number of data packets the
sender is allowed to have outstanding without having received an acknowledgment is known as the
window size, or window.
TCP uses expectational acknowledgments, which means that the
acknowledgment number refers to the packet that is next expected. Windowing refers to the fact
that the window size is negotiated dynamically during the TCP session. Windowing is a flow-
control mechanism. Windowing requires that the source device receive an acknowledgment from
the destination after transmitting a certain amount of data. The receiving TCP process reports a
“window” to the sending TCP. This window specifies the number of packets, starting with the
acknowledgment number, that the receiving TCP process is currently prepared to receive.
With a window size of three, the source device can send three packets to the destination. The source
device must then wait for an acknowledgment. If the destination receives the three packets, it sends
an acknowledgment to the source device, which can now transmit three more packets. If the
destination does not receive the three packets, because of overflowing buffers, it does not send an
acknowledgment. Because the source does not receive an acknowledgment, it knows that the
packets should be retransmitted, and that the transmission rate should be slowed.
TCP window sizes are variable during the lifetime of a connection. Each acknowledgement contains
a window advertisement that indicates the number of bytes the receiver can accept. TCP also
maintains a congestion-control window. This window is normally the same size as the window of
the receiver. However, this window is cut in half when a packet is lost, perhaps as a result of
network congestion. This approach permits the window to be expanded or contracted as necessary
to manage buffer space and processing. A larger window size allows more data to be processed.
As shown in Figure , the sender sends three packets before expecting an ACK. If the receiver can
handle a window size of only two packets, the receiver drops packet three, specifies three as the
next packet, and specifies a new window size of two. The sender sends the next two packets, but
still specifies a window size of three. This means that the sender will still expect a three packet
acknowledgement from the receiver. The receiver replies by requesting packet five and again
specifying a window size of two.
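The basic windowing behavior, simplified to ignore loss and window renegotiation, can be sketched as follows:

```python
# A simplified sketch of windowing: the sender may have at most
# `window` unacknowledged packets outstanding at a time. Loss and
# dynamic window renegotiation are ignored for clarity.
def send_with_window(packets, window):
    """Return the bursts of packets sent between acknowledgments."""
    rounds = []
    i = 0
    while i < len(packets):
        burst = packets[i:i + window]  # send up to `window` packets
        rounds.append(burst)
        i += window                    # wait for an ACK, then continue
    return rounds

print(send_with_window([1, 2, 3, 4, 5, 6, 7], 3))
```

With a window size of three and seven packets, the sender transmits in bursts of three, three, and one, pausing for an acknowledgment after each burst.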
 11.1.6 Acknowledgment
Reliable delivery guarantees that a stream of data sent from one device is delivered through a data
link to another device without duplication or data loss. Positive acknowledgment with
retransmission is one technique that guarantees reliable delivery of data. Positive acknowledgment
requires a recipient to communicate with the source and send back an acknowledgment message
when the data is received. The sender keeps a record of each data packet (TCP segment), that it
sends and expects an acknowledgment. The sender also starts a timer when it sends a segment and
will retransmit a segment if the timer expires before an acknowledgment arrives.
Figure shows the sender transmitting data packets 1, 2, and 3. The receiver acknowledges receipt
of the packets by requesting packet 4. Upon receiving the acknowledgment, the sender sends
packets 4, 5, and 6. If packet 5 does not arrive at the destination, the receiver acknowledges with a
request to resend packet 5. The sender resends packet 5 and then receives an acknowledgment to
continue with the transmission of packet 7.
TCP provides sequencing of segments with a forward reference acknowledgment. Each datagram is
numbered before transmission. At the receiving station, TCP reassembles the segments into a
complete message. If a sequence number is missing in the series, that segment is retransmitted.
Segments that are not acknowledged within a given time period will result in a retransmission.
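Positive acknowledgment with retransmission can be sketched as a loop that resends a segment until it is acknowledged. The "network" here is a hypothetical function that reports whether a segment arrived:

```python
# Sketch of positive acknowledgment with retransmission. `network` is a
# hypothetical function returning True when a segment arrives; a lost
# segment is simply resent until the receiver acknowledges it.
def reliable_send(segments, network):
    received, log = [], []
    for seq, data in enumerate(segments, start=1):
        while True:
            log.append(("send", seq))
            if network(seq):                  # segment arrived intact
                received.append(data)
                log.append(("ack", seq + 1))  # forward reference ACK:
                break                         # next expected segment
            # no ACK before the timer expires: retransmit the segment

    return received, log

# Hypothetical flaky network that loses segment 2 exactly once.
drops = {2}
def flaky(seq, _lost=drops):
    if seq in _lost:
        _lost.discard(seq)
        return False
    return True

data, log = reliable_send(["a", "b", "c"], flaky)
print(data)
```

Segment 2 is sent twice, but the receiver still gets the complete stream in order, which is the guarantee described above.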
 11.1.7 Transmission Control Protocol (TCP)
Transmission Control Protocol (TCP) is a connection-oriented Layer 4 protocol that provides
reliable full-duplex data transmission. TCP is part of the TCP/IP protocol stack. In a connection-
oriented environment, a connection is established between both ends before the transfer of
information can begin. TCP is responsible for breaking messages into segments, reassembling them
at the destination station, resending anything that is not received, and reassembling messages from
the segments. TCP supplies a virtual circuit between end-user applications.
The protocols that use TCP include:
FTP (File Transfer Protocol)
HTTP (Hypertext Transfer Protocol)
SMTP (Simple Mail Transfer Protocol)
The following are the definitions of the fields in the TCP segment:
Source port – Number of the calling port
Destination port – Number of the called port
Sequence number – Number used to ensure correct sequencing of the arriving data
Acknowledgment number – Next expected TCP octet
HLEN – Number of 32-bit words in the header
Reserved – Set to zero
Code bits – Control functions, such as setup and termination of a session
Window – Number of octets that the sender is willing to accept
Checksum – Calculated checksum of the header and data fields
Urgent pointer – Indicates the end of the urgent data
Option – One option currently defined, maximum TCP segment size
Data – Upper-layer protocol data
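The fixed fields listed above map onto a 20-byte segment header (options omitted). A minimal parsing sketch, with field widths following the standard TCP header layout and a hypothetical sample segment:

```python
import struct

# Sketch of the fixed 20-byte TCP header (options omitted). Field
# widths follow the standard TCP header layout described above.
def parse_tcp_header(raw):
    (src, dst, seq, ack, offset_reserved, code_bits,
     window, checksum, urgent) = struct.unpack("!HHLLBBHHH", raw[:20])
    return {
        "source_port": src,
        "destination_port": dst,
        "sequence": seq,
        "acknowledgment": ack,
        "hlen_words": offset_reserved >> 4,  # number of 32-bit words
        "code_bits": code_bits,              # e.g. 0x02 = SYN
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }

# A hypothetical SYN segment from port 1025 to port 80.
sample = struct.pack("!HHLLBBHHH", 1025, 80, 100, 0, 5 << 4, 0x02,
                     8192, 0, 0)
print(parse_tcp_header(sample))
```

An HLEN of 5 means five 32-bit words, the minimum 20-byte header with no options present.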
11.1.8 User Datagram Protocol (UDP)
User Datagram Protocol (UDP) is the connectionless transport protocol in the TCP/IP protocol
stack. UDP is a simple protocol that exchanges datagrams, without acknowledgments or guaranteed
delivery. Error processing and retransmission must be handled by higher layer protocols.
UDP uses no windowing or acknowledgments so reliability, if needed, is provided by application
layer protocols. UDP is designed for applications that do not need to put sequences of segments together.
The protocols that use UDP include:
TFTP (Trivial File Transfer Protocol)
SNMP (Simple Network Management Protocol)
DHCP (Dynamic Host Configuration Protocol)
DNS (Domain Name System)
The following are the definitions of the fields in the UDP segment:
Source port – Number of the calling port
Destination port – Number of the called port
Length – Number of bytes including header and data
Checksum – Calculated checksum of the header and data fields
Data – Upper-layer protocol data
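In contrast to TCP, the UDP header contains only the four fixed fields above, which fit in 8 bytes. A minimal sketch, using a hypothetical datagram:

```python
import struct

# Sketch of the 8-byte UDP header: the four fields listed above, each
# 16 bits wide.
def parse_udp_header(raw):
    src, dst, length, checksum = struct.unpack("!HHHH", raw[:8])
    return {"source_port": src, "destination_port": dst,
            "length": length, "checksum": checksum}

# A hypothetical datagram to the DNS port: 8 header bytes plus 24
# bytes of data gives a length field of 32.
udp_sample = struct.pack("!HHHH", 1030, 53, 32, 0)
print(parse_udp_header(udp_sample))
```

The small header is part of why UDP carries less overhead than TCP: there are no sequence or acknowledgment numbers, no window, and no code bits.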
11.1.9 TCP and UDP port numbers
Both TCP and UDP use port (socket) numbers to pass information to the upper layers. Port numbers
are used to keep track of different conversations crossing the network at the same time.
Application software developers agree to use well-known port numbers that are issued by the
Internet Assigned Numbers Authority (IANA). Any conversation bound for the FTP application
uses the standard port numbers 20 and 21. Port 20 is used for the data portion and port 21 is used
for control. Conversations that do not involve an application with a well-known port number are
assigned port numbers randomly from within a specific range above 1023. Some ports are reserved
in both TCP and UDP, but applications might not be written to support them. Port numbers have
the following assigned ranges:
Numbers below 1024 are considered well-known port numbers.
Numbers above 1024 are dynamically assigned port numbers.
Registered port numbers are those registered for vendor-specific applications. Most of these are
above 1024.
End systems use port numbers to select the proper application. The source host dynamically assigns
originating source port numbers. These numbers are always greater than 1023.
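The ranges described above can be sketched as a small classification function. The dividing line at 1024 follows the text; registered, vendor-specific numbers mostly fall above it:

```python
# Sketch of the port-number ranges described above. The text draws the
# line at 1024; registered vendor-specific numbers are mostly above it.
def classify_port(port):
    """Classify a TCP/UDP port number by the ranges given in the text."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit values")
    if port < 1024:
        return "well-known"
    return "above 1023: registered or dynamically assigned"

print(classify_port(21))    # FTP control port
print(classify_port(2049))
```

Port 21, the FTP control port mentioned above, falls in the well-known range, while a dynamically assigned source port such as 2049 falls above 1023.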
11.2.1 Introduction to the TCP/IP application layer
When the TCP/IP model was designed, the session and presentation layers from the OSI model
were bundled into the application layer of the TCP model. This means that issues of representation,
encoding, and dialog control are handled in the application layer rather than in separate lower layers
as in the OSI model. This design assures that the TCP/IP model provides maximum flexibility at the
application layer for developers of software.
The TCP/IP protocols that support file transfer, e-mail, and remote login are probably the most
familiar to users of the Internet. These protocols include the following applications:
Domain Name System (DNS)
File Transfer Protocol (FTP)
Hypertext Transfer Protocol (HTTP)
Simple Mail Transfer Protocol (SMTP)
Simple Network Management Protocol (SNMP)
 11.2.2 DNS
The Internet is built on a hierarchical addressing scheme. This scheme allows for routing to be
based on classes of addresses rather than based on individual addresses. The problem this creates
for the user is associating the correct address with the Internet site. It is very easy to forget an IP
address to a particular site because there is nothing to associate the contents of the site with the
address. Imagine the difficulty of remembering the IP addresses of tens, hundreds, or even
thousands of Internet sites.
A domain naming system was developed in order to associate the contents of the site with the
address of that site. The Domain Name System (DNS) is a system used on the Internet for
translating names of domains and their publicly advertised network nodes into IP addresses. A
domain is a group of computers that are associated by their geographical location or their business
type. A domain name is a string of characters, numbers, or both. Usually a name or abbreviation that
represents the numeric address of an Internet site will make up the domain name. There are more
than 200 top-level domains on the Internet, examples of which include the following:
.us – United States
.uk – United Kingdom
There are also generic top-level domains, examples of which include the following:
.edu – educational sites
.com – commercial sites
.gov – government sites
.org – non-profit sites
.net – network service
See Figure for a detailed explanation of these domains.
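The hierarchical naming scheme can be sketched by relating a domain name to the generic top-level domains listed above. The hostname used in the example is hypothetical:

```python
# Sketch relating a domain name to the generic top-level domains
# listed above. The hostname in the example is hypothetical.
GENERIC_TLDS = {
    "edu": "educational sites",
    "com": "commercial sites",
    "gov": "government sites",
    "org": "non-profit sites",
    "net": "network service",
}

def top_level_domain(name):
    """Return the label after the last dot in a domain name."""
    return name.rsplit(".", 1)[-1].lower()

tld = top_level_domain("www.example.com")
print(tld, "-", GENERIC_TLDS.get(tld, "country code or other TLD"))
```

Country-code domains such as .us and .uk fall through to the default branch, since they are organized geographically rather than by business type.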
 11.2.3 FTP and TFTP
FTP is a reliable, connection-oriented service that uses TCP to transfer files between systems that
support FTP. The main purpose of FTP is to transfer files from one computer to another by copying
and moving files from servers to clients, and from clients to servers. When files are copied from a
server, FTP first establishes a control connection between the client and the server. Then a second
connection is established, which is a link between the computers through which the data is
transferred. Data transfer can occur in ASCII mode or in binary mode. These modes determine the
encoding used for the data file, which in the OSI model is a presentation layer task. After the file
transfer has ended, the data connection terminates automatically. When the entire session of
copying and moving files is complete, the command link is closed when the user logs off and ends
the session.
TFTP is a connectionless service that uses User Datagram Protocol (UDP). TFTP is used on the
router to transfer configuration files and Cisco IOS images and to transfer files between systems
that support TFTP. TFTP is designed to be small and easy to implement. Therefore, it lacks most of
the features of FTP. TFTP can read, write, or mail files to or from a remote server but it cannot list
directories and currently has no provisions for user authentication. It is useful in some LANs
because it operates faster than FTP and in a stable environment it works reliably.
 11.2.4 HTTP
Hypertext Transfer Protocol (HTTP) works with the World Wide Web, which is the fastest growing
and most used part of the Internet. One of the main reasons for the extraordinary growth of the Web
is the ease with which it allows access to information. A Web browser is a client-server application,
which means that it requires both a client and a server component in order to function. A Web
browser presents data in multimedia formats on Web pages that use text, graphics, sound, and
video. The Web pages are created with a format language called Hypertext Markup Language
(HTML). HTML directs a Web browser on a particular Web page to produce the appearance of the
page in a specific manner. In addition, HTML specifies locations for the placement of text, files,
and objects that are to be transferred from the Web server to the Web browser.
Hyperlinks make the World Wide Web easy to navigate. A hyperlink is an object, word, phrase, or
picture, on a Web page. When that hyperlink is clicked, it directs the browser to a new Web page.
The Web page contains, often hidden within its HTML description, an address location known as a
Uniform Resource Locator (URL).
In the URL, the "http://" part tells the browser which protocol to use. The
second part, "www", is the hostname or name of a specific machine with a specific IP address. The
last part, "/education", identifies the specific folder location on the server that contains the default web page.
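The parts of a URL described above can be separated with Python's standard library. The URL itself is a hypothetical example:

```python
from urllib.parse import urlparse

# Sketch of splitting a URL into the parts described above, using a
# hypothetical URL.
url = "http://www.example.com/education/index.html"
parts = urlparse(url)
print(parts.scheme)    # the protocol the browser should use
print(parts.hostname)  # name of a specific machine with an IP address
print(parts.path)      # folder location (and file) on the server
```

When the path names no specific file, the server supplies the default page configured for that folder, as the text goes on to explain.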
A Web browser usually opens to a starting or "home" page. The URL of the home page has already
been stored in the configuration area of the Web browser and can be changed at any time. From the
starting page, click on one of the Web page hyperlinks, or type a URL in the address bar of the
browser. The Web browser examines the protocol to determine if it needs to open another program,
and then determines the IP address of the Web server using DNS. Then the transport layer, network
layer, data link layer, and physical layer work together to initiate a session with the Web server. The
data that is transferred to the HTTP server contains the folder name of the Web page location. The
data can also contain a specific file name for an HTML page. If no name is given, then the default
name as specified in the configuration on the server is used.
The server responds to the request by sending to the Web client all of the text, audio, video, and
graphic files specified in the HTML instructions. The client browser reassembles all the files to
create a view of the Web page, and then terminates the session. If another page that is located on the
same or a different server is clicked, the whole process begins again.
11.2.5 SMTP
Email servers communicate with each other using the Simple Mail Transfer Protocol (SMTP) to
send and receive mail. The SMTP protocol transports email messages in ASCII format using TCP.
When a mail server receives a message destined for a local client, it stores that message and waits
for the client to collect the mail. There are several ways for mail clients to collect their mail. They
can use programs that access the mail server files directly or collect their mail using one of many
network protocols. The most popular mail client protocols are POP3 and IMAP4, which both use
TCP to transport data. Even though mail clients use these special protocols to collect mail, they
almost always use SMTP to send mail. Since two different protocols, and possibly two different
servers, are used to send and receive mail, it is possible that mail clients can perform one task and
not the other. Therefore, it is usually a good idea to troubleshoot e-mail sending problems
separately from e-mail receiving problems.
When checking the configuration of a mail client, verify that the SMTP and POP or IMAP settings
are correctly configured. A good way to test if a mail server is reachable is to Telnet to the SMTP
port (25) or to the POP3 port (110). The following command format is used at the Windows
command line to test the ability to reach the SMTP service on a mail server at a given IP
address (shown here as the placeholder <IP address>):
C:\>telnet <IP address> 25
The SMTP protocol does not offer much in the way of security and does not require any
authentication. Administrators often do not allow hosts that are not part of their network to use their
SMTP server to send or relay mail. This is to prevent unauthorized users from using their servers as
mail relays.
 11.2.6 SNMP
The Simple Network Management Protocol (SNMP) is an application layer protocol that facilitates
the exchange of management information between network devices. SNMP enables network
administrators to manage network performance, find and solve network problems, and plan for
network growth. SNMP uses UDP as its transport layer protocol.
An SNMP managed network consists of the following three key components:
Network management system (NMS) – The NMS executes applications that monitor and control
managed devices. The bulk of the processing and memory resources required for network
management are provided by the NMS. One or more NMSs must exist on any managed network.
Managed devices – Managed devices are network nodes that contain an SNMP agent and that
reside on a managed network. Managed devices collect and store management information and
make this information available to NMSs using SNMP. Managed devices, sometimes called
network elements, can be routers, access servers, switches, bridges, hubs, computer hosts, or printers.
Agents – Agents are network-management software modules that reside in managed devices. An
agent has local knowledge of management information and translates that information into a form
compatible with SNMP.
11.2.7 Telnet
Telnet client software provides the ability to login to a remote Internet host that is running a Telnet
server application and then to execute commands from the command line. A Telnet client is referred
to as a local host. A Telnet server, which uses special software called a daemon, is referred to as a
remote host.
To make a connection from a Telnet client, the connection option must be selected. A dialog box
typically prompts for a host name and terminal type. The host name is the IP address or DNS name
of the remote computer. The terminal type describes the type of terminal emulation that the Telnet
client should perform. The Telnet operation uses none of the processing power from the
transmitting computer. Instead, it transmits the keystrokes to the remote host and sends the resulting
screen output back to the local monitor. All processing and storage take place on the remote
computer.
Telnet works at the application layer of the TCP/IP model. Therefore, Telnet works at the top three
layers of the OSI model. The application layer deals with commands. The presentation layer
handles formatting, usually ASCII. The session layer transmits. In the TCP/IP model, all of these
functions are considered to be part of the application layer.