Introduction to Routing Basics II
Serial Protocols

Synchronous Data Link Control (SDLC) – bit-oriented protocol developed by IBM for SNA networks. There is
always communication between a primary and a secondary node. Addressing is only done on the
secondary node.

Point-to-Point Protocol (PPP) – developed by the Internet Engineering Task Force (IETF), derived from SDLC,
uses the Link Control Protocol (LCP).

High-Level Data Link Control (HDLC) – ISO standard, default for Cisco routers; Cisco uses a
proprietary protocol field with point-to-point & multi-point.

Link Access Protocol Balanced (LAPB) or X.25 – same format as SDLC.

X.25 addresses the Physical Layer, Data-Link Layer, and Network Layer of the OSI model.
It is a connectionless datagram service: the Network Layer assembles the packets and passes them to the
Data-Link Layer. All data must pass through PADs, which assemble and disassemble packets. X.25 is
inappropriate for broadband digital voice, video, and bursty data traffic.

Optical Fiber makes sophisticated error checking unnecessary due to low bit-error-rate.

Frame Relay uses a cyclic redundancy check (CRC) and can handle streams of bursty data traffic at higher
speeds than X.25.

TCP/IP

Corresponds closely to the OSI model.

Network Layer protocols include:

      IP                                                     ARP
      ICMP                                                   RARP

Transport Layer protocols include:

      TCP                                                    UDP

Application Layer protocols include:

      FTP                                                    telnet
      TFTP                                                   rlogin
      NFS                                                    SNMP

IP provides connectionless, best-effort delivery and routing of datagrams; IP datagrams are not guaranteed
to arrive in sequence, or at all.
TCP guarantees packet delivery and data integrity and is connection-oriented; it is a Transport Layer
protocol that provides reliable full-duplex data transmission; it initiates handshaking, controls packet
sequencing, performs flow control, and handles errors.

Netware

Netware developed from Xerox XNS and sits above the lower layer standards, ie, Ethernet; ODI specs
allow Netware to be independent of the Physical Layer networking hardware.

IPX provides connectionless service, is located at the Network Layer, provides addressing, routing,
and other Network Layer services.

SPX is a connection-oriented service, is located at the Transport Layer, provides error checking and flow
control, does not conform precisely to the OSI model, and ensures reliable delivery of data.

Netware upper layers correspond to the Application Layer, Session Layer, and Transport Layer; NetBIOS
emulation; file system support using NCP, the Netware shell, Netware RPC, Netware streams, message
handling system (MHS), and Btrieve.

NetBIOS developed by IBM for microcomputer communication.

Netware Core Protocols (NCP) provide server functions: file service, print service, name
management, file locking, synchronization, and accounting.

Netware shell works with NCP to provide transparent access to file and print services, it intercepts
client application requests and processes them.

Netware RPC’s provide remote transparent access to other programs.

Netware streams is a special linking protocol that connects upper layer processes with the Transport
Layer; works with the transport-link interface (TLI), LSL, and ODI to run over a variety of transport
technologies and network drivers; accepts upper layer streams and relays them to the NIC driver.

MHS provides email transfer and mailboxes for message storing; does not provide a user interface.

Btrieve provides rapid access to databases.

NLM’s provide file sharing, printer sharing, communication services, and database services.

AppleTalk

AppleTalk provides connectivity for Macs, DOS, Unix, IBM mainframe, and DEC VAX computers.
At the lower layers AppleTalk uses its proprietary LocalTalk protocol. AppleTalk uses its own ARP, the
AppleTalk Address Resolution Protocol (AARP), which sits above the Data-Link Layer, maps data-link
addresses to protocol addresses, and allows AppleTalk protocols to run over any Data-Link Layer protocol.

At the Network Layer AppleTalk uses the Datagram Delivery Protocol (DDP), which is a connectionless
service; DDP works with the following protocols:
      ZIP – zone information protocol, which maps AppleTalk network numbers to zones.
      RTMP – routing table maintenance protocol maintains AppleTalk routing tables.
      NBP – name binding protocol, maps AppleTalk names to addresses.

At the Transport Layer there is AppleTalk Transaction Protocol (ATP), which is a transaction based
protocol that provides reliable delivery of data.

AppleTalk Data Streams Protocol (ADSP) provides reliable delivery of data; is byte streamed rather
than transaction based; more conventional than ATP.

The following AppleTalk protocols are Session Layer protocols:

      PAP – printer access protocol; connection between nodes and servers of all types
      ASP – AppleTalk Session Protocol; initiates, maintains, and terminates sessions


AppleTalk Filing Protocol (AFP) is a Presentation Layer protocol that provides access to remote files.

The following AppleTalk protocols are Application Layer protocols:

      AppleShare Print Server
      AppleShare FileServer – uses AFP
      AppleSharePC – allows DOS connectivity

AppleTalk defines the concept of a Network Visible Entity (NVE), which is a network-addressable service, ie,
a protocol socket (Note: nodes are not NVE’s).

Banyan VINES

Virtual Integrated Network Service (VINES) implements a distributed network OS based on
proprietary protocol derived from XNS with a client/server architecture.

The two lower layers of the Banyan stack use: HDLC, Ethernet, and Token-ring.
VINES uses the proprietary VINES Internetworking Protocol (VIP) to perform Network Layer
addressing and routing; has its own proprietary ARP, RIP called Routing Table Protocol (RTP), and
Internet Control Protocol (ICP) at the Network Layer.

VINES Network Layer addresses are 48 bit entities, which are divided into a 32 bit network portion
and a 16 bit subnetwork portion.

VINES provides three Transport Layer services:

      Unreliable datagram service – routed on best-effort
      Reliable Message Service – reliable, sequenced, and acknowledged
      Data Stream Service – flow control

VINES uses RPC for communication between clients and servers.
VINES offers the following services at the Application Layer:

      File services
      Print services
      StreetTalk – provides a consistent name service for an entire internetwork

DECnet

DECnet is described by the Digital Network Architecture (DNA), which is a seven layer architecture
that is fully OSI-compliant.

At the lower layers DNA runs over 802.3/Ethernet, FDDI, and X.25.

At the Network Layer DNA supports:

      OSI connectionless CLNS                                 IP
      OSI connectionless CLNP                                 ES-IS routing protocol
      OSI connection-oriented CONS                            IS-IS routing protocol.
      X.25

At the Transport Layer DNA supports:

      TP0                                                     TP4
      TP2                                                     TCP

DECnet’s major proprietary protocol is the Network Services Protocol (NSP), which is located at the
Transport Layer and is a connection-oriented service with flow control.

The DNA stack at the Session Layer supports both the OSI model’s Session Layer and DEC’s
proprietary session control protocol, which maps names to addresses; this protocol also controls access
to lower layers and network resources.

DNA applications include presentation features, and DNA does not have a separate Presentation
Layer; however, it does support the OSI Presentation Layer.

At the Application Layer, DNA supports the following applications:

FTAM                                                   ACSE
VT                                                     ROSE
MOTIS/MHS                                              CMIP

DNA also supports gateways to other protocol stacks, ie, SNA.

Routing

Routers help avoid network congestion and assign available bandwidth, relay packets from one
network to another, and are often contrasted with switches.
The primary difference between routers and switches is that switching occurs at the Data-Link Layer
and routing occurs at the Network Layer.

Routing involves two basic activities:

      Path determination                                       Switching

Routing algorithms initiate and maintain routing tables; the routing information they hold varies according to
the routing algorithm.

Some of the metrics used by the routing algorithm include:

      Path length                                              Hop count
      Reliability                                              Bandwidth
      Delay                                                    Load
      Ticks                                                    Communication cost

Path length is the most common metric used and what is measured depends on the routing protocol.
Some routing protocols allow you to assign costs to each network link.

Other routing protocols determine path length by performing a hop count. Routers are called
internetworking nodes.

Reliability is described by bit-error-rate; reliability ratings are usually assigned by network admins.

Delay depends on bandwidth, port queues, and the physical distance.

Tick count is used by Netware and IP; equal clock ticks in milliseconds

Load refers to the degree a router is busy and is calculated by CPU utilization and packets processed
per second.

Sophisticated routing algorithms can base routing selection on multiple metric values, combining them
into a single hybrid metric.

The switching function of a router allows it to accept packets on one interface and forward them to another
interface.

If a router knows how to forward a packet it changes the data-link address of the packet to that of the
next hop.

Node addresses are a flat addressing structure and exist at the Data-Link Layer.

Network addresses exist at the Network Layer and are also known as virtual or logical addresses.

Some Network Layer addresses are assigned by a network admin; other addresses are assigned dynamically.

Unlike data-link addresses, network addresses are assigned hierarchically.
Internetwork addresses consist of a network address and a host address.

Classless routers accept a prefix of any length, while classful routers accept only a few specified prefix
lengths modified by subnet masks. The length of a prefix is determined by a mask.

Class A addresses have a prefix length of 8 bits.
Class B addresses have a prefix length of 16 bits.
Class C addresses have a prefix length of 24 bits.

Class A addresses always start with 0.
Class B addresses always start with 10.
Class C addresses always start with 110.

TCP/IP networks represent addresses as 32-bit entities and are accompanied by a mask.

Classless InterDomain Routing (CIDR) removes the classful restriction and allows a prefix of any length.
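
To make prefix lengths concrete, here is a minimal Python sketch that converts a prefix length to its dotted-decimal mask; the function name prefix_to_mask is made up for illustration:

# Hypothetical helper: convert a CIDR prefix length to a dotted-decimal mask.
def prefix_to_mask(prefix_len):
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return ".".join(str((mask >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_mask(8))    # 255.0.0.0     (Class A default)
print(prefix_to_mask(16))   # 255.255.0.0   (Class B default)
print(prefix_to_mask(24))   # 255.255.255.0 (Class C default)
print(prefix_to_mask(20))   # 255.255.240.0 (a classless /20 prefix)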

IPX uses up to 32 bits for the network address in the hexadecimal format; the node address is
automatically assigned by the MAC address and is fixed at 48 bits.

AppleTalk network addresses are 16 bits; node addresses are 8 bits; node addresses are acquired
automatically at network boot.

X.25 uses the X.121 protocol, which covers the international numbering plan for Public Data Networks
(PDN); the network part of the address is known as the data network identification code (DNIC),
which includes the data country code (DCC). The node portion of the address is called the network terminal
number (NTN). X.25 admins obtain these NTN’s from a central authority within the X.25 data network
service provider.

Routing algorithms vary depending on what they are trying to accomplish and are designed with one or more of the following goals:

      Optimality – best route                                 Rapid convergence
      Simplicity                                              Flexibility
      Robustness

All routers in a network using a single protocol will eventually agree on optimal routes. This is
called convergence, and it should be achieved rapidly by routing algorithms.

Routing algorithms should adapt to changes in:

      Network bandwidth                                       Network delay
      Routing queue size                                      Other

Routed Protocols

      IP                                                      DECnet
      Netware                                                 AppleTalk
      OSI                                                      XNS
      VINES

Packets are conveyed from end system to end system.

Routing Protocols

Intra-domain protocols

      IGRP                                                     IS-IS
      EIGRP                                                    RIP
      OSPF

Inter-domain protocols

      Exterior Gateway Protocol (EGP)                      Border Gateway Protocol (BGP)

In a multiple-protocol router, the routed and routing protocols have no knowledge of each other aka
ships-in-the-night routing.

Routing is either static or dynamic; static routing is not algorithmic and routes are manually configured by
network admins; static routes are used for security.

Dynamic routing depends on two routing functions:

      Maintenance of routing table                             Timely dist of routing info

Dynamic routing uses dynamic algorithms and distributes information by sending updates.

Routing protocols describe:

      How updates are conveyed                                 When information is conveyed
      What information is conveyed                             How to locate recipients

In very large networks routers will not know exact routes to all nodes, so you should supplement
dynamic routing with static routing.

Routing algorithms are classified as distance-vector or link-state.

Distance-vector determines direction (vector) and distance to any link. These algorithms are also
called Bellman-Ford algorithms. Each router periodically sends updates on all or part of its routing
table to its neighbors. This type of protocol accumulates network distances to maintain
internetworking distance information, and does not know the exact network topology of the
internetwork.

Examples of Distance-vector protocols are:

      RIP                                                      IGRP
The link-state algorithm recreates the exact network topology of the entire internetwork or the portion
of the internetwork in which it is located. This type of protocol is also known as the shortest-path-
first (SPF) approach. This type of protocol maintains a complex database of topology information and
floods the nodes in the internetwork with routing information. Each router sends the portion of the
routing table that describes the state of its own links.
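
As a minimal sketch of the shortest-path-first idea, the following Python fragment runs Dijkstra's algorithm over a hypothetical topology database; the node names and link costs are invented for illustration, and real link-state protocols such as OSPF build this database from flooded advertisements:

import heapq

def spf(topology, source):
    # topology: {node: {neighbor: link_cost}}
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, link_cost in topology.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return dist                           # lowest cost from source to every node

topology = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
print(spf(topology, "A"))                 # {'A': 0, 'B': 1, 'C': 3}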

Link-state algorithms send small updates everywhere.
Distance-vector algorithms send larger updates only to their neighbors.

Link-state converges more quickly and is less susceptible to routing loops, but it requires more CPU
utilization and memory, which makes it more expensive to implement and support.

Balanced Hybrid Routing Algorithms use distance-vectors to determine the best path; they are also aware of
topology changes, which trigger routing database updates. This is more efficient than the link-state
algorithm in terms of bandwidth, memory, and CPU utilization.

Examples:

      OSI’s IS-IS                                             Cisco’s EIGRP

EIGRP – has a single integrated algorithm, which supports path selection and packet switching for IP, IPX,
and AppleTalk networks; replaces the native algorithm for each protocol, which is no longer required;
creates a separate routing table for each protocol; saves network and router resources and simplifies
network administration.

Basic Router Configuration
Router configuration components are either external or internal. All Cisco routers have an
asynchronous or console port for initial configuration.

Internal Components

ROM                   bootstrap program, OS, and power-on-diagnostics are stored here; and contents
                      are retained.

Flash Memory          EPROM, holds IOS image and microcode; contents are retained.

RAM/DRAM              stores operating information, ie, routing tables and the running configuration;
                      provides caching and packet buffering; contents are not retained.

NVRAM                 stores startup-configuration file; contents are retained.

The system bootstrap software in ROM executes and searches for valid router OS or Cisco IOS image,
which can be loaded from ROM, flash memory, and a TFTP server. Where the image is loaded from is
specified by the boot field of the configuration register. The default setting is to boot from flash
memory. If the router does not find a valid IOS or discovers a corrupt configuration file, then the system
enters ROM monitor mode.

The configuration file, stored in NVRAM or a TFTP server, is loaded into memory and executed one
line at a time. These commands start the routing process, define interface addresses, and set media
characteristics. If no configuration file exists, then the system enters setup mode.

EXEC is the Cisco command-line-interface and has two modes:

USER                   indicated by device name followed by the >; from USER you can connect to
                       remote devices, make temporary changes to terminal settings, perform basic
                       tests, and list system information.

PRIVILEGED             can set operating parameters, perform detailed examination of the router’s state,
                       and access global and other included configuration modes.


You can specify the source of configuration commands as being from a terminal,
network, and memory. From ROM monitor mode you can boot the device or perform
diagnostics.

RXBoot mode is a special mode that is entered by modifying the settings of the
router’s configuration register and then rebooting the router. It provides a subset of the
IOS and helps the router boot when it can not find a valid IOS in flash memory. It is
designated by the hostname followed by the <boot> prompt.

EXEC has a default timeout of ten minutes. If you set the interval to 0, then it is the same
as having no timeout, or you can use the no exec-timeout command. To exit privileged mode back to
user mode, type disable. To exit user mode, type logout.

There are three types of on-line help:

General                       Performed by typing ?

Word Help                     Performed by typing a character string followed by a “?”, e.g., cl?

Command Syntax Help           Performed by typing a command followed by a “ ?”, e.g., clock ?

CTL+P                         repeats the previous command

CTL+A                         moves to the beginning of the command line; CTL+B moves back one character.

ESC+B                         moves back one word on the command line.

show version gives the following:

      configuration of system hardware                       names of configuration files
      software version                                       sources of configuration files
      boot image                                             how system was restarted
      up-time

sh processes gives the following:

PID – ID # of each process
Q – tells queue priority (H, M, L)
Ty – tells the status of the process
PC – current program counter
Runtime – denotes CPU time that the process has used.
Invoked – number of times CPU has been used for each process.
uSec – microseconds of CPU time for each process invocation
Stacks – bytes
TTY – which terminal controls the process
Process – name of process

show memory – shows how the management system allocates memory for different
purposes. The first section includes summary stats about the system memory allocator
activities. Shows the amount of memory in use and the largest available free block.
The second section of display is a block-by-block listing of memory in use.

show buffers – shows statistics for the router’s buffer pools (show flash shows the size of the files and the amount of flash memory that is free).

show interfaces
show protocols
show ip protocols
show ip route

A configuration register of 0x2102 determines that the router will load the IOS image from flash
memory.

Autoinstall allows configuration of a router remotely. The new router must be connected
to an existing router on a WAN or LAN interface. The existing router acts as a bootstrap
protocol (BOOTP) or RARP server and gives the new router an IP address. The existing router also
contains the IP helper address, which is the address of the TFTP server. The TFTP server
provides the hostname for the new router. If the file does not exist on the TFTP server,
then it will make use of a DNS server. The new configuration file is downloaded from
the TFTP server. The new router sends a SLARP or BOOTP request for an IP address.
Once the IP address is obtained, the router requests a file from the TFTP server to resolve
the IP address to a hostname, which is supplied by a file called network-config. Finally the
new router uses its hostname to request the file hostname-config that contains the specific
configuration for the router. If the TFTP server fails to provide the new router with a
hostname, then the router requests a DNS server to obtain IP-address-to-hostname
resolution. If the TFTP server can’t send a hostname-config file, then it sends a generic
configuration file called router-config. You can then telnet into the router and make the
necessary configuration changes.

There are two configurations associated with each router:
1) startup-config – stored in NVRAM and is accessed at startup or restart
2) running-config – stored in RAM

Cisco commands are not case sensitive.

The IOS image can be loaded in three ways:
1) flash memory
2) TFTP server
3) ROM

You can configure the router to load the image from any of the above in case one of them
fails. The configuration-register boot field tells how and where the image is loaded.

In ROM monitor mode use the o command to modify the configuration-register. From
global configuration use the config-register command. The boot field values are as
follows (see the sketch after this list):
0x0 - tells the router to boot in ROM monitor mode
0x1 - tells the router to load the image from ROM
0x2 - tells the router to load the image from flash memory
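
A small Python sketch that decodes the boot field as the low-order nibble of the configuration register, using the values listed above; treating every value from 0x2 through 0xF as a flash boot is a simplification:

# Decode the boot field (low-order nibble) of the configuration register.
BOOT_FIELD = {0x0: "ROM monitor mode",
              0x1: "boot image from ROM",
              0x2: "boot image from flash memory"}

def boot_behavior(config_register):
    field = config_register & 0xF                        # isolate the boot field
    return BOOT_FIELD.get(field, "boot image from flash memory")

print(boot_behavior(0x2102))            # boot image from flash memory (the default)
print(boot_behavior(0x2100))            # ROM monitor mode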

show flash – shows the system image file
show version – shows the name of the running image

Each ! means that ten UDP segments have been transferred.
Each V indicates successful checksum verification.

Cisco IOS allows for both DOS and UNIX naming conventions.

Erase startup-config – clears the NVRAM of this file.

line enables passwords on terminal lines.
To change the console password: line console 0; login; password Cisco
To change the telnet password: line vty 0 4; login; password Cisco
Cisco Discovery Protocol (CDP) enables access to configuration information on other
routers; physical media supporting the subnetwork access protocol (SNAP) connect CDP
devices, including LANs, frame-relay, SMDS WANs, and ATM; CDP involves frame
exchange at the Data-Link Layer, so only neighbors can exchange CDP information;
CDP runs by default regardless of routing protocols. Since CDP operates at the Data-Link
Layer, two or more CDP devices can learn about each other despite running different Network
Layer protocols. Routers running CDP exchange information about any protocol entries
with their neighbors.

Show CDP interfaces
Show CDP entry – displays a single cached CDP entry; shows all Layer 3 addresses of
neighbors (IP, IPX, AppleTalk, CLNS, etc.)
Show CDP neighbors – only returns information from neighboring routers; displays CDP
updates received, neighbor router IDs, local port type and number, holdtime, device
capability code, platform, and remote port type and number.
Show CDP neighbors detail – combines show CDP entry and show CDP neighbors
You can use just the hostname of a router to establish a telnet session with it if used with
DNS. Use the escape sequence (Ctrl+Shift+6, then X) to toggle back to the source router
without breaking the telnet session.

Show sessions gives details of open telnet sessions.

By entering the session number you can resume a telnet session with a particular router. Use
disconnect followed by the session number to exit the telnet session with that particular router; use
exit or quit at the remote router to end the session with that router.

You can test internetworking connectivity by following the OSI layers, beginning at the
upper layers by using telnet, ping, and trace. Low ttl values cause routers to discard the
probe initiated by the trace command. Several probes are sent at each ttl level and the
round-trip time is displayed.
Network Layer tests include the show ip route command to determine whether a routing
table entry exists for the target network.

Show interface – displays line and Data-Link Layer protocol status; line is triggered by a
carrier detect signal, which is the Physical Layer; line protocol is triggered by keepalive
frames, which refer to the Data-Link Layer. If line is up and line protocol is down, then
there is a connection problem. If both line and line protocol are down, then there is an
interface problem. If line is administratively down and line protocol is down, then
interface is disabled.

Show interfaces – displays realtime stats; for example, an increasing number of input errors may
indicate faulty equipment or a noisy line.

Debug – shows what protocol messages are being sent and received by a router; the
debug command often generates data that is of little use for specific problems, has high
overhead, and can disrupt a router’s operation. Use the debug command only when the
problem has been narrowed to a specific area.

Copy tftp startup-config merges additional router configuration information from a TFTP
server.

Cisco and IP addressing

TCP/IP was developed for the Defense Department and was later included with the
Berkeley distribution of UNIX. The Defense Advanced Research Projects Agency (DARPA)
developed the TCP/IP protocol suite and implemented a network called ARPAnet that
later became the Internet.

Of the TCP/IP protocol suite, TCP, which provides reliable sequenced delivery of packets,
and IP, which provides packet delivery, are the two best known. TCP/IP can handle high
error rates by re-transmitting and re-routing lost or corrupt data. TCP/IP is very scalable
with little overhead.

A network implemented with TCP/IP is a packet-switched network, which transmits
information across the network in small segments called packets. The TCP/IP packet
format consists of the origin and destination of the packet, the type of packet, and the way
packets are acted on by upper layer protocols.
The TCP/IP protocols map closely to the OSI model, which is a description of the ideal
network.

                           OSI MODEL                               TCP/IP
                          APPLICATION
                         PRESENTATION                         APPLICATION
                             SESSION
                           TRANSPORT                         TRANSPORT
                            NETWORK                           INTERNET
                           DATA-LINK                     NETWORK INTERFACE
                            PHYSICAL

The Internet layer provides connectivity and path selection between interconnected
networks. The following operate in this area:

Internet Layer Protocols (Network Layer)
IP (Internet Protocol)

Is the fundamental protocol; provides no error checking of data or reliability; transports
data in packet format with best-effort delivery; datagrams are not guaranteed to arrive
in sequence, or at all. The IP checksum only checks the header field for errors, not the data
field. IP defines the form that packets must take and the ways to handle packets when
they are transmitted or received. The packet format is called the IP datagram, which has a
header field with sender and receiver addresses and a data field.

The header has the following fields (a parsing sketch follows the list):

VERS – indicates the version of IP being used
HLEN or IHL – indicates the datagram header length in 32-bit words
Type of Service – specifies how upper layer protocols would like the datagram handled
Total Length – specifies total length of the packet including the header and data in bytes
Identification – contains an integer for sequencing
Flags – 3-bit field; the 2 lower bits control fragmentation; one bit specifies whether the packet can be
fragmented, the other bit specifies whether the packet is the last in a series of fragmented
packets.
Fragmentation offset – contains the position in the complete message of the submessage
contained in the datagram, which enables IP to reassemble fragmented packets in the
proper order.
TTL (time-to-live) – maintains a counter that gradually decrements to zero, at which point
the datagram is discarded, which keeps packets from endlessly looping.
Protocol – indicates which upper-layer protocol receives the incoming packet; these
upper-layer protocols are identified by numbers (TCP-6, UDP-17).
Header checksum – ensures IP header integrity, not the data
Source address – identifies the sending node
Destination address – identifies the receiving node
Option – allows support for security, network testing, debugging, and other features
Data – upper-layer information
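
For illustration, a minimal Python sketch (using the standard struct module) that unpacks the fixed 20-byte portion of an IP header along the lines of the field list above; it ignores IP options, and the sample addresses are invented:

import struct

def parse_ip_header(raw):
    # Unpack the 20-byte fixed header: version/IHL, TOS, lengths, fragmentation, TTL, etc.
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, protocol, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,                 # VERS
        "header_len_words": ver_ihl & 0xF,       # HLEN/IHL, in 32-bit words
        "tos": tos,
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": protocol,                    # 6 = TCP, 17 = UDP
        "checksum": checksum,
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }

example = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                      bytes([10, 1, 1, 1]), bytes([10, 2, 2, 2]))
parsed = parse_ip_header(example)
print(parsed["source"], "->", parsed["destination"])   # 10.1.1.1 -> 10.2.2.2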


Internet Control Message Protocol (ICMP)

Is implemented by all TCP/IP hosts; ICMP messages are carried in IP datagrams and are
used to send error and control messages between hosts and routers. It was principally created to
report routing failures back to the source. ICMP messages are sent if a host needs to
know whether another host is available, needs to determine which network it is attached to, needs to
determine a router address on a subnetwork, or needs to notify the internetwork of congested
or failed links.

Host unreachable is an ICMP message sent by routers. Information supplied by ICMP is
used by upper-layer protocols to recover from transmission problems. ICMP can detect
network trouble, ie, PING, which uses the ICMP echo request and echo reply. ICMP
types of messages include:

Redirect – provide more efficient routing
Time exceeded – informs sources that a datagram has exceeded its TTL

Router advertisements and router solicitations determine the addresses of routers on
directly attached subnetworks, which enables new nodes to discover the subnet mask
currently being used.

Address Resolution Protocol (ARP)

Maps a known IP address to a physical MAC address, which allows communication on
multi-access media like Ethernet; sends ARP request messages and checks the ARP cache;
IP acquires a host’s MAC address by broadcasting an ARP request; all IP hosts receive ARP broadcasts; the
host with the IP address sends its MAC address back to the source host with an ARP
reply; ARP maps IP addresses to MAC addresses; the ARP cache is built from each
broadcast and reply, and is used to reduce ARP broadcasts. When sending and receiving
devices are on the same subnet, they must use MAC addresses to communicate.
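
To make the caching idea concrete, a toy Python sketch; the addresses and the network_hosts dictionary are invented stand-ins for the actual ARP broadcast and reply:

# Toy ARP cache: consult the cache first, otherwise "broadcast" and cache the reply.
network_hosts = {"10.1.1.5": "00:0c:29:aa:bb:cc", "10.1.1.6": "00:0c:29:dd:ee:ff"}
arp_cache = {}

def resolve(ip_address):
    if ip_address in arp_cache:                 # cache hit: no broadcast needed
        return arp_cache[ip_address]
    mac = network_hosts.get(ip_address)         # stands in for the ARP broadcast/reply
    if mac is not None:
        arp_cache[ip_address] = mac             # cache the reply to reduce broadcasts
    return mac

print(resolve("10.1.1.5"))    # triggers the "broadcast"
print(resolve("10.1.1.5"))    # answered from the ARP cache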

Reverse Address Resolution Protocol (RARP)

Maps physical addresses to IP addresses; both ARP and RARP are implemented
directly on top of the Data-Link Layer. RARP uses broadcasts, and is usually used by a
diskless workstation that doesn’t know its IP address at bootup.

Transport Layer Protocols

Transmission Control Protocol (TCP)

Connection-oriented protocol that provides reliable, full-duplex data transmissions;
initiates handshaking, controls packet sequencing, performs flow control, and handles errors.
Controls setting up the connection with the destination computer and guarantees delivery of
packets. It sequences packets in the correct order when they arrive at their destination;
controls flow so that the receiving computer’s buffers can hold the data; provides a checksum
field to ensure header and data integrity. TCP does not acknowledge lost or corrupt
packets, which signals the sender to re-transmit. TCP is the ideal protocol for
session-based data transmission, ie, client-server applications and FTP. TCP’s reliability
comes at the expense of overhead and reduces performance, since additional bits are
required by TCP headers to provide the following: sequencing of information, a checksum
to ensure header and data reliability, and acknowledgements (ACK’s), which generate additional
traffic.

The TCP packet fields include:

   1. Source port - identifies ports at which upper layer source processes receive TCP services
   2. Destination port - identifies ports at which upper layer destination processes receive TCP
      services
   3. Sequence number field - specifies the first byte of data
   4. Checksum - confirms integrity of the header and data

TCP uses a three-way handshake/open connection sequence: host A sends the first
sequence number to host B, which host B increments by one and sends back as an
ACK; host B also sends a sequence number of its own to host A; host A increments the
sequence number from host B and sends it back as an ACK; the connection is then established
and data can be exchanged.
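
A small Python sketch of the sequence-number bookkeeping in that handshake; the numbers are illustrative only, not real TCP segments:

import random

# Host A: SYN with its initial sequence number.
seq_a = random.randint(0, 2**32 - 1)
syn = {"seq": seq_a}

# Host B: SYN-ACK acknowledges A's number + 1 and offers its own sequence number.
seq_b = random.randint(0, 2**32 - 1)
syn_ack = {"seq": seq_b, "ack": syn["seq"] + 1}

# Host A: final ACK acknowledges B's number + 1; the connection is now established.
ack = {"seq": syn_ack["ack"], "ack": syn_ack["seq"] + 1}

print(syn, syn_ack, ack)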

Window size refers to the number of messages that can be transmitted before an ACK is
received. The host must receive the ACK before more messages are sent. Window sizes
can be negotiated dynamically, called a sliding window, which makes more efficient use of
bandwidth. The larger the window size, the better the performance. TCP ACK’s refer to
the packet that is expected next. If the source host does not receive an ACK for the
window-size packets it sent in a given time, it will automatically send the same packets
again.

User Datagram Protocol (UDP)

Connectionless, unreliable, simple protocol that exchanges packets without ACK’s or
guarantees of correct sequencing or packet delivery; errors and re-transmissions are handled
by upper layer protocols. UDP is used for transmitting data over very reliable networks;
like TCP it uses port numbers to pass information to upper layer protocols. UDP defines
a set of destination applications as protocol ports; each application that wants to send or
receive information is assigned a 16-bit number called the port of the program; UDP
defines 2 types of ports: commonly used ports, ie, FTP-21, and dynamically bound ports,
which have no fixed number and can change. This allows several client applications with
different IP addresses to use the same port number. UDP packets are delivered to the
application that matches the port number and IP address; the UDP datagram is enclosed
in one or more IP packets. IP sends the IP packet to the correct node, extracts the
UDP packet, and delivers it to the UDP layer software, which in turn delivers UDP data to
the specified destination protocol port. The UDP header has only 4 fields (see the sketch after this list):
1) Source port
2) Destination port
3) Length
4) UDP checksum – optional; when used it validates both header and data
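
A short Python sketch of that 8-byte header using the standard struct module; the port numbers are only examples, and the checksum is left at zero (i.e., unused):

import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)                      # header plus data, in bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

def parse_udp_header(segment):
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return src_port, dst_port, length, checksum, segment[8:]

segment = build_udp_header(5000, 69, b"hello")     # 69 = TFTP, a common UDP port
print(parse_udp_header(segment))                   # (5000, 69, 13, 0, b'hello')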

Application Layer Protocols

Manages the user interface and communications among applications; these protocols
include:

File Transfer Protocol (FTP)

Provides a reliable way of moving files, allows remote login, directory inspection,
command execution, and file manipulation. Does not provide a transparent service and
uses the UNIX directory tree structure.

Trivial File Transfer Protocol (TFTP)

Simplified version of FTP and is used by routers.

Network File System (NFS)

Distributed file system protocol suite developed by Sun Microsystems that allows
remote file access. NFS is only one of the protocols in the suite; some of the other
protocols include Remote Procedure Call (RPC) and External Data Representation
(XDR). NFS is part of a larger architecture that Sun calls Open Network Computing
(ONC); uses UDP at the Transport Layer and allows transparent access to remote
resources.

Simple Mail Transfer Protocol (SMTP)

Uses TCP and IP for message exchange; does not provide a user interface; transmits the
entire message from one node to the next until the destination is reached.

Telnet

Terminal emulation protocol that allows login to remote systems; can be used as a
debugging tool since it can impersonate different applications, ie, FTP and email; used by
routers.


rlogin

similar to telnet and is offered in most UNIX implementations.

Simple Network Management Protocol (SNMP)

Used almost exclusively in TCP/IP networks; enables network admins to monitor a
network from a single workstation, called the SNMP manager; other network devices
are called SNMP agents (TCP/IP hosts, routers, terminal servers, or other SNMP
managers). Routers provide information about routing tables and the amount of traffic;
mainframe hosts provide information about services and client connections.

Each layer of the TCP/IP protocol stack provides services to the layer above and
expects services from the layer below.

Addressing
In a TCP/IP environment communication between hosts is transparent, since each node
has at least one unique 32-bit IP address, which is represented in dotted decimal
notation: 207.64.77.1; each part of the address is represented in binary as an octet, or
group of 8 bits. There are 5 classes of IP addresses, which are indicated by the leftmost
bits:

Class A – 0                                             Class D – 1,1,1,0
Class B – 1,0                                           Class E – 1,1,1,1
Class C – 1,1,0

In a Class A address (also known as /8), the first octet contains the entire network
portion of the address; supports 126 networks with 16,777,214 nodes; capable of
supporting very large networks; are no longer available (sold out); the decimal value of
the first octet is between 1 and 126.

In a Class B address (also known as /16), the first 2 octets contains the network portion
of the address; supports 16,384 networks and 65,534 nodes; the decimal value of the first
octet is between 128 and 191.

In a Class C address (also known as /24), the first 3 octets contains the network portion of
the address; supports 2,097,152 networks and 254 nodes; the decimal value of the first
octet is between 192 and 223.

In a Class D address, there is no network portion of the address; used for multicast
packets; a host can discover the addresses of routers on its local segment by
sending a request to 224.0.0.2, and any local router will respond; the decimal value of the
first octet is between 224 and 239.


In a Class E address, there is no network portion of the address; reserved for future
additions to the IP addressing scheme; the decimal value of the first octet is between 240
and 255.
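
As a worked example, a small Python sketch that classifies an address by the decimal value of its first octet, following the ranges above; 127 falls outside the Class A range because it is reserved for loopback:

def address_class(ip_address):
    first_octet = int(ip_address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "A"
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    if 224 <= first_octet <= 239:
        return "D"
    if 240 <= first_octet <= 255:
        return "E"
    return "reserved (127 is loopback, 0 is unspecified)"

print(address_class("207.64.77.1"))   # C
print(address_class("10.0.0.1"))      # A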

Router interfaces can have more than one IP address, which gives you two logical
networks on the same physical wire; communication between the two networks is handled
by the router, but hosts with dissimilar addresses can not talk to each other directly.

The network address without a specific node address was used in early implementations
of TCP/IP as a broadcast address. An IP address of 255.255.255.255 refers to all
networks or hosts and is reserved for broadcasts to a network.

Routing tables contain entries for network or wire addresses, including subnets; they usually do
not contain host address information; however, they do contain MAC addresses.

Most applications will accept either an IP address or a host name argument. Host names
must be resolved to IP addresses; there are three commonly used methods of mapping
names to IP addresses:

1) host tables and net tables
2) DNS
3) Network Information Service (NIS)

A host table is an ASCII text file with IP addresses associated to host names; used by
small to moderate networks. UNIX stores the host table in a file called /etc/hosts.
Network tables are similar; however, they store network names, addresses, and aliases of
networks; addresses in a network table do not refer to individual hosts. UNIX stores the
network table in a file called /etc/networks.

DNS is widely used on the Internet; DNS divides the Internet into domains, which are
arranged in a hierarchical order starting with the root domain. Each domain must contain
a name server; name servers contain host address databases for all hosts in the domain.
DNS servers can resolve any IP address on the Internet by referring requests to other
DNS servers. The following are some of the top level domains:

      MIL – military                                         GOV – government
      EDU – education                                        ORG – organizations
      COM – business                                         NET – gateways and hosts

There are also separate top level domains for most countries: US, UK, DE (Germany), etc.

NIS is often used on IP networks; it does not provide for the entire Internet; it is a way for
network admins to maintain addresses for a group of computers in a single database; it
makes use of domains, but they are not the same as DNS domains.

In the US valid IP addresses are assigned by the InterNIC Internet Registration Service
operated by Network Solutions, Inc., which also registers domain names; in Europe it’s
called RIPE. Many ISP’s can supply domain names with the approval of the InterNIC;
the InterNIC can help you determine how many Internet addresses you need based on
how many individual network segments you have and how many systems on these
segments are connected to the Internet.

If a router does not recognize a host name, it sends a DNS request for an IP address
corresponding to the host name. DNS refers the request to its domain servers until a
matching IP address is found; DNS returns an IP address to the router, which can then
choose a path.
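
A minimal Python sketch of host-name-to-IP-address resolution using whatever resolver the local system is configured with (host table or DNS, as described above); the host name is just an example:

import socket

def resolve_host(name):
    try:
        return socket.gethostbyname(name)     # returns a dotted-decimal address
    except socket.gaierror:
        return None                           # name could not be resolved

print(resolve_host("www.cisco.com"))          # prints an IP address, or None
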
Subnetting
Utilization of subnets makes more efficient use of IP addressing; subnets are used to divide
one larger network into a number of smaller subnetworks. All hosts on the subnetted
network must share the same network address, so additional information is required to
distinguish them from each other; all subnets on a network must have the same network
address.

Subnet addresses are extensions of the network number; subnetting involves
reassigning part of the node address to act as an additional network address. Subnet
masks are used to subnet networks; routers use subnet masks to identify which part of the
IP address is the network number and which part is the host address. All routing
decisions are based on network and subnetwork numbers.

There are 3 steps in assigning addresses in a network that has been subnetted:

      Defining a subnet mask
      Assigning an address to each subnet
      Assigning IP addresses to each node in each subnet

A subnet mask is a 32-bit number that is applied to an IP address to override the default
network/node division of the IP address. The layout of the mask field is as follows:

binary 1 for the network bits                   network           subnet     hosts
binary 1 for the subnetwork bits          11111111.11111111 . 11111111 . 00000000
binary 0 for the host bits

To subnet your network you will need to alter some of the 0’s in the mask to 1’s.
Routers extract the destination address and retrieve the mask; routers perform a logical
AND operation to obtain the network number; during the AND operation the host portion
of the address is removed, because any bit ANDed with 0 will return 0; routing decisions
are based on network and subnetwork numbers only. Rules for subnetting addresses are
defined by the Internet protocol specification RFC 791 and the subnetting specification RFC
950. The table below summarizes the subnetting rules and specifications; a small sketch of
the AND operation follows the table.




   # of masked bits          binary             decimal          poss. subnets           poss. hosts
          1                 10000000              128                  0                    126
          2                 11000000              192                  2                     62
          3                 11100000              224                  6                     30
          4                 11110000              240                 14                     14
          5                 11111000              248                 30                      6
          6                 11111100              252                 62                      2
          7                 11111110              254                126                      0
           8                11111111              255                254                     N/A
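
A small Python sketch of the logical AND described above, using a hypothetical address and a /20 mask (255.255.240.0); the helper names are made up for illustration:

def to_int(dotted):
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(value):
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

address = to_int("172.16.35.7")
mask    = to_int("255.255.240.0")
print(to_dotted(address & mask))    # 172.16.32.0 -- host bits removed by the AND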


The number of possible subnets is derived from the number of 1’s in the subnet mask
minus the number of 1’s in the default mask; raise 2 to the power of this number and subtract 2.
Consider the following example:

subnet mask            binary
255.255.240.0          11111111.11111111.11110000.00000000
default mask           binary
255.255.0.0            11111111.11111111.00000000.00000000
bits in subnet mask                      1111

So 2^4 = 16, and 16 - 2 = 14 possible subnets. The number of hosts is
derived by adding up the 0’s and using the total as an exponent of 2; in the
example above there are 12 0’s in the subnet mask, so 2^12 = 4096, and 4096 - 2 = 4094 possible hosts.
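
The same arithmetic as a short Python sketch; the helper name bit_count is made up for illustration:

def bit_count(dotted_mask):
    # Count the 1 bits in a dotted-decimal mask.
    return sum(bin(int(octet)).count("1") for octet in dotted_mask.split("."))

subnet_bits = bit_count("255.255.240.0") - bit_count("255.255.0.0")   # 20 - 16 = 4
host_bits   = 32 - bit_count("255.255.240.0")                         # 12

print(2 ** subnet_bits - 2)   # 14 possible subnets
print(2 ** host_bits - 2)     # 4094 possible hosts per subnet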

The first number available in a subnet is used as the subnet address and can not be
used; the last number available in the subnet is used as the broadcast address and can not
be used either. You can not use all 1’s or all 0’s in the network and host portions of the IP
address.

It is difficult to predict how many subnets you may need when designing a network, so it
becomes difficult to decide on a suitable mask. RFC 1219 describes an addressing
strategy for assigning IP addresses that allows changes to the mask without reassigning
existing IP addresses.

Subnet addresses should be assigned in ascending numerical order. The RFC 1219
addressing strategy is that the bits nearest the division between the subnet and host
portions of the IP address are the last to be changed from 0’s to 1’s. Example:

                              11111111.11111111.10000000.00000001
                              11111111.11111111.11000000.00000011
                              11111111.11111111.11100000.00000111
                              11111111.11111111.11110000.00001111
                              11111111.11111111.11111000.00011111
                              11111111.11111111.11111100.00111111
                              11111111.11111111.11111110.01111111
Recent routing protocols allow the use of mask 255.255.255.128, if you issue the
command: ip subnet-zero

The Cisco IOS listens on e0 for a DNS request for the name new-router.cisco.com; it uses
the ARP protocol to locate an unused IP address near the IP address of the device that
issued the DNS request. The Cisco IOS uses the address to which there was no response
as the IP address of the e0 interface. Router configuration is achieved by one of the
following:

      System configuration dialog – used if unfamiliar with Cisco commands
      Configuration mode - used if familiar with Cisco commands
      Autoinstall – automatic but only works with other Cisco routers

You must disconnect the WAN cable from the router if you want to use the system
configuration dialog or configuration mode; otherwise autoinstall will run.

Secondary IP addresses are useful in that you can have 2 logical subnets on the same
physical subnet or wire; this was a popular approach before subnetting became
common; it creates a single logical network from subnets on separate networks.

Serial interfaces can be unnumbered and still process IP packets; you assign the name of
another interface with an IP address in the router attached to the unnumbered interface.
The unnumbered serial interface uses the specified IP address as a source address when it
generates packets; it also uses the assigned interface address to determine which routing
processes are sending updates over the unnumbered interface.

Ping uses the ICMP protocol to determine if protocol packets are being routed.

Trace can be used in USER or PRIVILEGED mode; protocols that support trace are: IP,
AppleTalk, VINES, and CLNS.

Cisco IOS uses 3 forms of address resolution:

      ARP and Proxy ARP – dynamic
      Probe – similar to ARP; developed by HP for 802.3 networks
      RARP

When configuring address resolution, you can perform the following:

      Define static ARP cache entries
      Set ARP encapsulations
      Disable proxy ARP

Most hosts support dynamic address resolution, so you do not need to specify static ARP
cache entries; TCP/IP provides dynamic address resolution using ARP and RARP.

Show interface – will display type of ARP and the timeout value
Show ARP – examines the contents of the ARP cache and can be used in USER mode
Clear ARP – clears the contents of the ARP cache except for static entries and can only be
used in PRIVILEGED mode.

ARP encapsulation controls the interface-specific mapping of 32-bit IP addresses into 48-
bit MAC addresses, ie, Ethernet (ARPA). Standard Ethernet ARP encapsulation is
enabled by default; you can change to SNAP or HP Probe by issuing the following
command: ARP SNAP or ARP PROBE

Cisco IOS can use probe whenever it attempts to resolve 802.3 or Ethernet local data-link
addresses.
Cisco IOS uses proxy ARP to help hosts with no knowledge of routing to determine the data-
link addresses of hosts on other networks or subnetworks; proxy ARP is enabled by
default; you can disable it by issuing the command: no ip proxy-arp. A proxy ARP router
sends its MAC address to a host on the network that sent an ARP request for a host on the
other network; see RFC 1027.

Cisco IOS maintains a table of host names and associated IP addresses, which speeds up
name lookups; host names can be associated with IP addresses either statically or
dynamically (DNS). Use the following command to statically assign host names to IP
addresses: ip host NAME A.B.C.D

To map domain names you identify the host names, specify a name server, and enable
DNS: ip domain-name alvaardo.isd.tenet.edu. An IP host name that does not have a
domain name will assume the default domain that you specified. Examples of domain
commands are as follows: ip domain-list NAME and ip name-server A.B.C.D; Cisco
IOS supports up to 6 DNS servers. DNS is enabled by default; to disable DNS issue the
following command: no ip domain-lookup.

Cisco and Routing Protocols
Routers learn of routes by:

      static routing                                           dynamic routing
      default routing

Routing updates are not sent on links if they are static, to conserve bandwidth. Static
routing is private and not advertised.

A stub network refers to a partition of the internetwork that can only be accessed by one
route.

ip route uses the following arguments:

1) network                                                4) interface (optional)
2) mask ( optional depending on dest.)                    5) distance (optional)
3) address

The default route is also known as the route of last resort.
ip default-network

Success of dynamic routing depends on two basic router functions:
1) maintenance of routing table
2) timely distribution of routing updates to other routers

Routers discover routes to remote destinations; routers advertise routes and costs to other
routers.
AS – autonomous system

Interior Routing Protocols
RIP (Routing Information Protocol)

distance-vector; maintains best route information; no knowledge of exact network
topology. RIP tables have the following information:

1) destination network                                    3) timers – regulate performance
2) next hop                                               4) a single metric (hop count w/15 max)

RIP routing update timers ensure that each router broadcasts a complete copy of its
entire routing table to all its neighbors at regular intervals, typically 30 seconds.
The combination of the route invalid timer and the flush timer is used to remove an
invalid route from the routing table. Holddowns are used to prevent regular
update messages from re-instating invalid routes. Split horizon (information is never sent
back the way it came) and poison reverse (used to defeat larger routing loops)
updates are used to prevent routing loops. RIP is best used in small to
moderate size homogenous networks; its small hop count limit and single metric don’t allow
much flexibility in complex environments.
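
A rough Python sketch of what a single RIP table entry might hold, with the 15-hop limit described above; the field names are invented, and the timers are reduced to a single counter:

RIP_INFINITY = 16                      # 16 hops means "unreachable"

class RipRoute:
    def __init__(self, network, next_hop, metric):
        self.network = network
        self.next_hop = next_hop
        self.metric = min(metric, RIP_INFINITY)
        self.invalid_timer = 0         # seconds since the last update for this route

    def reachable(self):
        return self.metric < RIP_INFINITY

route = RipRoute("10.2.0.0", "10.1.1.2", 3)
print(route.reachable())               # True
route.metric = RIP_INFINITY            # poisoned or timed-out route
print(route.reachable())               # False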

IGRP (Interior Gateway Routing Protocol)

Distance-vector; designed for an AS with complex topology and media with diverse
bandwidth and delay characteristics; uses a combination of metrics (delay, bandwidth,
reliability, and load); metric can be manually weighted; permits multi-path routing,
makes use of holddowns, split horizon and poison reverse, and a number of timers for
performance (update, invalid, holddown, flush – all with default values)

OSPF (Open Shortest Path First)

Link-state protocol;
routers send only the portion of the routing table that defines the state of their own links to all routers on
the internetwork; has full knowledge of the exact network topology; uses the SPF (Shortest Path First)
algorithm to calculate shortest paths; can also send routes to other AS’s; unlike RIP, it can
operate in a hierarchy, which is the partitioning of an AS into logical areas to reduce routing traffic. The
hierarchy makes use of an OSPF backbone made up of border routers for distribution of routing
information between areas.

AS border routers learn of exterior routes through exterior gateway protocols, ie, EGP and BGP. SPF
routers use the OSPF hello protocol (which serves as keepalives) to learn of their neighbors. On multi-
access networks, the hello protocol elects a designated router to generate link-state advertisements
(LSA’s) for the entire network, which reduces routing protocol traffic and topological database size.
Additional features include: type of service (TOS), support for one or more metrics, and variable
length subnet masks (VLSM’s).

EIGRP (Enhanced Interior Gateway Routing Protocol)
Combines advantages from both distance-vector and link-state protocols (balanced-hybrid routing
protocol). Uses distance-vectors to determine best paths and topology changes to trigger routing
database updates; fast convergence; support for VLSM’s, partial bounded updates, and multiple
Network Layer protocols. EIGRP routers receive full routing table updates from their neighbors when they first
communicate with them; thereafter, only changes to a routing table are sent, as partial updates bounded only to
the neighboring routers that are affected by the change. Bounded or partial updates improve bandwidth
efficiency. EIGRP supports path selection and packet switching for more than one routed protocol (IP,
IPX, and AppleTalk).

A neighbor routing table records the address and interface of each new neighbor a router discovers; a
topology table holds destinations advertised by neighboring routers and assigns route states as either passive
(a destination is passive when the router is not performing re-computation) or active. If feasible
successors are available, a destination never has to go into the active state.
Feasible successors refer to neighbors that can be used for packet forwarding but are not part of a routing
loop; they provide the next least-cost path without introducing routing loops and require fewer re-
computations.

Supports route tagging which marks routes as internal or external. Route tagging allows customization
of routing. EIGRP has seamless interoperation with IGRP.


Exterior Routing Protocols
These protocols route between AS’s, and some can route within an AS; they use more complex algorithms
since they have to know a greater number of networks and routers.

(EGP) Exterior Gateway Protocol

The first to gain wide acceptance; dynamic with a simple design. It is not considered a true routing
protocol by today’s standards. Routing updates only specify that certain networks are accessible through
certain routers; it doesn’t use metrics, so it can’t make intelligent decisions or detect routing loops. The
three main functions of EGP are:

1) EGP routers establish a set of neighbors with which to share information
2) Poll neighbors to see if they are alive
3) Send updates containing accessibility information on networks within AS’s

The limited use of routing update messages places a topology restriction on this protocol. EGP uses the
following message types:

   1)   neighbor acquisition – tests whether a neighbor is alive
   2)   neighbor reachability – tests whether a neighbor is down
   3)   poll – acquires access information about networks on which remote hosts reside
   4)   error – identifies EGP error conditions

BGP (Border Gateway Protocol)
Exterior AS routing protocol created for the Internet; can be used within and between
AS’s. BGP neighbors that communicate between AS’s must reside on the same physical
network. Communication among BGP routers within the same AS ensures that each router
has a consistent view of the entire AS; it is used to determine which BGP router will serve
as the connection point to or from certain AS’s; it can detect routing loops. The initial
exchange between BGP routers is the entire BGP routing table; incremental updates are
sent as routing tables change. The BGP metric specifies the degree of preference of a
path. Metrics are usually assigned by a network admin and include: AS count and type of
link. BGP maintains a routing table with all feasible paths to a particular network;
however, it only advertises the optimal paths. BGP update messages consist of “network/AS path” pairs.
Other BGP messages include:

   1) Open – first message sent once a transport protocol connection is established.
   2) Notification – sent when an error is detected
   3) Keepalive – sent often enough to keep holddown timer from expiring
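
A minimal BGP peering sketch (the AS numbers, neighbor address, and advertised network are assumed for illustration only):

! local router belongs to AS 65000 and peers with a router in AS 65001
router bgp 65000
 neighbor 10.1.1.2 remote-as 65001
 network 192.168.10.0

Once the underlying transport (TCP) connection comes up, the Open, Keepalive, Update, and Notification messages described above are exchanged over it.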

Scalability

The concerns of routing protocols as they scale up to larger networks are:

   1) Convergence                                           3) Metric limits
   2) Update traffic

Slow convergence causes routing loops or network outages. A routing algorithm’s ability
to determine optimal routes depends on the metrics that it uses and their weights.
Examples of these include :

   1) Hop count                                             4) Port queues at each router
   2) Delay – time packet takes for trip                    5) Network congestion
   3) Bandwidth of intermediate links                       6) Physical distance

Distance-vector

Distance-vector routing, which uses direction and distance, is widely used and easy to configure. Distance-
vector routing algorithms periodically pass copies of their complete routing tables from router to
neighboring router. Distance-vector begins by identifying neighbors and accumulates network distances
until the best path is found; a router has no knowledge of the exact topology of the internetwork. When
topology changes occur, routing table updates must propagate in a step-by-step manner from router to
router. Routing tables include information about total path cost and next hop. Updates for topology
changes come in the form of periodic routing table updates.

An endless network routing loop is called counting to infinity. The hop count keeps rising until the
maximum is reached (15 for RIP), and then the route is considered unreachable. Holddown timers are
used to prevent periodic updates from overwriting an invalid route; the holddown period is calculated to
be just greater than the time needed to update the entire network with a routing change.

To prevent routing loops, distance-vector protocols make use of the following two mechanisms:
Poison reverse – used on large internetwork routing loops. When a network goes down, then a router
will advertise an infinite cost for that network, ie, 16 for RIP, which makes that router no longer
susceptible to incorrect update messages from its neighbors.

Split-horizon – used to solve routing problems between routing neighbors. Traffic can never be sent
back the way it came.

Whether you use poison reverse or split-horizon, you will reduce the time for convergence; however,
increased RIP traffic will occur. Update traffic is a concern for distance-vector protocols when scaling
up a network. RIP requires that each router broadcast its entire database to all its neighbors every
thirty seconds. Distance-vector traffic demands are not as great as those of link-state protocols.

Link-state

Link-state uses link-state packets (LSP's), topological databases, shortest path first (SPF) algorithms,
SPF trees, and routing tables; routing information updates are sent to all nodes using LSP's. Each router
sends only the portion of its routing table that describes the state of its own links to the routers on the
internetwork. Each router constructs a database representing the entire internetwork and runs an SPF
algorithm to construct a path tree to each destination.

OSPF (Open Shortest Path First)

This is the most common link-state protocol and is described in RFC 1247. Makes use of event-
triggered updates; the router that notices a topology change advertises the change to all routers
or to a designated router, which is called flooding. Convergence is achieved by each router keeping track
of its neighbors. A router constructs an LSP which lists its neighbors, router names, link costs, new
neighbors, changes in link costs, and links to neighbors which are down. Each router sends its LSP's
to all the other routers on the internetwork; the accumulated LSP data is used to construct an exact
internetwork topology map, and then an SPF algorithm is run to re-compute routes. Each time an LSP
causes a change to the link-state database, the link-state algorithm recalculates best paths and updates
the routing table, so every router takes topological changes into account when determining
shortest paths for packet switching. Distance-vector routing determines best paths by adding to the
metric value it receives as routing tables move from router to router. With link-state protocols,
each router works in parallel to calculate its own shortest paths; this allows fast convergence and is less
prone to routing loops.

Scalability concerns of link-state protocols are heavy memory demands on routers, since routers must
hold information from various databases, the topology trees, and routing tables. Link-state routers use
Dijkstra’s algorithm to compute SPF; the larger a network grows, the more links and routers are used
in the algorithm, which generates consistently higher demands on the routers’ processing power.
Packet flooding caused by LSP’s also reduces available bandwidth.

To control these effects network admins can reduce periodic distribution of LSP’s, which is known as
dampening; however, event-triggered topology changes are not affected. LSP updates can go to a
multicast group rather than flood all routers. A router can be designated as a depository for all LSP
transmissions. A hierarchy of router levels can be used whereby routers do not need to store and
process LSP’s from other routers not located in its area.

Link-state protocols also have a problem with unsynchronized updates and faulty LSP distribution,
which can be caused by different-speed links and can cause routers to receive conflicting LSP’s, which
impairs convergence. Parts of the internetwork that update quickly cause problems for parts that
update slowly. Differences in update speeds may eventually result in segmentation or partitioning.
Link-state protocols make use of time stamps, sequence numbers, aging schemes, and hierarchy to
combat these problems. Link-state protocols’ sophisticated controls for troubleshooting scalability are a
considerable advantage.

Route Summarization
Is a way of allocating multiple IP addresses so that they can be condensed into a smaller number of
routing table entries, which is also called aggregation or supernetting. Subnetting extends the prefix to the
right, whereas summarization collapses it to the left. Groups of routes can be identified as a single unit,
which reduces memory usage and routing protocol traffic. To work correctly, ensure that the multiple
IP addresses used all share the same higher-order bits. Routing tables and protocols must base their
decisions on 32-bit IP addresses whose prefix length can be up to 32 bits; routing protocols must
explicitly state the prefix length with a 32-bit IP address.
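
As an illustrative example (addresses assumed, not taken from this material), four contiguous /24 subnets that share their first 22 higher-order bits can be collapsed into one routing table entry:

172.16.8.0/24  = 172.16.00001000.0
172.16.9.0/24  = 172.16.00001001.0
172.16.10.0/24 = 172.16.00001010.0
172.16.11.0/24 = 172.16.00001011.0
                        (only the last 2 bits of the third octet differ)
Summary route: 172.16.8.0/22 (mask 255.255.252.0)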

Classful Routing Protocols

Examples are RIP and IGRP; they always consider the IP network class and automatically summarize by
major network numbers. Summarization by major network numbers cannot be changed in RIP or
IGRP. Since RIP and IGRP summarize automatically at network boundaries, only major networks are
advertised. Subnets are not advertised across network boundaries, so discontiguous networks are not
visible to each other. The summarization procedures used by classful protocols, ie, RIP and IGRP, are not
the most efficient way to aggregate routing information.

Classless Inter-domain Routing (CIDR)

Is a different type of route summarization technique used by BGP4. BGP4 uses CIDR to aggregate
blocks of class C networks as a single route; this reduces the number of routes advertised over the
internetwork. Classless routing is similar to another route summary technique called prefix routing.
Classless prefix routing allows contiguous blocks of hosts, subnets, or networks to be represented as a
single route. Classless and prefix routing allow much shorter routing tables, faster switching
performance, and reduced routing protocol traffic. OSPF and EIGRP support classless and prefix
routing. Unlike classful protocols, prefix protocols, ie, OSPF and EIGRP, can support discontiguous
networks, because addresses across different major networks are configurable with prefix protocols.
Cisco IOS also supports discontiguous subnets and provides an IP unnumbered feature that permits
discontiguous subnets separated by an unnumbered link.

You can use IP unnumbered, secondary addressing, OSPF, or EIGRP to support discontiguous
networks. When implementing prefix routing, routers look for the longest matching prefix in the
routing table, which supports specific routes, blocks of networks, or default routes. Prefix routing
allows mobile hosts. Network and subnet routes apply to hosts that do not move; when a host moves, a
specific route is advertised for that host. This individual host route is an exception, because it is not
located with the other hosts in the subnet. Cisco IOS’s local-mobility feature uses prefix routing to
track individual hosts that move. OSPF and EIGRP support mobile hosts.

Classful routing consolidates all hosts into subnet routes; bridges are aware of all the Data-Link Layer
addresses of every individual host, so classful routing is far more scalable than bridging. Internetworks
that support mobile hosts are in between classful routing and bridging in terms of scalability. The more
mobile hosts, the closer scalability gets to bridging performance.

Implementing Distance-Vector Routing Protocols

RIP is a classful routing protocol originally described by RFC-1058 and later updated by RFC-1723. RFC-
1723 allowed RIP to carry more information and added security features. Routing table entries include:

      Ultimate destination
      Hop count (metric)
      Next hop
      Timers
      Route change flag

Only information on a single optimal route for each specific destination is maintained in the routing
table; this is a consequence of its single metric (hop count). RIP sends periodic updates of its entire
routing table to all of its neighbors (usually every 30 secs); it also sends updates when there are topology
changes. Each update causes all routers to recalculate their routing tables and in turn send updates to
their neighbors until the changes are propagated through the network. If a better route becomes
available, RIP will also update its routing table.

RIP uses timers to regulate performance and set the periodic intervals:

The routing update timer sets how often a complete copy of the routing table is sent to all neighbors,
usually 30 seconds.
The route invalid timer is set to the length of time before a not-heard-about route is deemed invalid; once a
route is deemed invalid, neighbors are notified of this condition; usually 90 seconds.
The flush timer sets the time between a route becoming invalid and its being flushed from the routing table,
usually 270 seconds.

RIP Packet Format:

COMMAND | VERSION | ZERO | AFI | ZERO | ADDRESS | ZERO | ZERO | METRIC

Command – indicates whether the packet is a request or a response; a request asks the responding system
to send all or part of its routing table. A response is more often than not an unsolicited periodic routing
update message.

Version number – specifies version of RIP being used; used to signal differences and possibly
incompatible RIP implementations

Address Family Identifier – follows a field of 16 zeros; identifies the type of addressing scheme by
destination address for which update information is being given or sought; IP has value = 2

Address – follows another field of 16 zeros; contains the address of a destination, either to update
the recipient or to request an update.

Metric – follows 2 more 32-bit fields of zeros; specifies the hop count to the destination whose address
family and address were given in the previous fields.

Up to 25 destinations may be listed in any single RIP packet. RIP has features to maintain stability
during rapid topology changes:

      Hop count                                                Split horizon
      Holddown                                                 Poison reverse updates

While the holddown timer is in effect, routers ignore updates with a poorer metric than the original
metric. Routers will accept updates to reinstate a route from the router that initially informed them of
the invalid route. Routers will also mark the route as accessible if an update with a better metric
arrives from a neighboring router within the holddown period.

With split horizon, adjacent routers will not circulate conflicting information between them.

Show ip route displays what networks and interfaces the router has information on. When monitoring
RIP you may look at RIP updates, routing tables, and network connectivity. To view RIP updates you
turn on RIP debugging by entering: debug ip rip; to turn off: undebug ip rip. To test connectivity use
ping, trace, and telnet.
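
For context, a basic RIP configuration sketch (the network number is an assumed example):

! enable RIP and advertise all directly connected 10.0.0.0 interfaces
router rip
 network 10.0.0.0

With this in place, debug ip rip shows the 30-second updates being sent and received, and show ip route lists the RIP-learned networks (flagged with an R).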

IGRP

Hop count limit is 255; initially supported IP only, but later ported to run in Open System
Interconnection (OSI) Connectionless Network Protocol (CLNP) networks. Routers can calculate
distances to all nodes in an AS; use a combination (vector) of metrics known as a composite routing
metric. By default only bandwidth and delay are used by IGRP; the following factors are used in
routing decisions: internetworking delay, bandwidth, reliability, load, MTU, and weighting. All of
these can be set by network admins, but also have default settings.
IGRP has a wide range for its metrics:
Reliability – (1-255) 255 is best
Load – (1-255) 255 is worst
Bandwidth – (1200 bps – 10 gig)
Delay – expressed in units of 10 microseconds

This wide range of metric values provides suitable settings for internetworks with widely varying
performance characteristics.
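
With the default weightings (only bandwidth and delay considered), the composite metric is commonly expressed as metric = (10,000,000 / lowest path bandwidth in kbps) + (sum of path delays in microseconds / 10). A worked example with assumed link values:

Path over a 1544-kbps T1 link with a total delay of 21,000 microseconds:
   bandwidth term = 10,000,000 / 1544 = 6,476 (truncated)
   delay term     = 21,000 / 10       = 2,100
   composite metric = 6,476 + 2,100   = 8,576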

Metric components are combined in a user-definable algorithm, which offers much flexibility; IGRP
supports multi-path routing, known as unequal-cost load balancing, which allows traffic to be
distributed across up to 4 unequal-cost paths to improve overall throughput and reliability. The following
are rules for unequal-cost load balancing:

      Will accept up to 6 (4 by default) paths for a destination
      The next hop router must be closer to the destination than the local router.
      The alternative path metric must be within the specified variance of the best local metric

If these conditions are met, then the route is considered and can be added to the routing table.
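
Unequal-cost load balancing is controlled with the variance command; in this sketch the AS number, network, and variance multiplier are assumptions:

router igrp 100
 network 10.0.0.0
 ! install alternative paths whose metric is within 2 times the best metric
 variance 2
 ! allow up to four parallel paths (4 is also the default)
 maximum-paths 4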

Dual lines of equal bandwidth may carry a single stream of traffic in a “round-robin” fashion; IGRP
provides automatic switchover to the second line if the first line goes down. Multiple paths can also be
used if the metrics for the paths are different. IGRP uses the following for stability:

Holddowns
Split-horizon
Poison reverse updates
Flash updates
Triggered updates do not arrive simultaneously to every node on the network. IGRP’s implementation
of poison reverse updates will be sent if a route metric has increased by a factor of 1.1 or more.

In addition to periodic updates, IGRP uses flash updates to speed up convergence of the routing
algorithm. Flash updates are sent as soon as a network topology change is noticed. IGRP
has a number of timers and variables that contain time intervals:

        Update timer (90 secs)
        Invalid timer (3x update timer = 270 secs)
        Hold-time period (3x update timer + 10 = 280 secs)
        Flush timer (7x update timer = 630 secs)


IGRP advertises 3 types of routes:

        Interior – routes between subnets in the network attached to a router
        System – routes to networks within AS; don’t include subnet information
        Exterior – outside AS and considered when identifying gateway of last resort

Cisco IOS chooses a gateway of last resort from the list of exterior routes that IGRP provides. If an AS
has more than one connection to an external network, then different routers can choose a different
exterior router as the gateway of last resort.

If you are migrating from RIP to IGRP, then RIP has to be disabled. To monitor IGRP: debug ip igrp
events; to turn off: undebug ip igrp events.
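
A RIP-to-IGRP migration might therefore look like this sketch (AS number and network assumed):

! remove the RIP process, then enable IGRP for AS 100
no router rip
router igrp 100
 network 10.0.0.0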

RIP v2

Supported by Cisco IOS 11.1 and above; discovers and maintains routes the same as RIP v1; the
difference is in the route information packet which has the following additions:

      Inclusion of a netmask field – allows implementation of VLSM and CIDR, which optimizes the use
       of NIC-assigned IP addresses
      Authentication field – minimizes the risk of incorrect routing information injection
           o plain text – ensures misconfigured hosts do not participate in routing
           o Cisco message digest (MD5 encryption)
                   verifies integrity
                   authenticates origin
                   checks for timeliness
      Next hop field – if the next hop to a destination is on the same subnet, then the router will
       advertise it.

Allows multicast updates – single packets are copied to the network and sent to a specific subset of
network addresses. This restricts information sent to only RIP v2 routers rather than to all devices on
the network. RIP v1 cannot receive multicast updates.

RIP v1 routers ignore the netmask field in RIP v2 packets, which can cause misinterpretation of the
destination as a host rather than a network.
Authentication allows configuration of routers to only accept routing information updates when the
packet has the correct key or password; each neighboring router must share the same key for routing to
work properly. Authentication is enabled on an interface-by-interface basis. RIP v1 handles these
packets normally by ignoring the authentication field.

When you implement RIP v2 on an internetwork with disconnected subnets, you have to disable
automatic route summarization, which is enabled by default. To disable automatic route
summarization type: no auto-summary. This will ensure that only specific routes are exchanged
between domains. The netmask information field is used to forward explicit routes to their appropriate
domains. RIP v2 can be configured globally and on an interface-by-interface basis, which helps in
migrating from RIP v1. If no version is specified, then the router will accept both versions of packets,
but send only v1 packets.
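
Globally, RIP v2 with automatic summarization disabled might be configured as in this sketch (the network number is assumed):

router rip
 version 2
 network 10.0.0.0
 ! needed when subnets of the same major network are disconnected
 no auto-summary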

To configure for a specific interface you enter interface configuration mode from config-router mode by
typing: interface ethernet0; then enter the ip address by typing: ip address A.B.C.D mask. Examples:

ip rip send version 1 2
ip rip receive version 2

A key string is the password used by neighboring routers to authenticate packets received; key strings are
case sensitive. A key refers to a key string and its associated accept and send time frames. A key chain is a
collection of keys that can be used on an interface and must be configured before any authentication is
performed on an interface. Packets are sent for each active key in a key chain. To enter a key chain
called trees type: key chain trees; then configure a key by typing: key 1; then give it a key string by
typing: key-string maple. If you do not configure accept and send time frames, then packets authenticated as
maple will be sent and accepted indefinitely. To configure accept and send time frames type:

send-lifetime HH:MM:SS Oct 1 1996 HH:MM:SS Dec 31 2000
accept-lifetime HH:MM:SS Oct 1 1996 HH:MM:SS Dec 31 2000

To define a second key string type: key 2, then give it a key string by typing: key-string willow. Once you
have configured one or more key chains, then you can enable authentication on one or more interfaces.
To do this type: interface ethernet0; then set the primary ip address: ip address A.B.C.D mask; then type:
ip rip authentication key-chain trees; then set the authentication protocol type by typing: ip rip
authentication mode md5.
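
Pulling those steps together into one sketch (the interface, address, key names, and lifetimes are illustrative assumptions):

key chain trees
 key 1
  key-string maple
  accept-lifetime 00:00:00 Oct 1 1996 23:59:59 Dec 31 2000
  send-lifetime 00:00:00 Oct 1 1996 23:59:59 Dec 31 2000
!
interface ethernet0
 ip address 10.1.1.1 255.255.255.0
 ip rip authentication key-chain trees
 ip rip authentication mode md5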

Show key chain verifies authentication key information.
The show run display includes each configured key chain and its associated keys and shows the protocols
currently running on the router.

To enter global configuration mode you can type configure terminal, which can be abbreviated to config t
or conf t.

IPX Overview and Addressing
Netware is derived from Xerox Network Systems (XNS) and operates on the same physical channel as
TCP/IP, DECnet, and AppleTalk; client/server architecture in which remote access is transparent to the
user; accomplished by the use of RPC’s.
IPX is a connectionless datagram protocol similar to IP; Netware corresponds loosely to the OSI model
and specifies five upper layer protocols:

      Netware Core Protocol (NCP) corresponds roughly to the Application, Presentation, and
       Transport layers.
      Network Basic Input/Output System (NetBIOS) emulator corresponds roughly to the Session
       and Transport layers, although it is usually known as a Session Layer protocol
      Service Advertising Protocol (SAP) corresponds to the Transport and Network layers as well as
       upper layers.
      Sequenced Packet Exchange (SPX) is a Transport Layer protocol.
      IPX, RIP and Novell Link Services Protocol (NLSP) are all Network Layer protocols

At the lower layers are the media access control (MAC) protocols on which IPX runs:

      Ethernet/802.3                                          ARCnet
      Token Ring/802.5                                        ISDN
      Fiber Dist Data Interface (FDDI)                        PPP
      ATM

Higher level protocols rely on the MAC protocols and IPX to handle lower level communications, ie,
node addressing.

IPX is a connectionless datagram protocol (does not require an ACK) that operates at the Network and
Transport layers. It defines internetwork and internode addressing and routes packets in an IPX
internetwork. To communicate with different devices on the network, IPX routes the information thru
intermediate networks; it uses the physical device address (MAC) and the socket or service address to
address the packet to its final destination.

A datagram is a logical grouping of information sent as a Network Layer unit over transmission
medium without prior establishment of a virtual circuit.

Netware uses the services of RIP (distance-vector) and NLSP (link-state) to route packets in an
internetwork.

RIP uses IPX and the MAC protocols for its transport; the IPX RIP metrics are ticks (1/18 of a second,
similar to delay) and hop count. The hop count becomes the tie breaker when two routes have the same
tick count. Routing updates are sent at 60-second intervals, which is a high frequency of updates and can
cause excessive overhead traffic; IPX uses the proprietary protocol SAP to advertise network services
(every 60 secs). SAP allows nodes to advertise services and addresses, ie, fileservers and print servers.
Services are identified by a hexadecimal number called the SAP identifier, ie, fileserver=4,
printserver=7. Workstations use SAP with a service query packet to determine which services are
available.

Novell introduced a link-state protocol called NLSP which is intended to replace RIP and SAP;
derived from ISO’s IS-IS protocol. NLSP routers exchange information to maintain a logical map of
the internetwork; this information includes:
      Connectivity states                                    Max trans unit size (MTU)
      Path costs                                             Media types

SPX is a reliable connection-oriented protocol that supplements the datagram service provided by IPX;
data transfer must occur via a virtual circuit; provides packet verification lacking in IPX routing.

Netware Core Protocol (NCP) upper layer protocol is used by IPX to provide client-to-server
applications and connections. The services provided by NCP are:

      File access                                            Accounting
      Printer access                                         Security
      Name management                                        File synchronization

IPX supports NetBIOS Session Layer specification from IBM and Microsoft; Netware emulation
allows programs written for NetBIOS to run with Netware.

Ethernet and 802.3 specify similar technologies; they are both CSMA/CD LANs; CSMA/CD
workstations know when data needs to be re-transmitted. Today the term Ethernet is applied to
CSMA/CD LANs including 802.3. Ethernet provides services that correspond to the physical and
data-link layers; the Data-Link Layer corresponds to 2 sublayers: logical link control (LLC) and MAC;
802.3 and 802.5 correspond to the Physical Layer and the MAC portion of the Data-Link Layer, but do
not include the LLC protocol.

FDDI also uses the token passing media access method; its structure is based on 2 rings: one for data and
the other for backup and other services. The FDDI ANSI standard consists of 4 parts:

      Physical Media Dependent (PMD) – Physical Layer
      Physical Layer (PHY) – Physical Layer
      Media Access Control (MAC) – lower half of Data-Link Layer
      Station Mngmt – fixes faults on the ring, gets data on and off, and generates data.

Attached Resource Computer Network (ARCnet) is a simple network that supports UTP, coaxial, and
FX cable types. It combines the token-passing element of Token Ring with the bus and star topologies;
ARCnet functions map evenly to the Physical Layer and Data-Link Layer.

Point-to-Point (PPP) also maps exactly to the data-link and physical layers; provides router-to-router
and host-to-host connections. At the Data-Link Layer it uses the LLC protocol to manage point-to-point
connections; it also uses the High-Level Data Link Control (HDLC) protocol to encapsulate datagrams
over PPP links.


IPX Addressing

Made up of network and node elements; in text format these elements are separated by a period
(network.node); addresses in IPX packets are represented by a sequence of 80 bits. The network
number identifies a physical network, is assigned by the network admin, contains 32 of the 80 bits, and is
represented by 8 hexadecimal digits. Cisco IOS does not require all 8 hexadecimal digits; all leading
zeros can be omitted. The node number is 6 bytes long, represented by a dotted triplet of 4-digit
hexadecimal numbers, ie, 0000.0cc0.23fe, and is derived from the MAC address. IPX’s use of MAC
addresses as node numbers eliminates the need for an address-resolution process like IP’s ARP.

Encapsulation is the process of packaging upper layer protocol information and data into a frame. A
frame is an information unit whose source and destination are Data-Link Layer entities. Encapsulation
uses the frame formats of the MAC protocols; most MAC protocols are from the 802.X series and specify
particular header types used in IPX encapsulation.

There are several types of encapsulation formats used by IPX routing:

Ethernet
Token ring
FDDI
PPP

The most common Ethernet encapsulation formats are Ethernet v2 (Ethernet_II) and 802.3. Ethernet
v2 can handle both TCP/IP and IPX traffic; it includes the standard v2 header of destination and source
address fields, followed by an EtherType number field.

802.3 is ethernet’s raw format, which means that an 802.3 header is used alone without the 802.2
frame information. For each Netware packet format, Cisco has a corresponding key word:

Ethernet v2 – ARPA
802.3 – novell-ether
802.2 – SAP service access point

A SAP is also a logical interface between 2 adjacent protocol layers.

Subnetwork Access Protocol (SNAP) extends the 802.2 headers, which Cisco calls snap; both snap
and sap include the LLC protocol, which handles error control, flow control, framing, and MAC sublayer
addressing.

FDDI also has a raw frame format, which has the keyword novell-fddi; the FDDI SAP format
has the standard FDDI MAC header followed by an 802.2 LLC header.

The FDDI SNAP format consists of an FDDI MAC header followed by an 802.2 SNAP LLC header.

Token-ring has no raw format, but has SAP and snap formats; the SAP format is the standard 802.5
MAC header followed by an 802.2 LLC header; the SNAP format consists of an 802.5 MAC header
followed by a SNAP LLC header.

On serial interfaces, IPX uses PPP’s HDLC encapsulation; the PPP/HDLC frame format has the
following fields:

Flag – marks start or end of frame
Address – broadcast address
Control – similar to LLC
Protocol – encapsulates protocol name
Datagram – datagram contained
Frame check sequence – error control.

The IPX packet is similar to the XNS packet and consists of 2 parts:

IPX header – 30-byte minimum; contains various packet fields
Data – includes the header of a higher level protocol.

The IPX packet has several fields:
Checksum – a 16-bit field that is set to all 1’s since it is not used
Packet length – 16 bits; specifies the length in bytes of the IPX datagram.

IPX packets can be any size up to 65,535 bytes depending on the media being used and its
corresponding maximum transmission unit (MTU) which is the maximum size packet that the interface
can handle, ie, Ethernet MTU = 1500 bytes.

The transport control field is 8 bits long and indicates the number of routers that the packet has passed
thru; on a RIP-based IPX router, an IPX packet whose transport control field has reached 16 is
discarded. Sending nodes always set this field to zero; routers that receive packets that require
additional routing increment this field by one.

The packet type field is also an 8-bit field that specifies which upper layer protocol, ie, NCP, SAP,
SPX, NetBIOS, or RIP, is to receive the packet information, ie, 5=SPX, 17=NCP.

The destination network, destination node (physical address), and destination socket fields specify
relevant destination information. When destination network field is set to zero, then the nodes are
assumed to be on the same network segment. The destination socket field is the socket address of the
packet’s destination process. A socket is a software structure operating as a common end point within
a network device. There are also source network, source node, and source socket fields which perform
opposite but equal functions.


The upper layer data field contains headers of upper layer protocols, ie, NCP or SPX; the upper layer
data for these upper layer protocols are contained in the data portion of the IPX packet.

When you assign IPX network numbers to router interfaces, you enable IPX routing on those
interfaces. To enable IPX routing, first enter the global command: ipx routing; then assign a network to an
interface with the interface command: ipx network [network-number] encapsulation [encapsulation-type],
ie, ipx network 1000 encapsulation sap. A single interface can support single and multiple logical networks.

To configure an interface for multiple networks, you must specify a different encapsulation type for
each network, so that the router will know which packets are destined for which network. You can
configure all 4 ethernet encapsulation types; to assign multiple networks to a single interface, use
subinterfaces, which allow several logical interfaces to be associated with a single network or physical
interface. In the example interface ethernet 0.1, 0 refers to the port, connector, or interface card; 1
refers to the subinterface number and can be 1-4294967293.
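
A sketch of the subinterface approach (network numbers and encapsulations are assumed for illustration):

ipx routing
!
interface ethernet 0
 ipx network 1000 encapsulation novell-ether
!
! subinterface carrying a second logical network with a different frame type
interface ethernet 0.1
 ipx network 2000 encapsulation sap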

Another way to configure an interface for multiple networks is to use a primary and secondary network
number, but this will not be supported by future Cisco IOS software releases.

Netware provides file, print, message, application, and database services; it is a server-centric architecture
in which remote devices appear local to the workstation. All servers and routers keep a complete list of
all services on the network. Server advertisements synchronize the list of available services; finding,
adding, and removing services on an IPX network is dynamic due to SAP broadcasts. An IPX client
broadcasts a get nearest server (GNS) request when it requires a specific service; responses to a GNS come
from local servers, local routers, and remote servers. A GNS broadcast is issued by a client using IPX
SAP. A GNS packet requests a specific service from a server; the nearest server offering the service
responds with another SAP, and the GNS response allocates a server to the client, allowing login and use of
server resources. A Cisco router can respond to a GNS request, but does not perform the nearest server
function; routers act like servers by building a SAP table. If there are no servers on the local network,
then a router can be configured to forward the GNS SAP requests; routers do not forward individual
SAP broadcasts. SAP tables created by routers store advertised service information and are
advertised at regular intervals (60 secs). When a router chooses a server for a GNS response, the tick
value is used to decide, with the hop count being the tie breaker.
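
The SAP table a router builds, and the tick and hop information it uses when answering GNS requests, can be inspected with the following (a sketch of common show commands, not an exhaustive list):

show ipx servers – displays the SAP table of advertised services
show ipx route – displays the IPX routing table with ticks and hop counts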

Transport Layer Protocols

These protocols handle conversations between end nodes; they provide end-to-end flow control, and just
the sender and receiver are involved in recovering bad packets. SPX is the most common Transport Layer
protocol used and was derived from the Sequenced Packet Protocol from XNS. SPX uses segment
sequencing to reorder segments once they have arrived at their destination; segment sequence numbers
are unique. SPX is
a reliable connection-oriented protocol that supplements datagram services by the layer 3 protocols
like IPX. SPX monitors network transmissions to ensure delivery; verifies, acknowledges, and
requests verification which is accomplished by adding a checksum to each segment. SPX can track
data consisting of a series of separate packets; if no acknowledgement is received in a given amount of
time, then SPX will retransmit the entire series. If a specific number of re-transmissions fail, then SPX
will assume a failed connection and warn the operator.

Cisco supports SPX spoofing (a host will treat a router interface as if it were up and supporting a
session). The router spoofs, or replies to, keepalives, which allows clients and servers to create their
own “watchdog keepalive” packets at a user-defined rate. A request for legitimate information triggers
the dial-up connection, resulting in reduced WAN costs. Netware offers IP support in the form of
UDP/IP encapsulation of other Netware packets. IPX/SPX datagrams are encapsulated inside UDP/IP
headers for transport across an IP network.

Port addresses are pointers to a local process rather than a virtual circuit. UDP usually transfers data
faster than TCP.

Upper Layer Protocols

The Netware shell or Netware DOS requestor consists of VLMs (modular executables with a set of
logically grouped features) and runs on clients and workstations. When the shell intercepts a network
request it passes it to a lower layer.

RPC is a Session Layer protocol that allows local applications to call functions on other networks.
Local function calls are passed to software usually called a re-director, shell, or virtual file system
interface. This software determines if a call can be satisfied locally or requires network access. RPC
servers are specialized service providers that can handle many RPCs and store many files. An RPC
server executes the function calls and puts the results in a reply packet which is returned to the
originating OS or client.

Netware Core Protocol (NCP) is a series of server routines designed to satisfy application requests
coming from the Netware shell. An NCP exists for every service a workstation might request from a
server:

      File access                                              Accounting
      Printer access                                           Security
      Name management                                          File synchronization

NCP creates and destroys service connections, manipulates directories and files, controls printing, and
opens semaphores (a flag that is used to coordinate access to global data in a multi-process
environment). The Netware shell uses NCP function calls to provide transparent file and printer access.



Application Layer Protocols

Netware Message Handling Service (MHS) is a core Netware service that provides email transport.
There are three types of MHS: Basic MHS, Global MHS, and Remote MHS. Basic MHS is for a one-
server environment, supports full-name user addressing, imports users from the Bindery, supports
third party applications, and is fully compatible with Global MHS. Global MHS is for a multi-server
environment on a network-wide system. Remote MHS is for laptops with asynchronous network access.

Users create messages with an MHS-compatible email application that complies with the Novell Standard
Message Format (SMF) interface and submits them to MHS, which delivers the email to the recipient’s
mailbox.

Btrieve is an implementation of the binary tree (btree) database access mechanism. Btrieve is a key-
indexed record management system for high performance data handling and improved programming
productivity. It allows applications to retrieve, insert, update, and delete records either by key value or by
sequential or random access methods. Btrieve is server based; it automatically creates and maintains
file indexes as records are inserted, updated, or deleted; supports files up to 4 gigabytes; offers
consistent file structures and management routines; allows specification of memory to reserve for I/O
cache buffers; and offers concurrent access to records in a multi-user environment while ensuring file
integrity.

Netware supports IBM logical unit (LU) 6.2 network addressable units (NAUs). LU 6.2 allows peer-
to-peer connectivity across IBM communication environments; allows Netware nodes to exchange
information across an IBM network. Netware packets are encapsulated in LU 6.2 packets for transport
across an IBM network.

Netware supports NetBIOS Session Layer interface specification from IBM and Microsoft. Netware’s
NetBIOS emulation software allows workstations to run applications that support NetBIOS calls.


AppleTalk Overview and Addressing
AppleTalk is a proprietary distributed client/server network system which enables users to share
network resources; has a transparent interface with little need for interaction; the operations of the
AppleTalk protocols are invisible to the end user.

Distributed client/server systems enjoy an economic advantage over peer-to-peer systems since important
resources can be located in a few rather than many places; AppleTalk provides the following:

Peer-to-peer networking – allows a router to propagate client lookup services, ensuring all available
services will be located by the user.

Allows devices to dynamically acquire addresses
Uses the Routing Table Maintenance Protocol (RTMP) as its routing protocol

There are 2 versions of AppleTalk:

AppleTalk phase 1 – developed in the early 1980s and designed for local workgroups
AppleTalk phase 2 – improved the routing capabilities and designed for larger networks

AppleTalk specifies a series of communication protocols that make up Apple’s network
architecture and correspond almost exactly to the OSI model.

Layer one of the AppleTalk protocol stack is the Physical Layer and is responsible for handling the
network hardware; establishes and maintains physical link between systems; specifies the type of
cabling, electrical signals, and mechanical connections. AppleTalk can be used with the following
standard Physical Layer interfaces: 802.3, 802.5, and FDDI. Bus architecture networks can be used on
10Base2, 10BaseT, or 100BaseT; 802.5 specifies token passing at 4, 16, or 100Mbps over STP cabling;
FDDI specifies 100Mbps using fiber cabling up to 2Km.

Layer two of the AppleTalk protocol stack is the Data-Link Layer which interfaces with the network
hardware.

AppleTalk over Ethernet – EtherTalk
AppleTalk over Token ring – TokenTalk
AppleTalk over FDDI – FDDITalk

These media-access implementations allow AppleTalk to operate on the topologies of the standard Physical
Layer interfaces. EtherTalk, TokenTalk, and FDDITalk networks are organized exactly the same as
the associated Physical Layer interfaces that they represent, ie, same speed and network nodes.

LocalTalk is Apple’s own proprietary media-access system at the Data-Link Layer and is cost effective;
LocalTalk hardware is typically built in to Apple products and is connected using twisted pair cabling;
based on a contention-access CSMA/CA bus topology that transmits data over STP at 230.4Kbps.
Network segments can span 300 meters and support a maximum of 32 nodes. Cisco does not support
LocalTalk interfaces.

The Link Access Protocol (LAP) handles interaction between the AppleTalk protocols and the associated
media’s data-link interface; it is needed because upper layer protocols do not recognize the standard
interface’s hardware addresses. For data to be transmitted across the Physical Layer, the network
addresses have to be mapped to hardware addresses. LAP uses the AppleTalk ARP (AARP) to map
network addresses to hardware (MAC) addresses.

Data at the upper layer protocols is divided into packets at the Network Layer; LAP receives a
Datagram Delivery Protocol (DDP) packet from the Network Layer that requires transmission; DDP is
the Network Layer AppleTalk protocol. LAP finds the network address in the DDP packet header and
requests that AARP find the corresponding hardware address. When AARP returns the hardware address,
LAP creates a new header containing the hardware address in the destination field and appends it to the
DDP header; examples of headers that can be applied at this level are 802.3, 802.5, SNAP, and the
802.2 LLC headers. A frame is a logical grouping of information sent as a Data-Link Layer unit over a
transmission medium. Each type of media-access implementation has its own link layer protocol:

ELAP – EtherTalk
TLAP – TokenTalk
FLAP – FDDITalk
LLAP – LocalTalk

Layer three of the AppleTalk protocol stack is the Network Layer and is responsible for ensuring that
data reaches its destination. Datagram Delivery Protocol (DDP) is the primary routing protocol which
transmits and receives packets from the Data-Link Layer.

When DDP receives data from devices it creates a DDP header with the destination network address and
passes the packet to the Data-Link Layer protocol. When DDP receives frames from the Data-Link
Layer it routes the packet to the destination device by examining the DDP header for a network
address.

AARP is another Network Layer protocol that associates AppleTalk network addresses with hardware
addresses; it uses an Address Mapping Table (AMT) to simplify and speed up the process. If there is
no entry in the AMT for a hardware address, then AARP broadcasts to find the hardware address,
associates it with a network address, and adds the entry to the AMT.

Layer four of the AppleTalk protocol stack is the Transport Layer and is responsible for the reliable
delivery of packets. The Transport Layer contains five protocols:

AppleTalk Echo Protocol (AEP)
AppleTalk Transaction Protocol (ATP)
Name Binding Protocol (NBP)
AppleTalk Update-based Routing Protocol (AURP)
Routing Table Maintenance Protocol (RTMP)

RTMP establishes and maintains AppleTalk routing tables and can be used to learn router’s addresses.
Defines what information is contained in each routing table and how information is exchanged
between routers for routing table maintenance.

ATP ensures DDP packets are delivered to the destination address without packet loss by means of
transaction requests and responses. A transaction request is sent every time a packet is sent to a device,
which performs the requested action and sends a transaction response.
Since AppleTalk uses names instead of numeric network addresses, it uses NBP to translate AppleTalk
device names into numeric network addresses which enables routing and communication to occur
between devices.

Before a node sends traffic across a network it checks to see if destination address is reachable by
using AEP, which is similar to TCP/IP ping.

AppleTalk is mainly used for LANs; however, AURP allows connection between two discontiguous
AppleTalk networks thru a foreign network, ie, TCP/IP to form an AppleTalk WAN. An AURP tunnel
encapsulates AppleTalk traffic into the header of the foreign protocol packet. AURP allows
maintenance of routing tables for the entire AppleTalk WAN by exchanging routing information
between exterior routers that are connected by an AURP tunnel.

Layer five of the AppleTalk protocol stack is the Session Layer which establishes and controls
conversations (sessions) between devices. The Session Layer contains four protocols:

Printer Access Protocol (PAP)
AppleTalk Session Protocol (ASP)
Zone Information Protocol (ZIP)
AppleTalk Data Stream Protocol (ADSP)

ASP establishes and maintains sessions between client and servers; allows multiple client sessions to a
single server. Workstation sends a request to ASP to establish a session with a server.

PAP requests use NBP to learn the network address of the requested server; like ASP, PAP establishes
and maintains connections between clients and servers; however, PAP is used for client connections to
print servers.

ADSP is responsible for reliable packet delivery; provides full-duplex byte-stream delivery which ATP
does not provide. ADSP guarantees that data is correctly sequenced and that packets are not duplicated,
and controls the rate of data sent with flow control.

ZIP coordinates NBP functions and maintains network number-to-zone name mappings. ZIP is
primarily used by AppleTalk routers; network devices use ZIP to assign their zones and acquire
internetwork zone information at bootup. NBP also uses ZIP to determine which networks belong to
which zones. ZIP maintains a Zone Information Table (ZIT) at each router that maps specific network
numbers to one or more zone names. For every network in an AppleTalk internetwork, the ZIT contains a
network number-to-zone name mapping. ZIP makes use of RTMP routing tables to keep up with
topology changes to the internetwork.

Layer six of the AppleTalk protocol stack is the Presentation Layer and is responsible for handling data
files and formats. AppleTalk Filing Protocol (AFP) is used to provide remote file access.

Layer seven of the AppleTalk protocol stack is the Application Layer, which defines which protocols
are used to provide which services to application processes, ie, file transfer; AppleTalk has no specific
protocols at this layer, since the services are carried out at the lower layers.

Network Entities
There are four basic components to an AppleTalk network:

Nodes
Sockets
Networks
Zones

A node is any addressable network device; each node belongs to a single network and a specific zone.
Each node has addressable locations called sockets (logical points where upper layer software
processes {socket clients} and DDP interact). Socket clients own one or more sockets used to send
and receive datagrams. Messages for each individual program arrive thru individual sockets; sockets
can be assigned dynamically or statically. Static sockets are reserved for certain protocols and processes;
dynamic sockets are assigned by DDP to socket clients as needed. An AppleTalk node can support
254 sockets.

Network Visible Entities (NVEs)

An AppleTalk network consists of a single logical cable (can be a single cable or multiple cables
connected by bridges and routers) and multiple nodes. AppleTalk networks have two categories:

A nonextended network is a physical network segment with one network address where each node has a
unique number. Phase 1 only supported networks of this type, and it is no longer frequently used.

An extended network is a physical network segment with multiple network numbers called a cable-range,
which is represented by a single network number (3-3) or multiple, consecutive network
numbers (3-6). Nodes can have the same node number, so long as they have different cable-range
numbers, ie, cable-range 3-4 can have two nodes with a node address of 20 (3.20 and 4.20). AppleTalk
phase 2 can support both nonextended and extended network types.

An AppleTalk zone is a logical grouping of nodes or networks, which do not have to be physically
contiguous. Zones are configured when you configure the network and are given unique names. The
main reason for the creation of zones is to reduce broadcast traffic by reducing the search time by nodes
for resources; requests for services are only sent to nodes in a particular zone. The Chooser is the most
common way of interfacing with an AppleTalk network; the Chooser sends an NBP request to the nearest
router for a list of all zones that contain the requested service (ie, LW for LaserWriter). Since entities are
represented by names, applications and nodes have to use NBP to discover addresses of services and
nodes. The nearest router returns a list of zones; with the selection of a zone, the Chooser sends another
NBP request to the nearest router in that zone to find all occurrences of the requested service-type in that
zone. NBP requests use the following character string for searching: =:service type@zone name.

Double-clicking an item in the Chooser defines the service type, and selecting a zone defines the zone
name. If a zone is made up of multiple cable-ranges, then the router sends the request to each cable-range
in the selected zone; this request comes in the form of a one-to-many multicast, which goes to all selected
service-types on the cable-range. Available services which meet the specific service type respond with
an NBP reply that contains the name, type of service, and AppleTalk network address of the service
provided at the end-node. Addresses are for applications; names are for people. The Chooser displays
NBP reply packets as a list of the specified service type; once a specific service type has been chosen, a
logical link for this service is retained. Service and zone information is maintained at the router.
Nonextended networks can only support one zone, so all nodes share the same zone name.

Extended networks can have multiple zones on a single segment; nodes belong to only one zone. Each
zone list has a default zone, which nodes belong to at bootup until another zone is chosen. Zone names
are selected from the network control panel in the Chooser. The zone in which a node is registered is the
zone in which its services, if any, are advertised. In order for each router to have accurate information,
all routers must agree on the default zone and zone names.

AppleTalk Addressing

AppleTalk network addresses consist of three elements:

16-bit network number
8-bit node number
8-bit socket number

Written as decimal separated by periods, ie, 10.1.50 equals network 10, node 1, socket 50 and is also
denoted 10.1, socket 50. AppleTalk has two network numbering systems: nonextended and extended.

All nodes in a nonextended network have the same network address; nonextended addressing is used on
LocalTalk, ARCnet, and EtherTalk 1.0. Because each node is identified by an 8-bit number, there can
only be (as with IP) 254 addresses (0 and 255 are reserved by AppleTalk). These 254 addresses can
consist of 127 servers and 127 hosts.

Extended network addressing can assign a range of cable numbers to a single network; each network
can consist of 253 nodes (0, 254, and 255 are reserved by AppleTalk). These 253 hosts can consist of
any combination of servers and hosts. Each node uses the full 24-bit combination of network number
and node number as a unique identifier. There is a theoretical limit of more than 16 million nodes.

AppleTalk nodes are assigned their addresses dynamically when they bootup; at first it receives a
provisional network layer address selected from the reserved, startup range (65280-65534) and then the
node address is chosen dynamically. This provisional address allows communication with a router for
the node to acquire a valid network number; the node uses ZIP to locate a router on the same network
with a get cable-range request. A network address is selected from the cable-range supplied by the
router and node address is randomly selected. AARP is used to verify the uniqueness of this node
address; if no node responds to the AARP probe within a specific amount of time, then the node assumes
that address. If a node responds that it already has this address, then the process is repeated until a unique
network address is obtained. Nodes store addresses in memory, which allows them to be re-used after
bootup, unless the address has been taken by another node. Routers also have node addresses, which
can be acquired dynamically in the same way as other nodes.

A router interface needs to be assigned an address and one or more zones before it is operational.
The interface can be configured for nonextended or extended routing either statically or dynamically. To
manually configure an interface for nonextended routing, the interface must be assigned an AppleTalk
address and a zone name. The following commands must occur in order:

appletalk routing
appletalk address 2.206
appletalk zone myzone
Only one zone name can be assigned to an interface configured for a nonextended network. If the
interface is misconfigured with regard to other existing routers, then it will not be operational. If there
are no other neighboring routers, then the interface is automatically operational.

The main difference in configuring interfaces for nonextended and extended networks is the use of the
command: appletalk cable-range instead of: appletalk address, ie, appletalk cable-range 3-4 [2.206]. The
cable-range portion specifies the start and end of the cable-range (0-65279). A cable-range of one network
number (ie, 3-3) makes extended networks compatible with nonextended networks. After cable-ranges or
addresses are configured, then and only then can you configure zone names. In an extended network you
can configure multiple zone names on an interface, which creates a zone list. The first configured zone is
the default zone and all the other zones make up the zone list; multiple zone names can be assigned to
a single cable-range. Routers always use the default zone when registering NBP names for interfaces.
The zone list is cleared when you issue any of the following commands: appletalk address, appletalk
cable-range, or appletalk zone.
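
An extended-network interface with a zone list might therefore be configured as in this sketch (the cable-range and zone names are assumed examples):

appletalk routing
interface ethernet 0
 appletalk cable-range 3-4
 ! the first zone entered becomes the default zone
 appletalk zone Sales
 appletalk zone Marketing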

The privileged mode command: show appletalk interface [type number] displays the status of an AppleTalk
interface. To check a node’s reachability and network connectivity, issue the following privileged
mode command: ping network-number.node-number. Ping uses AEP to verify connectivity and
measure round-trip times.

Cisco routers can route packets between nonextended and extended networks that exist on a single
cable, which is called transition routing. For this type of routing to occur the router must be in
transition mode; the router must have two interfaces (one configured for nonextended and the other for
extended) connected to the same physical cable. For the two interfaces to communicate they must be
the same interface type, ie, Ethernet, but must have different interface numbers, ie, 1 and 0. The
extended interface must be configured with a cable-range of 1 and a single zone. Once an interface
has been assigned an address or cable-range and a zone, configuration is complete.

If a nonextended or extended interface is connected to a network with at least one operational
AppleTalk router, then this interface can be dynamically configured. The interface must be placed
into discovery mode. The operational routers are often called seed routers; the information obtained from
seed routers is the network number or cable-range and a zone name or list of zones. Discovery
mode is useful in changing zone names or adding new routers. If discovery mode is used, you only
need to make configuration changes on the seed router. For discovery mode to work, there must be at
least one operational router; discovery mode does not work over serial lines.

To activate discovery mode you configure the interface with a network number or cable-range and then
enter: appletalk discovery. You do not need to configure the zone name. You also do not need to
know the network number; you can acquire this number by placing the interface into discovery mode
by assigning an address of 0.0 for nonextended networks or a cable-range of 0-0 for extended
networks.
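
A discovery-mode sketch for an extended interface (this assumes at least one operational seed router on the segment):

interface ethernet 0
 ! cable-range 0-0 places the interface in discovery mode
 appletalk cable-range 0-0
 appletalk discovery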

Show appletalk zone displays information in the ZIT.
Show appletalk globals displays information and settings of the router’s global AppleTalk
configuration.
Debug appletalk routing displays output from the RTMP routines used to monitor the acquisition, aging,
and advertisement of routes; it can also report conflicting network numbers on the same network.

Implementing AppleTalk Routing
DDP is the primary Network Layer routing protocol that provides a best-effort connectionless
datagram service between AppleTalk sockets. It performs two functions: transmission of packets and
reception of packets. DDP receives data from socket clients, creates DDP header with appropriate
destination address and passes the packet to the Data-Link Layer. Receives frames from the Data-Link
Layer, examines the DDP header to find destination address, and routes packet to the destination
socket. There are two type DDP headers: short and long. A short header’s addresses source and
destination sockets by their 8-bit socket number and 8-bit node number; used in nonextended
addressing. A long header consists of an extended address of the 8-bit socket number, 8-bit node
number, and 16-bit network number used in extended addressing. DDP maintains the following
information in every AppleTalk node:

Cable-range of the local network
Network address of a router attached to the local network

DDP operates like most other routing protocols; packets are addressed at the source, passed to the Data-
Link Layer, and transmitted to the destination address. If the destination network number is within
the cable-range of the local network, then the packet is encapsulated in a DDP header and passed to the
Data-Link Layer for transmission to the destination node; if the destination network number is not on
the cable-range of the local network, then the packet is encapsulated into a DDP header and passed to
the Data-Link Layer for transmission to a router. As the packet is transmitted across the network,
intermediate routers use their routing tables to forward the packet to the destination network. Once the
packet reaches a router on the destination network, then the packet is transmitted to the destination
node.

AppleTalk uses RTMP to transmit routing information to neighboring routers; RTMP is also used to
establish and maintain routing tables in AppleTalk routers. RTMP is a Transport Layer protocol and is
based on RIP; like RIP, RTMP uses hop count as its metric for routing decisions. The RTMP routing
table contains an entry for each network a packet can reach; it has a maximum hop count of 15. The
RTMP routing table contains the following information about each destination network known:

A network cable-range of the destination network
Distance in hops to the destination network
Router port that leads to the destination network
Address of the next hop router

The current state of a routing table entry is either good, suspect, or bad. RTMP broadcasts its routing
information every 10 seconds to all connected networks, which ensures that each router contains the
most current and consistent information across the internetwork.

Routing tables are updated at a set interval: first the status of entries is changed from good to suspect;
then the router sends an RTMP packet to all routers within the routing table. If a response is not received
within a specific time, then the entry for the non-responding router is set to bad and removed from the
routing table; if a response is received, the entry for the responding router is set back to good and any
changes indicated by the response are added to the routing table. RTMP saves bandwidth by providing
split-horizon routing. The router does not advertise routes it learns from an interface back thru the
same interface.
AURP is an enhancement to RTMP and is also a Transport Layer protocol; provides AURP tunneling
(architecture designed to implement any standard point-to-point encapsulation scheme) thru a TCP/IP
network. AURP encapsulates AppleTalk packets into UDP headers; encapsulation is the wrapping of
data in a particular protocol header. AURP is a distance-vector, split-horizon routing protocol with a
maximum hop count of 15; an AURP tunnel counts as one hop. AURP has two components: exterior
routers and an AURP tunnel; exterior routers connect a local AppleTalk internetwork to an AURP
tunnel. Exterior routers convert AppleTalk data and routing information to AURP and perform
encapsulation and de-encapsulation of AppleTalk traffic. When exchanging routing information or
data thru an AURP tunnel, AppleTalk packets must be converted from RTMP to AURP.

An exterior router receives routing information or data packets; converts them to AURP packets;
encapsulates these packets into UDP headers and sends them into the tunnel or TCP/IP network, where they are
treated by the TCP/IP network as normal UDP traffic; the remote exterior router receives the UDP
packets, removes the UDP header information, and converts the AURP packets back into their
original format. If the converted AppleTalk packets contain routing information, then the receiving
exterior router updates its routing table; if this packet contains data for a node on the local network,
then it is sent out on the appropriate interface. Exterior routers function as a router in the local network
and as an end node on the TCP/IP network. When exterior routers first attach to an AURP tunnel, they
exchange routing information with other exterior routers; after initial exchange exterior routers only
send routing information when there are topology changes, which is known as update-based routing.
The relatively high bandwidth consumed by frequent RTMP broadcasts can severely curtail a backbone's
performance; most of the information in RTMP packets is redundant and contains no new
information.

An AURP tunnel functions as a single logical data link between remote AppleTalk internetworks; there can be
any number of physical nodes in the path between exterior routers; however, they are transparent to
AppleTalk networks. There are two kinds of AURP tunnels: a point-to-point tunnel that connects only
two exterior routers and a multipoint tunnel that connects three or more exterior routers of which there
are also two kinds: fully-connected multipoint and partially-connected multipoint.

A fully-connected multipoint tunnel is where all connected exterior routers are aware of each other and
can send packets to each other; the same number of routes should be reachable from each exterior
router. A partially-connected multipoint tunnel is one where not all connected exterior routers are
aware of each other and able to communicate with each other; this scenario can enhance security. The routing
tables on the different exterior routers have a different number of entries, which means that not all
networks connected to these exterior routers are reachable. This allows you to isolate a particular
network; this scenario can also be configured accidentally by entering the wrong information in the list
of peers that the router should communicate with.

Configuring AppleTalk for Routing

You need to specify the type of routing protocol to be used by the following command:

appletalk protocol rtmp; AURP and EIGRP are the other types of routing protocols that can be
configured on an AppleTalk interface. You do not need to specify RTMP, since it is the default.

show appletalk route displays the contents of the AppleTalk routing table.
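A minimal sketch pulling the configuration together (cable-range, zone name, and interface number are hypothetical; the appletalk protocol rtmp line is shown only for completeness, since RTMP is the default):

appletalk routing
interface ethernet 0
appletalk cable-range 100-105
appletalk zone Sales
appletalk protocol rtmp
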
Banyan VINES Configuration
Banyan Virtual Integrated Network Service (VINES) is a proprietary protocol family and is derived
from Xerox's XNS protocol; it implements a distributed NOS or client/server architecture; it uses 48-
bit network addressing subdivided into a 32-bit network portion and a 16-bit subnetwork portion. The
network number is better described as a server number; the network number of the server is derived
directly from a hardware module or key. The subnetwork portion of the address can be described as
the host number. VINES uses dynamic address assignment; when a client first boots up it broadcasts
a request for servers and chooses the first response. The client then requests a subnetwork (host)
address from the responding server, which responds with a host address comprised of a dynamic host
number and the server’s unique network number.

In a serverless VINES network the nearest router or access server responds to the client's request for a
host address; Cisco IOS generates a unique address for the client using the router's or access server's
address. A VINES fileserver must still be present somewhere on the network, so that the client can
connect to network services. VINES allows multiple virtual networks on a single wire; each network
is identified by its logical network number. Each logical network consists of a single server and a
group of clients.

RTP is used by VINES to distribute network topology information, and is VINES' own version of
RIP. All routing decisions are based on the delay metric, which can be statically assigned to each
interface by a network admin. If no static definition exists, then a default delay metric is assigned to
the interface based on the bandwidth or type of interface.

VINES messages are generated as follows:
clients send hellos, which are empty routing updates.
servers send hellos, redirects, and updates, which are sent to other neighbor servers to inform them of a
node's existence and whether it is a client or server; the following information is included in a server's update
packet:
        list of all known networks
        the costs for reaching these networks
routers send updates, which are periodic messages every 90 seconds.
Cisco also supports the more recent routing protocol Sequenced Routing Table Protocol (SRTP).
SRTP uses an update-based scheme for routers and servers to communicate routing changes.

The VINES protocol stack broadly corresponds to the OSI model; the two lower layers are
implemented with the following media-access protocols:

      HDLC                                                     Ethernet
      X.25                                                     Token Ring

VINES differs from the OSI model at layers three and four.

Layer three of the VINES protocol stack supports the following Network Layer protocols:

      VINES Internetwork Protocol (VIP)
      ARP and Sequenced ARP (SARP)
      Internet Control Protocol (ICP)
      RTP and SRTP

VIP is a datagram delivery protocol similar to IP and able to interoperate in a TCP/IP
environment; it provides end-to-end connectivity between internetworked nodes. It uses a 48-bit addressing
scheme, which can be described as a two-level tree, where service nodes form the root and client nodes
form the leaves. Service nodes provide address resolution, routing services, and dynamically assign
VIP addresses to client nodes. ARP, SARP, ICP, RTP, and SRTP are all encapsulated into VIP
headers for transmission across a network.

ARP and SARP are responsible for assigning network addresses to client nodes. A routing server
using ARP assigns only VINES internetwork addresses to clients; SARP makes the following
additional assignments: a sequence number and a routing metric. ARP entities are either address
resolution clients or address resolution services.

Non-sequenced ARP packets have an 8-byte header with the following fields:

      Packet type (2 bytes)                                    Network number (4 bytes)
      Subnetwork number (2 bytes)

Sequenced ARP (SARP) packets have a 14-byte header with the following fields:

      Version number (1 byte)                                  Sequence number (4 bytes)
      Packet type (1 byte)                                     Metric (2 bytes)
      VINES network address (6 bytes)

Both sequenced and non-sequenced ARP use four types of packets:

Query request – request for an ARP service
Service response – responds to a query request
Assignment request – request for a network address
Assignment response – used by ARP service to respond to an assignment request

ICP is used for exception handling and special routing cost information; it provides information about
Network Layer exceptions when the following conditions are met:

      VIP packet cannot be properly routed
      Error sub-field in the VIP packet header is enabled

The exception packet contains a field that indicates a particular exception by its error code. ICP also
provides special routing cost information with metric notification packets; the cost information is about
the final transmission used to reach a client node. ICP in service nodes generates a metric notification
packet when the following conditions are met:

      The metric sub-field in the VIP packet is enabled.
      The destination address in the VIP packet header specifies one of the service node's neighbors.

ICP also supports Network Layer echoing.
RTP distributes network topology information to support VIP; RTP routing update packets are
broadcast periodically by client and server nodes, which helps VINES servers find neighbor clients,
neighbor servers, and routers. SRTP uses sequence numbers to allow routers to determine if routing
table information received from neighbors is up to date. Non-sequenced RTP relies on suppression of
routing table updates to guarantee the integrity of routing table information. Routing table entries
consist of host/cost pairs. RTP maintains two routing tables: a network table and a neighbor table. The
network table has an entry for each known logical network; each network table entry contains: network
number, routing metric, and a pointer to the neighbor table entry that provides next-hop information for
the network. The neighbor table contains an entry for each neighboring service and client node.
Neighbor table entries include: network number, subnetwork number, media-access protocol used to
reach that node, LAN address, and neighbor metric.

RTP has four types of packets:

Routing updates – periodic messages that notify neighbors of a node's existence.
Routing requests – exchanged to learn network topology quickly.
Routing responses – contain topology information; used by service nodes to respond to routing
requests.
Routing redirects – provide better path information to nodes using inefficient paths.

The RTP header is 4 bytes long with the following 1-byte fields:

Operation type – indicates packet type.
Node type – indicates a service or non-service node.
Controller type – indicates presence of multi-buffer controller in the transmitting node.
Machine type – indicates a fast or slow processor in the transmitting node.

Both the controller type and machine type fields are used for pacing.

Clients learn of other routers by receiving redirect messages from their own server. If a Cisco router
detects the use of a sub-optimal path between two nodes, it sends a redirect message to nodes to
indicate the better path. Clients send periodic hello messages to indicate that they are still operating on
the network. Servers send periodic routing updates to other servers and routers signaling changes to
node addresses and network topology changes.

Layer four of the VINES protocol stack supports the following Transport Layer protocols:

      Internet Packet Control (IPC)                            Sequenced Packet Protocol (SPP)

Both are connection-oriented reliable transport mechanisms. IPC also supports unreliable datagram
service. SPP’s data stream service allows flow control between two processes; this data stream service
is an acknowledged virtual-circuit that supports transmission of messages of unlimited size. Each SPP
packet includes a sequence number used to order packets and to determine missing or duplicated
packets.

VINES uses the RPC model for communication between clients and servers; RPC is the technical
foundation of client/server computing. RPCs are procedure calls built by clients, executed by servers,
and the results are returned to the clients.
Layers five and six of the VINES protocol stack support NetRPC, which allows access to remote
services that is transparent to both the user and the application.

Layer seven of the VINES protocol stack provides file and print services; VINES implements
StreetTalk, which provides consistent naming service for the global VINES network at this layer.
VINES also provides an integrated applications development environment, which can be run under
different OSs, ie, DOS and UNIX; allows third parties to develop clients and services that run in the
VINES environment.

Addressing

The network number is a unique value assigned to each server; the network number identifies a VINES
logical network. For server nodes and routers the subnetwork or host number is always 1; client node
addresses can range from hexadecimal value 8001 to FFFE inclusive. Examples of VINES
hexadecimal network addresses are as follows:

3000577A:001
3000577A:1
274112:1
30004444:8001

For Cisco VINES, assignment of the network number is optional; Cisco VINES addresses are created
from the MAC addresses of 802.3, 802.5, and 802.2 interfaces. The lower 21 bits of these addresses are placed
behind Cisco's assigned hexadecimal block address of 300. The six hexadecimal digits at the right of the
MAC address supply these lower 21 bits. An example of a Cisco router address: 300158B4:1; Cisco
routers always take the subnetwork address value of a server, 0001.

VINES clients have no address at startup; when a client boots it broadcasts a request for servers. All
servers that hear the request will respond; the client chooses the first response it receives. The client
then requests a network address; the server responds with a dynamically assigned subnetwork
address, in hexadecimal, sequenced from 8001. Dynamically assigning addresses reduces the likelihood
of duplicate network addresses.

VIP Packet Structure

The first two fields in a VIP packet are checksum (2 bytes) and packet length (2 bytes), which
indicates the length of the entire VIP packet. The transport control field (1 byte) has a number of
possible sub-fields; the type of sub-fields depends on whether the packet is a broadcast or a non-broadcast.
For broadcast packets the sub-fields are as follows:

Class – specifies the type of node to receive the broadcast, which helps reduce traffic
Hop count – specifies the number of hops the packet has been thru

For non-broadcast packets the sub-fields are as follows:

Error – specifies whether ICP should send an exception notification to the packet source if the packet
is un-routable.
Metric – used by a transport entity to learn the cost of moving packets between service nodes and
neighbors.
Redirect – specifies whether a router should generate a redirect
Hop count – specifies the number of hops the packet has been thru

The protocol type field (1 byte) indicates the Network Layer or Transport Layer protocol for which the
metric or exception notification packet is destined.

The remaining fields for the VIP packet are as follows:

      Destination network number
      Destination subnetwork number
      Source network number
      Source subnetwork number

In a VINES network all servers with more than one interface are essentially routers. Clients always
choose their own server as the first-hop router, even if another server on the same cable can provide a
better route. Clients learn of other routers by receiving redirect messages from their own server; VINES
servers maintain routing tables that consist of host/cost pairs, cost is calculated by delay.

VIP uses RTP to help servers find neighbors, clients, other servers, and routers; clients use RTP to
periodically advertise their network layer and MAC addresses with hello packets.

Servers send periodic updates to other servers; Cisco routers send periodic routing updates. A packet
arriving at a VINES server can be intended for that server, destined for another server, or can be a
broadcast. If a packet is a broadcast, then the server checks to see if it came from the least-cost path. If
it has, then it is forwarded out on all interfaces except the one on which it was received; if the broadcast
did not come from the least-cost path, then the broadcast is discarded, which helps to reduce broadcast
storms.

Implementing Banyan VINES

To enable VINES routing type the command: vines routing [address|recompute]
The optional address parameter is used when there is no MAC address present in the router. The
optional recompute parameter forces address selection when two routers have the same address; it is used when
addresses are not hard coded.

To configure VINES on each interface type the commands: interface type.number
and then: vines metric [whole[fractional]]. The optional whole parameter is used to assign a specific
delay metric value to the interface; if no metric is specified, the Cisco IOS automatically chooses the
value based on bandwidth or interface type. Examples of standard default delay metrics (internal
formats in brackets):

      FDDI – 1 (0010)                                          T1 HDLC – 35 (0230)
      Ethernet – 2 (0020)                                      9600 Serial – 90 (05A0)

The optional fractional parameter can be used only when the whole parameter is specified.
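A minimal sketch of these commands (interface numbers are hypothetical; the metric values are the defaults listed above, shown here as if assigned explicitly):

vines routing
interface ethernet 0
vines metric 2
interface serial 0
vines metric 35
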
In a serverless environment the local router has to assign network addresses to clients; you can
configure a Cisco router to respond to ARP requests generated by clients at startup. Use the interface
configuration command: vines arp-enable [dynamic]. The optional dynamic parameter should be
included on segments with no servers. Use the interface configuration command: vines serverless [dynamic|broadcast];
it can only be used on segments without servers. It enables clients on serverless networks to find services
on other networks; instructs the router to propagate certain broadcast packets to the nearest server.
The optional dynamic parameter instructs the router to forward the broadcast packet to one server. The
optional broadcast parameter is used to instruct the server to forward the broadcast packet to all servers
available on all outgoing interfaces.
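A rough sketch of a serverless segment follows (interface number hypothetical; the assumption here is that both commands are applied to the clients' interface):

interface ethernet 0
vines metric 2
vines arp-enable dynamic
vines serverless dynamic

The dynamic keyword on vines serverless makes the router forward qualifying broadcast packets toward a single server, per the description above.
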
Show vines interface displays the status of the interfaces on which VINES routing is configured; you
can also include an optional interface type.number parameter. Interface status information includes:

Cisco VINES address of the router
Next available client address
Delay metric assigned to an interface (internal form, configured form, and seconds)
Routing update interval
List of neighbors – the number preceding the address indicates what version of the RTP protocol the
neighbor is running (0-RTP, 1-SRTP)
ARP processing – indicates if this interface will process ARP packets
Serverless network processing – indicates if interface is defined by vines serverless
Nodes present – indicates the number of VINES-speaking devices on the given physical segment

Show vines route [number|neighbor address] displays entries in the VINES routing table. The optional
number parameter is used to display routing table entries for the specified network. The optional
neighbor address parameter is used to display all routes in the VINES routing table that have the
specified neighbor as their first hop. The first line of the show vines route displays:

      Number of servers
      Number of routes
      Version number (incremented each time a server or route is added or deleted)
      Time until next update

Each entry in the VINES routing table consists of a number of fields:

Network – name or number of remote network
Neighbor – indicates the next hop to destination network
Flags – consist of three columns of sub-fields
       The first column indicates how route was learned
               C – connected
               D – learned from RTP redirect
               R – learned from RTP update
       The second column indicates the version of RTP
               0 – RTP
               1 - SRTP
       The third column can contain an asterisk which indicates that this route is used
       when forwarding a frame to that server.
Age - indicates the age of the routing table entry in seconds
Metric- gives distance to server
Uses – indicates the number of times this route has been used to forward a packet

Show vines neighbor [address|interface_type.number|server number] displays full contents of a
neighbor’s routing table and includes the following information:
Address – contains the address or name of the neighbor
Hardware address – the MAC address of the router interface thru which the neighbor can be reached
Type – indicates the MAC-level encapsulation used to communicate
Interface – indicates the type and number of the interface thru which the neighbor can be reached
Flag – consists of three sub-fields just like the routing table field
Age – indicates the age of neighbor table entry in seconds
Uses – the same as the routing table field

Use debug to check the contents of the periodic routing updates. Before initiating you should redirect
the output of the command to the telnet session terminal by issuing the command: terminal monitor; this
allows you to establish a telnet connection to a router and view the debug output without being connected
to the console port. To begin debugging VINES routing type the command: debug vines routing
verbose; all VINES RTP (VRTP) traffic activity for this router is then displayed. To turn off use: no
debug vines routing; no debug all; or undebug all.

DECnet Configuration
DECnet is not a network architecture in itself; it consists of a group of data communication products that
conform to DEC's Digital Network Architecture (DNA). DNA is a comprehensive layered network
architecture that supports a large set of DEC's proprietary protocols and industry standard protocols.

DECnet addressing contains 16 bits; 6 bits are used for area address and 10 bits are used for the node
address. DECnet changes the area.node address so that it becomes a software-format MAC address
that can be used on all interfaces. Address resolution is not needed, because logical addressing is
incorporated into the MAC address. DECnet uses a distance-vector routing protocol, so path determination is based on the
cost of all outgoing interfaces.

There have been many versions of DECnet released over the years; the first version was released in
1975 and allowed two directly connected PDP-11 minicomputers to communicate. Each DECnet
version is fully backward compatible. Today there are two versions of DECnet in wide use: DECnet
Phase IV and DECnet/OSI, sometimes referred to as DECnet Phase V.

DECnet Phase IV is the most widely implemented version of DECnet. It is based on DECnet IV DNA,
which is similar to the OSI model; it defines an eight layer model.
The four upper layers are made up of the following:

User layer – represents the network user interface and corresponds to the OSI Application Layer. This
layer supports users, services, and programs that interact with user applications.

Network Management Layer – represents the user interface to network management information and
corresponds to the OSI Application Layer. This layer interacts with all four lower layers. Digital
Network Information Control Exchange (NICE) is the most common network management protocol
used and is a command-response protocol. Commands are requests for action that are issued to
managed nodes or processes; responses in the form of action are returned by managed nodes or
processes. NICE performs many network management-related functions:

          Loading and dumping of remote systems
          Changing and examining network parameters
          Examining network counters and events
          Performing data-link and logical link tests

Some network management functions use the Maintenance Operations Protocol (MOP),
which is a collection of processes that can operate without the aid of the DNA layers
between the network management and data-link layers.

Network Application Layer – is an upper layer that provides various network applications, ie, remote
file access, virtual terminal access, etc. This layer corresponds to the OSI Presentation Layer and
Application Layer. DECnet Phase IV uses Data Access Protocol (DAP) at this layer; DAP supports
services that are used by the network management layer and by upper layer applications, ie, remote file
access and remote file transfer. Mail is another network Application Layer protocol, and CTERM allows remote
interactive terminal access.

Session Control Layer – is an upper layer that manages logical link connections between end nodes and
corresponds to the OSI Session Layer. DECnet Phase IV uses Session Control Protocol (SCP) to
perform the following:

      Requesting a logical link from an end device
      Receiving a logical link from an end device
      Accepting logical link requests
      Rejecting logical link requests
      Translating names to addresses
      Terminating logical links

End-to-End Communication Layer – handles flow control, segmentation, and reassembly functions;
corresponds to the OSI Transport Layer.

Routing Layer – performs routing and other functions; corresponds to the OSI Network Layer.

Data-Link Layer – manages physical network channels and corresponds to the OSI Data-Link Layer.

Physical Layer – manages hardware interfaces and determines the electrical and mechanical functions
of the physical media; corresponds to the OSI Physical Layer.

DECnet identifies two types of nodes: end nodes and routing nodes. Both can send and receive network
information, but only routing nodes can provide routing services.

DECnet routing decisions are based on cost, which is an arbitrary measure assigned by network
admins. Costs are based on hop count, bandwidth, etc. When network faults occur DECnet Phase IV
routing protocol uses cost values to recalculate best paths.
DECnet/OSI is based on the DECnet/OSI DNA architecture, which defines a layered model that
implements three protocol suites; it conforms exactly to the seven layer OSI model, and also supports
many standard protocols. The TCP/IP implementation of DECnet/OSI supports the lower layer
TCP/IP protocols, which allows transmission of DECnet traffic over TCP transport protocols.
DECnet/OSI supports three transport options:

Network Services Protocol (NSP) for DECnet Phase IV backward compatibility
Three of the standard OSI transport protocols: TP0, TP2, and TP4
TCP

RFC 1006 defines implementations of OSI Transport Layer protocols on top of TCP;
it defines TP0 on TCP. RFC 1006 Extensions defines TP2 on TCP.

DECnet/OSI implements the standard OSI Application Layer as well as standard Application Layer
processes:

      Common Management Information Protocol (CMIP)
      File Transfer Access and Management (FTAM)

This layer also supports all DECnet Phase IV protocols at the user and network management layers of
DNA, ie, NICE, MOP, etc.

All standard OSI Presentation Layer implementations are in DECnet/OSI; also supports all protocols
implemented by DECnet Phase IV at the network application layer of DNA of which DAP is the most
important.

All standard OSI Session Layer implementations are in DECnet/OSI; also supports all protocols
implemented by DECnet Phase IV at the session control layer of DNA of which SCP is the most
important.

Both DECnet Phase IV and DECnet/OSI support many media-access implementations at the
physical and data-link layers. At the Physical Layer they support 802.3, 802.5, and FDDI;
DECnet/OSI supports the additional implementations of frame-relay and X.21bis.

Both DECnet Phase IV and DECnet/OSI support the following protocols at the Data-Link Layer:

802.2 (LLC)
Link Access Procedure Balance (LAPB)
Frame-relay
HDLC
Digital Data Communication Message Protocol (DDCMP), which provides point-to-point and
multipoint connections at full or half duplex over synchronous or asynchronous channels, error
correction, sequencing, and management.

NSP is the only transport that DECnet Phase IV supports at the DNA end-to-end communication layer;
similar to TP4 because it offers a connection-oriented, flow control service with message
fragmentation and reassembly. It creates and terminates connections between nodes, manages error
control, and supports two sub-channels: one for data and the other for expedited data and flow control
information. There are two types of flow control messages: a simple start/stop and a more complex
scheme in which the receiver tells the sender how many messages it can accept.

DECnet Routing

Uses area.node pairs. All interfaces on a DECnet device have the same logical address. DECnet uses
6-bit area numbers, so the range is 1-63. Areas can span many routers and a single cable can support
many areas; however, all nodes in an area must be contiguous. DECnet uses 10-bit node numbers, so
range is 1-1023, which gives a total of approx. 65,000 node address capability in a DECnet network.
DECnet does not use MAC addresses. It folds a network level address into the MAC layer address
according to an algorithm that multiplies the area number by 1024 and adds the node number to the
product. The resulting 16-bit value is converted to hexadecimal and
appended to the prefix aa00.0400 in byte-swapped order, with the least significant byte first.
Any address that starts with aa00.04 is a DECnet address; aa00.03 is also used. You can define a
name-to-DECnet address map. When DECnet initializes, the modified software-supplied MAC address
is propagated to each interface.

Cisco recommends that you establish an address translation table for selected nodes between networks,
which eliminates any duplicate addresses between networks. The Address Translation Gateway
(ATG) allows you to define multiple DECnet networks and map between them. Configuring ATG
allows Cisco routers to route traffic for multiple independent DECnet networks and establishes user-
specified address translation for selected nodes between networks. ATG can be configured on all
media types; when using ATG all the DECnet configuration commands apply to network number 0
unless otherwise specified.

DECnet routing occurs at the routing layer of DNA in DECnet Phase IV and at the OSI Network Layer
in DECnet/OSI; both versions have similar routing implementations. DECnet networks are organized
in a hierarchy; the OSI routing hierarchy is similar to the hierarchy in DNA. The first component is
the area, which is a group of contiguous networks and attached hosts defined by network admins.
DEC Vaxes can be configured as routing nodes in DECnet Phase IV. End nodes or systems (ES refers
to any non-routing node) are workstations and Vaxes; an intermediate system (IS) refers to a routing
node. In DECnet/OSI there are end nodes, routing nodes, and one of the possible ISO defined nodes.

DECnet routing nodes are referred to as either LEVEL 1 or LEVEL 2 routers. LEVEL 1 routers
communicate with end nodes, other LEVEL 1 routers in the same area, and LEVEL 2 routers in the
same area. A LEVEL 1 router can be a normal LEVEL 1 or a designated router (DR). LEVEL 1
routers are sometimes referred to as routing-iv routers. LEVEL 2 routers communicate with LEVEL
1 routers in the same area and LEVEL 2 routers in different areas. Level 1 and Level 2 routers form
the DECnet hierarchy routing scheme.

The DR can be manually configured, or the Level 1 router with highest priority can be elected to be the
DR; if Level 1 routers have the same priority, then the router with the most nodes becomes the DR.
When a Level 1 router needs to send a packet outside the area it forwards the packet to a Level 2 router; sometimes
the Level 2 router will not have the optimal route to the destination. This mesh network configuration
offers a degree of fault tolerance not provided by assigning a single Level 2 router to perform all inter-area routing.

Each router knows about all hosts in an area and has a routing table with host/cost information;
periodic updates that contain cost information about all nodes in a router’s area are sent by each router.
DRs perform routing functions and assist end nodes on the same segment in identifying other end
nodes and routing nodes so that communication can occur. When an end node first attaches it must
send all traffic to the DR; as the node receives traffic from other nodes, it learns and caches their
addresses. Eventually the node sends its traffic directly to the cached nodes rather than the DR.

Routers and end nodes use the hello protocol to inform each other of their existence; the DR sends
hellos periodically to inform end nodes of the DR's address. New end nodes send hello messages to
inform the DR of their existence.

DECnet/OSI routing is implemented by the standard OSI routing protocols and the DECnet Routing
Protocol (DRP). ES-IS protocol allows ES’s and IS’s to discover each other while the IS-IS protocol
provides routing between IS’s. DRP is a proprietary DEC routing protocol; DECnet Phase IV routing
is performed by DRP at the routing layer of DNA and performs five functions:

Path determination by using cost metric
Traffic forwarding
Receive function – DRP receives all incoming traffic and passes it to the appropriate process or layer
Routing updates – informs other routing nodes about path and address information for destination
networks.
Routing Table Maintenance – Level 1 and Level 2 routers
To configure a DECnet address of area 5 and node 4 you type the command: decnet routing 5.4;
addresses are assigned to devices and not interfaces. To configure a router as a Level 1 router type the
command: decnet node-type routing-iv; to assign an outgoing cost to an interface you must enter
interface configuration mode and type the command: decnet cost 5 (a cost must be assigned since there are no
default costs). To configure a router as a Level 2 router type the command: decnet node-type area.

Show decnet neighbors displays all DECnet Phase IV and DECnet Phase IV prime neighbors and their
MAC addresses.

Show decnet route displays contents of DECnet routing table; routing tables contain entries for all
hosts; each host is aware of all other hosts in the area.

Show decnet interface [interface type.number] – displays status and configuration of decnet interfaces,
including address, path cost, access lists, and more.

Show decnet global – displays global DECnet parameters.

Show decnet traffic – displays all DECnet traffic that has arrived at the router.

Debug decnet routing – displays DECnet routing update messages; to turn off undebug decnet routing
or no debug decnet routing.

Managing Traffic and Access

Managing IP Traffic

Managing IPX Traffic
IPX uses network.node addressing and a socket number. Serial lines use the MAC address of another interface to
create their logical addresses. Cisco IOS checks IPX packet headers; standard access-lists
check for source and destination addresses and are numbered 800-899; they use wildcard masks to identify
which part of the address is to be checked or ignored, similar to IP wildcard masks. Access-
lists that use 1000-1099 are SAP filters and are used to filter on service types and to control traffic from
SAP. Access-lists can also filter: GNS, RIP, and NLSP.

access-list access-list-number {permit|deny} source-network [.source-node] [source-node-mask]
[destination-network] [.destination-node] [destination-node-mask]

ipx access-group access-list-number [in|out]

ipx routing
access-list 800 permit 2b 4d
interface e0
ipx network 4d
ipx access-group 800 out
interface e1
ipx network 2b

Extended IPX access-lists are used to filter on protocol types and range from 900-999

The protocol, the source-socket, and the destination-socket parameters can all be represented by a
hexadecimal number. The log parameter can be used to log access-list violations and contains source
and destination addresses, source and destination sockets, protocol type, and type of action. Source
and destination sockets can also be used to determine protocol type when the log doesn’t give a clear
indication.
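A rough sketch of an extended IPX access-list and its application follows (network numbers and interface are hypothetical; here -1 is assumed to mean any protocol or network, socket 0 to mean all sockets, and the list is applied outbound as in the standard example above):

access-list 900 deny -1 4d 0 2b 0
access-list 900 permit -1 -1 0 -1 0
interface e0
ipx network 2b
ipx access-group 900 out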

A Cisco router can act like a SAP server; SAP broadcasts synchronize the list of available servers;
routers will not forward SAP broadcasts, so the router builds a SAP table which it holds in main memory; SAP
numbers represent service types, ie, fileserver=4, print server=7, remote bridge or router=24.

SAP traffic is controlled by an IPX input SAP filter and an IPX output SAP filter; the IPX input filter
reduces the number of services entered into the SAP table; SAP output filters reduce the number of
services propagated from the SAP table. SAP filters must be numbered between 1000-1099. -1 represents all
networks.

To configure an input SAP filter on an interface: ipx input-sap-filter access-list-number
To configure an output SAP filter on an interface: ipx output-sap-filter access-list-number
ipx router-sap-filter access-list-number – filters SAP advertisements received from a specific router. To
configure an output SAP filter that blocks SAP traffic from a specific server:

ipx routing
access-list 1000 deny 9e.1234.5678.9101 4
access-list 1000 permit -1
interface e0
ipx network 9e
interface s0
ipx network 1a
ipx output-sap-filter 1000


show access-lists displays configured access-lists.

Managing AppleTalk Traffic

Cisco IOS checks AppleTalk packet headers for cable range or network numbers and ZIP replies;
AppleTalk access-lists are configured with numbers between 600-699. The network admin refers to
the 16-bit network section of the full 24-bit AppleTalk address when configuring access-lists.
AppleTalk node numbers are not predictable for entries to an access-list because they are dynamically
configured at startup. One or more AppleTalk networks are expressed as a cable range.

appletalk routing
access-list 601 deny within cable-range 100-102
access-list 601 permit within cable-range 103-105
interface e0
appletalk cable-range 120-120
appletalk access-group 601
interface e1
appletalk cable-range 100-105

ZIP filters are used to reduce traffic from AppleTalk zone information updates and are used on
GetZoneList (GZL) packets. The access-list other-access command defines a default filter for other
networks or cable ranges.

access-list 602 {permit|deny} zone zone-name
access-list 602 {permit|deny} additional-zones – defines default filtering for all other zones.

The appletalk zip-reply-filter command links a ZIP reply filter to a particular interface. Use show
appletalk access-lists to display the access-lists defined for AppleTalk.
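A brief sketch of a ZIP reply filter (zone name and interface hypothetical):

access-list 602 deny zone Accounting
access-list 602 permit additional-zones
interface ethernet 0
appletalk zip-reply-filter 602

This is intended to keep the zone Accounting out of ZIP replies sent on that interface, while all other zones continue to be advertised.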

Managing VINES Traffic

VINES standard and extended access-lists are used to control the transmission of packets on an
interface, select interesting traffic that initiates a dial-on-demand routing (DDR) connection, restrict the contents
of routing updates, and control the source address of received routing updates.

A standard access-list restricts traffic based on the packet's protocol, source address and source-
address mask, destination address and destination-address mask, and source and destination port.
VINES standard access-lists have numbers between 1 and 100.

Extended access-lists restrict traffic the same as standard access-lists; however, you can specify masks for
the source and destination ports. You must specify port numbers and port masks to control IPC and
SPP traffic. VINES extended access-lists have numbers between 101 and 200.

VINES also supports simple access-lists. VINES servers synchronize clocks across the entire network
by sending zero-hop and two-hop broadcast messages. Simple access-lists are used to decide on the
stations from which time updates can be received. VINES simple access lists have numbers between
201 and 300. Restrictions are based on source address and source-address mask. Simple access lists
cannot be used to filter routed traffic.

You can define two types of filters on a VINES network. The first filter is defined on a packet’s
protocol, source and destination addresses, address masks, and explicit port numbers. The second
type of filter is defined the same as the first with the inclusion of a port mask. Only one access-list can
be assigned to an interface; the entries in an access-list are applied to all outgoing packets not sourced
by the router.

VINES standard access-lists are defined by the following command:

vines access-list access-list-number {permit|deny} protocol source-address source-mask [source-port]
destination-address destination-mask [destination-port]. VINES protocol ID numbers are between 1
and 255 or one of the following keywords: arp, icp, ip, ipc, rtp, or spp. VINES source and destination
addresses are expressed in hexadecimal network:host format, where network is 4 bytes and host is 2
bytes. The source and destination addresses must be accompanied by a 6-byte mask, which indicates
which bits in the address can be ignored. You place a 1 in each of the bit positions you want to mask or
ignore. Source and destination ports are expressed as hexadecimal numbers ranging from 0000 thru
FFFF.

VINES extended access-lists are defined by the following command:

vines access-list access-list-number {permit|deny} protocol source-address source-mask [source-port
source-port-mask] destination-address destination-mask [destination-port destination-port-mask].

VINES simple access-lists are defined by the following command:

vines access-list access-list-number {permit|deny} source-address source-mask

You apply a VINES access-list to an interface with: vines access-group access-list-number

VINES mail traffic uses IPC port 4, and is controlled by an extended access-list since you need to
specify source and destination port masks. To configure this type of access-list type the following:

vines access-list 101 permit ipc 264113:1 0:0 0 FFFF 264111:1 0:0 4 0 - allows mail traffic from the
first server to the second server
vines access-list 101 permit ipc 264111:1 0:0 0 4 264113:1 0:0 4 FFFF - allows mail traffic from the
second server to the first server
vines access-list 101 deny ip 264111:1 0:0 264113:1 0:0 - denies all other traffic between the two
servers
vines access-list 101 permit ip 0:0 FFFFFFFF:FFFF 0:0 FFFFFFFF:FFFF - permits all other
communication to pass thru the router

A mask of FFFF is the same as placing all 1's in the bit positions of the mask, so those bits will be ignored.
Specifying a mask value of FFFFFFFF:FFFF tells the Cisco IOS to ignore the source and destination
addresses.
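The list is then applied to an interface using the command shown earlier, for example (interface hypothetical):

interface ethernet 0
vines access-group 101
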
Managing DECnet Traffic
If an access-list rejects a packet, the router returns an ICMP Host Unreachable packet. DECnet
extended access-lists have numbers between 300 and 399; DECnet standard access-lists have numbers
between 1 and 99.

decnet access-group access-list-number {in|out}

access-list 305 deny 20.0 0.1023 0.0 63.1023
access-list 305 permit 5.0 0.1023 0.0 63.1023
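As a minimal sketch, the list above could be applied to an interface as follows (interface hypothetical; the simple form of the command without a direction keyword is shown):

interface ethernet 0
decnet access-group 305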

WAN Connectivity on Cisco Routers
Availability, bandwidth, cost, ease of management, application traffic, and routing protocol specifics
are all important when considering different types of connections.

Connection management deals with configuration at initial startup, ongoing configuration tasks of
normal operation, and the ability of the connection to deal with varying rates of traffic.

Dial-in using modems, dial-up connections using a router, dedicated leased lines, and packet-switched
services are all connection services available. Users with asynchronous modems make temporary
connections using PSTN.

Dedicated leased lines provide full-time synchronous connections and are implemented by Cisco as a
point-to-point connection over serial lines. This type of connection is only available on Cisco products
that have a CSU/DSU attached to a synchronous serial port.

Dial-up connections are for infrequent low-volume traffic. Examples of dial-up are dial-on-demand
routing (DDR) and dial backup. These circuit-switched calls are placed using PSTN and ISDN. DDR
implementation is available on Cisco products that have asynchronous auxiliary ports, synchronous
serial ports, and ISDN ports.

Packet-switched networks (PSN’s) use virtual circuits that provide end-to-end connectivity.

X.25, frame-relay, and SMDS are supported on Cisco products that have synchronous serial interfaces.

An access server is a concentration point for dial-in and dial-out connections.

The choice of medium and technology will depend primarily on the bandwidth requirements of the
servers in the LAN environment. The amount of bandwidth required by each dial-in client should not
be considered.

Security is a concern for dial-in access networks; PPP provides call authentication using passwords, and TACACS+
is an application that can also authenticate calls. Terminal Access Controller Access Control System
(TACACS) is an authentication protocol that provides remote access authentication and event logging.
TACACS+ is a Cisco enhancement of TACACS that provides additional authentication, authorization,
and accounting.
Cisco's point-to-point implementation allows two types of transmissions: datagram transmission
composed of individually addressed frames and data stream transmission composed of a stream of data
for which address checking occurs only once.

Channel service unit/data service unit (CSU/DSU) equipment yields up to 2 Mbps of bandwidth (E1).
Reliability for user traffic is provided by different encapsulation methods at the data-link layer.

DDR connections are made only when traffic marked interesting by the router dictates; if uninteresting
traffic is encountered by the router and a connection is already established, then this traffic will also be
transmitted. The router maintains an idle timer which is reset after interesting traffic is detected. Once
the idle timer expires, then the circuit is terminated. Periodic routing updates and broadcast traffic
should be treated as uninteresting by the router. Since DDR connections are not always active, routing
protocols need some type of mechanism to convince it that the destination is reachable, ie, IPX
sessions are maintained by the router spoofing keepalives. When a DDR WAN link is down, there is
no tariff charged.
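A partial sketch of marking interesting traffic and setting the idle timer for DDR follows (interface, protocol, and timer value are hypothetical, and the dialer map/string and routing configuration needed to actually place calls are omitted):

dialer-list 1 protocol ip permit
interface bri 0
dialer-group 1
dialer idle-timeout 120

Here any IP traffic is treated as interesting and resets the 120-second idle timer; other traffic can still cross an established connection but does not keep it up, matching the behavior described above.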

Packet-switched networks (PSN's) consist of DCE's connected thru packet-switching exchanges or
simply switches, which are multi-port devices that operate at the Data-Link Layer. PSN’s can be
accessed by multiple end stations thru DCE’s. The path between end stations is indirect, utilizing a
series of intermediate nodes. Packets contain source and destination addresses and other control
information. The switching technology is transparent to the user and is responsible for internal
delivery of data. In a WAN switched environment, routers do not share a common medium, so it is a
non-broadcast environment. PSN's are described as a non-broadcast multi-access environment
(NBMA). A broadcast environment can be created by transmitting data on each individual circuit; this
causes significant buffering and CPU utilization by the transmitting router, which can result in loss of
data. PSN's are both privately and publicly maintained. X.25, frame-relay, ATM, and switched
multimegabit data service (SMDS) are all examples of PSN's.

X.25 is more widely available in Europe; X.25 operates at the lower levels of OSI model, uses packet-
based analog technology, and its standards are well understood. X.25 is a connection-oriented service
which offers very good traffic control by using extensive error checking; X.25 uses both PVC's and
SVC's. Individual transmissions between switches require acknowledgement; if packets are not
received or acknowledged, then they are retransmitted. This makes X.25 a good choice where line
quality is low.

Frame-relay is widely available in both the US and Europe and uses a frame-based digital technology.
It is also a connection-oriented service and its standards are well understood. It also uses both PVC’s
and SVC’s. Frame-relay was designed to take advantage of improved wide area transmission media
(fiber and digital links), can reliably support 56/64 Kbps to 1.544 Mbps data rates, and offers limited
error detection at the Data-Link Layer. Implementations of 45 Mbps (DS-3) links are also possible.
The high quality of its links allows frame-relay to dispense with error correction algorithms that can be
performed at higher protocol levels. Frame-relay uses cyclic redundancy check (CRC) for detecting
corrupted bits; bad frames are dropped without notifying the user. Frame-relay has a number of
extended features that provide capabilities for complex networking environments and are collectively
called the local management interface (LMI). Some LMI extensions are referred to as common and are
expected to be implemented by anyone adopting the specification, while other extensions are optional.
LMI extensions include: virtual circuit status messages (common), multicasting (optional), global
addressing (optional), and simple flow control (optional).
Both ATM and SMDS are cell-relay services that use fiber cable as their transmission medium.
Information is organized into fixed sized cells and can be processed and switched at very high speeds,
which produces very tight delay characteristics. Both are designed for high-speed transmission of
voice, video, and data over WAN networks, and are also widely available in both the US and Europe.
SMDS has proprietary standards, while those of ATM are still evolving. Both rely on low error rates
of the transmission medium to achieve high throughput rates. There is no error checking or
retransmissions in SMDS and ATM. Although SMDS is described as a fiber-based service, DS-1
(Digital Signal 1) access can be provided over either fiber or copper media with sufficiently good
error characteristics.

SMDS is a LAN-like datagram service that relies on SMDS servers to forward traffic within the
network, and can operate at speeds of 100 Mbps. Access to an SMDS network is accomplished by the
SMDS Interface Protocol (SIP); SMDS packet sizes are large enough to encapsulate entire 802.3,
802.4, 802.5, and FDDI frames. SMDS supports group addressing which is analogous to multicasting.
SMDS also offers source and destination address screening, which allows subscribers to achieve a
virtual private network that excludes unwanted traffic.

ATM is a connection-oriented service that primarily uses PVC's to establish connections. ATM supports
two types of connections: unidirectional or bi-directional point-to-point connections and unidirectional
point-to-multipoint connections. Special ATM switches are used to forward traffic in the network.
ATM currently operates at speeds up to 155 Mbps, but new technologies now make speeds up to 625
Mbps possible. ATM differs from synchronous transfer methods that use TDM techniques to
preassign users to fixed time slots. In contrast ATM assigns time slots on demand, which makes more
efficient use of bandwidth. An ATM station can send cells whenever necessary. ATM is a star
topology in which an ATM switch acts as a hub with all devices directly attached. This provides all
the advantages of star topology, ie, easier troubleshooting and flexibility for change. An ATM end
station makes a contract with the ATM network based on QOS parameters. This contract specifies an
envelope describing intended traffic flow; an ATM device uses traffic shaping to ensure that traffic
will fit in the promised envelope. Traffic shaping uses queues to constrain data bursts, limit peak data
rate, and smooth jitter. ATM switches have the option of using traffic policing to enforce the contract,
which allows switches to drop offending cells during periods of congestion.

Serial Encapsulation Protocols

Cisco's encapsulation command is used to set the encapsulation method used by the serial interface:
encapsulation encapsulation-type. The choice of encapsulation type will depend on the type of
interface used, where each type of media has a default encapsulation type. Synchronous serial
connections allow the following encapsulation types:

      Frame-relay                                            PPP
      X.25                                                   SMDS
      LAPB                                                   HDLC (default)

Asynchronous serial connections are restricted to the following:

      PPP                                                    SLIP (default)

Other types of connections allow the following encapsulation types:
      Ethernet – SAP, SNAP, or ARPA                          ATM – SNAP (default)
       (default)                                              ISDN BRI – PPP or HDLC (default)
      Token-ring – SNAP (default)
      FDDI – SNAP (default)

HDLC is the default encapsulation type for synchronous serial lines. HDLC identifies three types of
network nodes:

      Primary                                                Combined
      Secondary

Primary nodes control one or more secondary nodes, and poll secondary nodes in a predetermined
order. The primary node sets up and tears down links, and manages the link while it is operational.
HDLC supports three transfer modes:

   1. Normal response mode (NRM)
   2. Asynchronous response mode (ARM)
   3. Asynchronous balanced mode (ABM)

ARM and ABM provide HDLC with more flexibility in defining nodes that can initiate
communications. With NRM secondary nodes can only communicate with primary nodes when they have
been given permission to do so. ARM allows secondary nodes to initiate communication with a
primary node without permission. With ABM all communication is between multiple combined nodes
in which any combined node can initiate communication with any other node without permission.

HDLC primary and secondary nodes can be connected in two basic ways:

   1. point-to-point, which involves two nodes, a primary and secondary
   2. multipoint, which involves one primary and multiple secondaries

HDLC supports multiple protocols and does not provide authentication. Since HDLC is the default
encapsulation type it is not necessary to configure it explicitly.

PPP encapsulation must be configured at both ends of a link and is achieved by entering the following
in serial interface configuration mode: encapsulation ppp. PPP performs address negotiation, supports multiple protocols, and is interoperable with other vendors' hardware.
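
For illustration, a minimal PPP configuration that would be applied at each end of the link (the interface and address are hypothetical) might look like this:

interface serial0
encapsulation ppp
ip address 172.16.20.1 255.255.255.0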

Link Access Procedure Balanced (LAPB) is the Data-Link Layer protocol specified by X.25 that has a
retransmission characteristic to provide reliable connections on low quality lines. If LAPB is used on
satellite links, needless retransmissions caused by unpredictable delays can seriously degrade
performance, which makes LAPB a bad choice where unpredictable delays are common. LAPB shares
the same frame format with HDLC; however, it is restricted to the ABM transfer mode, which makes it appropriate only for combined nodes. LAPB circuits can be established by both DCE and DTE; the station initiating the call is designated as the primary, while the responding station is the secondary. LAPB is configured with the following command in serial interface configuration mode: encapsulation lapb [dte|dce] [multi|protocol]. The following protocols are supported by LAPB:
       IP (default)                                             VINES
       XNS                                                      CLNS
       DECnet                                                   IPX
       AppleTalk                                                Apollo

Cisco LAPB commands can also change the retransmission characteristics of LAPB by altering:

   1.   Maximum number of bits per frame (must be a multiple of 8)
   2.   Retransmission timer period
   3.   Transmission count – specifies how many times a frame can be transmitted
   4.   Frame count – which sets the upper limit on the number of outstanding frames awaiting
        acknowledgement
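
As a sketch, the characteristics listed above correspond to the lapb n1, lapb t1, lapb n2, and lapb k interface commands; the values shown here are hypothetical and must match the attached equipment:

interface serial0
encapsulation lapb dte ip
lapb n1 12000 – maximum number of bits per frame (a multiple of 8)
lapb t1 3000 – retransmission timer period in milliseconds
lapb n2 20 – transmission count
lapb k 7 – frame count (window of outstanding frames)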

Serial Compression Techniques

The default method for transmitting data across a serial link is in uncompressed format. This allows
header information to be used in normal switching operation; however, sending data in uncompressed
format makes heavy demands on bandwidth. Although compression involves some overhead, this is outweighed by the increased rate of transmission of compressed data. You can compress either the header or the payload of a packet. The Van Jacobson algorithm is used to compress the fixed-length headers of small packets. Header compression is protocol specific and uses a different algorithm for each protocol. To compress TCP headers use the following command: ip tcp header-compression [passive], where the optional passive keyword is used to restrict compression of outgoing TCP packets. When passive is used, outgoing TCP packets are compressed only if incoming packets on the same interface are compressed. Header compression can be used on, but is not limited to, the following applications:

       Telnet                                                   rlogin
       DECnet LAT                                               acknowledgements

Header compression is supported by the HDLC and X.25 encapsulations.
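
A minimal sketch of enabling TCP header compression on a serial interface (the passive keyword is optional, as described above; the interface is hypothetical):

interface serial0
ip tcp header-compression passive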

Payload compression leaves the header intact, which allows the packet to be routed thru a switched network in a normal way. Payload compression is suitable for the following encapsulation types:

X.25
SMDS
Frame-relay
ATM

Link compression involves both the header and payload and uses the PPP and LAPB encapsulation
types. Link compression is also protocol independent and is achieved by applying a “lossless”
predictor algorithm to the packet. This predictor algorithm has the ability to learn data patterns
allowing it to predict the next character in a data stream. The term lossless refers to the fact that no data is lost during compression or decompression. Cisco’s compress predictor command can be
used to configure point-to-point software compression for the following encapsulation types:

LAPB
PPP
HDLC
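
For example, predictor link compression could be enabled on a PPP link as follows (a sketch only; both ends of the link would need a matching configuration):

interface serial0
encapsulation ppp
compress predictor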

Traditionally the implementation of multiple WAN connections has taken the form of a dedicated
CSU/DSU for each line. Each of these low-speed lines (64 Kbps) was located on a multiple-port
synchronous serial interface card, ie, the 4-port Fast Serial Interface Processor (FSIP), which is the
default serial interface processor for Cisco 7000 series routers. FSIP provides four or eight high speed
serial ports. The number of WAN connections that a router could support was limited to the port
density of the interface card and backplane capacity. Cisco 7000 series routers also support the
MultiChannel Interface Processor (MIP), which supports two full channelized T1 or E1 serial ports.
T1 and E1 are wide-area digital transmission schemes: T1 – 1.544 Mbps, E1 – 2.048 Mbps. Cisco 4000 and 3600 series routers support a single-port interface for channelized T1/E1. The MIP card allows you to configure up to 24 T1 or 30 E1 subchannels on one physical port; each subchannel can be separately configured as if it were a dedicated interface, and has the same configuration options
and characteristics as regular serial ports. Line encoding and framing must be set to match the carrier
equipment. The output of a port on the MIP card can be carried by a private network, or it can be
directly connected to the service provider’s facility. The channel output is then carried by a Public
Data Network (PDN). You can configure multiple MIP cards into a single 7000 chassis; MIP cards can
be connected to other MIP cards in complex configurations.

To configure a channelized T1/E1 on a 7000 router, you must first indicate the channel type (T1/E1) and the MIP card being configured; the linecode and framing type for the line are then defined (these are T1/E1 specific). The framing command for a T1 allows the following frame types:

SF – super frame
ESF – extended super frame

SF, also called D4 framing, consists of 12 frames of 192 bits each, with the 193rd bit used for error checking and other functions. ESF is an enhanced version of SF with 24 frames of 192 bits each.

The framing command for an E1 allows the following frame type:

CRC4 (default)
NO-CRC4

A T1 channel can have the following linecode types:

AMI – alternate mark inversion
B8ZS – binary 8-zero substitution

An E1 channel can have the following linecode types:

AMI
HDB3

When configuring channelized T1/E1, you then need to define the channel group (subchannel) associated with each timeslot; this is accomplished using the channel-group command: channel-group [number] timeslots [range] [speed {48|56|64}]. Number refers to the channel-group number, 0-23 for T1 and 0-29 for E1; range refers to the range of timeslots belonging to the channel-group, 1-24 for T1 and 1-31 for E1; speed is an optional parameter with a default of 56 Kbps. Examples:
channel-group 0 timeslots 1
channel-group 8 timeslots 5,7,12-15,20 speed 64

Finally you need to assign an IP address to each subchannel and an encapsulation type.

In a simple configuration, the MIP card receives its clock signals from the T1/E1 line; in more complex configurations where MIP’s are connected to each other, you need to specify the clock source as internal or the T1/E1 line; this is accomplished with the clock source command.
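
Putting the steps together, a sketch of a channelized T1 configuration on a MIP card might look like the following (the slot/port numbers, timeslots, and addresses are hypothetical, and the framing and linecode must match the carrier):

controller t1 1/0
framing esf
linecode b8zs
clock source line – take clocking from the T1 line
channel-group 0 timeslots 1-12 speed 64
channel-group 1 timeslots 13-24 speed 64
interface serial 1/0:0
ip address 172.16.30.1 255.255.255.0
encapsulation hdlc
interface serial 1/0:1
ip address 172.16.31.1 255.255.255.0
encapsulation ppp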

SMDS Configuration

Switched Multimegabit Data Service (SMDS) is a cell relay WAN technology that uses small, fixed-size packets, which can be processed and switched at high speeds. SMDS uses a packet-switching networking method where nodes share bandwidth with each other by sending packets. SMDS is a
connectionless service; its datagram-based service does not require data-link acknowledgements. This
service is usually implemented over a fiber-based full mesh topology and is used for communications
over PDN’s. SMDS development has been facilitated by extensive use of distributed processing and
greater efficiency of fiber-based media; its features are detailed by a series of specifications produced
by Bell Communications Research (Bellcore).

Although primarily deployed on fiber-based media, SMDS can also run over copper media. SMDS
supports speeds of 1.544 Mbps on DS-1 and 44.736 Mbps on DS-3. The SMDS protocol is based on
the three SIP layers, which do not correspond to the bottom three layers of the OSI model but provide
the same basic functionality. Layer 3 has variable-sized SMDS Data Units (SDU’s), which can contain up to 9188 bytes of user data and are large enough to encapsulate entire 802.3, 802.5, and FDDI
frames.

Access to an SMDS network is provided thru an SMDS-specific CSU/DSU called an SDSU, which handles transmission over the Data-Link and Physical Layers. It is connected to high-speed serial and other serial interfaces. The SMDS network is made up of three
main elements:

Customer Premise Equipment (CPE)
Carrier Equipment (CE)
Subscriber Network Interface (SNI)

CPE includes end devices (terminals and personal computers) and intermediate nodes (routers,
modems, and multiplexers). Intermediate nodes are sometimes provided by the SMDS carrier.

Carrier equipment consists of high-speed WAN switches that conform to network equipment
specifications. These specifications define network operations, a local and long-distance carrier
network, and two switches inside a single carrier network.

SNI is the interface between CPE and CE, and is the point where the customer network ends and the
carrier network begins. The SNI is used to make the technology and operation of the carrier SMDS
network transparent to the user.
SMDS implements a set of access-classes that constrain different CPE devices to a specified sustained
or average rate of data transfer. Access-classes allow SMDS to accommodate a wide range of traffic requirements and equipment capabilities. Each access-class defines a maximum sustained information
transfer rate and maximum allowed volume of traffic sent in bursts, ie, the five access-classes for a
DS-3 are as follows:

   1. 4 Mbps                                                 4. 25 Mbps
   2. 10 Mbps                                                5. 34 Mbps
   3. 16 Mbps

Access classes are implemented using a credit management scheme, which uses a credit algorithm to
create a credit balance for each customer interface that is decremented whenever a customer sends packets into the network. New credits are allocated periodically up to an established maximum; credit
management is used on DS-3, but not DS-1.

SMDS addresses (64-bit addresses specified in E.164 format) are assigned by the service provider.
The address type constitutes 4 bits and the telephone number 60 bits; an address is either unicast (denoted by a C) or multicast (denoted by an E). Unicast addresses are individual CPE addresses; multicast
addresses are group addresses.

SMDS protocol data units (PDU’s) carry both a source and a destination address. The source address
is used by the recipient of the data to return data to the sender; it can also be used for address
resolution to discover the mapping of high layer addresses and SMDS addresses. SMDS group
addresses allow a single address to refer to multiple CPE stations, and are used by CPE stations as the
destination address of the PDU. The network makes multiple copies of the PDU for delivery to all
members of the address group. SMDS group addressing is similar to multicasting on LAN’s. This
group addressing capability reduces the amount of network resources needed for: distributing route
information, resolving addresses, and dynamically discovering network resources.

SMDS implements two addressing security features:

Source address validation
Address screening

Source address validation is a method of ensuring that the PDU source address given is the legitimate
address of the SNI from which the PDU was sent, which protects users against address spoofing.
Source address screening acts on addresses as data units are leaving the network; destination address
screening acts on addresses as data units are entering the network. If the address is not authorized,
then the data unit is not delivered. Address screening allows a subscriber to establish a private virtual
network that excludes unwanted traffic. Address screening also allows greater network efficiency
since SMDS devices do not have to handle unwanted traffic.

SMDS Interface Protocol (SIP) is used for communications between CPE and SMDS carrier
equipment and is a connectionless service. SIP is the SMDS implementation of the 802.6 Distributed
Queue Dual Bus (DQDB) data-link communication protocol for metropolitan area networks (MAN’s).
DQDB defines a MAC protocol that allows many systems to interconnect thru two unidirectional
logical buses, and is aligned with emerging standards for Broadband ISDN (BISDN), which enables it
to interoperate with broadband voice and video services. However, to interface SMDS networks, only
the connectionless data portion of the 802.6 protocol is required, which means that SIP can not define
voice or video application support.

The DQDB was designed to support many different data and non-data applications. As a shared
medium protocol it is quite complex, consisting of the protocol syntax and a distributed queuing algorithm that constitutes the shared medium access control. The term access DQDB refers to the operation of DQDB across an SNI for SMDS network access, rather than to operations inside the SMDS network. A
switch in the SMDS network operates as one station on the access DQDB; CPE’s operate as one or
more stations on the access DQDB. An SMDS access DQDB can be arranged in either a single or
multiple CPE configuration.

A single CPE access DQDB consists of one switch in the carrier SMDS network and one CPE station
at the subscriber's site. This combination results in a two-node DQDB subnetwork where communication occurs only between the switch and one CPE device across the SNI. There is no contention on the bus because there are no other CPE devices attempting to access it.

A multiple CPE access DQDB consists of one switch in the carrier SMDS network and a number of
interconnected devices at the subscriber site, which allows local communication between CPE devices.
Some of the local communication is visible to the switch serving the SNI. Multiple devices’
contention for the bus is managed by the DQDB distributed queuing algorithm.

SIP maps to the physical and data-link layers of the OSI model and consists of three layers:

Level 3 maps to the MAC sublevel of the Data-Link Layer
Level 2 also maps to the MAC sublevel of the Data-Link Layer
Level 1 operates at the Physical Layer

SIP level 3 functionality is provided by the router; user information takes the form of SDU’s. SDU’s
are passed to SIP level 3 where they are encapsulated in a SIP level 3 header and trailer; the resulting
frame is called a level 3 protocol data unit (PDU). The router is also responsible for mapping network
protocol addresses to SMDS addresses. Level 3 PDU’s are passed to SIP level 2; SIP level 2 and level
1 functionality is provided in the SDSU. The SDSU segments the PDU’s into fixed-size cells of 53
bytes each, and the cells are then passed to SIP level 1 for placement on the physical medium. SIP
level 1 operates at the Physical Layer and provides the physical-link protocol that operates at DS-1 or
DS-3 rates between CPE devices and the network. There are two sublayers in level 1:

The transmission system sublayer – defines the characteristics and method of attachment to a DS-1 or
DS-3 transmission link.
The physical layer convergence protocol (PLCP) – specifies how SIP level 2 cells are arranged relative
to DS-1 or DS-3 frame, and defines management information.

To configure a router to route traffic thru an SMDS network you need to:

Enable SMDS encapsulation
Assign a specific SMDS address
Map upper layer addresses to SMDS ones
Map SMDS addresses to multicast addresses

Mapping upper layer addresses to SMDS ones ensures that protocol-specific traffic can be transmitted over an SMDS network. To do this you map the protocol-specific network number to the SMDS address of the router on the other side of the SMDS network.

Mapping SMDS addresses to multicast addresses allows broadcasts to be transmitted over an SMDS network by linking the SMDS E.164 multicast address or group address with the broadcast address of the higher-level routed protocol. This ensures that the router does not have to replicate broadcast messages to every remote host.

If you have an ARP server on your network, you need to configure the SMDS multicast ARP
command on the router, which maps an SMDS group address to an ARP multicast address; the
multicast ARP address is optional. When broadcast ARP’s are sent, SMDS first attempts to send the packet to the SMDS ARP multicast address; if you do not specify one in the configuration, broadcast ARP’s are sent to the SMDS IP multicast address. So if you do not specify the optional ARP multicast address, you have to specify an smds multicast ip command for broadcasting. If no IP address is used in the smds multicast ip command, the default value is the IP address of the interface. You can enable dynamic address learning using ARP by typing: smds enable-arp; the multicast address for ARP must be set before this command is used.

Example of configuring SMDS on a router:

int s0
encapsulation SMDS
smds address c123.4567.2323
ip address 172.16.1.1 255.255.255.0
novell network de01
smds static-map novell de01.1234.4567.8989.1010 c123.4576.6767
smds multicast novell e180.0999.9999
smds multicast arp e180.0999.9999
smds multicast ip e180.0999.9999
smds enable-arp

X.25 Configuration
The X.25 protocol is an ITU-T standard for WAN communications in a packet-switched environment,
and is designed to operate effectively regardless of the type of systems connected to the network. Packet-switching exchanges (PSE’s) are made up of switches that support full-duplex information transfer. A simple DTE device, ie, a character-mode terminal, may not support full X.25 functionality; connectivity is achieved with a DCE by means of a packet assembler/disassembler (PAD) device, which
performs three functions:

Buffering
Packet assembly – adds X.25 header
Packet disassembly – removes X.25 header

The X.25 protocol maps to the lowest three layers of the OSI model; packet layer protocol (PLP) is the
X.25 network layer protocol, which manages packet exchange between DTE devices across virtual
circuits. Multiple network layer protocols can be transmitted across an X.25 network; this results in
tunneling (other layer 3 protocols are contained in X.25 layer 3 packets). Link Access Procedure Balanced (LAPB) is the layer 2 data-link protocol, which manages communication and packet framing
between DTE and DCE devices. LAPB is a bit-oriented protocol that ensures frames are correctly
ordered and error free. PLP at layer 3 and LAPB at layer 2 provide reliability and sliding windows,
and were designed with strong flow control and error checking functionality. X.21bis is a physical
layer, layer 1, protocol used in X.25 that defines the electrical and mechanical procedures for using the
physical medium. X.21bis handles the activation and deactivation of the physical medium connecting
DTE and DCE devices; it supports point-to-point connections at 19.2 Kbps and synchronous, full-
duplex transmission over four-wire media.

Communication between DTE devices is accomplished across a virtual circuit (VC); a VC is a logical, bi-
directional path from one DTE to another. X.25 can maintain up to 4095 VC’s on a single interface
thru the use of multiplexing. There are two types of X.25 virtual circuits:

   1. Switched virtual circuit (SVC) – temporary connections used for sporadic data transfers; they
      require the two DTE devices to establish, maintain, and terminate a session each time the
      devices need to communicate. SVC’s can be combined to improve throughput for
      encapsulating a specific protocol by providing a larger effective window size, especially for
      protocols that offer their own high-layer re-sequencing. A maximum of eight SVC’s per
      protocol per destination is allowed. This SVC combination does not benefit traditional X.25
      applications.
   2. Permanent virtual circuit (PVC) – permanently established connections used for frequent data transfers
      between DTE devices; they do not require sessions to be established and terminated since they are
      always active. A PVC is similar to a leased line.

The X.25 protocol offers simultaneous service to many hosts, ie, multiplexed connection service. A Cisco router’s traditional method of encapsulation assigns one protocol to each virtual circuit; in Cisco IOS 11.1 and newer releases, a single virtual circuit can carry traffic from multiple protocols; a maximum of nine protocols can be mapped to a host. The newer standard, RFC 1356, standardizes a
method of encapsulating most datagram protocols over X.25.

DTE devices specify the virtual circuit to be used in the packet header and send the packets to a locally connected DCE device. The DCE sends the packets to the closest PSE in the path of the virtual
circuit. The PSE passes the packets from switch to switch until the remote DCE is reached; the remote
DCE then sends the packets to the destination DTE.

The format of X.25 addresses is defined by the ITU-T X.121 standard; X.121 addresses are used by
PLP to establish SVC’s. In an X.121 address the first 4 digits are used to define the data network
terminal identifier code (DNIC); this field consists of a country code and a provider number assigned
by the ITU to the PSN in which the destination DTE device is located. The field is sometimes
eliminated in calls within the same PSN. The remaining 8 to 11 digits define the network terminal
number (NTN) assigned by the PSN provider. Only decimal digits are legal for X.121 addresses; a
router will accept addresses with as few as 1 digit or as many as 15. Some networks allow subscribers to use subaddresses, which consist of one or more additional digits after the base address.

Network layer addresses are mapped to X.121 addresses; network layer data is encapsulated inside
media-specific frames thru the use of LAPB. An X.25 frame is made up of a series of layer 3 and layer
2 fields; PLP fields make up an X.25 packet and include a header and user data.
The PLP header construction:

           GFI                  LCI                      PTI                  User Data

General Format Identifier (GFI) is a 4-bit field that identifies packet parameters, ie, whether it carries
user data or control information; it also identifies what kind of windowing is being used and whether
delivery confirmation is required.

The LCI is a 12-bit field that identifies the virtual circuit across the DTE/DCE interface.

The PTI field identifies the packet as one of 17 different PLP packet types.

User Data contains encapsulated upper-layer information, and is only present in data packets;
otherwise additional fields containing control information are used.

PLP operates in 5 distinct modes when managing exchanges between DTE devices:

   1. Call setup                                               4. Call clearing
   2. Data transfer                                            5. Restarting
   3. Idle

SVC’s use all 5 modes; PVC’s are in constant data transfer mode, since these connections have been
permanently established. Call setup mode is used to establish SVC’s between DTE’s; PLP uses the
X.121 addressing scheme to setup a SVC. Once a call is established the PSN uses the LCI field to
specify a SVC to the remote DTE. Call setup mode is executed on a per-virtual-circuit basis; that is,
one virtual circuit can be in call setup mode while another virtual circuit is in data transfer mode. Data
transfer mode is used to transfer data between two DTE devices across a virtual circuit. In this mode PLP breaks up and reassembles user messages if they are too big for the maximum packet size of the
circuit. Each packet is given a sequence number, so error and flow control can occur across the
DTE/DCE interface. Data transfer mode is executed on a per-virtual-circuit basis. Idle mode is entered when an SVC is established but data transfer is not currently occurring, and is executed on a per-virtual-circuit basis. Call clearing mode is used to end communications between DTE devices and to terminate SVC’s, and is executed on a per-virtual-circuit basis. Restarting mode is used to
synchronize transmission between a DTE device and a local DCE device, and is not executed on a per-
virtual-circuit basis. It affects all the DTE device’s established virtual circuits.

Layer 2 X.25 is implemented by LAPB, which manages communication and packet framing between
DTE and DCE devices; it checks that frames arriving at the receiver are in the correct sequence and are
error free. LAPB frames include a header, encapsulated data, and a trailer:

    Flag          Address          Control          Data             FCS             Flag

The flag fields delimit the beginning and end of the frame; bit stuffing is used to ensure that the flag
pattern does not occur within the body of the frame. The address field indicates whether a frame
carries a command or a response. The control field qualifies command and response frames, and also
indicates whether a frame is an I-frame, an S-frame, or a U-frame. It also indicates the frame function,
ie, receiver ready or disconnect, and the frame sequence number. The data field contains upper-layer
data in the form of an encapsulated PLP packet; the maximum length of this field is set by the PSN
administrator and the subscriber at subscription time. The frame check sequence (FCS) ensures the
integrity of the transmitted data. There are 3 types of LAPB frames:

Information frame (I-frame)
Supervisory frame (S-frame)
Unnumbered frame (U-Frame)

The information frame carries upper-layer information and some control information, and its functions include: sequencing, flow control, and error detection and recovery. I-frames carry send and receive sequence numbers; the send sequence number refers to the number of the current frame. The receive sequence number refers to the number of the frame to be received next. S-frames contain control information
and their functions include: requesting and suspending transmissions, reporting on status, and
acknowledging the receipt of I-frames. S-frames carry only receive sequence numbers. U-frames are
not sequenced, and are used for control purposes. Their functions include: link setup, disconnection,
and error reporting.

Configuring X.25

Certain configuration parameters are essential for correct X.25 behavior: X.25 encapsulation style and
assigning X.121 addresses. You should also define mapping statements to associate X.121 addresses with upper-layer protocol addresses. To set the X.25 encapsulation type use the following command: encapsulation x25 [dte|dce]; the default type is DTE. A router can use either type: it is configured as a DTE when the PDN is used to transport more than one protocol and is configured as a DCE when the router must act as an X.25 switch. To set the X.25 address use the following command: x25 address x.121-address. If the router acts as an X.25 switch, then this task is optional. Defined protocols cannot dynamically determine LAN protocol-to-remote host mappings, so you must configure each host with which the router may exchange X.25 encapsulated traffic. To establish an X.25 mapping use the following command: x25 map protocol-keyword protocol-address x121-address [options]. The following are supported protocols:

      IP                                                     Bridge
      XNS                                                    CLNS
      DECnet                                                 Apollo
      IPX                                                    PAD
      AppleTalk                                              QLLC
      VINES                                                  Compressed TCP

The protocol address parameter defines the protocol address and is not used in CLNS and bridged
connections. You can specify how multiple protocols reach a specified destination using a single virtual circuit with the following command: x25 map protocol address [protocol2 address2 [...[protocol9 address9]]] x121-address [options]. This command is only used when trying to communicate with a host that understands multiple protocols over a single virtual circuit; this communication requires the multiple encapsulations defined by RFC 1356, with bridging not supported. With Cisco routers you can map an x121 address to a maximum of 9 protocol addresses.

Example of X.25 configuration commands:

interface serial0
encapsulation x25
x25 address 311082194567
ip address 129.99.181.81 mask
x25 map ip 129.99.181.82 310082191234

To establish a virtual circuit the two routers need complementary map configurations. Additional X.25
configuration parameters might be needed for appropriate service providers: virtual circuit range,
default packet sizes, default window sizes, and window modulus.

You identify a virtual circuit by specifying its logical channel identifier (LCI) or virtual circuit number
(VCN). Virtual circuit numbers are divided into 4 ranges listed in numerically decreasing order:

PVC’s
Incoming-only circuits
Two-way circuits
Outgoing-only circuits

The incoming-only, two-way, and outgoing circuit ranges define virtual circuit numbers over which an
SVC can be established. PVC’s cannot be established by the placement of an X.25 call; however,
SVC’s can. Only DCE’s can initiate calls using a circuit number in the incoming-only range; only DTE’s can initiate calls using a circuit number in the outgoing-only range; and both DCE and DTE devices can initiate calls using a circuit number in the two-way range. The operation of a virtual circuit is not affected by the range to which it belongs. These ranges can be used to prevent one side of the connection from monopolizing the virtual circuits, which is useful for X.25 interfaces that have a small number of SVC’s available to them. The complete range of virtual circuits, 4095, can be allocated to SVC’s or PVC’s. The circuit numbers must be assigned so that an incoming range comes before a two-way range, and both incoming and two-way ranges must come before an outgoing range. Any PVC must take a circuit number before the SVC’s; PVC circuit numbers must be less than SVC circuit numbers. There are 6 X.25 parameters that define the upper and lower limits of each of the 3 SVC ranges. The SVC ranges must not overlap one another. The network admin specifies the virtual circuit ranges for an X.25 attachment. An SVC range is not used if the lower and upper limits are set to 0. DCE and DTE devices must have identically configured ranges; range values must be the same on both ends of an X.25 link.
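
As an illustration, the six range-limit parameters correspond to the lic/hic, ltc/htc, and loc/hoc interface commands; the values below are hypothetical and must match the network subscription:

interface serial0
encapsulation x25
x25 lic 1 – lowest incoming-only circuit number
x25 hic 8 – highest incoming-only circuit number
x25 ltc 9 – lowest two-way circuit number
x25 htc 64 – highest two-way circuit number
x25 loc 65 – lowest outgoing-only circuit number
x25 hoc 72 – highest outgoing-only circuit number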

X.25 networks have a default maximum input and output packet size defined by the network admin;
the router must have these values assigned to match the network. To set the default input packet size
use the following command: x25 ips bytes. To set the default output packet size use the following
command: x25 ops bytes. These values should match unless the network supports asymmetric
transmissions. You set these values for circuits that do not negotiate sizes; supported values are 16, 32,
64, 128, 256, 512, 1024, 2048, 4096. The default packet size on most PDN’s in the world is 128 bytes;
the default packet size of 1024 is common for PDN’s located in the US and Europe. The layer 3
default packet size is subject to the lower layer limits.

Window size specifies the number of packets that can be received or sent without receiving an acknowledgement; both ends of the X.25 link must use the same default window size. To set the default
input window size use the following command: x25 win packets. To set default output window size
use the following command: x25 wout packets. The range of values is from 1 to M-1, where M is the
modulus; the default is 2 packets.
X.25 supports flow control with a sliding window sequence count; the window counter restarts at 0
when reaching the upper limit, which is called the modulus. To set an interface’s data packet
numbering count use the following command: x25 modulo modulus. Virtual circuit window sizes are limited to a maximum value equal to this modulo value minus 1. The modulus parameter is either 8 or 128; modulo 8 is widely used and allows virtual circuit window sizes of up to 7 packets. Modulo 128 is very rare and allows virtual circuit window sizes of up to 127 packets. Modulo 128 is also referred to
as extended packet sequence numbering, which allows larger packet windows. Both ends of the X.25
link must have the same modulo.

Example of X.25 configuration:

interface serial0
encapsulation x25
x25 address 311082198756
x25 ips 1024
x25 ops 1024
x25 win 7
x25 wout 7

The X.25 software implementation allows virtual circuits to be routed from one X.25 interface to another and from one router to another. A router’s behavior can be controlled with switching and X.25-over-TCP (XOT) configuration commands. Switching or forwarding of X.25 virtual circuits can be done in 2 ways:

Local X.25 switching
XOT switching

During local switching, incoming calls received from a local X.25 serial interface can be forwarded to
another local X.25 serial interface. Local X.25 switching indicates the router handles the complete
path. During XOT switching, sometimes called remote switching, an incoming call can be forwarded
to another Cisco router over a LAN using the TCP/IP protocols. Upon receipt of an incoming call, a
TCP connection is established to the router that is acting as a switch for the destination; all X.25 packets are sent and received over this reliable stream. Flow control is maintained end-to-end.

Running X.25 over TCP provides a number of benefits. The datagram containing the X.25 packet can
be switched by other routers using their high speed switching capabilities. X.25 connections can be
sent over networks running only TCP/IP, which can be run over many different network technologies,
including Ethernet, Token-ring, T1 serial, and FDDI. When the connection is made locally, the
switching configuration is used; when the connection is made across a LAN, XOT configuration is
used. The basic function is the same for both configurations, but different configuration commands are used for each type of configuration.

To enable X.25 routing use the following global command: x25 routing. To establish an X.25 route
use the following command: x25 route [# position] x.121-address [cud pattern] interface interface-
number. In the position parameter, the number that follows the #, is the line number in the routing
table where this entry will be placed; if no value is specified, then the entry will be added to the end of
the routing table. This parameter is optional. The x.121-address parameter can either be an actual
x.121 address or a regular expression, ie, 1111*, which represents a group of x.121 addresses. The cud
(call user data) pattern parameter is specified as a regular expression of ASCII text, and is an optional
parameter. The first few bytes of a packet, commonly 4 bytes long, identify a protocol; the specified pattern is applied to any user data after the protocol identification. There are 2 fields in the routing table used to determine a call’s route to a destination: the destination x.121 address and the X.25 packet’s cud field. When the destination address and cud of the incoming packet match the x.121 and cud patterns in the routing table, the call is forwarded accordingly.

Examples of X.25 configuration:

x25 routing
x25 route 1012 interface serial0
x25 route 100 cud ^pad$ interface serial2
x25 route .* ip 172.16.16.2
You use the bandwidth number command to specify the link bandwidth; this is used by IGRP to determine which lines are the best choices for traffic. The default is 1544 Kbps, but X.25 service is not generally available at this rate, so you will have to specify a bandwidth setting for most X.25 interfaces that are used with IGRP.

Examples of X.25 configuration:

interface serial1
encapsulation x25
bandwidth 10
x25 ips 1024
x25 ops 1024
x25 win 7
x25 wout 7
x25 address 311082103456
ip address 172.16.66.1 255.255.255.0
x25 htc 30 – specifies a highest two-way virtual circuit number of 30
x25 idle 5 – specifies the period of inactivity after which the router clears the SVC
x25 nvc 2 – specifies the maximum number of SVC’s open simultaneously to a host
x25 map ip 172.16.66.2 311082191234

To clear one or all virtual circuits at once use the following command: clear x25-vc. You can override default values specified for the idle and nvc parameters using the x25 facility command.

To monitor X.25 operation use the show interface serial command. The output of this command shows the encapsulation type and displays LAPB information, ie, the state (CONNECT), the modulo value, the window size k, the transmit counter N2 (the number of transmit attempts before a link is declared down), the maximum number of bits per frame N1, the retransmission timer T1 (which determines how long a transmitted frame can remain unacknowledged before the Cisco IOS software polls for an acknowledgement), the hardware outage period (disabled), the idle link period T4 (used to detect unsignaled link failures), the VS parameter (indicates the modulo frame number of the next outgoing I-frame), the VR parameter (indicates the modulo 8 frame number of the next I-frame expected to be received), the Remote VR number (shows the number of the next I-frame the remote device expects to receive), the retransmission parameter (the count of current retransmissions due to expiration of T1), the QUEUES parameter (indicates frames queued for transmission), the IFRAMEs parameter (displays the count of I-frames sent and received), the RNRs parameter (receiver not ready), the REJs parameter (rejects), the FRMRs parameter (frame rejects that have been sent and received), the SABMs parameter (the count of set asynchronous balanced mode commands sent and received), and the DISCs parameter (the count of disconnect commands that have been sent and received).

The output of the show interface serial1 command also displays the x.121 address, the state of the interface (R1 – normal ready, R2 – DTE restarting, or R3 – DCE restarting), the modulo value, the interface timer field (this value is 0 unless the interface is in the R2 or R3 state), the layer 3 parameters that were set at configuration (idle 5, nvc 2), the flow control values set at configuration (input/output window sizes 7/7, packet sizes 1024/1024), the Timers section (displays the values of the X.25 timers; the packet acknowledgement threshold, TH, determines the number of seconds the Cisco IOS waits for acknowledgement of control packets; timer values T20 – T23 are for DTE’s, timer values T10 – T13 are for DCE’s), the Channels section (displays the virtual circuit ranges for the interface), the RESTARTs field (displays the restart packet statistics for the interface), the CALLs field (displays the number of calls that have been sent, received, and forwarded, and details of failed calls), the DIAGs field (displays the number of diagnostic messages sent and received), the output and input queue fields (display the packets in the queues, the maximum number of packets the queues can contain, and the number of packets dropped), the # packets input field (displays the number of error-free packets received, the byte count, and the number of packets discarded because there was no buffer space), the input errors and their types, and the # packets output field (displays the number of packets transmitted by the interface, the byte count, the number of interface resets, the carrier transitions, and the DCD (data carrier detect) signal).

X.25 Scalability

Throughput is the amount of traffic that is delivered to a destination. To improve throughput on an
X.25 network, you can adjust the window size, the packet size, and the number of virtual circuits. Flow control is set by x25 win/wout, x25 ips/ops, and x25 modulo. The x25 nvc command sets the default maximum number of SVC’s that can be open to any host or router; to increase throughput, you can establish up to 8 SVC’s per host or protocol; the default is 1.

The X.25 software allows support for X.25 user facilities, ie, accounting, user identification, and flow control negotiation, and these can be configured on a per-interface or per-map basis. Use the x25 facility command to configure facilities on a per-call basis for calls initiated by the router’s interface; use the x25 map command to configure facilities for a particular host or router. The x25 facility command can be overridden by the x25 map commands. The facility values must match the settings in the switch; to configure an X.25 facility use the following command: x25 facility [options], ie, closed user group (CUG number with values from 1-99). CUGs are used to create a virtual private network within a larger network and to restrict access.

The packetsize in-size out-size parameter sets the maximum input packet size and output packet size
for flow control parameter negotiation; both values must be 16, 32, 64, 128, 256, 512, 1024, 2048,
4096. The windowsize in-size out-size parameter sets the packet count for input and output windows for flow
control parameter negotiation; both sizes must be between 1 and 127 and cannot be greater than or
equal to the value set by the x25 modulo command. The windowsize and packetsize are supported by
PVC’s; not all networks will allow a PVC to be defined with arbitrary flow control values.

The reverse keyword sets reverse charging to be in effect on all calls originated by the interface; the
syntax is x25 facility reverse. The accept-reverse keyword allows reverse charging acceptance; the
syntax is x25 accept-reverse. The throughput in out parameter sets the throughput class negotiation
values for input and output throughput across the network; the syntax is x25 facility throughput in out.
Values for in and out are expressed in bps and can range from 75 to 64000 bps. The transit-delay
parameter sets a network transit delay for the duration of outgoing calls for networks that support
transit delay; the syntax is x25 facility transit-delay value. The value can be between 0 and 65534
milliseconds. The recognized private operation agency (rpoa) parameter sets the name of packet
network carriers to use in outgoing call request packets; the syntax is x25 facility rpoa name.
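
A sketch of per-interface facility settings (the values are hypothetical and must match the network subscription):

interface serial0
encapsulation x25
x25 facility cug 10 – closed user group
x25 facility packetsize 512 512
x25 facility windowsize 7 7
x25 facility throughput 9600 9600
x25 facility reverse – request reverse charging on originated calls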

You use the x25 map command to define optional parameters for call setup to a particular host or
router; the syntax is x25 map protocol address x121-address [option]. The nuid parameter sets the
Cisco standard network user identification; the syntax is x25 map protocol address x121-address nuid
username password. This parameter should only be used when connecting to another Cisco router, and the combined length of the username and password should not exceed 127 characters. The nudata
parameter sets a user-defined network user identification; the format is set by the network admin. The
syntax is x25 map protocol address x121-address nudata string. This parameter is provided for
connecting to non-Cisco equipment that requires a network user identification facility. This string
should not exceed 130 characters and must be enclosed in quotes if the string contains any spaces. The
passive keyword sets options to be used if the arriving packet has that option active; the syntax is x25
map protocol address x121-address passive. This option is available only for compressed TCP maps,
ie, you specify that the X.25 interface should send compressed outgoing TCP datagrams only if they
were already compressed when they were received.
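
For example, per-destination options can be attached to map statements as follows (the addresses, credentials, and X.121 numbers are hypothetical):

x25 map ip 172.16.81.2 311082191234 nuid user1 secret1
x25 map compressedtcp 172.16.81.3 311082195678 passive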

A regular expression is a pattern of characters matched against an input string; you can use wildcard
characters:

dot (.) – matches any single character in the string
asterisk (*) – matches zero or more occurrences of the preceding character, so .* matches any sequence of characters
caret (^) – anchors a pattern to the beginning of a string
dollar sign ($) – anchors a pattern to the end of a string

Matching a string to a specified pattern is called pattern matching, which either succeeds or fails. Regular expressions can specify routes in an X.25 routing table, ie, incoming packets with destination addresses beginning with 3107 are routed to serial0. You can use parentheses around a pattern to remember it for use elsewhere in a regular expression; a backslash followed by a 1 (\1) reuses the remembered pattern.

Example of regular expression configuration:

x25 route ^1111(.*) substitute-dest 2222\1 interface serial1

This expression tells the IOS that any call with 1111 as the first four digits of its destination address is to have 2222 substituted for the 1111 and be routed to serial1.

Most datagram routing protocols, ie, OSPF, rely on broadcast or multicast to send routing information
to neighbors. You can configure an NBMA network, ie, X.25, to carry broadcast or multicast traffic to a destination. Use the x25 map command with the broadcast option to run OSPF over X.25; the syntax
is x25 map protocol-address x.121-address broadcast. OSPF operation over X.25 requires the selection
of a designated router, which generates link-state advertisements (LSA’s) and has other special
responsibilities in running OSPF. Each multi-access OSPF network has at least two attached routers
and a designated router that is elected by the hello protocol; the designated router reduces the number
of adjacencies required on a multi-access network. An adjacency is a relationship formed between
selected neighboring routers and end nodes for the purpose of exchanging routing information, and is
based on a common media segment. This reduces the amount of routing protocol traffic and the size of
the topological database. PVC’s are required in the X.25 network for proper OSPF operation. Use the
IP OSPF command with the broadcast option to tell OSPF to treat the X.25 network attached to this
interface as a broadcast medium; the syntax is ip ospf network {broadcast|non-broadcast|point-to-
multipoint}. This command will be ignored by an interface that does not allow it. This command eliminates the need to configure OSPF neighbors. The broadcast option can dramatically increase the traffic between two hosts.
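
A sketch of the commands involved, assuming hypothetical addresses for the neighbor:

interface serial0
encapsulation x25
x25 address 311082194567
x25 map ip 172.16.40.2 311082191234 broadcast
ip address 172.16.40.1 255.255.255.0
ip ospf network broadcast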

Subinterfaces can be defined on an X.25 interface to forward routing information between neighbors
when the network topology is not fully connected. These subinterfaces are used on Cisco routers,
because routing protocols may need help to determine which hosts need routing table updates; this is especially necessary for routing protocols using split horizon. You can separate hosts onto subinterfaces on a physical interface, which leaves the X.25 protocol unaffected, causes the routing processes to recognize each subinterface as a separate source of routing updates, and makes all subinterfaces eligible to receive routing updates.
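
For example, two neighbors could be split onto separate point-to-point subinterfaces (the subinterface numbers, X.121 addresses, and IP addresses are hypothetical):

interface serial0
encapsulation x25
x25 address 311082194567
interface serial0.1 point-to-point
ip address 172.16.51.1 255.255.255.252
x25 map ip 172.16.51.2 311082191111 broadcast
interface serial0.2 point-to-point
ip address 172.16.52.1 255.255.255.252
x25 map ip 172.16.52.2 311082192222 broadcast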

Frame-relay Configuration
Frame-relay is a WAN packet-switching, layer 2 protocol that operates at the data-link and physical
layers of the OSI model. Frame-relay is more efficient than X.25 and is regarded as its replacement.
Frame-relay provides a packet-switching communication interface between user devices and network
devices across a WAN. Packet-switches are also referred to as data circuit-terminating equipment
(DCE); DCE’s are carrier owned internetworking devices which are normal switches, but can also
include routers. User devices, ie, terminals, PC’s, routers, and bridges are referred to as data terminal
equipment (DTE). DTE’s initiate communications, and DCE’s respond. Frame-relay uses physical
layer facilities, ie, fiber media and digital transmission links to provide high-speed WAN transmission
for end stations. At the data-link layer frame-relay encapsulates information from the upper layers of
OSI model.

Like X.25, frame-relay uses FIFO queuing on a statistically multiplexed circuit; multiplexing allows
several logical or virtual circuits to share the same physical link. Frame-relay speeds range between 56
Kbps and 2 Mbps, and DS-3 (45 Mbps) is available from some providers. Frame-relay is a streamlined
service that is a best effort, unreliable link. The improved fiber and digital facilities dispenses with the
need for error correction algorithms, acknowledgement schemes, and flow control. Frame-relay can operate over permanent virtual circuits (PVC’s), which are used when there are frequent and consistent data transfers between DTE’s across the frame-relay network. PVC’s save the bandwidth associated with establishing and terminating virtual circuits, but increase the cost of the link. A data-link
connection identifier (DLCI) is a number that identifies the logical circuit between the router and the
frame-relay switch; the switch maps the DLCI’s between each pair of routers. The DLCI is the
principal addressing mechanism used by routers that support frame-relay. In the local management
interface (LMI) enhancement to the basic frame-relay specification, DLCI’s are globally significant;
they uniquely identify individual end devices. Some DLCI’s are reserved, ie, 1019-1022 are used to
designate multicast groups and 1023 is for LMI.

The frame-relay frame structure:

           Flag        Address          Control          Data            FCS              Flag
The flag fields mark the beginning and end of the frame; the address and control fields contain the
address information; the data field contains encapsulated upper layer data; and the frame check
sequence (FCS) field is used for error control. The DLCI value is at the heart of the frame-relay
header; it makes up the first six bits in the address field and the first four bits in the control field. In
the basic mode of addressing, DLCI’s have local significance, which means that two ends of a
connection may use a different DLCI. A frame-relay network uses internal proprietary mechanisms to
keep locally significant PVC identifiers (DLCI’s) distinct.

Three bits of the 2-byte DLCI provide congestion control: the forward explicit congestion notification
(FECN) bit, the backward explicit congestion notification (BECN) bit, and the discard eligibility (DE)
bit. The FECN bit is set by the frame-relay network, and tells the DTE that receives the frame that
congestion was experienced in the source and destination path. The BECN is set by the frame-relay
network in frames that travel the opposite direction from frames that encounter a congested path.
DTE’s that receive frames with the FECN or BECN bit set can request higher level protocols to take
appropriate flow control action. The DE bit is simple priority bit that is usually only set when the
network is congested; it is set by the DTE and is used to inform the frame-relay network that a frame
has a lower priority than other frames and should be discarded before other frames. The eighth bit of
each byte of the address field is used to indicate the extended address (EA) bit, which indicates the
length of the address field. The EA bits allow extension of address lengths beyond the usual 2 bytes.
The C/R bit follows the most significant DLCI byte in the address field and is not currently defined.

In 1990 Cisco, StrataCom, Northern Telecom, and DEC formed a consortium to standardize frame-
relay development and facilitate interoperable frame-relay products. This consortium developed a
specification that conformed to the basic frame-relay protocol that was proposed by American National
Standards Institute (ANSI) and International Telecommunications Union Telecommunications
Standardization Sector ITU-T (formerly CCITT) with some additional capabilities for complex
internetworking environments.

Frame-relay extensions are referred to collectively as the local management interface (LMI). LMI
extensions promoted by the consortium include: virtual circuit status messages, multicasting, global
addressing, and flow control. Some LMI extensions are common, ie, virtual circuit status messages,
and are expected to be implemented by everyone who adopts the specification, while other extensions
are optional.

Virtual circuit status messages provide communication and synchronization between the network and
the user device. They periodically report the existence of new PVC’s, the deletion of existing PVC’s,
and provide information about PVC integrity; they prevent data from being sent over PVC’s that no
longer exist, “black holes.”

Multicasting is an optional LMI extension that is designated by DLCI values of 1019-1022; frames sent by a device using one of these values are replicated by the network and sent to members of the
designated group. This extension also defines LMI messages sent to a user device, which are used to
notify user devices of the addition, deletion, and presence of multicast groups. This is useful in the
efficient sending of routing information to a group of routers in networks that use dynamic routing.

Global addressing is an optional LMI extension, and gives connection identifiers (DLCI’s) global
rather than local significance, which allows them to identify a specific interface to the frame-relay
network. This extension is used to enable routers with different DLCI numbers to communicate by
placing a remote router’s DLCI value in the DLCI field; the DLCI field is changed by the frame-relay
network back to the sending router’s DLCI value to reflect the source node of the frame. Global
addressing provides significant benefits in large, complex internetworks, and causes a frame-relay
network to resemble a LAN in terms of addressing. ARP performs over frame-relay exactly as on a
LAN; ARP is a method for mapping layer 3 addresses to layer 2 addresses.

Simple flow control is an optional LMI extension and provides XON/XOFF flow control to the entire
frame-relay interface. It is intended for devices whose upper layers can not use the congestion
notification bits.

The frame format for an LMI frame uses the basic frame-relay protocol format with additional LMI
features.

   F       DLCI        UII        PD        CR        MT         IE        FCS        F

The LMI DLCI field identifies the frame as an LMI frame rather than a basic frame-relay frame; the
LMI-specific DLCI value is 1023. The Unnumbered Information Identification field has the poll/final
bit set to 0; the poll/final bit is a bit in bit-synchronous data-link layer protocols that indicates the
function of the frame. The Protocol Discriminator field is always set to a value that indicates the frame
is an LMI extension frame. The Call Reference field is always set to 0’s. The Message Type field
labels the frame as a status-inquiry or status message. Status-inquiry messages allow a device to
inquire about the status of the network; status messages are used to respond to status-inquiry messages
and include keepalives and PVC status messages. The Information Elements field consists of a variable number of individual information elements (IE’s). Each IE consists of a single-byte IE identifier, an IE length field, and one or more bytes containing data. The Frame Check Sequence field is for error
control and to ensure integrity of data.

The Committed Information Rate (CIR) is the rate at which the frame-relay switch will transfer data and is measured in bits per second (bps). It is typically averaged over a certain time period
called the committed rate measurement interval (Tc). Oversubscription occurs when the total sum of
the CIR’s on all the VC’s entering the device is greater than the speed of the access line. This often
occurs if an access line is able to support the sum of all purchased CIR’s, but can not support the sum
of the CIR’s and the capacities of the VC’s, which causes packets to be dropped. Committed burst
(Bc) refers to the highest number of bits that a switch is willing to transfer during any Tc. The length
of time that a switch can manage a sustained rate of transfers depends on the Bc-to-CIR ratio; the
higher the ratio the longer a switch can sustain the transfer. An excess burst is the highest number of
bits that a frame-relay switch will try to transfer above the CIR, and is dependent on the speed of the
local access loop.
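
As a simple illustration with hypothetical figures: with a CIR of 64 Kbps and a committed burst Bc of 64,000 bits, the measurement interval is Tc = Bc / CIR = 1 second; raising Bc to 128,000 bits at the same CIR gives Tc = 2 seconds, so the switch will tolerate bursts above the average rate for a longer period.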

If a frame-relay switch notices congestion, it will send an FECN packet to the destination device and a
BECN to the source router, which tells the router to decrease the rate at which it sends packets. If the
switch recognizes network congestion, it will drop any packets with a DE bit first; a DE bit is set on
any traffic received after the CIR was reached.

Routers connect to a frame-relay switch directly or thru a CSU/DSU. Once the CPE router is enable it
sends a status-inquiry message to the frame-relay switch, which indicates the status of the router and
causes the switch to check the status of all remote routers. The frame-relay switch responds with a
status message that indicates the DLCI’s of the remote routers that can receive data from the local
routers. For every DLCI that is active, each router sends an inverse ARP request packet, which introduces the router and requests each remote router to identify itself by means of a network-layer address. The router creates map entries in its frame-relay map for every DLCI about which it receives an inverse ARP message. Map entries include the local DLCI, the network-layer address of the remote router, and the connection state, of which there are three: active, inactive, and deleted. If inverse ARP is not working properly or is not supported by the remote router, you should configure the remote routers' DLCI's and IP addresses statically. Inverse ARP messages are
exchanged between routers every 60 seconds; the CPE router sends a keepalive to the frame-relay
switch; and the router modifies the status of the different DLCI’s depending on the frame-relay
switch’s response.


Frame-relay Configuration

Frame-relay encapsulation is set on a serial interface by the following command: encapsulation frame-relay [cisco|ietf]. This encapsulation is a four-byte header with 2 bytes as the DLCI identifier and 2 bytes as the packet identifier. The default is Cisco's, developed by the "gang of four", and it is not compatible with all vendors' equipment. The optional ietf (Internet Engineering Task Force) encapsulation is the other type of frame-relay encapsulation, defined by RFC 1294 and 1490, and is used when connecting to non-Cisco routing equipment. IETF encapsulation is supported at either the interface level or on a per-DLCI basis.
Frame-relay encapsulation can be specified globally or on a circuit-by-circuit basis.

Cisco frame-relay software supports the industry standard for addressing the LMI. If a router is
attached to a PDN, then the LMI must match the type used on the PDN; otherwise, you can select an
LMI to suit the needs of your private network. To set the LMI type use the following command:
frame-relay lmi-type {ansi|cisco|q933a}. With IOS 11.2 the LMI is auto-detected. Cisco is the default
LMI type. The router must be configured with the appropriate signaling to match the frame-relay
carrier implementation. There are three types of LMI signaling:

    1. ANSI
    2. ITU-T (q933a)
    3. Cisco
Use the bandwidth command to configure the bandwidth of the link, which is used by routing
protocols, ie, IGRP and EIGRP, as a metric. The keepalive interval must be set to enable the LMI; keepalive messages are sent by one network device to inform another network device that the virtual circuit is still intact. The default keepalive interval is ten seconds and must be less than the corresponding interval on the switch. To set this interval use the following command: frame-relay
keepalive number. You can also set a number of optional counters, intervals, and thresholds to fine-tune the operation of LMI DTE and DCE devices.
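As a minimal sketch, assuming a serial interface attached to an ANSI-signaled frame-relay switch (the interface name and values are illustrative), the LMI type, bandwidth, and keepalive interval described above might be set as follows:

interface serial0
 encapsulation frame-relay
 frame-relay lmi-type ansi
 bandwidth 64
 frame-relay keepalive 10

The bandwidth value is expressed in kilobits and is used as a routing metric by IGRP and EIGRP; the keepalive of 10 seconds is the default and must remain lower than the interval configured on the switch.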

A static map can be used to link a specified next-hop protocol address to a specified DLCI, which
removes the need for inverse ARP requests. Once a static map is specified the inverse ARP is
automatically disabled for the specified protocol on the specified DLCI. Static mapping becomes
necessary when the router at the other end does not support inverse ARP or the specified protocol does
not support inverse ARP. To configure a static map use the following command: frame-relay map protocol protocol-address dlci [broadcast] [ietf|cisco].

Example of configuration commands for an NBMA network:

interface serial1
encapsulation frame-relay
frame-relay lmi-type cisco
ip address 129.99.123.81 255.255.255.0
frame-relay map ip 129.99.123.82 101 broadcast
frame-relay map ip 129.99.123.83 102 broadcast

The optional broadcast parameter is used to forward broadcast traffic to the next-hop address.

To monitor operation use the following command: show interface serial number, which displays status
and counter information. The output of this command includes the status, encapsulation, keepalive interval, the
number of LMI status enquiries sent and received, the LMI status messages received and sent, the
number of status updates received and sent, the LMI-type, the broadcast queue (64 packets default),
and the details of the packets sent and received. The last input value shows the amount of time since a
packet was successfully received by the interface; the last output value shows the amount of time
since a packet was successfully transmitted by the interface. Input queue: 0/75/0 (size/max/drops);
output queue: 0/64/0 (size/max/drops). The conversations field shows the currently active
conversations and the maximum number of conversations. CRC errors usually indicate transmission
problems with the data link. Interface resets are shown, which can happen if a packet queued for transmission is not sent within several seconds; this is typically caused by a malfunctioning modem that is not supplying the transmit clock signal or by a cabling problem. Carrier transitions are displayed, which
indicates the carrier detect signal has changed; this can also be caused by modem or line problems.

Use the show frame-relay map command to display the current map entries and information about the
connections. The serial interface is shown as either active or up. The DLCI is displayed in decimal, in hexadecimal, and as the hexadecimal value that appears on the wire.

Frame-relay Options

You can use frame-relay to connect remote sites in a number of ways by the following frame-relay
network topologies: star, full-mesh, and partial-mesh. The star topology is also called the hub and
spoke configuration in which all sites are connected to a central site; this central site usually contains
an application or service. The star topology is the cheapest, since it does not require as many PVC’s.
The star topology is facilitated by placing a multipoint router at the central site, and can use one
interface to interconnect a number of PVC’s. In a full-mesh topology, routers have virtual circuits to
all destinations, which enables fault tolerance.

The nonbroadcast multiaccess nature of frame-relay may bring about a reachability issue with star,
full-mesh, and partial-mesh topologies, which can occur when one interface is used to connect multiple
sites. Split horizon problems can occur when frame-relay operates multiple PVC's over one interface. Since
frame-relay is a WAN switched service, routers do not share a common medium, which makes frame-
relay a nonbroadcast environment by definition. A broadcast environment can be reached by
transmitting data on each individual circuit; this simulated broadcast requires significant buffering and
CPU utilization, which can cause packet loss due to contention for the circuits.

One model for implementing frame-relay in an internetwork is called non-broadcast multiaccess
(NBMA) that makes all routers connected by virtual circuits peers on the same IP network. Because
frame-relay does not support broadcast, the router must copy all broadcasts and transmit on each
virtual circuit. Full connectivity can be achieved in a partial-mesh configuration for routing protocols that allow split horizon to be turned off. Some protocols, ie, AppleTalk RTMP, do not support this, so connectivity is restricted to routers that are directly connected by virtual circuits.

In an NBMA environment routers trying to forward updates can encounter problems with the operation of split horizon on serial interfaces attached to WAN services. With split horizon, a router does not propagate route information learned on an interface back out that same interface; this applies to routing protocols such as RIP, IGRP, and EIGRP. Split horizon also applies to service advertisements, ie, IPX SAP or GNS, and AppleTalk ZIP updates. To
accommodate connectivity in this environment, you can operate frame-relay in a full-mesh
configuration; configuration at each router must include mapping statements for each DLCI it uses.

Frame-relay networks achieve connectivity by the use of multiple virtual interfaces on one physical
interface, which are called sub-interfaces. Each sub-interface uses a DLCI that represents the
destination for a PVC on your network. Frame-relay uses a partial-mesh design when you define
multiple sub-interfaces on a single physical interface. To configure multiple sub-interfaces, you must
have one physical interface set up with frame-relay encapsulation. To configure a sub-interface use the
following command: interface type number.sub-interface [multipoint|point-to-point]. For multipoint
interfaces the destinations can be dynamically resolved with inverse ARP or statically mapped. For point-to-point sub-interfaces you use the frame-relay interface-dlci command. For changes from multipoint to point-to-point or vice-versa to take effect, you must reboot the router or create another sub-interface. The full command syntax is frame-relay interface-dlci dlci [broadcast|ietf|cisco]. You use a different subnet for each
sub-interface created. When you use frame-relay with sub-interfaces, only two routers on a PVC act as
subnet peers. The dlci on the sub-interface can represent one or more destination protocol addresses.
Do not put a network address on a physical interface if you want the sub-interfaces to receive frames.

Example:

interface serial0
encapsulation frame-relay
frame-relay lmi-type cisco
interface serial0.1 point-to-point
ip address 129.99.181.81 255.255.255.0
frame-relay interface-dlci 152 broadcast

You must exit the sub-interface configuration mode by typing exit to return to interface configuration
mode to configure another sub-interface.
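For comparison, a multipoint sub-interface could be sketched as follows; the addresses and DLCI values are illustrative only, and the destinations could also be learned dynamically thru inverse ARP instead of the static maps shown:

interface serial0
 encapsulation frame-relay
interface serial0.2 multipoint
 ip address 129.99.182.81 255.255.255.0
 frame-relay map ip 129.99.182.82 201 broadcast
 frame-relay map ip 129.99.182.83 202 broadcast

Because all routers on this multipoint sub-interface share the same subnet, split horizon considerations still apply, unlike the point-to-point case above.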

Cisco LAN Switching Fundamentals

Ethernet is a shared medium technology that enables all connected network devices to listen for
transmissions; each node must contend on an equal basis for the right to transmit data. Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a link-access protocol used by Ethernet, and
allows only one station at a time to transmit. Each node on the network monitors the line and transmits
when it senses that the line is not busy; this is known as best service delivery.

To measure network performance you can determine forwarding rate or wire speed, average throughput, and packet loss. The forwarding rate is the number of packets per second a switch can transmit from one port to another, ie, standard ethernet transmits 64-byte packets at a maximum rate of 14,880 packets per second. Average throughput is the maximum transmission rate available to a user on the network, ie, the average utilization of standard ethernet is approximately 40% of its wire
speed. A switched network can increase average throughput so that it is closer to wire speed. Packet
loss is the number of packets transmitted at wire speed less the number of packets received by a
destination node.

You can increase network bandwidth by segmenting the network into smaller collision domains by
using bridges and routers or microsegment a network using a LAN switch. Network bandwidth can
also be increased by using full-duplex, fast Ethernet, FDDI (developed by the American National
Standards Institute X3T9.5), or switched Gigabit ethernet devices. Cisco has developed a number of
technologies to increase network performance including Catalyst switches, fast ethernet switches, fast
ethernet repeaters, and high-speed routers. Catalyst switches (1900, 2820 switches) allow you to
segment networks using frame switching and to create virtual LANs to create logical networks.

Propagation delay (latency) is another common network problem; this is the total delay in a frame
reaching its final destination after being transmitted from its source across a medium. Propagation
delay is a system-wide phenomenon. Latency increases as the distance a frame travels along a network
increases and as the number of devices it passes thru increases. For a network to be viable the time it
takes to transmit a frame must be at least twice the propagation delay, which is necessary to guarantee
that collision detection can occur before a source node has finished transmitting the frame. Time-
sensitive applications, ie, voice and interactive imaging are severely affected by latency.

Ethernet networks have their own inherent latency. Each ethernet bit has a bit window of 100
nanoseconds. Because ethernet is a shared-media technology, it has the following limitations: it can
support only one network conversation at a time, distance

UTP is prone to signal attenuation (weakening of the signal caused by electrical resistance) and has a distance limitation of 100 meters. Repeaters are used to combat attenuation; they combine network segments
into a single network. An ethernet repeater allows nodes to share a single communication channel,
only one transmitter to transmit at a time, and the same data transmission rates on all ports. Although
repeaters overcome attenuation, they increase the amount of latency.

Traditionally, bridges and routers have been used to segment networks. Although segmentation
decreases the amount of devices contending for bandwidth, it does not increase bandwidth in ethernet
or Token-ring networks. Segmentation is achieved by the use of bridges, routers, and switches.

A bridge is a data-link layer device that connects two network segments, and is transparent to
workstations. Unlike a repeater it forwards packets only when necessary; a bridge will drop any
packets from one segment that are not destined for the other segment. Latency is increased by the use
of bridges. Bridges are protocol independent and are easy to install, because they maintain and update
their own address table, ie, they use frame source addresses to learn what devices belong to specific segments. Bridges use store-and-forward technology; a frame is copied to and stored on a bridge long enough for it to examine the frame's destination address against its address table. If the frame's destination address belongs to a segment that is different from the one it originated from, then it is forwarded to the other segment. If a frame has a destination address that is not contained in the bridge's address table, then the bridge forwards the frame to all segments except the originating one. All multicast and broadcast packets received by a bridge are sent to all connected segments; the source addresses of these packet types are not recorded in the address table. This can result in increased network traffic and counteract the bandwidth gain from segmentation. To combat this, you can
introduce filters to restrict the broadcast and multicast packets, but this can decrease throughput
because of the processing overhead.
With routers segmentation occurs at a higher level than with bridges; they operate at the network layer.
Routers are used to extend networks across multiple data links and find source and destination routes
on an internetwork. Routers are similar to bridges in that they determine where to forward frames
based on an address table; a similar protocol must be used by both the workstation and the router.
Unlike a repeater, a router is directly identified by a workstation using its services. Routers offer
greater network manageability, broader network functionality, and multiple active paths. Network
routing behavior is more apparent thru the operation of explicit protocols, which allows the network
admin greater control over path selection. Routers have the following functional mechanisms: flow control, error and congestion control, fragmentation, reassembly, and explicit packet lifetime control.
At the network layer, routers examine the source of a route, the source service access point (SSAP) and
the destination of a route, the destination service access point (DSAP). With the use of multiple paths,
a router can examine protocol and path metric information before making forwarding or filtering
decisions. Since routers are more software intensive, they have lower throughput than bridges. Unlike bridges, routers must examine and interpret the syntax and semantics of a wide range of fields in a packet. This extra processing leads to a 30-40 percent loss of throughput for acknowledgment-oriented protocols and a 20-30 percent loss of throughput for sliding-window protocols.

Switches address the problems of bandwidth shortages and network bottlenecks in LAN performance.
In switching there are three ways to forward packets instantaneously:

Thru a cross-point matrix
With a high-speed bus
Using shared memory arrangement

A packet entering a switch has its source and/or destination address examined to determine the
switching action to be taken for the packet. None of the other fields need to be examined until the
packet switches to its destination segment. LAN switches use data-link layer information to create a
point-to-point path across a switch. In ATM, a point-to-point connection can be a unidirectional or bi-
directional connection between two ATM end-systems. By using information from the MAC layer
when transmitting packets, LAN switches can remain protocol independent. The term switching is
applied to the following network concepts:

      Port configuration switching                           Cell switching (ATM)
      Frame (or packet) switching

Port configuration switching is a method of assigning a port to a physical network segment that is
under software control. Packet or frame switching increases the availability of network bandwidth,
while allowing multiple parallel transmissions. Cell switching is similar to frame switching and occurs
when fixed-length cells are switched on an ATM network. At the moment the frame-switching
technique is used by the Catalyst 1900 and 2820 LAN switches.

With full-duplex ethernet, simultaneous transmission and reception occurs on two pairs of cables,
which requires the connection to be switched between nodes. A collision-free environment is achieved
using point-to-point connections between ethernet stations. You must use a single port for each duplex
connection. This will yield double the bandwidth of bi-directional traffic on one port. Full-duplex
ethernet switches (FDES) provide a collision-free environment. FDES use only two circuits to control
the switching process:
The receive circuit (RX)
The transmit circuit (TX)

The transmit and receive circuits are wired directly to each other. The efficiency of standard ethernet
configuration is approximately 50-60 percent of the full bandwidth; a full-duplex ethernet
configuration has typical rates of 100 percent efficiency. The main difference between full-duplex and
half-duplex ethernet is in the way circuits transmit and receive frames. Since the TX circuit is directly
wired to the RX circuit in FDES, no collision detection is necessary. Full-duplex requires you to
disable loopback and collision detection, and software drivers must support two data paths. Full-
duplex requires that you adhere to ethernet distance constraints:

10BaseT to 100BaseT – 100 meters
10BaseFL – 2 kilometers
100BaseFX – 2 kilometers

Fast ethernet technology is based on the 802.3 standard that defines the data-link and physical layers.
It uses the ethernet network protocol CSMA/CD, which controls situations in which two or more nodes
transmit simultaneously. In an ethernet network there is a slot-time of 51.2 microseconds within which a station must transmit a packet before another station on the same segment can transmit its packet. The slot-time refers to the time taken to transmit 512 bits at a speed of 10 Mbps. A station must
transmit and listen for collision notification within its allotted slot-time. Fast ethernet uses the same
slot-time as standard Ethernet; the distance between end nodes must be reduced to accommodate the
increased speed. You can only use Class II repeaters in a fast ethernet network segment. The frame
format, which refers to the amount of data transmitted, and the MAC mechanisms used is the same for
standard ethernet and fast ethernet. The speed of the transfer media in fast Ethernet includee a
mechanism for autonegotiation. Fast ethernet can benefit performance, reliability, switches, and cables
and connectors. Fast ethernet is also a CSMA/CD technology; since it uses the same MAC, common
circuitry, switches, and dual-speed adapters as standard Ethernet, fast Ethernet can easily migrate from
10 Mbps to 100 Mbps networks. In the 100BaseT specification there are two types of repeaters:

Class I
Class II

The class of the repeater is based on its propagation delay value. Cisco's FastHub 300 series can be
used as a 100BaseT repeater. A Class II repeater has a delay value of 92 bit times; a Class I repeater
has a delay value of 140 bit times. A Class I repeater is also called a translational repeater, and
supports both 100BaseX and 100BaseT4 signaling. A Class II repeater is called a transparent repeater.
Due to differences in delay time, you can have only a single Class I repeater in a collision domain of a
100BaseT system. The shorter propagation time of Class II repeaters will support only one signaling
system, ie, 100BaseX or 100BaseT4; however, you can have two Class II repeaters in a collision
domain. A fast ethernet repeater is never used as a DTE device.

When using multimode fiber the distance limitation is 412 meters for half-duplex and 2 kilometers for
full-duplex. If you are using an 802.3u 100BaseT Class I or Class II repeater, the maximum distance
between end nodes is 200 or 205 meters. The distance for fiber media using a Class I repeater is 261
meters; for Class II repeaters the distance is 261/305 meters depending on how many repeaters are
used. With the FastHub 300 series of repeaters (sometimes called concentrators) you can support a
longer cable drop than with Class I and Class II repeaters; 200 meters with UTP and 318 meters with
fiber media.
At the MAC-layer level, connectivity to 100BaseT fast ethernet is specified by the media-independent
interface (MII). A generic fast ethernet interface will enable you to connect to 100BaseT4,
100BaseTx, and 100BaseFX. Although based upon 802.3u specifications, the 100BaseT4 cable
specification is unique because it calls for voice-grade, four-pair, twisted wires that support frames
over fast Ethernet; you can use Cat 3, Cat 4, or Cat 5 UTP to meet the specification. As with
100BaseT, 100BaseT4 uses a standard RJ45 connector with a similar pinout. An RJ45 has four sets of
pins; these pins transmit on pins 1 and 2 and receive on pins 3 and 6. They are bi-directional on pins 4,
5, 7, and 8. By using 100BaseTx specifications your network will be able to support NIC’s, routers,
and full-duplex connections for switches. You can use three types of cabling to support 100BaseTX:

Type-1 STP
Cat 5 UTP
Two-pair 100-ohm STP

As with 100BaseTX, 100BaseFX supports NIC’s, routers, and switches; it is composed of a two-strand
multimode fiber-optic cable of 50/125 or 62.5/125 microns in size. One strand is used to transmit and
the other is used to receive. You can use the following three connector types for 100BaseFX:

SC
A straight-tipped ST
A media-independent connector (MIC)

The term 100BaseX refers to both 100BaseTX and 100BaseFX specifications; it was approved to link
a 100 Mbps Ethernet (CSMA/CD) that uses a MAC layer with a Fiber Distributed Data Interface
(FDDI) that uses a Physical Medium Dependent (PMD) specification. FDDI is a 100 Mbps token-passing, dual-ring LAN that uses fiber-optic cable. Since FDDI and PMD share a sublayer, 100BaseTX
and 100BaseFX have the same signaling system.

LAN Switching

LAN switches are used to segment networks; they provide higher port density at a lower cost than
bridges. Switches can provide fewer users per segment, which increases the average available
bandwidth per user. This trend toward fewer users per segment is known as microsegmentation, or one
user per segment. With one user per segment, each user receives instant access to the full bandwidth,
and collisions do not occur. LAN switches are categorized in terms of the OSI layer at which they
operate:

Layer 2
Layer 2 with layer 3 features
Multilayer

A layer 2 LAN switch is operationally similar to a multiport bridge, but it has much higher capacity
and supports many advanced features, ie, full-duplex. MAC is a section of the Data-Link Layer; a
layer 2 switch performs switching and filtering based on the MAC address. The operations of a layer 2
switch are transparent to network protocols and user applications.
A layer 2 LAN switch with layer 3 features uses MAC addresses when making switching decisions,
but might also have layer 3 traffic control features like broadcast traffic management, multicast traffic
management, security thru access-lists, and IP fragmentation.

Multilayer LAN switches forward frames using a frame’s layer 3 address; multilayer switches make
switching and filtering decisions based on the layer 2 and layer 3 addresses. A multilayer switch
dynamically decides whether to switch layer 2 or route layer 3 incoming traffic. It switches in a
workgroup and routes between different workgroups. Routing and switching functions are separated to
preserve cost effectiveness, performance, and administrative simplicity. When routing functions are
separated from switching functions there are one or more route processors on the network, path
information is sent to multilayer switches by route processors, and packets are switched at very high
speeds.

A LAN switch is called a frame switch, because it forwards layer 2 frames; by contrast an ATM switch
forwards cells. Ethernet LAN switches are the most common, but Token-ring and FDDI switches are
becoming more prevalent. The first LAN switches were layer 2 devices for solving bandwidth issues;
these were developed by Kalpana in 1990, which is now part of Cisco Systems’ Workgroup Business
unit. Recent LAN switches are evolving into multilayer devices that are capable of handling protocol
issues involved in high-bandwidth applications; in the past, such problems were handled by routers.

LAN switches are similar to transparent bridges in functions such as learning topology, forwarding,
and filtering. LAN switches support new and unique features such as dedicated communication
between devices, multiple simultaneous conversations, full-duplex communication, and media rate
adaptation. Dedicated communication eliminates collisions, which increases file-transfer throughput. Support for multiple simultaneous conversations is achieved by switching several packets at once. Full-duplex communication doubles the throughput. Media rate adaptation means that a
LAN switch can translate between 10 and 100 Mbps. LAN switches can be used without changing
hubs, NIC’s, or cabling, and cost less than other devices for segmenting networks. So, using LAN
switches is the cheapest method of network segmentation. LAN switching does not affect network
addressing, enables the creation of VLAN’s, and can significantly improve performance by providing
switched high-speed connections to servers and backbones.

Multicasting occurs when a packet from one workstation is sent to all other interested workstations,
and has increased with the use of new applications, ie, audio-video conferencing, video servers, and
real-time financial data delivery services. Routers send the multicast down relevant links only, but the
switch forwarding engine sends multicasts to every switched port in a particular LAN, which results in
unnecessary use of bandwidth. Cisco Group Management Protocol (CGMP) prevents flooding in
Cisco devices such as Catalyst 1900 and 2820 switches. CGMP uses a Cisco router to download the
identity of multicast clients to the switch, which enables the switch forwarding engine to switch these
multicast packets at wire speed to interested ports only. The router exchanges and obtains multicast
information with other routers, by using the standard Internet Group Management Protocol (IGMP).
CGMP interoperates with IGMP version 1 and 2 to obtain this multicast information.
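As a hedged sketch of how this pairing is enabled (the switch is assumed to be a CatOS-based Catalyst, and the router interface name is illustrative), CGMP is turned on at the switch while the router runs multicast routing, PIM, and CGMP on its LAN interface:

On the switch: set cgmp enable

On the router:
ip multicast-routing
interface ethernet0
 ip pim dense-mode
 ip cgmp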

A broadcast storm occurs when the level of broadcast traffic is so high that there is almost no
bandwidth available for applications, new network connections cannot be established, and existing
connections may be dropped. Broadcast storms usually result from faulty end-stations, so the chances
of one occurring increase as the switched internetwork grows. Switches can be configured to limit the
number of broadcasts transmitted to combat this problem.
Some switches can filter multicast traffic by MAC address. Each multicast packet has a multicast
group number; to receive a multicast packet, a host has to belong to the multicast group indicated. The
switch keeps a table that links a multicast group address with a set of ports. If a multicast packet does
not correspond to any multicast group within the switch, then it is treated as a unicast packet and
flooded to all ports except the source port. You can disable flooding of unregistered multicast packets
on a per-port basis, which is done manually thru switch configuration. If there are two servers that provide the same service and belong to the same multicast group, then source port filtering can be configured to ensure that certain ports reach one server and other ports reach the other server to provide load balancing. Multicast address packet filtering allows multicast addresses to be manually
entered in a switch, which allows MAC-address based multicast protocols to be used in a network
without IGMP support in routers.

LAN switching operation is based largely upon transparent bridge or learning bridge (since it needs to
learn the addresses of all the stations it deals with) operation. A transparent bridge monitors the source
MAC address of all incoming frames, which allows it to learn which stations can be reached on each of
its ports or interfaces. It does this by maintaining a database of dynamically learned MAC addresses
and their associated interfaces, which are stored in content addressable memory (CAM). The bridge
updates its table regularly when a station sends a frame, and a time stamp is recorded with each
address entered. A bridge flushes entries of stations not heard from within a specified period of time.

LAN switches can be characterized by the forwarding method they support: store-and-forward
switching and cut-through switching. With store-and-forward switching, the switch copies the entire
frame into its buffers and then computes the CRC. The frame will be discarded if it contains a CRC
error, it is a runt (less than 64 bytes including the CRC), or it is a giant (more than 1518 bytes
including the CRC). If the frame has no errors, the switch looks up the destination address in its table
to determine the outgoing interface, and then forwards the frame towards its destination. With cut-
through switching, the switch copies only the destination address (the first 6 bytes following the
preamble) into its buffers, looks up the destination address in its table to determine the outgoing interface, and then forwards the frame to its destination. Latency is reduced in cut-through switching since the switch starts to forward the frame as soon as it reads the destination address and determines the
outgoing interface. Some switches can be configured to perform cut-through switching on a per-port
basis until a user-defined error threshold is reached, which will cause it to revert to the store-and-
forward mode. When the error rate falls below the threshold, the port automatically changes back to
the cut-through mode. Fragment-free switching is a modified form of cut-through switching that waits for the collision window (64 bytes) to pass before forwarding. If a packet has an error, it usually occurs in the first 64 bytes. Fragment-free switching provides better error checking than cut-through
switching, and does this with minimal increase in latency. Latency for packet forwarding thru the
switch depends on the choice of switching modes; switch throughput is not affected by the choice of
switching modes, it is always at wire speed.

LAN segmentation groups computers based on physical location; in VLAN’s, segmentation groups
computers based on other criteria, ie, job role or security. The benefit of VLAN's is that they allow you to create broadcast domains that are independent of physical location. VLAN's use a technology called frame tagging; traffic from a particular virtual topology carries a unique VLAN identifier (VLAN ID or frame tag) as it crosses a common backbone. Each VLAN is differentiated by a color; the VLAN ID determines the frame coloring for the VLAN. The switch that receives a frame from a source station inserts the VLAN ID and switches the frame onto the shared backbone. When the frame reaches the destination switch, the VLAN ID header is stripped and the frame is forwarded to the interface that matches the VLAN color. If you use a Cisco network management product such as VLAN Director,
you can color-code the VLAN’s and monitor the VLAN graphically.

Often a packet can go from source to destination by way of more than one path or redundant paths,
which can give rise to routing problems. This topology loop can lead to indeterminate forwarding
behavior, so redundant paths need to be logically removed from the network. The spanning tree
protocol is used by Ethernet bridges and switches to prevent these topological loops in a network. The
spanning tree protocol sets up a root node, and then constructs a topology so that there is exactly one
path for reaching any node. Network devices exchange messages to determine loops, and then remove
the loops by shutting down the interface. If an intermediate node fails, then the protocol re-opens the
shut-down interface to ensure that connectivity is maintained.
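On a CatOS-based Catalyst switch, spanning tree is normally enabled by default; as an illustrative sketch only (the VLAN number is an assumption), it can be enabled and verified per VLAN with:

set spantree enable 1
show spantree 1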

Some store-and-forward switches use multiple processors to achieve high-speed forwarding capability, which contrasts with older switches that used only a single processor. High-speed processors are placed on multiple cards, which enables simultaneous processing of multiple packets. Packet forwarding is achieved only after arbitrating for internal access to a multiplexed bus; since the bus has a fixed capacity, this arbitration can significantly increase latency between source and destination stations. Cisco switches employ a high-speed Time Division Multiplexed (TDM) bus that prioritizes access by speed of transfer.

Cisco Switches

A Cisco switch may have one or more features from each of the following categories:

      LAN switch                                             Management
      High-speed uplink technology                           Design
      IOS software                                           Port configuration

LAN switch features dictate the amount of bandwidth and technology a port uses. A switch may
support the following technologies:

      FDDI                                                   Token-ring
      CDDI (copper)                                          Switched 100VG
      ATM

The high-speed uplink technology features that a switch supports include:

      100BaseTX                                              FDDI
      100BaseFX                                              ATM
      CDDI                                                   Gigabit

A switch may have the following IOS features:

      Effective bandwidth utilization by load balancing and broadcast storm suppression
      VLAN’s
      Full-duplex 10BaseT and 100BaseTX
      Fast EtherChannel
Fast EtherChannel enables two to four full-duplex Fast Ethernet links to be grouped into one logical
group (trunking).

Port-specific IOS features include:

      CGMP                                                    Inter-switch link (ISL)

CGMP works with IGMP messages to dynamically configure ports in certain switches, and to
download to a Catalyst switch the identity of multicast clients. An ISL is a VLAN multiplexing
protocol that is used by trunks to support multiple VLAN’s; ISL allows a single link to carry traffic for
multiple VLAN’s between switches by becoming an ISL trunk, which enables efficient distribution of
VLAN traffic across a switched internetwork. Any Fast Ethernet port can be configured as an ISL
trunk.

Most switches are designed to be manageable, and these management features allow a switch to be maintained easily and facilitate network troubleshooting. Management of a switch can be made easier if it can be monitored easily and port activity can be analyzed. Cisco produces a range of applications that
make management of switches easier.

The most comprehensive of these is CiscoWorks for Switched Internetworks (CWSI), which enables you to configure, monitor, and manage a switched internetwork. CWSI is a suite of network management applications that includes CiscoView and TrafficDirector. CiscoView is used to display a graphical representation of a device and allows you to configure and monitor device chassis, port, and interface
information. TrafficDirector provides Remote Monitoring (RMON), protocol analysis, and
troubleshooting of protocol-related problems. Both of these applications can be installed separately.

You can easily monitor a switch if it supports Telnet, embedded RMON, or out-of-band console.
Telnet and embedded RMON allow remote monitoring of a switch, while out-of-band console allows
you to have a separate monitor to view management functions. The monitoring of switch port activity
is made easier if the switch supports SNMP, a SwitchProbe port, or RMON. SNMP is a network
management protocol that enables network management stations to access and modify switch
information. A SwitchProbe port allows you to monitor the ports on a switch; it is located on the
switch’s back panel and enables you to connect to probe devices, ie, protocol analyzers, RMON
probes, and other Ethernet-compliant devices. These probe devices enable you to decode packet
contents for troubleshooting or to analyze network characteristics. Embedded RMON services provide
statistics, history, events, and alarm notification for each switched port, which provides vital
information for capacity and growth planning and helps managers troubleshoot network faults.

Some of the other design features that a switch may have include: a backup power supply, a port
expansion slot, a management module slot, and a stackable port.

Port configuration features determine whether a port is fixed, partial, or fully modular. A fixed port
configuration does not allow any further ports to be added to a switch. A partially modular port
configuration contains a number of fixed ports and expansion slots that will allow you to expand the
number of ports or change the port type. A fully modular configuration allows complete flexibility
when configuring ports. Cisco provides a switch selection tool on its web page to aid in switch selection by feature set; it also provides a reseller switch selection tool that is not online.
Cisco’s Catalyst family is a comprehensive line of high-performance switches and spans the entire
application spectrum. It ranges from desktop to workgroup switches to multilayer switches for
scalable enterprise solutions.

The Catalyst 5000 switch series (models 5000, 5002, 5500, 5505, and 5509) supports high-volume
intranet activity in the wiring closet, data center, or backbone. This series features a Gigabit Ethernet
or ATM-ready platform that offers users high-speed trunking technologies, ie, Fast EtherChannel and
OC-12 ATM. This series also features a redundant architecture, dynamic VLAN’s, complete intranet
services, and media-rate performance with a broad variety of interface modules. This series provides
new functionalities that support multiprotocol NetFlow switching enabling the switch to provide:

      Scalable convergence of layer 2 and layer 3 switching
      Multiprotocol and multilayer switching
      Other IOS services such as EIGRP support

The range of media supported by the Catalyst 5000 series enables network managers to deliver high-performance backbone access, ie, it can accommodate web browser-based traffic across an intranet. This series supports a growing number of interface modules that enable delivery of dedicated bandwidth to users, which is achieved thru high-density group switches and switched LAN switch features. This
series has some features that are unique to its platform, ie, the ATM Switch Processor and ATM switch
interface modules and port adapters.

Cisco’s Catalyst 3900 switch series (models 3900 and 3920) comprises second-generation Token-ring
switches that provide media-rate performance. This series also supports a variety of high-speed
uplinks, extensive network management, and multiple VLAN’s; this series is suitable for backbone,
workgroup, and desktop use.

Cisco’s Catalyst 3000 switch series (models 3000, 3100, and 3200) stackable switch offers wire-speed
performance for both Ethernet and Token-ring environments. It provides switched fault-tolerance
capabilities, embedded VLAN and RMON support, and a unique stackable software/hardware
architecture. This series can accommodate WAN routing, and is a perfect solution for branch offices
and remote wiring closets that require scalability and fault-tolerance.

Cisco Catalyst 2900 switch series (models 2901, 2902, 2926T, and 2926F) provides 10/100
autosensing and autonegotiating interfaces that give high-speed configuration flexibility at the
enterprise level. This series is ideal for Ethernet workgroups and individual users. This series provides dual high-speed 100BaseTX and 100BaseFX uplinks for wiring closet riser connections to backbones or
to servers, and is designed to help easily migrate from traditional shared-media LAN’s to fully
switched networks. This series provides a high degree of compatibility with the Catalyst 5000 family
of modular switches; it has a similar switch fabric architecture and feature list. The overlap between
the Catalyst 2900 and 5000 series can help to reduce the cost of enterprise networking by simplifying
deployment, increasing your system compatibility alternatives, and enabling your system to expand
using existing network devices. This series supports up to 16,000 MAC addresses and provides either
14 or 26 Fast Ethernet ports in four configurations.

Cisco’s Catalyst 2900 XL series (models 2908, 2916M, 2924, and 2924C) is a full line of 10/100
autosensing Fast Ethernet switches that offer high performance, versatile modularity, and easy-to-use
management. The models in this series come in different port densities, configuration options, and
pricing to meet a broad range of network design requirements.

The most important switch in the Catalyst 2820 series is the Catalyst 2820 Ethernet workgroup switch;
other switches in this series include the 2822 and the 2828. This switch offers 24 switched 10BaseT
ports and two high-speed expansion slots; it also provides for Fast Ethernet, FDDI, and future ATM
plug-in modules.

The most important model in the Catalyst 1900 series is the Catalyst 1900 Ethernet workgroup switch;
other switches in the series include the 1912, 1912C, 1924, 1924C, and 1924F. This switch is
equipped with 12 or 24 switched 10BaseT ports and one or two 100BaseT ports.

Both the 2820 and the 1900 platforms support Cisco IOS software features, ie, broadcast storm control
and future ISL support, and are available in Standard and Enterprise editions. The Enterprise edition
software enables them to deliver network configuration flexibility and scalability.

VLAN Configuration

To configure VLAN’s use the following command: set vlan <vlan number> [name <vlan name>] [type
{ethernet|fddi|fddinet|trcrf|trbrf}] [state {active|suspend}]. Valid VLAN numbers range from 1 to 1005; you cannot assign a VLAN just any number in that range, since switches have a number of VLAN's defined by default. For example, the number 1 is the default VLAN number for an Ethernet VLAN, while 1002-1005 are assigned to Token-ring and FDDI VLAN's. VLAN's can be Ethernet, FDDI,
FDDINET, Token-ring Concentrator Function, or Token-ring Bridge Relay Function; the default
VLAN type is Ethernet. By default all switched Ethernet and Fast Ethernet ports belong to VLAN 1.
To add ports to other VLAN’s use the following command: set vlan <vlan number> <module
number>/<port number>. To check VLAN configuration, use the show vlan command.
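A minimal sketch of the commands above (the VLAN numbers, names, and module/port values are illustrative, not defaults):

set vlan 10 name engineering
set vlan 20 name marketing
set vlan 10 2/1-12
set vlan 20 2/13-24
show vlan

The first two commands create the VLAN's, the next two assign ranges of ports on module 2 to them, and show vlan confirms the result.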

If a VLAN spans across two or more switches, then these switches must be able to exchange VLAN
information, which is accomplished by configuring trunk ports on these switches. In order for the
trunk ports to communicate with each other, you must configure a trunking protocol, which is
generally the ISL trunking protocol on Cisco products. The 802.1Q trunking protocol may also be used.

To make management of VLAN’s easier, it is a good idea to configure VLAN Trunking Protocol
(VTP) on each switch; VTP is a layer 2 messaging protocol that maintains VLAN configuration
consistency throughout the network. VTP manages the addition, deletion, and renaming of VLAN’s
on a network-wide basis; it allows you to make changes on one switch, and have those changes
automatically replicated to all other switches in the network. You can configure changes on any
switch that is configured as a VTP server; without VTP you cannot send information about VLAN’s to
other switches. There are two versions of VTP, v1 and v2. Version 2 supports a number of additional
features, ie, it carries out consistency checks on VLAN names and values. One drawback of VTP is that it sends unnecessary flooded traffic across a network; to combat this, you should configure VTP
pruning on each switch. VTP pruning enhances network bandwidth use by reducing unnecessary
flooded traffic, ie, broadcast, multicast, unknown, and flooded unicast packets; it does this by restricting flooded traffic to those trunk links that must carry it. A switch can operate in three different VTP modes:

Server
Client
Transparent

In VTP server mode, you can create, delete, and modify VLAN’s; you can also specify other
configuration parameters, ie, VTP version and VTP pruning for the entire VTP domain. VTP servers
advertise their configuration to other switches in the VTP domain; they can also synchronize their
VLAN configuration with other switches based on advertisements they receive on trunk lines. VTP
server is the default mode.

VTP clients behave the same way as VTP servers, except you cannot create, delete, or modify VLAN’s
on a VTP client. VTP transparent switches do not participate in VTP; they do not advertise their
configuration or synchronize configuration based on advertisements received. However, in VTP v2
they forward VTP advertisements they receive out of their trunk ports.

Before you create a VLAN, you need to decide whether you are going to use VTP in your network; if you are, then you must define a VTP domain and place the switch in VTP server or VTP transparent mode. To configure VTP domains use the following command: set vtp [domain <domain name>] [mode {server|client|transparent}]. After configuring a VTP domain, you can then configure a VLAN and then configure VTP. To configure VTP you need to:

Configure a trunk port
Set transmission modes, ie, duplex
Set VTP option, ie, enable VTP pruning
Add some ports to the VLAN

To configure a trunk use the following command: set trunk <module number>/<port number> on; the
on parameter puts the port into permanent trunking mode and negotiates to convert the link into a trunk
port, even if the other end of the link does not agree. To configure duplex on the trunk port use the
following command: set port duplex <module number>/<port number> {full|half|auto}. The default
configuration for all 10 Mbps and 100 Mbps modules is half; the default for all 10/100 Mbps modules
has all ports set to auto. You do not have to configure ISL on a Fast Ethernet port, since it is there by
default. To configure VTP pruning use the following command: set vtp [pruning {enable|disable}]
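Putting these steps together, a hedged sketch of the trunk and VTP configuration might look like the following; the domain name and module/port numbers are illustrative:

set vtp domain corp mode server
set trunk 1/1 on
set port duplex 1/1 full
set vtp pruning enable
set vlan 10 2/1-12

This follows the command syntax given above: the switch is made a VTP server in the corp domain, port 1/1 becomes a permanent trunk running at full duplex, pruning is enabled for the domain, and a block of access ports is added to VLAN 10.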

To verify VTP domain information, use the show vtp domain command. The output for this command
displays VTP domain name, VTP mode, VTP version, and if pruning is enabled. Use the following
command to show port status and counters: show port <module number>/<port number>. To show
trunking information for a specific port use the following command: show trunk <module
number>/<port number>.

				