CCNA 3.0 Semester 4

					                                                           Chapter 1   Scaling IP Addresses   1

                  Chapter 1               Scaling IP Addresses


     The rapid growth of the Internet has astonished most observers. One reason the
Internet has grown so quickly is the flexibility of its original design. Without
developing new methodologies of IP address assignment, this rapid growth of the Internet
would have exhausted the current supply of IP addresses. In order to cope with a shortage of
IP addresses, several solutions were developed. One widely implemented solution is Network
Address Translation (NAT).

     NAT is a mechanism for conserving registered IP addresses in large networks and
simplifying IP addressing management tasks. As a packet is routed through a network device,
usually a firewall or border router, the source IP address is translated from a private internal
network address to a routable public IP address. This allows the packet to be transported over
public external networks, such as the Internet. The public address in the reply is then
translated back to the private internal address for delivery within the internal network. A
variation of NAT, called Port Address Translation (PAT), allows many internal private
addresses to be translated using a single external public address.

     Routers, servers, and other key devices on the network usually require a static IP
configuration, which is entered manually. However, desktop clients do not require a specific
address but rather any one in a range of addresses. This range is typically within an IP subnet.
A workstation within a specific subnet can be assigned any address within a range while other
values are static, including the subnet mask, default gateway, and DNS server.

     The Dynamic Host Configuration Protocol (DHCP) was designed to assign IP addresses
and other important network configuration information dynamically. Because desktop clients
typically make up the bulk of network nodes, DHCP is an extremely useful timesaving tool
for network administrators.

     Students completing this module should be able to:
          Identify private IP addresses as described in RFC 1918
          Discuss characteristics of NAT and PAT
          Explain the benefits of NAT
          Explain how to configure NAT and PAT, including static translation, dynamic
           translation, and overloading
          Identify the commands used to verify NAT and PAT configuration
          List the steps used to troubleshoot NAT and PAT configuration
          Discuss the advantages and disadvantages of NAT
          Describe the characteristics of DHCP
          Explain the differences between BOOTP and DHCP
          Explain the DHCP client configuration process
          Configure a DHCP server
          Verify DHCP operation
          Troubleshoot a DHCP configuration
          Explain DHCP relay requests

1.1 Scaling Networks with NAT and PAT

1.1.1 Private addressing

     RFC 1918 sets aside three blocks of private IP addresses: one Class A network
(10.0.0.0–10.255.255.255), 16 contiguous Class B networks (172.16.0.0–172.31.255.255),
and 256 contiguous Class C networks (192.168.0.0–192.168.255.255). These addresses are
for private, internal network use only. Packets containing these addresses are not routed
over the Internet.

     Public Internet addresses must be registered by a company with an Internet authority, for
example, ARIN or RIPE. These public Internet addresses can also be leased from an ISP.
Private IP addresses are reserved and can be used by anyone. That means two networks, or
two million networks, can each use the same private addresses. RFC 1918 addresses should
never be routed onto the public Internet, and ISPs typically configure their border routers to
prevent privately addressed traffic from being forwarded.

     NAT provides great benefits to individual companies and the Internet. Before NAT, a
host with a private address could not access the Internet. Using NAT, individual companies
can address some or all of their hosts with private addresses and use NAT to provide access
to the Internet.

1.1.2 Introducing NAT and PAT

     NAT is designed to conserve IP addresses and enable networks to use private IP
addresses on internal networks. These private, internal addresses are translated to routable,
public addresses. This is accomplished by inter-network devices running specialized NAT
software and can increase network privacy by hiding internal IP addresses.

     A NAT-enabled device typically operates at the border of a stub network. A stub network
is a network that has a single connection to its neighbor network. When a host inside the
stub network wants to transmit to a host on the outside, it forwards the packet to the border
gateway router. The border gateway router performs the NAT process, translating the internal
private address of a host to a public, external routable address.     In NAT terminology, the
internal network is the set of networks that are subject to translation. The external network
refers to all other addresses.

     Cisco defines the following NAT terms:
          Inside local address – The IP address assigned to a host on the inside network. This
           address is usually not assigned by the Network Information Center (NIC) or a
           service provider and is likely to be an RFC 1918 private address.
          Inside global address – A legitimate IP address assigned by the NIC or service
           provider that represents one or more inside local IP addresses to the outside world.
          Outside local address – The IP address of an outside host as it is known to the hosts
           on the inside network.
          Outside global address – The IP address assigned to a host on the outside network.
           The owner of the host assigns this address.

Interactive Media Activity

     Drag and Drop: Basic Network Address Translation

     When the student has completed this activity, the student will be able to identify the IP
address translations that occur when using NAT.

1.1.3 Major NAT and PAT features

     NAT translations can be used for a variety of purposes and can be either dynamically or
statically assigned. Static NAT is designed to allow one-to-one mapping of local and global
addresses. This is particularly useful for hosts which must have a consistent address that is
accessible from the Internet. Such hosts may be enterprise servers or networking devices.

     Dynamic NAT is designed to map a private IP address to a public address. Any IP
address from a pool of public IP addresses is assigned to a network host. Overloading, or Port
Address Translation (PAT), maps multiple private IP addresses to a single public IP address.
Multiple addresses can be mapped to a single address because each private address is tracked
by a port number.

     PAT uses unique source port numbers on the inside global IP address to distinguish
between translations. The port number is encoded in 16 bits. The total number of internal
addresses that can be translated to one external address could theoretically be as high as
65,536 per IP address. Realistically, the number of ports that can be assigned a single IP
address is around 4000. PAT will attempt to preserve the original source port. If this source
port is already used, PAT will assign the first available port number starting from the
beginning of the appropriate port group 0-511, 512-1023, or 1024-65535. When there are no
more ports available and there is more than one external IP address configured, PAT moves to
the next IP address to try to allocate the original source port again. This process continues
until it runs out of available ports and external IP addresses.

     NAT offers the following benefits:
          Eliminates the need to reassign each host a new IP address when changing to a
           new ISP. NAT removes the need to readdress all hosts that require external
           access, saving time and money.
          Conserves addresses through application port-level multiplexing. With PAT,
           internal hosts can share a single public IP address for all external communications.
           In this type of configuration, very few external addresses are required to support
           many internal hosts, thereby conserving IP addresses.
          Protects network security. Because private networks do not advertise their
           addresses or internal topology, they remain reasonably secure when used in
           conjunction with NAT to gain controlled external access.

Interactive Media Activity

     Drag and Drop: Network Address Translation with Overload (NAT)

     When the student has completed this activity, the student will be able to identify the IP
address and port translations that occur when using PAT.

1.1.4 Configuring NAT and PAT

Static Translation

     To configure static inside source address translation, perform the tasks shown in the
accompanying figures.

     The first figure shows the use of static NAT translation. The router translates packets
from the inside host to the configured public source address.
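To make the steps concrete, the following is a minimal sketch of a static NAT configuration. The interface names and IP addresses are hypothetical examples, not values from the figures:

```
! Establish a static one-to-one translation between an inside local
! address and an inside global address (example addresses)
Router(config)# ip nat inside source static 10.1.1.2 192.168.1.2

! Mark the interface connected to the inside network
Router(config)# interface fastethernet 0/0
Router(config-if)# ip nat inside

! Mark the interface connected to the outside network
Router(config-if)# interface serial 0/0
Router(config-if)# ip nat outside
```

With this configuration, traffic from 10.1.1.2 always appears on the outside network as 192.168.1.2, and the host is reachable from the outside at that fixed address.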

Dynamic Translation

     To configure dynamic inside source address translation, perform the tasks shown in the
accompanying figure.

     The access list must permit only those addresses that are to be translated. Remember that
there is an implicit “deny all” at the end of each access list. An access list that is too
permissive can lead to unpredictable results. Cisco advises against configuring access lists
referenced by NAT commands with the permit any command. Using permit any can result
in NAT consuming too many router resources, which can cause network problems.

     The example translates all source addresses that are permitted by access list 1 to an
address from the pool named nat-pool1. The pool defines the range of inside global addresses
available for translation.

     Note: NAT will not translate a host whose source address is not permitted by the
access list.
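As a sketch, a dynamic NAT configuration of this kind might look like the following. The pool name nat-pool1 is taken from the text; the addresses and interface names are hypothetical examples:

```
! Access list 1 permits only the inside addresses to be translated
Router(config)# access-list 1 permit 10.1.1.0 0.0.0.255

! Define the pool of inside global addresses
Router(config)# ip nat pool nat-pool1 192.168.1.10 192.168.1.20 netmask 255.255.255.0

! Translate sources matching access list 1 to addresses from the pool
Router(config)# ip nat inside source list 1 pool nat-pool1

Router(config)# interface fastethernet 0/0
Router(config-if)# ip nat inside
Router(config-if)# interface serial 0/0
Router(config-if)# ip nat outside
```

Note that the access list permits only the 10.1.1.0/24 subnet, so hosts outside that range are not translated.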


     Overloading is configured in one of two ways, depending on how public IP addresses
have been allocated. An ISP may allocate a network only one public IP address, which is
typically assigned to the outside interface that connects to the ISP. The accompanying figure
shows how to configure overloading in this situation.

     Another way of configuring overloading applies when the ISP has allocated one or more
public IP addresses for use as a NAT pool. This pool can be overloaded as shown in the
accompanying figure.

     The accompanying figure shows an example configuration of PAT.
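The two overload variations described above can be sketched as follows. All addresses, pool names, and interfaces are hypothetical examples, and only one variation would be applied on a given router:

```
! Variation 1: overload the single public address on the outside interface
Router(config)# access-list 1 permit 10.1.1.0 0.0.0.255
Router(config)# ip nat inside source list 1 interface serial 0/0 overload

! Variation 2: overload a small pool of public addresses
Router(config)# ip nat pool nat-pool2 192.168.1.10 192.168.1.12 netmask 255.255.255.0
Router(config)# ip nat inside source list 1 pool nat-pool2 overload
```

In both cases the overload keyword enables PAT, so that many inside hosts share the public address or pool, distinguished by source port number.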

Lab Activity

      Lab Exercise: Configuring NAT

      In this lab, a router will be configured to use network address translation (NAT).

Lab Activity

      Lab Exercise: Configuring PAT

      In this lab, a router will be configured to use Port Address Translation (PAT).

Lab Activity

      Lab Exercise: Configuring static NAT Addresses

      In this lab, a router will be configured to use network address translation (NAT) to
convert internal IP addresses, typically private addresses, into outside public addresses.

Lab Activity

      e-Lab Activity: Configuring NAT
     In this lab, the student will configure NAT.

Lab Activity

     e-Lab Activity: Configuring PAT

     In this lab, the students will configure a router to use Port Address Translation (PAT) to
convert internal IP addresses, typically private addresses, into an outside public address.

Lab Activity

     e-Lab Activity: Configuring Static NAT Addresses

     In this lab, the student will configure a router to use network address translation (NAT)
to convert internal IP addresses, typically private addresses, into outside public addresses.

1.1.5 Verifying PAT configuration

     Once NAT is configured, use the clear and show commands to verify that it is operating
as expected.

     By default, dynamic address translations time out from the NAT translation table after a
period of non-use. When port translation is not configured, translation entries time out after
24 hours, unless the timeout is changed with the ip nat translation timeout command. Clear
the entries before the timeout by using one of the commands shown in the accompanying
figure.

     Translation information may be displayed by performing one of the tasks in EXEC mode.

     Alternatively, use the show run command and look for NAT, access list, interface, or
pool commands with the required values.
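The commonly used verification and clearing commands can be summarized as follows:

```
Router# show ip nat translations    ! display the active translation entries
Router# show ip nat statistics      ! display translation counts and pool usage
Router# clear ip nat translation *  ! remove all dynamic entries before they time out
```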

Lab Activity

     Lab Exercise: Verifying NAT and PAT Configuration

     In this lab, the student will configure a router for Network Address Translation (NAT)
and Port Address Translation (PAT).

Lab Activity

     e-Lab Activity: Verifying NAT and PAT Configuration

     In this lab, the student will configure a router for Network Address Translation and Port
Address Translation.

1.1.6 Troubleshooting NAT and PAT configuration

     When IP connectivity problems exist in a NAT environment, it is often difficult to
determine the cause. NAT is frequently blamed by mistake when the real problem lies
elsewhere.

     When trying to determine the cause of an IP connectivity problem, it helps to rule out
NAT. Use the following steps to determine whether NAT is operating as expected:
          Based on the configuration, clearly define what NAT is supposed to achieve.
          Verify that correct translations exist in the translation table.
          Verify the translation is occurring by using show and debug commands.
          Review in detail what is happening to the packet and verify that routers have the
           correct routing information to move the packet along.

     Use the debug ip nat command to verify the operation of the NAT feature by displaying
information about every packet that is translated by the router. The debug ip nat detailed
command generates a description of each packet considered for translation. This command
also outputs information about certain errors or exception conditions, such as the failure to
allocate a global address.

     The accompanying figure shows sample debug ip nat output. In this example, the first two lines of the
debugging output show that a Domain Name System (DNS) request and reply were produced.
The remaining lines show the debugging output of a Telnet connection from a host on the
inside of the network to a host on the outside of the network.

     Decode the debug output by using the following key points:
          The asterisk next to NAT indicates that the translation is occurring in the
           fast-switched path. The first packet in a conversation will always go through the
           slow path, which means this first packet is process-switched. The remaining
           packets will go through the fast-switched path if a cache entry exists.
          s = a.b.c.d is the source address.
          Source address a.b.c.d is translated to w.x.y.z.
          d = e.f.g.h is the destination address.
          The value in brackets is the IP identification number. This information can be
           useful for debugging because it enables correlation with packet traces from
           protocol analyzers.
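As an illustration of these key points, a fast-switched translation might be reported by debug ip nat with a line of the following form (the addresses shown are invented for the example):

```
NAT*: s=10.1.1.1->192.168.1.100, d=172.16.2.2 [2817]
```

Here the asterisk marks a fast-switched translation, s= shows the source address 10.1.1.1 being translated to 192.168.1.100, d= shows the destination address, and [2817] is the IP identification number.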

Lab Activity
     Lab Exercise: Troubleshooting NAT and PAT

     In this lab, the student will configure a router for Network Address Translation (NAT)
and Port Address Translation (PAT).

Lab Activity

     e-Lab Activity: Troubleshooting NAT and PAT

     In this lab, the student will configure a router for Network Address Translation and Port
Address Translation.

1.1.7 Issues with NAT

     NAT has several advantages, including:
          NAT conserves the legally registered addressing scheme by allowing the
           privatization of intranets.
          Increases the flexibility of connections to the public network. Multiple pools,
           backup pools, and load balancing pools can be implemented to assure reliable
           public network connections.
          Consistency of the internal network addressing scheme. On a network without
           private IP addresses and NAT, changing public IP addresses requires the
           renumbering of all hosts on the existing network. The costs of renumbering hosts
           can be significant. NAT allows the existing scheme to remain while supporting a
           new public addressing scheme.

     NAT is not without drawbacks. Enabling address translation will cause a loss of
functionality, particularly with any protocol or application that involves sending IP address
information inside the IP payload. This requires additional support by the NAT device.

     NAT increases delay. Switching path delays are introduced because of the translation of
each IP address within the packet headers.

     Performance may be a consideration because NAT is currently accomplished by using
process switching. The CPU must look at every packet to decide whether it has to translate it.
The CPU must alter the IP header, and possibly alter the TCP header.

     One significant disadvantage when implementing and using NAT is the loss of
end-to-end IP traceability. It becomes much more difficult to trace packets that undergo
numerous packet address changes over multiple NAT hops. Hackers who want to determine
the source of a packet will find it difficult to trace or obtain the original source or destination
addresses.
     NAT also causes some applications that depend on IP addressing to stop functioning,
because it hides end-to-end IP addresses. Applications that use numeric IP addresses instead
of fully qualified domain names may fail to reach destinations that are translated across the
NAT router. Sometimes this problem can be avoided by implementing static NAT mappings.

     Cisco IOS NAT supports the following traffic types:
         ICMP
         File Transfer Protocol (FTP), including PORT and PASV commands
         NetBIOS over TCP/IP, datagram, name, and session services
         RealNetworks' RealAudio
         White Pines' CUSeeMe
         Xing Technologies' StreamWorks
         DNS "A" and "PTR" queries
         H.323/Microsoft NetMeeting, IOS versions 12.0(1)/12.0(1)T and later
          VDOnet's VDOLive, IOS versions 11.3(4)/11.3(4)T and later
          VXtreme's Web Theater, IOS versions 11.3(4)/11.3(4)T and later
         IP Multicast, IOS version 12.0(1)T with source address translation only

     Cisco IOS NAT does not support the following traffic types:
         Routing table updates
         DNS zone transfers
         BOOTP
         talk and ntalk protocols
         Simple Network Management Protocol (SNMP)

Interactive Media Activity

     Checkbox: Issues with NAT

     When the student has completed this activity, the student will be able to identify issues
with the use of NAT.

1.2 DHCP

1.2.1 Introducing DHCP

     Dynamic Host Configuration Protocol (DHCP) works in a client/server mode. DHCP
enables DHCP clients on an IP network to obtain their configurations from a DHCP server.
Less work is involved in managing an IP network when DHCP is used. The most significant
configuration option the client receives from the server is its IP address. The DHCP protocol
is described in RFC 2131.

     A DHCP client is included in most modern operating systems, including the various
Windows operating systems, Novell NetWare, Sun Solaris, Linux, and Mac OS. The client
requests addressing values from the network DHCP server. This server manages the allocation
of IP addresses and answers configuration requests from clients. A single DHCP server can
answer requests for many subnets. DHCP is not intended for configuring routers, switches,
and servers; these types of hosts need static IP addresses.

     DHCP works by providing a process for a server to allocate IP information to clients.
Clients lease the information from the server for an administratively defined period. When the
lease expires the client must ask for another address, although the client is typically
reassigned the same address.

     Administrators typically prefer a network server to offer DHCP services because these
solutions are scalable and relatively easy to manage. Cisco routers can use a Cisco IOS
feature set, Easy IP, to offer an optional, full-featured DHCP server. Easy IP leases
configurations for 24 hours by default. This is useful in small offices and home offices where
the home user can take advantage of DHCP and NAT without having an NT or UNIX server.

     Administrators set up DHCP servers to assign addresses from predefined pools. DHCP
servers can also offer other information, such as DNS server addresses, WINS server
addresses, and domain names. Most DHCP servers also allow the administrator to define
specifically what client MAC addresses can be serviced and automatically assign them the
same IP address each time.

     DHCP uses UDP as its transport protocol. The client sends messages to the server on
port 67. The server sends messages to the client on port 68.

1.2.2 BOOTP and DHCP differences

     The Internet community first developed the BOOTP protocol to enable configuration of
diskless workstations. BOOTP was originally defined in RFC 951 in 1985. As the predecessor
of DHCP, BOOTP shares some operational characteristics. Both protocols are client/server
based and use UDP ports 67 and 68. Those ports are still known as BOOTP ports.

     Both BOOTP and DHCP can deliver the four basic IP parameters:
          IP address
          Gateway address
          Subnet mask
          DNS server address
     BOOTP does not dynamically allocate IP addresses to a host. When a client requests an
IP address, the BOOTP server searches a predefined table for an entry that matches the MAC
address for the client. If an entry exists, then the corresponding IP address for that entry is
returned to the client. This means that the binding between the MAC address and the IP
address must have already been configured in the BOOTP server.

     There are two primary differences between DHCP and BOOTP:
          DHCP defines mechanisms through which clients can be assigned an IP address for
           a finite lease period. This lease period allows for reassignment of the IP address to
           another client later, or for the client to get another assignment, if the client moves
           to another subnet. Clients may also renew leases and keep the same IP address.
          DHCP provides the mechanism for a client to gather other IP configuration
           parameters, such as WINS and domain name.

1.2.3 Major DHCP features

     There are three mechanisms used to assign an IP address to the client:
          Automatic allocation – DHCP assigns a permanent IP address to a client.
          Manual allocation – The IP address for the client is assigned by the administrator.
           DHCP conveys the address to the client.
          Dynamic allocation – DHCP assigns, or leases, an IP address to the client for a
           limited period of time.

     The focus of this section is the dynamic allocation mechanism. Some of the
configuration parameters available are listed in IETF RFC 1533:
          Subnet mask
          Router
          Domain Name
          Domain Name Server(s)
          WINS Server(s)

     The DHCP server creates pools of IP addresses and associated parameters.          Pools are
dedicated to an individual logical IP subnet. This allows multiple DHCP servers to respond
and IP clients to be mobile. If multiple servers respond, a client can choose only one of the
offers.

1.2.4 DHCP operation

     The DHCP client configuration process uses the following steps:
          A client must have DHCP configured when starting the network membership
           process. The client sends a request to a server requesting an IP configuration.
           Sometimes the client may suggest the IP address it wants, such as when requesting
           an extension to a DHCP lease. The client locates a DHCP server by sending a
           broadcast called a DHCPDISCOVER.
          When the server receives the broadcast, it determines whether it can service the
           request from its own database. If it cannot, the server may forward the request on
           to another DHCP server. If it can, the DHCP server offers the client IP
           configuration information in the form of a unicast DHCPOFFER. The
           DHCPOFFER is a proposed configuration that may include IP address, DNS
           server address, and lease time.
          If the client finds the offer agreeable, it will send another broadcast, a
           DHCPREQUEST, specifically requesting those particular IP parameters. Why does
           the client broadcast the request instead of unicasting it to the server? A broadcast is
           used because the first message, the DHCPDISCOVER, may have reached more
           than one DHCP server. If more than one server makes an offer, the broadcasted
           DHCPREQUEST allows the other servers to know which offer was accepted. The
           offer accepted is usually the first offer received.
          The server that receives the DHCPREQUEST makes the configuration official by
           sending a unicast acknowledgment, the DHCPACK. It is possible, but highly
           unlikely, that the server will not send the DHCPACK. This may happen because
           the server may have leased that information to another client in the interim.
           Receipt of the DHCPACK message enables the client to begin using the assigned
           address immediately.
          If the client detects that the address is already in use on the local segment it will
           send a DHCPDECLINE message and the process starts again. If the client received
           a DHCPNACK from the server after sending the DHCPREQUEST, then it will
           restart the process again.
          If the client no longer needs the IP address, the client sends a DHCPRELEASE
           message to the server.

     Depending on an organization's policies, it may be possible for an end user or an
administrator to statically assign a host an IP address that belongs to the DHCP server's
address pool. As a safeguard, the Cisco IOS DHCP server always checks that an address is
not in use before offering it to a client. The server issues an ICMP echo request, or ping, to a
pool address before sending the DHCPOFFER to a client. The number of pings used to check
for a potential IP address conflict is configurable.

1.2.5 Configuring DHCP
     Like NAT, a DHCP server requires that the administrator define a pool of addresses. The
ip dhcp pool command defines which addresses will be assigned to hosts.

     The first command, ip dhcp pool, creates a pool with the specified name and puts the
router in a specialized DHCP configuration mode. In this mode, use the network statement to
define the range of addresses to be leased. If specific addresses on the network are to be
excluded, return to global configuration mode.

     The ip dhcp excluded-address command configures the router to exclude an individual
address or range of addresses when assigning addresses to clients. The ip dhcp
excluded-address command may be used to reserve addresses that are statically assigned to
key hosts, for instance, the interface address on the router.

     Typically, a DHCP server will be configured to assign much more than an IP address.
Other IP configuration values such as the default gateway can be set from the DHCP
configuration mode. Using the default-router command sets the default gateway. The address
of the DNS server, dns-server, and WINS server, netbios-name-server, can also be
configured here. The IOS DHCP server can configure clients with virtually any TCP/IP
configuration parameter.

     A list of the key IOS DHCP server commands entered in the DHCP pool configuration
mode is shown in the accompanying figure.

     The DHCP service is enabled by default on versions of Cisco IOS that support it. To
disable the service, use the no service dhcp command. Use the service dhcp global
configuration command to re-enable the DHCP server process.
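Putting these commands together, a minimal DHCP pool configuration might look like the following sketch. The pool name and all addresses are hypothetical examples:

```
! Reserve addresses that are statically assigned to key hosts
Router(config)# ip dhcp excluded-address 10.1.1.1 10.1.1.10

! Create the pool and enter DHCP configuration mode
Router(config)# ip dhcp pool office-pool
Router(dhcp-config)# network 10.1.1.0 255.255.255.0
Router(dhcp-config)# default-router 10.1.1.1
Router(dhcp-config)# dns-server 10.1.1.2
Router(dhcp-config)# netbios-name-server 10.1.1.3
```

Clients on the 10.1.1.0/24 subnet would then lease addresses from 10.1.1.11 upward, along with the gateway, DNS, and WINS values defined in the pool.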

1.2.6 Verifying DHCP operation

     To verify the operation of DHCP, the command show ip dhcp binding can be used. This
displays a list of all bindings created by the DHCP service.

     To verify that messages are being received or sent by the router, use the command show
ip dhcp server statistics. This will display count information regarding the number of DHCP
messages that have been sent and received.

Lab Activity

     Lab Exercise: Configuring DHCP

     In this lab, the student will configure a router for Dynamic Host Configuration Protocol
(DHCP).

Lab Activity

     e-Lab Activity: Configuring DHCP

     In this lab, the student will configure a router for DHCP, add the ability for workstations
to remotely obtain DHCP addresses and dynamically assign addresses to the attached hosts.

1.2.7 Troubleshooting DHCP

     To troubleshoot the operation of the DHCP server, the debug ip dhcp server events
command can be used. This command shows that the server periodically checks whether any
leases have expired. It also shows when addresses are returned to the pool and when they are
allocated.

1.2.8 DHCP relay

     DHCP clients use IP broadcasts to find the DHCP server on the segment. What happens
when the server and the client are not on the same segment and are separated by a router?
Routers do not forward these broadcasts.

     DHCP is not the only critical service that uses broadcasts. Cisco routers and other
devices may use broadcasts to locate TFTP servers. Some clients may need to broadcast to
locate a TACACS server. A TACACS server is a security server. Typically, in a complex
hierarchical network, clients do not reside on the same subnet as key servers. Such remote
clients broadcast to locate these servers. However, routers, by default, will not forward client
broadcasts beyond their subnet.

     Because some clients are useless without services such as DHCP, one of two choices
must be implemented. The administrator will need to place servers on all subnets or use the
Cisco IOS helper address feature. Running services such as DHCP or DNS on several
computers creates overhead and administrative difficulties, making the first option inefficient.
When possible, administrators should use the ip helper-address command to relay broadcast
requests for these key UDP services.

     By using the helper address feature, a router can be configured to accept a broadcast
request for a UDP service and then forward it as a unicast to a specific IP address. By default,
the ip helper-address command forwards the following eight UDP services:
            Time
            TACACS
            DNS
            BOOTP/DHCP Server
             BOOTP/DHCP Client
             TFTP
             NetBIOS Name Service
             NetBIOS datagram Service

     In the particular case of DHCP, a client broadcasts a DHCPDISCOVER packet on its
local segment. This packet is picked up by the gateway router. If a helper address is
configured, the DHCP packet is forwarded as a unicast to the specified address. Before
forwarding the packet, the router fills in the GIADDR field of the packet with the IP address
of its interface on that segment. This address becomes the default gateway address for the
DHCP client once it receives its IP address.

     The DHCP server receives the discover packet. The server uses the GIADDR field as an
index into its list of address pools, looking for the pool whose gateway address matches the
value in GIADDR. This pool is then used to supply the client with its IP address.
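The relay-and-pool-selection sequence above can be sketched in a few lines of Python. This is an illustrative model only, not DHCP protocol code; the pool contents and addresses are hypothetical.

```python
import ipaddress

# Hypothetical address pools a DHCP server might hold, keyed by subnet.
POOLS = {
    ipaddress.ip_network("10.1.1.0/24"): ["10.1.1.10", "10.1.1.11"],
    ipaddress.ip_network("10.2.2.0/24"): ["10.2.2.10"],
}

def relay(discover, relay_interface_ip):
    """Model of the router's helper-address behavior: stamp GIADDR with the
    address of the receiving interface, then forward as a unicast."""
    return dict(discover, giaddr=relay_interface_ip)

def select_address(discover):
    """Model of the server side: choose the pool whose subnet contains GIADDR."""
    giaddr = ipaddress.ip_address(discover["giaddr"])
    for subnet, addresses in POOLS.items():
        if giaddr in subnet:
            return addresses.pop(0)   # hand out the next free address
    raise LookupError("no pool matches GIADDR")

packet = relay({"op": "DHCPDISCOVER", "giaddr": "0.0.0.0"}, "10.2.2.1")
print(select_address(packet))         # → 10.2.2.10
```

Because the server keys its lookup on GIADDR rather than on the packet source, one DHCP server can serve many remote subnets through their respective relay agents.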

Lab Activity

     Lab Exercise: Configuring DHCP Relay

     In this lab, a router will be configured for Dynamic Host Configuration Protocol (DHCP).

Lab Activity

     e-Lab Activity: Configuring DHCP Relay

     In this lab, the student will configure a router for DHCP, add the ability for workstations
to remotely obtain DHCP addresses, and dynamically assign addresses to the attached hosts.


     An understanding of the following key points should have been achieved:
             Private addresses are for private, internal use and should never be routed by a
              public Internet router.
             NAT alters the IP header of a packet so that the destination address, the source
              address, or both addresses are replaced with different addresses.
             PAT uses unique source port numbers on the inside global IP address to distinguish
              between translations.
             NAT translations can occur dynamically or statically and can be used for a variety
              of uses.
        NAT and PAT may be configured for static translation, dynamic translation, and
         overloading.
        The process for verifying NAT and PAT configuration includes the clear and show
         commands.
        The debug ip nat command is used for troubleshooting NAT and PAT configuration.
        NAT has advantages and disadvantages.
        DHCP works in a client/server mode, enabling clients to obtain IP configurations
         from a DHCP server.
        BOOTP is the predecessor of DHCP and shares some operational characteristics
         with DHCP, but BOOTP is not dynamic.
        A DHCP server manages pools of IP addresses and associated parameters. Each
         pool is dedicated to an individual logical IP subnet.
        The DHCP client configuration process has four steps.
        Usually, a DHCP server is configured to assign more than just IP addresses.
        The show ip dhcp binding command is used to verify DHCP operation.
        The debug ip dhcp server events command is used for troubleshooting DHCP.
        When a DHCP server and a client are not on the same segment and are separated
         by a router, the ip helper-address command is used to relay broadcast requests.

                    Chapter 2             WAN Technologies


     As the enterprise grows beyond a single location, it is necessary to interconnect the
LANs in the various branches to form a wide-area network (WAN). This module examines
some of the options available for these interconnections, the hardware needed to implement
them, and the terminology used to discuss them.

     There are many options available today for implementing WAN solutions.
They differ in technology, speed, and cost. Familiarity with these technologies is an important
part of network design and evaluation.

     If all data traffic in an enterprise is within a single building, a LAN meets the needs of
the organization. Buildings can be interconnected with high-speed data links to form a
campus LAN if data must flow between buildings on a single campus. However, a WAN is
needed to carry data if it must be transferred between geographically separate locations.
Individual remote access to the LAN and connection of the LAN to the Internet are separate
study topics, and will not be considered here.

     Most students will not have the opportunity to design a new WAN, but many will be
involved in designing additions and upgrades to existing WANs, and will be able to apply the
techniques learned in this module.

     Students completing this module should be able to:
          Differentiate between a LAN and WAN
          Identify the devices used in a WAN
          List WAN standards
          Describe WAN encapsulation
          Classify the various WAN link options
          Differentiate between packet-switched and circuit-switched WAN technologies
          Compare and contrast current WAN technologies
          Describe equipment involved in the implementation of various WAN services
          Recommend a WAN service to an organization based on its needs
          Describe DSL and cable modem connectivity basics
          Describe a methodical procedure for designing WANs
          Compare and contrast WAN topologies
          Compare and contrast WAN design models
          Recommend a WAN design to an organization based on its needs

2.1 WAN Technologies Overview

2.1.1 WAN technology

     A WAN is a data communications network that operates beyond the geographic scope of
a LAN. One primary difference between a WAN and a LAN is that a company or organization
must subscribe to an outside WAN service provider in order to use WAN carrier network
services. A WAN uses data links provided by carrier services to access the Internet and
connect the locations of an organization to each other, to locations of other organizations, to
external services, and to remote users. WANs generally carry a variety of traffic types, such as
voice, data, and video. Telephone and data services are the most commonly used WAN services.

     Devices on the subscriber premises are called customer premises equipment (CPE).
The subscriber owns the CPE or leases the CPE from the service provider. A copper or fiber
cable connects the CPE to the service provider’s nearest exchange or central office (CO). This
cabling is often called the local loop, or "last-mile". A dialed call is connected locally to other
local loops, or non-locally through a trunk to a primary center. It then goes to a sectional
center and on to a regional or international carrier center as the call travels to its destination.

     In order for the local loop to carry data, a device such as a modem is needed to prepare
the data for transmission. Devices that put data on the local loop are called data
circuit-terminating equipment, or data communications equipment (DCE). The customer
devices that pass the data to the DCE are called data terminal equipment (DTE). The DCE
primarily provides an interface for the DTE into the communication link on the WAN cloud.
The DTE/DCE interface uses various physical layer protocols, such as High-Speed Serial
Interface (HSSI) and V.35. These protocols establish the codes and electrical parameters the
devices use to communicate with each other.

     WAN links are provided at various speeds measured in bits per second (bps), kilobits per
second (kbps or 1000 bps), megabits per second (Mbps or 1000 kbps) or gigabits per second
(Gbps or 1000 Mbps). The bps values are generally full duplex. This means that an E1 line
can carry 2 Mbps, or a T1 can carry 1.5 Mbps, in each direction simultaneously.

2.1.2 WAN devices

     WANs are groups of LANs connected together with communications links from a
service provider. Because the communications links cannot plug directly into the LAN, it is
necessary to identify the various pieces of interfacing equipment.

     LAN-based computers with data to transmit send data to a router that contains both
LAN and WAN interfaces. The router will use the Layer 3 address information to deliver the
data on the appropriate WAN interface. Routers are active and intelligent network devices and
therefore can participate in network management. Routers manage networks by providing
dynamic control over resources and supporting the tasks and goals for networks. Some of
these goals are connectivity, reliable performance, management control, and flexibility.

     The communications link needs signals in an appropriate format. For digital lines, a
channel service unit (CSU) and a data service unit (DSU) are required. The two are often
combined into a single piece of equipment, called the CSU/DSU. The CSU/DSU may also be
built into the interface card in the router.

     A modem is needed if the local loop is analog rather than digital. Modems transmit
data over voice-grade telephone lines by modulating and demodulating the signal. The digital
signals are superimposed on an analog voice signal that is modulated for transmission. The
modulated signal can be heard as a series of whistles by turning on the internal modem
speaker. At the receiving end, the analog signals are returned to their digital form, or demodulated.

     When ISDN is used as the communications link, all equipment attached to the ISDN bus
must be ISDN-compatible. Compatibility is generally built into the computer interface for
direct dial connections, or the router interface for LAN to WAN connections. Older equipment
without an ISDN interface requires an ISDN terminal adapter (TA) for ISDN compatibility.

     Communication servers concentrate dial-in user communication and remote access to a
LAN. They may have a mixture of analog and digital (ISDN) interfaces and support hundreds
of simultaneous users.

Interactive Media Activity

     Crossword Puzzle: WAN Devices and Interfaces

     Upon completing this activity, the student will be able to describe devices and interfaces
associated with WAN connections.

2.1.3 WAN standards

     WANs use the OSI reference model, but focus mainly on Layer 1 and Layer 2. WAN
standards typically describe both physical layer delivery methods and data link layer
requirements, including physical addressing, flow control, and encapsulation. WAN standards
are defined and managed by a number of recognized authorities.

     The physical layer protocols describe how to provide electrical, mechanical, operational,
and functional connections to the services provided by a communications service provider.
Some of the common physical layer standards are listed in Figure , and their connectors
illustrated in Figure .

      The data link layer protocols define how data is encapsulated for transmission to remote
sites, and the mechanisms for transferring the resulting frames. A variety of different
technologies are used, such as ISDN, Frame Relay or Asynchronous Transfer Mode (ATM).
These protocols use the same basic framing mechanism, high-level data link control (HDLC),
an ISO standard, or one of its subsets or variants.

Interactive Media Activity

      Crossword Puzzle: WAN Standards

      Upon completing this activity, the student will be able to identify various WAN standards.

2.1.4 WAN encapsulation

      Data from the network layer is passed to the data link layer for delivery on a physical
link, which is normally point-to-point on a WAN connection. The data link layer builds a
frame around the network layer data so the necessary checks and controls can be applied.
Each WAN connection type uses a Layer 2 protocol to encapsulate traffic while it is crossing
the WAN link. To ensure that the correct encapsulation protocol is used, the Layer 2
encapsulation type used for each router serial interface must be configured. The choice of
encapsulation protocols depends on the WAN technology and the equipment. Most framing is
based on the HDLC standard.

      HDLC framing gives reliable delivery of data over unreliable lines and includes signaling
mechanisms for flow and error control. The frame always starts and ends with an 8-bit flag
field, the bit pattern 01111110. Because there is a likelihood that this pattern will occur in the
actual data, the sending HDLC system always inserts a 0 bit after every five 1s in the data
field, so in practice the flag sequence can only occur at the frame ends. The receiving system
strips out the inserted bits. When frames are transmitted consecutively the end flag of the first
frame is used as the start flag of the next frame.
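The zero-insertion rule can be demonstrated with a short sketch. Python is used here purely for illustration; real HDLC hardware performs this at the bit level as the frame is serialized:

```python
def stuff(bits: str) -> str:
    """Sender: insert a 0 after every run of five consecutive 1s, so the
    flag pattern 01111110 can never appear inside the data field."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def unstuff(bits: str) -> str:
    """Receiver: strip the 0 that follows five consecutive 1s.
    Assumes the input is stuffed data (a 0 always follows five 1s)."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

print(stuff("0111111"))      # → 01111101 (a 0 breaks up the six 1s)
print(unstuff("01111101"))   # → 0111111 (original data restored)
```

After stuffing, six consecutive 1s can never occur in the data, so a receiver that sees the flag sequence knows it marks a frame boundary.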

      The address field is not needed for WAN links, which are almost always point-to-point.
The address field is still present and may be one or two bytes long. The control field indicates
the frame type, which may be information, supervisory, or unnumbered:
            Unnumbered frames carry line setup messages.
            Information frames carry network layer data.
             Supervisory frames control the flow of information frames and request data
              retransmission in the event of an error.

     The control field is normally one byte, but will be two bytes for extended sliding
window systems. Together the address and control fields are called the frame header. The
encapsulated data follows the control field. A frame check sequence (FCS), computed using
the cyclic redundancy check (CRC) mechanism, follows the data as a two- or four-byte field.
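The FCS computation can be illustrated with the two-byte CRC variant commonly associated with HDLC-family protocols (often catalogued as CRC-16/X-25: reflected 0x1021 polynomial, initial value 0xFFFF, final XOR 0xFFFF). A bit-by-bit sketch:

```python
def fcs16(data: bytes) -> int:
    """Two-byte FCS using CRC-16/X-25 parameters: the reflected 0x1021
    polynomial appears as 0x8408, init 0xFFFF, final XOR 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

print(hex(fcs16(b"123456789")))   # → 0x906e, the published check value
```

The sender appends this value to the frame; the receiver recomputes it over the received bits and discards the frame on a mismatch.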

      Several data link protocols are used, including subsets and proprietary versions of
HDLC. Both PPP and the Cisco version of HDLC have an extra field in the header to
identify the network layer protocol of the encapsulated data.

2.1.5 Packet and circuit switching

     Packet-switched networks were developed to overcome the expense of public
circuit-switched networks and to provide a more cost-effective WAN technology.

     When a subscriber makes a telephone call, the dialed number is used to set switches in
the exchanges along the route of the call so that there is a continuous circuit from the
originating caller to the called party. Because of the switching operation used to
establish the circuit, the telephone system is called a circuit-switched network. If the
telephones are replaced with modems, then the switched circuit is able to carry computer data.

     The internal path taken by the circuit between exchanges is shared by a number of
conversations. Time division multiplexing (TDM) is used to give each conversation a share of
the connection in turn. TDM assures that a fixed capacity connection is made available to the subscriber.

     If the circuit carries computer data, the usage of this fixed capacity may not be efficient.
For example, if the circuit is used to access the Internet, there will be a burst of activity on the
circuit while a web page is transferred. This could be followed by no activity while the user
reads the page and then another burst of activity while the next page is transferred. This
variation in usage between none and maximum is typical of computer network traffic.
Because the subscriber has sole use of the fixed capacity allocation, switched circuits are
generally an expensive way of moving data.

     An alternative is to allocate the capacity to the traffic only when it is needed, and share
the available capacity between many users. With a circuit-switched connection, the data bits
put on the circuit are automatically delivered to the far end because the circuit is already
established. If the circuit is to be shared, there must be some mechanism to label the bits so
that the system knows where to deliver them. It is difficult to label individual bits, therefore
they are gathered into groups called cells, frames, or packets. The packet passes from
exchange to exchange for delivery through the provider network. Networks that implement
this system are called packet-switched networks.

     The links that connect the switches in the provider network are dedicated to an
individual subscriber only while data is actually in transit, so many subscribers can share
each link. Costs can be significantly lower than a dedicated circuit-switched connection.
Data on packet-switched networks is subject to unpredictable delays when individual packets
wait for other subscriber packets to be transmitted by a switch.

     The switches in a packet-switched network determine, from addressing information in
each packet, which link the packet must be sent on next. There are two approaches to this link
determination, connectionless or connection-oriented. Connectionless systems, such as the
Internet, carry full addressing information in each packet. Each switch must evaluate the
address to determine where to send the packet. Connection-oriented systems predetermine the
route for a packet, and each packet need only carry an identifier. In the case of Frame Relay,
these are called Data Link Control Identifiers (DLCI). The switch determines the onward
route by looking up the identifier in tables held in memory. The set of entries in the tables
identifies a particular route or circuit through the system. Because this circuit exists in table
entries rather than as a dedicated physical connection, it is called a Virtual Circuit (VC).
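The table lookup a connection-oriented switch performs can be sketched as a simple mapping. The port numbers, identifier values, and frame layout below are invented for illustration; in Frame Relay the identifiers (DLCIs) are locally significant and are rewritten at each hop.

```python
# One switch's table: (incoming port, incoming DLCI) -> (outgoing port, outgoing DLCI).
# The chain of such entries across all the switches constitutes one virtual circuit.
SWITCH_TABLE = {
    (1, 100): (3, 205),
    (2, 100): (3, 310),   # the same DLCI on a different port belongs to a different VC
}

def forward(in_port, frame):
    """Look up the next hop from the short identifier alone; unlike a
    connectionless system, no full destination address is evaluated."""
    out_port, out_dlci = SWITCH_TABLE[(in_port, frame["dlci"])]
    return out_port, dict(frame, dlci=out_dlci)

out_port, out_frame = forward(1, {"dlci": 100, "payload": b"data"})
print(out_port, out_frame["dlci"])   # → 3 205
```

A single dictionary lookup per frame is why connection-oriented switching can be faster and simpler than evaluating a full network address at every hop.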

     The table entries that constitute a VC can be established by sending a connection request
through the network. In this case the resulting circuit is called a Switched Virtual Circuit
(SVC). Data that is to travel on SVCs must wait until the table entries have been set up. Once
established, the SVC may be in operation for hours, days or weeks. Where a circuit is required
to be always available, a Permanent Virtual Circuit (PVC) will be established. Table entries
are loaded by the switches at boot time so the PVC is always available.

2.1.6 WAN link options

     Figure    provides an overview of WAN link options.

     Circuit switching establishes a dedicated physical connection for voice or data between
a sender and receiver. Before communication can start, it is necessary to establish the
connection by setting the switches. This is done by the telephone system, using the dialed
number. ISDN is used on digital lines, and modems are used on voice-grade lines. Where the
local loop has been suitably equipped, a digital subscriber line (DSL) may also be available.

     To avoid the delays associated with setting up a connection, telephone service providers
also offer permanent circuits. These dedicated or leased lines offer higher bandwidth than is
available with a switched circuit. Examples of circuit-switched connections include:
          Plain Old Telephone System (POTS)
          ISDN Basic Rate Interface (BRI)
          ISDN Primary Rate Interface (PRI)

     Many WAN users do not make efficient use of the fixed bandwidth that is available with
dedicated, switched, or permanent circuits, because the data flow fluctuates. Communications
providers have data networks available to more appropriately service these users. In these
networks, the data is transmitted in labeled cells, frames, or packets through a
packet-switched network. Because the internal links between the switches are shared between
many users, the costs of packet switching are lower than those of circuit switching. Delays
(latency) and variability of delay (jitter) are greater in packet-switched than in
circuit-switched networks. This is because the links are shared and packets must be entirely
received at one switch before moving to the next. Despite the latency and jitter inherent in
shared networks, modern technology allows satisfactory transport of voice and even video
communications on these networks.

     Packet-switched networks may establish routes through the switches for particular
end-to-end connections. Routes established when the switches are started are PVCs. Routes
established on demand are SVCs. If the routing is not pre-established and is worked out by
each switch for each packet, the network is called connectionless.

     To connect to a packet-switched network, a subscriber needs a local loop to the nearest
location where the provider makes the service available. This is called the point-of-presence
(POP) of the service. Normally this will be a dedicated leased line. This line will be much
shorter than a leased line directly connected to the subscriber locations, and often carries
several VCs. Since it is likely that not all the VCs will require maximum demand
simultaneously, the capacity of the leased line can be smaller than the sum of the individual
VCs. Examples of packet or cell switched connections include:
          Frame Relay
          X.25
          ATM

2.2 WAN Technologies

2.2.1 Analog dialup

     When intermittent, low-volume data transfers are needed, modems and analog dialed
telephone lines provide low-capacity, dedicated switched connections.

     Traditional telephony uses a copper cable, called the local loop, to connect the telephone
handset in the subscriber premises to the public switched telephone network (PSTN). The
signal on the local loop during a call is a continuously varying electrical signal that is a
translation of the subscriber voice.

      The local loop is not suitable for direct transport of binary computer data, but a modem
can send computer data through the voice telephone network. The modem modulates the
binary data into an analog signal at the source and demodulates the analog signal at the
destination to binary data.

      The physical characteristics of the local loop and its connection to the PSTN limit the
rate of the signal. The upper limit is around 33 kbps. The rate can be increased to around 56
kbps if the signal is coming directly through a digital connection.

      For small businesses, this can be adequate for the exchange of sales figures, prices,
routine reports, and email. Using automatic dialup at night or on weekends for large file
transfers and data backup can take advantage of lower off-peak tariffs (line charges). Tariffs
are based on the distance between the endpoints, time of day, and the duration of the call.

      The advantages of modem and analog lines are simplicity, availability, and low
implementation cost. The disadvantages are the low data rates and a relatively long
connection time. The dedicated circuit provided by dialup will have little delay or jitter for
point-to-point traffic, but voice or video traffic will not operate adequately at relatively low
bit rates.

Interactive Media Activity

      Checkbox: WAN Technologies Analog Dialup

      Upon completing this activity, the student will be able to identify the characteristics
associated with an analog dialup circuit.

2.2.2 ISDN

      The internal connections, or trunks, of the PSTN have changed from carrying analog
frequency-division multiplexed signals, to time-division multiplexed (TDM) digital signals.
An obvious next step is to enable the local loop to carry digital signals that result in higher
capacity switched connections.

      Integrated Services Digital Network (ISDN) turns the local loop into a TDM digital
connection. The connection uses 64 kbps bearer channels (B) for carrying voice or data and a
signaling delta channel (D) for call setup and other purposes.
        Basic Rate Interface (BRI) ISDN is intended for the home and small enterprise and
provides two 64 kbps B channels and a 16 kbps D channel. For larger installations, Primary
Rate Interface (PRI) ISDN is available. PRI delivers twenty-three 64 kbps B channels and one
64 kbps D channel in North America, for a total bit rate of up to 1.544 Mbps. This includes
some additional overhead for synchronization. In Europe, Australia, and other parts of the
world, ISDN PRI provides thirty B channels and one D channel for a total bit rate of up to
2.048 Mbps, including synchronization overhead. In North America, PRI corresponds to a
T1 connection. The rate of international PRI corresponds to an E1 connection.

        The BRI D channel is underutilized, as it has only two B channels to control. Some
providers allow the D channel to carry data at low bit rates, such as X.25 connections at 9.6 kbps.

     For small WANs, BRI ISDN can provide an ideal connection mechanism. BRI has a
call setup time of less than a second, and its 64 kbps B channel provides greater capacity
than an analog modem link. If greater capacity is required, a second B channel can be
activated to provide a total of 128 kbps. Although inadequate for video, this would permit
several simultaneous voice conversations in addition to data traffic.

        Another common application of ISDN is to provide additional capacity as needed on a
leased line connection. The leased line is sized to carry average traffic loads while ISDN is
added during peak demand periods. ISDN is also used as a backup in the case of a failure of
the leased line. ISDN tariffs are based on a per-B channel basis and are similar to those of
analog voice connections.

        With PRI ISDN, multiple B channels can be connected between two end points. This
allows for video conferencing and high bandwidth data connections with no latency or jitter.
Multiple connections can become very expensive over long distances.

Interactive Media Activity

        Checkbox: WAN Technologies ISDN Dialup

        Upon completing this activity, the student will be able to identify the characteristics
associated with an ISDN dialup circuit.

2.2.3 Leased line

        When permanent dedicated connections are required, leased lines are used with
capacities ranging up to 2.5 Gbps.

        A point-to-point link provides a pre-established WAN communications path from the
customer premises through the provider network to a remote destination. Point-to-point lines
are usually leased from a carrier and are called leased lines. Leased lines are available in
different capacities. These dedicated circuits are generally priced based on bandwidth
required and distance between the two connected points. Point-to-point links are generally
more expensive than shared services such as Frame Relay. The cost of leased-line solutions
can become significant when they are used to connect many sites. At times, however, the cost
of the leased line is outweighed by the benefits. The dedicated capacity gives no latency or
jitter between the endpoints. Constant availability is essential for some applications such as
electronic commerce.

     A router serial port is required for each leased-line connection. A CSU/DSU and the
actual circuit from the service provider are also required.

     Leased lines are used extensively for building WANs and give permanent dedicated
capacity. They have been the traditional connection of choice, but they have a number of
disadvantages. WAN traffic is often variable, while leased lines have a fixed capacity. As a
result, the bandwidth of the line is seldom exactly what is needed. In addition, each endpoint
needs an interface on the router, which increases equipment costs. Any
changes to the leased line generally require a site visit by the carrier to change capacity.

     Leased lines provide direct point-to-point connections between enterprise LANs and
connect individual branches to a packet-switched network. Several connections can be
multiplexed over a leased line, resulting in shorter links and fewer required interfaces.

Interactive Media Activity

     Checkbox: WAN Technologies Leased Lines

     Upon completing this activity, the student will be able to identify the characteristics
associated with a leased line connection.

2.2.4 X.25

     In response to the expense of leased lines, telecommunications providers introduced
packet-switched networks using shared lines to reduce costs. The first of these
packet-switched networks was standardized as the X.25 group of protocols. X.25 provides a
low bit rate shared variable capacity that may be either switched or permanent.

     X.25 is a network-layer protocol and subscribers are provided with a network address.
Virtual circuits can be established through the network with call request packets to the target
address. The resulting SVC is identified by a channel number. Data packets labeled with the
channel number are delivered to the corresponding address. Multiple channels can be active
on a single connection.

     Subscribers connect to the X.25 network with either leased lines or dialup connections.
X.25 networks can also have pre-established channels between subscribers that provide a PVC.

     X.25 can be very cost effective because tariffs are based on the amount of data delivered
rather than connection time or distance. Data can be delivered at any rate up to the connection
capacity. This provides some flexibility. X.25 networks are usually low capacity, with a
maximum of 48 kbps. In addition, the data packets are subject to the delays typical of shared networks.

     X.25 technology is no longer widely available as a WAN technology in the US. Frame
Relay has replaced X.25 at many service provider locations.

     Typical X.25 applications are point-of-sale card readers. These readers use X.25 in
dialup mode to validate transactions on a central computer. Some enterprises also use X.25
based value-added networks (VAN) to transfer Electronic Data Interchange (EDI) invoices,
bills of lading, and other commercial documents. For these applications, the low bandwidth
and high latency are not a concern, because the low cost makes the use of X.25 affordable.

Interactive Media Activity

     Checkbox: WAN Technologies X.25

     Upon completing this activity, the student will be able to identify the characteristics
associated with an X.25 connection.

2.2.5 Frame Relay

     With increasing demand for higher bandwidth and lower latency packet switching,
communications providers introduced Frame Relay. Although the network layout appears
similar to that for X.25, available data rates are commonly up to 4 Mbps, with some providers
offering even higher rates.

     Frame Relay differs from X.25 in several aspects. Most importantly, it is a much simpler
protocol that works at the data link layer rather than the network layer.

     Frame Relay implements no error or flow control. The simplified handling of frames
leads to reduced latency, and measures taken to avoid frame build-up at intermediate switches
help reduce jitter.
     Most Frame Relay connections are PVCs rather than SVCs. The connection to the
network edge is often a leased line but dialup connections are available from some providers
using ISDN lines. The ISDN D channel is used to set up an SVC on one or more B channels.
Frame Relay tariffs are based on the capacity of the connecting port at the network edge.
Additional factors are the agreed capacity and committed information rate (CIR) of the
various PVCs through the port.

     Frame Relay provides permanent shared medium bandwidth connectivity that carries
both voice and data traffic. Frame Relay is ideal for connecting enterprise LANs. The router
on the LAN needs only a single interface, even when multiple VCs are used. The short leased
line to the Frame Relay network edge allows cost-effective connections between widely
scattered LANs.

Interactive Media Activity

     Checkbox: WAN Technologies Frame Relay

     Upon completing this activity, the student will be able to identify the characteristics
associated with a Frame Relay circuit.

2.2.6 ATM

     Communications providers saw a need for a permanent shared network technology that
offered very low latency and jitter at much higher bandwidths. Their solution was
Asynchronous Transfer Mode (ATM). ATM has data rates beyond 155 Mbps. As with the
other shared technologies, such as X.25 and Frame Relay, diagrams for ATM WANs look the same.

     ATM is a technology that is capable of transferring voice, video, and data through
private and public networks. It is built on a cell-based architecture rather than on a
frame-based architecture. ATM cells are always a fixed length of 53 bytes. The 53-byte ATM
cell contains a 5-byte ATM header followed by 48 bytes of ATM payload. Small, fixed-length
cells are well suited for carrying voice and video traffic because this traffic is intolerant of
delay. Video and voice traffic do not have to wait for a larger data packet to be transmitted.

     The 53-byte ATM cell is less efficient than the bigger frames and packets of Frame
Relay and X.25. Furthermore, the ATM cell has at least 5 bytes of overhead for each 48-byte
payload. When the cell is carrying segmented network layer packets, the overhead will be
higher because the ATM switch must be able to reassemble the packets at the destination. A
typical ATM line needs almost 20% greater bandwidth than Frame Relay to carry the same
volume of network layer data.
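The cell tax can be worked out directly. The sketch below assumes AAL5-style segmentation with an 8-byte trailer (an assumption; the text only notes that segmentation raises the overhead) and shows that the overhead grows sharply as packets shrink, which helps explain the roughly 20% figure for a typical mix of packet sizes:

```python
import math

ATM_CELL = 53      # total cell size in bytes
ATM_PAYLOAD = 48   # payload bytes per cell
AAL5_TRAILER = 8   # AAL5 trailer bytes (assumed segmentation scheme)

def atm_line_bytes(packet_bytes):
    """Bytes sent on the wire to carry one network layer packet: the
    packet plus trailer, padded to a whole number of 48-byte payloads."""
    cells = math.ceil((packet_bytes + AAL5_TRAILER) / ATM_PAYLOAD)
    return cells * ATM_CELL

print(atm_line_bytes(1500))  # 1696 bytes for a 1500-byte packet (~13% overhead)
print(atm_line_bytes(64))    # 106 bytes for a 64-byte packet (~66% overhead)
```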
                                                          Chapter 2   WAN Technologies    29
     ATM offers both PVCs and SVCs, although PVCs are more common with WANs.

     As with other shared technologies, ATM allows multiple virtual circuits on a single
leased line connection to the network edge.

Interactive Media Activity

     Checkbox: WAN Technologies Asynchronous Transfer Mode (ATM)

     Upon completing this activity, the student will be able to identify the characteristics
associated with ATM connections.

2.2.7 DSL

     Digital Subscriber Line (DSL) technology is a broadband technology that uses existing
twisted-pair telephone lines to transport high-bandwidth data to service subscribers. DSL
service is considered broadband, as opposed to the baseband service for typical LANs.
Broadband refers to a technique which uses multiple frequencies within the same physical
medium to transmit data. The term xDSL covers a number of similar yet competing forms of
DSL technologies:
         Asymmetric DSL (ADSL)
         Symmetric DSL (SDSL)
         High Bit Rate DSL (HDSL)
         ISDN (like) DSL (IDSL)
         Rate Adaptive DSL (RADSL)
         Consumer DSL (CDSL), also called DSL-lite or G.lite

     DSL technology allows the service provider to offer high-speed network services to
customers, utilizing installed local loop copper lines. DSL technology allows the local loop
line to be used for normal telephone voice connection and an always-on connection for instant
network connectivity. Multiple DSL subscriber lines are multiplexed into a single, high
capacity link by the use of a DSL Access Multiplexer (DSLAM) at the provider location.
DSLAMs incorporate TDM technology to aggregate many subscriber lines into a less
cumbersome single medium, generally a T3/DS3 connection. Current DSL technologies are
using sophisticated coding and modulation techniques to achieve data rates up to 8.192 Mbps.

     The voice channel of a standard consumer telephone covers the frequency range of 330
Hz to 3.3 kHz. A frequency range, or window, of 4 kHz is regarded as the requirement for
any voice transmission on the local loop. DSL technologies place upload (upstream) and
download (downstream) data transmissions at frequencies above this 4 kHz window. This
technique is what allows both voice and data transmissions to occur simultaneously on a DSL
service.
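The frequency split just described can be sketched as a lookup table. The 0-4 kHz voice window comes from the text; the upstream and downstream band edges (25 kHz, 138 kHz, 1.104 MHz) are typical ADSL-over-POTS values added here for illustration, not figures from the curriculum:

```python
# Simplified ADSL-over-POTS frequency plan. Band edges above the 4 kHz
# voice window are assumed typical values, not from the text.
BANDS = [
    ("voice",      0,         4_000),
    ("guard",      4_000,     25_000),
    ("upstream",   25_000,    138_000),
    ("downstream", 138_000,   1_104_000),
]

def band_for(freq_hz):
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "unused"

print(band_for(1_000))    # voice
print(band_for(100_000))  # upstream
print(band_for(500_000))  # downstream
```

Because voice and data occupy disjoint bands, simple filters (splitters or microfilters) separate them at each end of the loop.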

     The two basic types of DSL technologies are asymmetric (ADSL) and symmetric
(SDSL). All forms of DSL service are categorized as ADSL or SDSL and there are several
varieties of each type. Asymmetric service provides higher download or downstream
bandwidth to the user than upload bandwidth. Symmetric service provides the same capacity
in both directions.

     The different varieties of DSL provide different bandwidths, with capabilities exceeding
those of a T1 or E1 leased line. The transfer rates are dependent on the actual length of the
local loop and the type and condition of its cabling. For satisfactory service, the loop must be
less than 5.5 kilometers (3.5 miles). DSL availability is far from universal, and there is a
wide variety of types, standards, and emerging standards. It is now a popular choice for
enterprise computer departments to support home workers. Generally, a subscriber cannot
choose to connect to the enterprise network directly, but must first connect to an Internet
service provider (ISP). From here, an IP connection is made through the Internet to the
enterprise. Thus, security risks are incurred. To address security concerns, DSL services
provide capabilities for using Virtual Private Network (VPN) connections to a VPN server,
which is typically located at the corporate site.

Interactive Media Activity

     Checkbox: WAN Technologies Digital Subscriber Line (DSL)

     Upon completing this activity, the student will be able to identify the characteristics
associated with DSL connections.

2.2.8 Cable modem

     Coaxial cable is widely used in urban areas to distribute television signals.      Network
access is available from some cable television networks. This allows for greater bandwidth
than the conventional telephone local loop.

     Enhanced cable modems enable two-way, high-speed data transmissions using the same
coaxial lines that transmit cable television. Some cable service providers are promising data
speeds up to 6.5 times that of T1 leased lines. This speed makes cable an attractive medium
for transferring large amounts of digital information quickly, including video clips, audio files,
and large amounts of data. Information that would take two minutes to download using ISDN
BRI can be downloaded in two seconds through a cable modem connection.

     Cable modems provide an always-on connection and a simple installation. An always-on
cable connection means that connected computers are vulnerable to a security breach at all
times and need to be suitably secured with firewalls. To address security concerns, cable
modem services provide capabilities for using Virtual Private Network (VPN) connections to
a VPN server, which is typically located at the corporate site.

     A cable modem is capable of delivering 30 to 40 Mbps of data on one 6 MHz cable
channel. This is almost 500 times faster than a 56 kbps modem.
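The comparisons quoted in this section can be sanity-checked with simple arithmetic (decimal units, 1 kbps = 1000 bps):

```python
modem_bps = 56_000        # analog modem
bri_bps   = 128_000       # ISDN BRI, both B channels
t1_bps    = 1_544_000     # T1 line rate
cable_bps = 30_000_000    # low end of the quoted cable modem range

print(round(cable_bps / modem_bps))       # 536 -- the "almost 500 times" claim
print(round(6.5 * t1_bps / 1e6, 1))       # 10.0 -- "6.5 times T1" is about 10 Mbps
print(round(60 * bri_bps / 1e6, 2))       # 7.68 -- two ISDN-minutes in two seconds
```

The last line shows that downloading in two seconds what takes ISDN BRI two minutes implies roughly 7.7 Mbps, comfortably inside the quoted cable range.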

     With a cable modem, a subscriber can continue to receive cable television service while
simultaneously receiving data to a personal computer. This is accomplished with the help of a
simple one-to-two splitter.

     Cable modem subscribers must use the ISP associated with the service provider. All the
local subscribers share the same cable bandwidth. As more users join the service, available
bandwidth may be below the expected rate.

Interactive Media Activity

     Checkbox: WAN Technologies Cable Modems

     Upon completing this activity, the student will be able to identify the characteristics
associated with cable modems.

2.3 WAN Design

2.3.1 WAN communication

     WANs are considered to be a set of data links connecting routers on LANs. User end
stations and servers on LANs exchange data. Routers pass data between networks across the
data links.

     Because of cost and legal reasons, a communications provider or a common carrier
normally owns the data links that make up a WAN. The links are made available to
subscribers for a fee and are used to interconnect LANs or connect to remote networks. WAN
data transfer speed (bandwidth) is considerably slower than the 100 Mbps that is common on
a LAN. The charges for link provision are the major cost element of a WAN and the design
must aim to provide maximum bandwidth at acceptable cost. With user pressure to provide
more service access at higher speeds and management pressure to contain cost, determining
the optimal WAN configuration is not an easy task.

     WANs carry a variety of traffic types such as data, voice, and video. The design selected
must provide adequate capacity and transit times to meet the requirements of the enterprise.
Among other specifications, the design must consider the topology of the connections
between the various sites, the nature of those connections, and bandwidth capacity.

     Older WANs often consisted of data links directly connecting remote mainframe
computers.    Today’s WANs, though, connect geographically separated LANs.            End-user
stations, servers, and routers communicate across LANs, and the WAN data links terminate at
local routers. By exchanging Layer 3 address information about directly connected LANs,
routers determine the most appropriate path through the network for the required data streams.
Routers can also provide quality of service (QoS) management, which allots priorities to the
different traffic streams.

     Because the WAN is merely a set of interconnections between LAN based routers, there
are no services on the WAN. WAN technologies function at the lower three layers of the OSI
reference model. Routers determine the destination of the data from the network layer
headers and transfer the packets to the appropriate data link connection for delivery on the
physical connection.

2.3.2 Steps in WAN design

     Designing a WAN can be a challenging task, but approaching the design in a systematic
manner can lead to superior performance at a reduced cost. Many WANs have evolved over
time, therefore many of the guidelines discussed here may not have been considered. Every
time a modification to an existing WAN is considered, the steps in this module should be
followed. WAN modifications may arise from changes such as an expansion in the enterprise
the WAN serves, or accommodation of new work practices and business methods.

     Enterprises install WAN connectivity because there is a need to move data in a timely
manner between external branches. The WAN is there to support the enterprise requirements.
Meeting these requirements incurs costs, such as equipment provisioning and management of
the data links.

     In designing the WAN, it is necessary to know what data traffic must be carried, its
origin, and its destination. WANs carry a variety of traffic types with varying requirements for
bandwidth, latency, and jitter.

     For each pair of end points and for each traffic type, information is needed on the
various traffic characteristics.   Determining this may involve extensive studies of and
consultation with the network users. The design often involves upgrading, extending, or
modifying an existing WAN. Much of the data needed can come from existing network
management statistics.
     Knowing the various end points allows the selection of a topology or layout for the
WAN. The topology will be influenced by geographic considerations but also by requirements
such as availability. A high requirement for availability will require extra links that provide
alternative data paths for redundancy and load balancing.

     With the end points and the links chosen, the necessary bandwidth can be estimated.
Traffic on the links may have varying requirements for latency and jitter. With the bandwidth
availability already determined, suitable link technologies must be selected.

     Finally, installation and operational costs for the WAN can be determined and compared
with the business need driving the WAN provision.

     In practice, following the steps shown in Figure is seldom a linear process. Several
modifications may be necessary before a design is finalized. Continued monitoring and
re-evaluation are also required after installation of the WAN to maintain optimal performance.

Interactive Media Activity

     Drag and Drop: WAN Design Steps

     When the student has completed this activity, the student will be able to identify the
basic steps in designing a WAN.

2.3.3 How to identify and select networking capabilities

     Designing a WAN essentially consists of the following:
           Selecting an interconnection pattern or layout for the links between the various sites
           Selecting the technologies for those links to meet the enterprise requirements at an
            acceptable cost

     Many WANs use a star topology. As the enterprise grows and new branches are added,
the branches are connected back to the head office, producing a traditional star topology.
Star end-points are sometimes cross-connected, creating a mesh or partial mesh topology.
This provides for many possible combinations for interconnections. When designing,
re-evaluating, or modifying a WAN, a topology that meets the design requirements must be selected.

     In selecting a layout, there are several factors to consider. More links will increase the
cost of the network services, and having multiple paths between destinations increases
reliability. Adding more network devices to the data path will increase latency and decrease
reliability. Generally, each packet must be completely received at one node before it can be
passed to the next. A range of dedicated technologies with different features is available for
the data links.

        Technologies that require the establishment of a connection before data can be
transmitted, such as basic telephone, ISDN, or X.25, are not suitable for WANs that require
rapid response time or low latency. Once established, ISDN and other dialup services are low
latency, low jitter circuits. ISDN is often the application of choice for connecting a small
office or home office (SOHO) network to the enterprise network, providing reliable
connectivity and adaptable bandwidth. Unlike cable and DSL, ISDN is an option wherever
modern telephone service is available. ISDN is also useful as a backup link for primary
connections and for providing bandwidth-on-demand connections in parallel with a primary
connection. A feature of these technologies is that the enterprise is only charged a fee when
the circuit is in use.

        The different parts of the enterprise may be directly connected with leased lines, or they
may be connected with an access link to the nearest point-of-presence (POP) of a shared
network. X.25, Frame Relay, and ATM are examples of shared networks. Leased lines will
generally be much longer and therefore more expensive than access links, but are available at
virtually any bandwidth. They provide very low latency and jitter.

        ATM, Frame Relay, and X.25 networks carry traffic from several customers over the
same internal links. The enterprise has no control over the number of links or hops that data
must traverse in the shared network. It cannot control the time data must wait at each node
before moving to the next link. This uncertainty in latency and jitter makes these technologies
unsuitable for some types of network traffic. However, the disadvantages of a shared network
may often be outweighed by the reduced cost. Because several customers are sharing the link,
the cost to each will generally be less than the cost of a direct link of the same capacity.

        Although ATM is a shared network, it has been designed to produce minimal latency and
jitter through the use of high-speed internal links sending easily manageable units of data,
called cells. ATM cells have a fixed length of 53 bytes, 48 for data and 5 for the header. ATM
is widely used for carrying delay-sensitive traffic. Frame Relay may also be used for
delay-sensitive traffic, often using QoS mechanisms to give priority to the more sensitive traffic.

        A typical WAN uses a combination of technologies that are usually chosen based on
traffic type and volume.      ISDN, DSL, Frame Relay, or leased lines are used to connect
individual branches into an area. Frame Relay, ATM, or leased lines are used to connect
external areas back to the backbone. ATM or leased lines form the WAN backbone.

2.3.4 Three-layer design model

     A systematic approach is needed when many locations must be joined. A hierarchical
solution with three layers offers many advantages.

     Imagine an enterprise that is operational in every country of the European Union and has
a branch in every town with a population over 10,000. Each branch has a LAN, and it has
been decided to interconnect the branches. A mesh network is clearly not feasible because
over 400,000 links would be needed for the 900 centers. A simple star will be very difficult
to implement because it needs a router with 900 interfaces at the hub or a single interface that
carries 900 virtual circuits to a packet-switched network.
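The infeasibility of the full mesh follows directly from the link-count formula, n(n-1)/2 for n sites:

```python
def full_mesh_links(n):
    """Number of point-to-point links in a full mesh of n sites."""
    return n * (n - 1) // 2

# For 900 centers a full mesh needs over 400,000 links,
# while a simple star needs only n - 1.
print(full_mesh_links(900))  # 404550
print(900 - 1)               # 899
```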

     Instead, consider a hierarchical design model. A group of LANs in an area are
interconnected, several areas are interconnected to form a region, and the various regions are
interconnected to form the core of the WAN.

     The area could be based on the number of locations to be connected with an upper limit
of between 30 and 50. The area would have a star topology, with the hubs of the stars linked
to form the region.    Regions could be geographic, connecting between three and ten areas,
and the hub of each region could be linked point-to-point.
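Using the upper bounds from the text (up to 50 sites per area, up to 10 areas per region), the reach of the three-layer model is easy to estimate. The number of regions below (20) is an assumed figure for illustration:

```python
sites_per_area   = 50   # upper limit from the text
areas_per_region = 10   # upper limit from the text
regions          = 20   # assumed for illustration

sites = regions * areas_per_region * sites_per_area
# A tree of n nodes needs only n - 1 links (ignoring redundant paths),
# versus roughly 50 million for a full mesh of the same sites.
links = sites - 1

print(sites)  # 10000 branch sites
print(links)  # 9999 links
```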

     This three-layer model follows the hierarchical design used in telephone systems. The
links connecting the various sites in an area that provide access to the enterprise network are
called the access links or access layer of the WAN. Traffic between areas is distributed by the
distribution links, and is moved onto the core links for transfer to other regions when required.

     This hierarchy is often useful when the network traffic mirrors the enterprise branch
structure and is divided into regions, areas, and branches. It is also useful when there is a
central service to which all branches must have access, but traffic levels are insufficient to
justify direct connection of a branch to the service.

     The LAN at the center of the area may have servers providing area-based as well as
local service. Depending on the traffic volumes and types, the access connections may be dial
up, leased, or Frame Relay. Frame Relay facilitates some meshing for redundancy without
requiring additional physical connections. Distribution links could be Frame Relay or ATM,
and the network core could be ATM or leased line.

2.3.5 Other layered design models

     Many networks do not require the complexity of a full three-layer hierarchy. Simpler
hierarchies may be used.

     An enterprise with several relatively small branches that require minimal inter-branch
traffic may choose a one-layer design. Historically this has not been popular because of the
length of the leased lines. Frame Relay, where charges are not distance related, is now making
this a feasible design solution.

     If there is a need for some geographical concentration, a two-level design is appropriate.
This produces a “star of stars” pattern. Again, the pattern chosen based on leased line
technology will be considerably different from the pattern based on Frame Relay technology.

     When planning simpler networks, the three-layer model should still be considered as it
may provide for better network scalability. The hub at the center of a two-layer model is also
a core, but with no other core routers connected to it. Likewise, in a single-layer solution the
area hub serves as the regional hub and the core hub. This allows easy and rapid future growth
as the basic design can be replicated to add new service areas.

2.3.6 Other WAN design considerations

     Many enterprise WANs will have connections to the Internet. This poses security
problems but also provides an alternative for inter-branch traffic.

     Part of the traffic that must be considered during design is going to or coming from the
Internet. Since the Internet probably exists everywhere that the enterprise has LANs, there are
two principal ways that this traffic can be carried. Each LAN can have a connection to its
local ISP, or there can be a single connection from one of the core routers to an ISP. The
advantage of the first method is that traffic is carried on the Internet rather than on the
enterprise network, possibly leading to smaller WAN links. The disadvantage of permitting
multiple links is that the whole enterprise WAN is open to Internet-based attacks. It is also
difficult to monitor and secure the many connection points. A single connection point is more
easily monitored and secured, even though the enterprise WAN will be carrying some traffic
that would otherwise have been carried on the Internet.

     If each LAN in the enterprise has a separate Internet connection, a further possibility is
opened for the enterprise WAN. Where traffic volumes are relatively small, the Internet can be
used as the enterprise WAN with all inter-branch traffic traversing the Internet. Securing the
various LANs will be an issue, but the saving in WAN connections may pay for the security.

     Servers should be placed closest to the locations that will access them most often.
Replication of servers, with arrangement for off-peak inter-server updates, will reduce the
required link capacity. Location of Internet-accessible services will depend on the nature of
the service, anticipated traffic, and security issues. This is a specialized design topic beyond
the scope of this curriculum.


     An understanding of the following key points should have been achieved:
          Differences in the geographic areas served between WANs and LANs
          Similarities in the OSI model layers involved between WANs and LANs
          Familiarity with WAN terminology describing equipment, such as CPE, CO, local
           loop, DTE, DCE, CSU/DSU, and TA
          Familiarity with WAN terminology describing services and standards, such as
            ISDN, Frame Relay, ATM, T1, HDLC, PPP, POTS, BRI, PRI, X.25, and DSL
          Differences between packet-switched and circuit-switched networks
          Differences and similarities between current WAN technologies, including analog
           dialup, ISDN, leased line, X.25, Frame Relay, and ATM services
          Advantages and drawbacks of DSL and cable modem services
          Ownership and cost associated with WAN data links
          Capacity requirements and transit times for various WAN traffic types, such as
           voice, data, and video
          Familiarity with WAN topologies, such as point-to-point, star, and meshed
          Elements of WAN design, including upgrading, extending, modifying an existing
           WAN, and recommending a WAN service to an organization based on its needs
          Advantages offered with a three-layer hierarchical WAN design
          Alternatives for interbranch WAN traffic

                                  Chapter 3             PPP


     This module presents an overview of serial point-to-point connections. It introduces and explains
WAN terminology such as serial transmission, time division multiplexing (TDM),
demarcation, data terminal equipment (DTE) and data circuit-terminating equipment (DCE).
The development and use of high-level data link control (HDLC) encapsulation as well as
methods to configure and troubleshoot a serial interface are presented.

     Point-to-Point Protocol (PPP) is the protocol of choice to implement over a switched
serial WAN connection. It can handle both synchronous and asynchronous communication and
includes error detection. Most importantly it incorporates an authentication process using
either CHAP or PAP. PPP can be used on various physical media, including twisted pair, fiber
optic lines and satellite transmission.

     Described in this module are configuration procedures for PPP, available options, and
troubleshooting concepts. Among the options available is the ability of PPP to use PAP or
CHAP authentication.

     Students completing this module should be able to:
          Explain serial communication
          Describe and give an example of TDM
          Identify the demarcation point in a WAN
          Describe the functions of the DTE and DCE
          Discuss the development of HDLC encapsulation
          Use the encapsulation hdlc command to configure HDLC
           Troubleshoot a serial interface using the show interface and show controllers commands
          Identify the advantages of using PPP
          Explain the functions of the Link Control Protocol (LCP) and the Network Control
           Protocol (NCP) components of PPP
          Describe the parts of a PPP frame
          Identify the three phases of a PPP session
          Explain the difference between PAP and CHAP
          List the steps in the PPP authentication process
          Identify the various PPP configuration options
          Configure PPP encapsulation
          Configure CHAP and PAP authentication
          Use show interface to verify the serial encapsulation
           Troubleshoot any problems with the PPP configuration using the debug ppp command

3.1 Serial Point-to-Point Links

3.1.1 Introduction to serial communication

     WAN technologies are based on serial transmission at the physical layer. This means that
the bits of a frame are transmitted one at a time over the physical medium.

     The bits that make up the Layer 2 frame are signaled one at a time by physical layer
processes onto the physical medium. The signaling methods include Nonreturn to Zero
Level (NRZ-L), High Density Binary 3 (HDB3), and Alternate Mark Inversion (AMI).
These are examples of physical layer encoding standards, similar to Manchester encoding for
Ethernet. Among other things, these signaling methods differentiate between one serial
communication method and another. Some of the many different serial communications
standards are the following:
          RS-232-E
          V.35
          High Speed Serial Interface (HSSI)

3.1.2 Time-division multiplexing

     Time-Division Multiplexing (TDM) is the transmission of several sources of
information using one common channel, or signal, and then the reconstruction of the original
streams at the remote end.

     In the example shown in the first figure, there are three sources of information carried in
turn down the output channel. First, a chunk of information is taken from each input channel.
The size of this chunk may vary, but typically it is either a bit or a byte at a time. Depending
on whether bits or bytes are used, this type of TDM is called bit-interleaving or byte-interleaving.

     Each of the three input channels has its own capacity. For the output channel to be able
to accommodate all the information from the three inputs, the capacity of the output channel
must be no less than the sum of the inputs.
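The interleaving just described can be sketched in a few lines of Python. This is a toy model of byte-interleaved TDM, not a real multiplexer; unequal channels would be padded with None, mirroring the empty-timeslot behavior discussed below:

```python
from itertools import chain, zip_longest

def tdm_multiplex(channels):
    """Byte-interleave several input channels into one output stream.
    Empty timeslots are carried as None (the fillvalue), just as a TDM
    timeslot is transmitted whether or not the input has data."""
    slots = zip_longest(*channels)        # one timeslot per channel per round
    return list(chain.from_iterable(slots))

def tdm_demultiplex(stream, n_channels):
    """Rebuild the original channels from the interleaved stream."""
    return [stream[i::n_channels] for i in range(n_channels)]

a, b, c = list("AAAA"), list("BBBB"), list("CCCC")
muxed = tdm_multiplex([a, b, c])
print("".join(muxed))             # ABCABCABCABC
print(tdm_demultiplex(muxed, 3))  # recovers the three input channels
```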

     In TDM, the output timeslot is always present whether or not the TDM input has any
information to transmit. TDM output can be compared to a train with 32 railroad cars. Each is
owned by a different freight company and every day the train leaves with the 32 cars attached.
If one of the companies has product to send, the car is loaded. If the company has nothing to
send, the car remains empty, but it is still part of the train.

     TDM is a physical layer concept; it has no regard for the nature of the information that is
being multiplexed onto the output channel. TDM is independent of the Layer 2 protocol that
has been used by the input channels.

     One TDM example is Integrated Services Digital Network (ISDN). ISDN basic rate
(BRI) has three channels consisting of two 64 kbps B-channels (B1 and B2), and a 16 kbps
D-channel. The TDM has nine timeslots, which are repeated in sequence.
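Using the BRI channel structure above, the usable multiplexed capacity is simple arithmetic (line framing overhead is not counted here):

```python
# ISDN BRI as a TDM example: two 64 kbps B channels plus one
# 16 kbps D channel interleaved onto the same local loop.
b_channels = 2 * 64   # kbps
d_channel  = 16       # kbps
usable_kbps = b_channels + d_channel
print(usable_kbps)    # 144 kbps of multiplexed capacity
```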

3.1.3 Demarcation point

     The demarcation point, or "demarc" as it is commonly known, is the point in the
network where the responsibility of the service provider or "telco" ends. In the United States,
a telco provides the local loop into the customer premises and the customer provides the
active equipment such as the channel service unit/data service unit (CSU/DSU) on which the
local loop is terminated. This termination often occurs in a telecommunications closet and the
customer is responsible for maintaining, replacing, or repairing the equipment.

     In other countries around the world, the network terminating unit (NTU) is provided and
managed by the telco. This allows the telco to actively manage and troubleshoot the local loop
with the demarcation point occurring after the NTU. The customer connects a customer
premises equipment (CPE) device, such as a router or frame relay access device, into the
NTU using a V.35 or RS-232 serial interface.

3.1.4 DTE-DCE

     A serial connection has a data terminal equipment (DTE) device at one end of the
connection and a data communications equipment (DCE) device at the other end. The
connection between the two DCEs is the WAN service provider transmission network. The
CPE, which is generally a router, is the DTE. Other DTE examples could be a terminal,
computer, printer, or fax machine. The DCE, commonly a modem or CSU/DSU, is the device
used to convert the user data from the DTE into a form acceptable to the WAN service
provider transmission link. This signal is received at the remote DCE, which decodes the
signal back into a sequence of bits. This sequence is then signaled to the remote DTE.

     Many standards have been developed to allow DTEs to communicate with DCEs. The
Electronic Industries Association (EIA) and the International Telecommunication Union
Telecommunications Standardization Sector (ITU-T) have been most active in the
development of these standards.

     The DTE-DCE interface for a particular standard defines the following specifications:
          Mechanical/physical – Number of pins and connector type
          Electrical – Defines voltage levels for 0 and 1
          Functional – Specifies the functions that are performed by assigning meanings to
           each of the signaling lines in the interface
          Procedural – Specifies the sequence of events for transmitting data

     If two DTEs must be connected together, like two computers or two routers in the lab, a
special cable called a null-modem is necessary to eliminate the need for a DCE. For
synchronous connections, where a clock signal is needed, either an external device or one of
the DTEs must generate the clock signal.

     The synchronous serial port on a router is configured as DTE or DCE depending on the
attached cable, which is ordered as either DTE or DCE to match the router configuration. If
the port is configured as DTE, which is the default setting, external clocking is required from
the CSU/DSU or other DCE device.

     The cable for the DTE to DCE connection is a shielded serial transition cable. The router
end of the shielded serial transition cable may be a DB-60 connector, which connects to the
DB-60 port on a serial WAN interface card. The other end of the serial transition cable is
available with the connector appropriate for the standard that is to be used. The WAN
provider or the CSU/DSU usually dictates this cable type. Cisco devices support the
EIA/TIA-232, EIA/TIA-449, V.35, X.21, and EIA/TIA-530 serial standards.

     To support higher densities in a smaller form factor, Cisco has introduced a smart serial
cable. The serial end of the smart serial cable is a 26-pin connector significantly more
compact than the DB-60 connector.

Interactive Media Activity

     PhotoZoom: DCE/DTE Cable

     In this PhotoZoom, the student will view DCE and DTE cable.

3.1.5 HDLC encapsulation

     Initially, serial communications were based on character-oriented protocols. Bit-oriented
protocols were more efficient but they were also proprietary. In 1979, the ISO agreed on
HDLC as a standard bit-oriented data link layer protocol that encapsulates data on
synchronous serial data links. This standardization led to other committees adopting it and
extending the protocol. Since 1981, ITU-T has developed a series of HDLC derivative
protocols. The following examples of derivative protocols are called link access protocols:
          Link Access Procedure, Balanced (LAPB) for X.25
          Link Access Procedure on the D channel (LAPD) for ISDN
          Link Access Procedure for Modems (LAPM) and PPP for modems
          Link Access Procedure for Frame Relay (LAPF) for Frame Relay

     HDLC uses synchronous serial transmission providing error-free communication
between two points. HDLC defines a Layer 2 framing structure that allows for flow control
and error control using acknowledgments and a windowing scheme. Each frame has the same
format, whether it is a data frame or a control frame.

     Standard HDLC does not inherently support multiple protocols on a single link, as it
does not have a way to indicate which protocol is being carried. Cisco offers a proprietary
version of HDLC. The Cisco HDLC frame uses a proprietary ‘type’ field that acts as a
protocol field. This field enables multiple network layer protocols to share the same serial link.
HDLC is the default Layer 2 protocol for Cisco router serial interfaces.

     HDLC defines the following three types of frames, each with a different control field:
          Information frames (I-frames) – Carry the data to be transmitted for the station.
           Additional flow and error control data may be piggybacked on an information
           frame.
          Supervisory frames (S-frames) – Provide request/response mechanisms when
           piggybacking is not used.
          Unnumbered frames (U-frames) – Provide supplemental link control functions,
           such as connection setup. The code field identifies the U-frame type.

     The first one or two bits of the control field serve to identify the frame type. In the
control field of an Information (I) frame, the send-sequence number refers to the number of
the frame to be sent next. The receive-sequence number provides the number of the frame to
be received next. Both sender and receiver maintain send and receive sequence numbers.

3.1.6 Configuring HDLC encapsulation

     The default encapsulation method used by Cisco devices on synchronous serial lines is
Cisco HDLC. If the serial interface is configured with another encapsulation protocol, and the
encapsulation must be changed back to HDLC, enter the interface configuration mode of the
serial interface. Then enter the encapsulation hdlc command to specify the encapsulation
protocol on the interface.
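
     These steps can be entered as follows, with serial 0/0 as an example interface:
               Router#configure terminal
               Router(config)#interface serial 0/0
               Router(config-if)#encapsulation hdlc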

     Cisco HDLC is a point-to-point protocol that can be used on leased lines between two
Cisco devices. When communicating with a non-Cisco device, synchronous PPP is a more
viable option.

3.1.7 Troubleshooting a serial interface

     The output of the show interfaces serial command displays information specific to serial
interfaces. When HDLC is configured, “Encapsulation HDLC” should be reflected in the
output. When PPP is configured, "Encapsulation PPP" should be seen in the output.
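
     For example, a correctly operating HDLC interface might display output that begins as
follows. The interface number is an example and the output is abridged:
               Router#show interfaces serial 0/0
               Serial0/0 is up, line protocol is up
                 Encapsulation HDLC, loopback not set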

     Five possible problem states can be identified in the interface status line of the show
interfaces serial display:
          Serial x is down, line protocol is down
          Serial x is up, line protocol is down
          Serial x is up, line protocol is up (looped)
          Serial x is up, line protocol is down (disabled)
          Serial x is administratively down, line protocol is down

     The show controllers command is another important diagnostic tool when
troubleshooting serial lines. The show controllers output indicates the state of the interface
channels and whether a cable is attached to the interface. In Figure , serial interface 0/0 has a
V.35 DTE cable attached. The command syntax varies, depending on platform. For serial
interfaces on Cisco 7000 series routers, use the show controllers cbus command.

     If the electrical interface output is shown as UNKNOWN, instead of V.35, EIA/TIA-449,
or some other electrical interface type, an improperly connected cable is the likely problem. A
problem with the internal wiring of the card is also possible. If the electrical interface is
unknown, the corresponding display for the show interfaces serial <X> command will show
that the interface and line protocol are down.

     Following are some debug commands that are useful when troubleshooting serial and
WAN problems:
          debug serial interface – Verifies whether HDLC keepalive packets are
           incrementing. If they are not, a possible timing problem exists on the interface card
           or in the network.
          debug arp – Indicates whether the router is sending information about or learning
           about routers (with ARP packets) on the other side of the WAN cloud. Use this
            command when some nodes on a TCP/IP network are responding, but others are
            not.
         debug frame-relay lmi – Obtains Local Management Interface (LMI) information
          which is useful for determining whether a Frame Relay switch and a router are
          sending and receiving LMI packets.
         debug frame-relay events – Determines whether exchanges are occurring between
          a router and a Frame Relay switch.
         debug ppp negotiation – Shows Point-to-Point Protocol (PPP) packets transmitted
          during PPP startup where PPP options are negotiated.
         debug ppp packet – Shows PPP packets being sent and received. This command
          displays low-level packet dumps.
         debug ppp – Shows PPP errors, such as illegal or malformed frames, associated
          with PPP connection negotiation and operation.
         debug ppp authentication – Shows PPP Challenge Handshake Authentication
          Protocol (CHAP) and Password Authentication Protocol (PAP) packet exchanges.

     Caution: Debugging output is assigned high priority in the CPU process and can render
the system unusable. For this reason, debug commands should only be used to troubleshoot
specific problems or during troubleshooting sessions with Cisco technical support staff. It is
good practice to use debug commands during periods of low network traffic and when the
fewest users are online. Debugging during these periods decreases the likelihood that
increased debug command processing overhead will affect system use.

Lab Activity

     Lab Exercise: Troubleshooting a Serial Interface

     In this lab, the students will configure a serial interface on the London and Paris routers.

Lab Activity

     e-Lab Activity: Troubleshooting a Serial Interface

     In this lab, the student will configure a serial interface on the London and Paris routers.

3.2 PPP Authentication

3.2.1 PPP layered architecture

     PPP uses a layered architecture. A layered architecture is a logical model, design, or
blueprint that aids in communication between interconnecting layers. The Open System
Interconnection (OSI) model is the layered architecture used in networking. PPP provides a
method for encapsulating multi-protocol datagrams over a point-to-point link, and uses the
data link layer for testing the connection. Therefore PPP is made up of two sub-protocols:
          Link Control Protocol – Used for establishing the point-to-point link.
          Network Control Protocol – Used for configuring the various network layer
           protocols.

     PPP can be configured on the following types of physical interfaces:
          Asynchronous serial
          Synchronous serial
          High-Speed Serial Interface (HSSI)
          Integrated Services Digital Network (ISDN)

     PPP uses Link Control Protocol (LCP) to negotiate and setup control options on the
WAN data link. PPP uses the Network Control Protocol (NCP) component to encapsulate and
negotiate options for multiple network layer protocols. The LCP sits on top of the physical
layer and is used to establish, configure, and test the data-link connection.

     PPP also uses LCP to automatically agree upon encapsulation format options such as:
          Authentication – Authentication options require that the calling side of the link
           enter information to help ensure the caller has the network administrator's
           permission to make the call. Peer routers exchange authentication messages. Two
           authentication choices are Password Authentication Protocol (PAP) and Challenge
           Handshake Authentication Protocol (CHAP).
          Compression – Compression options increase the effective throughput on PPP
           connections by reducing the amount of data in the frame that must travel across the
           link. The protocol decompresses the frame at its destination. Two compression
           protocols available in Cisco routers are Stacker and Predictor.
          Error detection – Error detection mechanisms with PPP enable a process to identify
           fault conditions. The Quality and Magic Number options help ensure a reliable,
           loop-free data link.
          Multilink – Cisco IOS Release 11.1 and later supports multilink PPP. This
           alternative provides load balancing over the router interfaces that PPP uses.
          PPP Callback – To further enhance security, Cisco IOS Release 11.1 offers
           callback over PPP. With this LCP option, a Cisco router can act as a callback client
           or as a callback server. The client makes the initial call, requests that it be called
           back, and terminates its initial call. The callback router answers the initial call and
           makes the return call to the client based on its configuration statements.

     LCP will also do the following:
          Handle varying limits on packet size
          Detect common misconfiguration errors
          Terminate the link
          Determine when a link is functioning properly or when it is failing

     PPP permits multiple network layer protocols to operate on the same communications
link. For every network layer protocol used, a separate Network Control Protocol (NCP) is
provided. For example, Internet Protocol (IP) uses the IP Control Protocol (IPCP), and
Internetwork Packet Exchange (IPX) uses the Novell IPX Control Protocol (IPXCP). NCPs
include functional fields containing standardized codes to indicate the network layer protocol
type that PPP encapsulates.

     The fields of a PPP frame are as follows:
          Flag – Indicates the beginning or end of a frame and consists of the binary
           sequence 01111110.
          Address – Consists of the standard broadcast address, which is the binary sequence
           11111111. PPP does not assign individual station addresses.
          Control – 1 byte that consists of the binary sequence 00000011, which calls for
           transmission of user data in an unsequenced frame. A connection-less link service
           similar to that of Logical Link Control (LLC) Type 1 is provided.
          Protocol – 2 bytes that identify the protocol encapsulated in the data field of the
           frame.
          Data – 0 or more bytes that contain the datagram for the protocol specified in the
           protocol field. The end of the data field is found by locating the closing flag
           sequence and allowing 2 bytes for the frame check sequence (FCS) field. The
           default maximum length of the data field is 1,500 bytes.
          FCS – Normally 16 bits or 2 bytes that refers to the extra characters added to a
           frame for error control purposes.

Interactive Media Activity

     Drag and Drop: PPP Layered Architecture

     When the student has completed this activity, the student will understand the basic PPP
layered architecture.

3.2.2 Establishing a PPP session

     PPP session establishment progresses through three phases. These phases are link
establishment, authentication, and the network layer protocol phase. LCP frames are used to
accomplish the work of each of the LCP phases. The following three classes of LCP frames
are used in a PPP session:
          Link-establishment frames are used to establish and configure a link.
          Link-termination frames are used to terminate a link.
          Link-maintenance frames are used to manage and debug a link.

     The three PPP session establishment phases are:
          Link-establishment phase – In this phase each PPP device sends LCP frames to
           configure and test the data link. LCP frames contain a configuration option field
           that allows devices to negotiate the use of options such as the maximum
           transmission unit (MTU), compression of certain PPP fields, and the
           link-authentication protocol. If a configuration option is not included in an LCP
           packet, the default value for that configuration option is assumed.        Before any
           network layer packets can be exchanged, LCP must first open the connection and
           negotiate the configuration parameters. This phase is complete when a
           configuration acknowledgment frame has been sent and received.
          Authentication phase (optional) – After the link has been established and the
           authentication protocol decided on, the peer may be authenticated. Authentication,
           if used, takes place before the network layer protocol phase is entered. As part of
           this phase, LCP also allows for an optional link-quality determination test. The link
           is tested to determine whether the link quality is good enough to bring up network
           layer protocols.
          Network layer protocol phase – In this phase the PPP devices send NCP packets to
           choose and configure one or more network layer protocols, such as IP. Once each
           of the chosen network layer protocols has been configured, packets from each
           network layer protocol can be sent over the link. If LCP closes the link, it informs
           the network layer protocols so that they can take appropriate action. The show
           interfaces command reveals the LCP and NCP states under PPP configuration.

     The PPP link remains configured for communications until LCP or NCP frames close
the link or until an inactivity timer expires or a user intervenes.

Interactive Media Activity

     Drag and Drop: Establishing a PPP Session

     When the student has completed this activity, the student will know the three steps in
establishing a PPP Session.

Lab Activity

     e-Lab Activity: show interfaces

     In this activity, the student will demonstrate how to use the show interfaces command to
display statistics for interfaces configured on the router or access server.

3.2.3 PPP authentication protocols

     The authentication phase of a PPP session is optional. After the link has been established
and the authentication protocol chosen, the peer can be authenticated. If it is used,
authentication takes place before the network layer protocol configuration phase begins.

     The authentication options require that the calling side of the link enter authentication
information. This helps to ensure that the user has the permission of the network administrator
to make the call. Peer routers exchange authentication messages.

     When configuring PPP authentication, the network administrator can select Password
Authentication Protocol (PAP) or Challenge Handshake Authentication Protocol (CHAP).
In general, CHAP is the preferred protocol.

3.2.4 Password Authentication Protocol (PAP)

     PAP provides a simple method for a remote node to establish its identity, using a
two-way handshake.          After the PPP link establishment phase is complete, a
username/password pair is repeatedly sent by the remote node across the link until
authentication is acknowledged or the connection is terminated.

     PAP is not a strong authentication protocol. Passwords are sent across the link in clear
text and there is no protection from playback or repeated trial-and-error attacks. The remote
node is in control of the frequency and timing of the login attempts.
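
     A minimal two-router PAP setup might resemble the following. The hostnames and the
password are examples, and the username entry on each router must match the
username/password pair that the peer sends:
               Left(config)#username Right password cisco
               Left(config)#interface serial 0/0
               Left(config-if)#encapsulation ppp
               Left(config-if)#ppp authentication pap
               Left(config-if)#ppp pap sent-username Left password cisco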

3.2.5 Challenge Handshake Authentication Protocol (CHAP)

     CHAP is used at the startup of a link and periodically verifies the identity of the remote
node using a three-way handshake. CHAP is performed upon initial link establishment and is
repeated during the time the link is established.

     After the PPP link establishment phase is complete, the local router sends a "challenge"
message to the remote node. The remote node responds with a value calculated using a
one-way hash function, which is typically Message Digest 5 (MD5). This response is based
on the password and challenge message.         The local router checks the response against its
own calculation of the expected hash value. If the values match, the authentication is
acknowledged, otherwise the connection is immediately terminated.

     CHAP provides protection against playback attack through the use of a variable
challenge value that is unique and unpredictable. Since the challenge is unique and random,
the resulting hash value will also be unique and random. The use of repeated challenges is
intended to limit the time of exposure to any single attack. The local router or a third-party
authentication server is in control of the frequency and timing of the challenges.

Lab Activity

     e-Lab Activity: ppp chap hostname hostname

     In this activity, the student will demonstrate how to use the ppp chap hostname
hostname command to create a pool of dialup routers.

3.2.6 PPP encapsulation and authentication process

     When the encapsulation ppp command is used either PAP or CHAP authentication can
be optionally added. If no authentication is specified the PPP session starts immediately. If
authentication is required the process proceeds through the following steps:
          The method of authentication is determined.
          The local database or security server, which has a username and password database,
           is checked to see if the given username and password pair matches.
          The process checks the authentication response sent back from the local database.
           If it is a positive response, the PPP session is started. If negative, the session is
           terminated.

     The Figure and corresponding Figure detail the CHAP authentication process.
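
     The authentication method in the first step is specified on the interface. Listing two
protocols causes the router to request the first and fall back to the second if the peer
refuses it. The interface and the ordering shown are examples:
               Router(config)#interface serial 0/0
               Router(config-if)#encapsulation ppp
               Router(config-if)#ppp authentication chap pap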

Interactive Media Activity

     Drag and Drop: PPP Encapsulation and Authentication Process

     When the student has completed this activity, the student will know the steps in the PPP
authentication process.

3.3 Configuring PPP

3.3.1 Introduction to configuring PPP

     Configurable aspects of PPP include methods of authentication, compression, error
detection, and whether or not multilink is supported. The following section describes the
different configuration options for PPP.

     Cisco routers that use PPP encapsulation may include the LCP configuration options
described in Figure .

3.3.2 Configuring PPP

         The following example enables PPP encapsulation on serial interface 0/0:
              Router#configure terminal
              Router(config)#interface serial 0/0
              Router(config-if)#encapsulation ppp

         Point-to-point software compression can be configured on serial interfaces that use PPP
encapsulation. Compression is performed in software and might significantly affect system
performance. Compression is not recommended if most of the traffic consists of compressed
files.

         To configure compression over PPP, enter the following commands:
              Router(config)#interface serial 0/0
              Router(config-if)#encapsulation ppp
              Router(config-if)#compress [predictor | stac]

         Enter the following to monitor the data dropped on the link, and avoid frame looping:
              Router(config)#interface serial 0/0
              Router(config-if)#encapsulation ppp
              Router(config-if)#ppp quality percentage

         The following commands perform load balancing across multiple links:
              Router(config)#interface serial 0/0
              Router(config-if)#encapsulation ppp
              Router(config-if)#ppp multilink

Lab Activity

         Lab Exercise: Configuring PPP Encapsulation

         In this lab, the student will configure a serial interface on the Washington and Dublin
routers with the PPP protocol.

Lab Activity

         e-Lab Activity: Configuring PPP Protocol

         In this lab, the student will configure a serial interface on the Washington and Dublin
routers with the PPP protocol.

3.3.3 Configuring PPP authentication
     The procedure outlined in the table describes how to configure PPP encapsulation and
PAP/CHAP authentication protocols.

     Correct configuration is essential, since PAP and CHAP will use these parameters to
authenticate the peer.

     Figure     is an example of a two-way PAP authentication configuration. Both routers
authenticate and are authenticated, so the PAP authentication commands mirror each other.
The PAP username and password that each router sends must match those specified with the
username name password password command of the other router.

     PAP provides a simple method for a remote node to establish its identity using a
two-way handshake. This is done only upon initial link establishment. The hostname on one
router must match the username the other router has configured. The passwords must also
match.

     CHAP is used to periodically verify the identity of the remote node using a 3-way
handshake. The hostname on one router must match the username the other router has
configured. The passwords must also match. This is done upon initial link establishment and
can be repeated any time after the link has been established.
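
     As an illustration, a mirrored CHAP configuration for two routers might look like the
following. The hostnames echo the lab topology, and the shared password, which must be
identical on both routers, is an example:
               Madrid(config)#username Tokyo password pppchap
               Madrid(config)#interface serial 0/0
               Madrid(config-if)#encapsulation ppp
               Madrid(config-if)#ppp authentication chap

               Tokyo(config)#username Madrid password pppchap
               Tokyo(config)#interface serial 0/0
               Tokyo(config-if)#encapsulation ppp
               Tokyo(config-if)#ppp authentication chap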

Lab Activity

     Lab Exercise: Configuring PPP Authentication

     In this lab, the student will configure a serial interface on the Madrid and Tokyo routers.

Lab Activity

     e-Lab Activity: Configuring PPP Authentication

     In this lab, the student will configure a serial interface on the Madrid and Tokyo routers.

Lab Activity

     e-Lab Activity: username name password password

     In this activity, the student will demonstrate how to use the username name password
password command.

3.3.4 Verifying the serial PPP encapsulation configuration

     Use the show interfaces serial command to verify proper configuration of HDLC or PPP
encapsulation. The command output in Figure          illustrates a PPP configuration. When
high-level data link control (HDLC) is configured, "Encapsulation HDLC" should be
reflected in the output of the show interfaces serial command. When PPP is configured, its
Link Control Protocol (LCP) and Network Control Protocol (NCP) states can be checked
using the show interfaces serial command.

     Figure    lists commands used when enabling, configuring and verifying PPP.

Lab Activity

     Lab Exercise: Verifying PPP Configuration

     In this lab, the student will configure a serial interface on the Brasilia and Warsaw
routers with the PPP protocol.

Lab Activity

     e-Lab Activity: Verify PPP Protocol

     In this lab, the student will configure a serial interface on the Brasilia and Warsaw
routers with the PPP protocol.

3.3.5 Troubleshooting the serial encapsulation configuration

     The debug ppp authentication command displays the authentication exchange sequence.
Figure   illustrates the Left router output during CHAP authentication with the router on the
right when debug ppp authentication is enabled. With two-way authentication configured,
each router authenticates the other. Messages appear for both the authenticating process and
the process of being authenticated. Use the debug ppp authentication command to display the
exchange sequence as it occurs.

     Figure    highlights router output for a two-way PAP authentication.

     The debug ppp command is used to display information about the operation of PPP. The
no form of this command disables debugging output.
          Router#debug ppp {authentication | packet | negotiation | error | chap}
          Router#no debug ppp {authentication | packet | negotiation | error | chap}

Lab Activity

     Lab Exercise: Troubleshooting PPP Configuration

     In this lab, the student will configure a serial interface on the London and Paris routers
and troubleshoot the connection.

Lab Activity

     e-Lab Activity: Troubleshooting a Serial Interface

     In this lab, the student will configure a serial interface on the London and Paris routers
then use show commands to troubleshoot connectivity issues.


     An understanding of the following key points should have been achieved:
         Time division multiplexing
         The demarcation point in a WAN
         The definition and functions of the DTE and DCE
         The development of HDLC encapsulation
          Using the encapsulation hdlc command to configure HDLC
          Troubleshooting a serial interface using the show interfaces and show controllers
           commands
         The advantages of using PPP protocol
         The functions of the Link Control Protocol (LCP) and the Network Control
          Protocol (NCP) components of PPP
         The parts of a PPP frame
         The three phases of a PPP session
         The difference between PAP and CHAP
         The steps in the PPP authentication process
         The various options for PPP configuration
         How to configure PPP encapsulation
         How to configure CHAP and PAP authentication
         Using show interface to verify the serial encapsulation
          Troubleshooting problems with the PPP configuration using the debug ppp
           commands

                       Chapter 4              ISDN and DDR


     Integrated Services Digital Network (ISDN) is a network that provides end-to-end
digital connectivity to support a wide range of services including voice and data services.

     ISDN allows multiple digital channels to operate simultaneously through the same
regular phone wiring used for analog lines, but ISDN transmits a digital signal rather than
analog. Latency is much lower on an ISDN line than on an analog line.

     Dial-on-demand routing (DDR) is a technique developed by Cisco that allows the use of
existing telephone lines to form a wide-area network (WAN), instead of using separate,
dedicated lines. Public switched telephone networks (PSTNs) are involved in this process.

     DDR is used when a constant connection is not needed, thus reducing costs. DDR
defines the process of a router connecting using a dialup network when there is traffic to send,
and then disconnecting when the transfer is complete.

     Students completing this module should be able to:
           Define the ISDN standards used for addressing, concepts, and signaling
           Describe how ISDN uses the physical and data link layers
           List the interfaces and reference points for ISDN
           Configure the router ISDN interface
           Determine what traffic is allowed when configuring DDR
           Configure static routes for DDR
           Choose the correct encapsulation type for DDR
           Be able to determine and apply an access list affecting DDR traffic
           Configure dialer interfaces

4.1 ISDN Concepts

4.1.1 Introducing ISDN

     There are several WAN technologies used to provide network access from remote
locations. One of these technologies is ISDN. ISDN can be used as a solution to the low
bandwidth problems that small offices or dial-in users have with traditional telephone dial-in
connections.

     The traditional PSTN was based on an analog connection between the customer
premises and the local exchange, also called the local loop. The analog circuits introduce
limitations on the bandwidth that can be obtained on the local loop. Circuit restrictions do not
permit analog bandwidths greater than approximately 3000 Hz. ISDN technology permits the
use of digital data on the local loop, providing better access speeds for the remote users.

     Telephone companies developed ISDN with the intention of creating a totally digital
network. ISDN allows digital signals to be transmitted over existing telephone wiring. This
became possible when the telephone company switches were upgraded to handle digital
signals. ISDN is generally used for telecommuting and networking small and remote offices
into the corporate LAN.

     Telephone companies developed ISDN as part of an effort to standardize subscriber
services. This included the User-Network Interface (UNI), better known as the local loop. The
ISDN standards define the hardware and call setup schemes for end-to-end digital
connectivity. These standards help achieve the goal of worldwide connectivity by ensuring
that ISDN networks easily communicate with one another. In an ISDN network, the digitizing
function is done at the user site rather than the telephone company.

     ISDN brings digital connectivity to local sites. The following list provides some of the
benefits of ISDN:
          Carries a variety of user traffic signals, including data, voice, and video
          Offers much faster call setup than modem connections
          B channels provide a faster data transfer rate than modems
          B channels are suitable for negotiated Point-to-Point Protocol (PPP) links

     ISDN is a versatile service able to carry voice, video, and data traffic. It is possible to
use multiple channels to carry different types of traffic over a single connection.

     ISDN uses out-of-band signaling, the delta (D channel), for call setup and signaling. To
make a normal telephone call, the user dials the number one digit at a time. Once all the
numbers are received, the call can be placed to the remote user. ISDN delivers the numbers to
the switch at D-channel rates, thus reducing the time it takes to set up the call.

     ISDN also provides more bandwidth than a traditional 56 kbps dialup connection. ISDN
uses bearer channels, also called B channels, as clear data paths. Each B channel provides 64
kbps of bandwidth. With multiple B channels, ISDN offers more bandwidth for WAN
connections than some leased services. An ISDN connection with two B channels would
provide a total usable bandwidth of 128 kbps.

     Each ISDN B channel can make a separate serial connection to any other site in the
ISDN network. Since PPP operates over both synchronous and asynchronous serial links,
ISDN lines can be used in conjunction with PPP encapsulation.

4.1.2 ISDN standards and access methods

     Work on standards for ISDN began in the late 1960s. A comprehensive set of ISDN
recommendations was published in 1984 and is continuously updated by the International
Telecommunication Union Telecommunication Standardization Sector (ITU-T), formerly
known as the Consultative Committee for International Telegraph and Telephone (CCITT).
The ISDN standards are a set of protocols that encompass digital telephony and data
communications. The ITU-T groups and organizes the ISDN protocols according to the
following general topic areas:
          E Protocols – Recommend telephone network standards for ISDN. For example,
           the E.164 protocol describes international addressing for ISDN.
          I Protocols – Deal with concepts, terminology, and general methods. The I.100
           series includes general ISDN concepts and the structure of other I-series
           recommendations. I.200 deals with service aspects of ISDN. I.300 describes
           network aspects. I.400 describes how the UNI is provided.
          Q Protocols – Cover how switching and signaling should operate. The term
           signaling in this context means the process of establishing an ISDN call.

     ISDN standards define two main channel types, each with a different transmission rate.
The bearer channel, or B channel, is defined as a clear digital path of 64 kbps. It is said to be
clear because it can be used to transmit any type of digitized data in full-duplex mode. For
example, a digitized voice call can be transmitted on a single B channel. The second channel
type is called a delta channel, or D channel. The D channel operates at 16 kbps on the Basic
Rate Interface (BRI) and at 64 kbps on the Primary Rate Interface (PRI). The D channel is used to
carry control information for the B channel.

     When a TCP connection is established, there is an exchange of information called the
connection setup. This information is exchanged over the path on which the data will
eventually be transmitted. Both the control information and the data share the same pathway.
This is called in-band signaling. ISDN however, uses a separate channel for control
information, the D channel. This is called out-of-band signaling.

     ISDN specifies two standard access methods, BRI and PRI. A single BRI or PRI
interface provides a multiplexed bundle of B and D channels.

BRI uses two 64 kbps B channels plus one 16 kbps D channel. BRI operates with many
Cisco routers. Because it uses two B channels and one D channel, BRI is sometimes referred
to as 2B+D.

     The B channels can be used for digitized speech transmission. In this case, specialized
methods are used for the voice encoding. Also, the B channels can be used for relatively
high-speed data transport. In this mode, the information is carried in frame format, using
either high-level data link control (HDLC) or PPP as the Layer 2 protocol. PPP is more robust
than HDLC because it provides a mechanism for authentication and negotiation of compatible
link and protocol configuration.

     ISDN is considered a circuit-switched connection. The B channel is the elemental
circuit-switching unit.

     The D channel carries signaling messages, such as call setup and teardown, to control
calls on B channels. Traffic over the D channel employs the Link Access Procedure on the D
Channel (LAPD) protocol. LAPD is a data link layer protocol based on HDLC.

     In North America and Japan, PRI offers twenty-three 64 kbps B channels and one 64
kbps D channel. A PRI offers the same service as a T1 or DS1 connection. In Europe and
much of the rest of the world, PRI offers 30 B channels and one D channel in order to offer
the same level of service as an E1 circuit. PRI uses a Data Service Unit/Channel Service Unit
(DSU/CSU) for T1/E1 connections.

4.1.3 ISDN 3-layer model and protocols

     ISDN utilizes a suite of ITU-T standards spanning the physical, data link, and network
layers of the OSI reference model:
          The ISDN BRI and PRI physical layer specifications are defined in ITU-T I.430
           and I.431, respectively.
          The ISDN data link specification is based on LAPD and is formally specified in
           the following:
                ITU-T Q.920
                ITU-T Q.921
                ITU-T Q.922
                ITU-T Q.923

     The ISDN network layer is defined in ITU-T Q.930, also known as I.450 and ITU-T
Q.931, also known as I.451. These standards specify user-to-user, circuit-switched, and
packet-switched connections.

     BRI service is provided over a local copper loop that traditionally carries analog phone
service. While there is only one physical path for a BRI, there are three separate information
paths, 2B+D. Information from the three channels is multiplexed into the one physical path.

     ISDN physical layer, or Layer 1, frame formats differ depending on whether the frame is
outbound or inbound. If the frame is outbound, it is sent from the terminal to the network.
Outbound frames use the TE frame format. If the frame is inbound, it is sent from the network
to the terminal. Inbound frames use the NT frame format.

     Each 48-bit frame contains two 24-bit sample groups, each containing the following:
          8 bits from the B1 channel
          8 bits from the B2 channel
          2 bits from the D channel
          6 bits of overhead

     ISDN BRI frames contain 48 bits, and 4000 of these frames are transmitted every
second. Each B channel, B1 and B2, contributes 16 bits per frame and therefore has a
capacity of 16 × 4000 = 64 kbps, while the D channel contributes 4 bits per frame for a
capacity of 4 × 4000 = 16 kbps. This accounts for 144 kbps of the total ISDN BRI physical
interface bit rate of 192 kbps. The remainder of the bit rate consists of the overhead bits
that are required for transmission.

     The overhead bits of an ISDN physical layer frame are used as follows:
          Framing bit – Provides synchronization
          Load balancing bit – Adjusts the average bit value
          Echo of previous D channel bits – Used for contention resolution when several
           terminals on a passive bus contend for a channel
          Activation bit – Activates devices
          Spare bit – Unassigned

     Note that the physical bit rate for the BRI interface is 48*4000 = 192 kbps. The effective
rate is 144 kbps = 64 kbps + 64 kbps + 16 kbps (2B+D).

     Layer 2 of the ISDN signaling channel is LAPD. LAPD is similar to HDLC. LAPD is
used across the D channel to ensure that control and signaling information is received and
flows properly.

     The LAPD flag and control fields are identical to those of HDLC. The LAPD address
field is 2 bytes long. The first address field byte contains the service access point identifier
(SAPI), which identifies the portal at which LAPD services are provided to Layer 3. The
command/response bit (C/R), indicates whether the frame contains a command or a response.
The second byte contains the terminal endpoint identifier (TEI). Each piece of terminal
equipment on the customer premises needs a unique identifier. The TEI may be statically
assigned at installation, or the switch may dynamically assign it when the equipment is started
up. If the TEI is statically assigned during installation, the TEI is a number ranging from 0 to
63. Dynamically assigned TEIs range from 64 to 126. A TEI of 127, or all 1s, indicates a
broadcast identifier.

4.1.4 ISDN functions

     Several exchanges must occur for one router to connect to another using ISDN. To
establish an ISDN call, the D channel is used between the router and the ISDN switch.
Signaling System 7 (SS7) is used between the switches within the service provider network.

     The D channel between the router and the ISDN switch is always up. Q.921 describes
the ISDN data-link processes of LAPD, which functions like Layer 2 processes in the OSI
reference model. The D channel is used for call control functions such as call setup, signaling,
and termination. These functions are implemented in the Q.931 protocol. Q.931 specifies OSI
reference model Layer 3 functions. The Q.931 standard recommends a network layer
connection between the terminal endpoint and the local ISDN switch, but it does not impose
an end-to-end recommendation. Because some ISDN switches were developed before Q.931
was standardized, the various ISDN providers and switch types can and do use various
implementations of Q.931. Because switch types are not standard, routers must have
commands in their configuration specifying the ISDN switch to which they are connecting.

     The following sequence of events occurs during the establishment of a BRI or PRI call:
            The D channel is used to send the called number to the local ISDN switch.
            The local switch uses the SS7 signaling protocol to set up a path and pass the
             called number to the remote ISDN switch.
            The remote ISDN switch signals the destination over the D channel.
          The destination ISDN NT-1 device sends the remote ISDN switch a call-connect
           message.
          The remote ISDN switch uses SS7 to send a call-connect message to the local ISDN
           switch.
          The local ISDN switch connects one B channel end-to-end, leaving the other B
             channel available for a new conversation or data transfer. Both B channels can be
             used simultaneously.

Interactive Media Activity

     Drag and Drop: ISDN Functions

     When the student has completed this activity, the student will be able to correctly
identify the ISDN establishment cycle.

4.1.5 ISDN reference points

     ISDN standards define functional groups as devices or pieces of hardware that enable
the user to access the services of the BRI or PRI. Vendors can create hardware that supports
one or more functions. ISDN specifications define four reference points that connect one
ISDN device to another. Each device in an ISDN network performs a specific task to
facilitate end-to-end connectivity.

     To connect devices that perform specific functions, the interface between the two
devices needs to be well defined. These interfaces are called reference points. The reference
points that affect the customer side of the ISDN connection are as follows:
          R – References the connection between a non-ISDN-compatible device, known as
           Terminal Equipment type 2 (TE2), and a Terminal Adapter (TA), for example over
           an RS-232 serial interface.
          S – References the points that connect into the customer switching device Network
           Termination type 2 (NT2) and enables calls between the various types of customer
           premises equipment.
          T – Electrically identical to the S interface, it references the outbound connection
           from the NT2 to the ISDN network or Network Termination type 1 (NT1).
          U – References the connection between the NT1 and the ISDN network owned by
           the telephone company.

     Because the S and T references are electrically similar, some interfaces are labeled S/T
interfaces. Although they perform different functions, the port is electrically the same and can
be used for either function.

Interactive Media Activity

     Drag and Drop: ISDN Reference Points

     When the student has completed this activity, the student will be able to correctly
identify the ISDN reference points.

4.1.6 Determining the router ISDN interface

     In the United States, the customer is required to provide the NT1. In Europe and various
other countries, the telephone company provides the NT1 function and presents an S/T
interface to the customer. In these configurations, the customer is not required to supply a
separate NT1 device or integrated NT1 function in the terminal device. Equipment such as
router ISDN modules and interfaces must be ordered accordingly.
     To select a Cisco router with the appropriate ISDN interface, do the following:

          Determine whether the router supports ISDN BRI. Look on the back of the router for
           a BRI connector or a BRI WAN Interface Card (WIC).
          Determine the provider of the NT1. An NT1 terminates the local loop to the central
           office (CO) of the ISDN service provider. In the United States, the NT1 is
           Customer Premise Equipment (CPE), meaning that it is the responsibility of the
           customer. In Europe, the service provider typically supplies the NT1.
          If the NT1 is CPE, make sure the router has a U interface. If the router has an S/T
           interface, then it will need an external NT1 to connect to the ISDN provider.
          If the router has a connector labeled BRI then it is already ISDN-enabled. With a
           native ISDN interface already built in, the router is a TE1. If the router has a U
           interface, it also has a built-in NT1.

     If the router does not have a connector labeled BRI, and it is a fixed-configuration, or
non-modular router, then it must use an existing serial interface. With non-native ISDN
interfaces such as serial interfaces, an external TA device must be attached to the serial
interface to provide BRI connectivity. If the router is modular it may be possible to upgrade to
a native ISDN interface, providing it has an available slot.

     Caution: A router with a U interface should never be connected to an NT1 as it will
damage the interface.

4.1.7 ISDN switch types

     Routers must be configured to identify the type of switch with which they will
communicate. Available ISDN switch types vary, depending in part on the country in which
the switch is being used. As a consequence of various implementations of Q.931, the D
channel signaling protocol used on ISDN switches varies from vendor to vendor.

     Services offered by ISDN carriers vary considerably from country to country or region
to region. Like modems, each switch type operates slightly differently, and has a specific set
of call setup requirements. Before the router can be connected to an ISDN service, it must be
configured for the switch type used at the CO. This information must be specified during
router configuration so the router can communicate with the switch, place ISDN network
level calls, and send data.

     In addition to knowing the switch type the service provider is using, it may also be
necessary to know what service profile identifiers (SPIDs) are assigned by the telco. A SPID
is a number provided by the ISDN carrier to identify the line configuration of the BRI service.
SPIDs allow multiple ISDN devices, such as voice and data equipment, to share the local loop.
SPIDs are required by DMS-100 and National ISDN-1 switches.

     SPIDs are used only in North America and Japan. The ISDN carrier provides a SPID to
identify the line configuration of the ISDN service. In many cases when configuring a router,
the SPIDs will need to be entered.

     Each SPID points to line setup and configuration information. SPIDs are a series of
characters that usually resemble telephone numbers. SPIDs identify each B channel to the
switch at the central office. Once identified, the switch links the available services to the
connection. Remember, ISDN is typically used for dialup connectivity. The SPIDs are
processed when the router initially connects to the ISDN switch. If SPIDs are necessary, but
are not configured correctly, the initialization will fail, and the ISDN services cannot be used.

4.2 ISDN Configuration

4.2.1 Configuring ISDN BRI

     The isdn switch-type switch-type command can be configured in global or interface
configuration mode to specify the provider ISDN switch type.

     Configuring the isdn switch-type command in the global configuration mode sets the
ISDN switch type identically for all ISDN interfaces. Individual interfaces may be configured,
after the global configuration command, to reflect an alternate switch type.
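
     As an illustrative sketch, a global switch type with a per-interface override might look
like the following. The basic-ni and basic-5ess keywords and the interface number are
assumptions chosen for the example:

              Router(config)#isdn switch-type basic-ni
              Router(config)#interface bri0/1
              Router(config-if)#isdn switch-type basic-5ess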

     When the ISDN service is installed, the service provider will issue information about the
switch type and SPIDs. SPIDs are used to define the services available to individual ISDN
subscribers. Depending on the switch type, these SPIDs may have to be added to the
configuration. National ISDN-1 and DMS-100 ISDN switches require SPIDs to be configured,
but the AT&T 5ESS switch does not. SPIDs must also be specified when using the Adtran ISDN
simulator commonly found in lab environments.

     The format of the SPIDs can vary depending on the ISDN switch type and specific
provider requirements. Use the isdn spid1 and isdn spid2 interface configuration mode
commands to specify the SPID required by the ISDN network when the router initiates a call
to the local ISDN exchange.

     Configuration of ISDN BRI is a mix of global and interface commands. To configure
the ISDN switch type, use the isdn switch-type command in global configuration mode:

             Router(config)#isdn switch-type switch-type
     The argument switch-type indicates the service provider switch type. To disable the
switch on the ISDN interface, specify isdn switch-type none. The following example
configures the National ISDN-1 switch type in the global configuration mode:

              Router(config)#isdn switch-type basic-ni

     To define SPIDs, use the isdn spid1 and isdn spid2 commands in interface configuration
mode. These commands define the SPID numbers that have been assigned for the B channels:

              Router(config-if)#isdn spid1 spid-number [ldn]
              Router(config-if)#isdn spid2 spid-number [ldn]

     The optional ldn argument defines a local dial directory number. On most switches, the
number must match the called party information coming in from the ISDN switch. SPIDs are
specified in interface configuration mode. To enter interface configuration mode, use the
interface bri command in the global configuration mode:

              Router(config)#interface bri slot/port
              Router(config)#interface bri0/0
              Router(config-if)#isdn spid1 51055540000001 5554000
              Router(config-if)#isdn spid2 51055540010001 5554001
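
     Taken together, a minimal BRI configuration for a National ISDN-1 switch might look
like the following sketch, reusing the SPID values shown above:

              Router(config)#isdn switch-type basic-ni
              Router(config)#interface bri0/0
              Router(config-if)#isdn spid1 51055540000001 5554000
              Router(config-if)#isdn spid2 51055540010001 5554001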

Lab Activity

     Lab Exercise: Configuring ISDN BRI (U-Interface)

     In this lab, the student will configure an ISDN router to make a successful connection to
a local ISDN switch.

4.2.2 Configuring ISDN PRI

     ISDN PRI is delivered over a leased T1 or E1 line. The main PRI configuration tasks are
as follows:
             Specify the correct PRI switch type that the router interfaces with at the CO of the
              ISDN provider.
             Specify the T1/E1 controller, framing type, and line coding for the facility of the
              ISDN provider.
             Set a PRI group timeslot for the T1/E1 facility and indicate the speed used.

     Because routers connect to PRI using T1/E1, there is no "interface pri" command.
Instead, the physical interface on the router that connects to the leased line is called a T1
controller, or an E1 controller, if an E1 line is being used. This controller must be configured
properly in order to communicate with the carrier network. The ISDN PRI D and PRI B
channels are configured separately from the controller, using the interface serial command.

     Use the isdn switch-type command to specify the ISDN switch used by the provider to
which the PRI connects. As with BRI, this command can be issued globally or in interface
configuration mode. The table shows the switch types available for ISDN PRI configuration:

           Router(config)#isdn switch-type primary-net5

     Configuring a T1 or E1 controller is done in four parts:
          From global configuration mode, specify the controller and the slot/port in the
           router where the PRI card is located:
                   Router(config)#controller {t1 | e1} {slot/port}
          Configure the framing, line coding, and clocking, as dictated by the service
           provider. The framing command is used to select the frame type used by the PRI
           service provider. For T1, use the following command syntax:
                   Router(config-controller)#framing {sf | esf}
           For E1 lines, use the framing command with the following options:
                   Router(config-controller)#framing {crc4 | no-crc4} [australia]
           Use the linecode command to identify the physical-layer signaling method on the
           digital facility of the provider:
                    Router(config-controller)#linecode {ami | b8zs | hdb3}
           In North America, the B8ZS signaling method is used for T1 carrier facilities. It
           allows a full 64 kbps for each ISDN channel. In Europe, it is typically HDB3
           encoding that is used.
          Configure the specified interface for PRI operation and the number of fixed
           timeslots that are allocated on the digital facility of the provider:
                   Router(config-controller)#pri-group [timeslots range]
            For T1, the range of timeslots used is 1-24. For E1, the range of timeslots used is
            1-31.
          Specify an interface for PRI D-channel operation. The interface is a serial interface
           to a T1/E1 on the router:
                   Router(config)#interface serial{slot/port: | unit:}{23 | 15}

     Within an E1 or T1 facility, the channels start numbering at 1. The numbering ranges
from 1 to 31 for E1 and 1 to 24 for T1. Serial interfaces in the Cisco router start numbering at
0. Therefore, channel 16, the E1 signaling channel, is channel 15 on the interface. Channel 24,
the T1 signaling channel, becomes channel 23 on the interface. Thus, interface serial 0/0:23
refers to the D channel of a T1 PRI.
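
     Combining the steps above, a T1 PRI configuration might be sketched as follows. The
switch type, slot numbers, framing, and line coding are assumptions that depend on the
service provider:

              Router(config)#isdn switch-type primary-ni
              Router(config)#controller t1 0/0
              Router(config-controller)#framing esf
              Router(config-controller)#linecode b8zs
              Router(config-controller)#pri-group timeslots 1-24
              Router(config-controller)#exit
              Router(config)#interface serial 0/0:23
              Router(config-if)#encapsulation ppp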
         Subinterfaces, commonly used with Frame Relay, are designated with a dot, or period.
For example, Serial 0/0.16 is a subinterface. Do not confuse the channels of a T1 or E1 with
subinterfaces. Channels use a colon instead of a dot to indicate the channel number:
             S0/0.23 refers to a subinterface
             S0/0:23 refers to a channel

Lab Activity

         e-Lab Activity: isdn switch-type

         In this activity, the student will demonstrate how to use the isdn switch-type command.

4.2.3 Verifying ISDN configuration

         Several show commands can be used to verify that the ISDN configuration has been
implemented correctly.

         To confirm BRI operations, use the show isdn status command to inspect the status of
the BRI interfaces. This command can be used after configuring the ISDN BRI to verify that
the TE1, or router, is communicating correctly with the ISDN switch. In the figure output,
the TEIs have been successfully negotiated and ISDN Layer 3 is ready to make or receive
calls.

         Verify that Layer 1 Status is ACTIVE, and that the Layer 2 Status state
MULTIPLE_FRAME_ESTABLISHED appears. This command also displays the number of
active calls.

          The show isdn active command displays current call information, including all of the
following:
             Called number
             Time until the call is disconnected
             Advice of charge (AOC)
             Charging units used during the call
             Whether the AOC information is provided during calls or at end of calls

         The show dialer command displays information about the dialer interface:
             Current call status
             Dialup timer values
             Dial reason
             Remote device that is connected

          The show interface bri0/0 command displays statistics for the BRI interface configured on the
router. Channel specific information is displayed by putting the channel number at the end of
the command. In this case, the show interface bri0/0:1 command shows the following:
          The B channel is using PPP encapsulation.
          LCP has negotiated and is open.
           There are two NCPs running, IPCP and Cisco Discovery Protocol Control Protocol
            (CDPCP).

Lab Activity

     e-Lab Activity: show isdn status

     In this activity, the student will demonstrate how to use the show isdn status command
to display the status of all ISDN interfaces.

4.2.4 Troubleshooting the ISDN configuration

     The following commands are used to debug and troubleshoot the ISDN configuration:
          The debug isdn q921 command shows data link layer, or Layer 2, messages on the
           D channel between the router and the ISDN switch. Use this command if the show
            isdn status command does not show Layer 1 as ACTIVE and Layer 2 as
            MULTIPLE_FRAME_ESTABLISHED.
          The debug isdn q931 command shows the exchange of call setup and teardown
           messages of the Layer 3 ISDN connection.
          The debug ppp authentication command displays the PPP authentication protocol
           messages, including Challenge Handshake Authentication Protocol (CHAP) packet
           exchanges and Password Authentication Protocol (PAP) exchanges.
          The debug ppp negotiation command displays information on PPP traffic and
           exchanges while the PPP components are negotiated. This includes LCP,
           authentication, and NCP exchanges. A successful PPP negotiation will first open
           the LCP state, then authenticate, and finally negotiate NCP.
          The debug ppp error command displays protocol errors and error statistics
           associated with PPP connection negotiation and operation. Use the debug ppp
           commands to troubleshoot a Layer 2 problem if the show isdn status command
           does not indicate an ISDN problem.
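
     For example, call setup problems might be examined, and debugging then disabled, with
the following privileged EXEC commands:

              Router#debug isdn q931
              Router#undebug all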

4.3 DDR Configuration

4.3.1 DDR operation

     Dial-on-demand routing (DDR) is triggered when traffic that matches a predefined set of
criteria is queued to be sent out a DDR-enabled interface. The traffic that causes a DDR call
to be placed is referred to as interesting traffic. Once the router has transmitted the interesting
traffic, the call is terminated.

      The key to efficient DDR operation is in the definition of interesting traffic. Interesting
traffic is defined with the dialer-list command. Dialer lists can allow all traffic from a specific
protocol to bring up a DDR link, or they can query an access list to see what specific types of
traffic should bring up the link. Dialer lists do not filter traffic on an interface. Even traffic
that is not interesting will be forwarded if the connection to the destination is active.

      DDR is implemented in Cisco routers in the following steps:
           The router receives traffic, performs a routing table lookup to determine if there is
            a route to the destination, and identifies the outbound interface.
           If the outbound interface is configured for DDR, the router does a lookup to
            determine if the traffic is interesting.
           The router identifies the dialing information necessary to make the call using a
            dialer map to access the next-hop router.
           The router then checks to see if the dialer map is in use. If the interface is currently
            connected to the desired remote destination, the traffic is sent. If the interface is not
            currently connected to the remote destination, the router sends call-setup
            information through the BRI using the D channel.
           After the link is enabled, the router transmits both interesting and uninteresting
            traffic. Uninteresting traffic can include data and routing updates.
          The idle timer starts when the call is connected and is reset each time interesting
           traffic is seen. If no interesting traffic arrives before the idle timeout expires, the
           router disconnects the call.

      The idle timer setting specifies the length of time the router should remain connected if
no interesting traffic has been sent. Once a DDR connection is established, any traffic to that
destination will be permitted. However, only interesting traffic resets the idle timer.
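
     The idle timeout itself is set on the interface with the dialer idle-timeout command.
The 120-second value below is an illustrative assumption:

              Router(config-if)#dialer idle-timeout 120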

4.3.2 Configuring legacy DDR

      Legacy DDR is a term used to define a very basic DDR configuration in which a single
set of dialer parameters is applied to an interface. If multiple unique dialer configurations are
needed on one interface, then dialer profiles should be used.

      To configure legacy DDR perform the following steps:
           Define static routes
           Specify interesting traffic
           Configure the dialer information

Lab Activity

     Lab Exercise: Configuring Legacy DDR

     In this lab, the student will configure an ISDN router to make a Legacy dial-on-demand
routing (DDR) call to another ISDN capable router.

4.3.3 Defining static routes for DDR

     To forward traffic, routers need to know what route to use for a given destination. When
a dynamic routing protocol is used, the DDR interface will dial the remote site for every
routing update or hello message if these packets are defined as interesting traffic. To prevent
the frequent or constant activation of the DDR link necessary to support dynamic routing
protocols, configure the necessary routes statically.

     To configure a static route for IP use the following command:

             Router(config)#ip route net-prefix mask {address | interface} [distance]

     The Central router has a static route to the network on the Home router. The
Home router has two static routes defined for the two subnets on the Central LAN. If the
network attached to the Home router is a stub network, then all non-local traffic should be
sent to Central. A default route is a better choice for the Home router in this instance.

             Home(config)#ip route
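
     Assuming, purely for illustration, that the Central BRI interface is addressed as, a complete default route on the Home router would look like this:

              Home(config)#ip route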

     When configuring static routes, consider the following:
            By default, a static route will take precedence over a dynamic route because of its
             lower administrative distance. Without additional configuration, a dynamic route to
             a network will be ignored if a static route is present in the routing table for the
             same network.
             To reduce the number of static route entries, define a summarized or default static
              route.

4.3.4 Specifying interesting traffic for DDR

     DDR calls are triggered by interesting traffic. This traffic can be defined as any of the
following:
             IP traffic of a particular protocol type
             Packets with a particular source or destination address
            Other criteria as defined by the network administrator
     Use the dialer-list command to identify interesting traffic. The command syntax is as
follows:

           Router(config)#dialer-list dialer-group-num protocol protocol-name {permit | deny
           | list access-list-number}

     The dialer-group-num is an integer between 1 and 10 that identifies the dialer list to the
router. The command dialer-list 1 protocol ip permit will allow all IP traffic to trigger a call.
Instead of permitting all IP traffic, a dialer list can point to an access list in order to specify
exactly what types of traffic should bring up the link. The reference to access list 101 in dialer
list 2 prevents FTP and Telnet traffic from activating the DDR link. Any other IP packet is
considered interesting, and will therefore initiate the DDR link.
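
     For example, the dialer list and access list described above might be configured as
follows. This is a sketch in which only the list numbers and the blocked protocols come from
the text; everything else is illustrative:

            Router(config)#access-list 101 deny tcp any any eq ftp
            Router(config)#access-list 101 deny tcp any any eq telnet
            Router(config)#access-list 101 permit ip any any
            Router(config)#dialer-list 2 protocol ip list 101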

4.3.5 Configuring DDR dialer information

     There are several steps involved in configuring the DDR interface. PPP is configured on
the dialer interface using the same commands that enable PPP on a serial interface. HDLC is
the default encapsulation for an ISDN interface on a Cisco router, but most networks employ
PPP for circuit-switched connections. Because of its robustness, interoperability, and
additional features such as authentication, PPP is the data link protocol in use on the B
channels of most routers. To configure PPP on the DDR interface use the following commands:

           Home(config)#username Central password cisco
           Home(config)#interface bri0/0
           Home(config-if)#encapsulation ppp
           Home(config-if)#ppp authentication chap
           Home(config-if)#ip address

     A dialer list specifying the interesting traffic for this DDR interface needs to be
associated with the DDR interface. This is done using the dialer-group group-number command.

           Home(config-if)#dialer-group 1

     In the command, group-number specifies the number of the dialer group to which the
interface belongs. The group number can be an integer from 1 to 10. This number must match
the dialer-list group-number. Each interface can have only one dialer group. However, the
same dialer list can be assigned to multiple interfaces with the dialer-group command.

     The correct dialing information for the remote DDR interface needs to be specified. This
is done using the dialer map command.

         The dialer map command maps the remote protocol address to a telephone number. This
command is necessary to dial multiple sites.

               Router(config-if)#dialer map protocol next-hop-address [name hostname] [speed
               56 | 64] [broadcast] dial-string

         If dialing only one site, use an unconditional dialer string command that always dials the
one phone number regardless of the traffic destination. This step is unique to legacy DDR.
Although the information is always required, the steps to configure destination information
are different when using dialer profiles instead of legacy DDR.
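
     For instance, a dialer map entry on the Home router might look like the following, where
the next-hop address and phone number are hypothetical placeholders:

            Home(config-if)#dialer map ip 192.168.1.2 name Central broadcast 5551000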

     The dialer idle-timeout seconds command may be used to specify the number of idle
seconds allowed before a call is disconnected, counted from the time the last interesting
packet is sent. The default is 120 seconds.

4.3.6 Dialer profiles

         Legacy DDR is limited because the configuration is applied directly to a physical
interface. Because the IP address is applied directly to the interface, only DDR interfaces
configured in that specific subnet can establish a DDR connection with that interface. This
means that there is a one-to-one correspondence between the two DDR interfaces at each end
of the link.

         Dialer profiles remove the configuration from the interface receiving or making calls
and only bind the configuration to the interface on a per-call basis. Dialer profiles allow
physical interfaces to dynamically take on different characteristics based on incoming or
outgoing call requirements. Dialer profiles can do all of the following:
              Define encapsulation and access control lists
              Determine minimum or maximum calls
              Turn features on or off

         Dialer profiles aid in the design and deployment of more complex and scalable
circuit-switched internetworks by implementing a more scalable DDR model in Cisco routers
and access servers. Dialer profiles separate the logical portion of DDR, such as the network
layer, encapsulation, and dialer parameters, from the physical interface that places or receives calls.

         Using dialer profiles, the following tasks may be performed:
              Configure B channels of an ISDN interface with different IP subnets.
          Use different encapsulations on the B channels of an ISDN interface.
          Set different DDR parameters for the B channels of an ISDN interface.
          Eliminate the waste of ISDN B channels by letting ISDN BRIs belong to multiple
           dialer pools.

     A dialer profile consists of the following elements:
          Dialer interface – A logical entity that uses a per-destination dialer profile.
          Dialer pool – Each dialer interface references a dialer pool, which is a group of one
           or more physical interfaces associated with a dialer profile.
          Physical interfaces – Interfaces in a dialer pool are configured for encapsulation
           parameters and to identify the dialer pools to which the interface belongs. PPP
           authentication, encapsulation type, and multilink PPP are all configured on the
           physical interface.

     Like legacy DDR, dialer profiles activate when interesting traffic is queued to be sent
out a DDR interface. First, an interesting packet is routed to a remote DDR IP address. The
router then checks the configured dialer interfaces for one that shares the same subnet as the
remote DDR IP address. If one exists, the router looks for an unused physical DDR interface
in the dialer pool. The configuration from the dialer profile is then applied to the interface and
the router attempts to create the DDR connection. When the connection is terminated, the
interface is returned to the dialer pool for the next call.

4.3.7 Configuring dialer profiles

     Multiple dialer interfaces may be configured on a router. Each dialer interface is the
complete configuration for a destination. The interface dialer command creates a dialer
interface and enters interface configuration mode.

     To configure the dialer interface, perform the following tasks:
          Configure one or more dialer interfaces with all the basic DDR commands:
                IP address
                Encapsulation type and authentication
                Idle-timer
                Dialer-group for interesting traffic
          Configure a dialer string and dialer remote-name to specify the remote router name
           and phone number to dial it. The dialer pool associates this logical interface with a
           pool of physical interfaces.
          Configure the physical interfaces and assign them to a dialer pool using the dialer
           pool-member command.
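
     Putting these tasks together, a sketch of a dialer profile configuration might look like
the following. The address, remote name, phone number, and interface are hypothetical
placeholders:

            Home(config)#interface dialer 1
            Home(config-if)#ip address 192.168.1.1 255.255.255.0
            Home(config-if)#encapsulation ppp
            Home(config-if)#ppp authentication chap
            Home(config-if)#dialer-group 1
            Home(config-if)#dialer remote-name Central
            Home(config-if)#dialer string 5551000
            Home(config-if)#dialer pool 1
            Home(config-if)#exit
            Home(config)#interface bri0/0
            Home(config-if)#no ip address
            Home(config-if)#encapsulation ppp
            Home(config-if)#ppp authentication chap
            Home(config-if)#dialer pool-member 1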
     An interface can be assigned to multiple dialer pools by using multiple dialer
pool-member commands. If more than one physical interface exists in the pool, use the
priority option of the dialer pool-member command to set the priority of the interface within a
dialer pool. If multiple calls need to be placed and only one interface is available, then the
interface with the highest priority within the pool is the one that dials out.

     A combination of any of these interfaces may be used with dialer pools:
          Synchronous Serial
          Asynchronous Serial
          BRI
          PRI

Lab Activity

     Lab Exercise: Configuring Dialer Profiles

     In this lab, the student will configure ISDN Dialer Profiles on the routers.

4.3.8 Verifying DDR configuration

     The show dialer interface [BRI] command displays information in the same format as
the legacy DDR statistics on incoming and outgoing calls.

     The message “Dialer state is data link layer up” suggests that the dialer came up
properly and interface BRI 0/0:1 is bound to the profile dialer1.

     The show isdn active command displays information about the current active ISDN calls.
In this output, the ISDN call is outgoing to a remote router named Seattle.

     The show isdn status command displays information about the three layers of the BRI
interface. In this output, ISDN Layer 1 is active, ISDN Layer 2 is established with SPID1
and SPID2 validated, and there is one active connection on Layer 3.

4.3.9 Troubleshooting the DDR configuration

     There are two major types of DDR problems. Either a router is not dialing when it
should, or it is constantly dialing when it should not. Several debug commands can be used to
help troubleshoot problems with a DDR configuration.

     The debug isdn q921 command is useful for viewing Layer 2 ISDN call setup exchanges.
The “i =” field in the Q.921 payload field is the hexadecimal value of a Q.931 message. In the
following lines, the seventh and eighth most significant hexadecimal digits in the “i =”
field indicate the type of Q.931 message:
          0x05 indicates a call setup message
          0x02 indicates a call proceeding message
          0x07 indicates a call connect message
          0x0F indicates a connect acknowledgment (ack) message

     The debug isdn q931 command is useful for observing call setup exchanges for both
outgoing and incoming calls.

     The debug dialer [events | packets] command is useful for troubleshooting DDR
connectivity. The debug dialer events command sends a message to the console indicating
when a DDR link has connected and what traffic caused it to connect. If a router is not
configured correctly for DDR, then the output of the command will usually indicate the
source of the problem. If there is no debug output, then the router is not aware of any
interesting traffic. An incorrectly configured dialer or access list may be the cause.

     Not all DDR problems result in an interface failing to dial. Routing protocols can cause
an interface to continuously dial, even if there is no user data to send. An interface that is
constantly going up and down is said to be flapping. The debug dialer packet command sends
a message to the console every time a packet is sent out a DDR interface. Use this debug
command to see exactly what traffic is responsible for a flapping DDR interface.

     If a router is not connecting when it should, then it is possible that an ISDN problem is
the cause, as opposed to a DDR problem. The remote router may be incorrectly configured, or
there could be a problem with the ISDN carrier network. Use the isdn call interface command
to force the local router to attempt to dial into the remote router. If the routers cannot
communicate using this command, then the lack of connectivity is an ISDN problem, not a
DDR problem. However, if the routers can communicate, then both the toll network and the
ISDN configurations on the routers are working properly. In this case, the problem is most
likely an error in the DDR configuration on either router.
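
     For example, to test the ISDN path by forcing a call to a remote number (the interface
and phone number are hypothetical placeholders):

            Router#isdn call interface bri0/0 5551000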

     In some cases it is useful to reset the connection between the router and the local ISDN
switch. The clear interface bri command clears currently established connections on the
interface and resets the interface with the ISDN switch. This command forces the router to
renegotiate its SPIDs with the ISDN switch, and is sometimes necessary after making changes
to the isdn spid1 and isdn spid2 commands on an interface.
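
     For example, to reset the first BRI interface (the interface number is a hypothetical
placeholder):

            Router#clear interface bri0/0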

Lab Activity

     e-Lab Activity: isdn spid
     In this activity, the student will demonstrate how to use the isdn spid command to define
at the router the service profile identifier (SPID) number that was assigned by the ISDN
service provider for the B1 channel.


     ISDN refers to a set of communication protocols proposed by telephone companies to
permit telephone networks to carry integrated voice, video, and data services. ISDN permits
communication over high-quality, high-speed, digital communication channels.

     DDR is used in order to save the costs of a dedicated WAN line for organizations and
companies that do not need a permanent connection. It can also be used as a backup by
organizations that use the dedicated line for critical applications.

     An understanding of the following key points should have been achieved:
          ISDN carries data, voice, and video
          ISDN uses standards for addressing, concepts, and signaling
          ISDN uses the physical and data-link layers
          Interfaces and reference points for ISDN
          Router configuration for ISDN
          Which traffic is allowed when configuring DDR
          Static routes for DDR
          The correct encapsulation type for DDR
          Access lists affecting DDR traffic
          Dialer interfaces

                         Chapter 5            Frame Relay


     Frame Relay was originally developed as an extension of Integrated Services Digital
Network (ISDN). It was designed to enable the circuit-switched technology to be transported
on a packet-switched network. The technology has become a stand-alone and cost-effective
means of creating a WAN.

     Frame Relay switches create virtual circuits to connect remote LANs to a WAN. The
Frame Relay network exists between a LAN border device, usually a router, and the carrier
switch. The technology used by the carrier to transport the data between the switches is not
important to Frame Relay.

     The sophistication of the technology requires a thorough understanding of the terms
used to describe how Frame Relay works. Without a firm understanding of Frame Relay, it is
difficult to troubleshoot its performance.

     Frame Relay has become one of the most extensively used WAN protocols. One reason
for its popularity is that it is inexpensive compared to leased lines. Another reason Frame
Relay is popular is that configuration of user equipment in a Frame Relay network is very simple.

     This module explains how to configure Frame Relay on a Cisco router. Frame Relay
connections are created by configuring routers or other devices to communicate with a Frame
Relay switch. The Frame Relay switch is usually configured by the service provider. This
helps keep end-user configuration tasks to a minimum.

     Students completing this module should be able to:
          Identify the components of a Frame Relay network
          Explain the scope and purpose of Frame Relay
          Discuss the technology of Frame Relay
          Compare point-to-point and point-to-multipoint topologies
          Examine the topology of a Frame Relay network
          Configure a Frame Relay Permanent Virtual Circuit (PVC)
          Create a Frame Relay Map on a remote network
          Explain the issues of a non-broadcast multi-access network
          Describe the need for subinterfaces and how to configure them
          Verify and troubleshoot a Frame Relay connection

5.1 Frame Relay Concepts

5.1.1 Introducing Frame Relay

     Frame Relay is an International Telecommunication Union Telecommunications
Standardization Sector (ITU-T) and American National Standards Institute (ANSI) standard.
Frame Relay is a packet-switched, connection-oriented, WAN service. It operates at the data
link layer of the OSI reference model. Frame Relay uses a subset of the high-level data link
control (HDLC) protocol called Link Access Procedure for Frame Relay (LAPF). Frames
carry data between user devices called data terminal equipment (DTE), and the data
communications equipment (DCE) at the edge of the WAN.

     Originally Frame Relay was designed to allow ISDN equipment to have access to a
packet-switched service on a B channel. However, Frame Relay is now a stand-alone technology.

     A Frame Relay network may be privately owned, but it is more commonly provided as a
service by a public carrier. It typically consists of many geographically scattered Frame Relay
switches interconnected by trunk lines.

     Frame Relay is often used to interconnect LANs. When this is the case, a router on each
LAN will be the DTE. A serial connection, such as a T1/E1 leased line, will connect the router
to a Frame Relay switch of the carrier at the nearest point-of-presence for the carrier. The
Frame Relay switch is a DCE device. Frames from one DTE will be moved across the
network and delivered to other DTEs by way of DCEs.

     Computing equipment that is not on a LAN may also send data across a Frame Relay
network. The computing equipment will use a Frame Relay access device (FRAD) as the DTE.

5.1.2 Frame Relay terminology

     The connection through the Frame Relay network between two DTEs is called a virtual
circuit (VC). Virtual circuits may be established dynamically by sending signaling messages
to the network. In this case they are called switched virtual circuits (SVCs). However, SVCs
are not very common. Generally permanent virtual circuits (PVCs) that have been
preconfigured by the carrier are used. The switching information for a VC is stored in the
memory of the switch.

     Because it was designed to operate on high-quality digital lines, Frame Relay provides
no error recovery mechanism. If there is an error in a frame it is discarded without notification.

     The FRAD or router connected to the Frame Relay network may have multiple virtual
circuits connecting it to various end points. This makes it a very cost-effective replacement
for a mesh of access lines. With this configuration, each end point needs only a single access
line and interface. More savings arise as the capacity of the access line is based on the average
bandwidth requirement of the virtual circuits, rather than on the maximum bandwidth requirement.

     The various virtual circuits on a single access line can be distinguished because each VC
has its own data-link connection identifier (DLCI). The DLCI is stored in the address field of
every frame transmitted. The DLCI usually has only local significance and may be different at
each end of a VC.

Interactive Media Activity

     Drag and Drop: Frame Relay Terminology

     When the student has completed this activity, the student will be able to correctly
identify frame relay terminology.

5.1.3 Frame Relay stack layered support

     Frame Relay functions by doing the following:
          Takes data packets from a network layer protocol, such as IP or IPX
          Encapsulates them as the data portion of a Frame Relay frame
          Passes them to the physical layer for delivery on the wire

     The physical layer is typically EIA/TIA-232, 449 or 530, V.35, or X.21. The Frame
Relay frame is a sub-set of the HDLC frame type. Therefore it is delimited with flag fields.
The 1-byte flag uses the bit pattern 01111110. If the frame check sequence (FCS) computed
over the address and data fields at the receiving end does not match the FCS in the frame, the
frame is discarded without notification.

5.1.4 Frame Relay bandwidth and flow control

     The serial connection or access link to the Frame Relay network is normally a leased
line. The speed of the line is the access speed or port speed. Port speeds are typically between
64 kbps and 4 Mbps. Some providers offer speeds up to 45 Mbps.

     Usually there are several PVCs operating on the access link with each VC having
dedicated bandwidth availability. This is called the committed information rate (CIR). The
CIR is the rate at which the service provider agrees to accept bits on the VC.
        Individual CIRs are normally less than the port speed. However, the sum of the CIRs
will normally be greater than the port speed. Sometimes this is a factor of 2 or 3. This
statistical multiplexing accommodates the bursty nature of computer communications since
channels are unlikely to be at their maximum data rate simultaneously.

        While a frame is being transmitted, each bit will be sent at the port speed. For this
reason, there must be a gap between frames on a VC if the average bit rate is to be the CIR.

        The switch will accept frames from the DTE at rates in excess of the CIR. This
effectively provides each channel with bandwidth on demand up to a maximum of the port
speed. Some service providers impose a VC maximum that is less than the port speed. The
difference between the CIR and the maximum, whether the maximum is port speed or lower,
is called the Excess Information Rate (EIR).

        The time interval over which the rates are calculated is called the committed time (Tc).
The number of committed bits in Tc is the committed burst (Bc). The extra number of bits
above the committed burst, up to the maximum speed of the access link, is the excess burst (Be).

        Although the switch accepts frames in excess of the CIR, each excess frame is marked at
the switch by setting the Discard Eligibility (DE) bit in the address field.

        The switch maintains a bit counter for each VC. An incoming frame is marked DE if it
puts the counter over Bc. An incoming frame is discarded if it pushes the counter over Bc +
Be. At the end of each Tc seconds the counter is reduced by Bc. The counter may not be
negative, so idle time cannot be saved up.
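
     As a worked illustration with hypothetical values, suppose the CIR is 64 kbps and Tc is 1
second, so that Bc = 64,000 bits, and suppose Be = 32,000 bits. In any one Tc interval, the
switch then handles incoming frames as follows:

            Counter at or below Bc (64,000 bits)           frames accepted unmarked
            Counter between Bc and Bc + Be (96,000 bits)   frames accepted, marked DE
            Counter above Bc + Be                          frames discarded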

        Frames arriving at a switch are queued or buffered prior to forwarding. As in any
queuing system, it is possible that there will be an excessive buildup of frames at a switch.
This causes delays. Delays lead to unnecessary retransmissions that occur when higher-level
protocols receive no acknowledgment within a set time. In severe cases this can cause a
serious drop in network throughput.

     To avoid this problem, Frame Relay switches incorporate a policy of dropping frames
from a queue to keep the queues short. Frames with their DE bit set will be dropped first.

        When a switch sees its queue increasing, it tries to reduce the flow of frames to it. It
does this by notifying DTEs of the problem by setting the Explicit Congestion Notification
(ECN) bits in the frame address field.

        The Forward ECN (FECN) bit is set on every frame that the switch receives on the
congested link. The Backward ECN (BECN) bit is set on every frame that the switch places
onto the congested link. DTEs receiving frames with the ECN bits set are expected to try to
reduce the flow of frames until the congestion clears.

     If the congestion occurs on an internal trunk, DTEs may receive notification even
though they are not the cause of the congestion.

     The DE, FECN and BECN bits are part of the address field in the LAPF frame.

5.1.5 Frame Relay address mapping and topology

     When more than two sites are to be connected, consideration must be given to the
topology of the connections between them.

     Frame Relay is unlikely to be cost-effective when only two sites are interconnected with
a point-to-point connection. Frame Relay is more cost-effective where multiple sites must be interconnected.

     WANs are often interconnected as a star topology. A central site hosts the primary
services and is connected to each of the remote sites needing access to the services. In this
hub and spoke topology the location of the hub is chosen to give the lowest leased line cost.
When implementing a star topology with Frame Relay, each remote site has an access link to
the Frame Relay cloud with a single VC. The hub has an access link with multiple VCs, one for
each remote site. Because Frame Relay tariffs are not distance related, the hub does not
need to be in the geographical center of the network.

     A full mesh topology is chosen when services to be accessed are geographically
dispersed and highly reliable access to them is required. With full mesh, every site is
connected to every other site. Unlike with leased line interconnections, this can be achieved in
Frame Relay without additional hardware. It is necessary to configure additional VCs on the
existing links to upgrade from star to full mesh topology. Multiple VCs on an access link will
generally make better use of Frame Relay than single VCs. This is because they take
advantage of the built-in statistical multiplexing.

     For large networks, full mesh topology is seldom affordable. This is because the number
of links required for a full mesh topology grows at almost the square of the number of sites.
While there is no equipment issue for Frame Relay, there is a limit of less than 1000 VCs per
link. In practice, the limit will be less than that, and larger networks will generally be partial
mesh topology. With partial mesh, there are more interconnections than required for a star
arrangement, but not as many as for a full mesh. The actual pattern is very dependent on the
data flow requirements.
     In any Frame Relay topology, when a single interface is used to interconnect multiple
sites, there may be reachability issues. This is due to the nonbroadcast multiaccess (NBMA)
nature of Frame Relay. Split horizon is a technique used by routing protocols to prevent
routing loops. Split horizon does not allow routing updates to be sent out the same interface
that was the source of the route information. This can cause problems with routing updates in
a Frame Relay environment where multiple PVCs are on a single physical interface.

     Whatever the underlying topology of the physical network, a mapping is needed in each
FRAD or router between a data link layer Frame Relay address and a network layer address,
such as an IP address. Essentially, the router needs to know what networks are reachable
beyond a particular interface. The same problem exists if an ordinary leased line is connected
to an interface. The difference is that the remote end of a leased line is connected directly to a
single router. Frames from the DTE travel down a leased line as far as a network switch,
where they may fan out to as many as 1000 routers. The DLCI for each VC must be
associated with the network address of its remote router. This information can be configured
manually by using map commands. It can also be configured automatically, using LMI status
information and sending an Inverse Address Resolution Protocol (Inverse ARP) message on each VC
identified. This process is described in more detail in a separate section.

5.1.6 Frame Relay LMI

     Frame Relay was designed to provide packet-switched data transfer with minimal
end-to-end delays. Anything that might contribute to delays was omitted. When vendors came
to implement Frame Relay as a separate technology rather than as one component of ISDN,
they decided that there was a need for DTEs to dynamically acquire information about the
status of the network. This feature was omitted in the original design. The extensions for this
status transfer are called the Local Management Interface (LMI).

     The 10-bit DLCI field allows VC identifiers 0 through 1023. The LMI extensions
reserve some of these identifiers. This reduces the number of permitted VCs. LMI messages
are exchanged between the DTE and DCE using these reserved DLCIs.

     The LMI extensions include the following:
          The heartbeat mechanism, which verifies that a VC is operational
          The multicast mechanism
          The flow control
          The ability to give DLCIs global significance
          The VC status mechanism

     There are several LMI types, each of which is incompatible with the others. The LMI
type configured on the router must match the type used by the service provider. Three types of
LMIs are supported by Cisco routers:
           cisco – The original LMI extensions
           ansi – Corresponding to the ANSI standard T1.617 Annex D
           q933a – Corresponding to the ITU-T standard Q.933 Annex A

      LMI messages are carried in a variant of LAPF frames. This variant includes four extra
fields in the header so that they will be compatible with the LAPD frames used in ISDN. The
address field carries one of the reserved DLCIs. Following this are the control, protocol
discriminator, and call reference fields that do not change. The fourth field indicates the LMI
message type.

     There are one or more information elements (IE) that follow the header. Each IE consists
of the following:
           A one-byte IE identifier
           An IE length field
           One or more bytes containing actual data that typically includes the status of a VC

     Status messages help verify the integrity of logical and physical links. This information
is critical in a routing environment because routing protocols make decisions based on link
status.

Interactive Media Activity

      Drag and Drop: LMI Message Format

      When the student has completed this activity, the student will be able to correctly order
the fields in a LMI message frame.

5.1.7 Stages of Inverse ARP and LMI operation

      LMI status messages combined with Inverse ARP messages allow a router to associate
network layer and data link layer addresses.

      When a router that is connected to a Frame Relay network is started, it sends an LMI
status inquiry message to the network. The network replies with an LMI status message
containing details of every VC configured on the access link.

      Periodically the router repeats the status inquiry, but subsequent responses include only
status changes. After a set number of these abbreviated responses, the network will send a full
status message.
     If the router needs to map the VCs to network layer addresses, it will send an Inverse
ARP message on each VC. The Inverse ARP message includes the network layer address of
the router, so the remote DTE, or router, can also perform the mapping. The Inverse ARP
reply allows the router to make the necessary mapping entries in its address to DLCI map
table. If several network layer protocols are supported on the link, Inverse ARP messages will
be sent for each.

5.2 Configuring Frame Relay

5.2.1 Configuring basic Frame Relay

     This section explains how to configure a basic Frame Relay PVC. Frame Relay is
configured on a serial interface and the default encapsulation type is the Cisco proprietary
version of HDLC. To change the encapsulation to Frame Relay use the encapsulation
frame-relay [cisco | ietf] command.
           cisco – Uses the Cisco proprietary Frame Relay encapsulation. Use this option if
            connecting to another Cisco router. Many non-Cisco devices also support this
            encapsulation type. This is the default.
           ietf – Sets the encapsulation method to comply with the Internet Engineering Task
            Force (IETF) standard RFC 1490. Select this if connecting to a non-Cisco router.

     Cisco’s proprietary Frame Relay encapsulation uses a 4-byte header, with 2 bytes to
identify the data-link connection identifier (DLCI) and 2 bytes to identify the packet type.

     Set an IP address on the interface using the ip address command. Set the bandwidth of
the serial interface using the bandwidth command. Bandwidth is specified in kilobits per
second (kbps). This command is used to notify the routing protocol that bandwidth is
statically configured on the link. The bandwidth value is used by Interior Gateway Routing
Protocol (IGRP), Enhanced Interior Gateway Routing Protocol (EIGRP), and Open Shortest
Path First (OSPF) to determine the metric of the link. This command also affects bandwidth
utilization statistics that can be found by using the show interfaces command.

     The LMI connection is established and configured by the frame-relay lmi-type [ansi |
cisco | q933a] command. This command is only needed if using Cisco IOS Release 11.1 or
earlier. With IOS Release 11.2 or later, the LMI-type is autosensed and no configuration is
needed. The default LMI type is cisco. The LMI type is set on a per-interface basis and is
shown in the output of the show interfaces command.

     These configuration steps are the same, regardless of the network layer protocols
operating across the network.
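
     Taken together, the basic configuration steps can be sketched as follows. The interface
number, IP address, and bandwidth value here are illustrative, not taken from the curriculum:

          router(config)#interface serial 0/0
          router(config-if)#encapsulation frame-relay ietf
          router(config-if)#ip address 10.16.0.1 255.255.255.0
          router(config-if)#bandwidth 64
          router(config-if)#frame-relay lmi-type ansi

     The frame-relay lmi-type command in this sketch would only be needed on IOS Release
11.1 or earlier, since later releases autosense the LMI type.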

Lab Activity

     Lab Exercise: Configuring Frame Relay

     In this lab, the student will configure a router to make a successful connection to a local
Frame Relay switch.

Lab Activity

     e-Lab Activity: Configuring Frame Relay

     In this activity, a router will be configured to make a successful connection to a local
Frame Relay switch.

5.2.2 Configuring a static Frame Relay map

     The local DLCI must be statically mapped to the network layer address of the remote
router when the remote router does not support Inverse ARP. This is also true when broadcast
traffic and multicast traffic over the PVC must be controlled. These static Frame Relay map
entries are referred to as static maps.

     Use the frame-relay map protocol protocol-address dlci [broadcast] command to
statically map the remote network layer address to the local DLCI.
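
     For example, assuming an illustrative remote IP address of 10.16.0.2 and a local DLCI
of 110, a static map might be configured as follows. The broadcast keyword allows broadcast
and multicast traffic, such as routing updates, to be forwarded over the PVC:

          router(config-if)#frame-relay map ip 10.16.0.2 110 broadcast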

Lab Activity

     Lab Exercise: Configuring Frame Relay PVC

     In this lab, the student will configure two routers back-to-back as a Frame Relay
permanent virtual circuit (PVC).

Lab Activity

     e-Lab Activity: show frame-relay map

     In this activity, the student will demonstrate how to use the show frame-relay map
command to display the current map entries and information about the connections.

5.2.3 Reachability issues with routing updates in NBMA

     By default, a Frame Relay network provides non-broadcast multi-access (NBMA)
connectivity between remote sites. An NBMA environment is viewed like other multiaccess
media environments, such as Ethernet, where all the routers are on the same subnet. However,
to reduce cost, NBMA clouds are usually built in a hub-and-spoke topology. With a
hub-and-spoke topology, the physical topology does not provide the multi-access capabilities
that Ethernet does. The physical topology consists of multiple PVCs.

     A Frame Relay NBMA topology may cause two problems:
          Reachability issues regarding routing updates
          The need to replicate broadcasts on each PVC when a physical interface contains
           more than one PVC

     Split-horizon updates reduce routing loops by not allowing a routing update received on
one interface to be forwarded out the same interface. If Router B, a spoke router, sends a
broadcast routing update to Router A, the hub router, and Router A has multiple PVCs over a
single physical interface, then Router A cannot forward that routing update through the same
physical interface to other remote spoke routers. If split-horizon is disabled, then the routing
update can be forwarded out the same physical interface from which it came. Split-horizon is
not a problem when there is a single PVC on a physical interface. This would be a
point-to-point Frame Relay connection.

     Routers that support multiple connections over a single physical interface have many
PVCs that terminate in a single router. This router must replicate broadcast packets such as
routing update broadcasts, on each PVC, to the remote routers. The replicated broadcast
packets can consume bandwidth and cause significant latency to user traffic. It might seem
logical to turn off split-horizon to resolve the reachability issues caused by split-horizon.
However, not all network layer protocols allow split-horizon to be disabled and disabling
split-horizon increases the chances of routing loops in any network.

     One way to solve the split-horizon problem is to use a fully meshed topology. However,
this will increase the cost because more PVCs are required. The preferred solution is to use
subinterfaces.

Lab Activity

     e-Lab Activity: Configuring Frame Relay Subinterfaces

     In this activity, the student will configure three routers in a full-mesh Frame Relay
network.

5.2.4 Frame Relay subinterfaces

     To enable the forwarding of broadcast routing updates in a hub-and-spoke Frame Relay
topology, configure the hub router with logically assigned interfaces. These interfaces are
called subinterfaces. Subinterfaces are logical subdivisions of a physical interface.
     In split-horizon routing environments, routing updates received on one subinterface can
be sent out another subinterface. In a subinterface configuration, each virtual circuit can be
configured as a point-to-point connection. This allows each subinterface to act similarly to a
leased line. Using a Frame Relay point-to-point subinterface, each pair of the point-to-point
routers is on its own subnet.

     Frame Relay subinterfaces can be configured in either point-to-point or multipoint mode:
          Point-to-point – A single point-to-point subinterface is used to establish one PVC
           connection to another physical interface or subinterface on a remote router. In this
           case, each pair of the point-to-point routers is on its own subnet and each
           point-to-point subinterface would have a single DLCI. In a point-to-point
           environment, each subinterface is acting like a point-to-point interface. Therefore,
           routing update traffic is not subject to the split-horizon rule.
          Multipoint – A single multipoint subinterface is used to establish multiple PVC
           connections to multiple physical interfaces or subinterfaces on remote routers. All
           the participating interfaces would be in the same subnet. The subinterface acts like
           an NBMA Frame Relay interface so routing update traffic is subject to the
           split-horizon rule.

     The encapsulation frame-relay command is assigned to the physical interface. All other
configuration items, such as the network layer address and DLCIs, are assigned to the
subinterface.
     Multipoint configurations can be used to conserve addresses, which can be especially
helpful if Variable Length Subnet Masking (VLSM) is not being used. However, multipoint
configurations may not work properly given the broadcast traffic and split-horizon
considerations. The point-to-point subinterface option was created to avoid these issues.

5.2.5 Configuring Frame Relay subinterfaces

     The Frame Relay service provider will assign the DLCI numbers. These numbers range
from 16 to 992, and usually have only local significance. DLCIs can have global significance
in certain circumstances. This number range will vary depending on the LMI used.

     In the figure, router A has two point-to-point subinterfaces. The s0/0.110 subinterface
connects to router B and the s0/0.120 subinterface connects to router C. Each subinterface is
on a different subnet. To configure subinterfaces on a physical interface, the following steps
are required:
          Configure Frame Relay encapsulation on the physical interface using the
           encapsulation frame-relay command
          For each of the defined PVCs, create a logical subinterface

     router(config-if)#interface serial number.subinterface-number {multipoint | point-to-point}

     To create a subinterface, use the interface serial command. Specify the port number,
followed by a period (.), and then by the subinterface number. Usually, the subinterface
number is chosen to be that of the DLCI. This makes troubleshooting easier. The final
required parameter states whether the subinterface is a point-to-point or multipoint
interface. Either the multipoint or point-to-point keyword is required.
There is no default. The following commands create the subinterface for the PVC to router B:

           routerA(config-if)#interface serial 0/0.110 point-to-point

     If the subinterface is configured as point-to-point, then the local DLCI for the
subinterface must also be configured in order to distinguish it from the physical interface. The
DLCI is also required for multipoint subinterfaces for which Inverse ARP is enabled. It is not
required for multipoint subinterfaces configured with static route maps. The frame-relay
interface-dlci command is used to configure the local DLCI on the subinterface:

           router(config-subif)#frame-relay interface-dlci dlci-number
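
     Combining these commands, a sketch of the configuration for router A in the figure
might look like the following. The IP addresses and subnet masks are illustrative; only the
subinterface numbers and DLCIs (110 and 120) come from the figure:

          routerA(config)#interface serial 0/0
          routerA(config-if)#encapsulation frame-relay
          routerA(config-if)#interface serial 0/0.110 point-to-point
          routerA(config-subif)#ip address 10.17.0.1 255.255.255.252
          routerA(config-subif)#frame-relay interface-dlci 110
          routerA(config-subif)#interface serial 0/0.120 point-to-point
          routerA(config-subif)#ip address 10.17.0.5 255.255.255.252
          routerA(config-subif)#frame-relay interface-dlci 120

     Note that no IP address is placed on the physical interface itself. Each point-to-point
subinterface carries its own address on its own subnet.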

Lab Activity

     Lab Exercise: Configuring Frame Relay Subinterfaces

     In this lab, the student will configure three routers in a full-mesh Frame Relay network.

5.2.6 Verifying the Frame Relay configuration

     The show interfaces command displays information regarding the encapsulation and
Layer 1 and Layer 2 status. It also displays information about the following:
          The LMI type
          The LMI DLCI
          The Frame Relay data terminal equipment/data circuit-terminating equipment
           (DTE/DCE) type

     Normally, the router is considered a data terminal equipment (DTE) device. However, a
Cisco router can be configured as a Frame Relay switch. The router becomes a data
circuit-terminating equipment (DCE) device when it is configured as a Frame Relay switch.
     Use the show frame-relay lmi command to display LMI traffic statistics. For example,
this command demonstrates the number of status messages exchanged between the local
router and the local Frame Relay switch.

     Use the show frame-relay pvc [interface interface] [dlci] command to display the status
of each configured PVC as well as traffic statistics. This command is also useful for viewing
the number of BECN and FECN packets received by the router. The PVC status can be active,
inactive, or deleted.

     The show frame-relay pvc command displays the status of all the PVCs configured on
the router. Specifying a PVC will show the status of only that PVC. In Figure , the show
frame-relay pvc 100 command displays the status of only PVC 100.

     Use the show frame-relay map command to display the current map entries and
information about the connections. The following information interprets the show frame-relay
map output that appears in Figure :
          100 is the decimal value of the local DLCI number
          0x64 is the hex conversion of the DLCI number, 0x64 = 100 decimal
          0x1840 is the value as it would appear on the wire because of the way the DLCI
           bits are spread out in the address field of the Frame Relay frame
           The IP address of the remote router is dynamically learned via the
            Inverse ARP process
          Broadcast/multicast is enabled on the PVC
          PVC status is active
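
     These items correspond to a map entry of roughly the following form. The interface and
remote IP address shown here are illustrative:

          Serial0/0 (up): ip 10.16.0.2 dlci 100(0x64,0x1840), dynamic,
                          broadcast,, status defined, active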

     To clear dynamically created Frame Relay maps, which are created using Inverse ARP,
use the clear frame-relay-inarp command.

Lab Activity

     e-Lab Activity: show frame-relay pvc

     In this activity, the student will demonstrate how to use the show frame-relay pvc
command to display statistics about PVCs for Frame Relay interfaces.

5.2.7 Troubleshooting the Frame Relay configuration

     Use the debug frame-relay lmi command to determine whether the router and the Frame
Relay switch are sending and receiving LMI packets properly. The “out” is an LMI status
message sent by the router. The “in” is a message received from the Frame Relay switch.
“type 0” is a full LMI status message. “type 1” is an LMI exchange. The “dlci 100, status
0x2” means that the status of DLCI 100 is active. The possible values of the status field are as follows:
            0x0 – Added/inactive means that the switch has this DLCI programmed but for
             some reason it is not usable. The reason could possibly be the other end of the PVC
             is down.
             0x2 – Added/active means the Frame Relay switch has the DLCI and everything is
              operational.
            0x4 – Deleted means that the Frame Relay switch does not have this DLCI
             programmed for the router, but that it was programmed at some point in the past.
             This could also be caused by the DLCIs being reversed on the router, or by the
             PVC being deleted by the service provider in the Frame Relay cloud.
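
     A sketch of debug frame-relay lmi output, based on the fields described above, might
look like this. The sequence numbers are illustrative:

          Serial0/0(out): StEnq, myseq 140, yourseen 139, DTE up
          Serial0/0(in): Status, myseq 140
          RT IE 1, length 1, type 1
          KA IE 3, length 2, yourseq 140, myseq 140
          PVC IE 0x7, length 0x6, dlci 100, status 0x2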

Lab Activity

     e-Lab Activity: Frame Relay Configuration

     In this activity, the student will work through several tasks for configuring basic Frame
Relay.

Summary

     An understanding of the following key points should have been achieved:
            The components of a Frame Relay network
            The scope and purpose of Frame Relay
            The technology of Frame Relay
            Point-to-point and point-to-multipoint topologies
            The topology of a Frame Relay network
            How to configure a Frame Relay Permanent Virtual Circuit (PVC)
            How to create a Frame Relay Map on a remote network
            Potential problems with routing in a non-broadcast multi-access network
            Why subinterfaces are needed and how they are configured
            How to verify and troubleshoot a Frame Relay connection

 Chapter 6             Introduction to Network Administration


     The first PCs were designed as standalone desktop systems. The operating system (OS)
software allowed one user at a time to access files and system resources. The user had
physical access to the PC. As PC-based computer networks gained popularity in the
workplace, software companies developed specialized network operating systems (NOS).
Developers designed NOS to provide file security, user privileges, and resource sharing
among multiple users. The explosive growth of the Internet compelled developers to build the
NOS of today around Internet-related technologies and services like the World Wide Web.

     Within a decade, networking has become of central importance to desktop computing.
The distinction between modern desktop operating systems, now loaded with networking
features and services, and their NOS counterparts has blurred. Now, most popular operating
systems, such as Microsoft Windows 2000 and Linux, are found on high-powered network
servers and on the desktops of end users.

     Knowledge of different operating systems will ensure that the correct operating system
is selected to offer all the necessary services. UNIX, Linux, Mac OS X, and several
Windows operating systems will be introduced.

     Effective management of LANs and WANs is the key element to maintaining a
productive environment in the networking world. As more services become available to more
users, the performance of networks suffers. Network administrators, through constant
monitoring, must recognize and be able to rectify problems before they become noticeable to
the end users.

     Various tools and protocols are available to monitor the network on a local and remote
basis. A comprehensive understanding of these tools is critical to effective network
administration.

     Students completing this module should be able to:
          Identify several potential functions of a workstation
          Identify several potential functions of a server
          Describe the roles of equipment in a client/server environment
          Describe the differences between a Networking Operating System (NOS) and a
           traditional operating system
          List several Windows operating systems and their features
          List several alternatives to the Windows operating systems and their features
          Describe several functions of a server
          Identify network management tools
          Identify the driving forces behind network management
          Describe the OSI and network management model
          Describe SNMP and CMIP
          Describe how management software gathers information and records problems

6.1 Workstations and Servers

6.1.1 Workstations

     A workstation is a client computer that is used to run applications and is connected to a
server from which it obtains data shared with other computers. A server is a computer that
runs a network operating system (NOS). A workstation uses special software, such as a
network shell program, to perform the following tasks:
           Intercepts user data and application commands
           Decides if the command is for the local operating system or for the NOS
           Directs the command to the local operating system or to the network interface card
            (NIC) for processing and transmission onto the network
           Delivers transmissions from the network to the application running on the
            workstation

     Some Windows operating systems may be installed on workstations and servers. The
NT/2000/XP versions of Windows software provide network server capability. Windows 9x
and ME versions only provide workstation support.

     UNIX and Linux can serve as desktop operating systems but are usually found on
high-end computers. These workstations are employed in engineering and scientific
applications, which require dedicated high-performance computers. Some of the specific
applications that are frequently run on UNIX workstations are included in the following list:
          Computer-aided design (CAD)
          Electronic circuit design
          Weather data analysis
          Computer graphics animation
          Telecommunications equipment management

     Most current desktop operating systems include networking capabilities and support
multi-user access. For this reason, it is becoming more common to classify computers and
operating systems based on the types of applications the computer runs. This classification is
based on the role or function that the computer plays, such as workstation or server. Typical
desktop or low-end workstation applications might include word processing, spreadsheets,
and financial management. On high-end workstations, the applications might include
graphical design or equipment management and others as listed above.

     A diskless workstation is a special class of computer designed to run on a network. As
the name implies, it has no disk drives but does have a monitor, keyboard, memory, booting
instructions in ROM, and a network interface card. The software that is used to establish a
network connection is loaded from the bootable ROM chip located on the NIC.

     Because a diskless workstation does not have any disk drives, it is not possible to upload
data from the workstation or download anything to it. A diskless workstation cannot pass a
virus onto the network, nor can it be used to take data from the network by copying this
information to a disk drive. As a result, diskless workstations offer greater security than
ordinary workstations. For this reason, such workstations are used in networks where security
is paramount.

     Laptops can also serve as workstations on a LAN and can be connected through a
docking station, external LAN adapter, or a PCMCIA card. A docking station is an add-on
device that turns a laptop into a desktop.

6.1.2 Servers

     In a network operating system environment, many client systems access and share the
resources of one or more servers. Desktop client systems are equipped with their own
memory and peripheral devices, such as a keyboard, monitor, and a disk drive. Server systems
must be equipped to support multiple concurrent users and multiple tasks as clients make
demands on the server for remote resources.

     Network operating systems have additional network management tools and features that
are designed to support access by large numbers of simultaneous users. On all but the smallest
networks, NOSs are installed on powerful servers. Many users, known as clients, share these
servers. Servers usually have high-capacity, high-speed disk drives, large amounts of RAM,
high-speed NICs, and in some cases, multiple CPUs. These servers are typically configured to
use the Internet family of protocols, TCP/IP, and offer one or more TCP/IP services.

     Servers running NOSs are also used to authenticate users and provide access to shared
resources. These servers are designed to handle requests from many clients simultaneously.
Before a client can access the server resources, the client must be identified and be authorized
to use the resource. Identification and authorization is achieved by assigning each client an
account name and password. The account name and password are then verified by an
authentication service to permit or deny access to the network. By centralizing user accounts,
security, and access control, server-based networks simplify the work of network
administrators.

     Servers are typically larger systems than workstations and have additional memory to
support multiple tasks that are active or resident in memory at the same time. Additional disk
space is also required on servers to hold shared files and to function as an extension to the
internal memory on the system. Also, servers typically require extra expansion slots on their
system boards to connect shared devices, such as printers and multiple network interfaces.

     Another feature of systems capable of acting as servers is the processing power.
Ordinarily, computers have a single CPU, which executes the instructions that make up a
given task or process. In order to work efficiently and deliver fast responses to client requests,
a NOS server requires a powerful CPU to execute its tasks or programs. Single processor
systems with one CPU can meet the needs of most servers if the CPU has the necessary speed.
To achieve higher execution speeds, some systems are equipped with more than one processor.
Such systems are called multiprocessor systems. Multiprocessor systems are capable of
executing multiple tasks in parallel by assigning each task to a different processor. The
aggregate amount of work that the server can perform in a given time is greatly enhanced in
multiprocessor systems.

     Since servers function as central repositories of resources that are vital to the operation
of client systems, these servers must be efficient and robust. The term robust indicates that the
server systems are able to function effectively under heavy loads. It also means the systems
are able to survive the failure of one or more processes or components without experiencing a
general system failure. This objective is met by building redundancy into server systems.
Redundancy is the inclusion of additional hardware components that can take over if other
components fail. Redundancy is a feature of fault tolerant systems that are designed to survive
failures and can be repaired without interruption while the systems are up and running.
Because a NOS depends on the continuous operation of its server, the extra hardware
components justify the additional expense.

     Server applications and functions include web services using Hypertext Transfer
Protocol (HTTP), File Transfer Protocol (FTP), and Domain Name System (DNS). Standard
e-mail protocols supported by network servers include Simple Mail Transfer Protocol (SMTP),
Post Office Protocol 3 (POP3), and Internet Messaging Access Protocol (IMAP). File sharing
protocols include Sun Microsystems Network File System (NFS) and Microsoft Server
Message Block (SMB).

     Network servers frequently provide print services. A server may also provide Dynamic
Host Configuration Protocol (DHCP), which automatically allocates IP addresses to client
workstations. In addition to running services for the clients on the network, servers can be set
to act as a basic firewall for the network. This is accomplished using proxy or Network
Address Translation (NAT), both of which hide internal private network addresses from the
outside network.

     One server running a NOS may work well when serving only a handful of clients. But
most organizations must deploy several servers in order to achieve acceptable performance. A
typical design separates services so one server is responsible for e-mail, another server is
responsible for file sharing, and another is responsible for FTP.

     The concentration of network resources, such as files, printers, and applications on
servers, also makes the data generated easier to back up and maintain. Rather than have these
resources distributed on individual machines, network resources can be located on specialized,
dedicated servers for easy access and back up.

Interactive Media Activity

     PhotoZoom: Server Components

     In this PhotoZoom, the student will view components inside a server.

6.1.3 Client-server relationship

     The client-server computing model distributes processing over multiple computers.
Distributed processing enables access to remote systems for the purpose of sharing
information and network resources. In a client-server environment, the client and server share
or distribute processing responsibilities. Most network operating systems are designed around
the client-server model to provide network services to users. A computer on a network can be
referred to as a host, workstation, client, or server. A computer running TCP/IP, whether it is a
workstation or a server, is considered a host computer.

     Definitions of other commonly used terms are:
           Local host – The machine on which the user currently is working.
           Remote host – A system that is being accessed by a user from another system.
           Server – Provides resources to one or more clients by means of a network.
           Client – A machine that uses the services from one or more servers on a network.

     An example of a client-server relationship is a File Transfer Protocol (FTP) session. FTP
is a universal method of transferring a file from one computer to another. For the client to
transfer a file to or from the server, the server must be running the FTP daemon or service. In
this case, the client requests the file to be transferred. The server provides the services
necessary to receive or send the file.

     The Internet is also a good example of a distributed processing client-server computing
relationship. The client or front end typically handles user presentation functions, such as
screen formatting, input forms, and data editing. This is done with a browser, such as
Netscape or Internet Explorer. Web browsers send requests to web servers. When the browser
requests data from the server, the server responds, and the browser program receives a reply
from the web server. The browser then displays the HTTP data that was received. The server
or back end handles the client's requests for Web pages and provides HTTP or WWW
services.

     Another example of a client-server relationship is a database server and a data entry or
query client in a LAN. The client or front end might be running an application written in the
C or Java language, and the server or back end could be running Oracle or other database
management software. In this case, the client would handle formatting and presentation tasks
for the user. The server would provide database storage and data retrieval services for the user.

     In a typical file server environment, the client might have to retrieve large portions of
the database files to process the files locally. This retrieval of the database files can cause
excess network traffic. With the client-server model, the client presents a request to the server,
and the server database engine might process 100,000 records and pass only a few back to the
client to satisfy the request. Servers are typically much more powerful than client computers
and are better suited to processing large amounts of data. With client-server computing, the
large database is stored, and the processing takes place on the server. The client has to deal
only with creating the query. A relatively small amount of data or results might be passed
across the network. This satisfies the client query and results in less usage of network
bandwidth. The graphic shows an example of client-server computing. Note that the
workstation and server normally would be connected to the LAN by a hub or switch.

     The distribution of functions in client-server networks brings substantial advantages, but
also incurs some costs. Although the aggregation of resources on server systems brings
greater security, simpler access, and coordinated control, the server introduces a single point
of failure into the network. Without an operational server, the network cannot function at all.
Additionally, servers require trained, expert staff to administer and maintain them, which
increases the expense of running the network. Server systems require additional hardware and
specialized software that adds substantially to the cost.

6.1.4 Introduction to NOS
     A computer operating system (OS) is the software foundation on which computer
applications and services run on a workstation. Similarly, a network operating system (NOS)
enables communication between multiple devices and the sharing of resources across a
network. A NOS operates on UNIX, Microsoft Windows NT, or Windows 2000 network
servers.

     Common functions of an OS on a workstation include controlling the computer
hardware, executing programs and providing a user interface. The OS performs these
functions for a single user. Multiple users can share the machine but they cannot log on at the
same time. In contrast, a NOS distributes functions over a number of networked computers. A
NOS depends on the services of the native OS in each individual computer. The NOS then
adds functions that allow access to shared resources by a number of users concurrently.

     Workstations function as clients in a NOS environment. When a workstation becomes a
client in a NOS environment, additional specialized software enables the local user to access
non-local or remote resources, as if these resources were a part of the local system. The NOS
enhances the reach of the client workstation by making remote services available as
extensions of the local operating system.

     A system capable of operating as a NOS server must be able to support multiple users
concurrently. The network administrator creates an account for each user, allowing the user to
logon and connect to the server system. The user account on the server enables the server to
authenticate that user and allocate the resources that the user is allowed to access. Systems
that provide this capability are called multi-user systems.

     A NOS server is a multitasking system, capable of executing multiple tasks or processes
at the same time. The NOS scheduling software allocates internal processor time, memory,
and other elements of the system to different tasks in a way that allows them to share the
system resources. Each user on the multi-user system is supported by a separate task or
process internally on the server. These internal tasks are created dynamically as users connect
to the system and are deleted when users disconnect.

     The main features to consider when selecting a NOS are performance, management and
monitoring tools, security, scalability, and robustness or fault tolerance. The following section
briefly defines each of these features.


Performance

     A NOS must perform well at reading and writing files across the network between
clients and servers. It must be able to maintain fast performance under heavy loads, when
many clients are making requests. Consistent performance under heavy demand is an
96    Cisco Academy – CCNA 3.0 Semester 4
important standard for a NOS.

Management and monitoring

     The management interface on the NOS server provides the tools for server monitoring,
client administration, file, print, and disk storage management. The management interface
provides tools for the installation of new services and the configuration of those services.
Additionally, servers require regular monitoring and adjustment.


Security

     A NOS must protect the shared resources under its control. Security includes
authenticating user access to services to prevent unauthorized access to the network resources.
Security also involves encrypting information to protect it as it travels between clients and
servers.

Scalability

     Scalability is the ability of a NOS to grow without degradation in performance. The
NOS must be capable of sustaining performance as new users join the network and new
servers are added to support them.

Robustness/fault tolerance

     A measure of robustness is the ability to deliver services consistently under heavy load
and to sustain its services if components or processes fail. Using redundant disk devices and
balancing the workload across multiple servers can improve NOS robustness.

6.1.5 Microsoft NT, 2000, and .NET

     Since the release of Windows 1.0 in November 1985, Microsoft has produced many
versions of Windows operating systems with improvements and changes to support a variety
of users and purposes. Figure   summarizes the current Windows OS.

     NT 4 was designed to provide an environment for mission critical business that would
be more stable than the Microsoft consumer operating systems. It is available for both desktop
(NT 4.0 Workstation) and server (NT 4.0 Server). An advantage of NT over previous
Microsoft OSs is that DOS and older Windows programs can be executed in virtual machines
(VMs). Program failures are isolated and do not require a system restart.

     Windows NT provides a domain structure to control user and client access to server
resources. It is administered through the User Manager for Domains application on the
                                          Chapter 6   Introduction to Network Administration   97
domain controller. Each NT domain requires a single primary domain controller which holds
the Security Accounts Management Database (SAM) and may have one or more backup
domain controllers, each of which contains a read-only copy of the SAM. When a user
attempts to log on, the account information is sent to the SAM database. If the information for
that account is stored in the database, the user will be authenticated to the domain and have
access to the workstation and network resources.

     Based on the NT kernel, the more recent Windows 2000 has both desktop and server
versions. Windows 2000 supports “plug-and-play” technology, permitting installation of new
devices without the need to restart the system. Windows 2000 also includes a file encryption
system for securing data on the hard disk.

     Windows 2000 enables objects, such as users and resources, to be placed into container
objects called organizational units (OUs). Administrative authority over each OU can be
delegated to a user or group. This feature allows more specific control than is possible with
Windows NT 4.0.

     Windows 2000 Professional is not designed to be a full NOS. It does not provide a
domain controller, DNS server, DHCP server, or any of the other services that can be
deployed with Windows 2000 Server. The primary purpose of Windows 2000 Professional is
to be part of a domain as a client-side operating system. The type of hardware that can be
installed on the system is limited. Windows 2000 Professional can provide limited server
capabilities for small networks and peer-to-peer networks. It can be a file server, a print server,
an FTP server, and a web server, but will only support up to ten simultaneous connections.

     Windows 2000 Server adds to the features of Windows 2000 Professional many new
server-specific functions. It can also operate as a file, print, web and application server. The
Active Directory Services feature of Windows 2000 Server serves as the centralized point of
management of users, groups, security services, and network resources. It includes the
multipurpose capabilities required for workgroups and branch offices as well as for
departmental deployments of file and print servers, application servers, web servers, and
communication servers.

     Windows 2000 Server is intended for use in small-to-medium sized enterprise
environments. It provides integrated connectivity with Novell NetWare, UNIX, and
AppleTalk systems. It can also be configured as a communications server to provide dialup
networking services for mobile users. Windows 2000 Advanced Server provides the
additional hardware and software support needed for enterprise and extremely large networks.

     Windows .NET Server is built on the Windows 2000 Server kernel, but tailored to
provide a secure and reliable system to run enterprise-level web and FTP sites in order to
compete with the Linux and UNIX server operating systems. The Windows .NET Server
provides XML Web Services to companies which run medium to high volume web traffic.

6.1.6 UNIX, Sun, HP, and LINUX

Origins of UNIX

     UNIX is the name of a group of operating systems that trace their origins back to 1969
at Bell Labs. Since its inception, UNIX was designed to support multiple users and
multitasking. UNIX was also one of the first operating systems to include support for Internet
networking protocols. The history of UNIX, which now spans over 30 years, is complicated
because many companies and organizations have contributed to its development.

     UNIX was first written in assembly language, a low-level set of instructions tied to the
internal architecture of a computer. As a result, UNIX could only run on a specific type of
computer. In 1971, Dennis Ritchie created the C language. In 1973, Ritchie along with fellow
Bell Labs programmer Ken Thompson rewrote the UNIX system programs in C language.
Because C is a higher-level language, UNIX could be moved or ported to another computer
with far less programming effort. The decision to develop this portable operating system
proved to be the key to the success of UNIX. During the 1970s, UNIX evolved through the
development work of programmers at Bell Labs and several universities, notably the
University of California at Berkeley.

     When UNIX first started to be marketed commercially in the 1980s, it was used to run
powerful network servers, not desktop computers. Today, there are dozens of different
versions of UNIX, including the following:
          Hewlett Packard UNIX (HP-UX)
          Berkeley Software Design, Inc. (BSD UNIX), which has produced derivatives
           such as FreeBSD
          Santa Cruz Operation (SCO) UNIX
          Sun Solaris
          IBM UNIX (AIX)

     UNIX, in its various forms, continues to advance its position as the reliable, secure OS
of choice for mission-critical applications that are crucial to the operation of a business or
other organization. UNIX is also tightly integrated with TCP/IP. TCP/IP basically grew out of
UNIX because of the need for LAN and WAN communications.

     The Sun Microsystems Solaris Operating Environment and its core OS, SunOS, is a
high-performance, versatile, 64-bit implementation of UNIX. Solaris runs on a wide variety
of computers, from Intel-based personal computers to powerful mainframes and
supercomputers. Solaris is currently the most widely used version of UNIX in the world for
large networks and Internet websites. Sun is also the developer of the "Write Once, Run
Anywhere" Java technology.

     Despite the popularity of Microsoft Windows on corporate LANs, much of the Internet
runs on powerful UNIX systems. Although UNIX is usually associated with expensive
hardware and is not considered user friendly, recent developments, including the creation of Linux, have
changed that image.

Origins of Linux

     In 1991, a Finnish student named Linus Torvalds began work on an operating system for
an Intel 80386-based computer. Torvalds became frustrated with the state of desktop operating
systems, such as DOS, and the expense and licensing issues associated with commercial
UNIX. Torvalds set out to develop an operating system that was UNIX-like in its operation
but used software code that was open and completely free of charge to all users.

     Torvalds' work led to a worldwide collaborative effort to develop Linux, an open source
operating system that looks and feels like UNIX. By the late 1990s, Linux had become a
viable alternative to UNIX on servers and Windows on the desktop. The popularity of Linux
on desktop PCs has also contributed to interest in using UNIX distributions, such as FreeBSD
and Sun Solaris on the desktop. Versions of Linux can now run on almost any 32-bit processor,
including the Intel 80386, Motorola 68000, Alpha, and PowerPC chips.

     As with UNIX, there are numerous versions of Linux. Some are free downloads from
the web, and others are commercially distributed. The following are a few of the most popular
versions of Linux:
         Red Hat Linux – distributed by Red Hat Software
         OpenLinux – distributed by Caldera
         Corel Linux
         Slackware
         Debian GNU/Linux
         SuSE Linux

     Linux is one of the most powerful and reliable operating systems in the world today.
Because of this, Linux has already made inroads as a platform for power users and in the
enterprise server arena. Linux is less often deployed as a corporate desktop operating system.
Although graphical user interfaces (GUIs) are available to make Linux user-friendly, most
beginning users find Linux more difficult to use than Mac OS or Windows. Currently, many
companies, such as Red Hat, SuSE, Corel, and Caldera, are striving to make Linux a viable
operating system for the desktop.

      Application support must be considered when Linux is implemented on a desktop
system. The number of business productivity applications is limited when compared to
Windows. However, some vendors provide Windows emulation software, such as WABI and
WINE, which enables many Windows applications to run on Linux. Additionally, companies
such as Corel are making Linux versions of their office suites and other popular software
packages.

Networking with Linux

      Recent distributions of Linux have networking components built in for connecting to a
LAN, establishing a dialup connection to the Internet, or connecting to another remote
network. In fact, TCP/IP is integrated into the Linux kernel instead of being implemented as a
separate subsystem.

     Some advantages of Linux as a desktop operating system and network client include the
following:
            It is a true 32-bit operating system.
            It supports preemptive multitasking and virtual memory.
            The code is open source and thus available for anyone to enhance and improve.

6.1.7 Apple

      Apple Macintosh computers were designed for easy networking in a peer-to-peer,
workgroup situation. Network interfaces are included as part of the hardware and networking
components are built into the Macintosh operating system. Ethernet and Token Ring network
adapters are available for the Macintosh.

      The Macintosh, or Mac, is popular in many educational institutions and corporate
graphics departments. Macs can be connected to one another in workgroups and can access
AppleShare file servers. Macs can also be connected to PC LANs that include Microsoft,
NetWare, or UNIX servers.

Mac OS X (10)

      The Macintosh operating system, Mac OS X, is sometimes referred to as Apple System
10.

      Some of the features of Mac OS X are in the GUI called Aqua. The Aqua GUI resembles
a cross between Microsoft Windows XP and Linux X-windows GUI. Mac OS X is designed
to provide features for the home computer, such as Internet browsing, video and photo editing,
and games, while still providing features that offer powerful and customizable tools that IT
professionals need in an operating system.

     The Mac OS X is fully compatible with older versions of the Mac operating systems.
Mac OS X provides a new feature that allows for AppleTalk and Windows connectivity. The
Mac OS X core operating system is called Darwin. Darwin is a UNIX-based, powerful system
that provides stability and performance. These enhancements provide Mac OS X with support
for protected memory, preemptive multitasking, advanced memory management, and
symmetric multiprocessing. This makes Mac OS X a formidable competitor amongst
operating systems.

6.1.8 Concept of service on servers

     Network operating systems (NOSs) are designed to provide network processes to
clients. Network services include the World Wide Web (WWW), file sharing, mail exchange,
directory services, remote management, and print services. Remote management is a powerful
service that allows administrators to configure networked systems that are miles apart. It is
important to understand that these network processes are referred to as services in Windows
2000 and daemons in UNIX and Linux. Network processes all provide the same functions, but
the way processes are loaded and interact with the NOS is different in each operating
system.

     Depending on the NOS, some of these key network processes may be enabled during a
default installation. Most popular network processes rely on the TCP/IP suite of protocols.
Because TCP/IP is an open, well-known set of protocols, TCP/IP-based services are
vulnerable to unauthorized scans and malicious attacks. Denial of service (DoS) attacks,
computer viruses, and fast-spreading Internet worms have forced NOS designers to reconsider
which network services are started automatically.

     Recent versions of popular NOSs, such as Windows 2000 and Red Hat Linux 7, restrict
the number of network services that are on by default. When deploying a NOS, key network
services will need to be enabled manually.

Print services

      When a user decides to print in a networked printing environment, the job is sent to the
appropriate queue for the selected printer. Print queues stack the incoming print jobs and
service them in first-in, first-out (FIFO) order. When a job is added to the queue, it is
placed at the end of the waiting list and printed last. The printing wait time can sometimes be
long, depending on the size of the print jobs at the head of the queue. A network print service
will provide system administrators with the necessary tools to manage the large number of
print jobs being routed throughout the network. This includes the ability to prioritize, pause,
and even delete print jobs that are waiting to be printed.
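
     The FIFO queue behavior and the administrator operations mentioned above (prioritize,
pause, delete) can be modelled in a few lines. This is a toy sketch only; the class and method
names are invented and do not correspond to any real print service API.

```python
# Toy model of a FIFO print queue with administrative operations.
# All names (PrintQueue, submit, prioritize, ...) are hypothetical.
from collections import deque

class PrintQueue:
    def __init__(self):
        self.jobs = deque()          # FIFO: new jobs go to the back
        self.paused = False

    def submit(self, job):
        self.jobs.append(job)        # placed at the end of the waiting list

    def prioritize(self, job):
        self.jobs.remove(job)        # administrator moves a job to the front
        self.jobs.appendleft(job)

    def delete(self, job):
        self.jobs.remove(job)        # administrator removes a waiting job

    def print_next(self):
        if self.paused or not self.jobs:
            return None
        return self.jobs.popleft()   # first in, first out

q = PrintQueue()
for j in ["report.doc", "photo.jpg", "memo.txt"]:
    q.submit(j)
q.prioritize("memo.txt")             # memo jumps ahead of the queue
q.delete("photo.jpg")                # photo is cancelled
order = [q.print_next(), q.print_next()]
print(order)  # ['memo.txt', 'report.doc']
```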

File sharing

      The ability to share files over a network is an important network service. There are many
file sharing protocols and applications in use today. Within a corporate or home network, files
are typically shared using Windows File Sharing or the Network File System (NFS) protocol.
In such environments, an end user may not even know if a given file is on a local hard disk or
on a remote server. Windows File Sharing and NFS allow users to easily move, create, and
delete files in remote directories.

File Transfer Protocol (FTP)

      Many organizations make files available to remote employees, to customers, and to the
general public using FTP. FTP services are made available to the public in conjunction with
web services. For example, a user may browse a website, read about a software update on a
web page, and then download the update using FTP. Smaller companies may use a single
server to provide FTP and HTTP services, while larger companies may choose to use
dedicated FTP servers.

      Although FTP clients must log on, many FTP servers are configured to allow anonymous
access. When users access a server anonymously, they do not need to have a user account on
the system. The FTP protocol also allows users to upload, rename, and delete files, so
administrators must be careful to configure an FTP server to control levels of access.

      FTP is a session-oriented protocol. Clients must open an application layer session with
the server, authenticate, and then perform an action, such as download or upload. If the client
session is inactive for a certain length of time, the server disconnects the client. This inactive
length of time is called an idle timeout. The length of an FTP idle timeout varies depending on
the software.
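
     The idle-timeout mechanism can be illustrated with a toy session server: the "server"
below answers an active client, then drops the connection once the session has been idle past
the timeout. This is a sketch of the mechanism only, not the FTP protocol itself, and the
half-second timeout is artificially short for demonstration.

```python
# Sketch of an FTP-style idle timeout: the server disconnects a client
# whose session is inactive longer than the timeout. Names are invented.
import socket
import threading
import time

IDLE_TIMEOUT = 0.5  # seconds; real FTP servers commonly use several minutes

def serve(listener):
    conn, _ = listener.accept()
    conn.settimeout(IDLE_TIMEOUT)
    try:
        while True:
            data = conn.recv(1024)       # waits at most IDLE_TIMEOUT
            if not data:
                break
            conn.sendall(b"200 OK\n")
    except socket.timeout:
        pass                             # idle too long: drop the session
    finally:
        conn.close()

listener = socket.create_server(("", 0))
threading.Thread(target=serve, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"NOOP\n")
reply = client.makefile().readline().strip()   # active session: answered
time.sleep(1.0)                                # go idle past the timeout
closed = client.recv(1024) == b""              # server has disconnected us
print(reply, closed)
```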

Web services

      The World Wide Web is now the most visible network service. In less than a decade, the
World Wide Web has become a global network of information, commerce, education, and
entertainment. Millions of companies, organizations, and individuals maintain websites on the
Internet. Websites are collections of web pages stored on a server or group of servers.

      The World Wide Web is based on a client/server model. Clients attempt to establish TCP
sessions with web servers. Once a session is established, a client can request data from the
server. HTTP typically governs client requests and server transfers. Web client software
includes GUI web browsers, such as Netscape Navigator and Internet Explorer.
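
     This request/response exchange can be demonstrated end to end against a throwaway
local web server; the page content and handler names below are invented for the example.

```python
# Sketch of the web's client/server model: the client opens a TCP session
# and HTTP governs the request and the server's transfer of data.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class Page(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):        # keep the example output quiet
        pass

server = HTTPServer(("", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("", server.server_address[1])
conn.request("GET", "/index.html")       # client request over the session
resp = conn.getresponse()
status, page = resp.status, resp.read()  # server transfers the page
server.shutdown()
print(status, page)
```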

     Web pages are hosted on computers running web service software. The two most
common web server software packages are Microsoft Internet Information Services (IIS) and
Apache Web Server. Microsoft IIS runs on a Windows platform and Apache Web Server runs
on UNIX and Linux platforms. A Web service software package is available for virtually all
operating systems currently in production.

Domain Name System (DNS)

     The Domain Name System (DNS) protocol translates an Internet name into an IP
address. Many applications rely on the directory services provided
by DNS to do this work. Web browsers, e-mail programs, and file transfer programs all use
the names of remote systems. The DNS protocol allows these clients to make requests to DNS
servers in the network for the translation of names to IP addresses. Applications can then use
the addresses to send their messages. Without this directory lookup service, the Internet would
be almost impossible to use.

Dynamic Host Configuration Protocol (DHCP)

     The purpose of Dynamic Host Configuration Protocol (DHCP) is to enable individual
computers on an IP network to learn their TCP/IP configurations from the DHCP server or
servers. DHCP servers have no information about the individual computers until information
is requested. The overall purpose of this is to reduce the work necessary to administer a large
IP network. The most significant piece of information distributed in this manner is the IP
address that identifies the host on the network. DHCP also allows for recovery and the ability
to automatically renew network IP addresses through a leasing mechanism. This mechanism
allocates an IP address for a specific time period, after which the address is released and can
be reassigned to another host. DHCP allows all this to be done by a DHCP server, which saves
the system administrator a considerable amount of time.
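
     The leasing mechanism can be modelled with a simple address pool: addresses are leased
for a fixed period, a renewal keeps the same address, and expired leases return addresses to
the pool for reassignment. This toy model (all names invented) ignores the actual DHCP
message exchange and timers.

```python
# Toy model of DHCP leasing: allocate, renew, expire, and reassign
# addresses from a pool. Not the DHCP wire protocol.
class DhcpPool:
    def __init__(self, addresses, lease_time):
        self.free = list(addresses)
        self.lease_time = lease_time
        self.leases = {}                 # client MAC -> (address, expiry)

    def request(self, mac, now):
        self.expire(now)
        if mac in self.leases:           # renewal keeps the same address
            addr, _ = self.leases[mac]
        else:
            addr = self.free.pop(0)      # hand out the next free address
        self.leases[mac] = (addr, now + self.lease_time)
        return addr

    def expire(self, now):
        for mac, (addr, expiry) in list(self.leases.items()):
            if expiry <= now:            # lease ran out: reclaim address
                del self.leases[mac]
                self.free.append(addr)

pool = DhcpPool(["", ""], lease_time=3600)
a = pool.request("aa:bb:cc:00:00:01", now=0)
b = pool.request("aa:bb:cc:00:00:01", now=1800)   # renewal: same address
pool.expire(now=7200)                             # first lease has expired
c = pool.request("aa:bb:cc:00:00:02", now=7200)   # pool can reassign
print(a, b, c)
```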

6.2 Network Management

6.2.1 Introduction to network management

     As a network evolves and grows, it becomes a more critical and indispensable resource
to the organization. As more network resources are available to users, the network becomes
more complex, and maintaining the network becomes more complicated. Loss of network
resources and poor performance are results of increased complexity and are not acceptable to
the users. The network administrator must actively manage the network, diagnose problems,
prevent situations from occurring, and provide the best performance of the network for the
users. At some point, networks become too large to manage without automated network
management tools.

      Network Management includes:
          Monitoring network availability
          Improved automation
          Monitoring response time
          Security features
          Traffic rerouting
          Restoration capability
          User registration

      The driving forces behind network management are shown in Figure         and explained
below:
          Controlling corporate assets – If network resources are not effectively controlled,
           they will not provide the results that management requires.
          Controlling complexity – With massive growth in the number of network
           components, users, interfaces, protocols, and vendors, loss of control of the
           network and its resources threatens management.
          Improved service – Users expect the same or improved service as the network
           grows and the resources become more distributed.
          Balancing various needs – Users must be provided with various applications at a
           given level of support, with specific requirements in the areas of performance,
           availability, and security.
          Reducing downtime – Ensure high availability of resources through proper redundant
           design.
          Controlling costs – Monitor and control resource utilization so that user needs can
           be satisfied at a reasonable cost.

      Some basic network management terms are introduced in Figure .

Interactive Media Activity

      Drag and Drop: Network Management Terminology

      When the student has completed this activity, the student will be able to identify the
terminology used in Network Management.

6.2.2 OSI and network management model
     The International Organization for Standardization (ISO) created a committee to produce
a model for network management, under the direction of the OSI group.

     This model has four parts:
          Organization
          Information
          Communication
          Functional

     This is a view of network management from the top-down, divided into four submodels
and recognized by the OSI standard.

     The Organization model describes the components of network management such as a
manager, agent, and so on, and their relationships. The arrangement of these components
leads to different types of architecture, which will be discussed later.

     The Information model is concerned with structure and storage of network management
information. This information is stored in a database, called a management information base
(MIB). The ISO defined the structure of management information (SMI) to define the syntax
and semantics of management information stored in the MIB. MIBs and SMI will be covered
in more depth later.

     The Communication model deals with how the management data is communicated
between the agent and manager process. It is concerned with the transport protocol, the
application protocol, and commands and responses between peers.

     The Functional model addresses the network management applications that reside upon
the network management station (NMS). The OSI network management model categorizes
five areas of function, sometimes referred to as the FCAPS model:
          Fault
          Configuration
          Accounting
          Performance
          Security

     This network management model has gained broad acceptance by vendors as a useful
way of describing the requirements for any network management system.

6.2.3 SNMP and CMIP standards

     To allow for interoperability of management across many different network platforms,
network management standards are required so that vendors can implement and adhere to
these standards. Two main standards have emerged:
           Simple Network Management Protocol – IETF community
           Common Management Information Protocol – Telecommunications community

      Simple Network Management Protocol (SNMP) actually refers to a set of standards for
network management, including a protocol, a database structure specification, and a set of
data objects. SNMP was adopted as the standard for TCP/IP internets in 1989 and became
very popular. An upgrade, known as SNMP version 2c (SNMPv2c), was adopted in 1993.
SNMPv2c provides support for centralized and distributed network management strategies,
and includes improvements in the structure of management information (SMI), protocol
operations, management architecture, and security. SNMPv2c was designed to run on OSI-based
networks as well as TCP/IP based networks. Since then SNMPv3 has been released. To solve
the security shortcomings of SNMPv1 and SNMPv2c, SNMPv3 provides secure access to
MIBs by authenticating and encrypting packets over the network. The common management
information protocol (CMIP) is an OSI network management protocol that was created and
standardized by the ISO for the monitoring and control of heterogeneous networks.

6.2.4 SNMP operation

      Simple Network Management Protocol (SNMP) is an application layer protocol
designed to facilitate the exchange of management information between network devices. By
using SNMP to access management information data, such as packets per second sent on an
interface or number of open TCP connections, network administrators can more easily
manage network performance to find and solve network problems.

      Today, SNMP is the most popular protocol for managing diverse commercial, university,
and research internetworks.

      Standardization activity continues even as vendors develop and release state-of-the-art
SNMP-based management applications. SNMP is a simple protocol, yet its feature set is
sufficiently powerful to handle the difficult problems involved with the management of
heterogeneous networks.

      The organizational model for SNMP-based network management includes four
elements:
           Management station
           Management agent
           Management information base
           Network management protocol
     The network management station (NMS) is usually a standalone workstation, but it may
be implemented over several systems. It includes a collection of software called the network
management application (NMA). The NMA includes a user interface to allow authorized
network managers to manage the network. It responds to user commands and issues
commands to management agents throughout the network. The management agents are key
network platforms and devices, such as hosts, routers, bridges, and hubs, equipped with
SNMP so that they can be managed. They respond to requests for information and requests for
actions from the NMS, such as polling, and may provide the NMS with important but
unsolicited information, such as traps. All the management information of a particular agent is
stored in the management information base on that agent. An agent might keep track of the
following:
            Number and state of its virtual circuits
            Number of certain kinds of error messages received
            Number of bytes and packets in and out of the device
            Maximum output queue length, for routers and other internetworking devices
            Broadcast messages sent and received
            Network interfaces going down and coming up

     The NMS performs a monitoring function by retrieving the values from the MIB. The
NMS can cause an action to take place at an agent. The communication between the manager
and the agent is carried out by an application layer network management protocol. SNMP
uses User Datagram Protocol (UDP) and communicates over ports 161 and 162. It is based on
an exchange of messages. There are three common message types:
            Get – Enables the management station to retrieve the value of MIB objects from
             the agent.
            Set – Enables the management station to set the value of MIB objects at the agent.
            Trap – Enables the agent to notify the management station of significant events.
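
      The three message types can be mimicked with a toy manager/agent pair. In real SNMP
the messages are encoded and carried over UDP ports 161 and 162; in this sketch they are
plain method calls, the class names are invented, and only the sysName object identifier is a
real MIB-II OID.

```python
# Toy agent/manager exchange modelling the Get, Set, and Trap message
# types. Purely illustrative; not an SNMP wire implementation.
class Manager:
    def __init__(self):
        self.traps = []
    def trap(self, event):               # Trap: unsolicited notification
        self.traps.append(event)

class Agent:
    def __init__(self, mib, manager):
        self.mib = mib                   # the agent's local MIB values
        self.manager = manager

    def get(self, oid):                  # Get: manager reads a MIB object
        return self.mib[oid]

    def set(self, oid, value):           # Set: manager writes a MIB object
        self.mib[oid] = value

    def link_down(self, ifname):         # significant event at the agent
        self.manager.trap(f"linkDown on {ifname}")

nms = Manager()
agent = Agent({"": "router1"}, nms)   # sysName.0
name = agent.get("")                  # monitoring: retrieve value
agent.set("", "edge-router1")         # control: set value
agent.link_down("FastEthernet0/0")                   # agent notifies the NMS
print(name, agent.mib[""], nms.traps)
```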

     This model is referred to as a two-tier model. However, it assumes that all network
elements are manageable by SNMP. This is not always the case, as some devices have a
proprietary management interface. In these cases, a three-tiered model is required. A
network manager who wants to obtain information or control this proprietary node
communicates with a proxy agent. The proxy agent then translates the manager’s SNMP
request into a form appropriate to the target system and uses whatever proprietary
management protocol is appropriate to communicate with the target system. Responses from
the target to the proxy are translated into SNMP messages and communicated back to the
manager.

     Network management applications often offload some network management
functionality to a remote monitor (RMON) probe. The RMON probe gathers management
information locally, and then the network manager periodically retrieves a summary of this
information.

      The NMS is an ordinary workstation running a typical operating system. It has a large
amount of RAM to hold all the management applications running at the same time. The
manager runs a typical network protocol stack, such as TCP/IP. The network management
applications rely on the host operating system and on the communication architecture.
Examples of network management applications are CiscoWorks2000 and HP OpenView.

        As discussed before, the manager may be a standalone, centralized workstation sending
out queries to all agents, no matter where they are located. In a distributed network, a
decentralized architecture is more appropriate, with local NMS at each site. These distributed
NMS can act in a client-server architecture, in which one NMS acts as a master server, and
the others are clients. The clients send their data to the master server for centralized storage.
An alternative is that all distributed NMSs have equal responsibility, each with their own
manager databases, so the management information is distributed over the peer NMSs.

6.2.5 Structure of management information and MIBs

        A management information base (MIB) is used to store the structured information
representing network elements and their attributes. The structure itself is defined in a standard
called the structure of management information (SMI), which defines the data types that can
be used to store an object, how those objects are named, and how they are encoded for
transmission over a network.

      MIBs are highly structured repositories for information about a device. Many standard
MIBs exist, but many more proprietary MIBs exist to uniquely manage different vendors'
devices. The original SMI MIB was categorized into eight different groups, totaling 114
managed objects. More groups were added to define MIB-II, which now replaces MIB-I.

        All managed objects in the SNMP environment are arranged in a hierarchical or tree
structure. The leaf objects of the tree, which are the elements that appear at the bottom of the
diagram, are the actual managed objects. Each managed object represents some resource,
activity, or related information that is to be managed. A unique object identifier, which is a
series of numbers in dot notation, identifies each managed object. Each object identifier is
described using Abstract Syntax Notation One (ASN.1).
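
     The dot-notation identifiers described above can be modeled directly. The short sketch
below assumes the standard mib-2 OIDs; the helper names are our own, for illustration only:

```python
# Minimal sketch: SNMP object identifiers (OIDs) in dot notation map
# naturally to tuples of integers, and subtree membership is a prefix test.
# Helper names are illustrative, not part of any SNMP library.

def parse_oid(dotted: str) -> tuple:
    """Convert an OID in dot notation to a tuple of integers."""
    return tuple(int(part) for part in dotted.split("."))

def in_subtree(oid: tuple, prefix: tuple) -> bool:
    """True if oid sits under the given subtree of the MIB hierarchy."""
    return oid[:len(prefix)] == prefix

# sysName.0 lives under the standard mib-2 system group (1.3.6.1.2.1.1)
sys_name = parse_oid("1.3.6.1.2.1.1.5.0")
system_group = parse_oid("1.3.6.1.2.1.1")

print(in_subtree(sys_name, system_group))   # True
```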

        SNMP uses these object identifiers to identify the MIB variables to retrieve or modify.
Objects that are in the public domain are described in MIBs introduced in Requests for
Comments (RFCs), where they are readily accessible.
                                         Chapter 6    Introduction to Network Administration   109

     All vendors are encouraged to make their MIB definitions known. Once an assigned
enterprise value has been given, the vendor is responsible for creating and maintaining its
own MIBs.

6.2.6 SNMP protocol

     The agent is a software function embedded in most networked devices, such as routers,
switches, managed hubs, printers, and servers. It is responsible for processing SNMP
requests from the manager. It is also responsible for the execution of routines that maintain
variables as defined in the various supported MIBs.

     Interaction between the manager and the agent is facilitated by the Simple Network
Management Protocol (SNMP). The term simple comes from the restricted number of
message types that are part of the initial protocol specification. The strategy was designed to
make it easier for developers to build management capabilities into network devices. The
initial protocol specification is referred to as SNMPv1 (version 1).

     There are three types of SNMP messages issued on behalf of an NMS. They are
GetRequest, GetNextRequest and SetRequest. All three messages are acknowledged by the
agent in the form of a GetResponse message. An agent may issue a Trap message in response
to an event that affects the MIB and the underlying resources.
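
     The request handling described above can be sketched with a toy in-memory agent. The
MIB contents and function names below are illustrative only, not a real SNMP implementation:

```python
# Toy sketch of agent-side handling of the three SNMPv1 request types.
# The in-memory "MIB" and the function names are illustrative stand-ins.

mib = {
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "router1",   # sysName.0
    (1, 3, 6, 1, 2, 1, 1, 6, 0): "lab",       # sysLocation.0
}

def get_request(oid):
    """Return the value bound to an exact OID, as a GetResponse would."""
    return mib.get(oid)

def get_next_request(oid):
    """Return the first (oid, value) lexicographically after the given OID."""
    for candidate in sorted(mib):
        if candidate > oid:
            return candidate, mib[candidate]
    return None  # end of MIB view

def set_request(oid, value):
    """Modify a variable; a real agent would check access rights first."""
    mib[oid] = value
    return mib[oid]

print(get_request((1, 3, 6, 1, 2, 1, 1, 5, 0)))      # router1
print(get_next_request((1, 3, 6, 1, 2, 1, 1, 5, 0))) # the sysLocation.0 binding
```

GetNextRequest walking the sorted OID space is what lets a manager traverse an entire MIB
subtree without knowing its contents in advance.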

     The development of SNMPv2c addressed limitations in SNMPv1. The most noticeable
enhancements were the introduction of the GetBulkRequest message type and the addition of
64-bit counters to the MIB. Retrieving information with GetRequest and GetNextRequest was
an inefficient method of collecting information. Only one variable at a time could be solicited
with SNMPv1. The GetBulkRequest addresses this weakness by receiving more information
with a single request. In addition, the 64-bit counters addressed the issue of counters rolling
over too quickly, especially on higher speed links such as Gigabit Ethernet.
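
     The rollover arithmetic behind those 64-bit counters is easy to check. This is a sketch;
the helper name is ours:

```python
# Time for an octet counter of a given width to wrap on a fully loaded link.

def wrap_seconds(counter_bits: int, link_bps: float) -> float:
    """Seconds until a counter of the given width rolls over."""
    max_octets = 2 ** counter_bits
    return max_octets / (link_bps / 8)   # link rate in octets per second

print(round(wrap_seconds(32, 1e9), 1))             # 34.4 — about half a minute
print(wrap_seconds(64, 1e9) > 100 * 365 * 86400)   # True — 64 bits lasts centuries
```

A poller sampling a 32-bit counter less often than every 34 seconds on a saturated Gigabit
link can miss an entire wrap, which is exactly the problem the 64-bit counters remove.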

     The management entity is also referred to as the manager or network management
station (NMS). It is responsible for soliciting information from the agent. The solicitations
are based on very specific requests. The manager processes the retrieved information in a
number of ways. The retrieved information can be logged for later analysis, displayed using a
graphing utility, or compared with preconfigured values to test if a particular condition has
been met.

     Not all manager functions are based on data retrieval. There is also the ability to issue
changes of a value in the managed device. This feature enables an administrator to configure a
managed device using SNMP.
110   Cisco Academy – CCNA 3.0 Semester 4

      The interaction between the manager and the managed device does introduce traffic to
the network. Caution should be taken when introducing managers onto the network.
Aggressive monitoring strategies can negatively affect network performance. Bandwidth
utilizations will go up, which may be an issue for WAN environments. Also, monitoring has a
performance impact on the devices being monitored, since they are required to process the
manager requests. This processing should not take precedence over production services.
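
     The traffic cost of a polling strategy can be estimated with simple arithmetic. The
figures below for device count, variables, and message size are made up for illustration:

```python
# Back-of-the-envelope estimate of polling overhead on a WAN link:
# bytes per request/response exchange, times variables, times devices,
# divided by the polling interval. All figures are illustrative.

def polling_bps(devices: int, vars_per_device: int,
                bytes_per_exchange: int, interval_s: float) -> float:
    """Average bits per second of management traffic generated by polling."""
    total_bytes = devices * vars_per_device * bytes_per_exchange
    return total_bytes * 8 / interval_s

# 50 devices, 10 variables each, ~200 bytes per request/response pair,
# polled every 60 seconds:
print(polling_bps(50, 10, 200, 60))   # ~13.3 kbit/s of steady overhead
```

On a 64 kbit/s WAN circuit that sketch would already consume a fifth of the bandwidth,
which is why polling interval and variable count deserve deliberate choices.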

      A general rule is that a minimum amount of information should be polled as infrequently
as possible. Determine which devices and links are most critical and what type of data is
required.

      SNMP uses UDP as a transport protocol. Since UDP is connectionless and unreliable, it
is possible for SNMP to lose messages. SNMP itself has no provision for guarantee of
delivery, so it is up to the application using SNMP to cope with lost messages.
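
     One common way an application copes with this unreliability is to retry after a timeout.
The skeleton below is illustrative: `send_once` stands in for a real UDP exchange with a
socket timeout, and the function names are our own:

```python
# Since SNMP over UDP gives no delivery guarantee, a management application
# typically retries a request a few times before declaring the agent
# unreachable.

def poll_with_retries(send_once, retries=3):
    """Call send_once until it returns a response or retries are exhausted.

    send_once should return the response, or None on timeout."""
    for attempt in range(retries):
        response = send_once()
        if response is not None:
            return response
    raise TimeoutError("agent did not respond")

# Simulate an agent whose first reply is lost, as a lossy network might do.
attempts = []
def flaky_agent():
    attempts.append(1)
    return "GetResponse" if len(attempts) > 1 else None

print(poll_with_retries(flaky_agent))   # GetResponse, after one retry
```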

      Each SNMP message contains a cleartext string, called a community string. The
community string is used like a password to restrict access to managed devices. SNMPv3
has addressed the security concerns raised by transmitting the community string in cleartext.
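
     A sketch of how an agent might apply community-based access control follows; the
strings and the helper function are illustrative, not drawn from any real agent:

```python
# Community-based access control on the agent. The community string
# travels in cleartext, so this is authentication in name only.

COMMUNITIES = {"public": "ro", "s3cret": "rw"}   # illustrative values

def authorize(community: str, wants_write: bool) -> bool:
    """Accept the request only if the community grants enough access."""
    access = COMMUNITIES.get(community)
    if access is None:
        return False                       # unknown community: drop request
    return access == "rw" or not wants_write

print(authorize("public", wants_write=False))  # True
print(authorize("public", wants_write=True))   # False
```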

      An example of what an SNMPv2c message looks like is illustrated in the accompanying
figure. A detailed presentation of the protocol can be found in the Internet standard RFC 1905.

      The fact that the community string is cleartext is no surprise to anyone who has studied
the Internet Protocol (IP) suite. All fields specified in the protocol suite are cleartext,
except for security authentication and encryption specifications.

      The community string was essentially a security placeholder until the SNMPv2 working
group could ratify security mechanisms. The efforts were referred to the SNMPv3 working
group. All SNMP-based management applications need to be configured to use the
appropriate community strings. Some organizations frequently change the community string
values to reduce the risk of malicious activity from the unauthorized use of the SNMP service.

      In spite of the weakness associated with community-based authentication, management
strategies are still based on SNMPv1. Cisco devices do support SNMPv3 message types and
the increased security capabilities, but most management software applications do not yet
support them.

      SNMPv3 supports the concurrent existence of multiple security models.

6.2.7 Configuring SNMP

     In order to have the NMS communicate with networked devices, the devices must have
SNMP enabled and the SNMP community strings configured. These devices are configured
using the command line syntax described in the following paragraphs.

     More than one read-only string is supported. The default on most systems for this
community string is public. It is not advisable to use the default value in an enterprise
network. To set the read-only community string used by the agent, use the following command:

     Router(config)#snmp-server community string ro
          string – Community string that acts like a password and permits access to the
           SNMP protocol
          ro – (Optional) Specifies read-only access. Authorized management stations are
           only able to retrieve MIB objects.

     More than one read-write string is supported. All SNMP objects are available for write
access. The default on most systems for this community string is private. It is not advisable to
use this value in an enterprise network. To set the read-write community string used by the
agent, use the following command:

     Router(config)#snmp-server community string rw
          rw – (Optional) Specifies read-write access. Authorized management stations are
           able to both retrieve and modify MIB objects

     There are several strings that can be used to specify the location of the managed device
and the main system contact for the device.

     Router(config)#snmp-server location text
     Router(config)#snmp-server contact text
          text – String that describes the system location or contact information

     These values are stored in the MIB objects sysLocation and sysContact.

6.2.8 RMON

     RMON is a major step forward in internetwork management. It defines a remote
monitoring MIB that supplements MIB-II and provides the network manager with vital
information about the network. The remarkable feature of RMON is that while it is simply a
specification of a MIB, with no changes in the underlying SNMP protocol, it provides a
significant expansion in SNMP functionality.
      With MIB-II, the network manager can obtain information that is purely local to
individual devices. Consider a LAN with a number of devices on it, each with an SNMP
agent. An SNMP manager can learn of the amount of traffic into and out of each device, but
with MIB-II it cannot easily learn about the traffic on the LAN as a whole.

      Network management in an internetworked environment typically requires one monitor
per subnetwork.

      The RMON standard, originally designated as IETF RFC 1271 and now RFC 1757, was
designed to provide proactive monitoring and diagnostics for distributed LAN-based networks.
Monitoring devices, called agents or probes, on critical network segments allow for
user-defined alarms to be created and a wealth of vital statistics to be gathered by analyzing
every frame on a segment.

      The RMON standard divides monitoring functions into nine groups to support Ethernet
topologies and adds a tenth group in RFC 1513 for Token Ring-unique parameters. The
RMON standard was crafted to be deployed as a distributed computing architecture, where
the agents and probes communicate with a central management station, a client, using Simple
Network Management Protocol (SNMP). These agents have defined SNMP MIB structures
for all nine or ten Ethernet or Token Ring RMON groups, allowing interoperability between
vendors of RMON-based diagnostic tools. The RMON groups are defined as:
          Statistics group – Maintains utilization and error statistics for the subnetwork or
           segment being monitored. Examples are bandwidth utilization, broadcast, multicast,
           CRC alignment, fragments, and so on.
          History group – Holds periodic statistical samples from the statistics group and
           stores them for later retrieval. Examples are utilization, error count, and packet
           count.
          Alarm group – Allows the administrator to set a sampling interval and threshold
           for any item recorded by the agent. Examples are absolute or relative values and
           rising or falling thresholds.
          Host group – Defines the measurement of various types of traffic to and from hosts
           attached to the network. Examples are packets sent or received, bytes sent or
           received, errors, and broadcast and multicast packets.
          Host TopN group – Provides a report of TopN hosts based on host group statistics.
          Traffic matrix group – Stores errors and utilization statistics for pairs of
           communicating nodes of the network. Examples are errors, bytes, and packets.
          Filter group – A filter engine that generates a packet stream from frames that match
           the pattern specified by the user.
          Packet capture group – Defines how packets that match filter criteria are buffered.
          Event group – Allows the logging of events, including generated traps, to the
           manager, together with time and date. Examples are customized reports based upon
           the type of alarm.
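
     As an illustration of the statistics group, segment utilization over one sampling
interval reduces to simple arithmetic. The helper and its figures are our own, not
RMON-mandated names:

```python
# Utilization as the RMON statistics group reports it: octets seen on the
# segment over a sample interval, as a fraction of what the medium could
# carry in that interval.

def utilization_percent(octets: int, interval_s: float, link_bps: float) -> float:
    """Percent utilization of a segment over one sampling interval."""
    bits_seen = octets * 8
    capacity = link_bps * interval_s
    return 100.0 * bits_seen / capacity

# 7,500,000 octets observed in 30 s on a 10 Mb/s Ethernet segment:
print(utilization_percent(7_500_000, 30, 10e6))   # 20.0
```

The history group then simply stores a time series of such samples for later retrieval, and
the alarm group compares each new sample against configured thresholds.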

Interactive Media Activity

     Matching: RMON Matching Activity

     After completing this activity, the student will be able to explain how RMON operates
and define its terms.

6.2.9 Syslog

     The Cisco syslog logging utility is based on the UNIX syslog utility. System events are
usually logged to the system console unless disabled. The syslog utility is a mechanism for
applications, processes, and the operating system of Cisco devices to report activity and error
conditions. The syslog protocol is used to allow Cisco devices to issue these unsolicited
messages to a network management station.

     Every syslog message logged is associated with a timestamp, a facility, a severity, and a
textual log message. These messages are sometimes the only means of gaining insight into
some device misbehaviors.

     Severity level indicates the critical nature of the error message. There are eight levels
of severity, 0-7, with level 0 (zero) being the most critical, and level 7 the least critical. The
levels are as follows:
          0 Emergencies
          1 Alerts
          2 Critical
          3 Errors
          4 Warnings
          5 Notifications
          6 Informational
          7 Debugging

     The facility and severity level fields are used for processing the messages. Facility
types local0 through local7 are provided for custom log message processing. The Cisco IOS
defaults to severity level 6, informational. This setting is configurable.
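
     The way a configured trap level filters messages can be sketched as follows;
`forwarded` is an illustrative helper mirroring the eight levels above, not an IOS mechanism:

```python
# Only messages at or below the configured severity number (that is, at
# least as critical) are forwarded to the logging destination.

SEVERITIES = ["emergencies", "alerts", "critical", "errors",
              "warnings", "notifications", "informational", "debugging"]

def forwarded(message_level: int, trap_level: int = 6) -> bool:
    """True if a message passes the configured trap level (default 6)."""
    return message_level <= trap_level

print(forwarded(3))   # True  — errors pass at the default level
print(forwarded(7))   # False — debugging is suppressed
```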

     In order to have the NMS receive and record system messages from a device, the device
must have syslog configured. Below is a review of the command line syntax used to
configure these devices.

      To enable logging to all supported destinations:

           Router(config)#logging on

      To send log messages to a syslog server host, such as CiscoWorks2000:

           Router(config)#logging {hostname | ip-address}

      To set logging severity level to level 6, informational:

           Router(config)#logging trap informational

      To include timestamp with syslog message:

           Router(config)#service timestamps log datetime


      An understanding of the following key points should have been achieved:
          The functions of a workstation and a server
          The roles of various equipment in a client/server environment
          The development of Networking Operating Systems (NOS)
          An overview of the various Windows platforms
          An overview of some of the alternatives to Windows operating systems
          Reasons for network management
          The layers of OSI and network management model
          The type and application of network management tools
          The role that SNMP and CMIP play in network monitoring
          How management software gathers information and records problems
          How to gather reports on network performance