Bandwidth Management for Corporate Intranets
By Chuck Semeria

Chuck Semeria is a marketing engineer in the network systems division at 3Com. He previously developed classroom and independent study courses for the education services department in the customer services organization. Prior to joining 3Com, Chuck was the senior course developer and instructor for Adept, a robotics and vision systems company. Before that, he taught mathematics and computer science in California high schools and junior colleges. Chuck is a graduate of the University of California at Davis.

Contents

Corporate Intranets
Emergence of TCP/IP
Explosion of Corporate Intranets
Challenges for Network Managers
Supporting Intranet Applications
Monitoring Intranet Traffic Flows with RMON/RMON2
Remote Monitoring (RMON)
RMON MIB
RMON2 MIB
Monitoring LAN Traffic with RMON and RMON2
RMON and RMON2
Monitoring Switched Environments
Monitoring WAN Environments with RMON2
Optimizing LAN Performance
Switching at the Edge of the LAN
Intelligent Switching at the LAN Core
Fast IP for 3D Networking
Optimizing WAN Performance
WANs Are Different from LANs
The WAN Is the Network Bottleneck
Router Software Features Preserve WAN Bandwidth
Demand Circuits
Compression
Bandwidth Aggregation
Data Prioritization
Protocol Reservation
Session Fairness
Packet Ranking
Multicast Technologies
Resource Reservation Protocol (RSVP)
Server Access
Summary
Appendix A: Remote Network Monitoring MIB (RFC 1757)
Appendix B: Token Ring Extensions to the Remote Network Monitoring MIB (RFC 1513)
Appendix C: Remote Network Monitoring MIB Version 2 (RFC 2021)
Acronyms and Abbreviations
For More Information

The task of managing bandwidth has become increasingly complex as enterprises have evolved from highly structured SNA and X.25 networks, to interconnected LANs supporting client/server computing, and most recently to the corporate intranet model. The deployment of new applications such as Netscape Navigator, Microsoft Internet Explorer, and PointCast on enterprise networks can have a significant impact on network support requirements. These applications dramatically alter traffic and usage patterns while requiring additional bandwidth, reduced latency, and less jitter. This paper describes the emergence of corporate intranets, the new demands that intranets place on the computing environment, and what can be done to proactively ensure that intranet applications will have an adaptive environment that permits them to coexist with legacy applications and computing models.

Corporate Intranets
Whole generations of applications have been developed around various computing models, each with its own requirements for network support (Figure 1). In the “good old days” of host computing, applications required minimal bandwidth and were relatively time sensitive, and traffic flows were deterministic between dumb terminals and the mainframe. Capacity planning involved designing efficient topologies, scaling switches, and sizing trunk lines. These were relatively straightforward problems, since the application environment was stable and user requirements were well understood.

[Figure 1 charts the evolution from terminal-host computing (CICS, IMS, PROFS, OS/400) through client/server (SAP, NetWare, groupware) to the corporate intranet (Netscape Navigator, Internet Explorer, PointCast), with traffic shifting from text transactions to bursty files to mixed data and multimedia.]

Figure 1. Evolution of Corporate Networks

As networks evolved toward interconnected LANs supporting the client/server model, planning became more difficult. Client/server applications were much more bandwidth intensive and extremely time sensitive. Traffic flows were distributed across the entire network, although they were relatively deterministic. Network managers were faced with the more complex challenge of defining a hierarchy and ensuring that traffic flowed across it, but deterministic traffic flows aided in the development of applications that successfully addressed business needs.

The intranet model is the latest step in the evolution of enterprise networks to a peer-to-peer computing paradigm. The advent of the corporate intranet is replacing traditional client/server applications with new concepts of information sharing and Web navigation. Emerging intranet applications are both bandwidth intensive and time sensitive, often requiring support for voice, video, and data multimedia applications across a common infrastructure. To make things even more challenging, these applications can be deployed by individual workgroups in an unstructured manner without centralized planning. This results in peer-to-peer flows that are much less deterministic than traditional client/server applications while consuming unpredictable amounts of bandwidth. The corporate intranet model places new demands on the networking infrastructure:

• Intranets provide their user community with access 24 hours a day, 7 days a week.
• Intranets require immediate connectivity to any local or remote site in the world.
• Intranet users expect instant access to information without restriction.

The term “intranet” is broadly used to describe the application of Internet technologies within a private corporate computing environment (Figure 2). Intranets take advantage of the large family of open standards protocols and multiplatform support (desktops to mainframes) that have emerged from the Internet to more effectively share information across the enterprise. The goal is to seamlessly link the organization’s workforce and information to make employees more productive and information more widely available.
[Figure 2 contrasts Web-based applications on the public Internet (unsecure, TCP/IP, limited management) with legacy applications on the private network (secure, multiprotocol, managed); the intranet combines the two: Web-based, secure, TCP/IP, and managed.]
Figure 2. Intranets Combine Internet Technology with Corporate Network Control

Intranet deployment requires an existing network that supports the TCP/IP protocol suite and related Internet applications. TCP/IP provides the fundamental set of communication protocols that permit basic connectivity between networks and individual desktop systems. Internet applications (electronic mail, Web browsing, file transfer, terminal emulation, etc.) provide the tools and services that allow workers to share information across one or more local area networks (LANs), a wide area network (WAN), or the Internet. These applications significantly increase demand for bandwidth (Figure 3).


[Figure 3 plots overhead transaction packets (hidden packets) against user information bytes (visible data), rising from terminal mode through client/server to Web text, Web graphics, multimedia Web, and full multimedia. Source: NCRI.]

Figure 3. Intranet Applications Increase Bandwidth Demands

Of all the Internet applications, the Web browser has been the greatest driver of corporate intranet development. Browsers support a number of features that make them extremely powerful information-gathering tools. They have been called the “Swiss army knife” interface to information because they are platform independent and provide users with transparent access to FTP, Gopher, Telnet, Network News, e-mail, online content search, and interactive database queries. The tremendous popularity of Web browsers lies in their potential to act as the universal client interface for any client/server application.

Emergence of TCP/IP
Over the years, efforts to simplify and consolidate network operations have reduced the number of different protocols running across enterprise networks, with TCP/IP emerging as the clear winner. Looking back, there are a number of reasons for the inevitable emergence of TCP/IP as the winner of the multiprotocol sweepstakes:

• TCP/IP has a solid track record based on years of operational experience on the Internet.
• TCP/IP was originally designed to provide efficient communication across both LANs and WANs; over the years, it has demonstrated its ability to scale to global proportions.
• TCP/IP provides universal information exchange between all types of end systems—PCs, Macintosh computers, UNIX platforms, supercomputers (Crays, NECs), mainframes (IBM, Tandem, etc.), and minicomputers (HP 3000, AS/400).
• TCP/IP has developed a robust set of network management tools—Simple Network Management Protocol (SNMP), Remote Monitoring (RMON), RMON2—and network management platform support from leading vendors.
• TCP/IP has an active development community (the Internet Engineering Task Force, or IETF) to continually enhance and extend existing tools, develop new tools, and extend the protocol suite.
• Internet applications run over TCP/IP—not IPX, AppleTalk, DECnet, OSI, or VINES. To deploy native Internet applications on desktops, the network must be running TCP/IP.

Explosion of Corporate Intranets
The public Internet is based on TCP/IP technology that has been evolving for the past 30 years into a remarkable tool that allows people to communicate and share information. Until recently, the riches of the Internet were not readily available to novice computer users due to the lack of an easy-to-use interface. It was the development of point-and-click Web browsers like NCSA Mosaic, Netscape Navigator, and Microsoft Internet Explorer that fueled the recent growth of the Internet and the World Wide Web.

Web browsers were first adopted by organizations to support inter-enterprise communications across the Internet. Typically, an enterprise developed an Internet presence and deployed a Web server to provide sales, marketing, and support information to external users. As time passed, organizations realized the financial and strategic benefits of using Web technology to provide information to internal users—the corporate intranet. Many believe that the next evolutionary step will be the development of secure “extended” intranets where Web technology is used to reach key external customers and suppliers and to support all business transactions electronically (Figure 4).

[Figure 4 shows a desktop client running a Web browser communicating via HTTP with a departmental Web server, whose home page connects through the common gateway interface to a back-end database server.]
Figure 4. Common Intranet Information Sharing Model

According to Forrester Research (Cambridge, MA), 22 percent of the Fortune 1000 companies currently use Web servers for internal applications, and another 40 percent are seriously considering their deployment in the near future. Zona Research (Redwood City, CA) predicts that by the year 2000 there will be nearly 3.3 million private intranet servers, as opposed to 650,000 for the public Internet. There are many forces driving organizations to migrate their internal information management systems to the intranet model:

• Intranets flatten the information delivery system within an organization, delivering content on demand. They foster the rapid exchange of information required to support reduced product life cycles, increased cost pressures, demand for customer service, and rapidly changing markets.
• The information on Web servers has the potential to be the latest and most accurate available. Information can be cross-linked to provide point-and-click access to any other document within the organization or around the world.
• Intranets are based on open standards. This plays an important role in overcoming problems with incompatible computing environments (UNIX, MacOS, DOS, Windows 3.1, Windows 95, Windows NT, OS/2, VMS, etc.) that previously prevented universal access to information.
• Desktop browsers can be modified to function as the client interface for almost any client/server application. Since they are intuitive and extremely easy to use, browsers do not require a large investment in training.
• Intranets provide a high level of security. Many organizations do not have a direct connection to the global Internet, and those that do can protect their intranet using an Internet firewall system (Figure 5). Java-enabled browsers support strong memory protection, encryption and signatures, and run-time verification.

[Figure 5 shows a security perimeter containing a bastion host and a public information server, with the firewall system separating the corporate intranet from the Internet and from dial-in users.]

Figure 5. Internet Firewall System

• The prevailing desktop-centric model of computing is implemented by the deployment of heterogeneous “fat clients.” A fat client is a desktop system that is loaded with a complex operating system, local configuration files, a number of popular desktop applications, and a variety of incompatible client applications for each network service. In contrast, the network-centric model of computing motivated by the corporate intranet is based on the deployment of “thin clients” with Java- and/or ActiveX-enabled browsers (Figure 6). By deploying thin clients, network managers can greatly reduce the complexity and administrative expense of running their networks. Sun Microsystems estimates that the annual cost of operating Java-based clients will be less than $2,500 per desktop, compared to a range of $10,000 to $15,000 per desktop for today’s fat clients.

[Figure 6 shows a thin client running a Java “Webtop” that boots and loads applets from a desktop server, reaches file, print, mail, directory, and database servers over TCP/IP, and accesses legacy systems through a TN3270 application server. Source: Sun Microsystems.]
Figure 6. “Thin Client” Implementing a Java “Webtop”

• Internet applications provide better performance within a corporate intranet than they do across the Internet. Most individual systems are connected to the Internet at data rates from 28.8/33.6 Kbps to the emerging 56 Kbps dial rates. Corporate users may have access at rates ranging from 128 Kbps (ISDN BRI) up to T1/E1 rates (1.544/2.048 Mbps), but this may still seem relatively slow since the bandwidth is shared by many other workstations. In contrast, corporate intranets generally provide each desktop with access to local Web servers at data rates of 10/100 Mbps.
• Intranets are based on inexpensive technology that is widely deployed in many private networks. If the enterprise has TCP/IP on its network, it can easily install Web servers and browsers. This allows organizations to slowly migrate to the intranet model without discarding a single computing platform or application.
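The access-rate gap described above is easy to quantify. The sketch below is illustrative only (the page size is hypothetical, the rates are nominal line rates, and protocol overhead is ignored); it computes how long the same 1 MB Web page takes to transfer at each rate:

```python
# Rough transfer-time arithmetic behind the access-rate comparison:
# the same hypothetical 1 MB page at dial, ISDN, T1, and LAN rates.
# Nominal line rates only; protocol overhead and sharing are ignored.

PAGE_BITS = 1_000_000 * 8            # a 1 MB page, in bits

rates_bps = {
    "28.8 Kbps modem": 28_800,
    "128 Kbps ISDN BRI": 128_000,
    "1.544 Mbps T1": 1_544_000,
    "10 Mbps Ethernet": 10_000_000,
}

for name, bps in rates_bps.items():
    print(f"{name}: {PAGE_BITS / bps:.1f} s")
```

Even before sharing and overhead are counted, the dial user waits minutes for a page that a LAN-attached intranet user receives in under a second.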

Challenges for Network Managers
In many enterprises, the deployment of internal Web servers is a grassroots operation. The proliferation of HyperText Markup Language (HTML) editors has eased the burden of content creation, and Web servers are springing up all over organizations, in many cases without the knowledge or cooperation of the local network administrator or the traditional IS department. Network administrators who have been accustomed to carefully planning for the traffic flows and bandwidth requirements associated with the rollout of a network operating system upgrade or a new client/server application are facing new challenges with the rapid migration to the intranet model:

• Internal Web sites with large graphic or multimedia content will make increasing bandwidth demands on the corporate infrastructure. The total volume of network traffic will grow as the content from bandwidth-intensive pages is transmitted across the organization’s infrastructure.
• The rollout of new applications based on Java and/or ActiveX applets can result in additional bandwidth and performance challenges. In the current client/server model, specialized client software is preinstalled on the user’s hard drive for each server that needs to be accessed. In the intranet model, applets are downloaded from the Web server to the client as they are needed. Although this greatly reduces LAN administrative costs, it can cause network congestion and slow response times.
• Intranet traffic flows will be impossible to predict and constantly changing. Since the goal of the intranet is to enable the free exchange of information, network managers will not be able to predict in advance where traffic bottlenecks will appear. In contrast to the traditional client/server model, in which large pipes can be provisioned to provide adequate bandwidth at the core of the network, user-developed Web servers are typically located on shared media at the edge of the network. The infrastructure at the periphery may not have sufficient capacity to support access from clients located across the enterprise. In addition, narrow WAN communication pipes may experience increased traffic volumes. Intranets can result in severe network congestion and performance problems at random locations across the enterprise network infrastructure.
• All of these changes will have a negative impact on the performance of existing client/server and legacy applications. These applications must now fight for basic bandwidth in a network environment with increasing traffic volumes and unpredictable, unstable loads.

Supporting Intranet Applications

In an intranet environment, it is imperative that network managers deploy RMON and RMON2 probes to gather baseline information about traffic flows and application trends. RMON and proactive monitoring are the keys to optimizing existing LAN and WAN resources, uncovering bottlenecks before they appear, establishing policies regarding the use of applications on the enterprise intranet, and making wise decisions about capacity planning and future growth.

After potentially disruptive trends are discovered via RMON/RMON2, network managers have several options for meeting the challenges to efficient network operation. They can provide additional LAN bandwidth and improve performance by deploying layer 2 switches at the network edge. They can provide increased traffic control and packet throughput by deploying intelligent switches with Fast IP at the network core. Finally, they can use bandwidth grooming features to overcome bottlenecks resulting from increased bandwidth demands on narrow WAN links. The remainder of this paper focuses on each of these options and shows how they can be combined to optimize your network for intranet applications.

Monitoring Intranet Traffic Flows with RMON/RMON2
In recent years, SNMP has become the dominant mechanism for the management of distributed network equipment. SNMP uses agent software embedded within each network device to collect network traffic information and device statistics (Figure 7). Each agent continually gathers statistics, such as the number of packets received, and records them in the local system’s management information base (MIB). A network management station (NMS) can obtain this information by sending queries to an agent’s MIB, a process called polling.

[Figure 7 shows an NMS (SNMP manager) polling each counter in the MIB of a network device (SNMP agent); the agent responds with counter values, which the manager’s history and analysis tools accumulate.]
Figure 7. SNMP Manager and Agent Communication

While MIB counters record aggregate statistics, they do not provide a historical analysis of network traffic. To compile a comprehensive view of traffic flows and rates of change over a period of time, management stations must poll SNMP agents at regular intervals for each specific counter. In this way, network managers can use SNMP to evaluate the performance of the network and uncover traffic trends, such as which segments are reaching their maximum capacity or are experiencing large amounts of corrupted traffic. Unfortunately, the traditional SNMP model has several limitations when deployed in a corporate intranet environment:

• Constant NMS polling does not scale. Management traffic alone can cause increasing congestion and eventually gridlock at known network bottlenecks—especially bandwidth-constrained WAN links.
• SNMP places the entire burden of information gathering on the NMS. A management station that has the CPU power to collect information from ten network segments may not be able to monitor 50 or more network segments.


• Most of the MIBs defined for SNMP provide information only about each individually monitored system. An SNMP manager can discover the amount of traffic that flows into and out of each device, but cannot readily acquire information about traffic flows on the intranet as a whole. For example, SNMP does not easily permit the generation of a traffic matrix that displays which clients are talking to which servers across an enterprise intranet.
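The polling burden described above can be illustrated with a short sketch. A real NMS would use an SNMP library to query a remote agent; here a hypothetical `snmp_get` and an in-memory counter stand in for the agent so the example is self-contained:

```python
import time

# Stand-in for a real SNMP GET against an agent's MIB; a dict
# simulates the agent's ifInOctets counter (values are invented).
agent_mib = {"ifInOctets": 0}

def snmp_get(oid):
    """Simulate polling one counter from a remote agent."""
    agent_mib[oid] += 125_000        # pretend traffic arrived since last poll
    return agent_mib[oid]

def poll_history(oid, samples, interval=0.0):
    """Build the historical view SNMP itself does not keep: the NMS
    must poll at regular intervals and store the per-interval deltas."""
    history, last = [], snmp_get(oid)
    for _ in range(samples):
        time.sleep(interval)
        now = snmp_get(oid)
        history.append(now - last)   # octets seen in this interval
        last = now
    return history

print(poll_history("ifInOctets", 3))
```

Note that every sample costs a round trip per counter per device; multiplied across hundreds of counters and segments, this is the management traffic that congests WAN links.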

Remote Monitoring (RMON)

Probably the most important enhancement to the basic set of SNMP standards is the RMON specification. RMON was developed by the IETF to overcome the limitations of SNMP and permit more efficient management of large distributed networks. Similar to SNMP, RMON is based on a client/server architecture (Figure 8). An RMON probe, or agent, functions as a server that maintains a history of statistical data collected by the probe. This eliminates the need for constant polling by the NMS to build a historical view of network performance trends. RMON agents can be deployed either as stand-alone devices (with a dedicated CPU and memory) or as intelligent agents embedded in a hub, switch, or router. The NMS functions as the client, structuring and analyzing the data collected from probes to diagnose network problems. Communication between the NMS and distributed RMON probes takes place via SNMP, providing the basis for vendor-independent remote network analysis.

[Figure 8 shows an NMS (RMON client) requesting the history table from a network device (RMON server), which maintains a local history of RMON MIB counters and returns it in a single response for the manager’s analysis tools.]
Figure 8. RMON Manager and Probe Communication

RMON’s ability to archive statistical information allows network managers to develop baseline models for network performance and usage. Trending models permit the network manager to analyze a set of counters over a period of time in order to identify regular, predictable changes in network utilization and to determine when bandwidth-intensive tasks should be scheduled or when additional bandwidth is needed. After these baseline models are in place, a network manager can define thresholds that indicate when a segment is operating in an abnormal state. If traffic on a monitored segment deviates from these thresholds, the probe can be configured to transmit an alarm to the NMS.

When the NMS receives an alarm, it can activate or query a broad range of advanced RMON capabilities to diagnose the problem. For example, the HostTopN group within RMON can be used to determine which hosts are doing the majority of talking on a segment. The Matrix group can be employed to determine who is talking to whom and whether the conversations are on the local segment or across the intranet. With this information, the network manager can employ the Host group to focus on the specific host or hosts responsible for the alarm. Finally, the Filter and Packet Capture groups permit the probe to capture packets for examination by a protocol analyzer to assist in resolving the abnormality.

When compared to traditional SNMP, RMON has a number of benefits as an intranet management tool:

• RMON dramatically reduces the amount of network management traffic. Any reduction in the amount of management traffic across WAN links can offer significant performance benefits in the operation of a global intranet. Since WAN pipes always provide less bandwidth than LAN data links, network managers should seriously consider the use of RMON probes rather than SNMP polling to make the most efficient use of scarce WAN bandwidth.
• RMON provides information about the entire intranet, relating traffic flows from all of its devices, servers, applications, and users. While device-specific management tools are important, they cannot provide the overall view of network performance and traffic flows that is required to successfully manage a corporate intranet.
• RMON increases network management productivity and offers cost savings. A recent study by McConnell Consulting (Boulder, CO) found that, when compared to traditional management tools, RMON allows a management team to support as many as two and a half times the number of existing users and segments without adding new staff (Figure 9).

[Figure 9 plots segments managed per staff person (0 to 30) for networks of 11 to 50, 51 to 100, and 101 to 200 segments, with and without RMON. Source: McConnell Consulting, Inc.]

Figure 9. RMON More Than Doubles Network Management Capacity
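The threshold-and-alarm behavior described above can be sketched in a few lines. The rising/falling hysteresis below mirrors the intent of the RMON alarm group (a rising alarm is not rearmed until the variable falls back below the falling threshold), though the thresholds, samples, and function names are illustrative, not taken from the specification:

```python
# Sketch of rising/falling threshold alarms with hysteresis, in the
# spirit of the RMON alarm group. Values are invented for illustration.

def rmon_alarms(samples, rising, falling):
    alarms, armed = [], True
    for i, value in enumerate(samples):
        if armed and value >= rising:
            alarms.append((i, "risingAlarm", value))
            armed = False            # suppress repeats until a falling cross
        elif not armed and value <= falling:
            alarms.append((i, "fallingAlarm", value))
            armed = True
    return alarms

# Utilization samples (percent) on a monitored segment
print(rmon_alarms([30, 55, 80, 85, 60, 20, 75], rising=70, falling=40))
```

The hysteresis is what keeps a segment hovering near its threshold from flooding the NMS with duplicate alarms.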

RMON MIB

The RMON MIB was first published in November 1991 as RFC 1271. The specification focused on Ethernet, providing statistics and monitoring network traffic at the MAC layer. With successful interoperability testing, the RMON MIB advanced to draft standard status in December 1994. After minor revisions employing the MIB conventions for SNMPv2, it was reissued in February 1995 as RFC 1757. The Token Ring extensions to the RMON MIB were published in September 1993 as RFC 1513. Appendixes A, B, and C list and describe the functional groups for the RMON MIB, the Token Ring extensions, and RMON2.

While the IETF has concentrated its efforts on developing the Ethernet and Token Ring MIBs, vendors are designing proprietary MIBs to support other MAC-layer technologies. It is relatively easy to develop 100 Mbps Ethernet probes, since Fast Ethernet employs the same access method as 10 Mbps Ethernet. Likewise, support for FDDI is relatively straightforward since the technology is quite similar to Token Ring.

In Figure 10, the RMON probe monitors all traffic on the LAN to which it is attached. The probe can collect MAC-layer statistics, maintain a MAC-layer history, create TopN tables, generate alarms based on MAC-layer thresholds, and analyze MAC-layer traffic flows. The probe can also monitor all of the traffic into and out of each workstation, server, and router.

[Figure 10 shows an RMON1 probe attached to a hub on a local segment that also connects a server, workstations, and Routers A and B.]
Figure 10. RMON Monitors MAC-Layer Traffic Flows

RMON has certain limitations. An RMON MAC-layer probe cannot determine the ultimate source of a packet that enters the local segment via a router, the ultimate destination of a packet that exits the local segment via a router, or the ultimate source and destination of traffic that simply traverses the monitored segment. For example, in Figure 10 the MAC-layer probe can identify significant traffic flows between Router A and Router B, but it cannot determine which remote server sent a packet, which remote user is destined to receive the packet, or what application the packet represents. Likewise, the MAC-layer probe can monitor traffic flows between the local server and Router A, but it cannot see beyond the segment to specifically identify the remote client that accesses the local server.

RMON2 MIB

The RMON2 Working Group began its efforts in 1994 with the goal of enhancing the existing RMON specification beyond layers 1 and 2 to provide history and statistics for both the network and application layers (Figure 11). RMON2 keeps track of network usage patterns by monitoring end-to-end traffic flows at the network layer. In addition, RMON2 provides information about the amount of bandwidth consumed by individual applications, a critical factor when troubleshooting corporate intranets.

[Figure 11 lists the protocol stack (application, presentation, session, transport, network, data link); RMON covers the lower layers, while RMON2 extends monitoring to the network layer and above.]
Figure 11. Layers Supported by RMON and RMON2

By monitoring traffic at the higher protocol layers, RMON2 is able to provide the information that a network manager needs to get an intranet or enterprise view of network traffic flows. RMON2 delivers the following capabilities:

• RMON2 provides traffic statistics, Host, Matrix, and TopN tables for both the network and application layers. This information is crucial for determining how enterprise traffic patterns are evolving, and for ensuring that users and resources are placed in the correct location to optimize performance and reduce costs.
• With RMON2, the network manager can archive the history of any counter on the system. This allows the gathering of protocol-specific information about traffic between communicating pairs of systems across an intranet.
• RMON2 enhances the packet filtering and packet capture capabilities of RMON to support more flexible and efficient filters for higher-layer protocols.
• RMON2 supports address translation by creating a binding between MAC-layer addresses and network-layer addresses. This capability is especially useful for node discovery, node identification, and management applications that create topology maps.
• RMON2 enhances interoperability by defining a standard that allows one vendor’s management application to remotely configure another vendor’s RMON probe.
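The network-layer Matrix and TopN capabilities above amount to accumulating traffic per conversation. A minimal sketch, with invented addresses, applications, and byte counts (a real probe classifies live packets rather than tuples):

```python
from collections import Counter

# Sketch of the conversation matrix an RMON2 probe accumulates:
# traffic keyed by (source, destination, application). All values
# here are invented for illustration.

def build_matrix(packets):
    matrix = Counter()
    for src, dst, app, nbytes in packets:
        matrix[(src, dst, app)] += nbytes
    return matrix

packets = [
    ("10.1.1.5", "10.2.0.9", "http", 1500),
    ("10.1.1.5", "10.2.0.9", "http", 900),
    ("10.1.1.7", "10.2.0.9", "ftp", 4000),
]
matrix = build_matrix(packets)

# TopN-style report: heaviest conversations first
for (src, dst, app), total in matrix.most_common(2):
    print(f"{src} -> {dst} [{app}]: {total} bytes")
```

This is exactly the view SNMP alone cannot provide: which clients are talking to which servers, and with which applications, across the intranet.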

Monitoring LAN Traffic with RMON and RMON2
To proactively manage a network with RMON, network managers must first determine which segments are critical to the successful operation of their network. Typically these include campus backbones, key workgroups, switch-to-switch links, and segments providing access to application servers and server farms. Network managers should attempt to achieve full RMON/RMON2 instrumentation on each of the key segments in their network environment, equipping the most critical areas first.

It is essential that network managers develop a strategy for cost-effective deployment of probes throughout the computing environment. The strategy should ensure that RMON/RMON2 statistics and traffic flow information are collected and maintained at the appropriate points in the network infrastructure. This collection can take place at the adapter card; in hubs, switches, and routers; and in stand-alone probes. Client applications that execute on dedicated management stations, Windows stations, and Web servers interpret the collected data and present it to the network management team. If the probes are deployed effectively, network managers should be able to achieve some level of monitoring on every shared media LAN, every switched port, and every virtual LAN (VLAN), providing the information they need to support the emerging intranet model.

RMON and RMON2

Today, both RMON and RMON2 are relatively new technologies that are being deployed at the core of the network. As the deployment of probes becomes more universal, RMON and RMON2 will function in different parts of the network infrastructure. RMON probes will be placed at the edge of the network where users “plug in” to the corporate intranet. RMON probes are excellent for the workgroup environment, where most physical errors occur, utilization statistics need to be gathered, and segmentation issues need to be resolved. On the other hand, RMON2 probes will typically be placed at selected core locations in the network, where they are positioned to observe network- and application-layer flows for the entire intranet.

Monitoring Switched Environments

In a switched environment, a packet is forwarded only to the specific port required to reach the destination. This means that many packets will bypass a probe no matter where it is situated. Network managers typically employ one of the following techniques to provide RMON management in switched environments: (a) deploy only a limited number of the RMON groups on each switched interface, (b) implement a statistical sampling technique, or (c) utilize a roving analysis port to allow full coverage on a selected port.
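Technique (b), statistical sampling, can be sketched briefly. The sampling rate, frame sizes, and function below are illustrative only: the switch counts roughly 1 in N frames and scales the result, trading exactness for a workload light enough to run on every port.

```python
import random

# Sketch of statistical sampling on a switched port: count 1 in N
# frames and scale the estimate back up. N, frame sizes, and the
# seed are invented for illustration.

def sampled_octets(frame_sizes, n, seed=42):
    rng = random.Random(seed)
    sampled = sum(size for size in frame_sizes if rng.randrange(n) == 0)
    return sampled * n               # scale the sample back up

frames = [64, 1500, 512] * 1000      # 3,000 frames on the port
estimate = sampled_octets(frames, n=10)
actual = sum(frames)
print(f"actual={actual} estimate={estimate}")
```

With enough samples the estimate tracks the true count closely, which is why sampling is acceptable for utilization trending even though it cannot support packet capture.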

[Figure 12 shows desktop agents gathering RMON data and a switch/hub agent gathering statistics, with a dRMON collector presenting the network management station with what appears to be a single agent holding full RMON statistics.]
Figure 12. Desktop dRMON

3Com’s Transcend® dRMON Edge Monitor System, or desktop dRMON, offers an option for network managers who want to achieve full-time, comprehensive RMON support in high-speed and switched environments (Figure 12). Desktop dRMON leverages the processing power of the desktop and the end-station NIC to assist in processor-intensive RMON data collection. dRMON lowers the total cost of ownership by affordably extending full nine-group RMON network management capability to switched Ethernet and shared/switched Fast Ethernet segments. It is important to note that dRMON must be part of an enterprise-wide solution, since it works in conjunction with embedded agents and stand-alone probes to monitor the entire intranet.

Monitoring WAN Environments with RMON2
Over the past few years, network managers have become increasingly interested in monitoring traffic across their WAN links. Their objectives are more than simply to troubleshoot traffic problems; they are seeking to maximize their organization's investment by determining how effectively they are using expensive WAN bandwidth. Over-utilization of WAN links can lead to increased errors and poor performance as congestion causes data to be dropped or delayed, while low utilization means that the organization is paying for bandwidth that it is not using. The RMON effort has focused on developing standards for LAN technologies such as Ethernet and Token Ring. There are no link-level monitoring standards for WAN probes. This means that WAN-monitoring tools are based on proprietary technology and are not designed to interoperate with probes from other vendors. However, proprietary WAN probes can be extremely useful for

quality assurance and to verify that carriers are meeting their contractual obligations to provide an agreed-upon level of service. Many of these probes provide user interfaces that plot link-level errors and bandwidth utilization in easy-to-understand graphs. In addition, many of these monitoring tools have the ability to track and store information about end-to-end conversations based on IP addresses.
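The utilization figure such probes graph is simple arithmetic over interface byte counters. The sketch below illustrates the calculation; the function name, counter values, and link speed are invented for illustration and do not come from any particular probe's API.

```python
# Hypothetical sketch: estimating WAN link utilization from two byte-counter
# samples, the same arithmetic a WAN probe performs internally.

def utilization_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, link_bps: int) -> float:
    """Percent of link capacity used between two counter samples."""
    bits_sent = (octets_t1 - octets_t0) * 8
    return 100.0 * bits_sent / (link_bps * interval_s)

# A 64 Kbps circuit that carried 384,000 octets in a 60-second window:
u = utilization_pct(0, 384_000, 60.0, 64_000)
print(f"{u:.0f}% utilized")  # 384000 * 8 / (64000 * 60) = 80%
```

Tracked over time, this single number distinguishes an over-subscribed link (persistently high values) from one the organization is paying for but not using.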





Figure 13. Monitoring WAN Links with RMON2

RMON2 is an excellent tool for monitoring traffic flows across WAN links. It allows network managers to tune their networks based on application utilization and throughput rather than link-level technology specifications. In Figure 13, an RMON2 probe is placed on a shared LAN segment over which traffic travels to and from other end points across a WAN. In this location, the probe can see all traffic entering or exiting the WAN and can provide higher-layer protocol analysis across the entire intranet. 3Com's Transcend Traffix™ Manager client application lets network managers see all traffic flowing throughout the enterprise, identifying communicating systems, protocols and applications in use, and traffic patterns for the entire intranet. Traffix Manager allows trends to be identified over time, resulting in efficient resource planning and allowing problems to be solved before they occur.

Optimizing LAN Performance
In today’s growing bandwidth environments, network managers are designing their LAN infrastructures around switching solutions. Enabled by high-speed, ASIC-based forwarding engines and large address caches, switches deliver far more bandwidth than routers do. They use simple MAC addresses or VLAN ID information to forward traffic at nonblocking wire speeds. They offer lower cost per switched segment and port densities ranging from tens to hundreds of switched ports. This allows network managers to reduce the number of users per segment and to provide dedicated switched ports if needed. In addition, switches create a single logical LAN,

reducing the costs associated with adds, moves, and changes. Switching requirements are different at the edge and core of the network (Figure 14).

Figure 14. Edge and Core of a Campus Network

Switching at the Edge of the LAN
At the edge of a campus network (typically the workgroup or desktop), a shortage of network capacity coupled with a proliferation of broadcast and multicast traffic creates a significant network challenge. The deployment of high-performance workstations and servers along with bandwidth-hungry intranet applications aggravates this challenge. Edge switches must provide simple, plug-and-play connectivity that is economical and flexible enough to support any-to-any traffic and handle future growth, and that offers control over quality of service. At the edge of the network, bandwidth management can be achieved with the deployment of simple layer 2 switches (also known as Boundary Switches) that provide plug-and-play bandwidth and support high-speed interfaces. Edge switches provide the economical connectivity that network managers seek, since they offer a low cost per port and have the flexibility to handle future growth. Network managers can effectively monitor traffic at the edge of their networks with SmartAgent® intelligent agents and distributed RMON (dRMON). SmartAgent technology maximizes automated network management by identifying system faults and taking corrective

action. dRMON leverages the available unused processing power at the end station to collect RMON data and provide full RMON management that a switch could not provide alone. Finally, PACE™ technology allows users to cost-effectively run multimedia applications to the desktop without requiring forklift upgrades to their existing edge infrastructure. PACE is an enhancement to Ethernet switching technology; it delivers optimized multimedia support by transforming Ethernet and Fast Ethernet into a high-bandwidth, deterministic technology with built-in traffic prioritization.

Intelligent Switching at the LAN Core
The intranet environment entails different requirements for the network core (typically the data center), which provides services to a community of computing interests. Connectivity at the core must provide high port densities to scale network growth, and must be extremely resilient. Since the core is a control point for VLANs, network segmentation, and network management, core switches must be scalable and able to efficiently isolate traffic and define broadcast domains. Core switches must provide enough bandwidth to enable users at the edge to access resources across the intranet, even during periods of peak demand, without compromising the network’s performance. At the core, intelligent switches (also known as High-Function Switches) provide the services that are required to support a large switched environment. These switches must provide sufficient bandwidth to support the demands of the edge, high port densities to scale network growth, resiliency to furnish connectivity for the entire organization, and a rich set of bandwidth management features. Bandwidth management features provide the necessary traffic control at the core, allowing network administrators to improve network performance while enforcing traffic flows and other network policies. Traditional switches are unable to efficiently distribute and control bandwidth across the LAN because they are either based on ASIC technology and lack the flexibility required to deliver the complex functionality associated with bandwidth management, or they are general-purpose, processor-based, and slow in performance when high functionality is invoked. Intelligent switches, by contrast, employ an ASIC-plus-RISC architecture specifically designed

to deliver the control and performance features that are not available from conventional switches or routers. The bandwidth management features in intelligent switches allow network managers to control broadcast traffic, allocate bandwidth more efficiently, provide security within switched network environments, and preserve existing layer 3 structures. These features include the following:
• Some degree of routing functionality will always be required in a large switched environment to isolate traffic and define broadcast domains. The "routing-over-switching" model employed by intelligent switches provides layer 3 routing without requiring an external router. When traffic must be switched, it is switched using enhanced ASIC technology. When traffic must traverse a routing domain, it is preprocessed by the ASIC and then forwarded by the built-in RISC-based routing engine.
• Policy-based VLANs permit users to be grouped logically, independent of their physical network location. This lets network managers retain many of the broadcast containment and security benefits of routing. Policy-based VLANs also reduce the costly administration associated with adds/moves/changes by allowing any switched port to be part of a common VLAN group.
• The implementation of SmartAgent intelligent agents, embedded RMON probes, and a Roving Analysis Port (RAP) feature eases the monitoring and management of a switched network environment. SmartAgent software automatically gathers and correlates information from all LAN segments attached to the device. Embedded probes support the standard RMON management groups. A RAP functions as an administrative "wiretap," allowing a probe to be logically attached to any port on a switch in order to collect detailed traffic information without being physically attached to that segment.
• AutoCast VLANs allow switches to efficiently forward IP Multicast traffic. This feature allows the switch to listen to Internet Group Management Protocol (RFC 1112) messages so that multicast packets are forwarded only to ports with registered users rather than being flooded to all switched ports.
• Broadcast firewalls protect against broadcast storms using filters or user-defined throttle settings to limit broadcast propagation to a certain rate.
• User-defined filters allow network managers to apply filters to packets based on any packet attribute, including port, group of ports, or group of MAC addresses, for optimal traffic management.

Fast IP for 3D Networking
While intelligent switches are able to satisfy the performance needs of today’s networks, the increasing use of bandwidth-intensive intranet applications is placing even greater demands on available pipelines and increasing traffic between VLANs. When coupled with the loss of server locality, traffic flows are quite unpredictable and the pressure on inter-VLAN forwarding engines is greater than ever. This problem will only be exacerbated by next-generation network fabrics, Gigabit Ethernet (1000 Mbps), and ATM at OC-12 speeds (622 Mbps). As a result, IP switching is rapidly emerging as the solution to meet the demands of gigabit networks. As networks continue to scale to larger size and greater complexity, a third dimension will be needed in addition to the traditional dimensions of speed and distance. In addition to capacity and reach, networks will have to be able to implement policies that explicitly address the needs of the ever-expanding range of services. These policies must address issues such as privacy and authentication as well as quality of service guarantees. The major challenge of tomorrow’s networks will be to add a policy dimension without compromising performance or letting complexity get out of control. 3Com has developed Fast IP to scale performance in today’s LAN networks and to lay the foundation for 3D networking. For communication between two subnets, traditional architectures have used a layer 3 path in order to control broadcast domains and enable the use of security firewalls. Fast IP uses this same controlled layer 3 path for inter-subnet communication. But once the communication is established, Fast IP desktops and servers look for a faster, lower latency, layer 2 path. If an end-to-end layer 2 path is discovered, the communication automatically moves over to the faster path. If no layer 2 path is discovered, communication continues undisturbed over the original layer 3 path. 
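The Fast IP decision just described can be reduced to a few lines of logic. This is an illustrative sketch, not 3Com's implementation: `discover_l2_path` stands in for the query a Fast IP end station would issue to look for an end-to-end switched path, and is entirely hypothetical.

```python
# Illustrative sketch of the Fast IP path decision described above.
# discover_l2_path is a hypothetical stand-in for the end station's
# layer 2 path discovery; it returns a path or None.

def choose_path(l3_path, discover_l2_path):
    """Start on the routed path; cut over only if a layer 2 path exists."""
    active = l3_path                      # session always begins via the router
    l2_path = discover_l2_path()          # probe for an end-to-end switched path
    if l2_path is not None:
        active = l2_path                  # lower-latency path found: cut over
    return active                         # otherwise stay on the layer 3 path

print(choose_path("routed", lambda: None))        # no L2 path: stays "routed"
print(choose_path("routed", lambda: "switched"))  # L2 path found: "switched"
```

The key property is that the fallback is implicit: if discovery fails, nothing changes and the session continues over the controlled layer 3 path.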
Fast IP gives desktops and servers an active role in requesting the services they need, eliminating the guesswork of competing solutions. Fast IP helps keep the Boundary Switches in the wiring closet simple and cost effective, and reduces the number of router hops needed across the enterprise. In fact, it offers the performance of switching with the control of routing. For more information, see the 3Com paper "Fast IP: The Foundation for 3D Networking."

Optimizing WAN Performance
WANs provide the communications pathway between dispersed geographic sites in a corporate intranet. As corporate intranets become central to an organization's success, the reliability and scalability of its WAN links will determine whether the intranet can effectively support user demands.

WANs Are Different from LANs
WAN environments are very different from LAN environments (Table 1). The design criteria for LANs, where bandwidth is readily available and inexpensive and raw performance dominates, are very different from those for WANs, where the goal is to conserve scarce and expensive WAN bandwidth. Because the fundamental issues are different, an entirely different set of solutions is required.

Table 1. Differences Between LANs and WANs

LAN                         WAN
Switching is dominant       Routing is dominant
Users at same site          Geographically dispersed
Private cable plant         Public telco facilities
Equipment costs dominate    Line costs dominate
High speed                  Low speed
Plentiful bandwidth         Limited bandwidth
Inexpensive bandwidth       Expensive bandwidth
Fast response times         Slower response times

The WAN Is the Network Bottleneck
In a corporate intranet, WAN interfaces create network bottlenecks (Figure 15). A typical 64 Kbps WAN circuit provides 1/160 the bandwidth of a 10 Mbps Ethernet link. When interfacing a

LAN to a WAN, the primary issue isn't raw speeds and feeds—it's how thrifty the interface device is at using the limited bandwidth of the WAN circuit to support application requirements.

Figure 15. WAN Is the Network Bottleneck

The internetworking device interfacing the LAN to the WAN is responsible for managing access to scarce WAN bandwidth. It must keep unnecessary traffic off the WAN, reduce the amount of network protocol traffic related to overhead, provide features that help manage the allocation and use of WAN bandwidth, and provide cost-effective methods of providing temporary additional bandwidth when it is needed to overcome congestion. The following are key design issues for WANs:
• WAN bandwidth is precious—there is less of it than on a LAN and it is expensive.
• There are tremendous bandwidth mismatches when a LAN interfaces to a WAN.

Router Software Features Preserve WAN Bandwidth
Routers provide the interface between the LAN and the WAN. Since WAN bandwidth is a scarce and expensive resource, the router must keep unnecessary LAN traffic—including broadcast traffic, traffic from unsupported protocols, and traffic destined for unknown networks—from crossing the WAN link. Since routers examine every packet, they are excellent devices for controlling, queuing, and prioritizing traffic across the LAN/WAN interface. Routers also offer access to a wide variety of WAN technologies, allowing network managers to select the best value for their networking needs. Finally, routers mark the edges of a network, limiting problems such as misconfigurations, chatty hosts, and equipment failures to the area in which they occur and preventing them from spreading across the intranet. The sections that follow describe a number of router-based software features that can play a significant role in the efficient use of WAN bandwidth.

Link State Routing Protocols

Routers implement dynamic routing protocols to exchange information with other routers and keep their routing tables up to date. To conserve WAN bandwidth, network managers should consider selecting a routing protocol that transmits the least amount of routing update information over WAN links. There are two classes of routing protocols—distance-vector and link-state. Distance-vector routing protocols periodically transmit their entire routing table to each of their neighbors, even if a topology change has not taken place. In contrast, link-state routing protocols transmit smaller update packets to maintain a link or when an actual topology change has taken place. As a result, link-state routing protocols, though more computationally intensive, consume much less bandwidth than distance-vector routing protocols. In addition to reduced routing protocol traffic, link-state routing protocols such as Open Shortest Path First (OSPF) offer several key benefits over distance-vector routing protocols.
• Link-state protocols converge in a single iteration, which means that when a network link fails, routes are recalculated and traffic can be forwarded much faster than if a distance-vector routing protocol were deployed. Distance-vector routing protocols have to employ features such as counting to infinity, split horizon, poison reverse, and triggered updates to speed up convergence.
• Link-state protocols support hierarchical routing, which means that an organization's network can be divided into several individual routing domains. A topology change in one domain does not cause a major recalculation in the other areas, which helps to increase route stability. Also, hierarchical routing reduces the amount of routing information, so that routers need less memory and consume less bandwidth as they transmit routing updates.

Demand Circuits
Demand (dial) circuits play an important role in reducing WAN network costs and providing resilient backup lines. Dial lines can provide bandwidth when it is needed and can be released when they are no longer required.

Dial-on-demand (DOD) is an economical way to use phone lines when communicating between routers across a WAN. A dial-up link operating in the DOD mode will be brought up and down depending on the traffic pattern over that link. Whenever there is demand, the link will either be brought up if it is down, or held in the up state if it is already up. When the traffic drops off, the link is brought down after a user-configurable idle timer expires. By carefully controlling the traffic over that link, network managers can obtain substantial cost savings when using DOD. When a primary WAN connection begins to experience congestion because of increased traffic over that line and the congestion persists for a user-specified period of time, the router can automatically bring up an associated secondary dial-up line to provide additional bandwidth. This feature is called bandwidth-on-demand (BOD). When a router detects a failure of the primary WAN connection and the failure persists for a user-specified period of time, the router can automatically dial a backup connection. This feature is known as disaster recovery.
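The DOD behavior described above is essentially a small state machine driven by traffic and an idle timer. The sketch below illustrates it; the class name and the 60-second timeout are invented examples of the user-configurable idle timer.

```python
# Minimal sketch of dial-on-demand link behavior as described above.
# The 60-second value is an example of the user-configurable idle timer.

class DialOnDemandLink:
    IDLE_TIMEOUT = 60.0  # seconds without traffic before the line drops

    def __init__(self):
        self.up = False
        self.last_traffic = 0.0

    def on_packet(self, now: float):
        self.up = True              # demand: bring the link up, or hold it up
        self.last_traffic = now

    def tick(self, now: float):
        # Drop the line once the idle timer expires, releasing the circuit
        # (and the per-minute charges that go with it).
        if self.up and now - self.last_traffic >= self.IDLE_TIMEOUT:
            self.up = False

link = DialOnDemandLink()
link.on_packet(0.0)
link.tick(30.0); print(link.up)   # True: traffic 30 s ago, timer not expired
link.tick(61.0); print(link.up)   # False: idle timer fired, line released
```

BOD and disaster recovery follow the same pattern with different triggers: sustained congestion on the primary line brings up a secondary circuit, and sustained failure of the primary dials a backup.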

Compression
An obvious way to squeeze more bandwidth out of a narrow WAN link is to use data compression. Compression permits larger volumes of traffic to be sent across a narrow WAN pipe. In an intranet environment, two fundamental types of compression are typically employed:
• History-based compression looks for repetitive data patterns across multiple packets and replaces them with shorter codes. The sending and receiving ends both build up a dictionary, then encode and decode the data in the packet according to the dictionary. Because the history information is transferred along with compressed data, the sending side must be assured that the receiving side reliably gets the data. As a result, history-based compression can operate only over a reliable data link.
• Per-packet compression looks for repetitive patterns within each packet and replaces them with shorter codes. Because the sending and receiving ends do not preserve the history between packets, per-packet compression does not need a reliable data link.
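The trade-off between the two schemes can be demonstrated with Python's standard zlib module: a stream compressor that persists across packets models history-based compression, while calling the compressor fresh on each packet models per-packet compression. The packet contents below are an invented example.

```python
import zlib

# Fifty packets with heavy inter-packet redundancy (an invented example of
# the repetitive traffic WAN compression exploits).
packets = [b"GET /index.html HTTP/1.0\r\nHost: intranet\r\n\r\n"] * 50

# Per-packet: no history survives between packets, so each packet costs
# roughly the same regardless of how often its contents repeat.
per_packet_total = sum(len(zlib.compress(p)) for p in packets)

# History-based: one compression context spans all packets, so the shared
# dictionary makes repeated packets almost free. Both ends must see every
# byte in order, hence the reliable data link requirement described above.
co = zlib.compressobj()
history_total = sum(len(co.compress(p)) for p in packets) + len(co.flush())

print(per_packet_total, history_total)  # history-based total is far smaller
```

The size gap illustrates why history-based compression is preferred where the link layer can guarantee delivery, and per-packet compression where it cannot.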

Bandwidth Aggregation
By its nature, data communications traffic is bursty. The network may be relatively quiet, then there is a burst of activity (for example the transmission of a Web page from a server to a

browser), then the network becomes quiet again. A major challenge facing network designers is how to size WAN pipes so that they can handle bursty traffic. If the lines are sized too large, the organization will be paying for bandwidth that it may never use. If the lines are sized too small, the WAN links may experience long transmission queues resulting in increased latency, unnecessarily retransmitted frames, dropped packets, slow response times, session timeouts, and poor performance.

Figure 16. Bandwidth Aggregation

Bandwidth aggregation builds on Multilink Point-to-Point Protocol (PPP) (RFC 1717) to provide the network administrator with tremendous flexibility in defining link speeds (Figure 16). Multilink PPP defines a method for splitting, recombining, and sequencing datagrams across multiple transmission paths. It was originally motivated by the desire to exploit multiple bearer channels in ISDN, but is equally applicable to any situation in which multiple PPP links connect two systems, including combinations of asynchronous or leased lines. Bandwidth aggregation should be viewed as an extension to BOD that is limited only by the underlying available dial resources. Threshold mechanisms can be configured to allow the logical pipe size of the WAN connection to change according to load. For each WAN port, WAN bandwidth is provided by a virtual pipe of underlying serial link path resources that can be leased

lines only, dial-up lines only, or a combination of leased and dial-up lines. This virtual bundle can expand or contract to meet ever-changing network bandwidth requirements.
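The threshold mechanism can be sketched as follows. This is a hypothetical illustration: the link list, the 80 percent add threshold, and the 40 percent drop threshold are invented configuration values, not defaults of any product.

```python
# Hypothetical sketch of bandwidth-aggregation thresholds: links join the
# Multilink PPP bundle as load rises and are released as it falls.
LINKS_BPS = [64_000, 64_000, 28_800]   # e.g. two leased lines plus POTS dial-up
ADD_AT, DROP_AT = 0.80, 0.40           # illustrative utilization thresholds

def resize_bundle(active: int, offered_bps: float) -> int:
    """Return how many links the bundle should use for the offered load."""
    capacity = sum(LINKS_BPS[:active])
    if active < len(LINKS_BPS) and offered_bps > ADD_AT * capacity:
        return active + 1              # expand the logical pipe
    if active > 1 and offered_bps < DROP_AT * sum(LINKS_BPS[:active - 1]):
        return active - 1              # contract it, releasing a dial line
    return active

print(resize_bundle(1, 60_000))   # 60 Kbps > 80% of 64 Kbps: grow to 2 links
print(resize_bundle(3, 10_000))   # load collapsed: shrink back to 2 links
```

Multilink PPP's sequencing then splits and recombines datagrams across whatever set of links is currently active, so the change in pipe size is invisible to the traffic.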

Data Prioritization
In a congested network, high-priority traffic may not get across the WAN as fast as the network administrator would like or specific applications require. It is often necessary to combine time-sensitive traffic (such as a terminal session) with batch traffic (a file transfer) over the same WAN interface. The first step in solving this problem is to get more bandwidth via compression, and then figure out how to efficiently share the limited amount of available bandwidth. Prioritization provides a network manager with the flexibility to distinguish between time-sensitive traffic and non–time-sensitive traffic and to give the time-sensitive traffic higher priority in the WAN transmission queue. A typical prioritization scheme assigns an administratively defined priority to each packet, and then forwards the packet to a high-, medium-, or low-priority queue. Network-critical traffic such as topology changes is automatically assigned an urgent priority, which takes precedence over all other traffic. The router forwards all packets in the urgent-priority queue before packets in the high-, medium-, and low-priority queues. After all packets in the urgent-priority queue are forwarded, the router forwards packets from the other queues across the WAN interface in an order controlled by a user-configurable parameter.
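The four-queue scheme can be sketched in a few lines. This is an illustration, not a router implementation: the remaining queues are simply served high-to-low here, standing in for the user-configurable ordering.

```python
from collections import deque

# Sketch of the four-queue prioritization scheme described above. Urgent
# traffic always drains first; the other three queues are served strictly
# high-to-low here as a stand-in for the user-configurable ordering.
queues = {p: deque() for p in ("urgent", "high", "medium", "low")}

def enqueue(packet: dict, priority: str = "low"):
    # Topology changes and other network-critical traffic are forced urgent.
    if packet.get("network_critical"):
        priority = "urgent"
    queues[priority].append(packet)

def next_packet():
    """Return the next packet for the WAN interface, or None if all empty."""
    for p in ("urgent", "high", "medium", "low"):
        if queues[p]:
            return queues[p].popleft()
    return None

enqueue({"id": 1}, "low")
enqueue({"id": 2, "network_critical": True})
print(next_packet()["id"])  # 2: the urgent packet leaves first
```

Note that strict priority like this can starve the low queue under sustained high-priority load, which is exactly the problem protocol reservation (next section) addresses.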

Protocol Reservation
Certain mission-critical applications may require a guaranteed percentage of WAN bandwidth. Protocol reservation lets the network administrator guarantee that a portion of a WAN interface’s bandwidth will be reserved for a specific protocol or application (Figure 17). Protocol reservation provides improved service for interactive and transaction-oriented network applications through a very different scheme than data prioritization. A minimum percentage of a link’s bandwidth is statically configured for each protocol that operates over the WAN link. For example, if the network manager reserves 10 percent of the pipe for Hypertext Transfer Protocol (HTTP) traffic, it will be there—even if the pipe is otherwise full. If the HTTP traffic falls below 10 percent, the reserved bandwidth is made available for use by other protocols and applications.

Figure 17. Protocol Reservation

Session Fairness
Session fairness is an enhancement to the protocol reservation scheme; it ensures that traffic is forwarded evenly from all users so that no single user is allowed to monopolize WAN bandwidth (Figure 18). Session fairness can be an extremely powerful tool for network managers who need to specify average response times for their user community.

Figure 18. Session Fairness

In a typical example, a WAN link provides Internet access for an organization. Without session fairness, a network manager cannot prevent HTTP traffic from a single user from dominating the entire capacity of the HTTP protocol reservation and blocking other employees needing Internet access. With session fairness, if HTTP is allocated 25 percent of the WAN capacity and there are 15 concurrent HTTP sessions, no single user is allowed to consume more than 1/15 of the bandwidth reserved for HTTP. If the number of sessions changes so that there are only 10

concurrent HTTP sessions, then no single user is allowed to consume more than 1/10 of the bandwidth reserved for HTTP.
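The per-session ceiling is simple arithmetic over the protocol reservation. The sketch below illustrates it; the function name and the 1.536 Mbps T1 payload rate are invented for the example.

```python
# The session-fairness ceiling described above: each session may use at most
# 1/n of the bandwidth reserved for its protocol, where n is the number of
# concurrent sessions. Link speed here is an illustrative T1 payload rate.

def session_cap_bps(link_bps: int, reserved_frac: float, sessions: int) -> float:
    """Per-session bandwidth ceiling within a protocol reservation."""
    return (link_bps * reserved_frac) / sessions

print(session_cap_bps(1_536_000, 0.25, 15))  # 25600.0 bps per HTTP session
print(session_cap_bps(1_536_000, 0.25, 10))  # cap rises as sessions finish
```

Because the cap is recomputed as sessions come and go, bandwidth freed by a finished session is redistributed among the remaining ones rather than captured by whoever asks first.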

Packet Ranking
Packet ranking is an enhancement to the data prioritization and protocol reservation schemes; it allows network managers to specifically identify “pressing” priority traffic. This feature implements a critical delivery mechanism by expediting user-defined traffic through the router’s internal buffers and placing it at the head of the interface's transmission queue. Packet ranking reduces latency, ensures the timely delivery of customer-prioritized traffic, provides an incrementally improved method for managing traffic entering the WAN’s transmission queue, and supports enhanced data prioritization within a protocol reservation scheme.

Multicast Technologies
One of the major problems with tuning applications across a WAN is that the same information may have to be distributed to many different sites, requiring the same huge files to be sent over the same oversubscribed pipe again and again. Multicasting allows the file to be distributed from the source just once, and then replicated at forks in a multicast delivery "tree." Multicasting is rapidly shifting from the laboratory to practical applications as more and more users discover its benefits.

Figure 19. Multicast Delivery Tree

Multicast routing protocols build a shortest-path distribution tree from the source station to all listeners (multicast group members). The source station is placed at the root of the distribution

tree and packets flow from the root across the branches of the tree (Figure 19). The source station injects one message into the distribution tree and the packet flows across the tree and is replicated at forks in the tree until it reaches all group members. If there are no downstream listeners on a particular branch, the packet is not forwarded across that branch. Multicast technologies reduce bandwidth consumption across narrow WAN pipes in several ways:
• The source station transmits only a single packet, not one packet for each listener.
• A packet follows the shortest path across the distribution tree from the source station to each listener—consuming the minimum amount of network bandwidth.
• A packet traverses a network link only once to reach all downstream receivers.
• Packets are sent to a region of the network only if receivers are present.
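The replicate-at-forks, prune-empty-branches behavior can be sketched directly. The tree shape and listener set below are invented examples; real multicast routers derive both from the routing protocol and group membership reports.

```python
# Sketch of multicast delivery-tree forwarding: replicate at forks, but
# forward down a branch only if listeners exist somewhere below it.
# TREE and LISTENERS are invented examples.
TREE = {"root": ["r1", "r2"], "r1": ["lan_a"], "r2": ["lan_b", "lan_c"]}
LISTENERS = {"lan_a", "lan_c"}          # registered group members

def has_listeners(node: str) -> bool:
    """True if this node or anything downstream of it is a group member."""
    return node in LISTENERS or any(has_listeners(c) for c in TREE.get(node, []))

def forward(node: str, copies=None) -> list:
    """Return the branches that receive a replica of the source's one packet."""
    copies = [] if copies is None else copies
    for child in TREE.get(node, []):
        if has_listeners(child):        # prune branches with no receivers
            copies.append(child)        # one replica per taken branch
            forward(child, copies)
    return copies

print(forward("root"))  # lan_b is pruned; lan_a and lan_c each get one copy
```

The source still injects exactly one packet; every replica is created by the network at a fork, which is what keeps each WAN link to a single traversal.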

Resource Reservation Protocol (RSVP)
The successful deployment of real-time applications such as voice and video may require that the network provide a specific quality of service (QoS) for different applications. A QoS request is described by a flow specification that stipulates the maximum frame transmission rate, the long-term average frame transmission rate, the maximum frame jitter, and the maximum end-to-end delay, along with other performance parameters. If the network is unable to deliver the required QoS (especially over bandwidth-constrained WAN links), certain real-time applications may be doomed to failure. The Resource Reservation Protocol (RSVP) is an Internet draft designed to support QoS flows by placing and maintaining resource reservations across a network. RSVP is receiver-oriented, in that an end system may transmit an RSVP request on behalf of a resident application to request a specific QoS from the network. At each hop along the reverse path back toward the source, the routers register the reservation and attempt to provide the required QoS. If the requested QoS cannot be granted, the RSVP process executing in the router returns an error indication to the host that initiated the request.

Figure 20 illustrates how an RSVP request flows upstream along the branches of the multicast delivery tree and is “merged” with requests from other users. A specific request is required to flow upstream only until it can be merged with another reservation request for the same source.
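The effect of merging is that a router forwards upstream only the largest reservation it has seen for a given source, rather than one request per receiver. The sketch below reduces that to its core, using bandwidth as a stand-in for the full flow specification; the function name and rates are invented for illustration.

```python
# Sketch of RSVP request merging at a router hop: requests from downstream
# receivers for the same source are collapsed into one request sized for the
# largest demand, and only that merged request continues upstream. Using a
# single bandwidth number here simplifies the real multi-parameter flowspec.

def merge_requests(requests_bps):
    """Forward upstream only the largest reservation for this source."""
    return max(requests_bps)

# Three receivers ask for different rates; the router forwards one request.
print(merge_requests([128_000, 64_000, 256_000]))  # 256000
```

Merging is what lets RSVP scale with large multicast groups: the amount of reservation traffic on a branch depends on the tree's shape, not on the number of receivers below it.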


Figure 20. RSVP Requests Flow Upstream and Are Merged with Other Requests

Although RSVP is specifically designed for multicast applications, it also supports resource reservation for unicast applications and point-to-point transmissions. It is important to note that RSVP is a resource reservation protocol; it is not a routing protocol. RSVP is designed to work with current and emerging unicast and multicast routing protocols, but it does not itself implement a routing algorithm or an integrated routing protocol.

Server Access
As intranets flatten information delivery within an organization, network managers need to be concerned about increased WAN traffic to allow users to access remote application servers and popular internal Web sites. There are several solutions that either physically or logically move clients and servers into a closer proximity:
• Specialized communications servers can logically shift remote clients to the central site. These dedicated servers execute protocols that are similar to UNIX X-Windows in that they allow users to create "remote control" sessions over the WAN link. The database client executes on the dedicated server located at the central site, while the remote workstation only sends screen update information over the WAN link. When compared to native SQL transmission, limiting the exchange to screen update information dramatically conserves WAN bandwidth.
• Web browsers and servers can also be used to logically shift remote clients to the central site to permit access to SQL database servers. A Web server at the central site terminates the SQL stream and acts as the gateway for remote users. The Web servers connect to database back-ends through proprietary interfaces or via the standards-based Common Gateway Interface (CGI).
• Popular Web servers and databases can be replicated and distributed during nonpeak hours to remote sites across the corporate intranet. This is expensive, since it requires multiple servers, but it provides local access to popular servers while reducing traffic across WAN links.

Summary
The widespread deployment of intranet applications will result in steadily changing network requirements that will continue to challenge network administrators. In this environment of constant change, network planners must deploy a combination of RMON and RMON2 probes to continuously monitor the traffic flows on their intranets. When bottlenecks and performance limitations are discovered, network managers have several options for overcoming these threats to successful network operation. They can cost-effectively provide additional bandwidth by deploying layer 2 switches at the edge of the LAN. They can provide increased traffic control and packet throughput by deploying intelligent switches with Fast IP at the core. Finally, they can use bandwidth grooming features to eliminate bottlenecks resulting from increased traffic across narrow WAN links. Proactive management of these challenges will become increasingly important as intranets grow to global proportions and become essential to the operation and survival of the business entity.

Appendix A: Remote Network Monitoring MIB (RFC 1757)
The RMON MIB was first published in November 1991 as RFC 1271. The specification focused on Ethernet—providing statistics and monitoring network traffic at the MAC layer. With successful interoperability testing, the RMON MIB advanced to draft standard status in December 1994. After minor revisions employing the MIB conventions for SNMPv2 (while still remaining compatible with SNMPv1), the RMON MIB was reissued in February 1995 as RFC 1757.

The objects in the RMON MIB are arranged into the following functional groups:

Ethernet Statistics Group
Contains counters that are maintained for each monitored Ethernet interface of the probe. Network managers can collect statistics describing the number of packets, the number of octets, the count of broadcast and unicast traffic, packet size distribution, and errors.

History Group
Manages the periodic collection and storing of statistical data from different network interfaces. Sets of statistics can be collected to compare behavior, build baselines, and develop trending information.

Alarms Group
Collects periodic samples from counters in the probe and compares them to user-defined thresholds. If a sample is collected and it exceeds the user-defined threshold, an event is generated. The specific action initiated by each event is defined in the Events group.

Host Group
Maintains statistical information about newly discovered hosts on the network. Information similar to that in the Ethernet Statistics group is maintained on a per-host basis.

HostTopN Group
Permits the preparation of reports that sort hosts based on the value of one of their statistical counters. For example, the administrator can determine the ten nodes generating the highest levels of broadcast traffic.

Matrix Group
Traces conversations between pairs of systems identified by two MAC addresses. Information about traffic volumes and errors is kept for each conversation and in each direction.

Filter Group
Allows packets to be captured with an arbitrary filter expression. Packets that match the filter expression are said to be members of an event stream or “channel.” The channel can be turned on or off, captured for later analysis, and configured to generate events when packets pass through it.

Packet Capture Group
Allows packets that match a filter defined by the Filter group to be stored for later examination. These packets can then be decoded with a packet analyzer that is included as part of the RMON management application.

Events Group
Manages the creation and transmission of events from the probe. It contains the parameters that describe the particular event that is generated when certain conditions are met.
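The threshold mechanism of the Alarms group can be sketched as follows. This is an illustrative model, not the MIB's actual table structure; RFC 1757 additionally supports delta sampling and configurable startup behavior, both omitted here. The key behavior shown is hysteresis: a rising alarm will not fire again until the falling threshold has been crossed.

```python
class RisingFallingAlarm:
    """Illustrative model of an RMON Alarms-group entry: periodic samples
    are compared to user-defined thresholds, and crossing a threshold
    generates an event (the action taken is defined by the Events group)."""

    def __init__(self, rising, falling):
        self.rising = rising
        self.falling = falling
        self.armed_rising = True   # startup behavior simplified here

    def sample(self, value):
        if self.armed_rising and value >= self.rising:
            self.armed_rising = False          # re-arm only after falling
            return "risingAlarm"
        if not self.armed_rising and value <= self.falling:
            self.armed_rising = True
            return "fallingAlarm"
        return None                            # no threshold crossed

alarm = RisingFallingAlarm(rising=1000, falling=200)
print([alarm.sample(v) for v in (500, 1200, 1300, 150, 1100)])
# -> [None, 'risingAlarm', None, 'fallingAlarm', 'risingAlarm']
```

Note that the sample of 1300 generates no second rising event: without hysteresis, a counter hovering near the threshold would flood the management station with events.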

Appendix B: Token Ring Extensions to the Remote Network Monitoring MIB (RFC 1513)
The Token Ring extensions to the Remote Monitoring MIB were published in September 1993 as RFC 1513. The specification focused on adding Token Ring extensions to the existing RMON MIB. This MIB defines Token Ring extensions to the RMON statistics and history groups. In addition, it defines additional functions that are specific to Token Ring—the ring station group, the ring order group, the ring station configuration group, and the source routing statistics group.

The objects in the Token Ring extensions to the RMON MIB are arranged into the following functional groups:

Token Ring Statistics Group
Contains counters that are collected by the probe for each monitored Token Ring interface of the supervised system. The MAC-layer statistics include error reports for the ring (congestion errors, token errors, line errors, etc.) and ring utilization at the MAC layer (drop events, MAC packets, MAC octets, beacon events, etc.). The promiscuous statistics include utilization statistics from data packets collected promiscuously (number of packets, number of octets, count of broadcast and unicast traffic, packet size distribution, etc.) on the monitored interface.

Token Ring History Group
Contains historical utilization and error statistics. Information similar to that in the Token Ring Statistics group (both MAC-layer and promiscuous statistics) is maintained. The storing of this information is managed by the History group defined in RFC 1757.

Token Ring Station Group
Maintains statistical information about each station discovered on the ring. Information similar to that in the Token Ring Statistics group is maintained on a per-station basis (MAC address, station status, last enter time, last exit time, and a large number of error counters).

Token Ring Station Order Group
Provides the order of stations on the monitored ring relative to the RMON probe.

Token Ring Station Config Group
Permits the active management of stations on the ring. Any station on the ring may be removed or may have its configuration information (MAC address, group address, functional address, etc.) uploaded and stored on the probe.

Token Ring Source Routing Group
Contains utilization statistics (all-routes broadcasts, single-route broadcasts, in octets, out octets, through octets, and counters for frames requiring multiple hops) obtained by monitoring source routing information contained in Token Ring packets seen on the ring.
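The view reported by the Station Order group can be sketched as a rotation of the ring. This is a simplified illustration, not the MIB table itself; the function name and MAC strings are hypothetical, and a real probe learns the order from ring polling rather than from a pre-built list.

```python
def ring_order_relative_to_probe(ring, probe_mac):
    """Illustrative sketch of the Token Ring Station Order group: the
    ring is a cycle, and each station's position is reported relative
    to the probe. Assumes probe_mac appears in the ring list."""
    i = ring.index(probe_mac)
    rotated = ring[i:] + ring[:i]
    # 1-based position downstream from the probe
    return {mac: pos + 1 for pos, mac in enumerate(rotated)}

order = ring_order_relative_to_probe(
    ["00:A0:C9:01", "00:A0:C9:02", "00:A0:C9:03", "00:A0:C9:04"],
    probe_mac="00:A0:C9:03")
print(order)
# -> {'00:A0:C9:03': 1, '00:A0:C9:04': 2, '00:A0:C9:01': 3, '00:A0:C9:02': 4}
```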

Appendix C: Remote Network Monitoring MIB Version 2 (RFC 2021)
The RMON2 Working Group began their efforts in 1994 with the goal of enhancing the RMON specification to provide history and statistics for both the network and application layers. RMON2 keeps track of network usage patterns by monitoring end-to-end traffic flows at the network layer. In addition, RMON2 provides information about the amount of bandwidth consumed by individual applications, a critical factor when troubleshooting corporate intranets. After several internet drafts, RFC 2021 was published in January 1997.

The objects in the RMON2 MIB are arranged into the following functional groups:

Protocol Directory Group
Maintains a list of the protocols for which the RMON2 probe can decode and count packets. These protocols represent different network-layer, transport-layer, and higher-layer protocols. This group also permits the addition, deletion, and configuration of entries in this list.

Protocol Distribution Group
Maintains a table that counts the total number of packets and octets the probe has seen for each supported protocol. The information is aggregated for each protocol—not for each host or individual application running on each host. Finer granularity is provided by the Network Layer Host group and the Application Layer Host group.

Address Mapping Group
Maintains a table of MAC address to network address mappings discovered by the probe. This table is extremely useful in node discovery and topology map applications.

Network Layer Host Group
Counts the amount of traffic sent to and from each network address discovered by the probe. This group is similar to the RMON1 Host group, which gathers statistics based on MAC addresses.

Network Layer Matrix Group
Counts the amount of traffic sent between each pair of network addresses discovered by the probe. This group collects connection-oriented statistics in a matrix that counts traffic between each source/destination pair. In addition, this group maintains a TopN table that ranks pairs of hosts based on the number of octets or number of packets exchanged between them.

Application Layer Host Group
Similar to the Network Layer Host group. However, this group counts the amount of traffic, by application, sent to and from each network address discovered by the probe. Statistics are available for each application running on each host.

Application Layer Matrix Group
Similar to the Network Layer Matrix group. However, this group counts the amount of traffic, by application, sent between each pair of network addresses discovered by the probe. In addition, this group maintains a TopN table that ranks pairs of hosts based on the amount of application-layer traffic.

User History Collection Group
Permits users to specify sampling rates so that data can be collected and stored for trend analysis.

Probe Configuration Group
Defines a standard set of parameters that manage the configuration of the probe. The functionality provided by this group supports interoperability by permitting one vendor’s RMON management application to configure another vendor’s RMON probe.
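The Network Layer Matrix group's two behaviors—aggregating traffic per source/destination pair and ranking pairs in a TopN table—can be sketched as follows. This is an illustrative model, not the MIB's table structure; the function names, addresses, and packet tuples are hypothetical.

```python
from collections import Counter

def build_nl_matrix(packets):
    """Illustrative model of the Network Layer Matrix group: aggregate
    octet counts per (source, destination) network-address pair."""
    octets = Counter()
    for src, dst, length in packets:
        octets[(src, dst)] += length
    return octets

def top_n(matrix, n):
    """Model of the group's TopN table: rank host pairs by octet count."""
    return matrix.most_common(n)

packets = [
    ("10.0.0.1", "10.0.0.9", 1500),
    ("10.0.0.2", "10.0.0.9", 64),
    ("10.0.0.1", "10.0.0.9", 1500),
    ("10.0.0.3", "10.0.0.4", 512),
]
print(top_n(build_nl_matrix(packets), 2))
# -> [(('10.0.0.1', '10.0.0.9'), 3000), (('10.0.0.3', '10.0.0.4'), 512)]
```

The TopN ranking is what lets a manager answer questions like "which pairs of hosts are consuming the WAN link?" without pulling the entire matrix across the network.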

Acronyms and Abbreviations

ASIC application-specific integrated circuit
BOD bandwidth on demand
BRI Basic Rate Interface
DOD dial on demand
dRMON distributed RMON
HTML HyperText Markup Language
HTTP Hypertext Transfer Protocol
IETF Internet Engineering Task Force
ISDN Integrated Services Digital Network
LAN local area network
MIB management information base
NIC network interface card
NMS network management station
OSPF Open Shortest Path First
PPP Point-to-Point Protocol
QoS quality of service
RAP Roving Analysis Port
RFC Request for Comment
RMON Remote Monitoring
RSVP Resource Reservation Protocol
SNMP Simple Network Management Protocol
TCP/IP Transmission Control Protocol/Internet Protocol
VLAN virtual LAN
WAN wide area network

For More Information
For in-depth information on issues discussed in this paper, refer to the following 3Com technical publications and white papers.

“TranscendWare® Software: The Intelligence That Powers Transcend Networking.” URL: </nsc/600256.html>

“Evolving Corporate Networks into Intranets: Managing the Growth, Scaling the Performance, Extending the Reach.” URL:<>

“3Com Transcend VLANs: Leveraging Virtual LAN Technology to Make Networking Easier.” URL:<>

“Bandwidth Grooming: 3Com's Solutions for Prioritizing Data,” Connie Knighton. URL:<>

“DynamicAccess™ Features: Redefining the Role of the Network Interface Card,” David Flynn. URL:<>

“Fast IP: The Foundation for 3D Networking,” John Hart. URL:<>

“CoreBuilder™ 2500/6000 High-Function Switches: Delivering Performance and Services for the Network Core,” Bob Gohn and Brendon Howe. URL:<>

“Multimedia Technology Tutorial.” URL:<>

“PACE Solutions Guide.” URL:<>

“Power Grouping™ Technology: Enhanced Performance for Ethernet Workgroups.” URL:<>

“RMON2 Backgrounder.” URL:<>

“Scaling Performance with 3Com Switching.” URL:<>

“Scaling Workgroup Performance: Switched Ethernet and Fast Ethernet,” Dana Christensen and David Flynn. URL:<>

“Using SmartAgent Gauges in LinkBuilder® FMS II and LinkBuilder MSH™ Hubs,” Gordon Hutchison. URL:<>

©1997 3Com Corporation. All rights reserved. 3Com, LinkBuilder, SmartAgent, and Transcend are registered trademarks of 3Com Corporation. CoreBuilder, DynamicAccess, MSH, PACE, Power Grouping, Traffix, and TranscendWare are trademarks of 3Com Corporation. AppleTalk and Macintosh are trademarks of Apple Computer. VINES is a trademark of Banyan Systems. DECnet and VMS are trademarks of Digital Equipment Corp. AS/400 and OS/2 are trademarks of IBM. Windows and Windows NT are trademarks of Microsoft. Mosaic is a trademark of NCSA. IPX is a trademark of Novell. Java is a trademark of Sun Microsystems. UNIX is a trademark of UNIX Laboratories. Other brand and product names may be trademarks or registered trademarks of their respective owners. All specifications are subject to change without notice. 500631-002 5/97
