Chapter 5: Implementation


5.1 Introduction

       The implementation involved first choosing and obtaining the hardware

and then creating the system using available applications and bespoke

scripts. While an old computer could have been used and would have worked

satisfactorily it was decided to buy a new one. Money had already been set

aside for this purpose and a new computer would be more reliable and more

future proof.

5.2 Hardware

       A dedicated PC system unit was bought with a large hard disk and

plenty of memory. No monitor, keyboard or mouse was required since once

the system had been initially set up it would run headless. As it was to be

kept in the room housing the main network cabinet rather than in an office, it

would normally be accessed remotely.

       Because the system was self-contained and merely watched and

gathered information, there was no need to make any changes to the existing

network design. This also meant that the system would not have caused any

ill-effects on the rest of the network should it have gone down for any reason.
5.3 Software

       Red Hat Fedora Linux was installed as the operating system. The

free, open-source applications mrtg and arpwatch were installed and configured for the

College network. The raw data recorded by them, along with that pulled down

from the University of Cambridge, was interrogated by Perl scripts. These

generated succinct daily reports containing information of particular interest.

The raw data was stored in case it was needed later. In addition, Snort was

installed and configured. It provided further data, although this was not used

in the report because of the sheer volume Snort produced.

5.3.1 The scripts

       The Perl coding was split into a number of scripts, run automatically

using daily cron jobs and a shell script. Multiple shorter scripts were chosen

over a single long one for two reasons. Firstly, due to the nature of the data to

be collected one part of the program had to be run at a different time to the

rest. Secondly, multiple scripts made for easier debugging and future

maintenance.

5.3.1.1 Script 1:

       This script was set to run at 11:59pm each night. It interrogated the

/var/log/messages file and pulled out the arpwatch messages generated that

day, placing them in a local file arp_latest.txt. This file was then used by other

scripts in the generation of the next morning’s report.

5.3.1.2 Script 2:

       This and the remaining scripts were all run in order at 9:00am each

morning. This script simply created a file report_yyyymmdd.html,

populating it with basic opening headers, a title and also a link to a traffic

graphs index page.

5.3.1.3 Script 3:

       This downloaded the daily list of top 20 traffic generators for the

College from the University of Cambridge. It then filtered it for those users

generating more than 500MB of traffic, those using certain P2P applications,

and those using certain ports. Those falling into one or more of these

categories were added to the report.
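The filtering carried out by Script 3 can be sketched as follows. The original was written in Perl; this is an illustrative Python reconstruction, and everything except the 500MB limit described above (the field names, the particular P2P applications and port numbers) is an assumption.

```python
# Sketch of the Script 3 filtering logic (originally Perl).
# Only the 500MB threshold comes from the text; the example
# P2P applications and ports are hypothetical.

TRAFFIC_LIMIT_MB = 500
P2P_APPS = {"KaZaA", "eDonkey"}        # hypothetical examples
WATCHED_PORTS = {1214, 4662, 6881}     # hypothetical examples

def flag_entries(entries):
    """Return (host, reasons) pairs for entries that belong in the report.

    Each entry is a dict with 'host', 'mb', 'app' and 'ports' keys.
    An entry is flagged if it exceeds the traffic limit, uses a watched
    P2P application, or uses a watched port.
    """
    flagged = []
    for e in entries:
        reasons = []
        if e["mb"] > TRAFFIC_LIMIT_MB:
            reasons.append("high traffic")
        if e["app"] in P2P_APPS:
            reasons.append("P2P application")
        if WATCHED_PORTS & set(e["ports"]):
            reasons.append("watched port")
        if reasons:
            flagged.append((e["host"], reasons))
    return flagged

entries = [
    {"host": "pc1", "mb": 750, "app": "web", "ports": [80]},
    {"host": "pc2", "mb": 20, "app": "KaZaA", "ports": [1214]},
    {"host": "pc3", "mb": 10, "app": "mail", "ports": [25]},
]
print(flag_entries(entries))
```

A host can match more than one category, so the reasons are accumulated rather than the host being reported once per test.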

5.3.1.4 Script 4:

       This took the arpwatch output files and carried out several tasks. Firstly

it created a separate database of IP addresses and their corresponding MAC

addresses and host names, in a file ipdatabase.html, also adding a link from

the daily report. Secondly it interrogated the arpwatch messages collected

from the previous day by Script 1. From these it collected

data of particular interest and added it to the report – new Ethernet addresses

seen; Ethernet/IP address pairs not seen for six months or more (an unusual

occurrence on the College network and one worth noting); hosts that changed

to a new IP address (a possible indication of an IP address conflict); and

Ethernet mismatches, where the source MAC address and that inside the ARP
packet did not tally (an odd occurrence, possibly indicating suspicious

activity).

5.3.1.5 Script 5:

        This script added the closing HTML tags to the report.
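The arpwatch half of the pipeline, Script 1's log extraction feeding Script 4's categorisation, can be sketched as below. Again this is a Python reconstruction of Perl originals; the sample log lines follow arpwatch's usual report wording ("new station", "flip flop", "ethernet mismatch"), but exact formats vary between versions, so the patterns should be treated as assumptions.

```python
# Sketch of the Script 1 + Script 4 logic (originally Perl):
# pull arpwatch lines out of syslog text, then group them into
# the report categories described above.  Message formats are
# assumed from arpwatch's standard report types.

# Report categories keyed by the arpwatch message prefix.
CATEGORIES = {
    "new station": "new Ethernet addresses seen",
    "flip flop": "hosts that changed to a new IP address",
    "ethernet mismatch": "source/ARP MAC address mismatches",
}

def extract_arpwatch(syslog_text):
    """Mimic Script 1: keep only the arpwatch lines from syslog."""
    return [l for l in syslog_text.splitlines() if "arpwatch:" in l]

def categorise(arpwatch_lines):
    """Mimic part of Script 4: group messages by report category."""
    report = {label: [] for label in CATEGORIES.values()}
    for line in arpwatch_lines:
        msg = line.split("arpwatch:", 1)[1].strip()
        for prefix, label in CATEGORIES.items():
            if msg.startswith(prefix):
                report[label].append(msg)
    return report

log = """\
Mar  1 09:14:01 gw cron: job started
Mar  1 10:02:17 gw arpwatch: new station 10.0.0.42 0:a0:c9:1:2:3
Mar  1 11:30:45 gw arpwatch: flip flop 10.0.0.7 0:11:22:33:44:55 (0:aa:bb:cc:dd:ee)
"""
lines = extract_arpwatch(log)
print(categorise(lines))
```

Splitting the extraction from the categorisation mirrors the division of labour in the report: the extraction had to run just before midnight, while the categorisation ran with the other scripts the next morning.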

5.4 Output

    The output was in the form of a series of password-protected HTML

pages. This format was chosen as being the most convenient way of

presenting and accessing the information. A home page provided links to

three sections: the most recent daily report, the up-to-date IP and MAC

address listing and the traffic graphs.
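The way Scripts 2 and 5 bracket each daily page might look like the following sketch (Python rather than the original Perl). The report_yyyymmdd.html file name and the link to the traffic graphs index come from the text; the exact HTML and the graphs/index.html path are assumptions.

```python
# Sketch of how Scripts 2 and 5 open and close the daily report page.
# File naming follows the text; the HTML itself and the index path
# are hypothetical.
import datetime
import os
import tempfile

def open_report(directory, day=None):
    """Script 2: create report_yyyymmdd.html with opening headers,
    a title and a link to the traffic graphs index page."""
    day = day or datetime.date.today()
    path = os.path.join(directory, f"report_{day:%Y%m%d}.html")
    with open(path, "w") as f:
        f.write("<html><head><title>Daily report</title></head><body>\n")
        f.write(f"<h1>Daily report for {day:%d %B %Y}</h1>\n")
        f.write('<p><a href="graphs/index.html">Traffic graphs</a></p>\n')
    return path

def close_report(path):
    """Script 5: append the closing HTML tags."""
    with open(path, "a") as f:
        f.write("</body></html>\n")

with tempfile.TemporaryDirectory() as d:
    report = open_report(d, datetime.date(2004, 3, 1))
    # ...Scripts 3 and 4 would append their sections here...
    close_report(report)
    content = open(report).read()
    print(content)
```

Keeping the opening and closing in separate scripts lets the content-generating scripts simply append to the file in sequence.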

5.5 Problems encountered

        Few significant problems were encountered during the implementation

phase, although the information on traffic from individual ports in the report

was not as extensive as was at first envisioned when this project was started.

This was partly due to restrictions imposed by the Data Protection Act, which

required careful consideration throughout this project, and partly due to the

compromise between adding information to the report and not allowing it to

become unwieldy and therefore of little practical use. However, once the

system went live, other areas proved to be even more useful than was at first

imagined, as will be discussed later.
5.6 Changes to previous procedures

The report was generated at 9:00am each morning, to ensure that the

latest information from the University would be included. It took only a few

moments to complete and was then available for viewing by the College

Computer Officer, ideally soon afterwards so that any problems and

information reported would be seen quickly. Previous reports were also easily

available. Any action taken depended on what was seen.

5.7 Conclusions

       The implementation phase went extremely well. While some problems

were encountered, these were more than balanced out by areas of the system

that proved more useful than anticipated. The

resulting system was robust and highly usable.
