Intrusion Detection
The two most publicised threats are
 intruders – generally referred to as
  crackers or hackers
 viruses
Classes of intruders
   Masquerader: an individual who is not authorised
    to use the computer and who penetrates a
    system’s access controls to exploit a legitimate
    user’s account
   Misfeasor: a legitimate user who accesses data,
    programs, or resources for which such access is
    not authorised, or who is authorised for such
    access, but misuses his or her privileges.
   Clandestine user: an individual who seizes
    supervisory control of the system and uses this
    control to evade auditing and access controls or
    to suppress audit collection.
   The masquerader is likely to be an outsider; the
    misfeasor generally is an insider; and the
    clandestine user can be either an outsider or an
    insider.
Intrusion Techniques
   Objective of the intruder is to gain access
    to a system or to increase the range of
    privileges accessible on a system.
    – Needs to acquire information that is generally
      protected (such as password files)
   General protection techniques are:
    encryption and access control.
Intrusion detection
 Inevitably, even the best intrusion prevention
  system will fail.
 Hence we need intrusion detection, which
  serves as a second line of defence.
Approaches to intrusion detection
 Statistical anomaly detection
 Rule-based detection
 Auditing (which requires logging)
Approaches to intrusion detection
   Statistical anomaly detection
    – Applying statistical tests to the observed
      behaviour to determine with a high level of
      confidence whether the behaviour is not that of a
      legitimate user (this requires data on legitimate
      user behaviour collected over a period of time)
        • Threshold detection: define a threshold,
          independent of the user, for the frequency of
          occurrence of various events.
        • Profile-based: a profile of the activity of each
          user (or class of users) is developed and used to
          detect changes in the behaviour of individuals.
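The threshold idea can be illustrated with a few lines of shell. This is a sketch only: the "BAD SU" log format is borrowed from the console-message example later in these notes, and the threshold value is arbitrary.

```shell
#!/bin/sh
# Sketch of threshold detection: count "BAD SU" events per user and
# flag any user whose count exceeds a fixed, user-independent threshold.
# The log content below is fabricated for illustration.
THRESHOLD=3
log=$(mktemp)
cat > "$log" <<'EOF'
BAD SU: geekuser ON ttyp4 AT 12:07:20
BAD SU: geekuser ON ttyp4 AT 12:07:30
BAD SU: geekuser ON ttyp4 AT 12:07:40
BAD SU: geekuser ON ttyp4 AT 12:07:50
BAD SU: alice ON ttyp2 AT 12:09:01
EOF
# field 3 of each "BAD SU" line is the user name
awk -v t="$THRESHOLD" '/BAD SU/ { n[$3]++ }
    END { for (u in n) if (n[u] > t) print u, "exceeded threshold:", n[u] }' "$log"
# → geekuser exceeded threshold: 4
rm -f "$log"
```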
Approaches to intrusion detection
   Rule-based detection
    – define a set of rules that can be used to decide
      that a given behaviour is that of an intruder
    – Anomaly detection: rules that identify
      deviation from previous usage(for a user).
    – Penetration identification: an expert system
      approach that searches for suspicious behaviour.
Approaches to intrusion detection
 Statistical anomaly detection is effective
  against masqueraders, who are unlikely to
  mimic the behaviour patterns of the
  accounts (users) they appropriate.
 Rule-based approaches are appropriate for
  misfeasors, since the system may be able to
  recognise events and sequences that, in
  context, reveal penetration.
Approaches to intrusion detection
   However the most common and effective
    strategy used in intrusion detection is
    logging and auditing.
    – logging records the foot prints of the intruder
      and auditing those foot prints may reveal the
      actions performed by the intruder.
   Logging helps to identify that there was an
    intrusion, while auditing allows one to
    identify the intruder’s tracks and clean up the
    damage.
Discovering a break-in
   Three major rules for handling security breaches:
    – Do not panic!!
       • Determine whether there is a real breach of security
         and how important it is; bring the system back to
         normal operation as soon as possible.
    – Document
       • Keep a log of all the events and information in hard
         copy so that different tracks of behaviour can be
         compared later.
    – Plan ahead
       • Identify and understand the problem; contain or stop
         the damage; confirm your diagnosis and determine
         the damage; restore the system; deal with the cause
         and perform related recovery.
Do not Panic!!
   One of the students in the class (I am not naming the person here – who is he? – may
    be Chad!! Or Nathan who is still not happy with the negative marking!!) – let us call
    him geek user - bosses over others including myself. I decided to take a revenge
    through this unit as follows:
    The geek user has to use a Unix system run by ITS for this unit. The local operational
    staff of this machine has made sure that no one can execute the su command except
    the members of their group, by removing the world-execute permission for that
    program. Obviously the operations staff were somewhat worried when the following
    message started scrolling on their system console:
     BAD SU: geekuser ON ttyp4 AT 12:07:20
     BAD SU: geekuser ON ttyp4 AT 12:07:30
     BAD SU: geekuser ON ttyp4 AT 12:07:40
    When the console eventually displayed the following message
    SU: geekuser ON ttyp4 at 12:08:12
    All hell broke loose and the system administrator from ITS grabbed our popular (and
    loud-mouthed) geekuser, took him away and gave him a good lesson on security.
    However nobody noticed me. I was sitting at the corner of the room, quietly running
    the following script which periodically issued the above message and redirected to
    /dev/console which was world-writable.
               echo "BAD SU: geekuser ON ttyp4 AT `date`" > /dev/console
    and after a while I issued the second message.

    The moral is – do not panic – you should treat your audit trail with suspicion.
Discovering an intruder
 Catching the perpetrator in the act.
 Deducing that a break-in has taken place
  based on changes that have been made to
  the system.
    – e.g: an email message from the attacker himself!!;
      changes in the system files (such as
      /etc/passwd) – how do we detect them?
   Strange activities on the system such as
    system crashes, significant disk activity,
    unexpected reboots, sluggish responses
    when it is not expected, etc.
Catching the intruder in the act
   Many systems provide commands to help to
    figure out who is doing what on the system.
    – e.g: finger, users, whodo, w, who, ps etc on Unix system
   Monitoring the intruder
    – e.g: monitor the intruder’s keystrokes using programs
      such as ttywatch or snoop – these programs provide a
      detailed, packet-by-packet account of information sent
      over the network.
   Tracing a connection
    – Determine the terminals using the ps, w, who commands
      or by last and netstat, and use this information to
      contact the appropriate system administrator of the
      remote system.
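A quick snapshot combining these commands might look like the following (a sketch; exact options vary between Unix variants, and netstat may be absent on modern Linux systems):

```shell
#!/bin/sh
# Sketch: snapshot of current activity on the system.
who                                  # who is logged in, and on which terminal
w                                    # what each logged-in user is doing
ps aux | head -20                    # a sample of running processes
# active TCP connections (netstat may not exist on every system)
netstat -tn 2>/dev/null | grep ESTABLISHED || true
```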
Discovering an intruder’s foot prints
   Log files – look for things out of the ordinary
    such as
    – Users logged in at strange hours; unexplained
      reboots; unexplained changes to the system
      clock; unusual error messages from the
      mailer, ftp daemon or other network servers;
      failed login attempts with bad passwords;
      unauthorised su commands; users logging in
      from unfamiliar sites on the network, etc.
   Check the integrity of system files.
    – i.e. check that the behaviour of the system has
      not changed.
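The log-file checks above can be sketched as a simple scan for suspicious patterns; the log entries here are fabricated for illustration:

```shell
#!/bin/sh
# Sketch: scan a log file for suspicious entries.
log=$(mktemp)
cat > "$log" <<'EOF'
Jan 10 03:12:44 host login: FAILED LOGIN 2 FROM console FOR root
Jan 10 03:13:02 host su: BAD SU geekuser on /dev/ttyp4
Jan 10 09:00:01 host cron: routine job started
EOF
# print any line matching a suspicious pattern
grep -E 'FAILED LOGIN|BAD SU|reboot' "$log"
rm -f "$log"
```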
Auditing and Logging
 Log files are an important building block
  of a secure system: they form a recorded
  history, or audit trail, of the computer’s
  past, making it easier to track an attack.
 Log files also have a fundamental
  vulnerability: they are stored on the same
  system and hence can be modified by the
  intruder.
The basic log files in Unix
   acct or pacct   records commands run by every user
    aculog          records use of dial-out modems (automatic call units)
    lastlog         logs each user’s most recent successful login time;
                    and possibly the last unsuccessful login too
    loginlog        records bad login attempts
    messages        records output to the system’s console and other
                    messages generated from the syslog facility
    sulog           logs use of the su command
    utmp            records each user currently logged in (usually it is in /etc)
    utmpx           extended utmp
    wtmp            provides a permanent record of each time a user logged in
                    and logged out. Also records system shutdown and
                    startup (usually in /var directory – see /etc/syslog.conf file
                    for the location of log files)
    wtmps           extended wtmp
    vold.log        logs errors encountered with the use of external media,
                    such as floppy disks or CD-ROMs
    xferlog         logs FTP access
Per-user trails in the file system
   Generally intruders log in as an existing user on
    the system, and hence per-user trail files may give
    a clue to the intruder’s activities.
   These files are not real log files.
    – e.g: many standard user command shells keep a history
      file (e.g: on Linux, .bash_history for the bash shell).
        • However an intruder may delete this file before
          logging out. One possible way not to lose the
          contents of this history file is to make a link in a
          directory on the same disk that is normally
          inaccessible to the user (e.g. in a root-owned
          directory). Even if the intruder unlinks the file from the
          user’s directory, it can still be accessed through the
          extra link.
    – Mail – some user accounts are configured to keep a
      copy of all outgoing mail in a file, or to record
      information about the mail sent and received.
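The hard-link trick described above can be demonstrated as follows. This sketch uses a temporary directory in place of a real home directory and a root-owned directory:

```shell
#!/bin/sh
# Demonstration of preserving a history file via an extra hard link.
demo=$(mktemp -d)
mkdir -p "$demo/home" "$demo/safe"
chmod 700 "$demo/safe"                      # stands in for a root-owned directory
echo "su -" > "$demo/home/.bash_history"    # pretend shell history
# extra link on the same filesystem; the file now has two names
ln "$demo/home/.bash_history" "$demo/safe/geekuser.hist"
rm "$demo/home/.bash_history"               # intruder deletes the history...
cat "$demo/safe/geekuser.hist"              # ...but the content survives: su -
rm -rf "$demo"
```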
swatch – a log file tool
 A perl program to monitor log files.
Managing log files
   Plan to backup log files
   Review the log files periodically (perhaps
    daily or more often)
   Apply filters so that you do not get bored
    seeing the log messages
   Don’t trust logs completely!! – they can be
    altered or deleted by an intruder
   Plan to install software which can add
    security to the operating system’s controls
    (such as TCP wrappers).
Integrity Management
   The goal of integrity management is to
    prevent alterations to (or deletions of)
    data, to detect modification or deletions if
    they occur, and to recover from alterations
    or deletions if they happen.
File protection
   Basic
    – all-none protection
    – group protection
   Single permission
    – password or token
    – temporarily acquired permission
 Per-object & per-user protection
 Example
    – UNIX
Integrity management
   Is achieved by
    – prevention
    – detecting change
  By placing controls – such as software,
  hardware, file system and operating
  system controls.
 By having immutable and append-only files
    – immutable files are those that cannot be
      modified once the system is running (suitable
      for system programs such as login, passwd
      programs) and append-only files to which data
      can be appended, but in which the existing
      data cannot be changed (suitable for log files)
Integrity Management Techniques
 Setting appropriate file permissions and
  restricting access to the root account on
  the system.
 Immutable files – that cannot be modified
  once the system is running.
 Append only files – files to which data can
  be appended, but in which the existing
  data cannot be changed. This type is
  ideally suitable for log files.
 Read-only file systems – a hardware read
  only protection will be even better.
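On Linux ext filesystems, for instance, immutable and append-only attributes can be set with chattr (requires root; shown as a sketch only):

```shell
# immutable: the file cannot be modified, renamed or removed
chattr +i /bin/login
# append-only: data can be added but existing data cannot be changed
chattr +a /var/log/messages
# inspect the attributes
lsattr /bin/login /var/log/messages
```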
Detecting changes in files
 Meta data - such as file sizes, last
  modification time, etc
 Comparison copies – comparing byte-by-
  byte – unwieldy and time consuming.
 Checksum – file content can be modified
  in such a way that it generates the same
  checksum – not effective.
 Signatures
Detecting a change
   Comparison of files with a (good) backup
    – the backup copy has to be kept in a protected
      location
    – comparison has to be performed byte-by-byte
      and hence time consuming process (especially
      for large files – such as database files)
    – once an unauthorised change is detected,
      replace the altered version with the
      comparison copy, thereby restoring the
      system to normal.
Detecting a change
   Checklists and metadata
    – Store only a summary of important
      characteristics of each file and directory and
      use this information for comparison.
       • e.g. of summary information – time stamps
         (last read/modified), file protection
         modes, link count (using ncheck), etc.
       • Running this kind of detection change as a
         cron (as a background) job may not be a
         good idea! Why?
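Such a checklist comparison can be sketched as follows, using a temporary directory and ls -l output (permissions, owner, size, name) as the stored summary:

```shell
#!/bin/sh
# Sketch: save a metadata checklist, then diff against a fresh snapshot.
demo=$(mktemp -d)
echo "hello" > "$demo/a"
echo "world" > "$demo/b"
snapshot() { ls -l "$demo/a" "$demo/b" | awk '{print $1, $3, $5, $NF}'; }
snapshot > "$demo/baseline"           # taken while the system is clean
echo "extra line" >> "$demo/a"        # simulate an unauthorised modification
snapshot > "$demo/now"
diff "$demo/baseline" "$demo/now" || echo "metadata change detected"
rm -rf "$demo"
```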
Detecting a change
   Checksum and Signatures
    – changes can be made in such a way that the
      checksum and metadata may not change and
      hence the previous method may fail.
       • e.g. set the clock backwards, perform
         the changes and then set the clock forward
    – CRC checksums – useful only when there are
      few bits of change and they are generated by
      well known polynomials.
    – Generate signatures for the file contents and
      use the signature to detect the change.
Detection of changes using Signatures
 Remember that signatures are computed by
  one-way functions and it is possible to generate
  signatures for both small and large files.
 Since signatures are generated by one-way
  functions, and a good signature function will
  generate different signatures for different
  files, it is difficult for an intruder to
  modify the content of a file and still
  generate the same signature as that of the
  unmodified file.
Detection of changes using Signatures
   Assume that the initial signatures for the files
    listed in /usr/adm/filelist are stored in the file
    /usr/adm/savelist (say using MD5 algorithm).
    Then the following shell script can verify whether
    the contents of any of the files in the filelist
    have been modified.

    # generate new signatures for the files in the list

    find `cat /usr/adm/filelist` -type f -exec md5 {} \; > /tmp/now

    # compare the old signatures and the new signatures (line by line) and
    # report the files which do not match

    diff -b   /usr/adm/savelist   /tmp/now
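The same scheme can be demonstrated end-to-end in a temporary directory. This sketch uses md5sum, the usual Linux counterpart of the md5 command above:

```shell
#!/bin/sh
# Self-contained demo of signature-based change detection.
demo=$(mktemp -d)
echo "login program"  > "$demo/login"
echo "passwd program" > "$demo/passwd"
printf '%s\n%s\n' "$demo/login" "$demo/passwd" > "$demo/filelist"
# baseline signatures, taken while the files are known to be good
find $(cat "$demo/filelist") -type f -exec md5sum {} \; > "$demo/savelist"
echo "trojaned" > "$demo/login"       # an intruder replaces a program
# regenerate signatures and compare against the baseline
find $(cat "$demo/filelist") -type f -exec md5sum {} \; > "$demo/now"
diff -b "$demo/savelist" "$demo/now" || echo "modified files detected"
rm -rf "$demo"
```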
Detection of changes using Signatures
   It is important that the original signature
    file is not modified by the intruder.
    – It may be a good idea to store this file on a
      different system.
   For some files detecting changes via
    signatures may not be meaningful. For
    example, /etc/passwd (or /etc/shadow) file
    contents will change quite often, hence
    hybrid of metadata (for such files) and
    signatures for other files should be used
    for detecting changes.
    In practice one need not generate a digital
    signature on the content of each of the files.
    – e.g. We need to know if the owner or
      protection of /etc/passwd file is changed, but
      we do not care about the size or checksum
      because we do expect the contents to change,
      whereas we should be concerned if the contents
      of /bin/login are altered.
   tripwire is a package that allows one to
    specify the files and directories that need to
    be monitored using MD algorithms.
