Monitoring IDS


Anton Chuvakin, Ph.D., GCIA, GCIH



Security is a rapidly changing field of human endeavor. Threats we face literally change every day;
moreover, many security professionals consider the rate of change to be accelerating. On top of that, to be
able to stay in touch with such ever-changing reality, one has to evolve with the space as well. Thus, even
though I hope that this document will be useful to my readers, please keep in mind that it was possibly
written years ago. Also, keep in mind that some of the URLs might have gone 404; please search around.

Intrusion Detection Systems (IDS) occupy an important place in the

prevention-detection-response metaphor, aiming to alert users of

security-relevant events occurring on their networks and

systems. Network intrusion detection technology is starting to become a

standard part of network infrastructure, just as firewalls did in

recent years. For example, NIDS switch modules and integrated

firewall-IDS appliances present further evidence of that.

However, there is one important difference between intrusion

detection and most other security technologies, such as firewalls and

anti-virus products. An IDS needs to be monitored to realize most of its

benefits. Unlike a well-configured firewall, which will provide a large

part of its benefits (admittedly, not all!) even if configured

correctly and then forgotten, an IDS is next to useless unless

somebody is looking at the alerts it produces. An IDS has been likened to

a "Christmas puppy", for
being exciting right after purchase, but needing lots of attention

during the whole life cycle.

Most network IDS, the prime subject of this article, can be loosely

categorized into signature-based and anomaly-based (see more details

here). One of the crucial

differences is that a signature IDS needs to know a lot about the attack

it is trying to detect, while an anomaly IDS needs to know more about

the normal traffic that it should ignore. Some IDS (hybrid IDS)

combine both detection methods to some extent.
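The difference between the two detection styles can be sketched in a few lines of Python. This is a toy illustration, not real IDS logic: the byte patterns, baseline, and tolerance values are invented for the example.

```python
def signature_detect(payload: bytes) -> bool:
    """Signature IDS sketch: flag traffic matching a known attack pattern."""
    known_bad = [b"/bin/sh", b"\x90" * 16]  # toy signatures, not real rules
    return any(pattern in payload for pattern in known_bad)

def anomaly_detect(bytes_per_sec: float, baseline: float,
                   tolerance: float = 3.0) -> bool:
    """Anomaly IDS sketch: flag traffic deviating too far from 'normal'."""
    return bytes_per_sec > baseline * tolerance

# A payload containing a shell string trips the signature engine...
print(signature_detect(b"GET /cgi-bin/x?cmd=/bin/sh"))  # True
# ...while a sudden traffic spike trips the anomaly engine.
print(anomaly_detect(bytes_per_sec=50_000.0, baseline=10_000.0))  # True
```

Note how the signature function encodes knowledge of the attack, while the anomaly function encodes knowledge of normal traffic; a hybrid IDS would combine both checks.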

Monitoring IDS is never as easy as "see an attack alert -> take action

to stop it". Current NIDS systems are plagued by "false alarms" and

"false positives". To clarify, a "false positive" is an event when an

IDS sends an alert based on completely innocuous activity. Triggering

a NOP-based generic buffer overflow signature as a result of a large

binary file transfer is likely the most well-known example. This

happens since the file might contain many different combinations of

bits, including the one that is being looked at by the NIDS

signature. "False alarms" are alarms that do not (and, actually,

cannot) impact their target. IIS overflow hurled towards an Apache web

server is an example of the latter. IDS vendors spent a lot of efforts

trying to minimize those false triggers, while keeping the "false

negatives" (i.e. missed attacks) at a reasonable level.
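The NOP-sled false positive mentioned above is easy to demonstrate. The sketch below uses an invented 14-byte sled length, and the "innocent file" has the byte run deliberately embedded so the demo is deterministic; the point is that nothing about the bytes themselves distinguishes an attack from ordinary binary data.

```python
import os

# Toy version of a "SHELLCODE x86 NOOP" signature: a run of NOP (0x90)
# bytes. The sled length here is an arbitrary choice for illustration.
NOP_SLED = b"\x90" * 14

def sled_signature_fires(payload: bytes) -> bool:
    """Return True if the payload contains the NOP-sled byte pattern."""
    return NOP_SLED in payload

# An innocuous binary transfer can contain the same byte run; it is
# embedded here explicitly so the demo is deterministic.
innocent_file = os.urandom(1024) + NOP_SLED + os.urandom(1024)
print(sled_signature_fires(innocent_file))  # True, yet no attack occurred
```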

It is hard to judge whether IDS needing attention is an inherent

characteristic of the technology or a nasty problem that needs to be
"fixed". The very issue of detection of the attacks implies that the

fact of the attack is brought to someone's attention and not magically

"prevented" or "responded to". While "intrusion prevention" systems or

IDS with active response might blur the line between prevention,

detection and response, the main function of IDS seems to be to notice

and reveal attacks.

One might ask why detection is even needed if prevention technologies

are constantly perfected. However, the nature and complexity of

electronic communication leads one to believe that 100% effective

prevention is unachievable. If security enables safe business

transactions, it simultaneously makes abuse possible. In fact, as

Marcus Ranum puts it, the only solution that provides absolute

security is his "Adaptive Packet Destructive Filter"

Thus, detection and

monitoring will always be needed, no matter what the advances in

prevention the future might bring.

Now that the need for IDS monitoring is established, let's review some

of the possible approaches. It is widely believed that "real-time"

monitoring provides the highest level of security. As network activity

on the Internet-exposed networks occurs 24 hours a day, one would

suspect that monitoring should occur throughout the day as well. The

resulting conclusion is that 24 hours a day, real-time monitoring is

needed. However, this conclusion immediately runs up against

"What are the business-driven monitoring needs?" together with

"What can you afford?" Certainly, some sites do implement such
extensive monitoring programs, but it is likely that most

organizations simply cannot afford it. It is worthwhile to note that

"working hours only" "real-time" monitoring is based on the assumption

that hackers only operate during business hours in your part of the

world, which is provably false.

It is more likely that periodic monitoring in combination

with off-hours automatic alerting (such as via the dreaded pager

a.k.a. "3am alarm clock") will be implemented. The current practices

seem to differ greatly depending upon the company size. It appears

that small companies in general are not deploying intrusion detection

due to cost reasons, be it the cost of software (for commercial

solutions) or cost of personnel (for free open source solutions). Even

if somebody tucked a Snort box in the corner of their network

infrastructure, it is likely that the collected events will only be

looked at in the case of a major incident and no ongoing monitoring

program is implemented. Larger companies with part-time or even

dedicated full-time security staff are more likely to look at the IDS

alerts as a part of daily security routine. Spending an hour or two

viewing alerts and maybe plotting the trend to see how the threat

landscape is changing is likely. During the off-hours, the on-call

personnel might be available. Only a few of the largest organizations

are likely to go for the near real-time 24 hour monitoring in the

dedicated SOC. Some of the challenges of monitoring a large IDS

deployment are discussed elsewhere. A viable option is
to sign up for a managed security monitoring service, which is, however, out of

the scope of this paper.

It should be noted that not only the IDS should be monitored to receive

maximum benefits. In fact, monitoring IDS alerts becomes much more

valuable if they are fused with messages from other devices such as

firewalls, host OS, and even applications. Moreover, looking for

specific patterns within the firewall logs might provide for a crude

variant of "intrusion detection" with no actual IDS. Combining host logs with IDS

alerts often allows one to reliably conclude whether the attack

noticed by the IDS actually impacted the target machines.

The subject of IDS monitoring cannot be addressed without looking at the

level of detail the IDS provides. How much data is "needed" likely has

no single answer for all situations, since the data retention policy

will likely conflict with the desire to "log everything" and "keep it

forever". Little detail is needed for simple alerts; more for casual

log perusal and simple automated log analysis tools, and yet more for

in-depth analysis.

However, nothing beats the full packet dump, absolutely crucial for a

detailed attack investigation. Many things that are rarely clear from

looking at whatever alert messages the IDS generates instantly fall

into place after a single peek at the decoded packet dump. For example,

a complete TCP header reveals many things to a skilled analyst, such as

the chances of the packet being spoofed, the OS of the system that produced it
(via passive OS fingerprinting), etc. Many IDS "false positives" can

be quickly clarified by looking at the packet header and payload.

Here are three examples of Snort logs with various levels of detail:


Feb 10 01:31:05 bastion snort: [1:1622:5] FTP RNFR ././ attempt [Classification: Misc Attack] [Priority: 2]: {TCP} ->


01/29-17:48:11.577739 [**] [1:648:5] SHELLCODE x86 NOOP [**] [Classification: Executable code was detected] [Priority: 1] {TCP} ->


[**] [1:1622:5] FTP RNFR ././ attempt [**]

[Classification: Misc Attack] [Priority: 2]

02/10-01:31:08.856949 ->

TCP TTL:49 TOS:0x0 ID:12424 IpLen:20 DgmLen:62 DF

***AP*** Seq: 0x3DE4414B Ack: 0xC0F209E6 Win: 0x16D0 TcpLen: 32

TCP Options (3) => NOP NOP TS: 39325796 22828471

In this third sample, the important parts of the IP and TCP headers

(such as flags, TTL, IP ID, TCP sequence and ACK number and almost all

others) are shown. The next level of reporting detail is a complete
packet dump, as it was seen on the wire (such as in tcpdump format).
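The one-line "fast" alert format shown in the samples above lends itself to simple automated parsing. The regular expression below is a rough sketch whose field layout is inferred from the samples in this article, not taken from Snort documentation, so treat it as an assumption.

```python
import re

# Rough pattern for the one-line Snort alert format shown above; the
# field layout is inferred from the sample lines, not from Snort docs.
FAST_ALERT = re.compile(
    r"\[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\]\s+"
    r"(?P<msg>.+?)\s+"
    r"\[Classification:\s*(?P<class>[^\]]+)\]\s*"
    r"\[Priority:\s*(?P<prio>\d+)\]"
)

def parse_alert(line):
    """Extract rule ID, message, classification, and priority, if present."""
    m = FAST_ALERT.search(line)
    return m.groupdict() if m else None

line = ('[1:1622:5] FTP RNFR ././ attempt '
        '[Classification: Misc Attack] [Priority: 2]')
print(parse_alert(line))
```

Once alerts are parsed into fields like these, counting, filtering, and prioritizing them becomes a matter of ordinary data handling rather than eyeballing raw text.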

Most major commercial IDS systems can also be configured to log more

or less data up to and including the binary traffic capture. Some

systems (such as Dragon, ManHunt and others) can be configured to log

the entire connection session that includes the attack.

So, the optimum level of reporting detail is chosen and multiple IDS

sensors are installed at crucial points of the enterprise. Now, one

needs to collect all the IDS data in one place. Why is that essential?

Early ID systems required the user to log in to each sensor to view

the collected event data, such as alerts and captured packets. With

IDS deployments growing, most IDS vendors introduced various console

solutions to monitor and configure multiple sensors. For multi-device,

cross-platform analysis (such as for IDS, firewall and host data) the

Security Information Management (SIM) solution can be used. Aggregating

data in one point cuts down the cost of operating the IDS

infrastructure and also enables detection of various distributed

attack patterns, such as scans against different parts of the
enterprise network.


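The distributed-pattern point can be made concrete with a small sketch: a source that probes several sensors looks harmless at each sensor individually, and only stands out once alerts are aggregated. The alert tuples, sensor names, and addresses below are invented for illustration.

```python
from collections import defaultdict

# Invented alert records: (sensor, source_ip, signature) tuples standing
# in for events collected from several IDS sensors into one place.
alerts = [
    ("sensor-dmz",  "203.0.113.7",  "SCAN SYN"),
    ("sensor-lan",  "203.0.113.7",  "SCAN SYN"),
    ("sensor-edge", "203.0.113.7",  "SCAN SYN"),
    ("sensor-dmz",  "198.51.100.2", "ICMP PING"),
]

def distributed_sources(alerts, min_sensors=2):
    """Sources seen by several different sensors -- visible only centrally."""
    seen = defaultdict(set)
    for sensor, src, _sig in alerts:
        seen[src].add(sensor)
    return {src for src, sensors in seen.items() if len(sensors) >= min_sensors}

print(distributed_sources(alerts))  # {'203.0.113.7'}
```

No single sensor in this example sees more than one probe from 203.0.113.7, so only the aggregated view reveals the sweep across the network.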
Even monitoring many IDS of the same kind presents a data volume

management challenge. Depending upon the quality of system tuning,

multiple IDS might produce gigabytes of raw logs and megabytes of

alerts per day. For a multi-vendor IDS deployment, such as the

often-suggested combination of signature-based and anomaly-based IDS from two

different vendors, the monitoring challenges multiply. This paper sheds some light on the

problem. Some of the challenges include data normalization,

correlation, aggregation, preventing loss in transit, etc.

An additional challenge that plagues any large security monitoring

project is data volume. Whether it's 50 Snort machines or 50 different

security devices, the data volume produced daily is staggering. So,

naturally, the question of a database comes up. Even if most problems

of multi-platform IDS monitoring outlined above are resolved, the

organization will end up with a quickly filling event

database. Indeed, simply collecting IDS alarms in one place makes

consistent monitoring possible, but it does not make it easy.

What are the approaches for dealing with the data volume? Filtering

"irrelevant" events is one option. However, in most cases determining

what is irrelevant is hard, and what looks irrelevant now might prove

to be a crucial missing piece of evidence in case of a security

incident. Intelligent aggregation (such as the one performed by SIM

solutions) is another option, but it requires deploying a SIM

software. Finally, the IDS data volume may be significantly reduced by

tuning the IDS for the environment. Unfortunately, some evidence

indicates that few people actually tune IDS signatures for their

environments. There are many ways to intelligently customize the IDS

signature set either starting from "all enabled" (and getting flooded

with alerts) or with "only relevant" (and then growing the rule set

with custom signatures).
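One of the simplest volume-reduction steps, short of filtering or full SIM-style aggregation, is collapsing duplicate alerts into counts before an analyst looks at them. The sample alert names below are invented, reusing signatures mentioned earlier in this article.

```python
from collections import Counter

# Invented sample alerts; in practice these would be parsed from IDS logs.
alerts = [
    "ICMP PING Windows", "ICMP PING Windows", "ICMP PING Windows",
    "FTP RNFR ././ attempt",
    "SHELLCODE x86 NOOP", "SHELLCODE x86 NOOP",
]

# Collapse repeats into (signature, count) pairs, most frequent first,
# so the analyst reviews three summary lines instead of six raw alerts.
for sig, count in Counter(alerts).most_common():
    print(f"{count:4d}  {sig}")
```

This kind of aggregation loses per-event detail (timestamps, addresses), which is why it works best as a review summary on top of, not instead of, the retained raw logs.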
IDS monitoring, if done rarely and only in the case of incidents,

starts to morph into "IDS forensics". Consulting IDS logs after some

machines are broken into seems like a reasonable thing to do,

especially if IDS logs are correlated with firewall and host

messages. For example, if one observes a set of 'connection denied'

from a firewall, then 'connection allowed', then an IDS alert for an

exploit followed by some service errors on the target, the picture of

an incident becomes that much more clear, even before any of the host

forensics (such as hard disk examination) is performed. For example,

this paper reflects how incident investigation was completed even

before the hard drive evidence was touched.
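The firewall-then-IDS-then-host sequence described above amounts to merging per-device events into one time-ordered view. A minimal sketch, with invented timestamps, device names, and messages:

```python
# Invented event records (timestamp, device, message). Merging them into
# one sorted timeline makes the incident sequence obvious even though the
# records arrived out of order from different devices.
events = [
    (1044840600, "firewall", "connection denied 203.0.113.7 -> web:80"),
    (1044840665, "ids",      "WEB-IIS overflow attempt 203.0.113.7 -> web:80"),
    (1044840660, "firewall", "connection allowed 203.0.113.7 -> web:80"),
    (1044840670, "host",     "inetd: service error, child exited"),
]

for ts, device, msg in sorted(events):
    print(f"{ts}  {device:8s}  {msg}")
```

Real correlation engines also have to align clocks across devices and match events by address and session, but even this naive sort already turns scattered logs into a readable incident narrative.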

When the data volume problem is reduced to a manageable size by using some

of the above suggestions, the question of responding to events

reported by the IDS comes up. Before events can be responded to, a

response strategy based on the event priority needs to be

built. Intuitively, "MISC successful gobbles ssh exploit (uname)"

warrants more immediate action than "ICMP PING Windows". Most IDS

vendors assign severities or priorities to events, and some even go as

far as to assign several such metrics, such as severity, priority,

reliability, etc. Not only do IDS vendors use different scales, they also

seem to use different philosophies while coming up with event

severities. Is it the chance of getting "root"? The criticality of the

attacked service? The potential for expensive damage? Apparently, there is

no universal event prioritization (or even naming, not counting the

dormant MITRE CIEL project)

scheme. And if one adds host IDS, host OS, and firewall events with

their own priorities to the mix, the situation becomes

unmanageable. Normalized severity, as used by some SIM products, is

likely to help; however, a lot more research is needed for a

consistent security event classification.
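One way such SIM-style severity normalization can be sketched is as per-vendor lookup tables onto a common scale. The vendor names, scales, and target values below are all invented for illustration; real products embody much more nuanced mappings.

```python
# Hypothetical per-vendor severity scales mapped onto a common 1-10
# scale. The vendor names and numbers here are invented for illustration.
VENDOR_MAPS = {
    "snort":   {1: 9, 2: 5, 3: 2},                 # priority 1 = most severe
    "vendorX": {"high": 8, "medium": 5, "low": 2},
}

def normalize(vendor, raw_severity, default=5):
    """Map a vendor-specific severity onto the common scale.

    Unknown vendors or values fall back to a middle-of-the-road default,
    so an unrecognized event is neither ignored nor treated as critical.
    """
    return VENDOR_MAPS.get(vendor, {}).get(raw_severity, default)

print(normalize("snort", 1))        # 9
print(normalize("vendorX", "low"))  # 2
```

The hard part, as the text notes, is not the lookup mechanics but agreeing on what the common scale actually measures: chance of compromise, asset criticality, or potential damage.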

As a concluding remark, the eternal need for IDS and other security

monitoring will be emphasized. In this day and age when many vendors

sell "intrusion prevention" technologies (both host and

network-based), IDS acquire "active response" capabilities and

firewalls are getting smarter, one might think that effective

prevention will eliminate the need for IDS monitoring (and for

intrusion detection systems themselves). Such a thing will never

happen. First, even prevention technologies need to be carefully

monitored to assure that they are indeed preventing. Second, the

ever-changing nature of the information security threat landscape will

always ensure that new, harder-to-prevent threats keep

surfacing. However, these threats are likely to leave traces that can

be detected. Third, firewalls and other prevention technologies are by

nature enabling connectivity (with the notable exception of Marcus

Ranum's ultimate firewall, referenced above) and thus also enable attacks and

abuse, which need to be monitored. Many more factors are at play to

assure that detection and monitoring will never become obsolete.


This is an updated author bio, added to the paper at the time of reposting in 2010.
Dr. Anton Chuvakin is a recognized security expert in the field of log
management and PCI DSS compliance. He is an author of the books "Security Warrior" and "PCI Compliance"
and a contributor to "Know Your Enemy II", "Information Security Management Handbook" and others.
Anton has published dozens of papers on log management, correlation, data analysis, PCI DSS, and security
management. His blog is one of the most
popular in the industry.

In addition, Anton teaches classes and presents at many security conferences across the world; he recently
addressed audiences in the United States, UK, Singapore, Spain, Russia and other countries. He works on
emerging security standards and serves on the advisory boards of several security start-ups.

Currently, Anton is developing his security consulting practice, focusing
on logging and PCI DSS compliance for security vendors and Fortune 500 organizations. Dr. Anton Chuvakin
was formerly a Director of PCI Compliance Solutions at Qualys. Previously, Anton worked at LogLogic as a
Chief Logging Evangelist, tasked with educating the world about the importance of logging for security,
compliance and operations. Before LogLogic, Anton was employed by a security vendor in a strategic
product management role. Anton earned his Ph.D. degree from Stony Brook University.
