Seven Steps to a Working Intrusion Detection System

Using tcpdump

As more organizations rely on the internet for both communications and electronic commerce, system administrators, CIOs and business executives increasingly share a common worry. Their joint concern is that malicious people may be using the internet to launch attacks on their computers and that those attacks could do great damage to their organization.

A small number of organizations have gone from wondering whether anyone was trying to attack their systems to knowing for a fact that people were, how often, and which vulnerabilities were being probed. They have built intrusion detection systems.

With that knowledge the organizations have been able to tune their firewalls and system protection strategies based on information, instead of on guesses.

The early adopters of intrusion detection systems crafted their own unique tools, but now some of the pioneers in intrusion detection are joining forces to perfect a common library of public-domain tools to automate the process. Their goal is nothing less than a new "cops for the network." (cops is a widely-used public-domain utility that helps identify system vulnerabilities.) Their new tool, called "cid" (for cooperative intrusion detection), automates the process of information gathering and traffic analysis for intrusion detection.

A cid-based intrusion detection capability can be deployed using freely-available software and existing hardware or hardware that can be purchased for less than $10,000. And cid complements and enhances the impact of commercially available intrusion detection systems.

Overview of the cid architecture

A cid system is based on two computers: one for the sensor and one for the analysis station. In this design, both systems are UNIX-based.

The sensor is (usually) located outside your firewall and between your firewall and your internet connection, an area often called the DMZ. We recommend that you install a small hub with at least eight ports to support the sensor.

The analysis station is located inside your firewall. Its job is to download and evaluate the data collected by the sensor. It uses filters to collect events of interest (such as probable attacks). This information will then be displayed on a web page.

The computers may be any UNIX systems that can compile libpcap and tcpdump. In developing the reference system, we used a Sun Microsystems SPARC II that had outlived its life as an engineering workstation as the sensor, and a Pentium-based PC running Red Hat Linux 5.0 as the analysis station. You will need root access on both systems. You will also need a recent version of Perl, gzip, the Apache web server, and secure shell installed on the analysis station (the sensor needs gzip and secure shell). Except for secure shell, these come with Red Hat and probably other versions of Linux.

Intrusion detection requires a very large amount of disk space; more than half of your cost may be tied up in mass storage. Our reference system uses a nine-gigabyte Seagate Barracuda on the sensor and a twenty-three-gigabyte Seagate Elite 23 on the analysis station. Any large disk drive compatible with your UNIX systems should work.

Step 1: Acquiring the software

The cid software is available at no cost from the Lawrence Berkeley National Laboratory, the Naval Surface Warfare Center, and the SANS Institute.

Action 1.1. Download tcpdump and related software from:

The main program will be labeled tcpdump.tar.Z. Make sure you also get libpcap (libpcap.tar.Z), since that is how the Unix system gets the network information from its kernel. You will also want tcpslice (tcpslice.tar.Z). These software packages have been made available by the Network Research Group at the Lawrence Berkeley National Laboratory.

Action 1.2 Acquire secure shell.

If you are not already using secure shell, visit the "getting started with secure shell" home page to obtain the software.

Action 1.3 Obtain the cid code. The remaining intrusion detection software is available at:

Step 2. Build the sensor.

The first system to build is the hardware/software facility that will serve as the sensor.

Action 2.1 Obtain a computer running UNIX.

Action 2.2 Partition a large disk.

Since the primary purpose of the system is to collect data, you want the /LOG partition to be as large as possible.

Action 2.3 Build libpcap.

We start with libpcap. The unix command to unpack the software is:
$ uncompress libpcap.tar.Z

The result of this operation should be a file called libpcap.tar. Next, type:
$ tar xf libpcap.tar

Unpack all the files that came in the software distribution into the directory you are working in, then compile libpcap. Read the documentation that came with the software; the high points are to run configure and then make.

Action 2.4 Compile tcpdump. Follow the same steps as in Action 2.3: uncompress, extract the tar file, and run configure and make to build tcpdump.

Step 3. Configuring the Sensor

In this section, we describe the actions necessary to install the programs that support tcpdump and allow it to run unattended. They include cron and the shell scripts supplied with the cid distribution.

Action 3.1 Partition a large disk. Since the primary purpose of the system is to collect data, you want the /LOG partition to be as large as possible.

Action 3.2 Unpack the cid tar file. The files in the sensor directory are needed to build the sensor. Copy these files to the sensor's /usr/local/bin directory.

Action 3.3 Secure the sensor. The sensor will be located outside the firewall, and the sensor programs run as root. Disable all unneeded services, and make sure all available security patches for your system are installed. The step-by-step scripts expect secure shell to be present for secure file transfer.

Action 3.4 Configure cron. The Unix scheduling system, cron, automatically runs programs at set times. Generally, you create a shell script and tell cron when to run it. On a SunOS 4.1.3 sensor, the way to edit the cron file is:

# crontab -e

Cron requires a particular format; the line we added to our cron file is below:

0 * * * * /usr/local/bin/ > /dev/null 2>&1

This entry tells cron to execute the shell script at minute zero of every hour.

Action 3.5 Review the activities of the hourly script. This is the program, run every hour by cron, that sets up some variables, calls one shell script to stop the current tcpdump, and then calls another shell script to start tcpdump again. The stop script closes out the old tcpdump file, and the start script opens a new one; experience shows that one hour is about as long as you want a single tcpdump capture to run.
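As a concrete illustration, the stop/start rotation can be sketched as a pair of shell functions. This is a minimal sketch only: the function names, the pid file, and the interface name le0 are our assumptions, not the actual cid scripts.

```shell
# Hypothetical sketch of the hourly rotation logic. The function names,
# the pid-file path, and the interface name (le0) are assumptions for
# illustration, not the actual cid scripts.
LOGDIR=${LOGDIR:-/LOG}

# Hourly stamp used to name closed-out capture files, e.g. 98012116
stamp() { date +%y%m%d%H; }

# Stop the running tcpdump so its output file can be closed and renamed
stop_logger() {
    if [ -f "$LOGDIR/" ]; then
        kill "$(cat "$LOGDIR/")" 2>/dev/null || true
    fi
    if [ -f "$LOGDIR/tcpdump.log" ]; then
        mv "$LOGDIR/tcpdump.log" "$LOGDIR/tcpdump.$(stamp)"
    fi
}

# Start a fresh capture and remember its process id
start_logger() {
    tcpdump -i le0 -w "$LOGDIR/tcpdump.log" &
    echo $! > "$LOGDIR/"
}
```

With functions like these, the hourly driver only needs to call stop_logger followed by start_logger.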

A third script runs on the analysis station rather than the sensor. It is also called by cron, and its job is to keep the sensor's disk from filling up by deleting older tcpdump files. We store three days' worth of data on the sensor, but this is configurable.
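A minimal version of that cleanup might look like the following; the function name and its defaults are our assumptions, not the real cid script.

```shell
# Hypothetical disk-trimming helper. The function name and the
# three-day default are assumptions, not the actual cid script.
trim_logs() {
    dir=$1
    days=${2:-3}    # keep this many days of data on the sensor
    # delete capture files whose modification time is older than $days days
    find "$dir" -name 'tcpdump.*' -mtime +"$days" -exec rm -f {} \;
}
```

In the real setup this would run from cron on the analysis station (reaching the sensor over secure shell); interactively you would just call, say, trim_logs /LOG 3.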

Action 3.6 Test the script. Run the script from the command line, then check the process table (ps ax on Linux, ps -ax on SunOS, ps -ef on AT&T variants of Unix) to see whether tcpdump is running. Go to the /LOG directory and check whether the log file is there and growing. If it is, you are probably in good shape.
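The "is it growing" check can be automated with a small helper like this one; the function name and the sampling interval are our own inventions, not part of cid.

```shell
# Check whether a capture file is still growing: sample its size twice.
# The function name and the default interval are our own inventions.
is_growing() {
    f=$1
    interval=${2:-5}
    s1=$(wc -c < "$f")
    sleep "$interval"
    s2=$(wc -c < "$f")
    [ "$s2" -gt "$s1" ]    # exit 0 (true) if the file grew
}
```

For example, is_growing /LOG/tcpdump.log && echo "sensor looks healthy".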

And that is all there is to it. A spare UNIX platform, tcpdump, a large disk, and a couple of shell scripts, and you have an intrusion detection sensor.

Step 4. Set up the Analysis Station

In this section we describe methods for getting the data from the sensor to the analysis station securely.

Action 4.1 Configure the firewall to allow secure shell initiated from the analysis station inside the firewall to copy the data from the sensor to the analysis station.

Action 4.2 Edit cron to call fetchem.

Here is a cron example:

# DO NOT EDIT THIS FILE - edit the master and reinstall.

# (/tmp/crontab.2899 installed on Thu Jan 22 11:40:22 1998)

# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)

7 * * * * /usr/local/bin/ > /dev/null 2>&1

fetchem's job is to pull the raw tcpdump file from the sensor machine, store it in a descriptive directory, and run it through tcpdump again with a filter file called "bad_events" which will detect those events you suspect may be intrusion attempts. The output from this tcpdump filter run is written to a file that happens to be within the document tree of an http web server. Consequently, pointing a web browser at this area allows an analyst to quickly review any findings of the tcpdump filter run. We will describe the format and usage of tcpdump filters in the next section.

Action 4.3 Test fetchem. Run fetchem interactively and see whether the files are correctly being copied from the sensor into the /LOG directory.

Step 5. Use tcpdump and its filters

This section is a step-by-step guide for learning the language you will use to control tcpdump's filtering activities. To get the most out of your system you will want to be able to customize your filters.

Tcpdump can operate in two modes: raw packet collection and packet filtering. The sensor merely reads the raw packets from the network and saves them to a file. The analysis station reads the packet files and applies filters to look for specific patterns. Let's take a look at examples of simple filters that we have found useful. Simple filters can be strung together with "and", "or", and so forth to make a more complex filter; we will also show an example of a complex filter.

NOTE: This is a good point to advise readers that a beginning intrusion detection analyst should be very slow to panic! IMAP, portmap, and so forth are commonly attacked, but they are also commonly used services. Take some time to learn how your site does business, network-wise. Of course, once you start looking, it is probable that you will detect some attempted attacks.

Example 5.1: Simple filter to detect telnets.

Since the log files are so large, especially on busy networks, they are kept compressed. So the first thing we will do in constructing our filter command is call gunzip. Here is the whole command:

$ gunzip -c yourlogfile.gz | tcpdump -r - "tcp and dst port 23"

The -r option tells tcpdump to read from a file instead of a network interface, and the trailing dash means standard input (the output of gunzipping the logfile). Our filter is in quotes. For the remainder of these filter examples, we will provide only the filter itself rather than the whole command line.

Example 5.2 A filter for IMAP.

The following filter scans for TCP SYN and FIN packets sent to destination port 143, the IMAP service. Attacks against IMAP are very common.

tcp and (tcp[13] & 3 != 0) and (dst port 143)

The filter above is pretty simple. The term "dst port" refers to the destination port. Most UNIX computers have a file called /etc/services that provides names for these numeric ports; for instance, email (smtp) is TCP port 25, telnet is TCP port 23, and so forth. IMAP, as we mentioned earlier, is port 143. The expression tcp[13] & 3 != 0 tests the low two bits of byte 13 of the TCP header, the flags byte; those two bits are the FIN and SYN flags.

If the filter finds the target pattern in a data file, tcpdump will display the results. Let's begin to look at some sample output!

First here is our file-naming format.

98 = The year, 1998; we aren't doing date arithmetic, so 2000 isn't an issue

01 = The month, January

21 = The day

16 = The hour in military time. Remember, we chose to roll the files over hourly.

From 98012116.txt:

Time Sourcehost.sourceport > Desthost.destport

16:33:14.296403 > S 0:0(0) win 512

16:33:14.301846 > S 0:0(0) win 512

16:33:14.323030 > S 0:0(0) win 512

Example 5.3 ICMP filter

Here is another simple filter, from the tcpdump man page, that looks for all ICMP packets that are not echo requests or replies (i.e., not ping packets). The expression icmp[0] refers to the ICMP type byte; echo request is type 8 and echo reply is type 0.

icmp[0] != 8 and icmp[0] != 0

ICMP message types are not listed in /etc/services. However, there is a helpful file, /usr/include/netinet/ip_icmp.h on Red Hat Linux 5.0 and SunOS 4.1.3, that defines the names of the ICMP type numbers. ICMP, the Internet Control Message Protocol, cannot (to the best of our knowledge) be used to break in to your site's computer systems, but it can be used, and is being used, for numerous denial-of-service attacks. Of course, ICMP was designed as a network health indicator, and if you were to suddenly see a lot of "TIMEX" (time exceeded), "UNREACH" (net, host, protocol ... unreachable), or "SOURCEQUENCH" messages, this could simply indicate that your network operations folks are about to have a bad day.

Example 5.4 Construct a filter to detect broadcasts:

ip and ip[19] = 0xff

If you look at ICMP at a busy site, you are likely to see a LOT of ICMP. The expression ip[19] = 0xff tests the last byte of the destination IP address, so the filter above matches broadcast ICMPs, or any other broadcasts, sent to addresses ending in .255. One of the classic attacks with ICMP, in which very large ping packets are fragmented, is often combined with a broadcast. After all, why go for a system when you can go for a subnet? There are several variants of this attack; here is one of our favorites:

16:00:03.828071 > (frag 27392:548@1480)

16:00:03.896593 > (frag 21248:548@1480)

16:00:06.118729 > icmp: echo request (frag


16:00:06.250349 > (frag 52480:548@1480)

(When we looked up the source address, we found the domain was actually registered!)

Example 5.5 Land attack filter

Speaking of denial of service, it turns out that some computer systems will freeze if they receive a packet in which the source address is spoofed to be the same as the destination address (the so-called land attack). Here is a filter to detect this; ip[12:4] is the four-byte source address and ip[16:4] is the destination address:

ip and ip[12:4] = ip[16:4]

Example 5.6 A filter designed to detect SNMP

(udp port 161 or udp port 162) and not src net 172.17

SNMP was developed for network management. However, adversaries of your organization can use SNMP to collect a lot of information about your networks and computer systems. Network appliances such as routers, hubs, and bridges often have SNMP agents built in, and many other devices, such as print servers and X terminals, have built-in SNMP agents as well. These devices use a "community string" to control access. Many SNMP agents default to a community string of "public", which means just that.

Example 5.7 A filter to watch for the r-utilities:

ip and ( tcp dst port 512 or tcp dst port 513 or tcp dst port 514 )

The r-utilities, rlogin, rsh, rexec, and so forth, allow two trusted systems to exchange files and commands without password authentication. Ideally, systems that need to do this type of trusted exchange across the internet will change to secure shell or some other more secure mechanism. Two problem areas arise with the use of the r-utilities: (1) /etc/hosts.equiv and (2) individual users' .rhosts files. If a system has a "+ +" in its hosts.equiv, it trusts all users from all systems. Needless to say, it is wise to look out for r-utility packets from unknown sites.

Example 5.8 A filter to detect access to portmapper:

ip and dst port 111

My /etc/services file calls this sunrpc. RPC clients connect to portmapper at either TCP or UDP port 111 to locate other services. This is a very old (and effective) gateway to a series of attacks; in late 1997, we started to see an increase in portmap attempts.

From 98012403.txt:

03:20:42.579548 > S

3648872793:3648872793(0) win 512 <mss 1460>

03:20:45.547040 > S

3648872793:3648872793(0) win 31744 <mss 1460>

03:20:51.549055 > S

Note the 'S', or SYN, packet flag. A firewall screens the DNS server, so the sunrpc attempt never actually reaches it.

Example 5.9 An NFS filter

NFS is a good service to keep an eye on, and here is a filter to do it:

ip and udp port 2049

05:17:50.562188 jokull.Colorado.EDU.885592240 > l.nfs: 40 null

05:17:52.553265 jokull.Colorado.EDU.885592240 > l.nfs: 40 null

05:17:56.551772 jokull.Colorado.EDU.885592240 > l.nfs: 40 null

Not all "hits" are intrusion attempts. Our assessment of the data series above is that it shows an automated process with a typographical error that happens to match dorado's internet address. We just hope, for "Joe Kool's" sake, that their system is trying to do this mount in the background.

Example 5.10 A NetBIOS filter.

Microsoft Windows for Workgroups, Windows 95, Windows NT, and SAMBA all use a protocol called NetBIOS to communicate over the internet. This protocol uses ports 137, 138, and 139 of both TCP and UDP. Let's build this filter one part at a time:

ip and

will match both TCP and UDP. Now we need to match the port numbers, 137, 138, and 139:

port 137 or port 138 or port 139

Finally, add parentheses for the precedence-challenged:

ip and (port 137 or port 138 or port 139)

Example 5.11 Construct a bad_events filter.

Sophisticated filters can be constructed to scan for any set of events you want to detect. For example, here's a portion of a filter file called "bad_events" that is used to detect packets that could indicate suspicious activity warranting further attention. This filter might be an excellent start toward implementing an intrusion detection capability at your site.

If you find the filter syntax a bit tricky, you'll want access to Internetworking with TCP/IP, Volume I, by Douglas E. Comer before you try anything really fancy. The book will be a good investment for your organization; in fact, someone probably already has it, so try to borrow a copy until your own comes in.

(tcp and (tcp[13] & 3 != 0) and

((dst port 143) or

(dst port 111) or

(tcp[13] & 3 != 0 and tcp[13] & 0x10 = 0 and dst net 172.16 and dst port 1080) or

(dst port 512 or dst port 513 or dst port 514) or

((ip[19] = 0xff) and not (net 172.16/16 or net 192.168/16)) or

(ip[12:4] = ip[16:4])))


(not tcp and not igrp and not dst port 520 and

((dst port 111) or

(udp port 2049) or

((ip[19] = 0xff) and not (net 172.16/16 or net 192.168/16)) or

(ip[12:4] = ip[16:4])))




Step 6. Display the information for maximum analytical value

Let's consider what you have done so far. You have set up the sensor in the DMZ. You have collected data; yes, a lot of data. You have examined this data with filters.

Sadly, when the files get very large, it takes increasingly large amounts of time to parse them with filters. And when tasks take time and effort, many people simply quit doing them. For this reason, how we display the information is every bit as important as what we filter for. fetchem writes its findings, in HTML format, into a directory that is served by the web server. The people responsible for watching the network simply use their web browsers to see what is going on. The web files are chained together by forward and back arrows, so the analyst can move from hour to hour easily.
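The hour-to-hour chaining can be illustrated with a trivial wrapper that surrounds one hour's filter output with navigation links. This is purely our own sketch, not the actual fetchem code, and the page-naming convention is assumed to follow the hourly file stamps.

```shell
# Wrap one hour's filter output (read from stdin) in an HTML page with
# forward and back links to the neighboring hours. Illustrative only --
# fetchem's real output format is not reproduced here.
html_wrap() {
    prev=$1     # e.g. 98012115
    next=$2     # e.g. 98012117
    printf '<html><body>\n'
    printf '<a href="%s.html">&lt;- prev</a> <a href="%s.html">next -&gt;</a>\n' "$prev" "$next"
    printf '<pre>\n'
    cat                      # the tcpdump filter output itself
    printf '</pre></body></html>\n'
}
```

For example: tcpdump filter output piped through html_wrap 98012115 98012117 > 98012116.html.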

Dir-it is the program that creates and updates the home page, which serves as an index to all the data. It was written by Robert Niles, and we have included it in the source distribution.

There are many web servers available. The one that we have used for displaying intrusion detection information for several years is the Apache web server.

Enhancing your Intrusion Detection Capability

Step 7. Implement more advanced analytical capabilities.

There is a lot more to intrusion detection than looking for a few bad events. So how can we improve the capability of our system?

Example 7.1 A filter to detect all non-smtp, non-domain accesses to your mail and name server (DNS) hosts:

ip and (host or


and not (dst port smtp or dst port domain)

At most sites, the systems that draw the most attacks are the DNS and e-mail servers. They become prime targets because their IP addresses are well advertised and because, if attackers can gain control of these systems, they can probably control your whole site fairly soon thereafter.

Example 7.2 A bad hosts filter.

After you detect an attack from a host, you will want to add it to a bad hosts list. Working through the SANS Institute, you may be able to find organizations that are in similar, or complementary lines of work as yours, and with whom you can share attack information.

Here is a filter that can help you do this; of course, you have to add your own bad hosts.

ip and (host or host or

net 192.168.4 or net 176.16.41 or net 10.1)


Action 7.3 Reduce the data volume

We have found there is a practical limit to the analysis that can be performed with filters. Also, tcpdump collects a LOT of data. In order to perform more advanced analysis, it is necessary to reduce the data. We do this in two steps.

First, we use filters to separate the data by protocol.

Then, we further reduce the data to a common format that we use to store information from any sensor.

The reduced data is comma-delimited, so it can easily be imported into databases or spreadsheets, and contains the following fields: date, time, src, srcport, dst, dstport, protocol. We refer to this format as bcp, since it is Bulk CoPied from all analysis stations to a single large database for storage and historical analysis.

Our first step is to convert from hourly to daily files. We do this because, when comparing historical information, the overhead of opening and closing hourly files creates delays. Here's how we do the conversion and reduction.

Use tcpslice to concatenate hourly files into daily files. We can put a whole day's worth of files together using tcpslice. From /LOG/date, the directory holding the day's data, we could type:

$ tcpslice *.9802* > tcpdump.980202

tcpslice concatenates all the hourly files into tcpdump.980202, the daily file for February 2, 1998.

Next, we extract the various protocols we are interested in as part of the data reduction. At our site the primary protocols in use are TCP, UDP, ICMP, IGMP, and IGRP. To extract UDP into an ASCII file for further processing:

$ tcpdump -r tcpdump.980202 udp > udp.980202

We repeat this operation for each protocol we are interested in keeping for long-term analysis. We run some cursory tests on the ICMP data, looking for routing updates that come from external addresses, but we do not archive IGRP or IGMP. We do reduce and archive UDP, TCP, and ICMP.

Then we run a simple test on the data; from time to time we find some interesting IP traffic that is not one of these protocols.

$ tcpdump -r tcpdump.980202 "not udp and not tcp and not icmp and not igrp and not igmp" > other.980202

If other.980202 is not an empty file, it would pay to invest some time tracking down the source and destination addresses to sort out what is going on.
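A small guard in the nightly processing can flag such files automatically; this helper is our own illustration, not part of cid.

```shell
# Print a warning for any non-empty "other" protocol files.
# The function name is our own invention.
flag_other() {
    for f in "$@"; do
        if [ -s "$f" ]; then    # -s: file exists and has size > 0
            echo "non-empty: $f -- worth investigating"
        fi
    done
}
```

For example: flag_other other.980202 other.980203.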

While it is important to know how to do all these things by hand, it does get monotonous, so we have included a script to take all the hourly files and produce daily files from them.

Finally, in order to compare results over time, we convert the protocol files to bcp format with a conversion script. Another advantage of this format is that it allows us to integrate data from different kinds of sensors.