
So far, we've looked at lots of ways to prevent security problems. The remainder of this chapter will look at ways to detect and investigate security breaches. We'll consider all of the various monitoring activities that you might want to use as they would be performed manually and in isolation from one another. There are both vendor-supplied and free tools to simplify and automate the process, and you may very well choose to use one of them. However, knowing what to look for and how to find it will help you to evaluate these tools and use them more effectively. The most sophisticated system watchdog package is ultimately only as good as the person reading, interpreting, and acting on the information it produces.

The fundamental prerequisite for effective system monitoring is knowing what normal is, that is, knowing how things ought to be in terms of:

  • General system activity levels and how they change over the course of a day and a week.

  • Normal activities for all the various users on the system.

  • The structure, attributes, and contents of the filesystem itself, key system directories, and important files.

  • The proper formats and settings within important system configuration files.

Some of these things can be determined from the current system configuration (and possibly by comparing it to a newly installed system). Others are a matter of familiarity and experience and must be acquired over time.

7.8.1 Password File Issues

It is important to examine the password file regularly for potential account-level security problems, as well as the shadow password file when applicable. In particular, these files should be examined for:

  • Accounts without passwords.

  • UIDs of 0 for accounts other than root (any such account is also a superuser account).

  • GIDs of 0 for accounts other than root. Generally, users don't have group 0 as their primary group.

  • Accounts added or deleted without your knowledge.

  • Other types of invalid or improperly formatted entries.

  • The password and shadow files' own ownership and permissions.

On some systems, the pwck command performs some simple syntax checking on the password file and can identify some security problems with it (AIX provides the very similar pwdck command to check its several user account database files). pwck reports on invalid usernames (including null ones), UIDs, and GIDs, null or nonexistent home directories, invalid shells, and entries with the wrong number of fields (often indicating extra or missing colons and other typos). However, it won't find a lot of other, more serious security problems. You'll need to check for those periodically in some other manner. (The grpck command performs similar simple syntax checking for the /etc/group file.)
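If your system provides these checkers, it's easy to run them automatically via cron. Here is a minimal sketch of such a script; the checker locations vary between systems, some versions of pwck prompt for corrections (use a read-only option such as -r where available), and your mail command may not support -s:

#!/bin/sh
# ck_acct - Run the account file syntax checkers and mail any
# complaints to root. Paths and mail syntax are placeholders.
PATH=/usr/sbin:/usr/bin:/bin; export PATH
{ pwck; grpck; } > /tmp/ckacct.$$ 2>&1
if [ -s /tmp/ckacct.$$ ]; then
    mail -s "account file check" root < /tmp/ckacct.$$
fi
rm -f /tmp/ckacct.$$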

You can find accounts without passwords with a simple grep command:

# grep '^[^:]*::' /etc/passwd
root::NqI27UZyZoq3.:0:0:SuperUser:/:/bin/csh
demo::7:17:Demo User:/home/demo:/bin/sh
::0:0:::

The grep command looks for two consecutive colons that are the first colon characters in the line. This command found three such entries. At first glance, the entry for root appears to have a password, but the extra colon creates a user root with a nonsense UID and no password; this is probably a typo. The second line is the entry for a predefined account used for demonstration purposes, probably present in the password file as delivered with the system. The third line is one I've found more than once and is a significant security breach. It creates an account with a null username and no password with UID and GID 0: a superuser account. While the login prompt will not accept a null username, some versions of su will:

$ su "" # No password prompt!

In the password file examined with grep, the extra colon should be removed from the root entry, the demo account should be assigned a password (or disabled with an asterisk in the password field in /etc/passwd or perhaps just deleted), and the null username entry should be removed.
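For example, the repaired demo entry might look like this; the asterisk can never match any encrypted password, so logins to the account are disabled:

demo:*:7:17:Demo User:/home/demo:/bin/sh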

Accounts with UID or GID 0 can also be located with grep:

# grep ':00*:' /etc/passwd
root:NqI27UZyZoq3.:0:0:SuperUser:/:/bin/csh
harvey:xyNjgMPtdlx*Q:145:0:Thomas G. Harvey:/home/harvey:/bin/ksh
badguy:mksU/.m7hwkOa:0:203:Bad Guy:/home/bg:/bin/sh
larooti:lso9/.7sJUhhs:000:203:George Larooti:/home/harvey:/bin/csh

The final line of output indicates why you should resist using a command like this:

# grep ':0:' /etc/passwd | grep -v root      This won't catch everything.

Whoever added user larooti has been tricky enough to pad the UID with extra zeros and to include the string "root" in the GECOS field, so that a careless grep pattern or a grep -v root filter will miss the entry. That person has also attempted to throw suspicion on user harvey by specifying harvey's home directory; doing so also lets the entry pass some password file checking programs (including pwck). It seems unlikely, although not impossible, that user harvey is actually responsible for the entry; harvey could be very devious (or monumentally stupid, which can look very similar). I wouldn't consider the home directory clear evidence either way.

You can find new accounts by scanning the password file manually or by comparing it to a saved version you've squirreled away in an obscure location. The latter is the best way to find missing accounts, because it's easier to notice something new than that something is missing. Here is a sample command:

# diff /etc/passwd /usr/local/bin/old/opg
36c36,37
< chavez:9Sl.sd/i7snso:190:20:Rachel Chavez:/home/chavez:/bin/csh
---
> claire:dgJ6GLVsmOtmI:507:302:Theresa Claire:/home/claire:/bin/csh
> chavez:9So9sd/i7snso:190:20:Rachel Chavez:/home/chavez:/bin/csh
38d38
< wang:l9jsTHn7Hg./a:308:302:Rich Wang:/home/wang:/bin/sh

The copy of the password file is stored in the directory /usr/local/bin/old and is named opg. It's a good idea to choose a relatively unconventional location and misleading names for security-related data files. For example, if you store the copy of the password file in /etc[19] or /var/adm (the standard administrative directory) and name it passwd.copy, it won't be hard for an enterprising user to find and alter it when changing the real file. If your copy isn't secure, comparing against it is pointless. The example location given above is also a poor choice, but it's merely a placeholder; you'll know what good choices are on your system. You might also want to consider keeping the comparison copy encrypted (assuming you have access to an effective encryption program) or storing it on removable media, which can be kept offline except when you actually need them.

[19] There may be copies of the password file in /etc, but these are for backup rather than security purposes.

The sample output displayed previously indicates that user wang has been added, user claire has been deleted, and the entry for user chavez has changed since the last time the copy was updated (in this case, her password changed). This command represents the simplest way of comparing the two files (we'll look at more complex ones soon).
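The comparison is a natural candidate for automation via cron. Here is a minimal sketch; the saved-copy location follows the example above, and the mail command's syntax is a placeholder:

#!/bin/sh
# ck_passwd - Compare the live password file against the hidden copy
# and mail any differences to root.
diff /etc/passwd /usr/local/bin/old/opg > /tmp/pwdiff.$$ 2>&1
if [ -s /tmp/pwdiff.$$ ]; then
    mail -s "password file differences" root < /tmp/pwdiff.$$
fi
rm -f /tmp/pwdiff.$$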

Finally, you should regularly check the ownership and permissions of the password file (and any shadow password file in use). In most cases, the password file should be owned by root and a system administrative group and be readable by everyone but writable only by the owner; the shadow password file should not be readable by anyone but root. Any backup copies of either file should have the same ownership and permissions:

$ cd /etc; ls -l *passwd* *shadow*
-rw-r--r--  1 root  system  2732 Jun 23 12:43 /etc/passwd.sav
-rw-r--r--  1 root  system  2971 Jul 12 09:52 /etc/passwd
-rw-------  1 root  system  1314 Jul 12 09:55 /etc/shadow
-rw-------  1 root  system  1056 Apr 29 18:39 /etc/shadow.old
-rw-------  1 root  system  1276 Jun 23 12:54 /etc/shadow.sav
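A find command can flag deviations from the expected values automatically. The following sketch assumes the ownerships and modes shown in the preceding listing; adjust them to match your system's conventions:

# Print any password or shadow file under /etc whose protection
# or ownership has drifted from the expected values.
find /etc \( -name 'passwd*' \( ! -perm 644 -o ! -user root \) -print \) \
       -o \( -name 'shadow*' \( ! -perm 600 -o ! -user root \) -print \)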

7.8.2 Monitoring the Filesystem

Checking the contents of important configuration files such as /etc/passwd is one important monitoring activity. However, it is equally important to check the attributes of the file itself and those of the directory where it is stored. Making sure that system file and directory ownerships and protections remain correct over time is vital to ensuring continuing security. This includes:

  • Checking the ownership and protection of important system configuration files.

  • Checking the ownership and protection on important directories.

  • Verifying the integrity of important system binary files.

  • Checking for the presence or absence of certain files (for example, /etc/ftpusers and /.rhosts, respectively).

Possible ways to approach these tasks are discussed in the following subsections of this chapter. Each one introduces an increased level of caution; you'll need to decide how much monitoring is necessary on your system.

7.8.2.1 Checking file ownership and protection

Minimally, you should periodically check the ownership and permissions of important system files and directories. The latter are important because if a directory is writable, a user could substitute a new version of an important file for the real one, even if the file itself is protected (as we've seen).

Important system files that need monitoring are listed in Table 7-6 (note that filenames and locations vary somewhat between Unix versions). In general, these files are owned by root or another system user; none of them should be world-writable. You should become familiar with all of them and learn their correct ownerships and protections.

Table 7-6. Important files and directories to protect and monitor

/.cshrc, /.login, /.logout, /.kshrc, /.profile, and so on
    root account's initialization files (traditional location)

/.forward, /.mailrc
    root's mail initialization files

/.emacs, /.exrc
    root's editor initialization files

/.rhosts
    Should not exist

~, ~/.cshrc, ~/.login, ~/.profile, and so on
    User home directories and initialization files

~/.rhosts
    Probably should not exist

~/bin
    User binary directory (conventional location)

/dev/*
    Special files (the disk and memory devices are the most critical)

/etc/*
    Configuration files in /etc and its subdirectories (use find /etc -type f to find them all)

/sbin/init.d
    Boot script location on some systems

/tcb
    Enhanced security directory (HP-UX and Tru64)

/var/adm/*
    Administrative databases and scripts

/var/spool/*, /usr/spool/*
    Spooling directories

/bin, /usr/bin, /usr/ucb, /sbin, /usr/sbin
    System (and local) binaries directories

/usr/local/bin, ...
    Local binaries directory (as well as any other such locations in use)

/lib/*, /usr/lib/*
    System libraries directories; shared libraries (common code that is called at runtime by standard commands) are the most vulnerable

/usr/include
    System header (.h) files (replacing one of these can introduce altered code the next time a program is built locally)

All setuid and setgid files
    Wherever they may be

You should be familiar with the correct ownership and protection for these files (as well as any others of importance to your system). You can facilitate the task of checking them with a script that runs a command like ls -l on each one, saves the output, and compares it to a stored list of the proper ownerships and permissions. Such a script can be very simple:

#!/bin/csh
# sys_check - Perform basic filesystem security check.
umask 077
# Make sure output file is empty.
/usr/bin/cp /dev/null perm.ck
alias ck "/usr/bin/ls -l \!:* >> perm.ck"
ck /.[a-z]*
ck /dev/{,r}disk*
. . .
ck /usr/lib/lib*
/usr/bin/diff /usr/local/bin/old/pm perm.ck > perm.diff

This script is a C shell script so that it can define an alias to do the work; you could do the same thing with a Bourne shell function. The script runs the ls -l command on the desired files, saving the output in the file perm.ck. Finally, it compares the current output against a saved data file. If the files on your system change a lot, this script will produce a lot of false positives: files that look suspicious because their modification time changed but whose ownership and protection are correct. You can avoid this by making the ls command a bit more complex:[20]

ls -l files | awk '{print $1,$3,$4,$NF}' >> perm.ck

This version records only the file mode, user owner, group owner, and filename fields of the ls output, so routine modification-time changes no longer show up as differences.

[20] The corresponding alias command would look something like this (the awk field references must be protected from the C shell):

alias ck "/usr/bin/ls -l \!:* | /usr/bin/awk '{print \$1,\$3,\$4,\$NF}' >> perm.ck"

In addition to checking individual files, it is important to check the protection on all directories that store important files, making sure that they are owned by the proper user and are not world-writable. This includes the directories where Unix commands are stored, administrative directories like /var/adm and the subdirectories of /etc, and the spooling directories under /var/spool. Any other directory containing a setuid or setgid file should also be checked.
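A command like the following provides a quick check for the most serious problem; this is merely a sketch, so extend the directory list to match your system:

# List any world-writable directories among the important locations.
find /etc /var/adm /var/spool /bin /usr/bin /usr/local/bin \
     -type d -perm -0002 -print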

7.8.2.2 Looking for setuid and setgid files

The number of setuid commands on the system should be kept to a minimum. Checking the filesystem for new ones should be part of general system security monitoring. The following command will list all files that have the setuid or setgid access mode set:

# find / \( -perm -2000 -o -perm -4000 \) -type f -print

You can compare the command's output against a saved list of setuid and setgid files and thereby easily locate any changes to the system. Again, you can do a more comprehensive comparison by running ls -l on each file and comparing that output to a saved list:

# find / -type f \( -perm -2000 -o -perm -4000 \) \
    -exec ls -l {} \; | diff - /usr/local/bin/old/fs
2d1
< -rwsr-xr-x  1 root  bin  41792 Jun  7  1995 /usr/local/bin/xpostit

Any differences uncovered should be investigated right away. The file storing the expected setuid and setgid files' data can be generated initially using the same find command after you have checked all of the setuid and setgid files on the system and know them to be secure. As before, the file itself must be kept secure, and offline copies should exist. The data file and any scripts that use it should be owned by root and be protected against all group and other access. Even with these precautions, it's important that you be familiar with the files on your system, in addition to any security monitoring you perform via scripts, rather than relying solely on data files you set up a long time ago.
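For example, the baseline might be created like this, reusing the storage location from the earlier example (choose a better one in practice):

# Record the current setuid/setgid inventory; then protect the file.
find / -type f \( -perm -2000 -o -perm -4000 \) -exec ls -l {} \; \
     > /usr/local/bin/old/fs
chmod 600 /usr/local/bin/old/fs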

7.8.2.3 Checking modification dates and inode numbers

If you want to perform more careful monitoring of the system files, you should compare not only file ownership and protection, but also modification dates, inode numbers, and checksums (see the next section). For the first two items, you can use the ls command with the options -lsid for the applicable files and directories. These options display the file's inode number, size (in both blocks and bytes), owners, protection modes, modification date, and name. For example:

$ ls -lsid /etc/rc*
690 3 -rwxr-xr-x 1 root root 1325 Mar 20 12:58 /etc/rc0
691 4 -rwxr-xr-x 1 root root 1655 Mar 20 12:58 /etc/rc2
692 1 drwxr-xr-x 2 root root  272 Jul 22 07:33 /etc/rc2.d
704 2 -rwxr-xr-x 1 root root  874 Mar 20 12:58 /etc/rc3
705 1 drwxr-xr-x 2 root root   32 Mar 13 16:14 /etc/rc3.d

The -d option allows the information on directories themselves to be displayed, rather than listing their contents.

If you check this data regularly, comparing it against a previously saved file of the expected output, you will catch any changes very quickly, and it will be more difficult for someone to modify any file without detection (although, unfortunately, far from impossible; rigging file modification times is not really very hard). This method inevitably requires that you update the saved data file every time you make a change yourself, or you will have to wade through lots of false positives when examining the output. As always, it is important that the data file be kept in a secure location to prevent it from being modified.
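The mechanics are the same as for the earlier comparisons. Here is a sketch, again using the sample storage location and an arbitrary set of files:

# Save the expected attribute data while the files are known good.
ls -lsid /etc/rc* /etc/passwd /etc/group > /usr/local/bin/old/attr
# Later, compare the current state against it.
ls -lsid /etc/rc* /etc/passwd /etc/group | diff - /usr/local/bin/old/attr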

7.8.2.4 Computing checksums

Checksums are a more sophisticated method for determining whether a file's contents have changed. A checksum is a number computed from the binary bytes of the file; the number can then be used to determine whether a file's contents are correct. Checksums are most often used to check files written to disk from tape to be sure there have been no I/O errors, but they may also be used for security purposes to see whether a file's contents change over time.

For example, you can generate checksums for the system commands' executable files and save this data. Then, at a later date, you can recompute the checksums for the same files and compare the results. If they are not identical for a file, that file has changed, and it is possible that someone has substituted something else for the real command.

The cksum command computes checksums; it takes one or more filenames as its arguments and displays the checksum and size in blocks for each file:

$ cksum /bin/*
09962     4 /bin/[
05519    69 /bin/adb
...

This method is far from foolproof. For example, crackers have been known to pad a smaller file with junk characters to make its checksum match the old value. Unfortunately, cksum computes a very easy-to-simulate file signature. There are even cases of viruses remaining in memory, intercepting directory listing and checksum commands, and returning the correct information (which the virus saved before making alterations to the system).

NOTE

The GNU md5sum utility is a better checksum choice. It is part of the textutils package, and it is included with some Linux distributions. See http://www.gnu.org/manual/textutils-2.0/html_node/textutils_21.html for more information.
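For example, a baseline-and-verify cycle with md5sum might look like the following; the storage path is a placeholder, and the -c output format can vary slightly between versions:

# md5sum /bin/* > /usr/local/bin/old/md5.base             Record reference sums.
# md5sum -c /usr/local/bin/old/md5.base | grep -v 'OK$'   Show only mismatches.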

In any case, you'll need to take the following precautions when computing and comparing checksums if you suspect the system has been compromised:

  • Make sure that you have a copy of the checksum utility that you know to be secure. This means restoring the utility from original operating system distribution media or a post-installation backup you made if there is any doubt about system integrity.

  • Compare the current system state with a data file that has been stored offline, because the copy on the disk may have been altered.

  • Make the comparisons after rebooting to single-user mode.

Paranoia Is Common Sense

Sooner or later, a recalcitrant user will accuse you of being overly paranoid because she resents some restriction that reasonable security measures impose. There's not really much you can say in response except to explain again why security is important and what you are trying to protect against. In general, cries of "paranoia" are really just a sign that you are performing your job well. After all, it is your job to be at least one level more paranoid than your users think you need to be and than potential intruders hope you will be.

7.8.2.5 Run fsck occasionally

If someone succeeds in breaking into a system, modifications may also be made to a filesystem directly, usually via the fsdb utility. Running fsck occasionally, even when it is not necessary for filesystem integrity purposes, never hurts. You should also run fsck after rebooting if you think someone has succeeded in breaking into the system.

7.8.3 Automating Security Monitoring

There are a variety of tools available for automating many of the security monitoring activities we have considered so far. We'll look briefly at a few of them in this section.

7.8.3.1 Trusted computing base checking

A trusted computing base (TCB) is a system environment whose security is verifiably trustworthy and that includes the capability of ensuring its continued integrity. The TCB may be present on a computer along with other software, and users interact with the system in a trusted mode via a trusted path, which eliminates any untrusted applications and operating system components before allowing access to the TCB. Communication with the TCB is usually initiated by a specific key sequence on such systems; for example, on an AIX system, pressing the Secure Attention Key sequence (CTRL-X CTRL-R by default) accesses the TCB. These facilities are used in systems secured at B1 and higher levels, and the requirements specify that the operating system must be reinstalled in the high security mode (a TCB cannot be added to an existing system).

A full consideration of trusted computing is beyond the scope of this book. However, some of the utilities provided as part of TCB support can still be used for general filesystem monitoring even when the TCB facility is not active. Typically, these utilities compare all important system files and directories against a list of correct attributes that was created at installation time, checking file ownerships, protection modes, sizes, and checksums, and, in some cases, modification dates. TCB-checking utilities and similar programs also usually have the ability to correct any problems that they uncover.

These are the facilities provided by the Unix versions we are considering (their capabilities vary somewhat):

AIX       tcbck
HP-UX     swverify
Solaris   aset
Tru64     fverify

7.8.3.2 System integrity checking with Tripwire

The Tripwire facility, originally produced by the COAST project of Purdue University, is unquestionably among the finest free software packages in existence. The current home page is http://www.tripwire.org.

Tripwire compares the current state of important files and directories with their stored correct attributes according to criteria selected by the system administrator. It can compare all important file properties (more precisely, all inode characteristics), and it includes the ability to compute file signatures in many different ways (nine are included as of this writing). Comparing file checksums computed using two different algorithms makes it extremely difficult for a file to be altered without detection.

Tripwire uses an ASCII database to store file attributes to be used for future comparisons. This database is created the first time you run the tripwire command (by including the -init option). Ideally, you should use this option after reinstalling the operating system from the original media to eliminate the possibility that the system is already corrupt. tripwire creates database entries and makes comparisons to them based on the instructions in its configuration file, tw.config by default.

Here is an excerpt from a configuration file:

# Pathname          Attributes to check
/usr/bin            +ugpinsm12-a
/usr/local/bin      R
/usr/lib            R-2
...
/usr/bin/at         R+8-2

The first entry indicates that the user and group owners, protection, inode number, number of links, inode creation time, and file modification times as well as file signatures 1 and 2 (which correspond to the MD5 and Snefru algorithms) will be checked for the files in /usr/bin, and that any changes in file access times will be ignored. The second entry performs the same checks for the files in /usr/local/bin, because R is a built-in synonym for the string specified for /usr/bin (it is also the default). For the files in /usr/lib, all checks except file signature 2 are performed. The final entry refers to a file rather than a directory, and it substitutes file signature 8 (Haval) for signature 2 for the at command executable (overriding the specification it would otherwise have from the first sample entry).

Thus, it is very easy to perform different tests on different parts of the filesystem depending upon their unique security features. The configuration file syntax also includes C preprocessor-style directives to allow a single configuration file to be used on multiple systems.

Once the Tripwire database is created, it is essential to protect it from tampering and unauthorized viewing. As the Tripwire documentation repeatedly states, the best way to do so is to store it on a removable, write-protectable medium like a floppy disk; the locked disk with the database will be placed in the drive only when it is time to run Tripwire. In fact, in most cases, both the database and the executable fit easily onto a single floppy disk. In any case, you will want to make a secure backup copy of both tripwire and its related siggen utility after building it, so that the online copies can be easily restored in case of trouble.

When you create the initial database for a system, take the time to generate all of the file signatures you might conceivably want. The set you select should include two difficult-to-forge signatures; you may also want to include one quickly computed, lower-quality signature. You don't have to use as time-consuming a procedure on a regular basis (for example, you might use one quick and one good signature for routine checks), but the data will be available should you ever need it.

Here is part of a report produced by running tripwire:

changed: -rwsrwsr-x root    40120 Apr 28 14:32:54 2002 /usr/bin/at
deleted: -rwsr-sr-x root   149848 Feb 17 12:09:22 2002 /usr/local/bin/chost
added:   -rwsr-xr-x root    10056 Apr 28 17:32:01 2002 /usr/local/bin/cnet2
changed: -rwsr-xr-x root   155160 Apr 28 15:56:37 2002 /usr/local/bin/cpeople
...
### Attr          Observed (what it is)          Expected (what it should be)
### ============ ============================== ==============================
/usr/bin/at
      st_mode:    104775                         104755
      st_gid:     302                            0
      st_ctime:   Fri Feb 17 12:09:13 2002       Fri Apr 28 14:32:54 2002
/usr/local/bin/cpeople
      st_size:    155160                         439400
      st_mtime:   Fri Feb 17 12:10:47 2002       Fri Apr 28 15:56:37 2002
      md5 (sig1): 1Th46QB8YvkFTfiGzhhLsG         2MIGPzGWLxt6aEL.GXrbbM

On this system, the chost command executable has been deleted, and a file named cnet2 has been added (both in /usr/local/bin). Two other files on the system have been changed. The at command has had its group owner changed to group 302, and /usr/bin/at is now group-writable. The cpeople executable has been replaced: it is a different size and has a different signature and modification time.

More Administrative Virtues

Security monitoring primarily requires two of the seven administrative virtues: attention to detail and adherence to routine. They are related, of course, and mutually reinforce one another. Both also depend on that metavirtue, foresight, to keep you on the right path during those times when it seems like too much trouble.

  • Attention to detail. Many large security problems display only tiny symptoms, which the inattentive system administrator will miss, but you (and your tools and scripts) will not.

  • Adherence to routine. The night you decide to forego security monitoring so that some other job can run overnight has a much better than average chance of being the night the crackers find your system.

7.8.3.3 Vulnerability scanning

The next step up in monitoring intensity is to actively search for known problems and vulnerabilities within the system or network. In this section, we'll look at a couple of the packages designed to do this (as well as mentioning several more).

7.8.3.3.1 General system security monitoring via COPS

The free Computer Oracle and Password System (COPS) can automate a variety of security monitoring activities for a single system. Its capabilities overlap somewhat with Crack and Tripwire, but it offers many unique ones as well. It was written by Dan Farmer, and its home page is http://dan.drydog.com/cops/software/.

These are COPS' most important capabilities:

  • Checks root's environment by examining the account's initialization files in the root directory for umask and path definition commands (and then checking path components for writable directories and binaries), as well as ownership and protections of the files themselves. Also checks for non-root entries in any /.rhosts file.

    COPS also performs similar checks of the user environment of each account defined in the password file.

  • Checks the permissions of the special files corresponding to entries in the filesystem configuration file, /etc/fstab.

  • Checks whether any commands or files referenced in the system boot scripts are writable.

  • Checks whether any commands or files mentioned in crontab entries are writable.

  • Checks password file entries for syntax errors, duplicate UIDs, non-root users with UID 0, and the like. Performs a similar check of the group file.

  • Checks the system's anonymous FTP setup (if applicable), as well as the security of the tftp facility and some other facilities.

  • Checks the dates of applicable system command binaries against ones noted in CERT advisories to determine whether known vulnerabilities still exist.

  • Runs the Kuang program, an expert system that tries to determine if your system can be compromised by its current file and directory ownerships and permissions (see the upcoming example output). It attempts to find indirect routes to root access like those we considered earlier in this chapter.

  • The COPS facility also has the (optional) ability to check the system for new setuid and setgid files and to compute checksums for files and compare them to stored values.

Both the C/shell-script version and the Perl version are initiated via the cops script. You can configure the first version by editing this script as well as the makefile before building the COPS binaries. You configure the Perl version, which resides in the perl subdirectory of the main COPS directory, by editing the cops script and its configuration file, cops.cf.

The following output is excerpted from a COPS report. The lines beginning with asterisks denote the script or program within the COPS facility that produced the subsequent output section (use -v to produce this verbose output):

**** dev.chk ****               Checks device files for local file systems.
Warning!  /dev/sonycd_31a is _World_ readable!
**** rc.chk ****                Checks boot scripts' contents.
Warning!  File /etc/mice (inside /etc/rc.local) is _World_ writable (*)!
**** passwd.chk ****            Checks password file.
Warning!  Passwd file, line 2, user install has uid == 0 and is not root
          install:x:0:0:Installation program:/:/INSTALL/install
Warning!  Passwd file, line 8, invalid home directory:
          admin:x:10:10:basic admin::
**** user.chk ****              Checks user initialization files.
Warning!  /home/chavez/.cshrc is _World_ writable!
**** kuang ****                 Searches for system vulnerabilities.
Success! grant uid -1 replace /home/chavez/.cshrc grant uid 190
         grant gid 0 replace /etc/passwd grant uid 0

The final section of output from Kuang requires a bit of explanation. The output here describes chains of actions that will result in obtaining root access based on current system permissions. The item here notes that user nobody (meaning, in this case, anybody at all who wants to) can replace the .cshrc file in user chavez's home directory, because it is world-writable. Commands placed in that file will run as user 190 (chavez) the next time she logs in, and chavez is a member of group 0 (the system group). Group write access to the password file means that commands in the .cshrc file can replace it, at which point root access can be obtained.

The example output also illustrates that COPS can produce some false positives. For example, the fact that /dev/sonycd_31a is world-readable is not a problem because the device is used to access the system's CD-ROM drive. The bottom line is that it still takes a human to make sense of the results, however automated obtaining them may be.

7.8.3.4 Scanning for network vulnerabilities

There are a variety of tools now available for scanning systems for network-based vulnerabilities that might offer an opening to potential intruders. One of the best is the Security Administrator's Integrated Network Tool (Saint), also written by Dan Farmer (see http://www.wwdsi.com/saint/). It is based on Dan's earlier, now infamous, Satan[21] tool. It is designed to probe a network for a set of known vulnerabilities and security holes, including the following:

[21] The Security Administrator Tool for Analyzing Networks.

  • NFS vulnerabilities: exporting filesystems read-write to the world, accepting requests from user (unprivileged) programs, NFS-related portmapper security holes.

  • Whether the NIS password file can be retrieved.

  • ftp and tftp problems, including whether the ftp home directory is writable and whether tftp has access to parts of the filesystem that it should not.

  • A + entry in /etc/hosts.equiv, granting access to any user with the same name as a non-root local account on any accessible system.

  • The presence of an unprotected modem on the system (which could be used by an intruder for transport to other systems of interest).

  • Whether X server access control is enabled.

  • Whether the rexd facility is enabled (it is so insecure that it should never be used).

  • Whether any versions of software with reported vulnerabilities are present. (Saint itself is updated as new security vulnerabilities are discovered.)

  • Whether any of the SANS top 20 vulnerabilities is present. See http://www.sans.org/top20.htm for the current list (scroll past the very long self-promotional section and you'll find the list).

Saint works by allowing you to select a system or subnetwork for scanning, probing the systems you have designated at one of three levels of enthusiasm, and then reporting its findings back to you. Saint is different from most other security monitoring facilities in that it looks for vulnerabilities on a system from the outside rather than the inside. (This was one of the main sources of the considerable controversy that surrounded Satan at its release, although it was not the first facility to operate in this manner.)

One excellent feature of Saint is that its documentation tells you how to fix the vulnerabilities that it finds. The add-on interfaces also contain many helpful links to articles and CERT advisories related to its probes as well as to software designed to plug some of the holes that it finds.

Figure 7-4 illustrates one of the reports that can be produced from Saint runs using the add-on reporting tool. This one shows a summary of the vulnerabilities that it found categorized by type, and the detail view of the first category is also displayed.

Figure 7-4. Saint vulnerabilities overview report

Renaud Deraison's Nessus package has similar goals to Saint. For more information about it, see http://www.nessus.org.

Security and the Media: An Unhelpful Combination

Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy.... Rogues knew a good deal about lockpicking long before locksmiths discussed it among themselves.

Rudimentary Treatise on the Construction of Locks (1853) [Quoted in Cheswick and Bellovin (1994)]

Intelligent people disagree about how much detail to include when discussing security problems. Some say never to mention anything that an intruder could use; however, it's difficult for system administrators to evaluate how vulnerable their system is without understanding how potential threats work. Given the sheer volume of security alerts, people need enough details to be both technically and emotionally able to take a problem seriously.

In my view, however, media coverage of emerging security problems is seldom helpful. Any benefit obtained from the quick spread of information is more than offset by the panic that sets in among nontechnical folks based on the incomplete, exaggerated, and often inaccurate reports. Managers all too often overreact to such media reports, especially when open source operating systems are involved. Demands to immediately remove services that are actually needed are all too common. Part of the administrator's job is to attempt to keep things in perspective, with both managers and users.

It is important to keep in mind the media's motives in these instances: capturing viewers and selling newspapers. Security concerns are not the prime motivation behind such stories, and better computer security is not among the benefits that they reap from them.

7.8.4 What to Do if You Find a Problem

If one of the security monitoring tools you use finds a problem, there are two concerns facing you: preventing further damage and correcting whatever the current problem is. How strongly to react depends to a great extent on the security requirements of your site; everyone needs to investigate every unexpected change to the system uncovered in a security check, but how quickly it has to be done and what to do in the meantime will depend on what the problem is and how much of a risk you and your site are willing to assume.

For example, suppose Tripwire finds a single change on the system: the group owner of /usr/local/bin has been changed from bin to system. Assuming you've set up an appropriate configuration file and are running Tripwire nightly, you can probably just change the group owner back and find out which system administrator made this silly mistake. At the other extreme, if the one change is a replacement of /etc/passwd, and you are doing only minimal security monitoring (checking file ownerships, modes, sizes, and modification dates), you've got a much bigger problem. You can no longer really trust any file on the system, because the data you have isn't good enough to determine which files have been altered. In such an extreme case, this is the right, if extremely painful, thing to do:

  • Disconnect the system from any unsecured network (which is pretty much any network).

  • Reboot the system immediately to single-user mode to attempt to get rid of any malignant users or processes. There are more complex strategies for handling an intrusion in progress; however, they are not recommended for the uninitiated or the fainthearted.

  • Back up any files that you cannot afford to lose (but be aware that they may already be tainted). Back up all log and accounting files to aid in future investigation of the problem.

  • You may want to keep the system down while you investigate. When you are ready to bring the system back online, reinstall the operating system from scratch (including remaking all filesystems). Restore other files manually and check them out carefully in a secure filesystem. Rebuild all executables for which you have the source code.

NOTE

If you anticipate ever taking legal action with respect to the break-in, you must save the original disks in the system unaltered. You will have to replace the hard disks to reinstall the operating system and bring the system back online.

The severity of this cure should emphasize once again the importance of formulating and implementing an effective security monitoring process.

7.8.5 Investigating System Activity

Regularly monitoring the processes running on your system is another way to minimize the likelihood of security breaches. You should do this periodically, perhaps as often as several times during the day. Very shortly, you will have a good sense of what "normal" system activity is: what programs run, how long they run, who runs them, and so on. You'll also be in a reasonably good position to notice any unusual activity: users running different programs than they usually do, processes that remain idle for long periods of time (potential Trojan horses), users logged in at unusual times or from unusual locations, and the like.

As you know, the ps command lists characteristics of system processes. You should be familiar with all of its options. Let's look at some examples of how you might use some of them. Using the BSD command format, the ww option brings the entire command run by a user into the display (this output is wrapped):

$ ps ax | egrep 'PID|harvey'
  241 co R    0:02 rm /home/harvey/newest/g04/l913.exe /home/mar
$ ps axww | egrep 'PID|harvey'
  PID TT STAT TIME COMMAND
  241 co R    0:02 rm /home/harvey/newest/g04/l913.exe
/home/harvey/newest/g04-221.chk /home/harvey/newest/g04-271.int
/home/harvey/newest/g04-231.rwf /home/harvey/newest/g04-291.d2e
/home/harvey/newest/g04-251.scr /usr/local/src/local_g04

In this case, you can see all the files that were deleted by using two w's.

The c option reveals the actual command executed, rather than the one typed in on the command line. This is occasionally useful for discovering programs run via symbolic links:

$ ps aux | egrep 'PID|smith'
USER    PID %CPU %MEM   SZ  RSS TT STAT TIME COMMAND
smith 25318  6.7  1.1 1824  544 p4 S    0:00 vi
smith 23888  0.0  1.4 2080  736 p2 I    0:02 -csh (csh)
$ ps -auxc | egrep 'PID|smith'
USER    PID %CPU %MEM   SZ  RSS TT STAT TIME COMMAND
smith 25318  6.7  1.1 1824  544 p4 S    0:00 backgammon
smith 23888  0.0  1.4 2080  736 p2 I    0:02 -csh (csh)

User smith evidently has a file named vi in his current directory, and it is actually a symbolic link to /usr/games/backgammon.

The -f option under System V can help you identify processes that have been idle for a long time:

$ ps -ef
    UID   PID  PPID  C    STIME TTY  TIME COMMAND
 chavez  2387  1123  0   Apr 22   ?  0:05 comp_h2o

This process has been around since April 22 but has accumulated very little CPU time. If today is May 5, it's time to look into it; ideally, you'd actually notice it well before then.

As these examples indicate, creative use of common commands is what's needed in a lot of cases. The more familiar you are with the commands' capabilities, the easier it will be to know what to use in the situations you encounter.

7.8.5.1 Monitoring unsuccessful login attempts

Repeated unsuccessful login attempts for any user account can indicate someone trying to break into the system. Standard Unix does not keep track of this statistic, but many Unix versions provide facilities that do so.

Under AIX, checking for lots of unsuccessful login attempts is relatively easy. The file /etc/security/user includes the keyword unsuccessful_login_count in the stanza for each user:

chavez:
        admin = false
        time_last_login = 679297672
        unsuccessful_login_count = 27
        tty_last_unsuccessful_login = pts/2
        time_last_unsuccessful_login = 680904983
        host_last_unsuccessful_login = hades

This is clearly a lot of unsuccessful login attempts. Anything above two or three is probably worth some investigation. The following command displays the username and number of unsuccessful logins when this value is greater than 3:

# egrep '^[^*].*:$|gin_coun' /etc/security/user | \
  awk '{if (NF>1 && $3>3) {print s,$0}} ; NF==1 {s=$0}'
chavez:  unsuccessful_login_count = 27

The egrep command prints lines in /etc/security/user that don't begin with an asterisk and end with a colon (the username lines) and that contain the string "gin_coun" (the unsuccessful login count lines). For each line printed by egrep, the awk command checks whether the value of the third field is greater than 3 when there is more than one field on the line (the username lines have just one field). If it is, it prints the username (saved in the variable s) and the current line.

When the user logs in, she gets a message about the number of unsuccessful login attempts, and the field in /etc/security/user is cleared. However, if you check this file periodically using the cron facility, you can catch most strings of unsuccessful login attempts before they are erased. Users should also be encouraged to report any unexpected unsuccessful login attempts that they are informed of at login time.

Tru64 also keeps track of unsuccessful login attempts in this way, storing the current number in the u_numunsuclog field in each user's protected password database file.

7.8.5.2 su log files

Virtually all Unix implementations provide some mechanism for logging all attempts to become superuser. Such logs can be very useful when trying to track down who did something untoward. Messages from su are typically written to the file /var/adm/sulog, and they look something like this:

SU 07/20 07:27 - ttyp0 chavez-root
SU 07/20 14:00 + ttyp0 chavez-root
SU 07/21 18:36 + ttyp1 harvey-chavez
SU 07/21 18:39 + ttyp1 chavez-root

This display lists all uses of the su command, not just those used to su to root, as when user harvey first su'ed to chavez and then to root. If you look only at su commands to root, you might mistakenly suspect chavez of doing something that harvey was actually responsible for. On some systems, su log messages are always entered under the real username, ignoring any intermediate su commands.
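Based on the entry format shown above, a quick way to pull just the successful su's to root out of such a file is:

# grep -- '-root' /var/adm/sulog | grep ' + '
SU 07/20 14:00 + ttyp0 chavez-root
SU 07/21 18:39 + ttyp1 chavez-root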

Here are the locations of the su log file on various systems:

AIX            /var/adm/sulog
FreeBSD        Within /var/log/messages
HP-UX          /var/adm/sulog
Linux          Within /var/log/messages
Tru64          /var/adm/sialog
Solaris        Specified by the SULOG setting in /etc/default/su
sudo facility  /var/adm/sudo.log

7.8.5.3 History on the root account

A simple way of retaining some information about what's been done as root is to give root a shell that supports a history mechanism, and in root's initialization file set the number of commands saved across login sessions to a large number. For example, the following commands cause the last 200 commands entered by root to be saved:

C shell:

set history = 200
set savehist = 200

Korn shell:

export HISTSIZE=200
export HISTFILE=/var/adm/.rh

Under the C shell, commands are saved in the file /.history for root. Under the Korn shell, commands are written to the file named in the HISTFILE environment variable ($HOME/.sh_history by default). Of course, a clever user can turn off the history feature before misbehaving with the root account, but it can also often be overlooked (especially if you don't put the command number in the prompt string). Alternatively, you can copy the history file to some secure location periodically via the cron facility.
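For example, a crontab entry like the following stashes a copy of root's Korn shell history file once a day (the paths follow the examples above and are placeholders):

0 2 * * * /usr/bin/cp /var/adm/.rh /usr/local/bin/old/rh.save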

7.8.5.4 Tracking user activities

There are other utilities you can use to determine what users have been doing on the system, sometimes enabling you to track down the cause of a security problem. These commands are listed in Table 7-7.

Table 7-7. Command summary utilities

Command    Unix versions                 Displays information about
last       All                           User login sessions
lastcomm   All                           All commands executed (by user and TTY)
acctcom    AIX, HP-UX, Solaris, Tru64    All commands executed (by user and TTY)

These commands draw their information from the system accounting files, the age of which determines the period of time that they cover. Note that accounting must be running on the system for any of them to be available (see Chapter 17).

The last command displays data for each time a user logged into the system. last optionally may be followed by a list of usernames and/or terminal names. If any arguments are specified, the report is limited to records pertaining to at least one of them (OR logic):

$ last
harvey  ttyp1  iago     Fri Sep 16 10:07   still logged in
ng      ttyp6           Fri Sep 16 10:00 - 10:03  (00:02)
harvey  ttyp1  iago     Fri Sep 16 09:57 - 10:07  (00:09)
chavez  ttyp5           Fri Sep 16 09:29   still logged in
$ last chavez
chavez  ttyp5           Fri Sep 16 09:29   still logged in
chavez  ttypc  duncan   Thu Sep 15 21:46 - 21:50  (00:04)
chavez  ttyp9           Thu Sep 15 11:53 - 18:30  (07:23)
$ last dalton console
dump    console         Wed Sep 14 17:06 - 18:56  (01:49)
dalton  ttyq4  newton   Wed Sep 14 15:58 - 16:29  (00:31)
dalton  ttypc  newton   Tue Sep 13 22:50 - 00:19  (01:28)
dalton  console         Tue Sep 13 17:30 - 17:49  (00:19)
ng      console         Tue Sep 13 08:50 - 08:53  (00:02)

last lists the username, tty, remote hostname (for remote logins), starting and ending times, and total connect time in hours for each login session. The ending time is replaced by the phrase "still logged in" for current sessions. At the end of each listing, last notes the date of its data file, usually /var/adm/wtmp, indicating the period covered by the report.

The username reboot may be used to list the times of system boots:

$ last reboot
reboot  ~  Fri Sep  9 17:36
reboot  ~  Mon Sep  5 20:04

lastcomm displays information on previously executed commands. Its default display is the following:

$ lastcomm
lpd      F     root             0.08 secs Mon Sep 19 15:06
date           harvey  ttyp7    0.02 secs Mon Sep 19 15:06
sh             smith   ttyp3    0.05 secs Mon Sep 19 15:04
calculus D     chavez  ttyq8    0.95 secs Mon Sep 19 15:09
more     X     ng      ttypf    0.17 secs Mon Sep 19 15:03
ruptime        harvey  console  0.14 secs Mon Sep 19 15:03
mail     S     root    ttyp0    0.95 secs Fri Sep 16 10:46

The display lists the command name, flags associated with the process, the username and tty associated with it, the amount of CPU time consumed by its execution, and the time the process exited. The flags may be one or more of:

S    Command was run by the superuser.
F    Command ran after a fork.
D    Command terminated with a core dump.
X    Command was terminated by a signal (often CTRL-C).

The command optionally accepts one or more image or command names, usernames, or terminal names to further limit the display. If more than one item is specified, only lines that contain all of them will be listed (Boolean AND logic). For example, the following command lists entries for user chavez executing the image calculus:

$ lastcomm chavez calculus
calculus D     chavez  ttyq8    0.95 secs Mon Sep 19 15:09
calculus       chavez  ttyp3   10.33 secs Mon Sep 19 22:32

Under System V, the acctcom command produces similar information (output is shortened):

$ acctcom
COMMAND                      START    END      CPU
NAME      USER    TTYNAME    TIME     TIME     (SECS)
calculus  chavez  ttyq8      15:52:49 16:12:23 0.95
grep      harvey  ttyq3      15:52:51 15:52:55 0.02
rm        root    tty02      15:52:55 15:55:56 0.01

acctcom's most useful options are -u and -t, which limit the display to the user or TTY specified as the option's argument (respectively), and -n pattern, which limits the display to lines containing pattern. The pattern can be a literal string or a regular expression. This option is often used to limit the display by command name. If more than one option is specified, records must match all of them to be included (AND logic). For example, the following command displays vi commands run by root:

$ acctcom -u root -n vi
COMMAND                      START    END      CPU
NAME      USER    TTYNAME    TIME     TIME     (SECS)
vi        root    tty01      10:33:12 10:37:44 0.04
vi        root    ttyp2      12:34:29 13:51:47 0.11
vi        root    ttyp5      11:43:28 11:45:38 0.08

Unfortunately, acctcom doesn't display the date in each line as lastcomm does, but you can figure it out by knowing when its data file (/var/adm/pacct) was created and watching the dates turn over in the display (records are in chronological order). If you're trying to track down a recent event, use the -b option, which displays records in reverse chronological order.

So what can you do with these commands? Suppose you find a new UID 0 account in the password file and you know the file was all right yesterday. After checking its modification time, you can use the su log file to see who became root about that time; last will tell you if root was logged in directly at that time. Assuming root wasn't directly logged in, you can then use lastcomm or acctcom to find out who ran an editor at about the right time. You may not get conclusive proof as to who made the change, but it may help you to narrow the possibilities; you can then talk to those users in person. Of course, there are trickier ways of changing the password file that will evade detection by this method; there's no substitute for limiting access to the root account to trustworthy people. This example also illustrates the importance of detecting security problems right away; if you can't accurately narrow down the time that the password file was changed, it will surely be impossible to figure out who did it.
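Here is a sketch of that sequence, assuming the password file's modification time pointed to September 19th (the grep pattern depends on the su log format shown earlier):

# ls -l /etc/passwd               When was the file changed?
# grep '09/19' /var/adm/sulog     Who became root that day?
# last root                       Was root logged in directly then?
# lastcomm root vi                Did root run an editor around that time?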

7.8.5.5 Event-auditing systems

Event-auditing systems are much more sophisticated tools for tracking system activities, and they are accordingly much more useful than the simple tools provided by standard Unix. Auditing is a required part of the U.S. government C2 and higher security levels. All of the commercial Unix versions we are considering have an auditing facility as a standard or optional component.

Auditing systems all work in basically the same way, although the details of the mechanics of setting up and administering auditing are different. Once you understand how one auditing system works, you can work with another one very easily. These are the main steps required to set up event auditing on a system:

  • Choose which events you want to keep track of. In general, auditing events are defined at the system call level. Thus, you can track file opens, closes, reads, writes, unlinks (deletions), and so on, but you can't track file edits with vi. Some systems let you define new events, but this is rarely necessary.

  • Choose which system objects (for the most part, this means individual files) you want to monitor. Not all auditing systems let you narrow the scope to specific files.

  • Group events and/or objects into classes of related items. Sometimes this step is done for you, and you have no choice as to how classes are defined.

  • Set the system default audit event (or class) list, and then indicate which events or classes should be audited for the various users on the system. On some systems, you have to do both variations of this task: designate system defaults, including a list of users to be audited, and then specify what to audit for each applicable user.

  • Decide where the audit trail data files should be located in the filesystem. Many auditing systems allow or require you to specify a list of audit logging directories (so that the next one is already waiting when one fills up).

  • Set any other audit system parameters: how large audit files can get, how often to switch to a new file, what file format to use, and so on.

  • Change the system boot scripts so that auditing is started automatically at boot time and terminated before a system shutdown.

Auditing is one case where a well-designed system administration tool is a tremendous help, due to the number of tasks that it includes and the staggering amount of data that an auditing system generates. However, it sometimes takes a bit of time to figure out the mappings between the less than intuitive descriptions of the available events and what you actually want to watch for.

Once auditing is in effect, the next step is figuring out how to generate reports from the data. This will take some time. The best way to learn how to do this is to simulate the kinds of events you want to be able to detect on an idle system: turn auditing on for all events (ensuring that the records will go to a new audit file), do something you want to be able to track (for example, make a trivial change in the password file, delete a file in /tmp, change the ownership of a file, etc.), and then turn auditing off.[22] Then look at the audit records you've just generated using the system's report facilities. This will enable you both to recognize what your target act looks like in terms of audit events and to learn the correspondence between audit events and classes and the higher-level commands that generate them. In some cases, performing different commands as different users will be helpful in sorting things out.

[22] On some systems, you need to execute a few commands to force the auditing records out to disk; ls -l a few times will usually do the trick.

7.8.6 Intruders Can Read

At various points in this chapter, I've said that intruders will go to very great lengths to cover their tracks. The most sophisticated intruders know all the ins and outs of the available types of system protection and monitoring facilities and all of their vulnerabilities. That is why it is important to have system-checking tools and their associated data files that are beyond the reach of any system intruder.

There are various ways to accomplish this:

  • Have backup copies of important utilities, preferably made at the time of their original installation. Depending on the media type, two backup copies might be called for.

  • Be cautious in keeping online data files describing the correct system configuration. Storing them on a write-protected diskette, which is accessed only as needed, is one approach (assuming that the database is small enough to fit). Again, redundant copies are probably a good idea. Making a printed copy is another way to protect such data (provided it is in ASCII form).

  • System log files from su, the syslog facility, the auditing subsystem, and so on also need restrictive permissions online and frequent backing up. Redundant copies are also a possibility here. For example, you can log syslog messages locally and to a secure remote system (see the sketch following this list), and both trails would need to be altered for a cracker to hide an action. Important log files can also be printed out on a regular basis or in real time; those ancient hardcopy system consoles had their uses.
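For example, entries like these in /etc/syslog.conf send a copy of authorization-related messages to another machine as they are logged; "loghost" is a hypothetical secure system, the local file name is a placeholder, and many syslogds require the fields to be separated by tabs:

auth.info        /var/log/authlog
auth.info        @loghost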

You'll need to be careful about storing these backup copies. Remember that threats don't always come from outsiders.
