Even with the careful assignment of rights to administrators, system security needs to be at the forefront of every administrator's mind as the system ages. A carefully built system can start out reasonably secure, but then you put it online and start installing software. After that, you might add accounts or configure the system so that it can be accessed anonymously. The following section focuses on software installations and updates that may have an impact on the security of your system.

4.2.1. Installing Software
Installing software on your OpenBSD or FreeBSD system is accomplished using packages or the ports system. Individuals who have taken on the responsibility of being a port or package maintainer try to ensure that the latest or best version of the software will build correctly on the operating system and will install according to the operating system's scheme. They do not necessarily audit the software for vulnerabilities. Installing a port is often as simple as typing make with a few command-line arguments based on your functionality requirements. Package installs are even easier, and dependencies can be installed automatically. Downloading source tarballs and configuring them yourself is certainly also possible, but more cumbersome: you run the risk of not having applied the latest patches, and you will have to install dependencies manually first.

4.2.1.1 Ports and packages
The ports system is one of the most obvious differentiators between the BSD systems and other free and commercial Unix platforms. All platforms offer "binary packages," but only the BSDs offer the flexibility of ports. From a security perspective, there are few strong reasons for choosing one paradigm over the other. Some argue that it is easier to verify file signatures for one precompiled package than for several .tgz files used by a port.
Most administrators who are diligent about verifying file integrity will go no farther than checking that the signature matches the one provided by the same site from which they obtained the package. As it turns out, this trivial check is conducted by the ports system every time a file is downloaded. Few administrators take the time to check the signature of a package at all, much less cross-reference it with the site that originally provided the package. In an ideal world, administrators would cross-reference signatures with several mirror sites and the main distribution site to verify file integrity. Few administrators have the inclination or the time.

The greatest advantage of a port is that it offers complete flexibility in configuring your ported applications. Packages can be compiled to support no related software, some related software, or all related software, and you may not always find the exact combination that you seek. Ports, on the other hand, offer options for linking with specific pieces of software to provide additional functionality. In FreeBSD, this is often accomplished with a small menu during the configuration of a port or the definition of some environment variables. OpenBSD allows administrators to set a FLAVOR for a port before installation. You will see examples of both throughout this book.

If the goal is to have compiled binaries, why not just install precompiled software and be done with it? This is, in fact, the main argument against using the ports system. Ports require more system resources than packages: not only must source code be downloaded and extracted, it must also be compiled and linked to produce binaries, which are finally installed. In many cases, this proves to be a compelling argument, but when flexibility is needed, ports are often the answer.
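As a brief illustration of each mechanism (the port name and option names here are placeholders, not taken from this book):

```shell
# FreeBSD: define knobs on the command line (or answer the options menu)
% cd /usr/ports/mail/someport && make -DWITH_TLS install clean

# OpenBSD: select a FLAVOR before building
% cd /usr/ports/mail/someport && env FLAVOR="sasl" make install
```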
Most of the examples in this book will describe the ports style of installation, as the package may be either not available or trivial to install. Nevertheless, there are two things to watch out for when working with the ports system.

4.2.1.2 Ports ownership
The ports hierarchy usually lives in /usr/ports. Because only root can write to /usr, administrators often install the ports hierarchy from CD or via cvs as root. Unfortunately, this means that whenever the administrator needs to build a package, she must do so as root (via sudo, for instance). This is not a safe practice. Small errors in Makefiles can result in very interesting behavior during a make. Malicious Makefiles have also been known to exist. This presents a valuable opportunity for the separation of responsibilities. Before updating your ports tree, ensure /usr/ports is writable by someone other than root. Make this directory group-writable if a group of people install software, or change the ownership to the user responsible for installing software.
You may now update your ports tree and build software as an ordinary user. When your make or make install needs to do something as root, you will be prompted for the root password. To adjust this behavior somewhat, set SU_CMD=sudo in the file /etc/make.conf. Now while installing ports, sudo will be used instead of su.
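As a sketch, the ownership change and make.conf setting might look like this (the group name portmgrs is an assumption; any group of trusted installers will do):

```shell
# Let a hypothetical portmgrs group update and build in the ports tree
% sudo chgrp -R portmgrs /usr/ports
% sudo chmod -R g+w /usr/ports

# Use sudo rather than su when a port build needs root privileges
% echo 'SU_CMD=sudo' | sudo tee -a /etc/make.conf
```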
FreeBSD administrators who use the portupgrade utility to manage ports will want to provide the -s flag, which makes portupgrade use sudo when it needs to perform actions as root. OpenBSD administrators should set SUDO=sudo in /etc/mk.conf. Makefiles know when certain commands need to be run by root and will automatically run these commands as arguments to the program specified by $SUDO.

4.2.1.3 Ports and base conflicts
In FreeBSD, some software in the ports system has already been installed with the base system. Prime examples are BIND, ssh, and various shells. The version in ports is often more recent than the version in the base distribution, and you may decide that you want to overwrite the base version. Newer is not always better, however. The version included as part of the base distribution is likely older, but it will have all relevant security patches applied and will have undergone longer and more widespread scrutiny. The version in ports will include functionality that has probably not yet been extensively tested. Use the version from ports when you need additional functionality, but stick with the base for reliability and security.

If you do install the version from ports, ensure that it either completely overwrites the base installation or that you manually eradicate all traces of the base version to avoid confusion. The method will vary based on the package. The Makefile for BIND9 on FreeBSD systems understands a PORT_REPLACES_BASE_BIND9 flag, which will overwrite the base install for you (this is described in detail in Chapter 5). The Makefile for the FreeBSD openssh-portable port looks for an OPENSSH_OVERWRITE_BASE flag, which does much the same thing. Other ports may require that you manually search for installed binaries, libraries, and documents and remove them.

OpenBSD includes applications such as Apache, BIND, OpenSSH, and sudo in the base distribution and does not provide a means to track this software through ports. After all, the installed applications have gone through rigorous security review. If you want, for instance, to use Apache Version 2 or a different version of BIND, you must fetch, compile, and install the package manually. Otherwise, updates to software within the OpenBSD base distribution may be installed by tracking the stable branch as described later in this chapter.

4.2.1.4 Multiple versions installed (FreeBSD only)
If you choose to manage your installed software using ports instead of sticking with the base system, you may run into version problems. Let's say you installed Version 1.0 of port foo. After installation, you modified some of the files that were installed with the port in /usr/local/etc and used foo for several months. When you learn of a security vulnerability in foo, you decide to upgrade to Version 1.1, but instead of uninstalling the old version first, you install v1.1 on top of the old version. The package database now lists two versions of foo installed, but that is not really the case. The installation of v1.1 does not clobber your configuration files in /usr/local/etc, because they were modified since the install of v1.0, but it does replace binaries, libraries, shared/default configuration files, and so on, provided they were not modified since the installation of v1.0. So far, so good. The new version of the port is in fact properly installed and may be used, though you might have had to update the configuration files.

You may choose at some point to uninstall foo v1.0. All installed files that match the MD5 checksums of the files distributed with v1.0 will be removed. Any shared/default configuration files that were identical in Version 1.1 will also be removed, resulting in a broken foo v1.1. You will need to reinstall v1.1 to replace these files. The same kind of situation may arise if foo v1.0 depended on libbar v2.0 but v1.1 of foo depended on libbar v2.1. While uninstalling foo v1.0 before installing the new version would avoid problems down the road for that port, libbar may be in trouble. As you can see, the ports system's tracking of dependencies is handy, but it only goes so far.
To avoid these situations, ensure you uninstall the old version of a port before installing the new one, or, better yet, use the portupgrade port to manage upgrades of software installed from the ports tree. This handy utility makes these dependency problems moot and will save you time and headaches when upgrading ports. portupgrade is well documented in its manpage and should be considered mandatory for any system with more than a few ports installed.
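In practice (foo is a placeholder port name), portupgrade usage is brief:

```shell
# Upgrade a single installed port, using sudo (-s) for steps that need root
% portupgrade -s foo

# Upgrade all outdated ports on the system
% portupgrade -s -a
```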
4.2.2. Change Control
Software gets installed. Software gets upgraded. All this administration is important, but it must be audited in some way so that other administrators and managers can answer questions about what was changed, when, why, and by whom.
Detailed change control procedures are generally designed around organizational priorities and are therefore beyond the scope of this book. Nevertheless, change control is an important aspect of system administration. As you build your FreeBSD or OpenBSD systems, ensure you have a written list of requirements (both security-related and functional) to which your system must conform, and document the steps you've taken to achieve these requirements. These documents will form the basis of your configuration management doctrine; they will help you rebuild the system in the event of a system failure and transfer ownership of the system to another administrator should the need arise.

As time goes on, you will find a need to change your system configuration or upgrade installed software. If you have a test environment in which you can put these changes into effect, so much the better. Carefully document the steps you take to accomplish these upgrades and configuration changes. When you're done, you will be able to test your system to ensure it continues to meet the requirements you have already documented. Should problems arise, you will likely be able to quickly isolate the change that gave rise to them. Although describing complete change control procedures is out of scope, FreeBSD and OpenBSD do provide tools to help administrators carry out change control policies on system configuration files.

4.2.3. Tracking Changes
FreeBSD and OpenBSD are large software projects with developers scattered around the world. Building an operating system without keeping a close eye on changes is impossible. From a user perspective, we see software version numbers that continually increase, but in the background, developers are regularly "checking out" files from some development repository, modifying them, and checking them back in. All of these files also have version numbers, which continually increment as they are modified. For example, examine the following snippet from /etc/rc.conf on an OpenBSD system:

# $OpenBSD: rc.conf,v 1.95 2004/03/05 23:54:47 henning Exp $
This string indicates that this file's version number is 1.95 and that it was last modified late on the fifth of March, 2004, by user henning. Both the FreeBSD and OpenBSD development teams have chosen to use the Concurrent Versions System (CVS) to manage file versions and ensure changes are closely tracked. CVS uses the basic functionality of the Revision Control System (RCS) to track changes to individual files and adds functionality to manage collections of files locally or over a network.

This may seem a little far afield for system administration, but tracking changes is as important to developers as it is to system administrators. Imagine if every configuration file you touched were managed in this same way: you could know what changes were made to any given file, by whom, and when. You would also be able to get a log of comments for all changes as entered by those who made the modifications. Best of all, you could trivially roll back to a previous configuration file without having to pull data off of a tape. In cases where multiple modifications are made in a day, that kind of information will likely not be found on a tape at all. As it turns out, setting up a CVS repository is fairly straightforward.
Before creating your repository, you should create a CVS administrative user and corresponding primary group, which will own the files in the repository on some tightly secured central administration host that has very limited shell access. We'll call both the user and group admincvs. Ensure this account is locked. The home directory can be set to /nonexistent (this is a service account, not meant for users), and shell can be /sbin/nologin. Once this is done, initialize the repository as shown in Example 4-4. This example assumes the user under which you are operating can run the commands listed via sudo.
Example 4-4. Initializing a CVS repository
% sudo mkdir /path/to/repository
% sudo chmod g+w /path/to/repository
% sudo chown admincvs:admincvs /path/to/repository
% sudo -u admincvs /usr/bin/cvs -d /path/to/repository init
% sudo chmod -R o-wrx /path/to/repository
At this point, you must configure your CVSROOT. This environment variable lets the CVS program know where the repository is. If you will be working with a CVS repository on the local system, you may set CVSROOT to the full path to that directory. Otherwise, set your CVSROOT to username@hostname:/path/to/repository. If you choose to access the repository from a remote FreeBSD or OpenBSD system, your cvs client will attempt to contact the server using ssh. Thus, CVS may cause ssh to ask for your password or passphrase, or just use your Kerberos ticket, depending on how you have ssh configured.

Whether the repository is local or remote, your access will map to some account on the target system. In order to be able to check items in and out of CVS, you (and everyone else who needs to use this CVS repository) must be a member of the admincvs group. If you have not already done so, add yourself to this group. You are then ready to perform your first checkout of the repository, as shown in Example 4-5.

Example 4-5. First checkout of a CVS repository
% mkdir local_repos_copy && cd local_repos_copy
% cvs checkout .
cvs server: Updating .
cvs server: Updating CVSROOT
U CVSROOT/checkoutlist
U CVSROOT/commitinfo
U CVSROOT/config
U CVSROOT/cvswrappers
U CVSROOT/editinfo
U CVSROOT/loginfo
U CVSROOT/modules
U CVSROOT/notify
U CVSROOT/rcsinfo
U CVSROOT/taginfo
U CVSROOT/verifymsg

Finally, you're ready to add projects into the repository. Simply make any directories you would like under your local copy of the repository (local_repos_copy in our example) and add them using cvs add directory_name. Files may be created within these directories as needed and added to the repository via the same cvs add mechanism. In order for files to actually be copied into the repository, initially and subsequently whenever you make modifications, you will need to issue a cvs commit filename. If you have made widespread modifications, you may simply run cvs commit from a higher-level directory, and all modified files under that directory will be found and committed en masse.

Once your CVS repository is created, you are left with two problems: getting your configuration files into the repository in an organized way, and getting checked-out copies onto the systems that need them.
Unfortunately, both of these topics are beyond the scope of this book. We can provide a few tips, however.
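The checkout-add-commit workflow described above might look like this in practice (the directory and file names are illustrative; Bourne shell users would use export rather than setenv):

```shell
% setenv CVSROOT /path/to/repository
% cd local_repos_copy
% mkdir www1.mexicanfood.net
% cvs add www1.mexicanfood.net
% cp /etc/rc.conf www1.mexicanfood.net/
% cvs add www1.mexicanfood.net/rc.conf
% cvs commit -m "initial checkin of rc.conf for www1"
```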
If security requirements in your organization prevent you from using CVS in this way to track changes to documents or copy them to target systems, you may also opt to track changes directly on the system. You could create CVS repositories on every system, perhaps in some consistent location, precluding the need for configuration file transfer. You may also use RCS, a far less fully featured revision control system, which merely tracks changes to a given file in ./RCS (RCS creates subdirectories in every directory that contains RCS-controlled files). If you choose this route, you may want to evaluate tools like rcsedit and rcs.mgr, which turn up quickly in a web search.

After you have solved these problems, you will be in a much better position to handle changes to system configuration than you were before. You will then be better prepared to turn your attention to more significant system changes like patching and upgrading.

4.2.4. Data Recovery
Data backup and recovery typically serves several purposes: recovering from catastrophic failures, retrieving files that were accidentally deleted or damaged, and preserving a record of data as it existed at some point in the past.
FreeBSD and OpenBSD administrators typically turn to one of two pieces of open source software for performing data backups: dump(8) or the Advanced Maryland Automatic Network Disk Archiver (Amanda). For the most basic jobs, dump is probably adequate; it gives you the ability to record a complete snapshot of a filesystem at a given point in time. Amanda is largely an automation suite built on top of tools like dump and tar(1). If you need a complex tape rotation or want to automate the use of a multi-tape library, Amanda can save you a lot of work.

When it comes time to read data off your backup tapes, the tools of the trade are restore(8) and tar. Of course, tar is its own complement, as it supports both creation of tape archives with -c and extraction with -x. The restore program is the complement to dump, and it reads the data format that dump writes. Amanda uses dump, so restore will be the tool you use to retrieve data from tapes whether you use dump directly or use Amanda.

4.2.4.1 Data completeness
If you want to be able to restore your complete system after a hard drive crash, it is critical that you use dump to make your backup. Other tools like tar(1) and cpio(1) will fail to capture critical filesystem information that you will want when you restore. Although they both capture symbolic links and can work with device files, their support is problematic in some corner cases. For example, for compatibility across platforms, tar's datafile format uses some fixed-size fields. FreeBSD uses device numbers that cannot be accommodated in tar's format. Thus, if you use tar to back up your root partition, the devices in /dev will not be stored correctly. Although it is easy to fix them during a restoration, it is a detail worth considering.

You might think that FreeBSD's use of devfs (a filesystem that automatically creates devices in /dev based on your system's hardware) means that you have few, if any, device files to back up. However, if you have followed the guidelines in this book, you have probably created jails and/or chroot environments for various mission-critical services. You will have created device files in those environments that are not automatically created by devfs and are not correctly backed up using tar. Similarly, "hard linked" files, which share a common inode (as opposed to "symbolically linked" files), are stored twice in a tar or cpio backup, instead of once as in a dump backup.

If you have a dedicated server that runs only one critical service, such as DNS, you may find complete system dumps more work than they are worth. If you have all your service-specific data backed up (e.g., the whole /var/named directory and configuration files from /etc), you might be able to recover from a disaster simply by installing fresh from the CD: you reinstall the service, restore your service-specific data, and reboot.
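For the single-service DNS host just described, the service-specific backup might be as simple as the following sketch (the archive location and the list of configuration files are assumptions):

```shell
# Archive the zone data and the handful of configuration files the service needs
% sudo tar -czf /backup/dns-config.tgz /var/named /etc/rc.conf
```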
If you plan to perform restorations this way, you will have to write much of the backup and restoration procedures yourself, although they may not be very elaborate.

4.2.4.2 Data confidentiality
Your backup data is a snapshot of all the data in your filesystem. It probably contains a variety of critical files that should not be disclosed to anyone, yet most backup files can be read by anyone who has access to the media. Unless you go out of your way to add encryption to your backup scheme (neither dump nor tar has innate support for this), your data is easily readable from a medium that has no concept of permissions or privileges. Thus, if you store your backup tapes somewhere without strict physical access control, unauthorized people may be able to walk off with all of your data.

Barring physical theft, there are still confidentiality concerns related to how you manage your backups. If you use Amanda to back up over the network, it will spool the data to a local hard drive as part of the process. Although this improves your tape drive's performance by allowing it to stream at its maximum data rate, it means all your confidential data will temporarily exist on the tape server until it gets written to tape. If the tape should jam or fail to write, this file will remain on the hard disk until it is successfully flushed to tape by an administrator. If you assign backups to a junior administrator because they are tedious (what senior administrator does not do this?), remember that the junior administrator may effectively gain read access to all the data that the backup server sees. This may not be what you want.

4.2.4.3 Data retention
If your organization does not have a data retention policy that governs the storage of backup tapes, you might want to consider establishing one before an external event forces the issue. If your organization is not involved in any sensitive activities, perhaps you do not need to worry as much. Most organizations, however, are surprised to realize how much they care about old data. If the CEO, chairman, or other leader of the organization deletes a sensitive file, she probably thinks it is gone for good. However, you know that it lives on in your backups for some amount of time, and you can retrieve it if you are compelled to.

4.2.4.4 Filesystem access
On a typical server (either OpenBSD or FreeBSD), the raw disk devices are owned by root, but the group operator has permission to read them. This allows members of the operator group to bypass the filesystem and its permissions and read raw data blocks from the disk, which is how dump is able to take a near-exact image of a disk device. If you rebuild a filesystem with newfs(8) and then restore your files, the files will be restored almost exactly, down to the inode numbers in many cases.

The operator group is designed especially for backups in this way. If you look in /dev, you will find that operator has read access to almost all significant raw data devices: floppy disks, hard drives, CD drives, RAID controller devices, backup tape drives, and so on. Furthermore, the operator user's account is locked down so that the user cannot log in. If you run backups, whether by custom scripts or by Amanda, you should use the operator user and/or group. The default Amanda configuration will do just that.
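A full dump run as operator, and the corresponding restore onto a freshly newfs'ed filesystem, might be sketched as follows (the tape device and mount point are assumptions):

```shell
# Level-0 (full) dump of the root filesystem to a tape device,
# recording the run in /etc/dumpdates (-u)
% sudo -u operator /sbin/dump -0uan -f /dev/nrst0 /

# Restore the entire dump into a new filesystem mounted at /mnt
% cd /mnt && sudo restore -rf /dev/nrst0
```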
4.2.4.5 Network access
Generally we assume that you will have a small number of servers with tape drives installed (possibly just one) and that data will traverse the network from clients to these servers. This transfer happens via either a push or a pull paradigm. Since the tape host knows how many tape drives it has and whether or not they are busy, most systems favor having the tape host pull data from data hosts.

Amanda and other methods of remotely collecting data will send the contents of your filesystems in the clear over the network. Regardless of where your backup server is in relation to the backup clients, your data may be observable while in transit. This is clearly a problem, and you should establish some means of protecting the data in transit, whether through a VPN, an SSH tunnel, or some other form of encryption.

One of the most powerful ways of restricting (and encrypting) backup connections is by using ssh. It is possible to use ssh keys that have been configured on the client side to allow connections only from the backup server, not provide a pty, and run only one command (e.g., some form of dump). This is accomplished by creating a specially crafted authorized_keys file, as shown in Example 4-6.

Example 4-6. The operator's ssh key in ~operator/.ssh/authorized_keys
from="backupserver.mexicanfood.net",no-pty,command="/sbin/dump -0uan -f - /" ssh-dss base64-ssh-key OPERATOR
If a backup client is configured in this way, the backup server needs only to ssh to the client and pipe the output as follows:

% ssh operator@backupclient | dd of=/dev/nrst0
Of course, the target command could be a script which, based on the day, would perform a different level dump. It is also possible to perform secure backups initiated by the backup client by setting the RSH environment variable to /usr/bin/ssh and subsequently running dump as follows:

% /sbin/dump -0uan -f operator@backupserver.mexicanfood.net:/dev/nrst0
If you choose to use the operator account for ssh-enabled backups, not only will you need to create a home directory for this user, you will also need to change the login shell from nologin to /usr/bin/false. Of course, other levels of protection are available, including creating a specific interface used for backup traffic, configuring a local firewall, or using intervening firewalls.
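The level-by-day dump script mentioned earlier might be sketched as follows; the schedule and the choice of levels are illustrative assumptions:

```shell
#!/bin/sh
# Choose a dump level from the day of the week: a full (level 0)
# dump on Sunday, deeper incrementals as the week progresses.
case $(date +%a) in
    Sun)         level=0 ;;
    Mon|Tue|Wed) level=1 ;;
    *)           level=2 ;;
esac
# Write the dump to stdout so the backup server can pipe it to tape
exec /sbin/dump -${level}uan -f - /
```

Pointing the command= entry in Example 4-6 at such a script would let the backup server collect a different dump level each night.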
It is also possible to use the primitive rdump command to back up data across a network. Unfortunately, this tool relies on the use of ~/.rhosts files and programs like rcmd and ruserok. There are severe security implications to using these tools, and providing reasonable security with them is more trouble than it is worth. Given the ease with which Amanda and dump can be used securely, there is little need for rdump.