
This section describes in detail a sample benchmark configuration. This configuration was actually built and used for testing in our labs at Sun, so you can use it knowing that it works. Even if your systems and configuration differ, the following specifications give you an idea of what to consider when setting up a directory server performance benchmark environment.

Note

The performance benchmark tests are not designed to verify full LDAPv3 protocol conformance.

System Hardware Details

When it comes to selecting hardware, there are many possible permutations. The following Sun systems were used in our benchmark configuration:

Benchmark Directory Server

  • System Model: Sun Fire V880 server

  • CPU Type: 900-MHz UltraSPARC III

  • Number of CPUs: 8

  • Memory: 48 Gbytes

  • Disk Array Type: Four 36-Gbyte 7200-rpm SCSI disks (non-RAID) / eight 36-Gbyte 7200-rpm disks in a Sun StorEdge T3 disk subsystem (RAID 1/0)

Benchmark Client

When running the load generation process during a benchmark, a single client system may not be sufficient, because the client itself, rather than the server, becomes the bottleneck. Running the load generation process on multiple client systems at the same time makes it possible to generate higher levels of load than can be achieved with a single client system. This also more accurately simulates a production environment that must deal with large numbers of clients.

  • System Model: Sun Fire 420R system

  • CPU Type: 400-MHz UltraSPARC II

  • Number of CPUs: 4

  • Memory: 4 Gbytes

System Software Details

  • Standard Solaris 9 OE installation, with the latest operating system patches.

  • Solaris kernel and TCP/IP parameters tuned based on the tuning recommendations in the Sun ONE Directory Server 5.x product documentation, and as recommended by the idsktune utility provided with the Sun ONE Directory Server 5.x product (see the example following this list).

  • For a non-RAID volume configuration, the Solaris OE UFS was used.

  • For a RAID 1/0 (mirroring and striping) volume configuration, the volumes were managed with Veritas software (the file system remained UFS).
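
As an illustration of this tuning step, the sketch below shows how the idsktune utility might be run and how one of the recommended TCP parameters could then be set with ndd(1M). The installation path and the parameter value are examples only; use the server root of your installation and the values reported by idsktune and the product documentation.

  # Run the tuning checker shipped with the directory server
  # (the path shown is an example; use your installation's server root).
  cd /usr/iplanet/ds5/bin/slapd/server
  ./idsktune

  # Apply one of the recommended TCP settings with ndd(1M);
  # the value 60000 (60 seconds) is illustrative only.
  ndd -set /dev/tcp tcp_time_wait_interval 60000

  # Verify the current value.
  ndd -get /dev/tcp tcp_time_wait_interval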

Storage Architecture and Configuration

As discussed in "Selecting Storage for Optimum Directory Server Performance" on page 443, the storage you choose and the way you configure it play a large part in the performance of your directory services.

Given that the Sun StorEdge T3b storage array is the mainstay of the Sun ONE Directory Server 5.2 performance testing, a large portion of this section is devoted to its configuration.

This section also describes the volume management software and how it was used with the Sun StorEdge T3b array.

The following basic elements were used in our benchmark configuration:

  • Internal SCSI disks were used on the entry-level 280R benchmark configuration for reasons of price.

  • The Sun StorEdge T3b arrays were used on all configurations, where the number required was balanced with the available processing power and I/O throughput of the host. The main driver for using Sun StorEdge T3b arrays was price and performance.

  • The Sun StorEdge T3b arrays were configured as workgroup arrays. Each array had a single eight-disk hardware RAID 5 layout with 16-Kbyte block size and a hot spare for further resiliency.

  • Each array was connected to its own controller at the host machine.

  • A software stripe was applied over two or more Sun StorEdge T3b arrays to provide plaiding (a stripe laid across the hardware RAID LUNs) for better performance (see the sketch following this list).

  • Veritas Volume Manager 3.2 for Solaris 9 OE was used for software RAID management because it is the volume manager customers are most likely to use.

  • UFS was chosen over VxFS as a file system because there are no longer any advantages in using the latter.

  • The Network Interconnect was an isolated 100 Mbit/sec switched Ethernet.
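
The following sketch shows how such a plaid might be built with Veritas Volume Manager: the hardware RAID 5 LUN presented by each array is placed in a disk group, and each volume is then striped across both LUNs. The device names, disk group name, volume names, and sizes are illustrative only, not the exact values used in the benchmark.

  # Initialize each T3b hardware RAID 5 LUN for Veritas Volume Manager use
  # (vxdisksetup typically lives in /etc/vx/bin; device names are examples).
  vxdisksetup -i c1t1d0
  vxdisksetup -i c2t1d0

  # Create a disk group containing one LUN from each array.
  vxdg init dsdg t3b01=c1t1d0 t3b02=c2t1d0

  # Create a volume striped across both LUNs (the "plaid").
  # Repeat for each of the five volumes (ds-data, ds-logs, ds-bak,
  # ds-db, ds-txnlog), adjusting the sizes as appropriate.
  vxassist -g dsdg make ds-db 20g layout=stripe ncol=2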

The following sections describe how file systems and volumes were laid out on the storage for this benchmark.

Sun ONE Directory Server 5.2 Enterprise SCSI Disk Layout

In our performance tests, SCSI disks were used only for the low-end to mid-range directory server tests. The layout was as follows:

  • Two SCSI disks were used. The first was the root disk, which held the directories /ds-data, /ds-logs, /ds-bak, and /ds-db directly under root (/); the second disk is described below. The following describes the purpose of each of the directories created for use with the directory server.

    • /ds-data Generated LDIF files.

    • /ds-logs Server logging generates a relatively small amount of write activity and is unlikely to saturate any disk. Tests we have performed to determine the optimal file system layout for the directory server have shown virtually no difference in performance between placing the logs on a file system shared with other files (such as the transaction logs) and isolating them on their own disk. The Sun ONE Directory Server software provides access, error, and audit logs. Log buffering is available only for the access log; neither the error log nor the audit log is buffered. As such, enabling audit logging can have a significant impact on update performance.

    • /ds-bak Holds backups, which consist of all of the databases, including the changelog database, and the transaction log.

    • /ds-db Multiple database support allows each database (the set of .db3 files, such as id2entry.db3 and the index files, for a given suffix or subsuffix) to be placed on its own physical disk. The load on the directory server can thus be distributed across multiple sets of database files, each of which can leverage its own disk subsystem. To prevent I/O contention for database operations, place each set of database files, including the changelog database files, on a separate disk subsystem.

    • /ds-txnlog Transaction logs are critical to directory server write performance. The Sun ONE Directory Server software is generally run with Durable Transaction capabilities enabled. When this is the case, a synchronous I/O operation occurs on the transaction log with each add, delete, or modify to the directory. This is a potential I/O bottleneck, as the operation may be blocked when the disk is busy with other directory, application, or Solaris OE operations. Placing transaction logs on an individual disk can provide an increase in write performance to the directory server, especially in high-volume environments. Previous testing has shown us that transaction logging does not incur as much I/O overhead as was previously believed. In fact, it is extremely unlikely that transaction logging will saturate the disk subsystem. However, a database checkpoint can be a very expensive process and can certainly saturate even a T3 array. As such, if only two disks are available, it is much more valuable to isolate the database than to isolate the transaction logs.

  • The second disk was partitioned with the format(1M) command to create a single partition. A UFS file system was created on it with the newfs(1M) command and mounted under /ds-txnlog (see the example following this list).

  • No volume management was used. The /ds-txnlog partition was separated from the other directories because it was anticipated that most activity would occur in this directory.
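
A minimal sketch of these two steps is shown below. The device name c1t2d0 and the vfstab entry are illustrative only; substitute the disk actually dedicated to the transaction logs.

  # Label the disk and create a single slice with format(1M)
  # (interactive; the partitioning itself is done in the format menus).
  format -d c1t2d0

  # Create a UFS file system on the slice and mount it.
  newfs /dev/rdsk/c1t2d0s0
  mkdir -p /ds-txnlog
  mount /dev/dsk/c1t2d0s0 /ds-txnlog

  # To make the mount persistent, add a line such as this to /etc/vfstab:
  # /dev/dsk/c1t2d0s0  /dev/rdsk/c1t2d0s0  /ds-txnlog  ufs  2  yes  -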

Sun ONE Directory Server 5.2 Enterprise Volume-Managed Sun StorEdge T3b Array Layout

For low-end to mid-range directory server tests, the following configuration was used:

  • Two Sun StorEdge T3b arrays were connected to separate FC-AL controllers on the host.

  • A Veritas disk group was created and the hardware RAID 5 LUNs from the arrays were added to it.

  • Five striped volumes were created.

  • UFS file systems were added to the volumes, and the following directories served as mount points: /ds-data, /ds-logs, /ds-bak, /ds-db, and /ds-txnlog (see the sketch that follows).
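
As a sketch of that last step, using the illustrative disk group name from the earlier example, the UFS file systems might be created and mounted as follows (the volume names simply mirror the mount points):

  # Create and mount a UFS file system on each Veritas volume
  # (disk group and volume names are illustrative; newfs prompts
  # for confirmation on each volume).
  for vol in ds-data ds-logs ds-bak ds-db ds-txnlog
  do
      newfs /dev/vx/rdsk/dsdg/$vol
      mkdir -p /$vol
      mount /dev/vx/dsk/dsdg/$vol /$vol
  done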

Low-End Configuration

The following diagrams depict the details of how the storage for the E280R/T3b was utilized with the low-end to mid-range servers.

Figure 9-1. Cabling of Host Arrays

Mid-Range Configuration

FIGURE 9-2 depicts the details of the logical view of the Sun StorEdge Array Volumes that were utilized with the low-end to mid-range servers.

Figure 9-2. Logical View of Volumes From the Host

A Veritas disk group was created and the hardware RAID 5 LUNs from the arrays were added to it. RAID 5 was used because it provides redundancy: the directory data is still available after a single disk failure. It achieves this by maintaining a parity stripe, computed by logically XORing the bytes of the corresponding stripes on the other disks.

In normal operation, RAID 5 write performance is lower than that of RAID 0, 1+0, and 0+1. This is because a RAID 5 volume must perform four physical I/O operations for every logical write: it reads the old data and the old parity, then writes the new data and the new parity.

FIGURE 9-3 depicts the details of the physical view of the volume blocks on the Sun StorEdge T3b that were used with the low-end to mid-range servers.

Figure 9-3. Physical View of Volume Block on the Sun StorEdge Arrays

The main point to note here is that the volumes are all created from the same two hardware RAID 5 LUNs, and no attempt is made to separate file systems onto disks according to their anticipated loads, as was done in the SCSI disk tests described above. Experience has shown that random data access over aggregated controllers and disks (that is, a stripe) is generally better than that achieved by placing file systems on discrete disks. This makes sense when you consider that not all of the file systems are busy all the time, and any spare capacity in bandwidth and I/O operations can be used for the file systems that are busy.

The Sun Fire V880 Server and Volume-Managed T3b Array Layout

The V880 tests were conducted after the E280R tests. Given that the same (Sun StorEdge T3b array) storage was required, the Sun StorEdge T3b arrays and their associated volumes and file systems were deported from the E280R and imported onto the V880 using Veritas utilities, as sketched below.
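
A minimal sketch of that migration, assuming the illustrative disk group name dsdg used earlier, might look like this:

  # On the E280R: unmount the file systems and deport the disk group.
  for vol in ds-data ds-logs ds-bak ds-db ds-txnlog
  do
      umount /$vol
  done
  vxdg deport dsdg

  # On the V880 (after recabling the arrays): import the disk group,
  # restart its volumes, and remount the file systems.
  vxdg import dsdg
  vxvol -g dsdg startall
  for vol in ds-data ds-logs ds-bak ds-db ds-txnlog
  do
      mkdir -p /$vol
      mount /dev/vx/dsk/dsdg/$vol /$vol
  done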

FIGURE 9-4 through FIGURE 9-6 show the details of how the storage for the V880/T3b was utilized with the low-end to mid-range servers.

Figure 9-4. Cabling of Host Arrays

Figure 9-5. Logical View of Volumes From the Host

Figure 9-6. Physical View of Volume Blocks on the Sun StorEdge Arrays

E6800 and Volume-Managed T3b Array Layout

The same method of plaiding is used for the E6800 tests, the only difference being that four Sun StorEdge T3b arrays are used instead of two. This means that each software volume comprises blocks from all four arrays. FIGURE 9-7 through FIGURE 9-9 show the details.

Figure 9-7. Cabling of Host Arrays

Figure 9-8. Logical View of Volumes from the Host

Figure 9-9. Physical View of Volume Blocks on the Sun StorEdge Arrays

Benchmark DIT Structure and Database Topology

The DIT for the benchmark configuration is rooted at dc=example,dc=com; the directory clients and directory server use domain-component-based naming (RFC 2247).

The immediate subentries of the root suffix are container entries for people and groups. The people branch is loaded with the test data, and all performance tests are run against this subtree.

The corresponding database topology consists of a single database back end, which holds the people branch, the root suffix, and all other branches. The LDIF sketch that follows illustrates this structure.
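
The following LDIF sketch illustrates such a DIT. The container names ou=people and ou=groups, the host name, and the bind credentials are assumptions for illustration, not necessarily the exact values used in the benchmark.

  # Contents of an example file, suffix.ldif:
  dn: dc=example,dc=com
  objectClass: top
  objectClass: domain
  dc: example

  dn: ou=people,dc=example,dc=com
  objectClass: top
  objectClass: organizationalUnit
  ou: people

  dn: ou=groups,dc=example,dc=com
  objectClass: top
  objectClass: organizationalUnit
  ou: groups

  # Load it with the ldapadd command:
  ldapadd -h dirserver.example.com -p 389 \
          -D "cn=Directory Manager" -w password -f suffix.ldif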

Directory Server Settings

The following directory server options were set:

  • Entries Each entry was 2 Kbytes in size. All entries were located in a flat directory tree rooted at dc=Entries,dc=example,dc=com. The entries used the standard inetOrgPerson object class.

  • Indexing Enabled for the searchable user attributes (for example, cn, uid, and so forth).

  • Bulkload The import cache was set to 2 Gbytes for bulk loading. The bulk load process is not I/O intensive because the cache is used heavily to delay writes to disk. As such, adding RAID is unlikely to significantly speed up imports.

  • Search It is good practice to put the entire entry database (if it fits) into system memory. The in-memory cache is then primed by using the SLAMD LDAP Prime Job, which can use multiple clients and multiple threads per client, and can also prime an attribute into the entry cache. Note that a subtree "(objectClass=*)" search is an extremely poor method for priming the server before a benchmark: it is very slow, uses only a single client and thread, and does nothing to prime the database cache. To maximize search performance, ensure that sufficient RAM is available for the entries and indexes that are accessed most frequently. The Sun ONE Directory Server 5.2 software supports a 64-bit addressable cache, which makes it possible to use terabytes of memory for caching data.

  • Modify Durability, transactions, referential integrity, and uid uniqueness options were all turned on. These options ensure that all adds, modifies, and deletes are recoverable transactions, that any DN-valued attributes (pointers) always point to valid entries, and that uniqueness is enforced on adds. They are recommended for mission-critical deployments where disaster recovery is essential. These options carry a write-performance cost on non-RAID, standard disks; with high-end disk subsystems that provide fast-write caching and fast disks, write performance increases significantly. The dominant cost for writing is database checkpointing, which is the most I/O-intensive component and which can saturate a disk. For high-performance writes, we always recommend investing in high-end I/O hardware (for example, Sun StorEdge technology, high-end systems from EMC, and so on). While you definitely want transaction logging and durability enabled, you may find that transaction batching provides a notable improvement in write performance without any cause for concern about database integrity. It does, however, introduce a very small chance that changes could be lost in the event of a directory server or system failure (up to transaction-batch-val minus one changes within a maximum of 1000 milliseconds), but in many cases this is an acceptable risk, particularly when weighed against the performance improvement it can provide. The better the disk subsystem, the higher the write performance. Although disk I/O is generally not the main bottleneck in write performance, spending less time writing data to disk can result in a higher number of updates per second. A sketch of how some of these cache and transaction settings might be applied is shown after this list.
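
As an illustration only, settings such as the import cache, database cache, entry cache, and transaction batch value can be adjusted over LDAP in the cn=config tree. The attribute names below are those documented for Sun ONE Directory Server 5.x, but the values, the back-end name userRoot, and the bind credentials are assumptions; verify both the names and the values against the documentation for your release.

  # Contents of an example file, tune.ldif (values are examples only;
  # a server restart may be required for some settings to take effect):
  dn: cn=config,cn=ldbm database,cn=plugins,cn=config
  changetype: modify
  replace: nsslapd-import-cachesize
  nsslapd-import-cachesize: 2147483648
  -
  replace: nsslapd-dbcachesize
  nsslapd-dbcachesize: 1073741824
  -
  replace: nsslapd-db-transaction-batch-val
  nsslapd-db-transaction-batch-val: 10

  dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
  changetype: modify
  replace: nsslapd-cachememsize
  nsslapd-cachememsize: 2147483648

  # Apply it with ldapmodify:
  ldapmodify -h dirserver.example.com -p 389 \
             -D "cn=Directory Manager" -w password -f tune.ldif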

Including Directory Server Replication in Your Benchmark

Performing a directory benchmark can be time consuming, and depending on various factors, you may decide not to include directory server replication in your benchmark.

Replication performance is similar to update performance. The dominant issues include disk I/O performance, network bandwidth, end-to-end latency, packet loss, and network congestion. You can consider your modify performance numbers to be similar to the replication rate over 100 Mbit/sec Ethernet with the same hardware configuration. One very important aspect to note when performing performance tests involving replication is that the replication subsystem will virtually never keep up with the directory server's update rate if you apply changes as quickly as possible. If you want to prevent divergence, it is recommended that you apply a stabilized load (by inserting a delay between requests) to target a particular number of updates per second, at a rate with which replication can keep up, as in the sketch below.
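
A minimal sketch of a stabilized load is shown below: instead of pushing updates as fast as possible, it applies one pre-generated batch of changes per second, where each batch file contains the target number of changes for that second. The file names, host, and credentials are purely illustrative; in practice the rate controls of your load-generation tool would normally be used instead.

  # Apply one batch of changes per second so that the update rate
  # stays at a level the replication subsystem can sustain.
  for f in batch-*.ldif
  do
      ldapmodify -h master.example.com -p 389 \
                 -D "cn=Directory Manager" -w password -f $f
      sleep 1
  done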

Benchmark Network Topology

A benchmark best practice is to use an isolated network; otherwise, you have an unknown variable (network traffic) affecting your test results. For our scenario, a standard 16-port 100BASE-T Ethernet hub running at 100 Mbit/sec, full duplex, over RJ-45 was used.
