The Pragmatic Programmer: From Journeyman to Master

Algorithm Speed

In Estimating, we talked about estimating things such as how long it takes to walk across town, or how long a project will take to finish. However, there is another kind of estimating that Pragmatic Programmers use almost daily: estimating the resources that algorithms use: time, processor, memory, and so on.

This kind of estimating is often crucial. Given a choice between two ways of doing something, which do you pick? You know how long your program runs with 1,000 records, but how will it scale to 1,000,000? What parts of the code need optimizing?

It turns out that these questions can often be answered using common sense, some analysis, and a way of writing approximations called the "big O" notation.

What Do We Mean by Estimating Algorithms?

Most nontrivial algorithms handle some kind of variable input: sorting n strings, inverting an m × n matrix, or decrypting a message with an n-bit key. Normally, the size of this input will affect the algorithm: the larger the input, the longer the running time or the more memory used.

If the relationship were always linear (so that the time increased in direct proportion to the value of n ), this section wouldn't be important. However, most significant algorithms are not linear. The good news is that many are sublinear. A binary search, for example, doesn't need to look at every candidate when finding a match. The bad news is that other algorithms are considerably worse than linear; runtimes or memory requirements increase far faster than n. An algorithm that takes a minute to process ten items may take a lifetime to process 100.
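A binary chop over a sorted array illustrates this. The sketch below is ours, not code from the book's Web site, and the int element type is an arbitrary choice; each pass discards half of the remaining candidates, so at most lg(n) elements are ever examined.

#include <stddef.h>

/* Search a sorted array for `key`. Each pass halves the candidate
   range, so at most lg(n) elements are examined: O(lg(n)), not O(n). */
int binary_search(const int *haystack, size_t n, int key) {
    size_t lo = 0, hi = n;            /* half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (haystack[mid] == key)
            return (int)mid;          /* found: return the index */
        else if (haystack[mid] < key)
            lo = mid + 1;             /* discard the lower half */
        else
            hi = mid;                 /* discard the upper half */
    }
    return -1;                        /* not present */
}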

We find that whenever we write anything containing loops or recursive calls, we subconsciously check the runtime and memory requirements. This is rarely a formal process, but rather a quick confirmation that what we're doing is sensible in the circumstances. However, we sometimes do find ourselves performing a more detailed analysis. That's when the O() notation comes in useful.

The O() Notation

The O() notation is a mathematical way of dealing with approximations. When we write that a particular sort routine sorts n records in O(n²) time, we are simply saying that the worst-case time taken will vary as the square of n. Double the number of records, and the time will increase roughly fourfold. Think of the O as meaning on the order of.

The O() notation puts an upper bound on the value of the thing we're measuring (time, memory, and so on). If we say a function takes O(n²) time, then we know that the upper bound of the time it takes will not grow faster than n². Sometimes we come up with fairly complex O() functions, but because the highest-order term will dominate the value as n increases, the convention is to remove all low-order terms, and not to bother showing any constant multiplying factors. O(n²/2 + 3n) is the same as O(n²/2), which is equivalent to O(n²). This is actually a weakness of the O() notation: one O(n²) algorithm may be 1,000 times faster than another O(n²) algorithm, but you won't know it from the notation.

Figure 6.1 shows several common O() notations you'll come across, along with a graph comparing running times of algorithms in each category. Clearly, things quickly start getting out of hand once we get over O(n²).

Figure 6.1. Runtimes of various algorithms

For example, suppose you've got a routine that takes 1 s to process 100 records. How long will it take to process 1,000? If your code is O(1), then it will still take 1 s. If it's O(lg(n)), then you'll probably be waiting about 3 s. O(n) will show a linear increase to 10 s, while an O(n lg(n)) will take some 33 s. If you're unlucky enough to have an O(n²) routine, then sit back for 100 s while it does its stuff. And if you're using an exponential algorithm O(2ⁿ), you might want to make a cup of coffee: your routine should finish in about 10²⁶³ years. Let us know how the universe ends.

The O() notation doesn't apply just to time; you can use it to represent any other resources used by an algorithm. For example, it is often useful to be able to model memory consumption (see Exercise 35).
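As a rough illustration (ours, not the book's exercise code): a routine that builds an n × n table consumes memory proportional to n², whatever its running time.

#include <stdlib.h>

/* Builds an n-by-n table of pairwise values. The loop runs n * n times,
   and the table itself occupies O(n^2) memory. */
double *build_table(size_t n) {
    double *table = malloc(n * n * sizeof *table);    /* O(n^2) bytes */
    if (!table)
        return NULL;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            table[i * n + j] = (double)i - (double)j; /* placeholder value */
    return table;                                     /* caller frees */
}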

Common Sense Estimation

You can estimate the order of many basic algorithms using common sense.

Algorithm Speed in Practice

It's unlikely that you'll spend much time during your career writing sort routines. The ones in the libraries available to you will probably outperform anything you may write without substantial effort. However, the basic kinds of algorithms we've described earlier pop up time and time again. Whenever you find yourself writing a simple loop, you know that you have an O(n) algorithm. If that loop contains an inner loop, then you're looking at O(m × n). You should be asking yourself how large these values can get. If the numbers are bounded, then you'll know how long the code will take to run. If the numbers depend on external factors (such as the number of records in an overnight batch run, or the number of names in a list of people), then you might want to stop and consider the effect that large values may have on your running time or memory consumption.
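For instance, in the hypothetical routines below (the names and types are ours), the single pass over the array is O(n), while adding an inner loop over a second array makes the body execute m × n times:

#include <stddef.h>

/* One simple loop over n elements: O(n). */
long sum(const int *a, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* A loop with an inner loop: the body runs m * n times, so O(m × n). */
size_t count_common(const int *a, size_t n, const int *b, size_t m) {
    size_t matches = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < m; j++)
            if (a[i] == b[j])
                matches++;
    return matches;
}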

Tip 45

Estimate the Order of Your Algorithms

There are some approaches you can take to address potential problems. If you have an algorithm that is O(n²), try to find a divide-and-conquer approach that will take you down to O(n lg(n)).
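A merge sort is the classic example: split the input in half, sort each half, and merge the results. The recursion is lg(n) levels deep and each level does O(n) work, giving O(n lg(n)) overall. The following is a minimal sketch of ours, assuming an int array and a caller-supplied scratch buffer at least as large as the input:

#include <string.h>

/* Merge sort over the half-open range [lo, hi). Depth is lg(n) and each
   level does O(n) work, so the whole routine is O(n lg(n)). */
static void merge_sort(int *a, int *tmp, size_t lo, size_t hi) {
    if (hi - lo < 2)
        return;                               /* 0 or 1 element: already sorted */
    size_t mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);
    merge_sort(a, tmp, mid, hi);

    size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi)                 /* merge the two sorted halves */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof a[0]);
}

A caller would invoke merge_sort(a, tmp, 0, n), where tmp holds at least n ints.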

If you're not sure how long your code will take, or how much memory it will use, try running it, varying the input record count or whatever is likely to impact the runtime. Then plot the results. You should soon get a good idea of the shape of the curve. Is it curving upward, a straight line, or flattening off as the input size increases? Three or four points should give you an idea.
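A throwaway harness along these lines will generate the points to plot. Here sort_under_test is a placeholder for whatever routine you are measuring, and the doubling input sizes are arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

extern void sort_under_test(int *a, size_t n);    /* the routine being measured */

int main(void) {
    for (size_t n = 1000; n <= 1024000; n *= 2) {
        int *a = malloc(n * sizeof *a);
        for (size_t i = 0; i < n; i++)
            a[i] = rand();                        /* random input keys */

        clock_t start = clock();
        sort_under_test(a, n);
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("%zu\t%f\n", n, secs);             /* one plot point per run */
        free(a);
    }
    return 0;
}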

Also consider just what you're doing in the code itself. A simple O(n²) loop may well perform better than a complex O(n lg(n)) one for smaller values of n, particularly if the O(n lg(n)) algorithm has an expensive inner loop.

In the middle of all this theory, don't forget that there are practical considerations as well. Runtime may look like it increases linearly for small input sets. But feed the code millions of records and suddenly the time degrades as the system starts to thrash. If you test a sort routine with random input keys, you may be surprised the first time it encounters ordered input. Pragmatic Programmers try to cover both the theoretical and practical bases. After all this estimating, the only timing that counts is the speed of your code, running in the production environment, with real data. [2] This leads to our next tip.

[2] In fact, while testing the sort algorithms used as an exercise for this section on a 64MB Pentium, the authors ran out of real memory while running the radix sort with more than seven million numbers. The sort started using swap space, and times degraded dramatically.

Tip 46

Test Your Estimates

If it's tricky getting accurate timings, use code profilers to count the number of times the different steps in your algorithm get executed, and plot these figures against the size of the input.
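One simple-minded way to do this, sketched below with illustrative names, is a global counter bumped at the step you care about; print the pair (n, steps) for each run and plot the pairs:

#include <stdio.h>
#include <stddef.h>

static unsigned long steps;                     /* bumped at the step of interest */

/* Count duplicate pairs; the inner comparison is the "step" we track. */
size_t duplicate_pairs(const int *a, size_t n) {
    size_t pairs = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++) {
            steps++;                            /* one step per comparison */
            if (a[i] == a[j])
                pairs++;
        }
    return pairs;
}

/* After each run, print (n, steps): the shape of the resulting curve
   reveals the order even when timings are too small or too noisy. */
void report(size_t n) {
    printf("%zu\t%lu\n", n, steps);
    steps = 0;
}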

Best Isn't Always Best

You also need to be pragmatic about choosing appropriate algorithms: the fastest one is not always the best for the job. Given a small input set, a straightforward insertion sort will perform just as well as a quicksort, and will take you less time to write and debug. You also need to be careful if the algorithm you choose has a high setup cost. For small input sets, this setup may dwarf the running time and make the algorithm inappropriate.
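For instance, an insertion sort is only a handful of lines and has essentially no setup cost, which is why it often wins on small or nearly sorted inputs despite being O(n²) in the worst case. A minimal sketch (ours, for int arrays):

#include <stddef.h>

/* Insertion sort: O(n^2) in the worst case, but with tiny constant
   factors and no setup cost, so it wins for small n. */
void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];          /* shift larger elements right */
            j--;
        }
        a[j] = key;                   /* drop the key into place */
    }
}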

Also be wary of premature optimization. It's always a good idea to make sure an algorithm really is a bottleneck before investing your precious time trying to improve it.

Related sections include:

Challenges

Exercises

34.

We have coded a set of simple sort routines, which can be downloaded from our Web site (http://www.pragmaticprogrammer.com). Run them on various machines available to you. Do your figures follow the expected curves? What can you deduce about the relative speeds of your machines? What are the effects of various compiler optimization settings? Is the radix sort indeed linear?

35.

The routine below prints out the contents of a binary tree. Assuming the tree is balanced, roughly how much stack space will the routine use while printing a tree of 1,000,000 elements? (Assume that subroutine calls impose no significant stack overhead.)

void printTree(const Node *node) {
    char buffer[1000];

    if (node) {
        printTree(node->left);
        getNodeAsString(node, buffer);
        puts(buffer);
        printTree(node->right);
    }
}

36.

Can you see any way to reduce the stack requirements of the routine in Exercise 35 (apart from reducing the size of the buffer)?

37.

We claimed earlier that a binary chop is O(lg(n)). Can you prove this?
