Benchmarking Competing Solutions
Problem
You want to see which of two solutions to a problem is faster. You might want to compare two different algorithms, or two libraries that do the same thing.
Solution
Use the benchmark library to time the tasks you want to run. The Benchmark.bm method gives you an object that can report on how long it takes for code blocks to run.
Let's explore whether the member? method is faster on arrays or hashes. First, we create a large array and a large hash containing the same data, and define a method that exercises the member? method:
RANGE = (0..1000)
array = RANGE.to_a
hash = RANGE.inject({}) { |h,i| h[i] = true; h }

def test_member?(data)
  RANGE.each { |i| data.member? i }
end
Next, we call Benchmark.bm to set up a series of timing tests. The first test calls test_member? on the array; the second one calls it on the hash. The results are printed in a tabular form to standard error:
require 'benchmark'
Benchmark.bm(5) do |timer|
  timer.report('Array') { test_member?(array) }
  timer.report('Hash')  { test_member?(hash) }
end
#            user     system      total        real
# Array  0.260000   0.060000   0.320000 (  0.332583)
# Hash   0.010000   0.000000   0.010000 (  0.001242)
As you'd expect, member? is much faster on a hash.
Discussion
What do the different times mean? The real time is "wall clock" time: the number of seconds that passed in the real world between the start of the test and its completion. This time is actually not very useful, because it includes time during which the CPU was running some other process. If your system is operating under a heavy load, the Ruby interpreter will get less of the CPU's attention and the real times won't reflect the actual performance of your benchmarks. You only need real times when you're measuring user-visible performance on a running system.
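A quick way to see the difference is to time a block that waits without computing anything. This is an illustrative sketch, not from the original recipe: sleep consumes wall-clock time but almost no CPU time, so the real column dwarfs the user and system columns.

```ruby
require 'benchmark'

# sleep uses real (wall-clock) time but almost no CPU time,
# so user and system stay near zero while real is about 0.1 seconds.
tms = Benchmark.measure { sleep 0.1 }
puts tms
```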
The user time is time actually spent running the Ruby interpreter, and the system time is time spent in system calls spawned by the interpreter. If your test does a lot of I/O, its system time will tend to be large; if it does a lot of processing, its user time will tend to be large. The most useful time is probably total, the sum of the user and system times.
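If you want these numbers programmatically rather than as printed output, the Benchmark::Tms object that the timing methods return exposes each column as a reader. A minimal sketch:

```ruby
require 'benchmark'

# Benchmark.measure returns a Benchmark::Tms object whose readers
# correspond to the report columns: utime (user), stime (system),
# total, and real.
tms = Benchmark.measure { 100_000.times { |i| i * i } }
puts "user=#{tms.utime} system=#{tms.stime} total=#{tms.total} real=#{tms.real}"
```

With no child processes, total is simply the sum of the user and system times.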
When two operations take almost exactly the same time, you can make the difference more visible by putting a times loop within the code block passed to report. For instance, array lookup and hash lookup are both very fast operations that take too little time to measure. But by timing thousands of lookup operations instead of just one, we can see that hash lookups are a tiny bit slower than array lookups:
Benchmark.bm(5) do |timer|
  timer.report('Array') { 1000.times { RANGE.each { |i| array[i] } } }
  timer.report('Hash')  { 1000.times { RANGE.each { |i| hash[i] } } }
end
#            user     system      total        real
# Array  0.950000   0.210000   1.160000 (  1.175042)
# Hash   1.010000   0.210000   1.220000 (  1.221090)
If you want to measure one operation instead of comparing several operations to each other, use Benchmark#measure. It returns an object that you can interrogate to get the times, or print out to get a listing in the same format as Benchmark.bm. This code demonstrates that I/O-bound code has a larger system time:
def write_to_file
  File.open('out', 'w') { |f| f.write('a') }
end

puts Benchmark.measure { 10000.times { write_to_file } }
#   0.120000   0.360000   0.480000 (  0.500653)
Recall that the real time can be distorted by the CPU doing things other than running your Ruby process. The user and system times can also be distorted by the Ruby interpreter doing things besides running your program. For instance, time spent doing garbage collection is counted by the benchmark library as time spent running Ruby code.
To get around these problems, use the Benchmark.bmbm method. It runs each of your timing tests twice. The first time is just a rehearsal to get the interpreter into a stable state. Nothing can completely isolate the time spent running benchmarks from other tasks of the Ruby interpreter, but bmbm should be good enough for most purposes.
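Benchmark.bmbm is a drop-in replacement for Benchmark.bm. A sketch of the earlier member? comparison, rewritten to use it (the rehearsal timings are printed first, then the real ones):

```ruby
require 'benchmark'

range = (0..1000)
array = range.to_a
hash  = range.inject({}) { |h,i| h[i] = true; h }

# bmbm runs each report block twice: a "Rehearsal" pass to warm up
# the interpreter, then the pass whose timings you should trust.
Benchmark.bmbm(5) do |timer|
  timer.report('Array') { range.each { |i| array.member? i } }
  timer.report('Hash')  { range.each { |i| hash.member? i } }
end
```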
See Also
- The standard library documentation for the benchmark library has lots of information about varying the format of benchmark reports