I have read the paper before, and it's certainly right -- but Gabriel never claimed that the fib benchmark was anything other than a microbenchmark of certain aspects of a system.
For example, when discussing the TAK microbenchmark, he says:
> When called on the above arguments, TAK makes 63,609 function calls and performs 47,706 subtractions by 1. The result of the function is 7. The depth of recursion is never greater than 18. No garbage collection takes place in the MacLisp version of this function because small fixnums are used.
With such a precise description of what the benchmark tests, it's obvious (a) that it's a microbenchmark, but also (b) that it does indicate the speed of certain combinations of operations.
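TAK is short enough to transcribe. A minimal Python sketch (my own transcription of Gabriel's Lisp version, instrumented to check the counts he quotes) might look like:

```python
calls = 0  # every entry into tak
subs = 0   # every subtraction by 1

def tak(x, y, z):
    global calls, subs
    calls += 1
    if not (y < x):
        return z
    subs += 3  # three "subtract 1" operations per recursive step
    return tak(tak(x - 1, y, z),
               tak(y - 1, z, x),
               tak(z - 1, x, y))

result = tak(18, 12, 6)
print(result, calls, subs)  # → 7 63609 47706, matching the paper
```

The numbers are internally consistent, too: each non-base call performs 3 subtractions and spawns 4 calls, so 63,609 total calls implies (63,609 − 1) / 4 = 15,902 non-base calls and 3 × 15,902 = 47,706 subtractions.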
So I don't think that the title of this article is actually the takeaway message from that paper. Microbenchmarks like the Gabriel suite exist for a reason.
(See http://www.dreamsongs.com/Files/Timrep.pdf)