Recursion is often less efficient than a well-designed, concrete imperative solution, even if the imperative solution uses more memory. This is because in most languages the stack is not free: there's a lot of metadata and housekeeping involved in invoking a function, to support things like stack traces and introspection.
The provided example (at least to me) is also convoluted and fails to illustrate the elegance of using a recursive data structure.
In general parlance, an affine transform or rotation matrix would both be more elegant and readable.
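For what it's worth, here's roughly what I have in mind -- a sketch, not the essay's actual code, with a 2D rotation done via the standard rotation matrix (Python purely for illustration; the names are mine):

    import math

    def rotate(point, theta):
        # Rotate a 2D point about the origin by theta radians, applying
        # the rotation matrix [[cos t, -sin t], [sin t, cos t]].
        x, y = point
        return (x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta))

    print(rotate((1.0, 0.0), math.pi / 2))  # ~(0.0, 1.0)

No recursion, no clever data structure -- just the transform, stated directly.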
Although I'm satisfied with the choice I made for what to put in my essay, I want to thank you for pointing out the gaps between my essay and the real world, and for doing so in a forceful way without crossing into being querulous.
This kind of discourse is, IMO, an example of Hacker News at its finest.
Are you aware of a programming language/compiler that optimizes recursively written code? I'm finding it difficult to imagine a PL where the stack _isn't_ a cost you pay, except for tail-recursive code.
My understanding is that it's the opposite -- function calls expend a cost to save (and later restore) the registers and push the locals onto the stack. Tail-recursive functions, on the other hand, can be implemented as a jump back to the top of the function, just past all the stack bookkeeping, and so save that expense.
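To make that concrete, a sketch in Python (which notably does _not_ eliminate tail calls itself; the second function shows what a TCO-capable compiler effectively produces):

    # Tail-recursive sum: the recursive call is the very last thing done.
    def total(xs, acc=0):
        if not xs:
            return acc
        return total(xs[1:], acc + xs[0])  # tail position

    # What a tail-call-optimizing compiler turns it into: the call becomes
    # a jump back to the top with fresh arguments, no new stack frame.
    def total_loop(xs, acc=0):
        while xs:
            xs, acc = xs[1:], acc + xs[0]
        return acc

Both compute the same thing; only the second runs in constant stack space in Python.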
Tail-merging doesn't only apply to recursive functions. As soon as you don't need to store an intermediate result, you can jump to another function without allocating a new frame.
Haskell (sometimes) does this kind of thing much more efficiently -- it doesn't specify or always provide a stack; instead it models the program as a graph to be evaluated by reduction. And any language where you can implement continuations lets you write this kind of thing efficiently (e.g. https://github.com/Atry/memcontinuationed ).
(Ultimately you can do "userspace continuations" in any programming language by explicitly transforming your code into CPS, falling back to an explicit FSM if the language doesn't support tail calls as such; the only price is developer sanity.)
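A minimal sketch of that trampoline trick in Python (a name like `trampoline` is mine, nothing standard): each tail call returns a zero-argument thunk instead of calling directly, and a driver loop bounces between them, keeping the real stack flat:

    def trampoline(thunk):
        # Keep calling until something other than a thunk comes back.
        while callable(thunk):
            thunk = thunk()
        return thunk

    def is_even(n):
        return True if n == 0 else (lambda: is_odd(n - 1))

    def is_odd(n):
        return False if n == 0 else (lambda: is_even(n - 1))

    print(trampoline(lambda: is_even(100_000)))  # True, no RecursionError

Note the tail calls here bounce between two different functions, at a depth Python's own call stack could never survive.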
I do think there's a real gap in many languages between the natural way to express graph/tree transformations (recursively) and a cache-friendly, mechanical-sympathy-style runtime - see e.g. https://medium.com/@gkossakowski/kentucky-mule-limits-of-sca... . And really this gap shouldn't be there - the language implementation should be able to transform and fuse this kind of tree-walking operation into an efficient form - but at the same time it's hard to see how you'd ever make the language spec permit that without going full Haskell: "everything is lazy everywhere, and the implementation is allowed to evaluate whatever it likes, however many times it likes (including zero), in whatever order".
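To illustrate the gap, a sketch under my own assumptions (Python, hypothetical `Node` type included): the natural recursive tree walk versus the explicit-stack version a sufficiently smart implementation might derive for you:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int
        left: "Optional[Node]" = None
        right: "Optional[Node]" = None

    # The natural way to express the transformation: recursion.
    def tree_sum(node):
        if node is None:
            return 0
        return node.value + tree_sum(node.left) + tree_sum(node.right)

    # The mechanically sympathetic version: an explicit work list,
    # no per-node call overhead, no risk of blowing the call stack.
    def tree_sum_iter(root):
        total, stack = 0, [root]
        while stack:
            node = stack.pop()
            if node is not None:
                total += node.value
                stack.append(node.left)
                stack.append(node.right)
        return total

Ideally you'd write the first and get the performance of the second, but few languages promise that transformation.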
Maybe an actual stack language like Forth could do so? The stack overhead of calling a word is a lot lower to begin with, and ColorForth already has tail call optimisation[0]. More advanced concatenative languages like Factor[1] or Kitten[2] might be able to perform more interesting optimisations.