The directly translated Ruby version (from Stack Overflow, of course) is even shorter:
    def fib(n, cache = Hash.new { |h, k| h[k] = k < 2 ? k : h[k-1] + h[k-2] })
      cache[n]
    end
It runs out of stack around n = 7146, on my machine at least. The Python one is limited by the recursion depth limit in my test, but of course that's configurable at your own risk.
The Python version and this Ruby version are not equivalent.
In Ruby, default parameters are allocated for each call to the function, which means that they are not shared across calls.
In Python, default parameters are allocated once and shared across all calls, allowing them to be mutated.
This becomes obvious if we change the Python and Ruby versions to print whenever they compute a value that is not yet cached. For back-to-back calls to `fib(10)`, the Python version prints only on the first call; the Ruby version must recompute all values from 2 to 10 on every call.
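To see this concretely, here is a sketch of the Ruby version with a `puts` added to the default-argument block (the instrumentation is mine, not from the original post):

    def fib(n, cache = Hash.new { |h, k|
      puts "computing fib(#{k})" if k >= 2
      h[k] = k < 2 ? k : h[k-1] + h[k-2]
    })
      cache[n]
    end

    fib(10)  # prints "computing fib(10)" down through "computing fib(2)"
    fib(10)  # prints them all again: Ruby builds a fresh Hash for this call

An equivalent print added to the Python version stays silent on the second call, because Python evaluated the default argument once, at definition time, and every call shares that one dict.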
This is the proper Ruby form: since it relies on the [] operator, it is substitutable for any other callable (especially procs and lambdas), and able to wrap them as well.
This isn’t just golf; it’s an excellent way to pass around a lazily computed, self-caching closure. I often use it in preference to memoization gems.
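For instance, something along these lines (a minimal sketch; the variable names are mine):

    # A self-caching Hash answers to [] just like a Proc or lambda does,
    # so callers don't need to know whether they hold a table or a closure.
    fib = Hash.new { |h, k| h[k] = k < 2 ? k : h[k-1] + h[k-2] }

    fib[10]                          # => 55, computed lazily and then cached
    [20, 21, 22].map { |n| fib[n] }  # => [6765, 10946, 17711]

Because Proc also implements #[], `fib` can be handed to any code written against that interface, regardless of whether it expects a hash, a proc, or a lambda.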
Aside from noting the false/nil equivalence concern, the original article seems over-engineered to me. Excessive metaprogramming is a common trap for Ruby devs.
It's definitely something I remember being interested to find out about the hard way.
Every recursion class talks about the Fib algorithm and why it's nice with recursion. But the iterative version doesn't have these stack limitations, presumably uses less memory and is just as fast. Doesn't that make it a better implementation?
> Every recursion class talks about the Fib algorithm and why it's nice with recursion.
It is nice with recursion in the sense that it's a straightforward, easy algorithm (what college CS student doesn't know basic arithmetic?) that illustrates recursion. It's also trivial to write the recursive version based on the mathematical definition.
It is not nice in that it's very slow and blows up due to the exponential growth of the recursive calls. No one outside of CS 101 or an algorithms class is ever going to write the recursive version again. But that not-nice aspect makes it nice again, because it's a good jumping-off point for improving an algorithm's runtime by understanding its structure (the iterative version) or by throwing more memory at it (the memoized version).
> But the iterative version doesn't have these stack limitations, presumably uses less memory and is just as fast.
There is no "presumably": unless you store every Fibonacci number rather than just the last two, the iterative Fibonacci does use less memory than the naive recursive Fibonacci. And there's no "just as fast": the iterative Fibonacci is faster, unless you've done something horribly wrong. It runs in time linear in N rather than exponential. If your linear algorithm is only just as fast as an exponential one, you haven't made a linear algorithm.
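For reference, the iterative version keeps only the last two values; a standard sketch in Ruby:

    # Linear time in n, constant number of live values: only the last two
    # Fibonacci numbers are kept, and no call stack is consumed.
    def fib_iter(n)
      a, b = 0, 1
      n.times { a, b = b, a + b }
      a
    end

    fib_iter(10)    # => 55
    fib_iter(7146)  # no stack overflow here, just a big Integer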
> Doesn't that make it a better implementation?
Yes, obviously. Which is why after CS 101 or an algorithms class you pretty much never write anything like that except maybe as a prototype. "This recursive program follows the definition closely, but it blows up memory/time." Start there and improve is a very reasonable thing. Start there and stop is just silly.
The stack is just an implementation detail of some languages.
Some other languages have non-broken function calls, and don't need to grow the stack for all of them.
In those languages, you don't need to have loops as built-in to work around broken function calls. You can just write your own loops as library functions.
(Of course, the naive recursive definition of Fibonacci would still be slow. You'd want to write a version that can be efficiently evaluated. You are already familiar with the linear time version in the iterative form.
You can also write a logarithmic time version of Fibonacci.)
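The logarithmic version is typically the fast-doubling method, based on the identities F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2. A sketch in Ruby (the method name is mine):

    # Returns the pair [F(n), F(n+1)] using O(log n) arithmetic steps.
    def fib_fast(n)
      return [0, 1] if n.zero?
      a, b = fib_fast(n / 2)   # [F(k), F(k+1)] for k = n / 2
      c = a * (2 * b - a)      # F(2k)
      d = a * a + b * b        # F(2k + 1)
      n.even? ? [c, d] : [d, c + d]
    end

    fib_fast(10).first  # => 55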
This is fun. Just wanted to say that both my 64 GB desktop and my 16 GB laptop reach the same stack limit at exactly 7692 when using pry, and at 7694 with irb.
TXR Lisp does not have broken Python semantics for evaluating the default expressions for optional arguments. The expressions are freshly evaluated on each call in which they are needed, in the lexical scope in which the prior parameters are already visible, as well as the scope surrounding the function definition.
Why this works is that the #H hash literal syntax is a true literal. Every time that expression is evaluated, it yields the same hash table, which is mutable. This wouldn't work in an implementation of TXR Lisp in which literals are put into a ROM image or mapped into read-only virtual memory. We are not likely to see such a thing any time soon.
If we change #H(...) to, say, (hash-props 0 1 1 1), it won't work any more.
> Why this works is that the #H hash literal syntax is a true literal. Every time that expression is evaluated, it yields the same hash table, which is mutable.
While this is cool and I think I grok the semantics, the identification of it as a "true literal" has me scratching my head. To me, a literal is a syntactical term, so putting a literal into a ROM image only makes sense in terms of storing the source itself (which is not necessary most of the time, but might make sense in a Lisp).
First-class support for partial evaluation looks really cool. I've been playing around with a scheme dialect built around this very concept.
A literal is not purely a syntactical term. A literal is an object that is embedded in the program itself. As such, it is necessarily part of the syntax: the piece of syntax denoting the literal is understood to be that object itself.
However, note that syntax disappears when the program is compiled. The literal itself does not!!! (Unless identified as dead code and eliminated, of course).
Even using the C language as an example, we can readily distinguish between literal syntax and semantics. The string literal syntax "abc\n" includes the double quotes and backslash. Yet the string literal object at run time has no double quotes or backslash, and the n after the backslash was turned into a newline (linefeed in ASCII). (C compilers on Unix-like platforms do in fact map string literals to read-only memory. It's not ROM, but write-protected VM. Firmware written in C and burned into ROM is a thing too. The literals are then in ROM.)
In Lisp, we think of the objects as being source code, so it's a little different. The C idea of the double quotes and backslash being source is called "read syntax" in Lisp, particularly Common Lisp. Scheme might use "read syntax" too, I'm not sure. In any case, "surface syntax" or "character-level syntax" are also useful synonyms.
Note that Lisp compilers and interpreters take the scanned-surface-syntax-turned-object as their input. We could call that deep syntax. So a quoted literal is literally a chunk of the deep syntax, turned into a datum for the program: (quote X) embeds the syntax X into the program, making it available as a datum, and the compiler will process that quote construct by propagating X as-is into the compiled image, arranging for it to be part of some table of literals or whatever.
Some objects are self-quoting, like numbers or strings, or #(...) vectors in Common Lisp and Scheme.
The TXR Lisp #H(...) hash syntax is like this. If it appears as an evaluated expression in code, then it is a literal. The compiler must propagate that to the compiled image, just like a vector, string, or number.
> However, note that syntax disappears when the program is compiled. The literal itself does not!!!
I would term this a "value". The word "literal" is inherently lexical, dealing with letters (i.e., a signifier) from an etymological standpoint, as opposed to the value (the signified).
Or, to further clarify my confusion: a symbol is a value that can be evaluated to a different value at runtime, but I wouldn't call it a "literal" outside the context of a legible source file.
Perhaps the one exception I can think of is the use of macros, but macros are distinguished from other list transformations by being specifically applied to source code (and also runtime values, but at LEAST source code). So it makes sense that they have some value-form of literals, i.e., signified signifiers that are not themselves symbols or quoted values.
I don't want to argue, I'm just pointing out that the diction here is super confusing, at the very least to me. A literal is just that: a lexical object that makes up part of a source.
> Yet the string literal object at run time
I would again term this a "value" that the literal evaluates to at compile-time, or alternatively a "value" that the literal signifies to the reader. Even homoiconic languages like lisp differentiate between source and runtime forms. In the context of c I would consider the concept of a "runtime literal" to be a contradictory/paradoxical/incoherent/nonsensical one.
That said, what you were showcasing remains very cool regardless of how we refer to the forms. A great demonstration of how there's really only one difficult problem: naming shit! Basically all other problems follow from this original one (yes, even cache invalidation and coloring and off-by-one errors).
"literal" a short for "literal constant" which is in contrast to "symbolic constant".
3.1415926535 is a literal constant; π is symbolic.
The former constant literally is the value, whereas π isn't literally that value.
There's another term we could use for the runtime counterpart of the literal. In machine languages we have something called an immediate operand. For instance an instruction which moves a constant value into a register, like mov r1, 42 is said to have an immediate operand. This operand is baked right into the instruction code word, or an adjacent word.
Thus, a source code literal constant, or possibly a symbolic constant, appearing in the assembly language, becomes an immediate operand in the running machine language.
In higher level languages we just tend to use the word literal for both. And of course what that means is that a symbolic constant can become a literal one at runtime, due to a translation time substitution of symbolic constants by their values in place.
An immediate operand is an object. What makes it special is that it lives inside the program code. Each time the instruction which references the immediate operand is executed, it references that one and only operand. The only way for that reference to get a different value would be if the program were modified at runtime: that is, if it were self-modifying.
Literals, or shall we say immediate operands, being part of the program, are assumed not to be modified. Compilers can optimize them in various ways, like using one object for multiple occurrences of the same literal. Or they can arrange for these literals to be in unwritable storage. Some literals can be stuffed into the immediate operand fields of machine instructions.
So yes, literals correspond to runtime objects, but objects which often have special treatment and rules regarding their use.
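For what it's worth, Ruby's reference implementation does the one-object-per-multiple-occurrences optimization too, once string literals are frozen. A quick check (this is CRuby behavior; treat it as an assumption about other implementations):

    # frozen_string_literal: true
    a = "abc"
    b = "abc"
    a.equal?(b)  # => true: both occurrences were deduplicated to one object
    a.frozen?    # => true: the shared object cannot be mutated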
Does that work due to Ruby having Python-like broken semantics for evaluating the default value expressions for optional arguments? (I have no idea.) Or does it work due to the object denoted by {...} being a real literal?
If the default value expression is evaluated on each call to the function in which it is required rather than at function definition time, and if the { ... } syntax is a constructor that creates a fresh object, then this wouldn't work; one of those two conditions (or both) must be broken.
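As established upthread, Ruby re-evaluates the default expression on each call, and { } constructs a fresh Hash each time, so the cross-call caching the Python version gets from its shared default does not happen in the Ruby translation. A quick check (the method name is mine):

    def demo(h = {})
      h.object_id
    end

    demo == demo  # => false: each call gets a brand-new Hash object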
It cleverly uses both = and := together. Usually people use = for immediate assignment (such as constants) and := for delayed assignment (such as functions) but this combines the two.
Probably because not using mutable default arguments is Python 101. Just something you don't do or think about. But this is indeed a clever use of that gotcha.