The author is not complying with their own requirements:
> Pattern matching and polymorphism is not allowed
Using dynamic dispatch on different functions of type Int -> Int is, precisely, polymorphism. Doesn't look much like the OO polymorphism that some people are used to, but FP folks use this kind of functional polymorphism all the time.
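To illustrate, here is a minimal C sketch of that kind of dispatch (my own code, not the article's; the names are made up):

#include <stdio.h>

/* illustrative sketch: fib chooses between two Int -> Int functions
   by indexing an array instead of using an if-statement */
static int fib(int n);

static int fib_base(int n) { return n; }                       /* fib(0)=0, fib(1)=1 */
static int fib_rec(int n)  { return fib(n - 1) + fib(n - 2); } /* general case */

static int fib(int n)
{
    int (*fn[2])(int) = { fib_rec, fib_base };  /* index 1 when n < 2 */
    return fn[n < 2](n);
}

int main(void)
{
    printf("%d\n", fib(10)); /* prints 55 */
    return 0;
}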
That being said, the main premise of the article is quite valuable: yes, you can program with no conditionals. And I think it's quite useful to know that polymorphism can be used like that without suffering too unfriendly a syntax (see lmm's comment on Smalltalk: http://news.ycombinator.com/item?id=5198419).
Finally, although pattern matching can be seen as an overpowered if-statement, the relation can also be turned around: the if-statement is a special case of pattern matching. I won't claim that pattern matching isn't a conditional construct, but I think it's also quite valuable to notice that you can use pattern matching as a new programming primitive instead of the if-statement. For example, the if-statement implemented in Haskell (called if' below, since if itself is a reserved word):
if' :: Bool -> a -> a -> a
if' True  x _ = x
if' False _ y = y
Relying only on pattern matching instead of conditional if-statements can make quite a difference in how one reasons about programming problems (e.g. Prolog and declarative programming in general).
So you just end up using function pointers and arrays to fake polymorphism. If you have the requirement that every part of a function has to be evaluated, you just end up branching between function calls rather than eliminating branching. This is vaguely reminiscent of how some functional languages use tail-recursion instead of loops--they don't eliminate iteration, they just express it a different way.
Where I disagree with the author: there is no real difference between declaring "fib(1) -> 1" in a pattern-matching function definition and inserting a pointer to a function that returns 1 into an array of function pointers. Pattern matching and subtype polymorphism can still get you all the gains the author is looking for, namely being able to debug branching by inspecting the call stack. For that matter, so would pushing a dummy stack frame every time you come to a branch, or even building a separate branch stack.
Certainly, there's no practical difference between pointers+arrays, polymorphism and conditional statements. That's the whole point: We're "simulating" or "creating" if-statements/polymorphism using other language constructs.
My goal was to show that you can do conditional statements and polymorphism without having those constructs by implementing them on top of other constructs (and possibly vice versa). And by looking at them from a different angle, you may end up with some ideas which may or may not be of value, such as the debug-thing. Building a separate branch stack would be the same thing, the question at hand is how you find such an idea/solution to a problem.
This is sad. If only people would take the Expression Problem[1] more seriously and realise that there is a good time to use ifs/pattern matching and a good time to use polymorphism/function arguments.
I think CLOS doesn't suffer from the expression problem. Both kinds of extension (adding a new data type and adding a new operation) can be implemented as a "patch" to the program without editing any of the original code.
Gah, do you know if there is a better name for this "pattern matching vs polymorphism" duality then? I think it's still important in its own right, since multimethods and other solutions to the expression problem usually apply at a more global scope, unlike pattern matching and function arguments, which are primitive constructs and can naturally be nested as deeply as you want.
I love the Erlang example. I've been curious about Erlang, and that's a powerful demonstration of conditional-free programming, contrary to the author's intent.
Sorry, but "n < 2" is a conditional, whether you're using an if statement, the "?" operator, or an index into an array of function pointers (at least from this old-school C/assembler programmer's perspective). That Erlang code, on the other hand, looks elegantly conditional-free. Clearly branching will appear behind the scenes, but the Erlang code has no conditionals.
Pedantic: "n > 2" is a comparison, and doesn't incur a branch. If statements, ternary operators, etc. are conditionals. The effect in the article is the same though, without the potential for a mispredict penalty.
Whether that's actually faster is debatable though. If you're writing in C, you're probably concerned about performance, and the function pointer invocation is going to be a lot slower than a branch mispredict. A sufficiently complex array dereference will slow it down too, but I wouldn't be surprised if the compiler can turn simple cases like this into an equivalent switch/jumptable.
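For reference, a minimal sketch of the two shapes being compared (names are mine, not the article's):

/* indirect call through a table: pointer load plus indirect call */
int dispatch(int n, int (*fn[2])(int))
{
    return fn[n < 2](n);
}

/* ordinary conditional: a compare-and-branch with direct calls */
int branchy(int n, int (*lo)(int), int (*hi)(int))
{
    return n < 2 ? lo(n) : hi(n);
}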
Thus conditional-free programming (as presented here, in C) is mostly academic. Debug tracing is an interesting application, but readability and performance both suffer.
If a language could internalize conditional-free constructs natively, however, it could be an interesting study.
`n > 2` can require conditional execution on some architectures, though. For example, in DCPU-16 (fictional, but I'm sure it was inspired by something real), `a = b > 2` would be written as:
set A, 0    ; assume false
ifg B, 2    ; execute the next instruction only if B > 2
set A, 1    ; set true
It conditionally executes a machine instruction, which is, indeed, branching.
>> Thus conditional-free programming (as presented here, in C) is mostly academic
Do you think this is true because C code has traditionally relied so heavily on branching?
Put another way, perhaps branch-free programming has greater speed potential, but the tools we use have been heavily optimized for branch-laden programming. Maybe putting sufficient resources into optimizing for branch-free code (and stripping out optimizations for branching) would yield faster execution.
I don't know the answer, just musing. I am curious, though.
Well, the "branch-free" paradigm requires pushing a stack frame and making a jump to the passed function (requiring a pointer lookup) and a jump back to the calling function. In the branching paradigm, only one execution path requires a jump at all, and that jump is made based on an offset embedded directly into the machine code, so the additional pointer lookup is not needed.
So, I'm going to come down on the side of the traditional model being inherently more performant.
Agreed about the conditional. I was expecting some sort of functional alternative and was intrigued by the title, but on reading it I realized the OP is proposing that evaluating "(n < 2)" and calling a function pointer counts as non-conditional programming. That's simply not true; it's conditional programming.
I consider "n < 2" to be an expression evaluating to either true (1) or false (0). You can do basic if-else branching by treating true/false expressions as conditions for if/elseif/else branches, but a true/false value by itself cannot be used to evaluate only parts of a function. That's what I intended to express; sorry if it wasn't evident.
What constitutes a "part of a function" in the article is a little confusing to me, for example: "I consider it as 'cheating' as we now have evaluated only parts of the function." Once you pass functions to a function, are those "part of the function"? If we say no, they're just arguments, it seems to sidestep the issue in an unsatisfying way; I could rewrite all my functions to take all the functions they call as arguments (dependency injection taken to the extreme) and suddenly all my functions have... no parts? That can't be right! On top of that, looking up an entry in a list (even one of functions), given an integer index, in a functional language, is arguably a series of conditional tests (is index 0? is index-1 0? is index-2 0? ...) Overall it's a fair article, but I think there have been a number of esolangs over the years that have tried to turn the idea of the conditional on its head, as well; more references might make it stronger.
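To make that last point concrete, here is a hypothetical sketch (names mine) of how indexing into a linked list of functions is itself a chain of conditional tests:

/* hypothetical: each step of nth performs a test on the index */
typedef struct node { int (*fn)(int); struct node *next; } node;

static int (*nth(const node *list, int i))(int)
{
    return i == 0 ? list->fn : nth(list->next, i - 1);
}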
If you are interested in ideas that can arise from putting some hard restrictions in the same vein, see "programming with nothing"[1]. SICP also explores this topic slightly in the exercise about Church numerals and thought experiments on how you could try to reimplement special forms using functions.
Makes me think of the Smalltalk approach to branching, where it exists, but as just another method call (on a boolean). Rather than
if(a > b) {...code...}
you do something like
(a > b) ifTrue: [...code...]
I think it's an approach more languages should use - it means there are fewer special cases in the syntax, and you can compose conditionals or use them in higher-order functions more easily.
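A rough C approximation of the same idea (all names are mine), where a boolean just selects which branch function gets called:

#include <stdio.h>

typedef void (*branch)(void);

/* sketch: index 0 plays False, index 1 plays True */
static void pick_then(branch t, branch e) { (void)e; t(); }
static void pick_else(branch t, branch e) { (void)t; e(); }
static void (*const if_true_else[2])(branch, branch) = { pick_else, pick_then };

static void bigger(void)  { puts("a > b"); }
static void smaller(void) { puts("a <= b"); }

int main(void)
{
    int a = 3, b = 2;
    if_true_else[a > b](bigger, smaller);  /* like (a > b) ifTrue: ... ifFalse: ... */
    return 0;
}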
Here's a real implementation of prettyprint without conditional jumps (this assumes signed right shifts are implemented as arithmetic shifts by your compiler, but everyone does that anyway):
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>  /* for atoi */

void prettyprint(int a, int b)
{
    /* lt_mask is all ones when a < b, all zeros otherwise
       (this is the arithmetic shift assumption mentioned above) */
    intptr_t lt_mask = (intptr_t)(a - b) >> (sizeof(intptr_t) * 8 - 1);
    intptr_t lt = (intptr_t)"%d is < %d\n";
    intptr_t ge = (intptr_t)"%d is >= %d\n";
    /* exactly one of the two masked terms survives */
    char *text = (char *)((lt & lt_mask) | (ge & ~lt_mask));
    printf(text, a, b);
}

int main(int argc, char **argv)
{
    prettyprint(atoi(argv[1]), atoi(argv[2]));
    return 0;
}
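(For what it's worth, compiled with gcc this prints "3 is < 5" for ./prettyprint 3 5 and "5 is >= 3" for ./prettyprint 5 3.)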
I can't speak for anyone else, but I can think of one reason to do things like this. If I can use a minimal subset of language features and verify their correctness, and also verify the correctness of the ways in which I combine them, then it would follow that my entire program is much, much easier to prove correct.
This is perhaps more useful in lisps or Haskell but it's still cool to think about. And, hell, maybe someone implementing a language in C could do something like this for the debugging properties.
And while yes you could take this to the extreme and implement everything on top of SKI combinators, perhaps one wants to start a touch higher up than that. Anyway, maybe that's a useful answer?
I guess I was just puzzled as to why the author felt that avoiding conditionals was a useful exercise in the first place. I realise that I'm probably wedded to a conditional-heavy way of thinking; what threw me was that the author seemed to assume that eliminating conditionals stands as an aim on its own, regardless of minimising use of language features. At the same time he has a test in his array de-reference that seems to me to be effectively a conditional without using the explicit if(){} else{} construction.
TBH I would have thought verifying the implicit conditional and function-pointer jump was no easier than the explicit...
From the article: "If the compiler/debugger is able to create this "evaluate every piece of line" format for functions (generating new ones when needed), then we can use the stack to check which branches we've taken — they're on the stack, after all!"
No, they are not. Calling functions instead of evaluating conditional statements doesn't change the state of the [non-stale part of the] stack at the time you reach the breakpoint.
To achieve this, one would need to execute the code following the branch (and containing the breakpoint) as a callback of one of the "conditional" functions, so the followed branch would indeed appear on the stack. But that would be impractical, as you would overflow the stack extremely quickly, not to mention the performance hit.
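A tiny continuation-passing sketch in C of what that would look like (hypothetical names):

#include <stdio.h>

typedef void (*cont)(int);

/* sketch: the chosen branch stays on the call stack while the rest runs */
static void lt_branch(int n, cont rest) { rest(n); }
static void ge_branch(int n, cont rest) { rest(n); }

static void after_the_branch(int n)
{
    printf("a breakpoint here would show the branch that called us: %d\n", n);
}

int main(void)
{
    int n = 1;
    void (*pick[2])(int, cont) = { ge_branch, lt_branch };
    pick[n < 2](n, after_the_branch);
    return 0;
}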
I think it is more instructive to realize that certain (most?) problems rely on conditionals conceptually. You could then work around them by effectively implementing conditionals anew and spreading code over smaller functions, but the algorithm is still conditional in nature. I am also curious about the relation of conditionals to other constructs. Is the for loop disallowed because it checks for the end of the loop? But it could be implemented with a map over a range, which does no such check (assuming range is a primitive operation). In this way conditionals can be eliminated by recruiting a variety of possible constructs, but it all depends on the rules you're setting.
I think a good way to get acquainted with FP is to replace conditionals with lookup tables and lambda functions. But it is easy to overdo it, sometimes plain old if-statements are the most readable alternative.
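For instance, a small C sketch of the lookup-table idea (the cutoffs and names are invented for illustration):

#include <stdio.h>

/* sketch: replaces an if/else-if chain over score ranges with a table lookup */
static const char *grade(int score)
{
    static const char *const table[11] = {
        "fail", "fail", "fail", "fail", "fail",  /* 0-49   */
        "pass", "pass",                          /* 50-69  */
        "good", "good",                          /* 70-89  */
        "excellent", "excellent"                 /* 90-100 */
    };
    return table[score / 10];
}

int main(void)
{
    printf("%s\n", grade(87)); /* prints "good" */
    return 0;
}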
>>> (1 is True) or (True is 1)
False
>>> (0 is False) or (False is 0)
False
>>> isinstance(0, bool) or isinstance(1, bool)
False
>>> isinstance(False, bool) and isinstance(True, bool)
True
Indexing with boolean expressions is bad style to begin with, but if you are going to do it then a bool cast shows explicitly what you are trying to do.
Downvoting notwithstanding, the statement that 1 and 0 ARE True and False is flat-out false. It's not even kind of true: True is not 1 and False is not 0, the type of True and False is not int, and the established convention for comparing against these values is to use 'is' if you are not using the vaguer 'if cond' or 'if not cond'.
I can honestly say that the best thing to have come out of this article was the link to Bret Victor's talk on Inventing on Principle - http://vimeo.com/36579366#t=1004. Thank you.
Thanks for singling that out. The specific demos of interactive coding/visualization were cool, but the end, where he really gets into the general topic of finding a guiding principle and using it to change the world, was very, very well worth the time to listen to.
It uses recursion where a simple loop would suffice. fib(10000000000) will blow your RAM limit with its growing stack (and will be slow), while an implementation with a loop would use constant memory and run several times faster.
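For comparison, a minimal iterative sketch in C (my own illustration, not from the article):

/* constant memory, no recursion; the result overflows an unsigned long
   long far before n gets anywhere near 10000000000, of course */
unsigned long long fib(unsigned long long n)
{
    unsigned long long a = 0, b = 1;
    while (n-- > 0) {
        unsigned long long t = a + b;
        a = b;
        b = t;
    }
    return a;
}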
"And as if by magic, there's no need for conditionals"
: also incorrect.
You have a conditional in
int res = (fn[(n < 2)])(n);
specifically the (n < 2) part.
All you have done after the hard work is moving conditional from one line to another.
"But we're still not sure of which branch we took."
- So why don't you put the breakpoint before* the condition occurs, and step through it in the debugger??
It's true that first-class functions can remove the need for conditionals. Isn't this simply functional programming? He starts with an Erlang example but then doesn't talk about FP. I must be missing something.
They're coverage tools rather than debuggers, but they can give you an html-browseable version of your source code coloured like that to show you which lines have been hit and how many times.
A clause in Prolog is a predicate that is true when its conditions are met, i.e., it can be seen as a single conditional in itself. Furthermore, there is pattern matching (unification, even), which was declared cheating in the post. Fibonacci examples would use this, just as the Erlang example in the post does. There are also if-then rules (the -> operator), which I find make code clearer. The same trick described in the post could be implemented in Prolog, I suppose.