Interestingly, I'd never seen the parallels between the Unix way and functional programming, but it makes perfect sense. Write programs/functions that do one thing and do it well. Then again, that's how code should work in general; that's what we have functions/methods for in the first place.
I agree: functional programming is great, but sometimes I see ideas being sold as "functional" that really apply to programming in general. Splitting programs into smaller parts that are easy to solve and behave predictably is one of the most fundamental parts of coding.
Sure, that's not all there is to it, but I'm a little bothered that it's being marketed as the new thing.
I think one of the differences is pipelining/laziness, which makes it easier to decompose problems into smaller pieces.
For instance, look at his solution to FizzBuzz. Would you use a similar solution in Ruby? He uses an infinite list, so if you translated it naively, you'd run out of memory. Instead, you'd have to explicitly use a generator to achieve a similar solution.
Of course, you could try always returning a generator in Ruby, but that would be incredibly awkward because the built-in operators compute the entire result (like the + operator on lists). So, in practice, people intuitively avoid creating large intermediate results because it's wasteful, even in cases where doing so would simplify the code and make it more modular.
But in Haskell, go ahead and use the large intermediate result, because it won't actually be computed up front. The expression effectively returns a generator for the large intermediate result, and only the parts of it that are actually needed ever get computed.
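As a sketch of what that looks like (my own toy version, not necessarily the solution from the article): the whole infinite list is named up front, and laziness means only the prefix you `take` is ever evaluated.

```haskell
-- An infinite list of FizzBuzz results; laziness means only the
-- elements actually demanded are ever computed.
fizzbuzz :: [String]
fizzbuzz = map fb [1 ..]
  where
    fb :: Integer -> String
    fb n
      | n `mod` 15 == 0 = "FizzBuzz"
      | n `mod` 3  == 0 = "Fizz"
      | n `mod` 5  == 0 = "Buzz"
      | otherwise       = show n

-- Demand a finite prefix of the infinite list; this terminates.
main :: IO ()
main = mapM_ putStrLn (take 15 fizzbuzz)
```

The naive Ruby translation of `map fb [1..]` would try to materialize the whole list; here the "large intermediate result" costs nothing until you consume it.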
The parallels to Unix are strong enough that having an actual pipe operator can be very useful. I frequently write code that looks something like this:
Yes, in fact F# has that built in as |>, which I've taken to using in Haskell as well, though it's a bit more intuitive to me when it has the lowest precedence, like so:
(|>) :: a -> (a->b) -> b
(|>) x f = f x
infixl 0 |>
Here, infixl 0 declares |> as an infix operator with left associativity and precedence 0, the lowest.
Then you can use all sorts of expressions in between:
(5 + 3 |> show) ++ " times" |> putStrLn
Also, tying this back to one of the article's ideas, this piping-like syntax for doing things really manages to take away the emphasis from the function and put it back on the data. It isn't all about functions after all!
True, but my limited experience with Haskell and OCaml suggests FP makes it harder to do the wrong thing. It is really hard to write a large Haskell function (on the order of hundreds of lines of code). I am not sure I could do it without breaking the function up into a bunch of sub-functions in a where clause, which is a case I am not even sure I would count, since each of those sub-functions could be moved out of the parent function with no change in functionality. My experience in C and C++, on the other hand, is that it's easy to write functions that run into the low thousands of lines.
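To illustrate the where-clause habit with a contrived example of my own (any of these helpers could be lifted to the top level unchanged):

```haskell
-- A toy example of how Haskell nudges you toward naming each step
-- as a small helper in a where clause rather than inlining it all.
-- (Contrived: assumes a non-empty list with non-zero spread.)
standardize :: [Double] -> [Double]
standardize xs = map scale xs
  where
    n       = fromIntegral (length xs)
    mean    = sum xs / n
    dev     = sqrt (sum [(x - mean) ^ 2 | x <- xs] / n)
    scale x = (x - mean) / dev
```

Each binding is a self-contained piece; promoting `mean` or `dev` to a top-level function changes nothing, which is the commenter's point about why the where-clause case barely counts as one big function.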
Well, speaking of the Unix philosophy, this could possibly blow your mind. It's tens of Unix command line tools (well, the gist of them, not every functionality), implemented in a few lines of Haskell:
I wish PragProg would do a (whole) book on Haskell. The current ones just aren't very good.
Books such as "Real World Haskell", the "gentle guide" and "Yet Another Haskell Tutorial" all manage the rather unbelievable feat of making many Haskell concepts unnecessarily complex, unintuitive and hard to understand -- not something you want for a language that is already on the complex side. For example, RWH goes into an absurdly complicated sample scenario (a parser, iirc) to introduce the reader to monads, and never slows down to go back to first principles. YAHT does the same thing.
Monads have a reputation for being hard to understand, but I think this is mostly the fault of the current literature; monads are conceptually simple, yet every book manages to botch their explanation, making the learning curve particularly steep for non-FP programmers.
Have you tried Learn You a Haskell for Great Good (http://learnyouahaskell.com/)? It's not as comprehensive or full of practical examples as something like RWH, but as an introductory text, I think it does a better job than any other Haskell book I've come across at keeping challenging concepts understandable.
As an introductory text it actually does a better job than just about anything I've read. I'd rank it up there with The Little Schemer series for learning some aspects of functional programming. Miran is very good at explaining and writing.
I had forgotten all about that. I agree, it's very good. Its chapter on monads is decent, iirc. But the book is also quite light, being a basic intro to the language, and doesn't get into a lot of things you'd want to learn in order to actually get stuff done (concurrent programming, networking etc.).
I think parsers are a decent way to introduce monads, and I really like the way Graham Hutton's Programming Haskell does it, by demonstrating parsing without even using the word monad, and then in a later chapter on IO, saying "oh hey and this thing is called a monad and we used it once before!"
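A bare-bones sketch of that approach (my own minimal version, not Hutton's actual code): sequencing parsers threads the leftover input along, and that sequencing is exactly monadic bind, even if you never say the word "monad".

```haskell
import Data.Char (isDigit)

-- A parser consumes a prefix of the input and returns a value
-- plus the remaining input, or Nothing on failure.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> case pf s of
    Nothing     -> Nothing
    Just (f, r) -> fmap (\(a, r') -> (f a, r')) (pa r)

instance Monad Parser where
  -- Bind runs one parser, then feeds the leftover input to the next.
  Parser p >>= f = Parser $ \s -> case p s of
    Nothing     -> Nothing
    Just (a, r) -> runParser (f a) r

-- Consume one character satisfying a predicate.
item :: (Char -> Bool) -> Parser Char
item ok = Parser $ \s -> case s of
  (c:cs) | ok c -> Just (c, cs)
  _             -> Nothing

digit :: Parser Char
digit = item isDigit

-- "a digit, then another digit" reads like a recipe:
twoDigits :: Parser Int
twoDigits = do
  a <- digit
  b <- digit
  pure (read [a, b])
```

For example, `runParser twoDigits "42x"` succeeds with `42` and leftover `"x"`, while `runParser twoDigits "4"` fails. Only later do you reveal that the do-notation the reader has been happily using was monads all along.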
And I disagree a bit about RWH... I'm in the middle of it now and I think it's great, filled with meaty examples that aren't too complicated and are generally pretty concrete. Maybe I'm just not far enough in yet? There are bits where it goes off into the weeds (the chapter on JSON goes so far as to fiddle with character encoding, IIRC), but I just can't imagine a PragProg book doing much better without sacrificing depth.
The problem is that RWH teaches Haskell with "real world" examples that actually do useful stuff (such as the JSON chapter), and high-level theoretical concepts such as functors and monads are subtle and complicated enough, especially for FP newbies, that they need to be taught from first principles, not by wading into the deep end by showing a real-world implementation of a parser.
Personally, my "eureka" moment was reading the "IO Inside" [1] page on the Haskell wiki, which explains the IO monad in a way that made it click for me as an imperatively trained programmer. You can think of the IO monad as a way to force a compile-time dependency between I/O operations by having each IO-performing function take a "baton" value (the "IO" instance) as an argument, as well as return it. The explanation of how Haskell retains the appearance of purity in the face of impure I/O by passing an instance of RealWorld around is genius. That's a brilliant way to start out explaining monads, and it's something simple that you can use to explain the other monads, such as the state monad (which the IO monad is built on).
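A toy model of the baton idea (everything here is illustrative: `World`, `MyIO`, and `myPutStrLn` are made-up names, and GHC's actual RealWorld token is compiler magic, not an ordinary record):

```haskell
-- A toy "world-passing" model of IO: every action takes the world
-- baton and hands it back, so actions can only run in a fixed order.
data World = World { output :: [String] }

newtype MyIO a = MyIO { runMyIO :: World -> (a, World) }

instance Functor MyIO where
  fmap f (MyIO m) = MyIO $ \w -> let (a, w') = m w in (f a, w')

instance Applicative MyIO where
  pure a = MyIO $ \w -> (a, w)
  MyIO mf <*> MyIO ma = MyIO $ \w ->
    let (f, w')  = mf w
        (a, w'') = ma w'
    in (f a, w'')

instance Monad MyIO where
  -- Bind threads the world baton from one action into the next.
  MyIO m >>= f = MyIO $ \w -> let (a, w') = m w in runMyIO (f a) w'

-- An "effectful" action, modeled purely as a change to the world.
myPutStrLn :: String -> MyIO ()
myPutStrLn s = MyIO $ \w -> ((), World (output w ++ [s]))

greet :: MyIO ()
greet = do
  myPutStrLn "hello"
  myPutStrLn "world"
```

Because each action's output world is the next action's input, the data dependency fixes the order of effects, which is the whole trick; this is also visibly just the state monad with `World` as the state.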
I agree, and it's the one I recommend for experienced programmers. It has a K&R-like succinctness and clarity.
That said, it is about programming in Haskell and not about programming with Haskell; hence this series of articles where I try to look at the wider picture and some pragmatic issues.
Most of the mathematical culture inherited by Haskell can be learned elsewhere. Disclaimer: I'm quite a noob, but I'm pretty sure SICP and The Little Schemer are good complementary sources on the spirit of Haskell.
Have you tried Learn You A Haskell? It's a gentle and humorous introduction to the language, and I rather like its take on the more important typeclasses. As for monads in particular, I think the best introduction is "You Could Have Invented Monads."
Did anyone notice that I only mentioned monads once in the article? (NB a mistake upstream meant some material was dropped. There was a nice para there about "I remember the time before monads... We did the same stuff, only it was a bit clumsier...")
Monads are a useful technique, but they aren't everything!
Personally, I like the "Learn You A Haskell" style of carefully and very gently building up the basic language, with examples that are as simple as possible and deal with only the issue being discussed. My problem with Real World Haskell is that it combines learning the language with building stuff. So when it gets to something like monads, there is way too much noise surrounding the core concept being explained. It's fine to show practical examples, but not everything should be built on them.
Aside from that, I like the practically-oriented stuff in RWH. I would like to see an article on doing concurrent, parallel, distributed systems (Erlang-style) in Haskell.