Disadvantages of purely functional programming (flyingfrogblog.blogspot.com)
90 points by YAFZ on June 5, 2016 | 68 comments



It seems to me that yes, you need an "escape hatch" in an FP language to make your updates.

It would be nice if the language required such functions, and the modules in which they reside, to be flagged. (I don't know if Haskell does something like this with mutation, or not)

It also seems that an "actor model" would be a good way to encapsulate the updates in an otherwise FP program by having a loop/reduce/fold wrapped around the mutable data responding to request-events and generating responses. This allows the other pure/immutable/idempotent type of code to remain isolated from it.
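(For illustration, a rough Haskell sketch of that shape; the Req type and actor loop are hypothetical names, and the "mutable" counter is just an argument threaded through the receive loop:)

    import Control.Concurrent

    -- Requests the actor understands: bump the counter, or read it back.
    data Req = Incr | Get (MVar Int)

    -- The loop folds over incoming requests; "updating" the state means
    -- recursing with a new argument, so no mutation is visible outside.
    actor :: Chan Req -> Int -> IO ()
    actor chan n = do
      req <- readChan chan
      case req of
        Incr      -> actor chan (n + 1)
        Get reply -> putMVar reply n >> actor chan n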


Flagged like "State T" in function type, which Haskell has had since ~1995?


Or IO, or ST, or STM, or any number of other Haskell features that Roboprog has correctly predicted to exist.


Performance-wise, StateT is not an escape-hatch. It can be great for removing boilerplate, but it's simply another way of writing (s -> (a, s)).

IO, ST, and STM mentioned by tome better match the description of "escape hatch" - though for ST and STM they are very carefully shaped escape hatches.
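(To spell that out, a minimal sketch assuming the mtl package; the tick example is made up. No mutation anywhere, just state threading:)

    import Control.Monad.State

    -- Conceptually: newtype State s a = State { runState :: s -> (a, s) }
    -- get/put just pass the state value along; nothing is mutated.
    tick :: State Int Int
    tick = do
      n <- get
      put (n + 1)
      pure n

    -- runState (tick >> tick >> tick) 0  ==>  (2, 3)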


Harrop is known in the Haskell community for being a hater. Most of his remarks here are opinion, which is fine. Lots of people don't like Haskell - that's also fine, but pieces like these hurt the community because they will both push away newcomers and make industrial use more difficult.

Also, I've never needed an unsorted dictionary, and parallelism is actually great in Haskell. http://chimera.labs.oreilly.com/books/1230000000929/index.ht...


I like Haskell, and I use it once in a while. The thing is, I like knowing what my tools are capable of. That includes wanting to know their limits.

This piece, to me, seems well-researched and factual. It's not perfect, but it's good. It has plenty of good citations for areas that aren't simply opinionated. A piece that is honest about the shortcomings of a piece of software can only help that software's community.

You claim this piece will push away newcomers. As a newcomer to Haskell, I disagree. There are good things and bad things to every language. What actually pushes away newcomers are the pitfalls they're not aware of.

Also, consider the fact that even though you don't need something out of a language, someone may. And those people will now know to avoid Haskell for those things they can't get.


Thing is, Harrop runs a business using F# and is a known troll against anything competing with F#. His MO is to focus on minutiae and make them sound like bigger deals than they actually are. Then, when asked about a limitation of F#, he typically responds "well you'll never need that in the real world anyway". (See his responses to e.g. http://stackoverflow.com/questions/21170493/when-are-higher-..., trolling every single answer, including my own).

So, while potentially everything in the article is true, it's all likely overinflated and/or has workarounds. I'd not put much weight on anything in here.


It'd be nice for him to have disclosed that fact in his article.

It's a shame he'd rather flamewar over "best language" than honestly discussing the issues of both.


Some of the author's claims are correct; point 3, however, has no citations and is complete bollocks.

An immutable collection isn't a place to be mutated, a bucket to fill, so saying that immutable collections don't support concurrent updates is really stupid, because not allowing updates is precisely the point of immutable collections.

But then, I challenge the author to show me concurrent dictionary implementations that have the non-blocking, lock-free property. Such implementations exist, mind you, but they are an area of active research and not usually part of standard libraries. And there goes the author's argument about the established industry. One such implementation is the Ctrie (Scala's concurrent TrieMap) [1], a new and interesting lock-free Map that actually relies on techniques used in persistent data structures ;-)

On the other hand, shove a persistent data-structure into an AtomicReference and you get a non-blocking concurrent collection for free, which of course includes any kind of persistent Map implementation you want. And lo and behold, the layman can achieve non-blocking concurrent data-structures without a PhD.
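(Roughly, in Haskell terms; atomicModifyIORef' plays the role of AtomicReference's compare-and-swap, and the names here are made up:)

    import Data.IORef
    import qualified Data.Map.Strict as M

    -- A persistent Map behind a single atomically-updated cell:
    -- writers CAS the root pointer, readers never block.
    newShared :: IO (IORef (M.Map String Int))
    newShared = newIORef M.empty

    insertShared :: IORef (M.Map String Int) -> String -> Int -> IO ()
    insertShared ref k v = atomicModifyIORef' ref (\m -> (M.insert k v m, ()))

    lookupShared :: IORef (M.Map String Int) -> String -> IO (Maybe Int)
    lookupShared ref k = M.lookup k <$> readIORef ref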

There are of course gotchas. The real strength of mutable concurrent data-structures is that they can distribute the contention over multiple nodes, whereas with an immutable data-structure you end up driving all that contention to a single root. That can be really, really bad for writes. But then again, concurrent reads of immutable data-structures come basically for free. This means such a data-structure is bad for storage, but really good for message passing (like from producers to consumers). Hence actual databases making use of FP techniques, like for example Cognitect's Datomic, are implemented in a mixed style, best tool for the job and all that.

But instead the author's argument is just a rant that doesn't delve into any interesting details.

A similarly weak point is number 7. First of all, what the author describes isn't really parallelism; I can't quite follow his ramblings about absolute performance, but absolute single-core performance isn't the goal of parallelism. The goal of parallelism is scalability (i.e. the ability to throw hardware at a problem in order to decrease processing time). That will naturally tax the number of operations per second a single core can achieve, which is acceptable if you can make up for it with more hardware. And his linked benchmark uses at most 8 cores, which is far too few to support a sweeping generalization like that. The conclusion simply doesn't follow from the benchmark; it's basically just Amdahl's law in action, which he fails to recognize.
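(For reference, Amdahl's law: with a parallelizable fraction p on n cores, speedup = 1 / ((1 - p) + p/n). Even with p = 0.9, 8 cores cap you at 1 / (0.1 + 0.9/8) ≈ 4.7x, so an 8-core benchmark says very little about how an algorithm scales.)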

[1] http://lampwww.epfl.ch/~prokopec/ctries-snapshot.pdf


This is an ad hominem attack, and a fallacy. He may hate Haskell - so what? Are any of his critiques incorrect?


Harrop is a known troll. This is not an attack.

It's a fact.

> Are any of his critiques incorrect?

Probably. That's what makes a real troll effective.

Harrop had a business around OCaml and later F#. He often posted controversial statements to other FP communities, with links to his offerings attached. He started out not knowing much about those other languages or FP in general, but over the years he learned from people and his attacks got more sophisticated. His controversial marketing did not make him many friends, though.

The real or perceived drawbacks of purely functional programming in Haskell, and of its data structures, were among his favorite targets.

Before that he used similar tactics in the Lisp community. For example, I remember he once claimed that Lisp is not used, as could be seen from the fact that there were no Lisp mailing lists archived at GMANE. Well, that one was easy: I explained to him that GMANE has a top-level hierarchy for Lisp, where all the mailing lists are. But that did not stop him; he discovered or made up many more problems with Lisp. ;-)


It's not that they're incorrect - they are opinion disguised as fact. The point about lacking an unsorted map is really the only substantial argument, and only if your use case absolutely needs an unsorted map. I've never specifically needed my maps to be unsorted under the hood. The link about someone suggesting you turn off garbage collection doesn't even mention turning off garbage collection. My main point is that it's a very ranty post, as are most of his others. I'd like him to come back with actual benchmarked code.


If you read closer, it is actually neither of those things. OP makes it clear that it is an opinion.


I found another article linked from the one linked here more interesting, but it's not about "FP" but only about Haskell:

"Why is Haskell used so little in the industry?" - http://flyingfrogblog.blogspot.de/2010/05/why-is-haskell-use...

Go and grab some popcorn before you move on to the comment section...

It's from 2010, I'd be interested in an update, just out of mild curiosity.


The company I work at, https://obsidian.systems, currently employs 10+ full-time Haskell developers. I'm not sure of the exact number because we've been expanding lately. https://www.reddit.com/r/haskell/comments/49jjff/haskell_opp...

The answer to that 2010 blog post is that the ecosystem has dramatically improved in the last 6 years.


Sorry, but that's no proof.


The article wasn't particularly good, even in 2010 ...


Sadly, the author's chosen style is a rant and it is counterproductive.

There are plenty of words, and claims are made, but benchmarks or code are nowhere to be seen. Why should anyone take these claims at face value?

> Furthermore, most functional programming languages (OCaml, Haskell, Scala) are incapable of expressing a fast generic mutable hash table because they lack the killer combo of: reified generics, value types and a fast GC write barrier.

Sounds plausible? Maybe. "Incapable" is a strong word. I'm not an expert on mutable hash tables so I don't know for sure. If I claimed that you can't implement a mutable hash table without 2 square feet of badger fur and a pound of whale fat, would you believe me? I would like to see a citation.

I have written a lot of Haskell and there was never a situation when I said to myself "if only Data.HashMap (unordered-containers) was faster". Just make sure you're using the right tool for the job (which might not be Haskell).
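(For reference, a minimal sketch of using it; assumes the unordered-containers package, and the phones example is made up:)

    import qualified Data.HashMap.Strict as HM

    -- An immutable HAMT-based hash map: unsorted, O(log n) operations
    -- with a large branching factor, fast enough for most code.
    phones :: HM.HashMap String Int
    phones = HM.insert "bob" 2 (HM.singleton "alice" 1)

    -- HM.lookup "alice" phones  ==>  Just 1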

The author doesn't seem to write any code himself but instead links to code that other people have written and then makes claims about F# being better (I can't find the F# solution to parallel quicksort from one of the linked posts). I see very little value in that.


> benchmarks or code are nowhere to be seen

> The author [...] links to code that other people have written


> there was never a situation when I said to myself "if only Data.HashMap (unordered-containers) was faster"

Here's a Haskell programmer looking for a faster hashmap: https://github.com/ndmitchell/shake/issues/418

Shake in fact does use reified generics (a.k.a. Dynamic/Typeable) and a Value type, and recommends disabling idle GC. I'm not sure if there's a "GC write barrier"; Shake just uses MVar locks.
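(For anyone unfamiliar, a minimal sketch of what Dynamic/Typeable look like:)

    import Data.Dynamic

    -- A value boxed with a runtime representation of its type...
    box :: Dynamic
    box = toDyn (42 :: Int)

    -- ...recoverable only via a checked cast at the right type.
    unbox :: Maybe Int
    unbox = fromDynamic box   -- Just 42; at type Maybe String it's Nothing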


It's also setting a very high bar for functional languages to clear. Off the top of my head I can't even name a garbage collected language that has reified generics and value types.


> Off the top of my head I can't even name a garbage collected language that has reified generics and value types.

C#


This post lists 9 points, but the first 7 are all variations of "mutable data structures suck in a pure language!" and the other 2 are "look at all these fucking idiots who talk about purely functional programming!"

So the first 7 points are true because pure functional programming, by definition, does not support mutable data structures very well. It is what it says on the tin.

If you go on Stack Overflow you'll find no shortage of ill-informed, unhelpful input about imperative programming languages. I think it's very strange to call a social phenomenon that is universal in software and affects every language community a disadvantage of something as broad as "purely functional programming". If you want good discussion, go on the mailing lists or the IRC channels. #haskell is a very good channel with a lot of active professionals who love answering questions.

I am a professional Haskell programmer, AMA.


At least point 4 seems to be about algorithms, not data structures? That is, there's nothing up front that says graph algorithms must be faster using mutable data structures, but that is apparently the case, so far.


Any imperative algorithm / data-structure, e.g. union-find, can be implemented purely by using an IntMap for memory and a State monad, with roughly equivalent performance: https://hackage.haskell.org/package/union-find
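(A very rough sketch of the idea, assuming the containers and mtl packages: path compression but no union by rank, with parent pointers in an IntMap instead of a mutable array:)

    import qualified Data.IntMap.Strict as IM
    import Control.Monad.State

    type UF = IM.IntMap Int   -- parent pointers; absent key = root

    find :: Int -> State UF Int
    find x = do
      parents <- get
      case IM.lookup x parents of
        Nothing -> pure x
        Just p  -> do
          root <- find p
          modify (IM.insert x root)   -- path compression
          pure root

    union :: Int -> Int -> State UF ()
    union a b = do
      ra <- find a
      rb <- find b
      when (ra /= rb) (modify (IM.insert ra rb))

    -- evalState (union 1 2 >> union 2 3 >> find 1) IM.empty  ==>  3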

As to whether that actually counts as "purely functional programming", I can't say. Honestly the whole term seems quite misleading. https://chadaustin.me/2015/09/haskell-is-not-a-purely-functi...


The reason he cites union-find is that it is one of the only significant data structures where sustained research hasn't either produced an immutable version with the same performance as the mutable one or demonstrated that one can't exist.

It's a great puzzle, but it's not a huge drawback because subbing in IntMap or something really _does_ give adequate performance for typical union-find cases (unification or the like). Which is to say, the performance gap left to be closed is almost certainly _not_ where the important improvements to unification algorithms are made (which have to do with being able to impose a lot more particular structure on the types of things being unified, etc).


There are some graph algorithms that are probably most easily expressed with a mutable, garbage-collected graph, sure. This shouldn't be something that you have to implement yourself though; if you really need a garbage-collected graph datastructure, surely you just use a library that provides one.


The form of an algorithm is informed by the data structures that it uses and a data structure is informed by the algorithms that use it. Point 4 is implicitly about mutable collections.


Good points, although I agree that a lot of them boil down to similar statements.

I have settled on a style of doing "functional-like" programming in Python and C++. It's more about the high level architecture than low level coding details.

It means being very paranoid and rigorous about state -- but still having great tools to express it! For example: Using ZERO mutable globals, and passing state in as a parameter to functions. Using immutable/persistent data structures. These techniques are done very naturally and effectively in Python and C++. You just have to be disciplined.

To me there's no real advantage to expressing something like "split a string by a delimiter" in a purely functional style. Either way, you have a trivial referentially transparent function you can reuse without causing complexity in your program. You might as well do the obvious imperative thing.

However there IS a benefit to threading state explicitly throughout the application, and encapsulating it in OBJECTS (yes objects).

For me, the thing that sealed the deal against functional languages for "real work" was trying to write a production quality sh parser.

I went down a long path of trying to do this first in OCaml and then in Lisp. The VERY first thing that hits you over the head -- lexing -- is heavily and inherently stateful. To me, ocamllex and ocamlyacc are evidence of the poverty of purely functional solutions here. They just transliterate solutions from C. Well, I might as well use C then.

Actually, I decided to use C++, which was about my 5th choice of language. Aside from headers and compile times, it's a good choice. I use "functions and data" (a la Rich Hickey).

Except my functions and data are both CLASSES. Data objects are basically structs, except they can do things like print themselves and answer simple queries based on their values, which helps readability (e.g. 1 liners, like word.AsFuncName() ).

Function objects are simply classes with configuration passed to constructors. They usually have a single method, but multiple methods are also often useful. Calling this method is basically equivalent to calling a curried function. And this is supremely useful for both compilers and servers, because you often have config/params that are constant once you reach main(), and then params that vary per request or per file processed. Many functions depend on both, and it's cleaner to separate the two kinds of params.
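(In a language with currying you get that split for free; a sketch in Haskell with hypothetical Config/render names:)

    -- Config is fixed once at main() time; the word varies per call.
    data Config = Config { indent :: Int }

    render :: Config -> String -> String
    render cfg word = replicate (indent cfg) ' ' ++ word

    -- Partially applying render to a Config is the "constructor";
    -- the resulting function is the single-method object:
    --   let r = render (Config 4) in map r ["foo", "bar"]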

So both "functions and data" are effectively and usefully implemented as classes. The key is to make some classes like functions, and some classes like data. And have more of a bipartite dependency graph, where functions depend on data, and data depends on functions.

When all your classes are an equal mix of data and behavior, that's when they start getting "hard-coded" weird-shaped dependencies, and your program turns into fragile spaghetti. Functions and data are a useful architectural technique, and to me it doesn't have that much to do with Clojure or Lisp, although Hickey is certainly a great advocate.


I dabble occasionally in FP languages and I am a big fan of the ML family.

However, I came to the conclusion that the latest improvements in C#, Java and C++ are already quite good for doing FP-style programming.

Yes, I feel a bit dirty not being able to use more pure FP languages, but I have to work within the context of what customers allow us to do.

Although it feels like blasphemy, C++14/17 can be seen almost as an impure Haskell with curly brackets.


Yes I agree. It's MUCH easier to do FP in modern OO languages than OOP in functional languages. So with the OO languages, you're simply in a better position to solve real problems.

Learning OCaml really improved my appreciation of C++. C++ code is just all over the place, but you can write it in a pretty clean style (admittedly with considerable effort).

Related: https://akabe.github.io/evilml/


Have you ever learned Smalltalk?

Thanks to its support for blocks (lambdas), Smalltalk collections already offered LINQ-style programming (think algorithm.h), for example.

Nice link.


I actually haven't ever used Smalltalk, although I understand Ruby is considerably influenced by it (blocks, as far as I understand). Any pointers are appreciated though.

I'm more of a Python person, and Python doesn't really use blocks. I like the duality mentioned in this post: http://journal.stuffwithstuff.com/2013/01/13/iteration-insid... (In summary, it's for item in L: f(item) vs L.each(|item| ...).)

I don't really think of C++ algorithm.h as LINQ? I guess I don't use it that much. To me that is more "functional in the small", e.g. map or filter with lambdas, vs. "functional in the large". But I'd be interested to hear more details on this analogy.


There are lots of Smalltalk books here, including the original ones from Xerox PARC.

http://stephane.ducasse.free.fr/FreeBooks.html

The best free implementations to play around are Squeak and Pharo.

http://squeak.org/

http://pharo.org/

Or if you want to avoid installing anything, Amber gives some taste of it.

http://amber-lang.net/

Now, regarding the LINQ-like stuff: this kind of thing was already possible in Smalltalk back in those days.

    | vec sumPairs |
    vec := #(1 2 3 4 5 6).
    "sum the even elements"
    sumPairs := (vec select: [:x | (x \\ 2) = 0]) inject: 0 into: [:l :r | l + r].
Yes, that is what I was thinking of. The other part, immutable data and such, isn't really practical in large teams. But I do make use of it in personal projects.


Interesting points.

As a matter of style, I still prefer using currying over "one trick pony" (single method/function) classes, though. Sometimes this means I have several "layers" of references to the original function, with varying numbers of parameters applied as each layer further specializes (adds configuration). I'm doing most of this type of work in Javascript, though, so YMMV.

(PS - also did some work with Lisp in Uni back in the 80s, and sundry imperative/oop/func ; static/dynamic things since)


If you're using JavaScript, I think that's actually an advantage because your closures can return multiple named methods (in a JS dictionary). To me, closures in Lisp seem a bit impoverished since it's awkward to return more than one method. (Another problem is that I don't like reifying random local variables as program state. State is important; it should be both minimized and made explicit. Classes make it explicit.)

Classes with one method are useful, but so are classes with multiple methods.

I just stumbled across this post again, and HIGHLY agree with it. Yegge is basically outlining why JavaScript is a better language than Emacs Lisp. I had the same experience with hacking on femtolisp and trying to write my shell in it.

http://steve-yegge.blogspot.com/2008/11/ejacs-javascript-int...

OCaml and Lisp both have problems with polymorphic print. There were some recent posts on HN about the expression problem, and the duality between FP and OOP, which I liked. But I have to say that print should be polymorphic, full stop. And if your functional language has problems with that, that's the language's problem and not my problem :)

Related:

http://stackoverflow.com/questions/2497801/closures-are-poor...

http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent


Back at CSU in the 80s, the Lisp exposure we had was so-so. We learned about pure functions and a tiny bit about higher order functions, but that's about it. Nothing much about closures explicitly.

It's funny that one of the other commenters mentioned Smalltalk and blocks. Version 5 of the "Clipper" language I was using at work, which came out around 1990, added (borrowed, stole) blocks to an "XBase" style language. I didn't quite know what to do with them, so I used them to simulate virtual method tables in a non-OOP language :-)

It wasn't until I started tinkering with Ruby around 2006 or so that all the block / closure / lambda stuff REALLY sunk in ("Hmm. I could have been doing this - subroutines as closures, and not just "blank" call-backs - in all the Perl 5 code I've written the last 8 years"), some 20 years or so after I was in college. They did a good job of brainwashing proto-OOP into us back in the mid 80s.

Never used any Lisp macros, but "eval" and dynamic languages sure help shorten many tasks vs static types bondage and discipline. There's another Yegge essay on that sort of thing ("code compression"). The PC "4GL" type languages I used at work in the 80s were dynamic, with runtime types, and quite productive for the jobs they were designed for (pre MS-Windows...). I'm tired of the static type authoritarians punishing us all for the sins of C/C++. (I need to stop before I go into another rant)


In Common Lisp it is easy to return multiple closures.

Emacs Lisp now has lexical closures, too.

Common Lisp has no problems with 'polymorphic print'.


Fair enough, but the appeal of Lisp to me is that it's a small axiomatic core, something that not only fits in your head, but the whole implementation fits in your head too. I wanted to use it to bootstrap languages, with no dependencies.

But it turns out that this axiomatic core is too impoverished for a lot of programming. You DO need something like Common Lisp on top. And I'm not really willing to open that can of worms, both for this specific project, and in general.

I would liken it to the language "Factor", which I've also experimented with... they took a specific and elegant paradigm, tried to extend it to all paradigms, and ended up with a mess. A square peg in a round hole. Factor is basically Forth with OOP and a whole bunch of other stuff.

The other problem I pointed out is that the first thing you hit when writing a language is the lexer. I don't see anything that Lisp offers you in that respect. I looked at how Julia does it:

https://github.com/JuliaLang/julia/blob/master/src/julia-par...

Julia is actually bootstrapped in femtolisp. If you look at the lexer, it just looks like C code written with Scheme syntax. There's nothing helpful about this. It doesn't help you manage state. I might as well just write C.

(And actually I chose the code generator re2c, which is BETTER than C.)


> whole implementation fits in your head too

Of the axiomatic core. But not of any programming language.

> But it turns out that this axiomatic core is too impoverished for a lot of programming.

It's not even a programming language. It's just an axiomatic core. If you try to use it for programming you must be doing something wrong.

> is writing a lexer. I don't see anything that Lisp offers you in that respect.

Lisp is a language family, not a language implementation with a standard library that happens to include a lexer.

If you mean Common Lisp as a programming language with implementations, there are portable lexers.

http://www.cliki.net/LEXER

https://github.com/drewc/smug

https://github.com/lispbuilder/lispbuilder

Writing your own lexer in Lisp shouldn't be too hard. People have written applications in Lisp which include lexer functionality.


I don't think your opinion about the axiomatic core matches that of all, or even most, Lisp programmers.

And I'm not saying it's not possible to write lexers in Scheme or Common Lisp. I'm saying that there's no real benefit to doing so over C or even Python. You're using the exact same algorithms and just transliterating them into a different language with more awkward syntax for the problem. The code isn't any shorter.

Related to the other commenter as well, the production quality Julia parser in Lisp doesn't use parser combinators. It uses recursive descent. It's C code written in Lisp syntax.


> I don't think your opinion about the axiomatic core matches that of all, or even most, Lisp programmers.

I have never seen anyone developing software with the 'axiomatic core'. But I see Emacs Lisp, Common Lisp, Scheme, etc. developers.

> And I'm not saying it's not possible to write lexers in Scheme or Common Lisp. I'm saying that there's no real benefit to doing so over C or even Python.

Depends on what level you program at. With Lisp it is very easy to develop a compact syntax that expresses ___domain-level concepts. Interactive development is many times more convenient than in C.

> You're using the exact same algorithms and just transliterating it into a different language with more awkward syntax for that problem. The code isn't any shorter.

I have a surprise for you: it's perfectly legal to write imperative code in Lisp. Lisp is at its heart a multi-paradigm language with the option to add many other paradigms.

With Python you mostly develop object-oriented code, and in C mostly imperative code. With something like Common Lisp you can do what you want.


Yes, you are definitely on the right path, but don't you sometimes notice that "get-this-stuff-working-by-tomorrow" productivity is dinged by always having to do such deep and ordered design, as well as working out all the other typical programming issues?

I went down an almost identical path many years ago, but found that after a certain point, maintenance (and just being away from the codebase for a couple of months) made the time needed to make coherent mods that fit the design quite problematic.


Yes, that's definitely an issue in general. But in this case, my shell has some unique properties, which make it both necessary and possible to do a good job of software design:

1. Shell is a fixed problem ___domain. It's not changing, so you can spend time getting the components and interactions right, and know that the rug won't be pulled out from under you. (Though I am planning significant enhancements; it's not a "nostalgia" project).

2. But it is a complex problem ___domain. What I'm writing is a superset of full-fledged POSIX shell, i.e. not a toy. You really do need to control complexity... or you'll end up with something like bash (at least 175K LOC).

3. It's "systems code". I find that systems code needs to be more highly reliable than application code (or at least the market tolerates less sloppy systems code than app code). So good software architecture matters.

4. It's not a commercial product. I am doing all the design by myself, for better or worse, and the only deadlines are my own.

I think my main thing now is designing with ZERO mutable globals. I've probably been doing this for about 5 years now. Once you get in the habit, and get a few idioms down, I find it makes things go a lot faster. Things break less.

My main issue now is C++ headers and compile times. I started the current codebase with C++, but then switched over to Python to figure out the design. There were a lot of nontrivial issues to solve. Admittedly, implementing everything twice is pretty extreme! But maybe not -- I can write Python so fast that it saves time overall.

But that issue is entirely orthogonal to FP vs OOP. As far as that is concerned, Python and C++ are identical -- they have an imperative history, with significant OOP additions, and support a functional-in-the-small style (lambdas, list comprehensions, etc.). Functional-in-the-large is already covered by classes, as mentioned.


I suggest giving some form of parser combinators a try for a more functional approach to parsing/lexing.


Can you give me some pointers to some production quality parsers written with this style? By production quality, I mean "used in practice by a significant number of users".

I've looked at dozens of parsers, both for shells and for other programming languages, and none of them use this technique. I've studied many forms of parsing, and written my own lexing and parsing library/language. This included going on a pretty deep dive into PEGs.

I'm going to be a bit rude and say that parser combinators look appropriate for toy programs only. Shell is the opposite of a toy language... it's fairly big, and has a very precise spec with all the hairs and scars of evolution.


Luckily Haskell supports impure functional programming too ...


I don't get why people would use functional programming for anything. It's both more difficult and slower.


I use functional style in C++ with lambdas and iterators. It is anything but slow. I use functional style in Rust. It's blazing fast. A huge body of problems can be handled by simple recursion, but I've yet to see non-functional programmers use it. I don't know why.

Functional style is simply better and is applicable in many languages. Modern C++ is a great example. Throw away much of OOP's clunkiness, use simple functions, win! (OOP has its place, but in small quantities and with moderation.)

With respect to purity I'm on the sidelines. I think what's far more important are algebraic data types, pattern matching and value-returning conditionals (values, not statements!). Rust has them, and it is great and not pure.
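(For the record, all three in a few lines of Haskell; Shape and area are made-up examples:)

    -- Algebraic data type...
    data Shape = Circle Double | Rect Double Double

    -- ...consumed by pattern matching...
    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h

    -- ...and a conditional that is a value, not a statement.
    describe :: Shape -> String
    describe s = if area s > 100 then "big" else "small"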


> Huge body of problems can be handled by simple recursion but I've yet to see non-functional programmers use it. I don't know why.

Stack overflows.


"Functional" languages typically have tail-call optimization and heap-allocated frames. Maybe OOP is a workaround for these missing features...


Why doesn't your language have TCO?

"Nobody uses recursion."

Why doesn't anybody use recursion?

"No TCO."


Tail-call optimization eliminates those. Assuming your language provides it, and assuming you trigger it correctly. ;)

http://stackoverflow.com/questions/32164370/does-elixir-infi...
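(In GHC Haskell, for instance, "triggering it correctly" also means keeping the accumulator strict, otherwise you trade stack frames for a pile of thunks. A minimal sketch:)

    {-# LANGUAGE BangPatterns #-}

    -- Not a tail call: the (+) still has work to do after the recursion
    -- returns, so each element costs a stack frame.
    sumNaive :: [Int] -> Int
    sumNaive []     = 0
    sumNaive (x:xs) = x + sumNaive xs

    -- Tail call with a strict accumulator: compiles to a loop.
    sumAcc :: Int -> [Int] -> Int
    sumAcc !acc []     = acc
    sumAcc !acc (x:xs) = sumAcc (acc + x) xs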


TCO eliminates some types of recursion. Not all of them.


Eliminates some types of stack usage, not "recursion".


A significant part of my work is in a technical/numerical computing ___domain where Fortran is still prevalent. Most students who now enter the field have a tendency to start with Matlab, or feel like they are doing something truly modern if they use C++.

For various reasons, I chose to use OCaml, and from my experience, the difficulty part is only true insofar as the learning curve goes. Once past that, FP-style problem-solving is spectacular in that the vast majority of time you spend goes not into the coding, but rather (a) thinking about what exactly your problem is, and (b) how to decompose and solve it. I can't tell you how many times in Matlab, people start writing code before they've even fleshed out (a) and (b), simply because there's so much drivel and overhead that one naturally tends to feel they're "getting things done" by jumping in and writing this code.

Once I had a sufficient grasp of OCaml concepts and syntax, I would find myself getting frustrated by how little code I might get done in a day. Upon stepping back and reflecting, that was because I had not worked my way through (a) and (b), and that is where the human thinking is really required. I have since been delighted at the utility of each line of code I write and am similarly annoyed by the low signal-to-noise of something like Matlab.

With respect to slower, I think my points above address "slowness" in terms of development time. In terms of runtime, it's very much a function of your choice of language/ecosystem and paradigms. If GC and typical functional overhead are a constraint in your problem ___domain, one can always bring over a functional style into modern languages like Rust, where performance and lack of GC are first-order design goals. But you can't tar all of FP with the label of "slow" any more than anyone can make a blanket statement that "solving matrix problems is slow". It just doesn't make sense.

Lastly, as others have pointed out, Jon Harrop's points refer to purely functional styles and languages. In the end, striving for functional purity may not make sense. To illustrate, one can easily drop into an imperative style where needed in OCaml, for example when doing I/O. I use this quite a bit. But make no mistake that adopting a functional approach where it makes sense can pay dividends.


I rarely use FP but I certainly see its appeal for certain things.

I think the concept of immutability confuses people a bit. It really clicked for me when I stopped thinking of it in terms of things not being able to change, and instead thought of every version of a thing as having a different name.

Functional programming from that perspective is like version control for your variables. It makes explicit, not only which variable you are accessing but which version of it.

The reason it seems like you need to copy a variable each time you modify it is just to give each particular mutation a different name. Note that the compiler can optimize things; it doesn't need to keep every version in memory. If it sees that you are not going to reference a particular version again, it might just physically overwrite it with the next one, so that in the background "var a = i; var b = a + j" might run as something like "var b = i; b += j".


First, functional programming is an umbrella term. This paper talks about strict FP, which is basically "no mutation", so I'll talk about that.

It's clear that some other aspects of FP, such as first-order functions, closures, etc are very useful in practice.

Immutability does have its advantages in that it supplies strong guarantees about what your code does. It avoids spaghetti code where everything can and does mutate everything else.

Myself, I tend to use it whenever its advantages outweigh its inconveniences. Immutability is especially handy at interface boundaries.


I agree that immutability is very useful, but you don't really need any kind of FP to use immutability, do you?


If your data is immutable, in what sense is your code not 'Functional Programming?'

"It's OO, and it can't be both OO and FP."

Hmmm.


I was going to say the same thing; it's certainly possible to write immutable object-oriented code.


Indeed, it's more of a pattern.


You're confusing first-class and higher-order functions.


Programming is both an art and a science. Art here means we don't have hard scientific recommendations for all important cases; we have to learn tricks of the trade and use subjective judgement, so naturally our programs differ widely.

Art - as something subjective - also means opinions differ on difficulty and slowness. That's perhaps why the downvotes: a statement like this, presented as objective fact, draws scepticism from those with different experience in the subject.


Difficulty - Only at first. Once you get over the hump and learn to think functionally, procedural/OO starts to look like a convoluted mess.

Slower - Optimize for developer time. Same reason most people write their backends in Python/Ruby instead of C++.


When it comes to expressing logic in small, composable, understandable pieces, nothing else can compete.

Unfortunately, it does have the challenges listed in the OP, so it seems to make most sense where either (a) it is ok to leave performance on the table in the interest of clarity or (b) you create a little sandbox in which you use functional programming for clarity, and something outside your sandbox compensates for the performance cost.

React and Om are examples of (b). They let you write pure functions that return the entire DOM. Then, outside that sandbox, a DOM differ makes it so you don't force the browser to re-render the entire DOM every time.


I think it's also interesting to note that sometimes when choosing immutability as the default, which gives a slight performance penalty for basic operations compared to doing normal mutation, you end up being able to save lots of performance on a larger level. React wrappers in ClojureScript, such as Om, are a good example of this - since data is immutable, shouldComponentUpdate is automatically and trivially implemented for you, leading to oftentimes better performance than straight React. I reckon this would also be the case with the lock-free code you write in Clojure, etc.
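(The trick, stripped to its core, in a loose Haskell sketch rather than Om's actual implementation: with immutable data, "did anything change?" collapses to a single comparison.)

    -- Hypothetical view model for a component.
    data Model = Model { count :: Int, label :: String } deriving (Eq)

    render :: Model -> String
    render m = label m ++ ": " ++ show (count m)

    -- shouldComponentUpdate for free: skip re-rendering unless the
    -- new value differs from the old one.
    shouldUpdate :: Model -> Model -> Bool
    shouldUpdate old new = old /= new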


There are cases where referential transparency is essential, so it's worth tolerating all the FP issues to get there.

Also, there are a lot of cases where you need a total language with all its guarantees.



