Hacker News

At least point 4 seems to be about algorithms, not data structures? That is, there's nothing up front that says graph algorithms must be faster using mutable data structures, but that is apparently the case, so far.



Any imperative algorithm or data structure, e.g. union-find, can be implemented purely by using an IntMap as the "memory" and a State monad, with roughly equivalent performance (modulo a logarithmic factor for the map operations): https://hackage.haskell.org/package/union-find
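A minimal sketch of that approach, assuming containers' Data.IntMap and mtl's State monad (the linked hackage package differs in its details; this is just an illustration of the "IntMap as memory" idea, without the union-by-rank heuristic):

```haskell
import qualified Data.IntMap.Strict as IM
import Control.Monad (when)
import Control.Monad.State (State, get, modify, evalState)

-- The "memory" is an IntMap from element to parent; absent keys are roots.
type UF = IM.IntMap Int

-- Find the representative of x, compressing paths by writing the
-- root back into the map (mutation simulated via State).
find :: Int -> State UF Int
find x = do
  parents <- get
  let p = IM.findWithDefault x x parents
  if p == x
    then pure x
    else do
      root <- find p
      modify (IM.insert x root)  -- path compression
      pure root

-- Union by pointing one root at the other.
union :: Int -> Int -> State UF ()
union a b = do
  ra <- find a
  rb <- find b
  when (ra /= rb) $ modify (IM.insert ra rb)

connected :: Int -> Int -> State UF Bool
connected a b = (==) <$> find a <*> find b
```

For example, `evalState (union 1 2 >> union 2 3 >> connected 1 3) IM.empty` evaluates to True. Each IntMap lookup/insert costs O(min(n, W)) rather than the O(1) of a mutable array, which is the "logarithmic factor" gap.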

As to whether that actually counts as "purely functional programming", I can't say. Honestly the whole term seems quite misleading. https://chadaustin.me/2015/09/haskell-is-not-a-purely-functi...


He cites union-find because it is one of the few significant data structures where sustained research has neither produced an immutable version with the same performance as the mutable one nor demonstrated that such a version can't exist.

It's a great puzzle, but it's not a huge drawback in practice, because subbing in IntMap or something similar really _does_ give adequate performance for typical union-find workloads (unification or the like). Which is to say, the performance gap left to be closed is almost certainly _not_ where the important improvements to unification algorithms are made; those come from being able to impose much more particular structure on the kinds of things being unified, etc.


There are some graph algorithms that are probably most easily expressed with a mutable, garbage-collected graph, sure. This shouldn't be something you have to implement yourself, though; if you really need a garbage-collected graph data structure, surely you just use a library that provides one.
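To illustrate the library route in Haskell (though with an immutable representation rather than a mutable one): even the plain containers package ships a graph type with the standard algorithms, so common traversals need not be hand-rolled. A small sketch:

```haskell
import Data.Graph (buildG, reachable, topSort)
import Data.List (sort)

-- A small DAG over vertices 0..3: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
-- Returns (vertices reachable from 0, a topological ordering).
demo :: ([Int], [Int])
demo =
  let g = buildG (0, 3) [(0, 1), (0, 2), (1, 3), (2, 3)]
  in (sort (reachable g 0), topSort g)
```

Here `sort (reachable g 0)` is [0,1,2,3], and `topSort g` returns some valid topological order starting at 0. For inductive graphs with labeled nodes and edges, the fgl package is the richer option.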


The form of an algorithm is informed by the data structures that it uses and a data structure is informed by the algorithms that use it. Point 4 is implicitly about mutable collections.



