
> You can map, filter, reduce, chain, compose functions that all work with optionals, and lift functions that work on numbers (or any other type) to be functions that work on Optional<number> but none of that is even mentioned in this piece (likely cause then its harder to justify picking nullable types instead)

Maybe I'm missing something, but I don't see how that's special to option types. Here's a set of extension methods in Dart that provide the operations you describe:

    extension NullableExtensions<T> on T? {
      R? map<R>(R Function(T) transform) => this == null ? null : transform(this!);

      T? filter(bool Function(T) predicate) {
        if (this == null) return null;
        if (predicate(this!)) return this;
        return null;
      }
    }

    extension FunctionExtensions<T, R> on R Function(T) {
      R? Function(T?) lift() => (T? param) => param == null ? null : this(param);
    }

    extension NullableIterableExtensions<T> on Iterable<T?> {
      // Iterable already declares map, so the instance member wins at call
      // sites; invoke these explicitly, e.g. NullableIterableExtensions(xs).map(f).
      Iterable<R?> map<R>(R Function(T) transform) =>
          this.map((e) => e == null ? null : transform(e));

      Iterable<T?> filter(bool Function(T) predicate) =>
          where((e) => e == null || predicate(e));
    }
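At a call site these read much like the option-type operations. A quick usage sketch, assuming only the extensions above:

    void main() {
      int? parseOrNull(String s) => int.tryParse(s);

      print(parseOrNull('4').map((x) => x * 2));      // 8
      print(parseOrNull('nope').map((x) => x * 2));   // null
      print(parseOrNull('4').filter((x) => x.isOdd)); // null

      String describe(int x) => 'got $x';
      final liftedDescribe = describe.lift();
      print(liftedDescribe(parseOrNull('4')));        // got 4
      print(liftedDescribe(parseOrNull('nope')));     // null
    }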
Chaining has built-in syntax:

    int? maybeInt = ...
    print(maybeInt?.isEven.toString());
I'm not sure what you mean by "compose".

Of course, this is not entirely as expressive as option types because of the inability to nest, but the article is pretty clear that nesting is the major advantage of option types.




You missed out the most interesting one, which is 'reduce':

    extension NullableIteratorExtensions<T> on Iterator<T?> {
      T? reduce(T? Function(T, T) f, T? x0) {
        if (!moveNext()) return x0;
        final x = current; // read current once so the null check can promote it
        return reduce(f, (x0 == null || x == null) ? null : f(x0, x));
      }
    }
Not sure if this is the best notation but then I've never written Dart before.

Of course this is all a bit easier to read if we use some syntax specially built for Monads, such as Haskell's do notation:

    reduce f (head:tail) x0 = do { x <- x0 
                                 ; y <- head
                                 ; reduce f tail (f x y) }
    reduce f [] x0 = x0
which has the advantage that it works for all Monads.


> I'm not sure what you mean by "compose".

The simplest illustration is the ability to have multiple levels of semantically meaningful optionality:

Option<Option<int>> is a meaningful type, and None, Some(None), and Some(Some(1)) are all meaningfully distinct values. int? doesn't compose: nullability doesn't nest, so there is no int?? distinct from int?, and no good way to differentiate the cases that Option<Option<int>> represents as None and Some(None).
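To make the collapse concrete, here is a small Dart sketch: Map's index operator returns V?, so "key absent" and "key present but null" come back looking identical.

    void main() {
      final settings = <String, int?>{'timeout': null};

      // Both lookups produce plain null; an int? result cannot carry the
      // extra "present but empty" level that Some(None) would.
      print(settings['timeout']); // null (present, explicitly unset)
      print(settings['retries']); // null (absent)

      // The distinction has to live out-of-band, in a separate query.
      print(settings.containsKey('timeout')); // true
      print(settings.containsKey('retries')); // false
    }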


Ironically the main advantage of Monads is that Option<Option<int>> has a natural transformation to Option<int>.


The reason Option<Option<int>> isn't a problem is that it's a monad, so people will naturally transform it into Option<int> while writing their programs.

There is no irony there. If it wasn't a monad, the GGP would be incorrect, and ad hoc solutions would be very valuable.


In a language with Haskell's guarantees, you can mechanically derive, from the standard "bind" function, a function that converts "m (m a)" to "m a" [1]. That function is called "join", and it is a less well-known but equivalent way to write monad implementations, compared with the more famous bind.

(IMHO, join is a better way to understand the typeclass intuitively. The standard bind is better to program with in general use, but much harder to grok.)

So, in Haskell, it's more than just "the APIs would tend to encourage not nesting the types"... having a valid monad implementation means they really are equivalent.
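A rough sketch of that relationship in the thread's Dart terms (using a hypothetical Option class, not anything Dart ships): join is just bind applied to the identity function.

    // Hypothetical Option type, for illustration only.
    abstract class Option<T> {
      const Option();
      // bind: sequence a computation that may itself produce "no value".
      Option<R> bind<R>(Option<R> Function(T) f);
    }

    class Some<T> extends Option<T> {
      final T value;
      const Some(this.value);
      @override
      Option<R> bind<R>(Option<R> Function(T) f) => f(value);
    }

    class None<T> extends Option<T> {
      const None();
      @override
      Option<R> bind<R>(Option<R> Function(T) f) => None<R>();
    }

    // join : Option<Option<T>> -> Option<T>, obtained from bind and identity.
    Option<T> join<T>(Option<Option<T>> nested) => nested.bind((inner) => inner);

    void main() {
      print(join(Some(Some(1))) is Some<int>);        // true
      print(join(Some(None<int>())) is None<int>);    // true
      print(join(None<Option<int>>()) is None<int>);  // true
    }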

[1]: https://stackoverflow.com/questions/3382210/monad-join-funct...


They are not equivalent. Join can throw away information - and certainly does for (Maybe (Maybe a) -> Maybe a) and ([[a]] -> [a]).

They would be equivalent if there was an isomorphism, but of course there isn't.
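As a quick Dart illustration of the information loss (flattening is the list version of join):

    void main() {
      // Distinct nested lists collapse to the same flat list, so the
      // transformation has no inverse.
      print([[1], <int>[]].expand((xs) => xs).toList()); // [1]
      print([<int>[], [1]].expand((xs) => xs).toList()); // [1]
      print([[1]].expand((xs) => xs).toList());          // [1]
    }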


By compose I quite literally mean the compose function.

    const add3 = val => val + 3;
    const multiplyBy10 = val => val * 10;
    const subtract5 = val => val - 5;

    const doABunchOfMath = compose(add3, subtract5, multiplyBy10);

    const ninetyEight = doABunchOfMath(10);

    const optionallyNinetyEight = Some(10).map(doABunchOfMath)
That Dart already provides these extensions makes it feel like they are providing an option type, just through a more complicated mechanism than a simple class.

Nowhere in any of the functions I wrote above do I, as a user, have to worry about handling the case where no number exists. That's handled by the option type and signified in the type itself (the value in the option may or may not exist), and I can write my functions without ever having to handle that case.


Compose can also be implemented easily:

    extension NullableExtensions<T> on T? {
      R? map<R>(R Function(T) transform) => this == null ? null : transform(this!);
    }
    
    V Function(V) compose<V>(Iterable<V Function(V)> functions) =>
        functions.reduce((composedFunction, function) {
          return (V value) => composedFunction(function(value));
        });
    
    num add3(num val) => val + 3;
    num multiplyBy10(num val) => val * 10;
    num subtract5(num val) => val - 5;
    
    final doABunchOfMath = compose([add3, subtract5, multiplyBy10]);
    final optionallyDoABunchOfMath = (num? value) => value.map(doABunchOfMath);

    void main() {
      print(doABunchOfMath(10));             // 98
      print(optionallyDoABunchOfMath(10));   // 98
      print(optionallyDoABunchOfMath(null)); // null
    }
The nullable syntax (Type?) also makes it clear to readers that this is a type which may or may not contain a value. If you don't supply the trailing '?', Dart will enforce at compile time that the value must exist, and you can write your functions without worrying about nulls sneaking in where they're unwanted.

In effect, int? is very nearly Option<int>, except you cannot represent Option<Option<int>> with Dart's nullable syntax.

Check out https://nullsafety.dartpad.dev/ to play around with the possibilities!


>By compose I quite literally mean the compose function.

Outside of Lisp/Scheme, has anybody used those for anything other than examples in FP tutorials?

Every case I've ever seen is a non-production tutorial thing like "add3".


I do on a regular basis (front-end dev). As an example, I found this article[1] to be a nice demonstration of how it can be useful at a basic level for ReactJS. Then you just begin applying the same concepts one level higher for your use case and keep going.

Even if you don't use compose directly, the mindset of working to build systems in a composable manner is invaluable.

It's a mental shift that leads you to thinking about how a system can be represented as a series of inputs being piped from one function to the next between the boundaries of my code (server response data --> client side caller --> transformation functions --> local data store --> UI).

Achieving these pipelines requires expressing intent with functions, and with a self-imposed constraint of writing pure functions, you begin needing functors, applicatives, and monads to help store stateful information.

If at any point you need a new capability or need to add a new code path, you just modify the relevant pipeline(s), write any new (pure) functions along with any adapters you might need to inject it in your pipeline, and you're good to go. If everything is a pure function, testing and debugging instantly become easier. All this from wanting to build composable functions.

Some languages also support piping (which runs the functions in the reverse order that compose does), which can help visually, since the functions are then invoked in left-to-right order, which is how we read too.
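For illustration, a pipe helper is just compose with the order flipped. A rough Dart sketch; pipe here is a hypothetical helper, not a built-in:

    // pipe applies its functions left to right, the order the data flows.
    V Function(V) pipe<V>(Iterable<V Function(V)> functions) =>
        functions.reduce((composed, f) => (V value) => f(composed(value)));

    num add3(num val) => val + 3;
    num multiplyBy10(num val) => val * 10;
    num subtract5(num val) => val - 5;

    void main() {
      // Reads in execution order: multiply, subtract, add.
      print(pipe([multiplyBy10, subtract5, add3])(10)); // 98
    }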

[1] https://hackernoon.com/forms-of-composition-in-javascript-an...


Threading syntax is a giant compose. OK, but that's an operator.

In the TXR internals, I have a C function called chain, which composes N functions together (left to right, not right to left like typical compose functions). It is variadic: the end of the arguments is signaled by nao, a not-an-object constant:

  git grep '\<chain('
  eval.c:                       chain(car_f, eq_to_list_f, nao),
  eval.c:                        chain(cdr_f, consp_f, nao),
  eval.c:                        chain(cdr_f, cdr_f, null_f, nao),
  eval.c:                        chain(car_f, eq_to_quote_f, nao),
  eval.c:                                   chain(car_f, eq_to_list_f, nao),
  eval.c:                                   chain(cdr_f, consp_f, nao),
  eval.c:                                   chain(cdr_f, cdr_f, null_f, nao),
  eval.c:                                   chain(cdr_f, car_f, consp_f, nao),
  eval.c:                                   chain(cdr_f, car_f, car_f, eq_to_quote_f, nao),
  eval.c:                              chain(cdr_f, car_f, cdr_f,
  eval.c:  return chain(juxt_fun, apf_fun, nao);
  eval.c:  iter_from_binding_f = chain(cdr_f, iter_begin_f, nao);
  lib.c:  val pred_key = chain(default_arg(key, identity_f), pred, nao);
  lib.c:  val pred_key = chain(default_arg(key, identity_f), pred, null_f, nao);
  lib.c:val chain(val first_fun, ...)
  lib.c:                     chain(car_f, keyfun_in, nao));
  lib.h:val chain(val first_fun, ...);
  match.c:                                      chain(func_n1(cdr),
  match.c:                              chain(func_n1(cdr),
  match.c:                                        chain(func_n1(length_list),
  match.c:                                        chain(func_n1(rest),
The _f variables are pre-computed function objects, stored in globals, to avoid consing them repeatedly. That func_n1(cdr) seen in match.c could be replaced by cdr_f, not to mention by the func_f1(rest); it conses a new function object referencing the C function cdr each time it is called.



