Hacker News
Linear types make performance more predictable (tweag.io)
163 points by eatonphil on March 14, 2017 | 94 comments



As much as I want to use Haskell for real, instead of just reading about and toying with it, I think the examples in this article are a perfect illustration of why I find learning Haskell borderline obnoxious at times.

  data S a
  data R a
  
  newCaps :: (R a ⊸ S a ⊸ IO ()) ⊸ IO ()
  send :: S a ⊸ a ⊸ IO ()
  receive :: R a ⊸ (a ⊸ IO ()) ⊸ IO ()

  In this API, S is a send capability: it gives the ability to send a message. R is a receive capability...
At least here the author reasonably explains the snippet (this article is actually just fine generally), but so many of the Haskell examples online are just like this. Single letters for every type/variable. Why aren't S and R just called SendCapability and ReceiveCapability? It's almost like the Haskell community takes pride in being as obscure as possible, which I think may well be true.


It's uncommon to see single-letter datatype names; the general idea is that designing your programs with a data-type orientation should provide the kind of description you're looking for. Type variables, on the other hand, are very often single letters, since they're not very important to understanding (the exception being exceptionally unusual types).

Single-letter bindings are not common in production Haskell code, though there are exceptions where a single-letter name is clear enough and not easily confused, like this for instance:

    -- | Produce @Nothing@ if the first argument is @False@; produce
    -- @Just a@ if the first argument is @True@.
    consMaybe :: Bool -> a -> Maybe a
    consMaybe b v = guard b *> pure v
but usually, most production Haskell (at least where I work and write it) looks like this (minus the top-level function documentation and all the surrounding types and imports, which are important for context):

    makeKey :: FormatTime t
            => t    -- Formattable timestamp, usually obtained with getCurrentTime
            -> Text -- AWS region, e.g: us-east-1
            -> Text -- AWS service, e.g: s3
            -> Text -- AWS signing key protocol, e.g: aws4_request
            -> Text -- AWS secret access key
            -> ScopedSigningKey
    makeKey ctime region service protocol key = ScopedSigningKey scope' sigkey
      where
        dateStamp = formatTime defaultTimeLocale "%Y%m%d" ctime -- YYYYMMDD
        scope'    = Data.Text.intercalate "/" [ pack dateStamp, region, service, protocol ]
    
        sign :: ByteArrayAccess ba => ba -> C8.ByteString -> Digest SHA256
        sign k m  = hmacGetDigest $ hmac k m
    
        kDate     = sign (encodeUtf8 $ "AWS4" <> key) (C8.pack dateStamp)
        kRegion   = sign kDate    (encodeUtf8 region)
        kService  = sign kRegion  (encodeUtf8 service)
        kSigning  = sign kService (encodeUtf8 protocol)
        sigkey    = decodeUtf8 $ convertToBase Base64 kSigning


To be fair, this article is really pretty "cutting edge" as far as Haskell goes. It describes a new feature they're planning on adding to the Haskell compiler, one that significantly extends the type system in an interesting way. I think it's fair for them to assume comfort and familiarity with Haskell syntax and code when writing this article; their audience is folks who are likely to know about and use somewhat esoteric GHC extensions...


> Why aren't S and R just called SendCapability and ReceiveCapability?

Let's see how it reads if they were?

    data SendCapability a
    data ReceiveCapability a
  
    newCaps :: (ReceiveCapability a ⊸ SendCapability a ⊸ IO ()) ⊸ IO ()
    send :: SendCapability a ⊸ a ⊸ IO ()
    receive :: ReceiveCapability a ⊸ (a ⊸ IO ()) ⊸ IO ()
Personally, I find that less readable - most of each line is taken up by the two names, it's harder to keep track of the shape of it (in particular, matching parens), and it doesn't communicate anything more when you'll need to read the accompanying paragraph anyway.

That's not to say S and R are good names in a real program - name length should track scope, and type names have pretty broad scope (global but module-constrained), so they should usually be comparatively long. `SendCap` and `RecvCap` might be a good compromise, in that context. Type variables (like the `a` here) are local to the individual type signature (and, with -XScopedTypeVariables, possibly the accompanying definition) and so should be shorter. One letter may be too extreme - `a` does a fine job conveying "whatever you care to be dealing with" but `msg` might be even better in this particular case.
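With that compromise applied, the article's snippet would read something like:

    data SendCap msg
    data RecvCap msg

    newCaps :: (RecvCap msg ⊸ SendCap msg ⊸ IO ()) ⊸ IO ()
    send    :: SendCap msg ⊸ msg ⊸ IO ()
    receive :: RecvCap msg ⊸ (msg ⊸ IO ()) ⊸ IO ()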


While I agree that if the names are too long, it becomes difficult to discern the "shape" of the function types, in my mind it's more important to be able to tell what the symbols mean. One-letter names are the bane of my existence...

I agree that a good compromise is to shorten the names as long as they are still easily understandable, exactly like `SendCap` and `RecvCap`.


> Why aren't S and R just called SendCapability and ReceiveCapability

Because each one is used only 3 times, in the same code snippet. Having long JavaStyleNames would make the provided code harder to read, not easier. Additionally, S and R are just placeholder types for this snippet; the whole point is you're supposed to replace them with some "actual" type.

> It's almost like the Haskell community takes pride in being as obscure as possible

Production Haskell code doesn't use 1-letter names like tutorials often do. Tutorials expect you to have a non-zero attention span so you can keep track of a short name across a few lines, but I've never seen this sort of thing used for type constructors in an actual library.

People with a weakly typed/untyped background also often make the mistake of confusing specificity with obscurity; Haskellers use "obscure" terms like "Functor" not because they're trying to confuse you, but because it would be wrong to use a "simpler" term. From the Haskeller perspective, it's better to spend a bit more time learning the correct concept than to save a bit of time by using a crutch intuition that will ultimately fail you.


Specific examples are easier to understand than abstract rules - you need a mathematical mindset to get abstractions at all.

The best way to get a concept across is to list enough examples that the reader can do their own generalization. Just tell me the abstract rule, and I will be mystified.

In math, this is why it's important to do the problem set. But you don't get to assign homework to readers in most places; you just gotta give a lot of memorable examples in the text.


Frankly, just saying that you expect people to pay attention, and that's why the names aren't typed out, seems lazy. If the snippets are as simple as people claim, then the names won't ever reach the Java-style AbstractionFactoryGeneratorSingletonInterface that is difficult to parse because of its name. If they did end up getting that long, that would be more of an indication that the pieces of your code are doing far too many things, or are too complicated an abstraction for people to understand the snippet easily anyway. Not specifying what you mean when teaching/demonstrating has no benefits in my eyes.


Except they did specify what they meant, and longer names would have taken attention away from the syntax, which was the point of the article.

Entirely and absolutely appropriate for the setting and audience.


I agree this wouldn't be ok if you were reading the implementation of an actual production system - there the semantics matter more - but in this example, I think what the author did is fine. It forces you to focus not on the semantics but on the shape of the types and functions, which is what matters here.


The Haskell I've seen often looks a lot like mathematical equations to me, where the tradition is terse notation, with single-letter variables (possibly with primes or other modifiers) being the norm.


I've thought that Haskell should consider resurrecting the old convention of putting a commented header at the beginning of their code that explains the single-letter notation in a more friendly way. Then they can have their terse syntax but make things a little friendlier for newcomers.


Javadoc-like comments ("Haddocks") are pretty widely used, and Hackage (the Haskell library index) lets you browse them and so on.

See [0] for a good example of well-commented but idiomatic Haskell (a snippet from the popular Megaparsec parser library), which includes things like:

    -- | ParsecT e s m a is a parser with custom data component of error e,
    -- stream type s, underlying monad m and return type a.
    data ParsecT e s m a
Admittedly, that is a monad transformer, but one-character type variables with human-readable docs are both good for newcomers and low-friction for people who are used to the library in question.

(Strawman alert.) It is significantly more work to parse (ha) something like

    ExceptT fooException (StateT fooWorldState underlyingMonad) a
when compared to

    ExceptT e (StateT s m) a
[0]: https://hackage.haskell.org/package/megaparsec-5.2.0/docs/Te...


It's also worth noting that there are conventions around type variable names. If you see an `m`, it's probably a Monad; if you see an `s`, it's probably a stream or a struct or state; `i` is likely being indexed over; the code probably doesn't care what `a` is - the surrounding code can pick it freely. If you need another, advance to the next letter (`b`, `c`, less often `n`, `o`).
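For instance, this signature from the transformers library reads naturally once you know the conventions:

    -- m is the underlying Monad, s is the state being threaded through,
    -- and a is whatever the caller wants back.
    evalStateT :: Monad m => StateT s m a -> s -> m a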

Of course, Edward Kmett had a story about winding up with his code telling him `i m a s t a b u`...


f for a functor is also very common, as is k for "continuations", very generally speaking (I think this is a borrowing from Lisps?): one often sees monadic bind defined as

    m >>= k = ...
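For a concrete instance of that naming, here's a standalone sketch equivalent to Maybe's bind:

    -- m is the monadic value, k is the continuation it feeds into.
    bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
    bindMaybe m k = case m of
      Nothing -> Nothing
      Just x  -> k x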
In a somewhat more esoteric vein, I love the use of w for comonads: the rationale is apparently that it looks like an upside-down m.

> Of course, Edward Kmett had a story about winding up with his code telling him `i m a s t a b u`...

Is that an indexed lens ... parameterized over a monadic type? I'd like a link to the video/article.


Certainly `f` for functor. `k` is more for values than types (you usually know a continuation is some kind of (->)) but they'll often return `r`.

> I love the use of w for comonads: the rationale is apparently that it looks like an upside-down m.

That's my understanding, yeah.

> Is that an indexed lens ... parameterized over a monadic type?

Presumably further parameterized by the focus, the source structure, the target structure, the focus again, the transformed focus, and some third structure.

> I'd like a link to the video/article.

Me too :-P I see a couple references in a google search but I'm not sure where I first heard it.


I want to disagree a lot - the first example is way easier to parse than the second. The names tell me approximately the scope the function applies to, what I'm looking for in the code as "relevant", and which bits are doing what.

The second tells me nothing - how is this intended to be used? What scope is it intended to be used in? Why was it created? Where does it fit in the program flow?


There is a certain amount of familiarity expected, sure. But anyone who has read, say, the Monads chapters in Learn You A Haskell (which is about as nonintimidating as a book can get!) should be able to guess what those types - s and so on - mean. It's akin to the mClassMember/THIS_IS_STATIC_AND_PROBABLY_FINAL conventions in Java (don't quote me on that).

In other words, my thesis is that if you aren't familiar with State, both State s and State fooAppStateType are fairly opaque.

In any case, most of the time one works with a specialized instance; for instance, one might have something like

    newtype App a =
      App {
        runApp ::
          ReaderT Config
            (StateT AppState IO)
            a
      }
and then there's no problem at all.

If you're trying to understand how a library works when you're not very fluent in the language, that's what the documentation is there for! Any good library (and this is an area where there's lots of room for improvement) should document the types it uses and the idioms that it lends itself to, and as one accrues experience, the whole "learning to see common patterns" thing takes over. The great thing about a helpful compiler of the Rust/Haskell ilk is that you make fewer mistakes in between (modulo occasionally horrific error messages).

And I'm willing to vouch for this not being expert (ha) blindness - I have only recently become comfortable enough with these things to talk about them.


Very clever way to make sure the programming language never gains any significant traction. ;)


Haskell: avoid success at all costs.


Obviously the solution is to encode success into the type system, such that functions which increase the chance Haskell will become a mainstream programming language fail to type-check.


People keep saying this as if it were meant to be parsed "(avoid success) (at all costs)", but in fact it's supposed to be "avoid (success at all costs)", in other words "principles > popularity".


I've always understood it to be, simultaneously, jokingly the former and seriously the latter.


Only if your program doesn't type-check ;)



It certainly inherits a lot from that culture, but there's a fast-growing number of industry programmers using Haskell who are bringing more approachable styles to the language; most notably: using partial functions as little as possible, using let bindings more often, using applicative-do, making variable names a bit more descriptive when it makes the waters clearer, using descriptive data type names, documenting the fields of records, documenting the arguments to a function if the types don't make it crystal clear, etc.
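For instance, "documenting the fields of records" usually looks something like this (the record and field names here are made up for the example):

    data SigningScope = SigningScope
      { scopeRegion  :: Text -- ^ AWS region, e.g. us-east-1
      , scopeService :: Text -- ^ AWS service, e.g. s3
      , scopeDate    :: Text -- ^ Request date, formatted as YYYYMMDD
      }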


Interestingly, what you've described sounds a lot like most of the OCaml I've come across (and written). I suppose that makes sense, really!


That's a relief. All the OCaml that I've seen has been a horrible mess of single character identifiers without comments or type annotations...


Is there a style guide for that? Looks incredibly useful (i.e. a Haskell pep8)



That is, they inherit the worst aspect of math notation.


I always thought it's for the same reason that, when doing math on paper starting in elementary school, you'd call things x, y, z, h, w, w2... etc. instead of triangle_width... etc.

In math-ish problems the naming-to-algorithm noise ratio needs to be kept low, otherwise you can't read through it.

You want to elevate what's happening between the types/vars in the reader's attention, as opposed to being detailed in naming (which is less of a problem to recall).

It's the same in other languages - just look at a SHA calculation in C or anything else - it's the same math-ish style (and it's good).

ps. another answer may be that writing code this way uses less ink! :)


Reminds me of Baez's Rosetta stone paper [1]. It's about some common threads in physics, math, logic, and programming. A completely different set of connections than what you'd see in Gödel, Escher, Bach.

I kinda sorta remember linear logic having the option of an infinite source, and an infinite sink, for things you might want to deal with. A business has an infinite number of customers, but each specific transaction needs to be handled in a very specific way. You give me a dollar, and I give you a hamburger or french fries, your choice.

A light, accessible overview of linear logic is here [2]. That has a few nice, simple examples. It's easier than the wikipedia article.

[1] http://math.ucr.edu/home/baez/rosetta.pdf

[2] http://www.csl.sri.com/~lincoln/papers/sigact92.ps.gz


Basically, non-linear variables are those with infinite duplication and sinks (they're "comonoids").


Another option is affine types that give you "at most once" semantics, in contrast to linear types' "exactly once". If your language has non-termination then affine is easier to enforce. (Cf Rust.)


Sure, but Rust benefits from must_use annotations, which effectively turn those annotated affine types into linear types. Turns out they are both useful, in a very overlapping Venn-diagram kind of way.


How does this deal with non-termination? Eg.

   declare x (must_use);
   P;
   use(x)
(I apologize for the non-Rust-conforming pseudo-code.) The type-checker won't be able to tell if P terminates or not, whence x's use can't be enforced. That's a direct consequence of Rice's theorem (which in turn follows directly from the non-computability of the halting predicate).


This is a big deal. Right now, Haskell (like most GC-ed languages) has one big heap that gets GC-ed regularly. The _only_ time memory is reclaimed is when the GC runs.

By retrofitting Haskell with linear types, there would be another, linear heap. In that heap, there is no GC - resources get freed as soon as the linearly typed value is used. One can imagine programs properly annotated with linear types that don't even need ANY GC.

This is an even bigger deal in Haskell than in most other languages because Haskell, being immutable, performs a ton of allocations. Want to update a field? Nope, but you can allocate the whole object again with just that field changed. If most of that work could be moved to a linear heap which doesn't need GC... well you get the point.
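For example (a tiny illustration; the type and field names are made up):

    data Point = Point { px :: Int, py :: Int }

    -- A record "update" allocates a brand-new Point; the old one is untouched
    -- and eventually becomes garbage for the GC to reclaim.
    moveRight :: Point -> Point
    moveRight p = p { px = px p + 1 }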


Doesn't Haskell use persistent data structures? If so, doesn't that directly compensate for the problem you mention?

Though it is still true that a purely functional language will need to allocate more than a typical imperative language.


That does help, yes. For example, inserting into a tree only takes a logarithmic number of allocations because the subtrees are persistent and can be shared.

That said with every insertion/deletion into that same tree, there is still at least log(n) garbage produced, which currently has to be handled by the GC. Linear types would let us free that garbage up as soon as it is produced and without GC pauses.
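A sketch of where that log(n) comes from, with a plain binary search tree:

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Only the nodes on the search path are rebuilt (O(log n) of them in a
    -- balanced tree); untouched subtrees are shared with the old tree, and
    -- the replaced path nodes become garbage.
    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x t@(Node l v r)
      | x < v     = Node (insert x l) v r
      | x > v     = Node l v (insert x r)
      | otherwise = t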


You're right that reducing pause times is made more practical here, and this is part of why people are excited (and rightly so).

That said, it's worth noting that a naïve attempt to free resources ASAP can be a performance hit and can sometimes mean long pauses. For instance, if you find yourself no longer needing a large tree, the regular GC lets you just forget about it. Freeing the whole tree eagerly would mean walking the whole tree, and the naïve approach does that synchronously. Of course there are ways around that - they add some complexity, and this kind of tooling will help get them right when they're needed.

All in all, performance is complicated, there are usually tradeoffs, and this seems like some great tooling.


I have not seen any evidence that GC is the primary bottleneck in Haskell, or that a form of compiler-driven manual memory management would be faster.


GC has non-zero overhead compared with linearly typed de-allocations, but that's not the main point. The title mentions predictability, and one of the big blockers to using almost any GC-ed language is the lack of predictability in runtime latency, which is very different from throughput, which is what I generally associate with the term bottleneck.

Additionally, predictable performance is about what the programmer can predict (whether you are talking about throughput or latency), and the argument is that linear types make it much easier to reason about the performance of a system, hence making it more predictable.


IANA expert, but as far as I can tell...

1) When it comes to sources of unpredictability, laziness is a way bigger problem than garbage collection

2) Assuming that by "GC" we mean "some variation on mark-and-sweep" (rather than meaning "any type of automatic memory management", which I hope is not the usage of anyone in this conversation), GC cost (in terms of CPU-time consumed) doesn't care how many allocations you've done, nor how much garbage it needs to collect. All it cares about is the size of the working set at the time that it runs. As such, all those lingering copies of the old tree nodes (etc, etc) are irrelevant. So I don't see why Haskell's additional allocations should cause GC to be any more of an overhead.


> So I don't see why Haskell's additional allocations should cause GC to be any more of an overhead.

In this kind of garbage collector, time of an individual garbage collection should not depend on the amount of garbage that has been generated. However, producing more garbage should mean more frequent collections, and so more time spent in GC overall.


Under some workloads, GC pauses are definitely the primary bottleneck: http://stackoverflow.com/questions/36772017/reducing-garbage...


If I am not mistaken you will always need a GC for cyclic data structures, which you can't type linearly.


I imagine that given linear arrows you could write something like Rust's Rc/Arc?


Entertainingly, you can't create a cyclic data structure with 100% pure code.


You can in a lazy language, like Haskell.


That's interesting, how do you achieve that?


Well, one of the usual examples would be:

    let ones = 1 : ones in ones
Here ':' is the cons operator, prepending something to a list, so here you define 'ones' to be '1' followed by 'ones', which in (GHC) Haskell compiles down to a datatype that has a pointer to the list element ('1') and the tail ('ones', i.e. itself).

EDIT: I realised I forgot to say what it actually does, in case that's not obvious. It's an infinite list of, well, ones...


    repeat :: a -> [a]
    repeat something = [something] ++ repeat something

    print (head (repeat "hello"))
For performance reasons, you wouldn't implement it exactly like this, but the principle is the same. The list can contain an infinite number of elements, but it only becomes a problem if some function tries to consume all of them.


The definition uses 'something' twice.


Laziness gives pure semantics to a certain form of mutation. Technically you don't represent cycles but instead the infinite unfolding of that cyclic structure... but you do so in a way where the compiler just makes the cycle. In pure code these two situations are indistinguishable.


There's even a purely functional graph library, allowing cycles: http://web.engr.oregonstate.edu/~erwig/fgl/haskell/old/fgl01...


It's not a 'cyclic data structure' though. It allows cycles in that every node has an ID and you can freely point to nodes to create cycles. For a cyclic graph with true cycles see https://www.cs.utexas.edu/~wcook/Drafts/2012/graphs.pdf


Rust uses something resembling linear heaps.


Rust has affine types. Linear = exactly once, affine = at most once.


Quite so


Linear logic is also useful for describing the rules of games. You can set up the game's starting state as resources, the game's rules are the axioms, and winning the game is equivalent to proving a certain outcome.

This was also super interesting: https://www.cs.cmu.edu/~cmartens/lpnmr13-short.pdf



It's interesting that they use the word "capability" here. These kinds of single-shot values show up in the capability-theory literature occasionally. I suspect that linear types might be equivalent in structural power to the typical notion of "perfect encapsulation" in object-capability security, but I don't yet have a proof.


This is wonderful, can't wait to use it for my own code. :)

Though at the moment the semantics isn't entirely clear to me. For instance, in the protocol example, shouldn't the pair be using multiplicative linear conjunction? It doesn't matter so much for a function receiving two Ints, but what's the semantics for a function receiving multiple resources?

I guess you can use an impredicative encoding, e.g.

  A ⊗ B = forall C. (A ⊸ B ⊸ C) ⊸ C
but isn't it more usable if it's built into the compiler?
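Something like this, in the proposed notation (Tensor and pair are made-up names for the sketch, and it needs RankNTypes):

    newtype Tensor a b = Tensor (forall c. (a ⊸ b ⊸ c) ⊸ c)

    pair :: a ⊸ b ⊸ Tensor a b
    pair x y = Tensor (\k -> k x y)

It works, but every consumer has to go through the continuation-passing dance, which is part of why a built-in pair would be nicer.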


Oh, but I want a linear kind though! I'd love to reimplement Rust with GHC for some kick-ass safe interop.


Did you actually mean Rust done in Haskell or Haskell done in Rust for Rust stdlib?


Implement Haskell in Rust and then implement Rust on top of that


I think that might be a waste, since we have better tech than Rust for compilers. Better off doing it in COGENT and Haskell simultaneously. Use tools such as QuickCheck on the Haskell part. Production code will be the COGENT version, which is certified down to machine code. Once that's finished, use the compiler to compile itself, with the RTS left in COGENT.

https://ts.data61.csiro.au/projects/TS/cogent.pml


Hi, I'm the creator of Cogent.

I don't think Cogent would be much good for writing a compiler, actually (it doesn't natively support recursive data structures like ASTs). It's designed for OS components like file systems.


Wow, what a small world. :) I evangelize high-assurance tech and R&D on forums like this. I wasn't aware of this limitation of COGENT. I may have wrongly assumed one could write a simple, non-optimizing compiler in it, given that people had written two filesystems in it; the latter seemed more complex. I'll hold off on that recommendation then.

I did want to learn some more about the strengths and limitations of your tool to generate ideas for other researchers. Send an email to the address in my profile. I'll reply in a few days with some questions that people here might have.


Well, the end goal is making the RTS just another library.


You might find Tolmach et al's work useful:

http://programatica.cs.pdx.edu/House/

They were also doing a new systems language, Habit, before being reassigned to a better project. Here's that project:

http://hasp.cs.pdx.edu


Very aware of House and HalVM. Not aware of the second, though, thanks for that even if it's just historical.

HalVM and House still rely on some C for the runtime, so I think my goal would be great for them.


I wonder how we are supposed to type ⊸ in ye olde ASCII?


IIRC there is some discussion of using "-o", but then there are a whole slew of lexing/parsing issues that emerge, since "o" is not part of the (vast!) class of characters Haskell considers valid for use in operators. Regardless of that, I do kind of hope "-o" gets adopted...


I think "--[star]" could do well: the "[star]" emphasizes the "pop" in the lolly. Touch it once, and it's gone.

EDIT: Having trouble typesetting the asterisk!


* gets swallowed up for italicizing text on HN. Best way to get literal text is a line with two (or more) leading space characters:

  --*


Just -* and then pester the authors of the Fira Code and Hasklig fonts (or fork them) to add a ligature for the symbol when - and * appear next to each other =)


Roughly the same way we type → :)


Ha. Maybe -@


I still don't get what the deal is here. I see the types, but not why making them linear is good.


There are lots of programming styles where the programmer must ensure the invariant that some piece of data (value, reference, pointer, whatever) gets used in exactly one sequence. Normal types can ensure that functions and arguments fit together, but they can't talk about how often and when something gets used.

Linear types are an extension to normal types which enable them to talk about how often and when something gets used.
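Using the S/send capability from the article's snippet, here's a sketch (in the proposed notation) of what the checker would reject:

    useTwice :: S Int ⊸ IO ()
    useTwice s = send s 1 >> send s 2   -- s consumed twice: rejected

    dropIt :: S Int ⊸ IO ()
    dropIt _ = pure ()                  -- s never consumed: also rejected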


Linear types guarantee at compile time that you can use each value exactly once - so no use-after-free errors, no memory leaks.


Ok, and that is not possible with normal types, just by following the flow?


No; as it turns out, in the presence of recursion it's undecidable to know that in general. Inference in a linear logic system can answer whether it's possible some of the time. Programmer-written annotations can assert that it's the case. Type-checkers can ensure that the assertions actually hold.


Do you like Rust? Now you kinda get 50% of Rust.


How on earth do those functional programmer folks always come up with the worst names for simple existing concepts that already have totally different names? Worse than Perl.

-> instead of comma?

-o / linear instead of refcaps?

"linear types" are certainly not linear. They are a simple reference capability, already in usage for decades. https://en.wikipedia.org/wiki/Object-capability_model

e.g. https://tutorial.ponylang.org/capabilities/reference-capabil...

Consuming a value changes its type to enable reasoning for concurrency.

As to the claim: sure. Not only does a refcaps type system make concurrency predictable, it also makes it decidable at compile time, and much faster. Rust could learn a lot from this to avoid locking.


> -> instead of comma?

Because a comma would be a very unusual notation to separate the parameter and return value of a function.

For example, a function with type "a -> b -> c" is not a function taking three parameters; it's a function taking something of type a and returning another function of type (b -> c). Writing that as "a, b, c" would be an extremely weird way to write that.
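Concretely, "a -> b -> c" parses as "a -> (b -> c)", which is exactly what makes partial application work:

    addThree :: Int -> Int
    addThree = (+) 3          -- (+) applied to one argument is itself a function

    main :: IO ()
    main = print (addThree 4) -- 7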


Yes, I was wrong. I should have said () vs ->. You would write your "a -> b -> c" as

  (a) -> b
  (b) -> c
and then combined: ((a) -> b) -> c. A function taking a function. But Haskell just flattens these nested lists.

E.g. take this:

  inc :: Num a => a -> a
is normally written as

  func inc (Num a) -> Num;
A function taking a Num and returning a Num. The a is normally written as T, a generic type variable, thus forbidding type-promotions from e.g. overflowing Integer to Num. This inc will wrap around.

This is normally written without dependent types as generic for all subtypes.

  func inc (Num a) -> Num;
  func inc (Integer a) -> Integer;
Now Haskell calls that "pattern matching", but of course it is not pattern matching, it is polymorphism: static dispatch based on signatures. Pattern matching has a well-established meaning already, i.e. string or structural matching. Dispatch does much more than just matching. A match only finds exact candidates; dispatch also finds the closest candidate, which doesn't need to match.


Haskell doesn't call that pattern matching, or at least I've never heard that usage.

What Haskell calls pattern matching is if you have a single type (e.g. Maybe a) with two different constructors (in this case Just and Nothing) where "Just something" will match with a value constructed with that first constructor and Nothing with the second constructor.
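For instance:

    describe :: Maybe Int -> String
    describe (Just n) = "got " ++ show n   -- matches values built with Just
    describe Nothing  = "got nothing"      -- matches the Nothing constructor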

Note that this isn't the only thing you have wrong: there are no values of type Num, because Num is not a type. In the expression "inc :: Num a => a -> a" the only type is a, which has a restriction, namely that the type has to implement the Num typeclass. The parameters don't have names in the type signature.


Sorry, you got the paren association wrong. It's right-associative: a curried function returns a function.


Linear types are a more fundamental concept (in functional programming and type theory) than reference capabilities, since they apply to values of any type and a linear type system does not need any references or objects that have state and behaviour.

The "linearity" of linear types comes from the typing rules of values with a linear type: They must be used exactly once. In fact, the simple typing rules of a linear type system allows one to implement all manner of capabilities and especially capabilities for references.


Haskell doesn't use a comma to separate function arguments because it doesn't try to follow tradition, but rather tries to take a step back, and rethink what a programming language is, using components that are as modular as possible. -> is not arbitrary notation, it's the symbol for a profunctor[1], and it has a very precise mathematical meaning. It's not made to be recognizable by programmers that are familiar with C notation, because that's a very high price to pay for the loss of generality offered.

If you had chosen to separate the arguments to a function using a comma, you would have to invent new syntax to describe linearity (because you have one less comma than you have arguments). Many languages constantly invent new syntax to describe useful add-ons, until their compilers are too complex to maintain at reasonable cost. Haskell avoids this by carefully choosing which constructs to build the language out of, instead of going for recognizability/tradition.

[1] https://en.m.wikipedia.org/wiki/Profunctor


These are linear types by analogy to linear logic[0], so the terminology dates to the 80's at the latest. In contrast to your accusation, I think FP folks rarely make up terminology, usually preferring to borrow liberally from neighboring fields of mathematics.

[0] https://en.wikipedia.org/wiki/Linear_logic



