
> The benefits of that kind of stuff are arguable

I'll argue for and against some of these points. My perspective, as a dyed-in-the-wool Haskell user, should be somewhat contentious: we take the side-effects and static type stuff to an extreme. That should make this more interesting to read.

> you can do serialization in a static language, but you can't just slap in a call to JSON.load on a file with complex structure and then access the result the same way as native types)

That's accurate. It's kind of the point. If a field of a data structure can disappear depending on the contents of some arbitrary input, I'd consider that a flaw. It's convenient to write foo.bar as opposed to foo ! "bar", but the latter (in Haskell) is still checked against an expected type, inferred from how the value is used. For example, I can do this:

    λ> json <- readFile "/tmp/foo.json"
    λ> putStrLn json
    → {"firstName": "John", "lastName": "Smith", "age": 25, "address": { "city": "New York"}}
    λ> do let decade x = div x 10 * 10
              group n  = show (decade n) ++ "-" ++ show (decade (n+10))
          person    <- decode json
          firstname <- person ! "firstName"
          age       <- person ! "age"
          address   <- person ! "address" >>= readJSON
          city      <- address ! "city"
          return ("The name is " ++ firstname ++ " who is age " ++ group age ++ ", and he lives in " ++ city)
    → Ok "The name is John who is age 20-30, and he lives in New York"
Whether `firstname' is a string, or `age' is an int, is inferred from its use. I could also add explicit type signatures. Type inference and type-classes give you something that you don't have in Java, C++, C#, Python, Ruby, whatever. The decode function is polymorphic in the type it parses, so if you add a type annotation, you can tell it what to parse:

    λ> decode "1" :: Result Int
    Ok 1
    λ> decode "1" :: Result String
    Error "Unable to read String"
Or you can just use the variable and type inference will figure it out:

    λ> do x <- decode "1"; return (x * 5)
    Ok 5
    λ> do x <- decode "[123]"; return (x * 5)
    Error "Unable to read Integer"
    λ> 
So you have (1) a static proof that the existence and type of things are coherent with the code that actually uses them, and (2) the parsing of the JSON kept separate from its use. And that's what this is: parsing. The code x * 5 is never even run; it stops at the decoding step. Now use your imagination and replace x * 5 with normal code. If you take a value decoded from JSON and use it as an integer when it's "null", that's your failure to parse properly. What do you send back to the user of your API or whatever, a “sorry, an exception was thrown somewhere in my codebase”?

If you want additional validation, you can go there:

    λ> do person <- decode json
          firstname <- person !? ("firstName", not . null, "we need it")
          return ("Name's " ++ firstname)
    Error "firstName: we need it"
Validated it, didn't have to state the type, it just knew. Maybe I only validate a few fields for invariants, but ALL data should be well-typed. That's just sound engineering. This doesn't throw an exception either, by the way. Everything in these examples lives in a “Result” value; the monad plays the same role as C#'s LINQ query syntax. Consider it a JSON-querying DSL. It just returns Error or Ok.
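For reference, here's a minimal sketch of what such a Result type looks like (the real library defines its own; this one is just illustrative). The Monad instance is why the first Error short-circuits the rest of a do-block:

```haskell
-- Illustrative sketch of a Result type; the real JSON library
-- ships its own definition with the same shape.
data Result a = Ok a | Error String
  deriving (Show, Eq)

instance Functor Result where
  fmap f (Ok a)    = Ok (f a)
  fmap _ (Error e) = Error e

instance Applicative Result where
  pure = Ok
  Ok f    <*> r = fmap f r
  Error e <*> _ = Error e

-- The Monad instance makes do-notation short-circuit:
-- the first Error stops the whole computation.
instance Monad Result where
  Ok a    >>= f = f a
  Error e >>= _ = Error e
```

So `do { x <- Error "boom"; return (x * 5) }` never runs the multiplication; it stays `Error "boom"`.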

Types can also be used to derive unit tests. I can talk about that more if interested.
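To give a taste: here's a QuickCheck-style property over the decade function from the example above. The property's type, Int -> Bool, is all a tool like QuickCheck needs in order to generate random test inputs.

```haskell
-- decade, as defined in the JSON example above
decade :: Int -> Int
decade x = div x 10 * 10

-- A QuickCheck-style property: every x falls inside its own decade.
-- The type Int -> Bool is enough for QuickCheck to generate inputs;
-- the property itself has no QuickCheck dependency.
prop_decade :: Int -> Bool
prop_decade x = decade x <= x && x < decade x + 10
```

With QuickCheck installed, `quickCheck prop_decade` would throw a hundred random Ints at it, no hand-written test cases needed.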

> proxy/remote objects

Again, the above applies.

> monkey patching (in various forms - raw monkey patching is bad style anyway, but even things like random global callbacks are hard or bad style in most static languages),

Well, yeah, as you say, monkey patching isn't even a concept here. I don't know what a random global callback is for; it sounds like bad style in any language.

> objects that pretend to be A but are really B (perhaps to avoid changing existing code; perhaps to implement things like "pseudo-string that's an infinite series of As" or "rope" or "list that loads the relevant data from a file when an item is accessed" without having to change all the code to make "string" and "list" typeclasses/interfaces; perhaps for mocking during testing)

That's true; there is no way around that. I was recently working on a code generator and changed the resulting string type from a list of characters to a rope. Technically I only needed to change the import from Data.Text to Data.Sequence, but it's usually a bit of refactoring. (In the end, it turned out the rope was no faster.)

> dynamic loading

Technically I've run my IRC server from within GHCi (Haskell's REPL) in order to inspect the state of the program while it was running and see what was going on with a bug. I usually just test individual functions in the REPL, but this was a hard bug. I even made some functions updatable: I rewrote them in the REPL while the server was running. I've also done this while working with my Kinect from Haskell and doing OpenGL coding; you can't really go re-starting those kinds of processes. But that's because I'm awesome, not because Haskell is particularly good at that or endorses it.

GHC's support for dynamic code reloading is not good. It could be; it could be completely decent. There was an old Haskell implementation that was more like Smalltalk or Lisp in the way you could update code live, but GHC won, and GHC doesn't focus much on this aspect. I don't think static types are the road-block here; in fact, I think they're very helpful with migration. In Lisp (where live updating of code and data types/classes is bread and butter), you often end up confused by an inconsistent program state (the 'image') and waiting for some function to bail out.

But frankly, Ruby, Python, Perl and the other so-called dynamic languages also suck at this style of programming. Smalltalk and Lisp mastered it in the '80s. They set a standard back then, but everyone seems to have forgotten.

> REPL (in running code, that gives you the flexibility to change anything you want), ...

See above.

> The benefits of that kind of stuff are arguable, but I think the net effect is that static languages, even when they save you from having to write out a lot of types, encourage a fairly different style from dynamic languages, which I prefer.

Yeah, some of these are good things that static languages can't do; some are bad things that static languages wouldn't want to do on principle; and some are good things that static languages don't do today but could.

> p.s.: you don't need a static type system to use an option type. :)

This is a pretty odd thing to say, because you don't need an option type in most if not all dynamic languages; they all have implicit null anyway. All values are option types. The point of a stronger type system is that null is explicit, i.e. null == 2 is a type error. And a static type system tells you that before you run the code.




> This is a pretty odd thing to say, because you don't need an option type in most if not all dynamic languages; they all have implicit null anyway. All values are option types. The point of a stronger type system is that null is explicit, i.e. null == 2 is a type error. And a static type system tells you that before you run the code.

Well, that's a problem, actually. After encountering option types, it's hard to live without them, because you want to be able to mark that parameters A and B must not be null but C may be. And unless you have a very good static analyzer, you are constantly at the mercy of a nasty NPE somewhere in your codebase.
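For example, in a language with option types the signature alone marks which parameters are required (fullName here is a hypothetical example, not a library function):

```haskell
-- Hypothetical API: the types say the first and last names are
-- required, while the middle name may be absent. Passing Nothing
-- where a plain String is expected is a compile-time type error,
-- not a runtime NPE.
fullName :: String -> Maybe String -> String -> String
fullName first Nothing    final = first ++ " " ++ final
fullName first (Just mid) final = first ++ " " ++ mid ++ " " ++ final
```

The caller is forced to say `Just "Q"` or `Nothing` for the middle name, and can't pass either for the other two.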



