> TypeScript's type system is over-complex since it tries to be a superset of JS.
This is exactly why it got so popular, and TypeScript support in the JS ecosystem is so good. It's easy to just strip the TS metadata and you're left with working JS. This is why projects like esbuild managed to ship TS support so quickly (without type-checking).
Good luck doing that with a new language.
I for one am "all in" on TypeScript, with all its shortcomings it's just the best of both worlds, and so easy to get on board.
TypeScript's biggest flaw, IMO, is that it can't (or chooses not to; I don't understand anything about JS-adjacent build systems and how they work) understand the difference between "code I write" and "code someone else wrote".
I'm not going to beat the dead horse of how unsound TypeScript's type system is. We all know that most of that is intentional because it would just be too inconvenient to work with existing JavaScript code (Aside: think about what this implies about existing JavaScript code. This makes alarm bells go off in my head).
However, why should I have to deal with an unsound type system in my new, fresh, TypeScript-only code?
I'm not saying it's simple or criticizing anybody's work, but it would VASTLY increase my opinion of TypeScript if I could enable some kind of 'truly-strict-new-code' flag that would error on unsound code in the current project and maybe just warn when I call unsound code from a dependency.
Because, honestly, I'm truly disappointed that the one language that finally has a chance to unseat JavaScript has such a poor type system that it can't even do basic sub-type variance correctly no matter how many flags you set. Out of all of the statically typed languages I've used, TypeScript has helped me the least when it comes to type errors.
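As a concrete sketch of the variance complaint: the snippet below compiles even under `--strict` (arrays are treated covariantly), yet it puts a `Cat` into a `Dog[]`. The class names are just illustrative.

```typescript
// TypeScript treats arrays covariantly, so Dog[] is assignable to
// Animal[] even though Animal[] allows pushing any Animal.
class Animal { name = "animal"; }
class Dog extends Animal { bark() { return "woof"; } }
class Cat extends Animal { meow() { return "meow"; } }

const dogs: Dog[] = [new Dog()];
const animals: Animal[] = dogs; // accepted: covariant assignment
animals.push(new Cat());        // also accepted: we just put a Cat in a Dog[]

// dogs[1] is statically typed Dog but actually holds a Cat,
// so this line would throw a TypeError at runtime:
// dogs[1].bark();
```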
I was just thinking about this. TypeScript's main flaw is that it's a full superset of JavaScript, which is itself a full superset of old JavaScript with weird conventions (null vs undefined, == vs ===). Furthermore, its type system is flexible enough to type practically any JavaScript, but at the cost of strong guarantees.
I wish there were a language that was basically JavaScript/TypeScript, but that gets rid of backwards-compatible JavaScript edge cases and conventions (no more undefined, no more {} == '0', no more weird function 'this'), removes advanced types and makes the type system sound, and (just in case) can actually check typecasts and non-null assertions at runtime. JavaScript and TypeScript are honestly great languages, but they are great languages with major flaws in the name of backwards compatibility.
Even with strict typing enabled, I write a surprising amount of bugs which ultimately come down to type errors. And not only that but silent type errors, because TypeScript’s casts and not-null-assertions aren’t checked. So often I find that a mistyped or null value managed to slip through a type guard without the compiler saying anything.
Totally agree. That's kind of what I was going for when I described the hypothetical "true-strict" mode: don't allow the code I'm writing to do bad/unsafe stuff.
And yes, I really wish that TypeScript's non-null assertions actually included a runtime check. Type casts can't really do anything at runtime without a definition of what "being type Foo" actually means, since the types are erased, but you can definitely check if something is null or undefined and throw an error.
I think it would be fair to have the non-null assertion do a runtime check, and if you want the non-null assertion with no runtime code, just use a type cast.
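A runtime-checked non-null assertion can be sketched as a userland helper today; the name `assertNonNull` is made up for illustration:

```typescript
// Hypothetical helper: narrows T | null | undefined to T, or throws.
// Unlike the `!` operator, it actually fails loudly at runtime.
function assertNonNull<T>(value: T | null | undefined, label = "value"): T {
  if (value === null || value === undefined) {
    throw new Error(`${label} was ${value}`);
  }
  return value;
}

// Usage: where you would have written `maybe!`.
const maybe: string | undefined = ["a", "b"].find((s) => s === "a");
const definitely: string = assertNonNull(maybe, "maybe");
```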
I add a bunch of tests at the top of every JS function to make sure the parameters are correct. There is no good reason not to write these tests: you can throw friendly errors, and bugs will be detected early.
I also use automated tests, and most of the time it's the tests inside the functions that detect the error, because the tests run the code. It's much simpler than trying to prove correctness before the program has even run once.
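That guard-clause style can be sketched like this (the function and its error messages are illustrative):

```typescript
// Validate parameters up front and throw friendly errors so bugs
// surface early, close to their cause.
function divide(a: unknown, b: unknown): number {
  if (typeof a !== "number" || Number.isNaN(a)) {
    throw new TypeError(`divide: 'a' must be a number, got ${String(a)}`);
  }
  if (typeof b !== "number" || b === 0) {
    throw new TypeError(`divide: 'b' must be a non-zero number, got ${String(b)}`);
  }
  return a / b;
}
```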
Backward compatibility is the bridge to the whole JS ecosystem. If there were a way to use "unsafe" old JS code within a newer language, that would be great. But it would bring another set of annoying problems.
I can't get behind the idea that a type system is either completely sound or completely useless.
Surely preventing 99% of issues (or 90%, or 80%) is good?
I'm also not aware of too many real-world instances of unsoundness in TS that has caused problems. Obviously I don't know your specific scenario, but it's possible there are ways to work with TS (instead of against it) to get a good level of type safety.
I think the gradual typing is just too permissive... It's not the same feeling you get with type inference. Not to say that an unsound type system is useless. If you're coming from dynamically typed languages it will certainly feel safer, but if you come from a stricter type system it still feels like just a linter. In my experience.
Can you link to where we previously discussed TypeScript? I don't remember that, and can't find it anywhere in my comment history (looked back to 2019).
Also, I didn't mean to misinterpret what you said. Sorry about that.
Yikes. I'm so sorry. I mistook your username for another one. That was incredibly rude of me to react like that without at least double-checking.
I had a long and frustrating back-and-forth with someone else some time ago where I pointed out what I believe to be several glaring issues with TS's type system (like the fact that `readonly` properties on object types don't actually guard against writes in most situations) and every one of my criticisms was met with the equivalent of "You're saying a type system has to be 100% perfect or it's completely useless". And even though it's the internet and I should know better, it just irritated the hell out of me.
Again, I'm really sorry for jumping at you like that. I think I'm traumatized over arguing about TypeScript... lol
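For reference, the `readonly` gap mentioned above can be sketched in a few lines; this compiles without error even under `--strict`, because readonly and mutable object types are mutually assignable:

```typescript
// Aliasing a readonly object through a mutable type defeats the guard.
type Frozen = { readonly n: number };
type Mutable = { n: number };

const frozen: Frozen = { n: 1 };
// frozen.n = 2;             // error, as expected
const alias: Mutable = frozen; // accepted: no error on this assignment
alias.n = 2;                   // writes through the alias
// frozen.n is now 2, despite the readonly modifier.
```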
Is it even possible for the program to know at compile-time if the types will be violated at run-time? Even if you declare data from other sources 'not mine' this still seems like it would be impossible to do.
The type system can force you to handle both cases - the data being in the expected & typed form, and the data not being in the expected form.
You would have a function like this:
parseJsonString : String -> Result Error Person
Where Result is a value that is either an Error or a Person.
To get the Person value out, the type system forces you to also do something with the Error if it exists. Not handling both possibilities in some way is a type error.
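In TypeScript terms, the same idea can be sketched with a discriminated union; `Person` and the parsing rules here are illustrative:

```typescript
// A Result is either a success carrying a value or a failure carrying
// an error; the `ok` discriminant forces callers to check before use.
type Person = { name: string };
type Result<E, A> = { ok: true; value: A } | { ok: false; error: E };

function parseJsonString(input: string): Result<Error, Person> {
  try {
    const data = JSON.parse(input);
    if (typeof data?.name === "string") {
      return { ok: true, value: { name: data.name } };
    }
    return { ok: false, error: new Error("missing 'name'") };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e : new Error(String(e)) };
  }
}

// The compiler won't let you touch .value until you've checked .ok:
const r = parseJsonString('{"name":"Ada"}');
if (r.ok) {
  console.log(r.value.name);
}
```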
> Is it even possible for the program to know at compile-time if the types will be violated at run-time?
I mean, that's kind of the whole point of static typing, right?
If you're speaking to TypeScript/JavaScript, specifically, then yeah, the compiler can't guarantee anything because someone can hop into the running program and start modifying objects as they please (at least for web page code).
But the compiler could still make the weaker guarantee that "the code that I can see is soundly typed". And for 99% of us that's effectively the same thing.
As long as the data is from within the system, yes. If you're trying to use data from an external source (API request, external JSON file, etc.) then no, but that's true of most type systems.
If you correctly type the response of that request then the answer becomes yes again.
Maybe I misunderstood, but I thought the person I was responding to was suggesting having the option for sound typing enforced in TypeScript, at compile time. Because if it's at run-time... well, the run-time is JavaScript, so there's of course no typing there. This isn't really on TypeScript.
There are, however, JavaScript libraries which allow you to specify object schemas and throw errors when mismatches occur. Perhaps the suggestion, then, is to build some shared layer between the TS compiler and V8 so that similar schemas can be autogenerated by the TS compiler and enforced at run-time.
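A hand-rolled sketch of what such a runtime schema layer boils down to (the `User` shape is illustrative): check the shape at the boundary, then let the type system narrow.

```typescript
// A user-defined type guard: runtime shape check plus static narrowing.
type User = { id: number; email: string };

function isUser(x: unknown): x is User {
  return (
    typeof x === "object" && x !== null &&
    typeof (x as Record<string, unknown>).id === "number" &&
    typeof (x as Record<string, unknown>).email === "string"
  );
}

const payload: unknown = JSON.parse('{"id": 1, "email": "a@b.c"}');
if (!isUser(payload)) throw new Error("response did not match User schema");
// From here on, payload is statically a User.
```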
There is generally no typing in run time. One of the practices in writing compilers is to skip type checks during run time, because you know beforehand that no type errors are possible. That is type erasure.
If you're interested, lookup existential types. Very interesting stuff.
FWIW, the lack of explicit variance annotations (and probably many other unsound TS design choices) is not so much about supporting old untyped JS as it is a language design decision driven by pragmatism and type system complexity.
I can't find it now, but I have a vague recollection of Hejlsberg talking about explicit variance annotations specifically: how those were a huge can of worms in C# and they didn't want to bring them to TS. Unlike Flow, by the way, which does have explicit variance handling and doesn't suffer from the aforementioned foot gun (but has its own foot guns, of course).
Yes and no. I do think that there are only two correct ways for a type system to do generic user-defined types: you either give us variance markers or you make all user-defined generic types invariant (Swift does this).
Personally, I hate the latter, but not as much as I hate unsoundness.
Yeah I agree, that's not what I'd consider good variance rules either.
I guess my point was more about the reasons behind these weird rules. AFAIK interop with existing JavaScript was not the primary reason for doing them this way, if that's any consolation.
> I guess my point was more about the reasons behind these weird rules. AFAIK interop with existing JavaScript was not the primary reason for doing them this way, if that's any consolation.
Are you sure? Because there was an essay written by the TypeScript devs when they released whatever version that introduced the strict-function-variance flag explaining why they "couldn't" make it also apply to class methods, and it boiled down to breaking too much existing JavaScript code.
That's not proof that the original variance choice was also due to JavaScript compatibility, but I feel like it's a very strong suggestion that it was probably a big motivator.
I'm glad that it wouldn't work in Flow. I'm sure it wouldn't work in most/all of the other statically typed compile-to-JavaScript languages. I just hate that none of them won out over TypeScript, so they're basically all irrelevant- Flow included.
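The flag in question is presumably `strictFunctionTypes`, which checks function-typed properties contravariantly but deliberately leaves method syntax bivariant; a minimal sketch of the method side (class names illustrative):

```typescript
class Animal2 { name = "animal"; }
class Dog2 extends Animal2 { breed = "lab"; }

// Because `compare` is declared with method syntax, its parameters are
// checked bivariantly even under --strictFunctionTypes. (Declaring it
// as a property, `compare: (a: T, b: T) => number`, would make the
// assignment below an error.)
interface MethodComparer<T> { compare(a: T, b: T): number; }

const dogComparer: MethodComparer<Dog2> = {
  compare(a, b) { return a.breed.localeCompare(b.breed); },
};

// Accepted by the compiler, yet unsound:
const animalComparer: MethodComparer<Animal2> = dogComparer;

// Plain Animal2s have no .breed, so this would throw at runtime:
// animalComparer.compare(new Animal2(), new Animal2());
```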
> TypeScript's biggest flaw, IMO, is that it can't (or chooses not to- I don't understand anything about JS-adjacent build systems and how they work) understand the difference between "code I write" and "code someone else wrote".
Arguably no language in the world makes that distinction, and Microsoft's "Just My Code" debugger tools are fantastic but use a lot of heuristics that, as far as you can generally tell, produce as many false positives as false negatives, which is likely why it would always be a toggle.
That said, I do think Typescript could make a better distinction about types that come in from code it has compiled itself (directly seen) and types that come by way of definition files with no code attached ("hearsay" types). (Again, I can't think of any language today that trusts code versus symbols files so differently, so this would be a new thing.)
Often I want it for the: I'm looking at the JS or its documentation right now and it would let me call it this way, but for some reason the Community types are too strict and I don't want to file a DT PR and get into some philosophical discussion about authorial intent versus JS as written versus documented examples and which is "most true" so I'm just going to have to cast this API to any for now and call it a day.
But I can also see the point in: don't give me strictness errors outside of the fence of code you are compiling for me right now in this project. And maybe more unsoundness tools, such as option to treat all return results as `unknown` and force defensive coding when using outside APIs. In general I think `unknown` isn't used enough in community types (it is still "too new").
Though `unknown` is also maybe too defensive for a lot of community types, too.
Also, if you want to code that defensively, you could always replace community types .d.ts files yourself with ones that use a lot more `unknown`.
> Arguably no language in the world makes that distinction
That may or may not be precisely true, but there are languages that do things that are in that same ballpark. For example, Kotlin on the JVM "knows" when you're calling Java code and it handles certain things differently (null) than if you were calling Kotlin code- even if the code is from pre-compiled libraries.
And considering that I already have a tsconfig file that describes my project and applies rules to only the code in my project (IIRC, there's a toggle to control whether the TS compiler analyzes/compiles your node_modules), I don't see why it would be such a leap to just have an even-stricter --strict flag that didn't allow known-unsound code in my project's files.
> But I can also see the point in: don't give me strictness errors outside of the fence of code you are compiling for me right now in this project. And maybe more unsoundness tools, such as option to treat all return results as `unknown` and force defensive coding when using outside APIs. In general I think `unknown` isn't used enough in community types (it is still "too new").
>
> Though `unknown` is also maybe too defensive for a lot of community types, too.
>
> Also, if you want to code that defensively, you could always replace community types .d.ts files yourself with ones that use a lot more `unknown`.
That's all a little intense, IMO. There's a clear difference between trusting a third-party library's API/types and having the compiler help you catch bugs in your own code. If I call a third-party function in my code and the documented types all check out, then any runtime type error I hit is a bug in the library, not a bug in my code. This can happen in any language with escape hatches like type casting, and it's not appropriate, IMO, to guard against it. Put another way: if that is a real concern, then you shouldn't be using a third-party library at all.
And it works in both directions. You can "easily" (I mean, this is JS after all, so this heavily depends on the degree of toolchain hell chosen) move to the TS toolchain, rename your files to have the .ts extension, and you have a valid TS project that benefits from inferred types for your code. Type annotations can then be added later as you go. And the syntax everybody already knows stays valid.
This is a huge benefit compared to something like ReScript because it makes adoption really easy. With the obvious downside of not having nice features like pattern matching.
I agree that TS eases the transition from JS, but I'd qualify your statement by saying it's not the toolchain that's a source of headache.
Ideally, one should compile TS with `--strict`; otherwise there's little difference from JS. With this in mind, it's quick and easy to switch to TS tooling. Afterwards, one can incrementally adopt stricter typing, culminating in adding `--strict` to your `tsconfig`.
In reality, since `--strict` is false by default, I'm concerned people erroneously assume that their JS code is type-safe after the first step, when it really isn't. In other words, it's no different than JS, except in name only!
The problem with the add-types-later approach is that in the end you have half-baked .ts files left and right. And it becomes increasingly difficult to know, when something compiles, whether it's because it's correct or because there is an `any` somewhere.
Yeah, agreed. Ideally this wouldn't happen like that. Temporary solutions like a half-migration have a tendency to stick around for a lot longer than anybody anticipates. But I think it is still better than not having any types at all, if only for having type information in development.
> It's easy to just strip the TS metadata and you're left with working JS.
Compared to what? JSDoc made it even easier: you didn't even need to strip anything, as the "types" were just comments. This comes with its own problems (who wants to program with comments?), but it didn't require any changes for those of us who want nothing to do with TS.
Yeah, duh. The argument is that there is a group of people who want types and write libraries. Before, they used JSDoc, which is effortless to use from JS, as it's actually just vanilla JS. Some of them have now started using TypeScript for everything, including libraries. If I want to use those libraries, I need to fork the project and rewrite them in vanilla JS, something that was not needed before. Or find a different library/write it myself.
You actually don't, and that's the nice part. NPM TypeScript modules are shipped as transpiled JS modules with .d.ts files. Those .d.ts files contain the type information for consuming TS projects, but they are completely optional and you can simply ignore them if you write a plain JS project.
You can see this in the babel project, for example. It is written in typescript but used by a lot of js projects that don't care (and don't have to care) about this.
It sounds like you either don't use npm and are trying to use libraries' source files or you don't actually have this issue. This doesn't and can't happen with npm.
> If I want to use those libraries, I need to fork the project and rewrite them to vanilla JS
Why? If you want to just use, npm packages give you “compiled” vanilla JS and type definition files that you can ignore. Alternatively you can compile them yourself, if you vendor them in.
TypeScript supports providing types via JSDoc comments[1]. This gets you all the IDE language features you'd get using TS, including highlighting type errors. I actually use this all the time, since I have plenty of scripts that are complex enough that I want type checking but not complex enough to warrant a compilation step.
There's also Google's Closure Compiler[2], which uses JSDoc for its JS type annotations.
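As a sketch, a checked plain-JS file looks like this (the function is illustrative); with the `// @ts-check` pragma (or `checkJs` in tsconfig), the compiler reads the JSDoc types and flags mismatches, with no build step:

```javascript
// @ts-check

/**
 * Convert a temperature from Celsius to Fahrenheit.
 * @param {number} celsius
 * @returns {number}
 */
function toFahrenheit(celsius) {
  return celsius * 9 / 5 + 32;
}

// toFahrenheit("100"); // flagged by the checker as a type error
```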
Corollary: we should invest in systems that allow us to mechanically verify correctness of comments as readily as type checkers do with type specifiers.
>It's easy to just strip the TS metadata and you're left with working JS.
This actually touches on one of ReScript's supposed unique selling points amongst the functional compile-to-JS languages. Namely that the JS its compiler generates is so clean and readable that, if you end up deciding you don't want to bother with ReScript any more, you can just take the output from the final time you ran the compiler and treat that as a plain JS codebase to continue working on. Exactly how true that is, I don't know.
In my experience with these typed alternatives to JS, as long as you stay within that language things are good and smooth. But as soon as you need to interface with 3rd part lib, that's when things start to get messy.
Providing type declarations for every third party lib you might want to use, especially in an ecosystem like Javascript's which relies heavily on them, gets tiring pretty fast.
Typescript manages to get past that because it has reached a level of adoption now that most popular libraries will have a @types/<lib> package also but it took a long while to reach that stage.
Curious if anyone has used ReScript at a larger scale (and does not have the resources of, say, Facebook) and what's their experience been like.
We use ReScript at Draftbit (https://draftbit.com). We're a small company with about 13 people.
Former coworkers have messaged me saying that they miss writing ReScript and wish they could convince their team.
We have about 1200 ReScript files on both front-end/back-end. It's great. It feels much safer to refactor and write new code.
We use TypeScript in some places, like TypeORM and we're able to consume those types on the Rescript side. Rescript generates and consumes typescript types via gentype.
I agree, ReScript has no big adoption (hopefully yet), and that's the biggest disadvantage in comparison with TypeScript.
But somebody has to do the first steps...
There's some project to get TS type annotations to work for ReScript. It'd be great if ReScript could lift on the work done by the TS community to produce types for many common JS libs.
I wonder how that would work. TypeScript has pretty advanced types, like Template Literal Types[0]. Wouldn't ReScript need to support all of those? Or maybe it just resorts to an `any` equivalent when it can't map a TS type to a ReScript type.
One solution is for the language to batteries-include all the most important stuff. This is pretty much what Elm does, and it seems like ReScript does at least some of that with the React integration
"This means having two files with the same name in different directories is not allowed."
"ReScript has no async/await support."
"ReScript has no special debugging support or source maps."
Sorry, but aren't these fairly glaring problems? They're hand-waved away because the project is small, but I feel like they'd become quite major eventually?
The first one is a design decision that I consider a bit "meh" as well, but I can see why they did it. The third one I'm assuming is just a thing that hasn't been gotten around to, and I think is reasonable to not immediately expect from a smallish project.
The second one does sound like a fairly major thing though. The promise syntax shown in the article to get around the issue looks like a massive code smell.
* the use of decorators in even the most basic example
* no async/await. You fall back to Promise.{then,resolve}. The example given is not that clear to me.
* You can't have two files with the same name in different folders??
And in general, for me, the ability to be a little less strict at times makes TypeScript way easier to deal with. It's an 80/20 thing. Most of the time I want types enforced, but in some situations I know it's OK and I just want to move on with my day.
Using fp-ts there is never a need for async/await and the code is still linear, (Promise is transformed to TaskEither<Error, Result>, similar to the example in the article, although you can ignore the error and just map over the result, handling the error somewhere near the end of the chain).
Problem with async/await is that the syntax is so easy that eventually the whole codebase gets polluted by it, even parts that could have been completely separated.
From the coding perspective, the problem with promises arises when there is a need to bind the intermediate results. You can either store them in an object and return that object for the next .then(fn) call, or you can nest inside a promise. I guess nesting is what annoys people the most (the pyramid).
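The two options described (accumulator object vs. nesting) can be sketched like so; `getUser`/`getPosts` are made-up stand-ins:

```typescript
const getUser = (id: number) => Promise.resolve({ id, name: "Ada" });
const getPosts = (userId: number) => Promise.resolve([`post by ${userId}`]);

// 1) Carry earlier results forward in an accumulator object:
const viaObject = getUser(1)
  .then((user) => getPosts(user.id).then((posts) => ({ user, posts })))
  .then(({ user, posts }) => `${user.name} has ${posts.length} post(s)`);

// 2) Nest, so inner callbacks close over `user` (the "pyramid"):
const viaNesting = getUser(1).then((user) =>
  getPosts(user.id).then((posts) => `${user.name} has ${posts.length} post(s)`)
);
```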
> Problem with async/await is that the syntax is so easy that eventually the whole codebase gets polluted by it, even parts that could have been completely separated.
Sometimes you don't want to pause (with await) and give something else the chance to execute (happens a lot in UI).
When code is sloppily written you realize that at some point you can't use much of the existing code outside of async.
One nice example of polluting the codebase is having async local storage. Now everything that reads from the storage needs to be annotated async, and your initialization pipeline might become insanely async. A good way to avoid it is to read the whole local storage at the beginning (giving a sync interface), and then everyone just reads without await.
From recent experience, I did exactly that with storage and ended up removing thousands of async annotations that were no longer necessary.
Similar "mistakes" happen for other things and then at some point you are putting loading guards everywhere because your async operations are always awaiting and letting the UI update when it should not.
Behind async there is a real hardware reality. Accessing local storage (or the network) is slow, and you do not want to block your UI.
You say async was all over the place. Either it makes sense, because what you are doing fundamentally IS asynchronous. Or it makes no sense, because the asynchronous nature of your calls is not important in your design (I don't see why, but OK); in that case you can always break the chain by using a promise:
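For instance, a sketch of chain-breaking (with `configPromise` as a stand-in): start the async work once, keep the promise around, and attach callbacks so callers stay synchronous.

```typescript
// The async work is kicked off eagerly; the promise is just a value.
const configPromise = Promise.resolve({ retries: 3 });

// Note: not an async function, so its callers don't become async either.
function scheduleWork(onReady: (msg: string) => void): void {
  configPromise.then((config) => onReady(`retries=${config.retries}`));
}
```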
Yes, but the usual cases of -> "app starts -> load some state from local storage -> store it in memory for sync access -> do app" sometimes get lost due to the ease of async/await syntax.
If you have to chain network requests but each request depends on results of the former ones, then async/await makes it really pretty.
I have no idea how much async/await is too much.
What I do dislike is that storage loads, network requests and other things are tied to the same type, so when you look at code, you have no idea what's happening inside those async functions, but that story is for some other time.
Callback hell was before the introduction of Promises. Promises tried to abstract away the callback mechanism and it made things a little clearer, but you still had the pyramid effect of successive Promises.
I still somewhat prefer the readability of promises vs. await. I like the expressiveness of a then/catch as functions as opposed to try/catch blocks, which I‘ve always disliked. But I agree that await is more suitable for more complex scenarios.
async/await is just syntactic sugar; most languages didn't have it until very recently, when it became a trend or something. In OCaml it's just like any other monad: you use the bind operator instead. No need for special keywords.
C# 5.0 introduced async/await in mid-2013 (late 2012 depending on how you count it), and F# had it before that in its version 2.0 (~2007). It's a pretty old "trend".
I was arguing that languages are following the "add async/await" trend recently, including rust. That other comment brought up that it exists in C# for a long time, hence not a recent trend.
I'm still annoyed that most other languages stole async/await... but not the more general monadic/do notation. Instead we get elvis operators, chaining functions, etc.
F#'s "Computation expressions" are the same sort of thing as the mentioned Haskell's "do notation" and Scala's "for comprehension". They're just thin syntactic sugar for "callbacks" -- chaining maps/binds/flatmaps/etc. Even though F#'s "Computation expressions" are very cleverly thought out and more general.
in F# it's an instance of computational expression, not keywords. So it's really only C# that added the keywords, and other languages followed suit since 2015. None of the functional languages have async/await as keywords
The `let* in` is a generic syntax for monads; it doesn't need a special one just for promises. This was in fact a debate back when async/await was under consideration for ECMAScript, but special syntax is hip, so now we have `async/await` for Promise, `?.` for optionals, and `flatMap` for arrays: basically the same thing.
Having seen too much `.map(_.map(` in Scala, I'd argue that different words/syntax for different things (that happen to have the same algebraic structure) is a good thing.
> It turns out Lwt gained support for let* syntax only in April 2020.
I don't understand your point. Monadic let can be done with ppx_let (created in 2016), and even before that there was the older syntax where you write `lwt` instead of `let`.
How it started: "In OCAML it's just like any other monad you use the bind operator instead. No need for special keywords."
How it's going: "well, there's a third party libary that only recently introduced let*, but before that you had to use either ppx_let [whatever that is — d.] or an older where you use lwt"
You listed no less than three special keywords. And one of them specifically introduces a new syntax via ppx.
"Just".
I find this overuse of the word "just" extremely infuriating. And FP-community is especially guilty of overusing it. "Oh, no need for X, just use <some string of smart-sounding words that is always extremely convoluted, doesn't actually do what X does, comes with caveats the length of the equator etc.>"
The point is not that there should be no special syntax, but that special syntax in these (functional) languages serves a more generic purpose and isn't just for async.
OCaml is actually the worst example, because it has always relied on extensions to provide an alternative to Haskell's do notation.
> well, there's a third party library that only recently introduced let*
It won't change how you feel about it, but `let*` is a syntax extension (ppx), kinda like a Babel transform. It has nothing to do with the third-party library. What the library provides are async primitives, the equivalent of the `q` package in Node.js.
Your point about using "just" is an important one. When you're inside all of that, it seems crystal clear, but for people coming outside of that bubble, it can be quite opaque.
--- end quote ---
So, your original comment, " In OCAML it's just like any other monad you use the bind operator instead. No need for special keywords." quickly devolved into:
- oh, it's not "just a bind operator"
- oh, you need special syntax
- oh, you need a third party library that may or may not have support for that special syntax
So yeah, it's not "just <intellectually superior sounding words>". It never is.
Especially in the context of the conversation which is:
- ReScript
- Requirement to integrate and interoperate with JS and JS libraries that are increasingly Promise-based.
If you read back my comment, at one point I said I only have a basic understanding of OCaml; at no point did I claim to be an expert. I'm mediocre at programming in general. What I said, I said from the perspective of a JavaScript developer who doesn't think that async/await is that big a deal. I just happen to know some context regarding functional programming, which, again, isn't the point of view I am taking, since I only have basic knowledge and write JavaScript at my day job.
With that said, let me put it bluntly, you are being annoying.
If we want to stop talking about programming and start picking on people's tone, let me point out one thing: you are not a mind reader. The way you just pick on a rather common word that people often use casually, and infer that I am asserting my "intellectual superiority", is annoying. Let me remind you this is the Internet, where everyone is mostly anonymous. No one cares enough to show off to a bunch of usernames that don't matter to them.
You are right to say that what I claimed to be simple, wasn't simple, but I'm not taking the other nonsense you are accusing me of.
It is though, it's a function/operator that is defined in userland.
> - oh, you need special syntax
> - oh, you need a third party library that may or may not have support for that special syntax
No, you don't need either of these things. These just add a convenient syntax that desugars to the aforementioned bind function. These PPXs aren't special keywords specifically for async/await that had to be added into the language's syntax itself. Note that I know nothing about ReScript and this reply is not about ReScript, as that's not what this comment chain is about.
The full explanation: In OCaml, before 4.08, there was no special support for monads. For example, the Option module (https://ocaml.org/api/Option.html) offers an "Option.bind" function. When chaining monads, you would usually write "Promise.bind" for promises, "Option.bind" for options, etc. OCaml also allows you to define your own infix operators. By convention, "bind" is often associated with ">>=". But this is usually done in userland code, OCaml and its standard library don't use ">>=". For example, the Result module (https://ocaml.org/api/Result.html) uses "Result.bind" too, but doesn't define ">>=".
Lwt is one of the two popular libraries used to have asynchronous IO, the other being Async. It does this through a type called "Lwt.t", which is very close to promises in JS. Lwt.t is a monad, and to chain monads like normal code, you need to use "bind". In OCaml, you can chain expressions with ";" or "let ... in", but if you want to chain monads, you have to use "bind" or ">>=" instead. This is considered annoying by some people, and those people often want monadic code to look the same as regular code. For this, Lwt and Jane Street (the company behind Async) have syntax extensions, called "PPX". PPXs are OCaml code that is executed on your code before your code is compiled. You could call them macros. The equivalent of PPXs in the JS world would be Babel, and a single PPX would be a Babel transformer.
In 4.02, Lwt added the Ppx_lwt library, that allows you to write code that is close to regular code, but is monadic: "let%lwt ... in" instead of "let ... in". There's also ppx_let, by the people behind Async: "let%bind ... in".
That brings us to binding operators. In 4.08, OCaml added support in the base language (so no need for PPXs) for monadic let. Here's the manual page about it: https://ocaml.org/manual/bindingops.html. So now, you can write "let* ... in", which is a bit shorter, and doesn't rely on external libraries or PPXs. You can see in the manual page I linked in the 2nd and 3rd code blocks examples how this makes monadic code look more like regular code. There's also "let+ ... and+ ... in", which has support for applicative functors.
Your point about using "just" is an important one. When you're inside all of that, it seems crystal clear, but for people coming from outside that bubble, it can be quite opaque. To finish on another JS example, this story is close to the story around asynchronous code in JS: first callbacks, then promises, then syntactic support in the language to make promise-heavy code look just like regular code. The difference here is that OCaml put a more general async/await in the language, but no promises.
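To make that JS analogy concrete, here's a small TypeScript sketch (`getUser` and the greeting functions are just illustrative names): `.then` plays the role of the explicit `bind` call, and async/await is the built-in sugar that makes the same chain look like regular code, much like OCaml's `let*`.

```typescript
// Hypothetical async source of data.
function getUser(): Promise<string> {
  return Promise.resolve("alice");
}

// Explicit chaining, analogous to calling `bind` by hand:
function greetWithThen(): Promise<string> {
  return getUser().then((name) => Promise.resolve(`hello ${name}`));
}

// The same chain with language-level sugar, analogous to `let*`:
async function greetWithAwait(): Promise<string> {
  const name = await getUser();
  return `hello ${name}`;
}
```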
To go back to ReScript, you can use promises in ReScript, but there is no syntax support for monadic code in general, or promises themselves. So this is like JS with promises but without async/await.
Sorry, but that's not true. You can adopt ReScript one file at a time. ReScript supports importing and exporting TypeScript types out of the box, so you still get the type safety from "the other side".
We started using ReScript by adopting individual files, until most of our codebase was ReScript. We still have TypeScript lingering around, but folks _choose_ to refactor it to ReScript because they've found it easier to work with.
There are a lot of stranger things TypeScript makes you do to make things work properly =P
Rescript doesn't make you do strange things. The async/await stuff is annoying for sure, but after 3 years of using it, we've kinda forgotten about it.
> Unfortunately, today it's necessary to understand difference between OCaml, BuckleScript, Reason and ReScript to navigate in the ecosystem comfortably
I got hit by this literally yesterday. It took me a while to understand this mess.
Yup, that's a rough start.
I don't see how any project could take off after rebranding and splitting from an already small community (Reason users), itself a split from a relatively small community (OCaml users).
It seems like the road to success is the other way around: make it a little better while staying super easy to adopt, and iterate on that à la TypeScript or Reason, which can be considered alternate syntaxes for JavaScript and OCaml respectively and are super easy to adopt. Forking Reason into yet another programming language that has no ecosystem/community and waiting for it to grow is almost always the promise of a bad pain/benefit ratio.
A lot of the original community members were also functional programming enthusiasts, and the recent shift to ReScript has intentionally alienated them in order to make the language more palatable for newbies and gain market share from TypeScript. I understand the motivation, but some of the changes involve opinionated omissions of features in order to steer open-source code bases away from more traditional FP paradigms.
Typescript is a gateway drug for Javascript programmers. It's backwards compatible. So, it's easy to get started. However, it comes with a lot of compromises to enable that backwards compatibility.
If you think of javascript as a compilation target rather than a language, typescript is just one of many languages that can target that. And I'd argue that even javascript these days is transpiled to what runs in the browser. It might be the same javascript dialect. But minified/obfuscated etc. But more commonly, it's more than just that. In short, modern javascript development involves transpilation/compilation. It just does. The whole scripted/interpreted vs. compiled distinction that people used to make is less relevant these days. It pretty much always involves some kind of compiler.
And if you are going to use a compiler, you might as well use a proper one that does some useful stuff like preventing bugs, checking types, optimizing, etc. Type systems have come a long way in the last 20 years. In short, there is no technical reason to be using JavaScript exclusively in 2022. Whether it's Clojure, Elm, Kotlin, or other languages, they each have in common that they have JS transpilers that work quite nicely and type systems that are definitely a level up from the unholy mess that is JavaScript's "it could be a string or some object or undefined, but who the hell knows". Some people actually prefer that. Other people want no part of that kind of sloppy typing. And the good news is that there is no technical need to have any of it.
I've been using Kotlin-js for the last year for a web application. The tool chain has some rough edges, but the language is generally a joy to work with, also for browser development. Definitely an upgrade over TypeScript, which I've also used in the past. And we have quite a bit of multiplatform Kotlin code that we also use server side. A lot of what we do is increasingly multiplatform and not explicitly dependent on the JVM. We could choose to use that code also on Android or iOS (via the native compiler). With Compose Web and other emerging Kotlin web frameworks and the upcoming WASM compiler for Kotlin, I suspect it's going to become a lot more popular to use for web applications. With WASM, transpiling to JavaScript will ultimately become a thing of the past. If you are compiling, you might as well produce WASM. Short term there are some open interoperability issues with browser APIs of course. However, nothing that can't be solved long term.
Have you had issues around the distribution size? I've been playing with building a kvision based app for the past year as a side project, and I can't get much lower than 2MB. I suppose I might try a "hello world" with compose web and see how that shakes out.
It's a concern but it has been getting better. Dead code elimination (DCE) in kotlin.js has improved in the last year as they rolled out support for the new IR compiler with 1.5 and 1.6. It's still not perfect, but it's getting there.
For what it is worth, we are using the Fritz2 framework, which is quite nice. That was definitely a bit of a bet a year ago but we've stuck with it and they are getting close to a 1.0 now. We might switch to compose web at some point but it looks a bit early for that right now. One plus point with Fritz2 is that it comes with a nice component library, support for themes, etc. So you can get up and running pretty quickly without spending a lot of time on custom component design.
You might be interested in trying PureScript as well. It has good FFI support, and as a consequence there exist well-maintained bindings for React etc. It can also run on other backends besides JavaScript, which means you can use it for backend programming too.
Yes, out of all the alternatives I looked at, I chose PureScript because of the (relatively) easy JS interop (which Elm lacks), having types without needing the JVM (unlike ClojureScript), the better async handling without the `then` chains and function 'pyramids', and, last but not least, the lack of C-style JS syntax.
Yeah I also noticed Purescript was missing from your comparisons list. It's the only one of these that I've looked at much, and it seemed like the best of them, though I wasn't aware of the issue with ES6 modules. I don't use JS at all for now, so I haven't been up to date about anything in it.
None of the available alternatives to typescript — Elm, Purescript, Reason/Rescript — give me the same feeling of confidence in the future direction of these projects as typescript does. At least typescript is very closely aligned with the development of javascript; and it's also very widely used.
- Elm: not general-purpose enough (only useful for UIs), plus has a tendency of making drastic changes between versions (remember how loudly people complained here? [0])
- Purescript: the designer of the language, Phil Freeman, has stepped away from this project, and it doesn't seem to have gained sufficient traction
- Rescript: I'm put off by its ocamlness; and worry whether it will repeat the trajectory of Flow, which has essentially become irrelevant
> Elm: not general-purpose enough (only useful for UIs), plus has a tendency of making drastic changes between versions (remember how loudly people complained here?
Not really, it just has a slow release schedule. Over time it is relatively stable.
Not a single thing has changed about it the last two years and the creator has communicated that it will probably stay mostly the same for quite a while.
Yes, there have been some bigger changes in the past but that is fair game for software that hasn't hit version 1 yet and it was communicated that it might happen.
The drama is mostly an issue caused by misaligned expectations: Many people assume that open source projects should be developed openly with everyone being able to contribute and having a say about the direction. Evan, the author of Elm, prefers to keep tight control of everything. The approach has some advantages and disadvantages and I can see why people might decide to stay away from the community but honestly if you just want to get work done in Elm, it is not much of an issue either way.
Elm is probably the most stable and reliable choice for frontend code, and it helps avoid the churn that happens in the JS ecosystem.
Business logic can be shared with .NET and it has lots of ways to interoperate with JS. There's even a TypeScript converter though of course it's not as pleasant as "yarn add" and YMMV : https://fable.io/ts2fable/
> Despite requiring very verbose type annotations, TypeScript does not have a sound type system, meaning it does not guarantee the absence of type-related errors in runtime even if everything compiles fine.
Does ReScript solve that? If so, how? In my experience this is mostly an issue with data/functions coming from outside sources (Return values from HTTP calls to endpoints you don't control being a big culprit). Sure, you can write type definitions for that but that doesn't mean anything unless everything is manually validated/checked at runtime as nothing really guarantees that the type definitions are actually correct. Does ReScript handle that automatically?
That's not what soundness means in a type system. Of course untyped input has to be parsed and validated.
An unsound type system means that you can write code that compiles, but experiences a runtime error caused specifically by the types (not the values or the business logic conditions) not actually being compatible. The famous example is having an array of a subtype being used as an array of the supertype:
class Animal {}
class Dog extends Animal { bark() {} }
class Cat extends Animal {}

function doStuff(animals: Animal[]) {
  animals.length = 0 // JS arrays have no .clear(); this empties the array in place
  animals.push(new Cat())
}

const dogs: Dog[] = [new Dog()]
doStuff(dogs) // TypeScript accepts this: arrays are treated covariantly
dogs[0].bark() // <-- runtime error: dogs[0] is actually a Cat
In a sound type system, that code would not compile because we know that you can't treat an array of Dog as an array of Animal in general.
I don't think that example would compile in TS either (not 100% confident though). The "issue" with TS is that everything type related completely disappears after the build step and that needs to be kept in mind in development. Some developers struggle with that, especially at the boundaries to external APIs/libs/whatever.
> I don't think that example would compile in TS either (not 100% confident though).
I believe that code does compile by default.
If you turn on the strict flag, that exact code wouldn't compile. BUT, if you wrote a class method instead of a top-level function in the above snippet, it STILL would compile.
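For the curious, the function-vs-method difference comes from `strictFunctionTypes` only applying to function-typed properties; method shorthand parameters stay bivariant. A minimal sketch (the interface names here are made up for illustration):

```typescript
interface Animal { name: string }
interface Dog extends Animal { bark(): string }

// Function-property syntax: parameters are checked contravariantly
// under strictFunctionTypes, so this assignment is a compile error:
// interface HandlerProp { handle: (a: Animal) => void }
// const bad: HandlerProp = { handle: (d: Dog) => d.bark() };

// Method shorthand syntax: parameters stay bivariant, so the
// equivalent assignment still compiles, even though it's unsound.
interface HandlerMethod { handle(a: Animal): void }
const sneaky: HandlerMethod = { handle: (d: Dog) => d.bark() };

// Calling it with a plain Animal type-checks fine, but blows up at
// runtime because `bark` doesn't exist on the value we passed in.
```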
> The "issue" with TS is that everything type related completely disappears after the build step and that needs to be kept in mind in development. Some developers struggle with that, especially at the boundaries to external APIs/libs/whatever.
I agree that a lot of devs struggle with that, but that's not really relevant to the type system being sound, and has nothing to do with the example I wrote. It can be proven to be incorrect at compile-time, so the compiler should reject it.
In fact, most of the "hardcore" statically typed languages you hear about have full type-erasure: Haskell, Rust, ML, etc. Yet, they have reputations for very strong and strict static type systems. None of those languages would let the above garbage compile.
There are still "weak" points where incorrect typing can be introduced, e.g. incorrectly written bindings.
Regarding data from HTTP calls: in theory it can be cast to any type by abusing the `external` keyword, which is meant for bindings.
But the common and recommended approach is to write codecs. A codec performs all the necessary checks to ensure that the data is in the correct shape to satisfy the type restrictions.
You can see some examples here: https://github.com/greyblake/from-typescript-to-rescript/blo...
TypeScript does nothing like that, under any strict settings, because it doesn't provide a safe JSON parsing alternative, so that boundary is always unsafe.
I bet there are runtime checkers that use some kind of reflection, that would be the next best thing. But I also bet that those force you to annotate/decorate everything.
You can redefine all the definitions from lib.d.ts and similar. In my projects, the parse function returns `unknown`, so I need to parse/validate/decode it at runtime to satisfy the types.
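A minimal sketch of that approach (the `User` shape and `isUser` guard are illustrative names, not from any particular library): treat the parsed value as `unknown` and narrow it with a hand-written runtime check before it enters typed code.

```typescript
interface User { id: number; name: string }

// User-defined type guard: the only place we inspect raw data.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" && value !== null &&
    typeof (value as Record<string, unknown>).id === "number" &&
    typeof (value as Record<string, unknown>).name === "string"
  );
}

function parseUser(json: string): User {
  const data: unknown = JSON.parse(json); // don't trust the implicit `any`
  if (!isUser(data)) throw new Error("payload is not a User");
  return data;
}
```

This is essentially a hand-rolled codec: the boundary is unsafe exactly once, inside `parseUser`, and everything downstream can rely on the `User` type.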
I've used Rescript (and ReasonML before that) for side projects, I really like it. I'd say it main strength is also its main weaknesses; it's not JS with a C# sauce (like TS), it's OCaml'ish with a JS sauce, making it really nice in many ways but also harder to pick up for JS folks.
That's where TypeScript clearly has the advantage; being just JS with types makes it much easier to get started, and the fact that it's huge also helps a lot.
It's a bit sad, as I think ReScript is much better on many fronts: the type system with crazy good inference and soundness; very easy to configure and add to existing projects (it just outputs JS files, even with TS types); the module system is amazing (gone are the imports); and creating simple bindings to existing JS/TS is also quite easy once you understand the basics; I'd go as far as saying it's easier than TS. It's also really fast, both the compilation and the JS it outputs.
Because of the adoption hurdle I doubt it will be as big as TS, but I really hope it gains enough momentum and usage to just sit comfy in its niche; for that, I think it needs to grow a bit further.
I do think the ReScript team has made incredible progress on all fronts; I think ReScript is much easier to pick up than Reason, because of the language tweaks, but also the new website is so much better than the scattered docs in multiple places.
I hope it will continue to improve and more people will give it a go!
Yes and no. The syntax has been updated to make it less OCaml and more JavaScript-like.
ReasonML can compile to native, rescript can’t. Reason still exists as a separate project, and projects that have been rewritten into reason will not necessarily switch.
Honestly, that was the selling point of ReasonML for me: share business ___domain types and common code, but also get a natively compiled backend. The pattern matching, variants, and syntax of ReScript are quite nice, but unfortunately I don't find them enough of a selling point for frontend code, which (in my experience) doesn't benefit as much from those features as, say, writing compilers or parsers. The majority of what you're writing is interacting with third-party libs and components, building forms, etc. in React, for which the code ends up looking pretty similar in rescript-react.
We tried doing this with darklang. It ended up not actually working that well (it took a lot of effort to have even a few shared types), and it didn't get us very much either.
It's because Reason and BuckleScript were two separate projects with two separate goals. The goal of Reason was to be multi-platform, which was causing lots of technical challenges for the BuckleScript side, which didn't care about that need but was doing most of the work. So the two needed to split up, and no one had ever heard of BuckleScript.
Then the messaging on the name change was unclear, and the fact that Reason still exists (walking dead, but the name is still out there) made it all very unclear to everyone what was happening, and I think that caused a massive amount of damage to the momentum.
But then again, it's a great language that's moving quickly, so I expect it will continue to grow. Certainly my company is built on it.
I've been using ReScript for a few months now. So far I mostly like it, but it takes some time to get used to. The error messages can be confusing, though, so I try not to depend too much on type inference. The other downside is that bindings for some of the web APIs seem to be missing (like for the types in the Dom module).
Btw, for Material UI there are already bindings [1]
I'm not saying CoffeeScript is bad, nor ReScript. I just wouldn't use them for commercial purposes, because it's likely that most CoffeeScript projects today are legacy, and I suspect ReScript will be the same in the future.
As a sort of litmus test I use the number of stars a superset of(or language compiling to) JavaScript has on GitHub and how that compares to fartscroll.js: https://github.com/theonion/fartscroll.js/
By this measure ReScript is still a niche language.
> there was a type mismatch, but the error the compiler reported was quite far from the original error made by me. It's not a compiler's fault, the fault was purely mine.
Most languages "aid the user" with better error locality by requiring the user aid them with far more type annotations. Lack of error locality is presently a downside of relying on pervasive type inference (at least in both OCaml and Haskell).
I expect there's room (and responsibility) to improve, so I do accord the language some of the blame. But error locality can be recovered by adding back some of the annotations that another language would require, and that can be deferred until the locality is actually wanted. I don't think it's right to say that this requires more help from the user than the baseline. And I think some of the blame lands on users who misuse the tools by not adding enough annotations to get the kind of error locality they want (as noted in the article, the author stopped doing that, although it may've been an overreaction).
I feel like the amount of time you'd spend dealing with weird interop is far, far greater than the number of times TypeScript's unsoundness would bite you (hardly ever, in my experience).
Yes, if you target Node.js, which I believe is supported first-class on AWS Lambda. ReScript compiles to very clean JS that looks almost hand-written. It should be easy to deploy to a Lambda.
The quality of the generated JS is a bit irrelevant as they'll probably want to minify the generated JS to reduce the size of the assets and make cold starts quicker anyways.