JavaScript async/await implemented in V8 (googlesource.com)
509 points by onestone on May 17, 2016 | 217 comments



The moment I started using async/await (with Babel) combined with the new fetch API, so many libraries became obsolete.

Getting data is as easy as:

  async function main () {
    try {
      const res = await fetch('https://api.github.com/orgs/facebook');
      const json = await res.json();
      console.log(json);
    } catch (e) {
      // handle error
    }  
  }
So I'll be quite happy when this lands in modern browsers asap.


Can anyone point to a good explanation why this is preferable to cooperative threads (coroutines)? I.e. the above code could easily be written as

  function main() {
    try {
      const res = fetch('https://api.github.com/orgs/facebook');    // yield here
      const json = res.json();                                      // yield here
      console.log(json);
    } catch (e) {
      // handle error
    }
  }
and the runtime would automatically yield this coroutine and let other coroutines run whenever some call blocks.

I guess the only real difference would be that I would make `await` (and `async`) implicit, similar to how exceptions are handled implicitly in the above snippet (i.e. we don't annotate functions as `throwing`, and we don't use an `attempt` statement (analogous to `await`) when calling `throwing` functions).

Edit: Reading various articles, the best explanation is that since the single-threaded nature of JavaScript makes synchronicity implicit (i.e. all code runs in an implicit transaction), it has to make asynchronicity explicit. The alternative would be to have coroutines/fibres (implicit asynchronicity), but with explicit `atomic` blocks (for synchronicity).

C# doesn't provide any synchronicity guarantees, so I guess the motivations there were different (it's really hard (impossible?) to implement fibres efficiently, and even harder to allow for native-code interoperability).


There is also the point of flexibility. Async/await works on promises, but promises themselves are just perfectly un-magical JS objects that accept a callback. This results in two effects:

1) Yielding is decoupled from the asynchronous action itself, giving the caller fine-grained control over when to yield. For example, with async/await, management of parallel operations is straightforward:

  var promise = fetch('https://foo.com/');
  // ... do something useful while the fetch runs ...
  var result = await promise;
Same goes for concurrent fetches:

  var promise1 = fetch('https://foo.com/');
  var promise2 = fetch('https://bar.com/');
  var [result1, result2] = await Promise.all([promise1, promise2]);
2) Promises + async/await allow you to switch back and forth very easily between callback-style and coroutine-style APIs, which makes integrating old APIs very easy:

  async function f() {
    var p = fetch('https://foo.com/');
    p = p.then(function (inbetweenResult) {
      return new Promise(function (resolve) {
        oldapi.onresult = resolve;
        oldapi.process(inbetweenResult);
      });
    });
    return await p;
  }

  f().then(...) 
... and so on.


The explicitness is the reason you can currently assume* that nobody will ever modify your variables unless you explicitly give up control somehow. If await were implicit, that would no longer be the case, and any function call could theoretically pause the function and give up control to other functions.

*Certain DOM APIs do violate this contract; notably, window.open() on Firefox pauses everything but setTimeout.


Just to expand a bit upon this: being able to assume a single thread simplifies the code immensely, e.g. when modifying state. It's important to be explicit about when this assumption is broken.


The problem with that is that any mutable escaped variable (i.e. explicitly or implicitly global) can potentially be modified by any function call. On the other hand, a non-escaped variable cannot be mutated behind your back, yield or no yield. At least, that's the case in a sane language; not sure about JS.


escaped/non-escaped variables are not a thing in JS


There are no locals in JS (honest question)? If that's the case, you have to assume that everything can be mutated everywhere, except for those functions that explicitly and fully document their side effects (and yield would certainly be a side effect).


Variables declared with the var keyword are function-scoped in JS, so they are local to the function they are declared in and lexically scoped to that function's closure. So you can be fairly confident your local variables are not mutated anywhere else.

That being said, you can totally have globals and whatnot that can be mutated anywhere (e.g. parameters passed to a function, or literal globals installed in the global scope), or things declared in an enclosing scope that can still be modified by multiple functions.
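
For example, a quick sketch of that distinction (illustrative only):

    function counter() {
      var count = 0;              // local: nothing outside this function can reach it
      return function () {        // only this closure can touch count
        count += 1;
        return count;
      };
    }

    var total = 0;                // effectively global: any function could mutate it
    function bump() { total += 1; }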

Hope that clears it up


Perhaps this is the reason:

Is it possible to have a yield inside a called function (i.e., not the generator function itself)? Last time I checked this was not allowed (?)

Anyway, if so, I would strongly prefer that over an async/await construct, which is less general.


> Is it possible to have a yield inside a called function (i.e., not the generator function itself)?

Call a generator function from a generator function? Sure, you just need to transitively yield its content using `yield* [[expr]]` (which delegates part of the iteration to the `expr` generator).
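
A minimal sketch of that delegation:

    function* inner() {
      yield 1;
      yield 2;
    }

    function* outer() {
      yield 0;
      yield* inner();  // delegates: outer yields 1 and 2 on inner's behalf
      yield 3;
    }

    console.log([...outer()]);  // [0, 1, 2, 3]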


So what tomp is doing:

    const res = fetch('https://api.github.com/orgs/facebook');    // yield here
actually requires a "yield" in front of the "fetch"? But what if I want the fetch() function to decide whether to yield or not?

Of course, fetch() could yield a status specifying whether its calling ancestors should yield, but this can become unwieldy very quickly, and might require an exception mechanism of its own. Better to just let called functions yield.


In JavaScript it would, yes (actually a `yield*`, or an `await` in async/await). With native coroutines it wouldn't (the runtime would implicitly yield on IO).

> But what if I want the fetch() function to decide whether to yield or not?

In javascript? The question doesn't really make sense, a function is either sync or async.


> The question doesn't really make sense, a function is either sync or async.

Well, either fetch() blocks, or not. If it blocks, then it should yield, its caller should yield, etc. If it doesn't block, then it can just run to completion.


Don't elide important parts of the quote. In javascript your question doesn't make sense.

> Well, either fetch() blocks, or not.

Javascript functions always block.

> If it blocks, then it should yield, its caller should yield, etc. If it doesn't block, then it can just run to completion.

If a function is async, you must yield[0]. It may not need that capability and doesn't have to yield at all, e.g.

    function* gen() {
        return 4;
    }
is valid and will not suspend. You must still yield*[0] on it so that it is properly resolved as a generator.

[0] I think that's not an issue for async/await, as it's syntactic sugar for then chains; the code itself may be converted to a generator or whatever, but it will embed the driver.


Is this the problem Python solved with the 'yield from' construct?


It's basically the same thing, but JavaScript doesn't come with a new event loop and other constructs to use it, since similar functionality is already in JS.


Probably not preferable. Async/await works on top of promises...


It might be something to do with backwards compatibility.


What do you mean? Edit: I mean, can you give any examples of how old code would be broken by implementing something like this?


Is the syntax composable? Can I do

    const json = await (await fetch('https://api.github.com/orgs/facebook')).json();
or do I have to name it?


yeah that works, your statement is fine


nice, thanks!


You probably can, but you may as well just do this instead:

    const json = await fetch('https://api.github.com/orgs/facebook').then(res => res.json());


I think the await code is more explicit, whereas a callback is somewhat ambiguous


I'll be glad to do away with callbacks completely tbh.


What you asked is whether `await` is idempotent. No, it isn't. The "argument" to `await` is a Promise, and the "return value" is the resolved value of the Promise (i.e. `Promise#then`).

EDIT: I missed the `.json()` part, but I suppose it doesn't return a Promise, no?


Actually, .json() returns a Promise. The first fetch promise only resolves with the HTTP status code and some other details; the content itself is streamed. You can also access the stream if you want to. In general you would rather just call .json() and get the parsed data.

The fetch function can, for example, also be used to get images. There you would use .blob() and assign the result to an image's src with URL.createObjectURL(blob).
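
For example, a minimal sketch of the image case (the element argument is assumed):

    async function showImage(url, imgElement) {
      const res = await fetch(url);
      const blob = await res.blob();
      imgElement.src = URL.createObjectURL(blob);
    }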

See here: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/U...


This is totally not what I'm asking, thanks.


I don't understand your code. The .json() should be inside the parentheses, because you cannot await something that has already been converted from a Promise to a real value. And even in that case, the code would be exactly equivalent to the synchronous version, because you are awaiting the Promise's result in place. So, given my very limited knowledge of JavaScript, I think you can probably write it by moving the .json() inside, but I don't see why you would ever want to if you can get the same result without the await boilerplate...

EDIT: got it, you want to do it inside an async function with some other code after the await; then I think it makes sense in that case.


fetch() returns a promise, which is unwrapped using await. fetch's promise contains a response object, which has a json method, which returns a promise.

  const responsePromise = fetch('https://api.github.com/orgs/facebook');
  const response = await responsePromise;
  const jsonPromise = response.json();
  const json = await jsonPromise;


Tip: if you use anything other than fetch you'll just get JSON back immediately based on the MIME type, without the unnecessary conversion step.


Those alternatives to fetch() still do the "unnecessary conversion step". Don't dismiss fetch for the fact that it doesn't hand-hold or make assumptions.

It should be said that fetch() is a quite low-level call compared to what we've had before for Ajax, so in larger projects it makes sense to wrap it, to keep any logging, error handling and retry in one place.
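
One possible shape for such a wrapper (a sketch; the names are made up):

    async function apiFetch(url, options) {
      const res = await fetch(url, options);
      if (!res.ok) {
        console.error('request failed:', res.status, url);  // logging in one place
        throw new Error('HTTP ' + res.status);               // error handling in one place
      }
      return res.json();
    }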


> it doesn't hand hold or make assumptions

Using MIME types isn't hand holding or making an assumption.

Yes, people who need the low level features will wrap it with a series of competing high level libraries. It's a missed opportunity to make a standard, capable high level API though.


The Fetch API also doesn't have any means of tracking progress, making it strictly inferior to XMLHttpRequest.


In fetch's defence it will in future once the Streams API is finalised: https://github.com/whatwg/fetch/issues/21


Fetch does have streaming uploads and downloads, and quite a few other features that don't come with XMLHttpRequest, making it not strictly inferior to XMLHttpRequest.


Will have. Future tense. Not present tense.



Yes, code like that works.


With this natively in V8 the debug experience ought to be much better than now, too


Is using async/await through babel a good idea? When I last looked at it I thought it was using a busy loop which didn't seem like such a great idea to use in shipping code.


It's okay but clumsy. It's a state machine, not a busy loop, but the compiled code is a lot more verbose than the input code (and so you pay a latency penalty on the client). The developer experience is also sub-par: you'll get regenerator stack frames in any of your stack traces, and it interacts poorly with babel-watch.

You can do it, it's not insane, but native async/await will be much, much nicer.


Can you elaborate on "it interacts poorly with babel-watch."?


IIRC under babel-watch it required me to include babel-polyfill at the top of my main script, but under babel-node it prevented me from including babel-polyfill at the top of my main script. (babel-polyfill has a check so that it can only be required once, and errors out otherwise.) That meant I needed to keep commenting/uncommenting that line when I wanted to debug things.

There were also cases where I wasn't sure if babel-watch was reloading dependencies properly if I used async code in a file, which is why I had to keep switching to babel-node or running the code through babel and inspecting the generated code to debug. Overall, it just felt very new and unpolished (which I guess it was...I was working with it around March, so babel-watch was < 1 month old and babel async/await was about 2-3 months at the time) to be relying on all the time.


If you don't want to use regenerator and your environment supports generators already you can use the async-to-generator transform https://gist.github.com/rauchg/8199de60db48026a6670620a1c33b...


It's fine. It transforms async functions into a state machine.


Async/await is just syntax-sugar around Promises. No busy loops involved. (Those generally don't work in Javascript anyway.)


This overlooks that Promises need to be polyfilled. Though that's certainly not done with busy loops either (although most polyfills I've seen overlook the more performant alternatives to setTimeout(fn, 0), which is unfortunate).


Async/await already needs to be transpiled in for most browsers, so Promises needing to be polyfilled seems like the least of your problems.


The try and catch inside the async function is also great. Curious though, is using const over let common? I usually use const for imports and module level globals, and let inside functions. I recently got back into javascript, and it's nice to see it flourishing.


The great thing about let and const, for me, is that it gives you crucial information about variables without having to look further down.

const should be the default, and if you're going to reassign it, then use let.

In most code, you'll use const on the vast majority of variables, and it'll make let assignments stick out, which helps a lot when you're browsing the code or refactoring.
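
For example (a small sketch; the data source is hypothetical):

    const items = getItems();  // const: this binding never changes
    let total = 0;             // let signals: this will be reassigned below
    for (const item of items) {
      total += item.price;
    }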


This right here. One of my friends (a more experienced developer) told me over coffee one day, "I use const for everything", which really stuck with me. Having let stand out really helps make it clear when you need a mutation.


However, it's worth noting that using const doesn't make variables immutable. It only protects them from being reassigned:

    const val = 'a';
    val = 'b'; // TypeError

    const obj = { val: 'a' };
    obj.val = 'b'; // This line will still work.

    obj = { val: 'b' }; // TypeError


Yep. In those cases, I still prefer let personally even though I know the variable itself will not be re-assigned, kind of as an indicator that I expect the referenced object to be mutated.


Coming from a background that includes some experience with C / C++, `const` here works as I expect it to. I'd prefer `const` even for the object reference case: I get errors on reassignment this way, and presumably it makes additional optimizations possible. To each their own, I suppose :)

There's also `Object.defineProperty()`, `Object.freeze()` for more control over mutability.
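
For example:

    'use strict';
    const obj = Object.freeze({ val: 'a' });
    obj.val = 'b';  // TypeError in strict mode (silently ignored otherwise)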


I just wish let was in fact const (and something like "local" could be let), simply because the keyword let is nicer to type, easier to read, and more importantly better in line with the meaning of let in Lisp/Scheme/ML/OCaml/et al.


Yeah, I was thinking the same! I was even considering writing a custom babel transform to do this for me :P


We also favor const as the default for everything in our team's code. When someone uses let, it's immediately obvious that the reference will change.

One case that surprised me a bit is that it's correct to use const in a for-of loop, e.g.

    for (const a of arr) {}


I've been using a lot of const function declarations

`const fn = (i) => ...`


Yes, http://eslint.org/docs/rules/prefer-const is a fairly common [1] ESLint rule.

1. Admittedly my only data point is AirBnB's style guide / ESLint config: https://github.com/airbnb/javascript#references--prefer-cons...


const is more declarative as you can immediately tell that the data isn't going to change throughout the duration of the program (although technically you can change the data, just not the reference).


> although technically you can change the data, just not the reference

This is why I'm hoping to see native JS immutable data structures eventually (maybe it's already in a proposal somewhere?). ImmutableJS and Mori are great, but having a native solution that's available everywhere would be ideal.


Consider looking at other languages. ClojureScript does a pretty good job at immutability, Elm is great for immutability + strong typing, and PureScript adds in Haskell's advanced type system (typeclasses, HKTs, etc.)


Definitely. ClojureScript is what I usually choose for my own projects, but the reality of the front-end job market is that JavaScript still dominates, so improvements to JavaScript itself will still have a significant impact on the lives of developers everywhere regardless of the existence of other compile-to-JS languages.


When I want to ensure strict immutability, I use an implementation of deepFreeze (https://github.com/jsdf/deep-freeze) that recursively calls Object.freeze() on all child properties. Obviously slower than native browser support would be. Does ClojureScript rely on the Clojure compiler to ensure the properties aren't modified, or is there some polyfill?
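
The recursive freeze described above looks roughly like this (a sketch, with no cycle handling):

    function deepFreeze(obj) {
      for (const key of Object.getOwnPropertyNames(obj)) {
        const value = obj[key];
        if (value !== null && typeof value === 'object') {
          deepFreeze(value);  // freeze children first
        }
      }
      return Object.freeze(obj);
    }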


IMHE, const is generally the default and let is the exception.


I'm alone in this, but I prefer `let` over everything else unless it's something like Redux action strings, database connection strings, or anything of the like. I rarely use `const`, for the ridiculous reason that I find it too long (5 characters!). `let` is short, reads nicely, and my programming style in general never mutates stuff anyway (I use Ramda / always return new data).

That said, `const` is picking up to be the default for everything. I can definitely see good reasons to use it when declaring functions and required modules or working in a team.


I'm curious why you let the amount of typing dictate what you use. If five characters is such an impediment, why not just use a snippet, giving you less error-prone code than what you have now (e.g., 'co ' expands to const)?


As you admitted, your behaviour is ridiculous; I hope, for your sake, that your boss or customers don't read this.


What do you use for the new fetch api on node?



I don't use it on Node but this syntax is great on react native.


node-fetch is one option, but I go with isomorphic-fetch[0] (since it can be run client-side via Babel).

[0] https://github.com/matthew-andrews/isomorphic-fetch


FYI, isomorphic-fetch is just a universal wrapper around node-fetch and github-fetch.


Why do you need to put await before res.json?


Someone correct me if I'm wrong here, this is just my hunch: according to the fetch spec, res.json() returns a promise, which is async. I'm assuming that's so you have the chance to handle errors resulting from malformed JSON.


Promises are used for catching errors?


Yes.


At the time that the request resolves, it has received and processed the response headers, but it may still be receiving the response body (which is represented as a stream). All the body representations (like .text() or .json()) need to wait until the body has arrived before they can resolve.

https://fetch.spec.whatwg.org/#concept-body-consume-body


Thank you to Microsoft and Anders Hejlsberg for inventing Async/Await (and to F# for the original inspiration) https://en.wikipedia.org/wiki/Futures_and_promises


async/await as we know it actually came from Midori's C# dialect, which indeed was inspired from F#: http://joeduffyblog.com/2015/11/19/asynchronous-everything/

(Although, just to be pedantic, the concept of futures is way older than F#. Alice's syntax, for example, is pretty close)


The nice part about F#'s async implementation is that it was purely a library. I wish languages aimed at allowing developers to build such things within the language, rather than providing more compiler built-ins. Making it extensible is more elegant (though I understand built-ins might be easier to optimize for performance).


Integrated features are easier to tool and provide good diagnostics for. It's nice to have enough expressiveness to model monadic computations without too much syntactic overhead - and monads are usually enough, since they hand blobs of user code to the library to be composed as necessary - but debugging a reshuffled graph of closures is no picnic.


This news is almost as important as the rest of ES2015 being implemented in WebKit.

async/await is the final piece in the puzzle of providing real solutions to callback hell in JavaScript.


I am willing to admit that async/await is a bridge between the Node and .NET communities -- surmounting this means we are that much closer to literally doubling the developer pool for either section of the community.


How so? I like JS async/await a lot, but you couldn't pay me to write .NET.

I personally don't know many devs who would choose to learn a new language because of a feature (assuming they didn't want to learn it before the feature existed).


I held my nose when I was forced to write JavaScript until I learned about async/await.

I came from a language that can do this kind of thing implicitly using coroutines (i.e., I could write imperative code that would pause on a network request and resume afterwards). Having to wade through callback hell, even with Promises, was a serious step down.

With async/await being explicit, combined with functionality like Promise.all(), this is actually a better solution than I had before, because I can spawn a dozen queries at once and wait for them all to complete before I continue. I've used that to good advantage already once with an await Promise.all(...) call, and the patterns are so easy to follow that it brings tears to my eyes.

Or I can inject more traditional callbacks when the logic would be easier (or when it would allow several simultaneous calls to complete), which does happen from time to time.

As to whether I'd learn a new language because of it: I'm considering learning Elixir because of even stronger guarantees [1], actually, so yes, I would.

[1] Apparently, in addition to being able to write code using light threads like this, you can also put a cap on how long the VM will run code, so if some idiot developer writes a loop that doesn't return control often enough, instead of destroying your UI experience it will just pause that loop and resume the main loop. I have yet to try it, though.


I don't understand. I also like JS async/await and like JS and node in general but .NET is simply beautiful IMHO (given you are using F# or C#). Why would you rather not write .NET code?


C# is a great language, but many people don't want to have to specify type information so often or create classes so often. Even many statically-typed languages don't require as much List<T> stuff. Also, everything outside of the language itself sucks: the frameworks, the operating system it has to run on, the community, etc.


I object to this. The .NET Framework is top tier and sets a high bar for standard libraries, in my opinion. It also runs on any OS just fine, especially now that Microsoft's own implementation is open source and MIT licensed. The community is also very strong with tons of docs and libraries and SO Q&As and plenty of jobs and local user groups. And if you don't like the C# boilerplate, F# has all of the same benefits.

Enough with the FUD around .NET/C#, I'm a full time Linux greybeard and even I'm sick of it.


C# has var, as long as the variable is being assigned. var t; is a readability nightmare to me anyway.

As for creating classes so often, they are adding better support for tuples in the next version (C# 7).


Why would it be futile for someone to offer you money to write .NET code (and to be clear, I'm assuming we are talking about C#)?



Eddie Zaneski is an awesome and friendly guy!

Here's another awesome article on it:

http://amasad.me/2015/10/31/javascript-async-functions-for-e...


Why is "async" keyword needed? Can't JS engine infer from the use of "await" in a function that this function need to be async? I'm using async/await for a while now, and so many times I've introduced bugs in my code because i forget to put "async" in front of the function, or put "async" in front of wrong function. It's simply annoying to go back and put "async" when in middle of writing a function I realise I need to use await.



Thanks for this! Also later in that thread is an example to further bring home the point: https://github.com/tc39/ecmascript-asyncawait/issues/88#issu...

Solid reasoning, in my opinion.


I guess this is because you do not always want to wait for a future just after the function call. In some cases, there is code you want to execute between the call to an async function and the use of its return value.

A simple example would be the concurrent fetch of two URLs.
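
For instance, a minimal sketch where both requests are in flight before either is awaited:

    async function fetchBoth(urlA, urlB) {
      const p1 = fetch(urlA);  // both fetches start immediately
      const p2 = fetch(urlB);
      return [await p1, await p2];  // await only when the results are needed
    }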


In addition to the answers already given, I think it's important that a function is marked async in its signature, rather than requiring the reader to scan the entire method body looking for an "await" keyword.


That's how it works in C#, and personally I much prefer a clear declaration to relying on the interpreter to parse the function body in order to infer that it's async.


I'd agree with you, and I'm keen to learn the real answer. My guess would be that some initial optimizations can occur without having to also parse and analyze the body of a function, potentially having significant performance benefits to reducing the boot time of a program.


The simple version is that because `await` wasn't a keyword before, it can be a named function.

So `await (x)` is ambiguous. Marking the outer function `async function () {...}` makes await a keyword inside that function, allowing them to move forward with the syntax in a non-breaking way, as the parser now knows that there can't be a function named `await` inside any async function.
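
A sketch of the ambiguity (in non-module, non-async code, `await` is a legal identifier):

    function await(x) { return x * 2; }
    var y = await (21);  // a plain function call: y === 42

    async function f(p) {
      return await p;    // here `await` is unambiguously a keyword
    }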


Looking at Kangax's support table:

http://kangax.github.io/compat-table/esnext/#test-async_func...

Apparently Microsoft Edge seems to have been the first browser to implement it... good job Microsoft!


I have yet to see a convincing argument that this feature is necessary or even helpful beyond one-liners. The Q promises API, to me, is the right way to reason about asynchrony. Once you understand closures and first class functions, so much about complex asynchronous flows (e.g. multiple concurrent calls via Q.all, multiple "returns" via callback arguments) become so simple. The "tons of libraries" argument doesn't make tons of sense either. I've done a lot of async and I've never needed anything beyond Q that I can recall.

This feels like a step toward added confusion rather than language unity. Much like the majority of ES6 feels to me: bolted-on features from popular synchronous languages that are themselves only now adding features like lambdas.

I don't want to write faux-event-driven code that hides the eventedness beneath native abstractions. And I definitely don't want to work with developers, new and old, trying to learn the language and whether/when they should use async/await, promises, or fall back to an es5 library or on* event handlers. I want developers who grok functional event-driven code with contextual closures.


Allow me to make both a theoretical and practical argument.

It's been said that "callbacks are imperative; promises are functional". It's true. Furthermore, callbacks structure control flow and promises structure data flow. Instruction scheduling is an explicit sequence with callbacks (well, assuming the API calls you back precisely once), but scheduling is an implicit topological sort of the directed acyclic dependency graph of promises.

Sometimes, when it comes to side effects, implicit scheduling is less than ideal. You need specific things to happen in a specific order. The solution is to introduce data dependencies to force a particular schedule. This is precisely what is done by monads in Haskell. However, unlike Haskell, JavaScript doesn't have "do" syntax, so there's no convenient notation for a nested chain of bindings.

The result of not having monadic binding syntax is that you wind up with some funky nested chain of getY(x).then(y => y.then(z => z.then(...., OR you have to declare variables up top, flatten your then blocks into a promise chain, and often ignore intermediate values:

    let y, z;
    getY(x)
      .then(returnedY => { y = returnedY; return zPromise(); })
      .then(returnedZ => { z = returnedZ; return ......; })
      .then(......another side effect.....)
      .then(_ => f(y, z))
async/await neatly eliminates this problem by reusing the traditional coupling of data flow dependencies with first-order control flow dependencies.

Practically:

    let y = await getY(x);
    let z = await getZ(y);
    ......another side effect....
    return f(y, z)


This is not quite true with regards to promises - one can do

    getY(x)
      .then(returnedY => Promise.all([Promise.resolve(returnedY), zPromise()]))
      .then(([y, z]) => { ...another side effect...; return Promise.all([Promise.resolve(y), Promise.resolve(z), sideEffectPromise()]); })
      .then(([y, z, _]) => f(y, z));
Not the cleanest, but you preserve the isolation without relying on polluted variables bleeding into function scope. In addition, you can do proper error handling via .catch, as opposed to the brute-force, unnecessary catch-all that is a try-catch.


1) Yes, you can restructure/destructure variables at each step, but you're just unifying the control flow and data flow by hand. Yet another flavor of human compiler.

2) catch's entire purpose is to be a catch-all for unexpected errors. For expected errors, it's better to have explicit error values. Bluebird.js at least lets you differentiate .catch from .error, where .error handles the explicit error values generated from traditional Node.js callbacks.


> It's been said that "callbacks are imperative; promises are functional". It's true. Furthermore, callbacks structure control flow and promises structure data flow.

Do you have any good examples or further reading on this? I'm not sure I agree with the statement, or perhaps don't fully understand it. It seems like both cases boil down to functions as data (function pointers, essentially) that are called at the end of some event or chain of events.

To me, they seem to be the exact same thing, but with promises being much more human-readable.


To give a brief "explanation", look at the type signature of a callback-based API:

  readThisFile: string -> (string -> void) -> void
Usage:

  readThisFile('foo.txt', result => ...)
You have a double void here. Which is the hand-wavy justification as to why callbacks don't compose. On the other hand, a promise's value is Promise<actualTypeOfValue>, which you can pass around as data without asking the other end what it wants to do with the value (aka decoupling). You do have the imperative action at the top to kick the whole chain into motion, but the intermediate stuff is most of the time side-effect-free.

Hopefully that was clear enough to see that passing actions around is akin to controlling the flow with e.g. if-else, and that passing data around is declaring how your tubes/rail tracks/whatever analogy for monads are constructed, aka data flow.
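
To make that concrete, a sketch (readThisFilePromise and the consumers are assumed names):

    // Callback style: the caller must decide, right here, what happens next.
    readThisFile('foo.txt', result => render(result));

    // Promise style: the value is first-class data you can hand to anyone.
    const contents = readThisFilePromise('foo.txt');  // Promise<string>
    sendSomewhereElse(contents);  // the receiver decides what to do with it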


You can google that quote and find more, but the control flow vs data flow formulation is not widely discussed from what I've seen.


It's widely discussed in Haskell communities, fwiw ;)


99.9% of all JavaScript is handling state and events in a user interface (web page). Ex: you have two buttons, and when both of them have been pressed, an image of a cat is displayed.

>> functional event-driven code with contextual closures

This paradigm works very well on the server side too. Ex: an HTTP request makes a query to a database server, then returns the result to the browser.

This gives concurrency without the complexity of threads.


> (e.g. multiple concurrent calls via Q.all, multiple "returns" via callback arguments)

How about:

  async function main () {
    try {
      const responses = await Promise.all([
        fetch('https://api.github.com/orgs/facebook'),
        fetch('https://api.github.com/orgs/facebook')
      ]);
  
      const jsons = await Promise.all(responses.map(res => res.json()))
    } catch (err) {
      console.log(err)
    }
  }

I think this is pretty clear and not needing any library for this is quite nice.


Just to be picky, but I believe you would be better off with this:

    async function main () {
      try {
        const jsons = await Promise.all([
          fetch('https://api.github.com/orgs/facebook'),
          fetch('https://api.github.com/orgs/facebook')
        ]).map(promise => promise.then(res => res.json()));
      } catch (err) {
        console.log(err)
      }
    }
This way, if one response was much quicker than the other, you could begin sending it through the `res.json()` portion immediately, instead of waiting for both responses to return before continuing.


Nit -- you want:

    async function main () {
      try {
        const jsons = await Promise.all([
          fetch('https://api.github.com/orgs/facebook'),
          fetch('https://api.github.com/orgs/facebook')
        ].map(promise => promise.then(res => res.json())));
      } catch (err) {
        console.log(err)
      }
    }
As written you get "Promise.all(...).map is not a function"


Just to put it out there, an ES6 implementation using async.js would look like

    async.parallel({
        facebook: done => request("https://api.github.com/orgs/facebook", done),
        twitter: done => request("https://api.github.com/orgs/twitter", done)
    }, (err, responses) => console.log(err, responses));
And output something like:

    null, {
       facebook: { // facebook data },
       twitter: { // twitter data }
    }
Sure, it suffers from a third-party dependency, and I would argue it's slightly less concise, but we're getting to the point of opinionated APIs: just as some developers prefer promises, some will prefer async/await, and others will continue with vanilla callbacks.


This is not the same at all. An error in any of the callbacks is not catchable with one catch statement (you need a separate try-catch statement in every one of your callbacks). With async/await and promises, you only need one catch statement or one .catch method in one place, because of proper error propagation and composition.


How common are dozens/hundreds of try-catch blocks in JavaScript code? Developers long ago stopped caring about uncaught errors, because it doesn't matter in the code they are writing; they catch every error they need to and let the chips fall where they may.

In the same vein, async.js will be catching all errors passed to done(err, response), and in the case of async.series() and similar calls will stop the execution of concurrent actions at the first error that is caught.

Therefore I think it's completely valid: even though our examples are different, to most developers it doesn't even matter.

I think you have a completely valid point, and it's probably a better world where we have proper error propagation and composition; the reality of the situation just is what it is.


> How common are dozens/hundreds of try-catch blocks in javascript code

Nobody who realized the fragility of async/callbacks, and understood what it takes to make at least a passably robust program with them, would stay with async; they would quickly switch to promises/fibers/equivalent, or stop using Node. Although I have seen projects on GitHub that use try-catch blocks properly with callbacks/async, and I have to wonder what made them think there isn't a better way.

And again, async does not catch any errors; it only forwards error callbacks based on a very loose convention, which isn't even honored in core Node APIs. Just because most Node (or rather callback/async) users are clueless about error handling doesn't make the examples equivalent.


Oh you're right of course. My mistake.


Would it?

IIRC Promise.all will run the two responses in parallel, but will not send the first into the map before both are finished.


Promise.all doesn't "run" anything. Promises "run" the very moment they come into existence; Promise.all simply tells you if and when all of them are settled. Whether they execute in parallel depends on whether your Promises are doing something outside the single-threaded event loop.
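
For example (a small sketch):

    const p = new Promise(resolve => {
      console.log('runs immediately, before Promise.all is ever called');
      setTimeout(() => resolve(42), 100);
    });

    Promise.all([p]).then(([value]) => console.log(value));  // merely observes settlement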


>Promise.all doesn't "run" anything. Promises "run" the very moment they come into existence.

Yeah, I know that. I don't mean .all schedules them to run (e.g. like doing thread.start()), of course they auto start.

By "run" I mean that the .all implementation syncs with you about their completion ("tells you when all of them are settled").

>It depends on your Promises whether they are doing something outside the single threaded event loop or not for them to be parallel or not.

That's not the point though. The point was the parent's assertion about whether the first promise to resolve will escape .all and go into the .map, which I don't think is the case.


> Promises "run" the very moment they come into existence

Which is why I'm a massive fan of Futures from ramda-fantasy[0] instead of Promises for certain tasks!

[0] https://github.com/ramda/ramda-fantasy/blob/master/docs/Futu...


Yes. They will be sent into map immediately, and Promise.all() will return a Promise which will settle when the mapped array is all settled (or one errors, but that's not really relevant to the question). We're really just awaiting the promise returned from Promise.all(), but mapping the original fetches into a "new" set of promises.


I'm sorry, but I don't see why/how they would be sent to `map` immediately, given where the parenthesis are in the above code snippet. `Promise.prototype.map` isn't a builtin function is it? If the `map` call was inside the invocation of `Promise.all` then I could see it, but perhaps I am missing something.


No, you're right. That was a mistake on my part. The .map() should be called on the array, as calebegg pointed out.


You should put the json parsing inside the Promise.all.


I will admit that I hadn't realized async worked on Promises. This is definitely a very clean solution, though not one that can be seemingly achieved without Promise.all.

There is, of course, no library needed to use pure promises for the same functionality, though.

My point is that all of this is possible without async/await, which to me are abstractions that cloud the landscape, obscuring the real evented nature of JavaScript that makes async programming so easy, IMO.


I'd say a bad programmer can produce shit regardless of the tools he is given. What async/await does is make complex async code much easier to read and maintain. Of course, you have all the expressive power of callbacks and promises at your disposal too. That's one of the great things about it. It's implemented on top of these constructs and they remain accessible.


The short answer is that async/await is to promises what loops were to if/goto. It's "just" syntactic sugar, but once you start using it, the code is much easier to write, and its structure is more obvious at a glance. It does not change the way you reason about asynchrony, because the underlying model is still the same. And you still need to understand that underneath, it's all promises and continuations. But there are very tangible benefits to be had from this kind of syntactic sugar.

This is speaking from personal experience, albeit not in JS, but rather .NET. However, the story there is similar - .NET used callback-based continuations first (the Begin... pattern), then added promises (Tasks) in .NET 4.0, then finally async/await syntactic sugar in C# 5. We introduced tasks to our codebase pretty soon, but had to avoid await for a while because of the requirement for the code to be compilable with C# 4. When we finally dropped that requirement a couple of years later, and started moving to await, code became much simpler and cleaner as a result, and new code is also much easier to write, with fewer silly mistakes (like "I forgot to check for errors").


> The "tons of libraries" argument doesn't make tons of sense either.

It absolutely does make sense. How many "class builders" have been written in the JS community since ES6 got classes? That's right. Now you don't have to rely on co/thunk and whatnot to write readable async logic. Promises are still callbacks wrapped in objects; now the consumer of a function that returns a promise doesn't have to deal directly with the promise anymore.

This feature is something JS should have had a long time ago, no question.

> This feels like a step toward added confusion rather than language unity.

Things like prototypal inheritance are the biggest source of confusion, along with "this" behavior.

> beneath native abstractions

The correct terminology here would be syntactic sugar.

Look, you might not like this, but what is better: community fragmentation with transpilers and whatnot, or a single language that answers most of the needs of the community? Libraries that handle complex abstractions, or documented syntax that lets you get rid of those poorly documented and poorly maintained libraries? I don't want to use generators to mimic coroutines and depend on co/thunkify to write readable async code. I don't want to use 3rd-party language X or Z to get the syntax I want; I want everybody to use the same language, without obscure patterns.


The thing about using async/await is that you write less code, it's much simpler, and it makes it easy to try/catch errors.

The thing I don't like about promises is that you need to write a lot of lines that you could avoid with async/await, i.e.:

  new Promise(function(success, failure){
    db.fetch('sql...', function(err, data){
      if (err) failure(err)
      else success(data)
    })
  })
VS

  var data = await db.fetch('sql...')
Also if you concatenate with then

  promise.then(function(done, args){
      try {
        db.fetch('sql...', done)
      } catch(e) {}
    })
    .then(function(done, args){
      try {
        db.fetch('sql2... {args}', done)
      } catch(e) {}
    })
    .then....
VS

  try {
    var data1 = await db.fetch('sql1...')
    var data2 = await db.fetch('sql2... {data1}')
  } catch(e) {}
BTW, it's just pseudocode, but you get the main idea.


To correct this a little, the proper way to catch with promises is .catch, not with a try-catch inside a .then - that is an antipattern. The promise examples as written are not proper promise usage.

I'm also among the confused people wondering why async/await is requested by a good portion of the JS community: wrapping the block within the function in a try-catch seems like a bad idea, a brute-force mechanism. IMO this is language bloat; it does not really solve problems we have, while introducing a more expensive vector for managing flow (try-catches).

One missed point about .catch with promises is that it allows you to branch flow discretely. It is not a perfect solution for branching (it is awkward when branching on more than a binary outcome), but with await one might end up writing repetitive code for certain flows where common calls are expected downstream in async data-fetching flows.


>To correct this a little, the proper way to catch with promises is .catch, not with a try-catch inside a .then - that is an antipattern.

Depends whether you can survive and continue from an exception inside the .then to the next step or not.


Promises catch exceptions and automatically reject: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... . I've never seen a promise not do this - do you have an example where this is not the case?


As I wrote, it depends on whether you can survive an exception inside the .then and continue to the next step or not; it's not about how the promise would handle an exception, but whether you want it to handle it, or you want to survive and handle it yourself.

e.g, naive example:

  .then(() => {
     var foo;
     try {
          foo = getFoo();
     } catch (err) {
          foo = 10; //some default value
     }
     return foo;
  }).then( (foo) => 
etc.

So you might not want to reject automatically inside then if getFoo raises, but recover and return a default value.


That example could be written:

    .then(getFoo)
    .catch(err => 10) //some default value
    .then( (foo) =>


cool, learned something new


How does one add timeouts to fetch? E.g. is db.fetch('sql', done, 1000) possible and how do you know the thing you're trying to fetch has timed out?


>The Q promises API, to me, is the right way to reason about asynchrony. Once you understand closures and first class functions, so much about complex asynchronous flows (e.g. multiple concurrent calls via Q.all, multiple "returns" via callback arguments) become so simple.

Being simple doesn't make them optimal.

Promises are a crutch for the lack of a built-in language feature, and async/await is that feature.

They add extra visual clutter, they can eat exceptions [1], they break the intuitive flow of code, etc.

[1] Yes, only if they are used badly. That's kind of the point. C is an excellent language too if nobody ever uses it badly. The thing is, people do, and a feature/language that doesn't need a corner case or mental model to keep in mind is, in that regard, better than one that does.


I think anyone with a fair amount of experience in complex asynchronous programs can agree with you. But most people don't have that experience and things look very different to them. They don't understand yet why their sequential database queries do not matter, they don't understand that it's not where the complexity is and that seeking familiarity and "prettifying" simple code with syntactic sugar is not going to make it simpler or easier to understand and maintain. They will have to learn it the hard way.


I agree with you; I feel this way about async/await and felt similarly about ES6 classes and various other features. Perhaps one advantage is that it makes the language "look" more appealing to beginners, almost all of whom are familiar with object-oriented and imperative programming.

In the end, as you pointed out, these constructs may work against developers in large/concurrent codebases, and they undo the simplicity of JavaScript, one of its original strengths. On the other hand, we can't really say this wasn't coming, JavaScript being a single target language with many stakeholders around it.

Maybe what's needed is to stop teaching object-oriented/imperative programming as the main paradigm. But well before that happens, hopefully WebAssembly will create an evolutionary market for languages, versus a language by committee.


Beginners will have to learn more of the language, which makes it less useful for beginners. And the larger the surface area of the syntax, the larger the amount that everyone on the team must know and the probability someone will screw something up.

C++0x started adding the kitchen sink and that's when C++ jumped the shark. JS seems to be headed in the same direction...


I use plain old promises by default, but the first time I have to think about which promise results are in scope within nested then() calls, I rewrite it with async/await to eliminate that cognitive load entirely.


You'd like Tcl and various Perl libraries. Mojo (Perl event loop among other things) in particular has a very nice, straightforward event flow quite reminiscent of Q.

async/await looks nice but doesn't scale. As soon as you have to do something other than wait for a callback it falls apart.


>async/await looks nice but doesn't scale. As soon as you have to do something other than wait for a callback it falls apart.

Waiting for a callback is what you do 90% of time. For the rest, you can always combine async/wait with Promises...


Maybe if the only thing you do is Ajax. Writing a server, a web crawler, a P2P protocol, heterogeneous OpenCL computations, etc, and you're going to need something a little more heavy-duty.


I'm writing a p2p protocol and you are wrong. Async/await is compatible with promises and the tooling around them, but for 90% of cases it is much more understandable.


What exactly would you need that you can't get with async/await or async/await+Promises?


Kind of OT, but can anyone share their experience using Babel's async/await in production instead of regular Promises?

I'd love to hear about people who have used it in large and complex projects, from a debugging standpoint.

As of now, using Bluebird (with its source in a different, blackboxed script), it is possible to follow the code execution through the event loop with async debugging, in a very elegant and enjoyable fashion.

I find async/await much more appealing when coding, but I'm worried about quality of life when hardcore debugging, as in my current project it can cost me hours at a time when something completely fringe happens.


I have some medium/large projects using async-await.

There are 2 ways to transpile async-await.

a) Default, with regenerator, transpiles it to ES5

b) If your env supports generators (like Node, or newer browsers), you can use async-to-generator plugin.

Regenerator gives you unreadable code, but it will run everywhere. async-to-generator gives you (relatively) readable code.

Source maps support has improved a lot, and you can choose to see only your original code while debugging. You'd be setting breakpoints on your code instead of transpiled code. So you should be fine whether you're using regenerator or async-to-generator. There might be corner cases (very rare) depending on your env; in which case if you're using regenerator you might need to switch to async-to-generator to find the bug.

Source maps work well with node-inspector. If you'd like to see better stack traces in say "mocha" tests (or other test framework), use https://github.com/evanw/node-source-map-support.

Overall, the developer experience is now pretty good. Thanks to all the work that's gone into the tooling.


If you want a good solution to those types of problems in dynamic languages like ES6, then I think short functions and unit/functional tests are the best approach. Without them it is always going to be a bit challenging.

Having said that, I have not seen worse stack traces with async/await. I think they are similar.


It has been in Microsoft Edge/Chakra for a while. But I couldn't make it work with Babel + webpack 2. I still needed Babel for static properties and React. Either webpack's parsing couldn't recognize async/await, or webpack executes modules on Node.js, which didn't support async/await at the time. So bundling was failing.

I wonder if there are any Babel/webpack gurus out there who can make it work?

Oh, btw, I recommend Windows users try Microsoft Edge for debugging and runtime inspection, it is so slick :)


> [esnext] prototype runtime implementation for async functions

Well, it claims to be a prototype. Can anyone from the team comment?

Quite exciting in any case!


Promises are like cancer, and async/await is just treating the symptoms.

  // Callback
  dataCollection.find('somethingSpecific', function getIds(dataArray) {
    var ids = dataArray.map(item => item.id);
    display(ids)
  });


  // Promise
  var dataArray = dataCollection.find('somethingSpecific');
  var ids = pmap(dataArray, function(item) { // Cancer cell 
    return item.id;
  });
  pdisplay(ids); // Cancer cell 
  // The cancer grows ...
  function pmap (dataPromise, fn) {
    return dataPromise.then(
    function(data) {
      return map(data, fn);
    });
  }
  // The cancer grows ...
  function pdisplay(dataPromise) {
    dataPromise.then(function(data) {
      display(data);
    },function(err) {
      display(err);
    });
  }


In the meantime there's the co library, which provides the next best thing using yield.
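
For example, the fetch snippet from upthread would look roughly like this with co (assuming a fetch implementation is available in your environment):

    const co = require('co');

    co(function* () {
      const res = yield fetch('https://api.github.com/orgs/facebook');
      const json = yield res.json();
      console.log(json);
    }).catch(err => console.error(err));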


And a swifter way to handle legacy callbacks (using thunks). Co is _great_ and works today (even in browsers).


I'm definitely rooting for the feature to be included in the spec as soon as possible, but I'm a little wary when features are added to the engine before being standardized. Object.observe, anyone?


The only reason async/await hasn't been included in the standard is that there were not enough implementations.


Things should be clearer now that the "Stage Process" [1] is more transparent. Object.observe made it only as far as Stage 2 ("Draft"), but async/await has already pushed on to Stage 3 ("Candidate"), making it much more likely it will hit Stage 4 ("Final") just as soon as a plurality of web browsers support it. (...and as soon as it hits Stage 4 it will be included in that years final spec, under the new annual review process.)

[1] https://tc39.github.io/process-document/


Man, I used to love the times when try/catch was used for exceptions only, and exceptions meant the program was in a bad state. I used to think that when you see a throw, something really bad is going on, not just a simple Ajax failure.

Don't know why people love async/await so much. At the end of the day, all this (in Node land, for instance) will just be a function call into libuv; that will never change, because the pattern is really good. Why overcomplicate it?


I think my problem with this approach is the way it throws away the functional programming paradigm JS has been sharpening over the last few years. When I see await, I'm reminded of Java or .NET, languages that didn't traditionally have functions as first-class citizens.

OTOH, I can't complain about removing dependencies like "when" or "bluebird", and it'll be nice not to have to simulate promise returns in testing.


I think it's nice to have both options. I prefer the functional style of chaining promises for the most part, but there are definitely a few times when await is easier to use IMO (awaiting inside of a loop, for example).


I don't see how it throws away the functional programming paradigm. This is basically Haskell's do-notation applied to JS Promises.


Super excited to start seeing async/await in some of the browsers. Async/await makes certain function decorator patterns even more useful, e.g. http://innolitics.com/10x/javascript-decorators-for-promise-...


I like how it rhymes.


From all my research, I feel like Promises end up making better code than async/await. Am I the only one who thinks that?

Like, what's the equivalent of Promise.all with async/await? And how do you do stuff synchronously after kicking off an async process?


I'm not sure you fully grok async/await then. Thing is, using async/await means using promises. You can't use async/await without promises. An async function returns a promise. Always. (in a way, `async` can be seen as something of a type annotation).

If you want to use async/await with something that's asynchronous but not Promise based, you'll first need to convert it into a promise before you can async/await it. This is actually very elegant in my opinion: instead of proposing yet another way to do asynchronicity in JS, async/await fully embraces Promises, which you already know.

Of course, this means that async/await also has all of Promises' downsides. For example, you can only resolve a promise once, so neither Promises nor async/await make dealing with streams of asynchronous events any easier.


I don't see that as a downside. Callbacks make sense for the pipelining portion of async streams. I usually wrap things up such that I set up my callback/event-based stream pipeline, and then have a promise that gets resolved on completion/end. That way your stream code is 100% normal callbacks, and you still have promises/async/await for overall flow control.
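
A sketch of that pattern (a Node-style stream; the transform is an assumed name):

    function consumeStream(stream, transform) {
      return new Promise((resolve, reject) => {
        const results = [];
        stream.on('data', chunk => results.push(transform(chunk)));  // plain callbacks
        stream.on('end', () => resolve(results));                    // settle on completion
        stream.on('error', reject);
      });
    }
    // elsewhere: const results = await consumeStream(stream, parse);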


I haven't written any async/await code yet, but since it's just sugar over promises, wouldn't you just do "await Promise.all(...)"?


Yup. Promise.all() returns a new Promise which settles after every Promise in the provided array has settled (or rejects as soon as one of the provided Promises rejects, but that's not really relevant to the question).


What about this Node.js async library,

https://github.com/caolan/async

It may be a replacement until async/await is implemented in V8.


It's not a replacement at all, and has almost nothing in common with what is discussed here (except the name).


That's the most common node flow control library, and it's awesome, but it's still built on callbacks. With ES2016 you can have an async function return a value you can assign to with =.

Or, short ver: less indentation.
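
E.g. (readFileAsync and handleError standing in for a promisified fs.readFile and whatever error handling you do):

  // callback style
  fs.readFile('a.txt', 'utf8', (err, data) => {
    if (err) return handleError(err);
    console.log(data);
  });

  // async/await style, inside an async function
  const data = await readFileAsync('a.txt', 'utf8');
  console.log(data);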


what's the point of explicitly declaring functions "async", and then explicitly saying "await"? IMHO all this could be done automatically by the language.


Because you don't always have to await a function, and it isn't always possible for the runtime or language to decide whether to "await" or not.

  /*
    The following code may or may not use an await.
    Not await-ing would be better user experience.
  */

  async function sendEmails(id) { ... }
  
  async function signUp(userData) {
    const user = await db.saveUser(userData);
    sendEmails(user.id); // <-- Not awaiting send to complete
    return "Congrats. Signup complete!"
  }


  /*
    On the other hand this code needs an await:
  */

  async function updateGlobalState(obj) { ... }

  async function doSomething(data) {
    await updateGlobalState(data); //await needed
    return global.x;
  }


Sometimes it's really useful to kick off an async function as well, do some other work in the meantime that doesn't require that async function to be done, then await the promise from the async method later on when you do need it to be done.
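
A sketch of that pattern (fetchUser, doOtherWork, and render are just placeholders):

  async function show(id) {
    const userPromise = fetchUser(id); // kicked off, not awaited yet
    doOtherWork();                     // runs while the fetch is in flight
    const user = await userPromise;    // block only when the result is needed
    render(user);
  }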


is the non-awaiting one guaranteed to finish (assuming no errors)?


It's guaranteed to return as soon as the thread of execution finishes the immediately-pending callbacks ahead of it (calling an async function is basically a synchronous call that hands back a promise). It's "finished" as soon as it has finished initiating the send, since nothing awaits the result.

(The exact point where this gets resolved, relative to other pending callbacks/resolutions, is a very wonky detail involving constructs not exposed at the language level, which varies from engine to engine in spite of what the standard says: https://jakearchibald.com/2015/tasks-microtasks-queues-and-s...)

Whenever the call resolves, even if the send encounters errors, neither this function nor its caller is going to know about it either way. That's why this kind of promise-abandonment is pretty bad design - I wouldn't be surprised if there's already a draft in the works for some kind of "use strict" option that warns / crashes when synchronous code finishes with Promises unreferenced (or at least something in tooling / profiling to trace 'promise leaks').
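
In the meantime, both environments already expose hooks for rejections nothing handled, which at least surfaces the failure case of abandoned promises:

  // Node
  process.on('unhandledRejection', reason => {
    console.error('unhandled rejection:', reason);
  });

  // browsers (where supported)
  window.addEventListener('unhandledrejection', event => {
    console.error('unhandled rejection:', event.reason);
  });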


Interesting!


> IMHO all this could be done automatically by the language.

Not backwards-compatibly. You'd need to make every function async and every value a future/promise (semantically at least, then optimise most cases back to sync somehow, which incidentally means possible behavioural changes based on what the optimiser manages or doesn't manage to syncify).


> what's the point of explicitly declaring functions "async", and then explicitly saying "await"? IMHO all this could be done automatically by the language.

Not without Fibers. JavaScript doesn't have Fibers, though there is an implementation for Node.js.

What it changes is that you don't have to rely on yet another library anymore to get the behavior people mostly use yield for. I'm talking about co/thunk. While it doesn't solve the red/blue function issue [1], it makes writing async code a bit less tedious.

At the end of the day it will force everybody to return promises from async functions, rather than requiring callbacks, which is a good thing.

[1] : http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...


Async/await is syntax sugar for doing things asynchronously. Whenever you have an async call in your code, you have to consider how the state of the program can change while it's pending. Also, there are many places where you need a result synchronously, like when you're in the event handler for a mouse click and need to determine whether the event should be canceled. If you had an asynchronous call there, the call would finish only after the event has finished dispatching!
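
For example (link and validate being hypothetical here): by the time the awaited call resolves, the click event has already finished dispatching, so preventDefault() is a no-op:

  link.addEventListener('click', async event => {
    const ok = await validate();     // dispatch continues while we wait
    if (!ok) event.preventDefault(); // too late: the default action already ran
  });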


I'm sure there's good reason, but I have the same question. I also don't understand the need to use promises in the construct. I'd love some clarity here.


async/await is just sugar around promises. when you await something, that something must be a promise.

coroutines are used to make your code appear to execute without a need for a callback, and in order to create a coroutine one must wrap the function doing the awaiting in a generator.

now, I am assuming (perhaps incorrectly) that the async keyword marks the function for wrapping, and await lets you know where the yields should go. or something like that :)

of course, now that it's all native, I have no idea any more
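
for the curious, a minimal sketch of how the pre-native libraries drove generators (in the spirit of co, not its actual implementation):

  // drive a generator that yields promises
  function run(genFn) {
    const gen = genFn();
    return new Promise((resolve, reject) => {
      function step(advance) {
        let result;
        try {
          result = advance();  // gen.next(...) or gen.throw(...)
        } catch (err) {
          return reject(err);  // generator threw and didn't catch
        }
        if (result.done) return resolve(result.value);
        Promise.resolve(result.value).then(
          value => step(() => gen.next(value)),
          err => step(() => gen.throw(err))
        );
      }
      step(() => gen.next());
    });
  }

  // usage: `yield` plays the role of `await`
  run(function* () {
    const res = yield fetch('https://api.github.com/orgs/facebook');
    console.log(yield res.json());
  });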


how would that work?


I can't see how wrapping everything in a promise and a try/catch, plus adding async/await, is any easier than a callback.


You can write async code to look like synchronous code so you can follow the flow. You can also do things like await in for loops, which is often the preferred flow. You can use several async calls in order without making a pyramid, and with much less code than promises or chaining callbacks across separate functions.

In short, it's cleaner code.


Try/catch is only useful if you want to handle promise failure; it's perfectly fine not to use it if you know there are no expected failures (e.g., use try/catch for network operations, not for setTimeout).


Exactly! When you account for nested try/catches, you see no significant improvement.


Why would you nest try/catches? At worst you get try/catch parades:

    try {
      await thing1()
    } catch (e) {
      console.log(e)
    }
    try {
      await thing2()
    } catch (e) {
      console.log(e)
    }
    // ... and so forth ...
That's still nothing like the pyramids you get in callback world.


That won't work, because if thing1() throws you usually don't want to proceed to thing2().

Take this for example:

     try {
         const response = await fetch('./api.json');
         try {
             const json = await response.json();
             console.log('YAY! JSON!', json);
         } catch (jsonParseError) {
             alert('damn it!');
         }
     } catch (fetchError) {
         console.warn('this is not cool');
     }


Okay, that's a fair counter-argument. My gut feeling is that's possibly one catch too many, and as much as possible I'd want to try to merge the two catches / unify the error response logic.

Another idea is that you could reformat to a "parade" try/catch by adding returns to the catch:

    let response // declared outside so both blocks can see it
    try {
        response = await fetch('./api.json')
    } catch (fetchError) {
        console.warn('this is not cool')
        return // Exit
    }
    try {
        const json = await response.json();
        console.log('YAY! JSON!', json);
    } catch (jsonParseError) {
        alert('damn it!');
    }
Obviously you'd need to move any code you'd want to run regardless of error up into the parent function or out into a wrapper function.


"Pyramids" are not that useful for complex flows, just as synchronous-looking code, but for something as simple as your example they are equivalent in complexity, maybe even less complex, because they don't have additional implicit behavior introduced by async/await. Still, worlds better, than CSP with channels.


async/await ends up being the less complex option; my "simple" example is an extreme that happens, but rarely. When was the last time you saw that in synchronous code? It happens, certainly, but the synchronous default is to handle only what you can handle and pass everything else back up the chain. The asynchronous default with async/await is the same, and you can rely on default error propagation in the natural default case:

    async function doTheThing() {
      await thing1()
      await thing2()
    }
That just works: if there's an error in thing1(), doTheThing() stops at that line and returns the rejected promise to whatever called it. The callback world is clearly the more complex here, where all error handling has to be properly wired and there is no default handling. The node callback equivalent to the above is:

    function doTheThing(callback) {
      thing1(function (error, value) {
        if (error) callback(error, null)
        thing2(function (error2, value2) {
          if (error2) callback(error2, null)
          callback(null, true)
        })
      })
    }
You can't forget either if (error) check or else errors will just silently be eaten/ignored.

The "implicit behavior" introduced by async/await are essentially the same things you are used to in code without async/away, such as the way try { } catch { } and throw naturally work in synchronous. That alone should be considered a simplifying improvement.


First of all, exceptions are not a great way to handle errors, because they are implicit. There is no special keyword, like "await", to permit a function to throw an exception; you have to guess the implicit behavior of each function. And you will have functions that never throw exceptions, functions that do, and functions that lie about whether they do or don't. You will have to remember that at all times and be very careful to always use try/catch if unsure, and so on. It's a big cognitive load to carry. Most of the time, the easiest and the only reasonable way to deal with errors is to be explicit about them.

So, the equivalent to that code with callbacks is the one from your previous example, not the simplified one that doesn't handle errors. Not forgetting "if (error)" in this case is exactly the same as not forgetting "catch (error)" or "if (error)" if you handle errors properly in the first place. But these are just patterns, and with enough consistency they don't introduce much cognitive load. What does is implicit behavior, and you don't have that with callbacks in your example, but you do with await. It's not as bad as with threads, but still a little bit worse than with callbacks.


I'm sorry you don't like exceptions. It's a pattern that's pretty well understood at this point, has been in the JS language for quite some time now and other languages for even longer.

«So, the equivalent to that code with callbacks is the one from your previous example and not the simplified one that doesn't handle errors.»

No, it isn't. It really isn't. The callback version isn't doing any sort of handling of errors, it's just laddering them, passing them back up the chain. The async/await version is doing the same automatically, wiring together for you the error handling to pass errors back to calling functions.

Even if you did want to handle every possible error/exception that could be thrown in doTheThing(), you don't really have to go all the way to the "parade" version, as only one try/catch should be sufficient in doTheThing().

The issue here is that where you see error handling as a cognitive load, general best practice has mostly leaned toward the view that exceptions are for actual exceptional cases, and that catching exceptions should be reserved for cases where the user can actually fix the exceptional situation (rare), with everything else logged somewhere appropriate by a global handler.

If you are unsure if you should handle an exception, don't handle the exception. Let the calling function catch, or let it continue bubbling up to whatever global exception handlers you want to set, or even just leave them for the browser's Unhandled Exception Handler and Unhandled Rejected Promise Handler to spit out debug information in its dev tools console. Don't bother remembering anything, just let your tools do their job.

The difference between catch (error) and if (error) in the callback example is that if you "forget" catch (error), it bubbles up to your browser's handlers, but if you forget if (error) it doesn't stop execution, it doesn't bubble up to your global toolkits and exception handlers and it doesn't bubble up into your dev tools debuggers.

(One thing I realized in writing this: due to my rustiness in callback coding, I forgot that they should be if (error) return callback(error, null) to properly stop execution, which you want in the default case of not handling the error but passing it back up the chain. A small mistake that could cause a debugging headache, and it adds to my point that the little things that are easy to forget in the callback world can have a major impact on debugging...)


Don't forget that for every await, you - or preferably someone else - have to write the Promise boilerplate!


promise-ring [1] is all you need for Node style callbacks. (Bluebird and a few other Promise augmenting libraries also have similar tools, but promise-ring is really all you need.)

Eventually, too, Node and Electron and Cordova and everything else will catch up to the fact that Promises are native in ES2015, and the APIs will migrate.

But the thing is, too, async functions also return promises, so not every await requires "Promise boilerplate": you will presumably be calling other async functions that produce their own promises.

[1] https://github.com/DavidAnson/promise-ring
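
And if you'd rather not pull in a dependency at all, the boilerplate amounts to a one-time helper; something like this hand-rolled sketch covers the common (err, result) convention:

  // generic wrapper for Node-style (err, result) callbacks
  function promisify(fn) {
    return (...args) => new Promise((resolve, reject) => {
      fn(...args, (err, result) => err ? reject(err) : resolve(result));
    });
  }

  // const readFile = promisify(fs.readFile);
  // const data = await readFile('./package.json', 'utf8');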


The one extra diff I would really like to see in that list is some documentation. The V8 docs have always been too sparse.


So what does this mean in terms of being implemented in Node.js? How soon can we expect that to happen?


This is so good. Great days for JS. This is going to save us from a lot of headaches.


Realistically does this mean we will see async/await in node-v7.0?


I hope the node APIs are changed/extended to return promises eventually so I don't have to keep on using promise wrappers for all of them.


Is there a reason you always wrap them? Even if it's awkward I tend to leave most platform APIs alone and treat them as special cases if I'm doing something, say, promises or messages.


Promises are much easier to compose. Promise objects can be passed around and reused. (Multiple things can wait on the result of the same promise.) Code is much easier to read with flat .then() and .catch() chains, versus the "pyramid of doom" callbacks can sometimes create. Code is much, much easier to read with Promises when you can use async/await, and getting everything wrapped in promises now means you can use async/await that much sooner.
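
One concrete example of the reuse point (the consumers here are hypothetical): a single in-flight promise can be shared, so the second .then() doesn't trigger a second request:

  const configPromise = fetch('/config.json').then(r => r.json());

  // both consumers share the same request and result
  configPromise.then(cfg => initUI(cfg));
  configPromise.then(cfg => initLogging(cfg));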


I get why people want to use Promises in general, but using them to simply wrap a standard node / browser call feels dirty to me, because it changes what you expect the call to return (e.g. a promise instead of whatever it normally returns).

I dunno I just don't like to give the impression I'm changing anything about how I'm accessing / using the standard library but I guess this is a bit subjective.

> versus sometimes the "pyramid of doom" callbacks can create

If you structure the code well you never run into this (callback soup is very overblown; if you find yourself in that position then someone screwed something up). But I get promises and async / await are nice :)


Probably in an upcoming Node 6 release -- they do update to the latest V8 releases IIRC.


That seems like a mistake to me. That would mean code that runs perfectly under, say, 6.5 would not run on 6.1. It should most certainly be a 7.x release.


This is intentional - it's a feature, not a breaking change, although the docs do identify a greater possibility of regressions until LTS. From https://nodejs.org/en/blog/release/v6.0.0/

> The general rule for deciding which version of Node.js to use is:

> ...

> - Upgrade to Node.js v6 if you have the ability to upgrade versions quickly and want to play with the latest features as they arrive.

> Note that while v6 will eventually transition into LTS, until it does, we will still be actively landing new features (semver-minor). This means that there is an increased chance for regressions to be introduced..


For stability one should be using the LTS release.

And why would one downgrade from 6.5 to 6.1? If they code for 6.1 and want to not use LTS and to keep 6.1 installations around, then they should not use 6.5 features.


It was simply a hypothetical but surely you can imagine a scenario where you either have to downgrade (perhaps a regression happens) or you simply have to target multiple versions. I've run into both situations multiple times in my career.

Regardless, yes, you should use LTS, but that's no excuse for going against semantic versioning. It should be labeled an alpha or beta if they don't want to change major versions for breaking feature additions.


>It was simply a hypothetical but surely you can imagine a scenario where you either have to downgrade (perhaps a regression happens)

Sure, but if you might have to downgrade, then simply don't write code that depends on 6.5 features. 6.5 can still run 6.1 code (feature wise), so you'll be alright.

The only problem would be for people wanting to a) run 6.5, b) take advantage of newer, 6.5-only features, and THEN c) downgrade to 6.1


> Sure, but if you might have to downgrade, then simply don't write code that depends on 6.5 features.

Ah, but see, that's the rub. Hindsight is always 20/20. I've actually run into similar issues in the past. Ultimately, if they're consistent with versions, then it's not the biggest deal (because if you have an issue with 6.5, there's probably a good chance you have an issue with 6.1 as well), but with semantic versioning I always expect code written against a major version to work across all of its minor versions, unless it relied on some weird bug side effect.

It just seems really inconsistent to me.


Really looking forward to this!



