> The optimizing compiler did not generate ideal code for array destructuring. For example, swapping variables using [a, b] = [b, a] used to be twice as slow as const tmp = a; a = b; b = tmp. Once we unblocked escape analysis to eliminate all temporary allocation, array destructuring with a temporary array is as fast as a sequence of assignments.
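To make the quoted point concrete, here is a rough sketch of what the destructuring swap conceptually expands to; the variable names are illustrative and this is not the engine's actual internal representation:

```js
// Conceptually, the destructuring swap builds a temporary array and reads
// back out of it. Escape analysis can prove that array never escapes, so
// the optimizer is now able to eliminate the allocation entirely.
let a = 1, b = 2;

// [a, b] = [b, a];  is roughly equivalent to:
const tmpArray = [b, a]; // temporary allocation (now optimized away)
a = tmpArray[0];
b = tmpArray[1];
```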
I enjoy seeing updates like this to learn more about how the engine works, but: these changes are exactly the reason you _don't_ want to design your code around specific micro-optimizations, particularly when it comes to JS. They're at parity now and it is quite possible that 6 or 12 or 24 weeks from now, the cleaner destructuring approach will actually be faster than the tmp-var approach. Remember this the next time someone suggests in code review that you should design your code to meet the whims of the JS engine.
I'd have to be running that billions of times a day before I'd start to care about the extra nanoseconds. Clean-looking code makes me feel better than having my servers sit at 67.3% CPU instead of 67.4% CPU.
Yeah, even back in the days when serious applications on 16-bit platforms were written in Assembly, I hardly felt the need to disable bounds checking in Turbo Pascal, except in cases where the solution was to rewrite the function in Assembly anyway.
There’s a serious point: cleaner code makes high-level optimizations easier, and those are usually far bigger wins. I’ve seen more than a few cases where Python, Perl, etc. beat hand-tuned C because the C used so many clever tricks that nobody wanted to change it.
Yeah, wasn't criticizing you. I think one of the signs of experience is when people go from just laughing at that to laughing and thinking more deeply about it.
Python's readability helps a lot: I've seen a ton of code in statically-typed languages where a bug was obscured by syntactic boilerplate and nobody noticed that, say, they were calling the correctly type-checked function with the wrong variable, casting it incorrectly, etc. If the language or popular frameworks encourage boilerplate, that's especially effective at getting people to skim over important details.
Newer languages with solid type inference help a lot for reducing that cognitive load, so this isn't saying that one side of the static/dynamic-typing debate is wrong but rather that it's an important design factor for programmer productivity.
This is what I meant. I suspect the 0.01 probably encapsulates ALL these little sub-optimisations I decide on in a code base, rather than just this one variable swap. CPUs are a lot faster than most people realise. It's usually other things that slow down a program, not whether the programmer uses array.forEach or a basic for loop.
Disclaimer: I do tend to use for loops a lot since they can be quite readable.
> Remember this the next time someone suggests in code review that you should design your code to meet the whims of the JS engine.
I think the best approach, if you have the discipline to follow it, is to normally code for ease of understanding ("clean code"), and occasionally fall back to performance hacks in performance-critical code, but don't stop there. The following will likely make you much more confident that your choices were correct, that you can easily reverse them at little cost when needed, and that you'll know (or be able to easily check) when that time has come.
1. Mark each performance hack with some identifier in a comment, e.g. /* PERFHACK #4632 */ or /* PERFHACK #4632 - destructuring is slow */.
2. Maintain a test in the same repo which confirms this assertion by actually benchmarking the relative speed of each approach, expecting the hack to be faster by some margin (a minimal sketch follows this list). Each test should contain the PERFHACK identifier and optionally a description.
3. Occasionally (or along with regular testing) run all these PERFHACK tests, looking for failures, which would indicate the hack is no longer faster than the simple case by the required margin.
4. To fix a failing test, reverse the test assertion, then search the codebase for instances of that PERFHACK identifier and fix those as well.
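As a minimal sketch of what such a PERFHACK benchmark test could look like, assuming plain Node.js; the PERFHACK number, iteration count, and 20% margin are made up for the example (and on a current V8, per the article, this particular assertion would likely fail now, which is exactly the signal you want):

```js
// Minimal sketch of a PERFHACK regression benchmark (plain Node.js).
// The PERFHACK number, iteration count, and 20% margin are illustrative.
const { performance } = require('node:perf_hooks');

const ITERATIONS = 5_000_000;

// "Clean" version: destructuring swap.
function swapWithDestructuring() {
  let a = 1, b = 2, sink = 0;
  for (let i = 0; i < ITERATIONS; i++) {
    [a, b] = [b, a];
    sink += a; // keep the loop body observable so it isn't optimized away
  }
  return sink;
}

// PERFHACK #4632: manual swap with a temporary variable.
function swapWithTmp() {
  let a = 1, b = 2, sink = 0;
  for (let i = 0; i < ITERATIONS; i++) {
    const tmp = a; a = b; b = tmp;
    sink += a;
  }
  return sink;
}

function secondsFor(fn) {
  const start = performance.now();
  fn();
  return (performance.now() - start) / 1000;
}

// Warm up both variants so the optimizing compiler gets a chance to kick in.
secondsFor(swapWithDestructuring);
secondsFor(swapWithTmp);

const normalSeconds = secondsFor(swapWithDestructuring);
const perfhackSeconds = secondsFor(swapWithTmp);

// Assert the hack is still at least 20% faster than the clean version.
if (!(normalSeconds * 0.8 > perfhackSeconds)) {
  throw new Error(
    `PERFHACK #4632 no longer pays off: clean=${normalSeconds.toFixed(4)}s, hack=${perfhackSeconds.toFixed(4)}s`
  );
}
console.log('PERFHACK #4632 still holds');
```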
That sounds like a pretty fragile test - could you see it failing if I plug my laptop into AC power while the test is running? (Intel SpeedStep or whatever it's called will take my laptop from 2 GHz to 3 GHz when I do that).
> could you see it failing if I plug my laptop into AC power while the test is running?
That should at most cause a single test case failure, since the speedup would only affect one leg of the benchmark.
Since these are not completely automated (you might want to have them as a set of allowed-to-fail tests, or tests you have to explicitly request), you can easily just re-run the tests to confirm the prior output wasn't a fluke. You need something like that anyway because you're essentially testing benchmark outputs, and it's not like it's easy to set up perfectly consistent benchmarks in the first place.
The margin by which you want your PERFHACK to be faster than the default code is also your margin of error. Theoretically you would test something like NORMAL_SECONDS*0.8 > PERFHACK_SECONDS in your test to ensure you're still getting a 20% or more speedup. If some changes cause it to drop to 15%, you then get to assess whether that gain is still worth the hack (and perhaps change the test to assert a 10% gain), or whether you want to clean up your code (or at least schedule it).
The point is that you've put a system in place that allows you to have a clearer picture of whether your prior decisions with regard to performance hacks still make sense.
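To make the margin-of-error idea concrete, here is a hypothetical sketch; the constant name, function name, and threshold values are made up for illustration:

```js
// Hypothetical sketch: the required speedup doubles as the margin of error.
// Lowering REQUIRED_SPEEDUP from 0.20 to 0.10 is the "keep the hack, relax
// the assertion" choice described above.
const REQUIRED_SPEEDUP = 0.20; // hack must be at least 20% faster

function perfhackStillWorthIt(normalSeconds, perfhackSeconds) {
  // Equivalent to NORMAL_SECONDS * 0.8 > PERFHACK_SECONDS when the margin is 20%.
  return normalSeconds * (1 - REQUIRED_SPEEDUP) > perfhackSeconds;
}

console.log(perfhackStillWorthIt(1.00, 0.78)); // true:  22% faster, hack still earns its keep
console.log(perfhackStillWorthIt(1.00, 0.85)); // false: only 15% faster, time to reassess
```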
Sure, the tmp-var approach could be slower in some weird edge cases, but probably only slightly, while the array-swapping approach could be much slower if the JS engine doesn't implement this optimization at all, or if this code pattern is run in the interpreter or baseline compiler.
I'm not arguing that you should do such micro-optimizations, but generally, writing "simpler" code should also allow less sophisticated compilers to generate fast code.
Or there's a third option, which is that the difference in performance doesn't matter for your application, which is the case the vast majority of the time. Destructuring an array is not going to be a bottleneck for almost any piece of code.