Hacker News new | past | comments | ask | show | jobs | submit login

> The optimizing compiler did not generate ideal code for array destructuring. For example, swapping variables using [a, b] = [b, a] used to be twice as slow as const tmp = a; a = b; b = tmp. Once we unblocked escape analysis to eliminate all temporary allocation, array destructuring with a temporary array is as fast as a sequence of assignments.
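For reference, the two swap patterns being compared look like this (wrapped in throwaway functions here purely for illustration):

```javascript
// Destructuring swap: allocates a temporary array, which escape
// analysis in the optimizing compiler can now eliminate entirely.
function swapDestructure(a, b) {
  [a, b] = [b, a];
  return [a, b];
}

// Manual swap with a temporary variable.
function swapTmp(a, b) {
  const tmp = a;
  a = b;
  b = tmp;
  return [a, b];
}
```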

I enjoy seeing updates like this to learn more about how the engine works, but: these changes are exactly the reason you _don't_ want to design your code around specific micro-optimizations, particularly when it comes to JS. They're at parity now and it is quite possible that 6 or 12 or 24 weeks from now, the cleaner destructuring approach will actually be faster than the tmp-var approach. Remember this the next time someone suggests in code review that you should design your code to meet the whims of the JS engine.




I'd have to be running that billions of times a day before I'd start to care about the extra nanoseconds. Clean-looking code makes me feel better than having my servers sit at 67.3% CPU instead of 67.4% CPU.


Yeah, even back in the days when serious applications on 16-bit platforms were written in Assembly, I hardly felt the need to disable bounds checking in Turbo Pascal, except in cases where the solution was to rewrite the function in Assembly anyway.


I'd take cleaner code for half the speed quite often!


I see that you too are a Python developer! /s


There’s a serious point: cleaner code makes high-level optimizations easier and those are usually far bigger wins. I’ve seen more than a few times where Python, Perl, etc. beat hand-tuned C because the C used so many clever tricks that nobody wanted to change it.


I absolutely agree! I use python like 70% of the time, and accessibility is one of the big reasons for that.

Just making a joke.


Yeah, I wasn't criticizing you. I think one of the signs of experience is when people go from just laughing at that to laughing and thinking more deeply about it.


How is a language which doesn't have a type checker a synonym for clean code?


Python's readability helps a lot — I've seen a ton of code in statically-typed languages where a bug was obscured by the syntactic boilerplate and nobody noticed that, say, they were calling the correctly type-checked function with the wrong variable, casting it incorrectly, etc. If the language or popular frameworks encourage boilerplate that's especially effective at getting people to skim over important details.

Newer languages with solid type inference help a lot for reducing that cognitive load, so this isn't saying that one side of the static/dynamic-typing debate is wrong but rather that it's an important design factor for programmer productivity.


How is a type checker or the presence of types synonymous with clean code?

The two are totally orthogonal concerns.

You can write highly obfuscated spaghetti code from hell in any type checked language you wish...


Python does a lot of type checking at runtime and complains loudly, very different from e.g. JavaScript.


Besides, whether or not it complains about misused types says nothing about the code being clean.

One can write totally unreadable code in Scala and a totally readable version of the same algorithm in Python if one so wishes.


Check out mypy. It works well.


If you hit a hot path the impact can be way higher. Especially if you're running client-side and want a fluid experience. Example: https://github.com/facebook/react/pull/12510

But definitely agree that cleaner code > performance in most cases. First make sure you really do need to squeeze that performance out.


0.01 is kind of a big deal if you do that a few thousand times though, right?


0.01 is the resulting speedup in CPU usage (overall performance). It factors in the call count.


This is what I meant. I suspect the 0.01 probably encapsulates ALL these little sub-optimisations I decide on in a code base, rather than just this one variable swap. CPUs are a lot faster than most people realise. It's usually other things that slow down a program, not whether the programmer uses array.forEach or a basic for loop.

Disclaimer: I do tend to use for loops a lot since they can be quite readable.
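For what it's worth, the two loop styles being weighed here (trivial example, values made up):

```javascript
const items = [2, 4, 6];

// forEach: concise, but every element goes through a callback.
let sum = 0;
items.forEach((x) => { sum += x; });

// Basic for loop: more verbose, but arguably just as readable
// and consistently fast in every engine.
let sum2 = 0;
for (let i = 0; i < items.length; i++) sum2 += items[i];
```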


Whatever leads to 60fps on the client side is welcome.


> Remember this the next time someone suggests in code review that you should design your code to meet the whims of the JS engine.

I think the best approach, if you have the discipline to follow it, is to normally code for ease of understanding ("clean code") and occasionally fall back to performance hacks in performance-critical code, but don't stop there. The following will likely make you much more confident that your choices were correct, that you can easily reverse them at little cost when needed, and that you'll know (or be able to easily query) whether that time has come.

1. Mark each performance hack with some identifier, e.g. a comment like /* PERFHACK #4632 */ or /* PERFHACK #4632 - destructuring is slow */.

2. Maintain a test in the same repo which confirms this assertion by actually testing the relative speed of each approach, expecting the hack to be faster by some margin. Each test should contain the PERFHACK identifier and optionally a description.

3. Occasionally (or along with regular testing) run all these PERFHACK tests, looking for failures, which would indicate the performance is not a certain percentage greater than the simple case.

4. To fix the failing test, reverse the test assertion, and search the codebase for instances of that PERFHACK identifier and fix those as well.
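A minimal sketch of what such a PERFHACK test could look like in Node (the identifier, margin, and iteration count are illustrative assumptions, not a real framework):

```javascript
// Time a function over many iterations, returning milliseconds.
function timeIt(fn, iterations = 1e6) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  return Number(process.hrtime.bigint() - start) / 1e6;
}

// PERFHACK #4632 (hypothetical): check the hack still beats the
// clean code by the expected margin. margin = 0.8 demands at least
// a 20% speedup, which also absorbs some benchmark noise.
function perfhackStillWorthIt(normalFn, hackFn, margin = 0.8) {
  const normalMs = timeIt(normalFn);
  const hackMs = timeIt(hackFn);
  return normalMs * margin > hackMs;
}
```

If this returns false, step 4 kicks in: flip the assertion and grep the codebase for the PERFHACK identifier.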


That sounds like a pretty fragile test - could you see it failing if I plug my laptop into AC power while the test is running? (Intel SpeedStep, or whatever it's called, will take my laptop from 2 GHz to 3 GHz when I do that.)


> could you see it failing if I plug my laptop into AC power while the test is running?

That should at max cause a single test case failure, as it caused a speedup during one leg of the operation benchmark.

Since these are not completely automated (you might want to have them as a set of allowed to fail tests, or tests you have to explicitly request), you can easily just re-run the tests to confirm the prior output wasn't a fluke. You need something like that anyway because you're essentially testing benchmark outputs, and it's not like it's easy to set up perfectly consistent benchmarks in the first place.

The margin by which you want your PERFHACK to be faster than the default code is also your margin of error. Theoretically you would test something like NORMAL_SECONDS * 0.8 > PERFHACK_SECONDS to ensure you're still getting a 20% or more speedup. If some change causes it to drop to 15%, you then get to assess whether that gain is still worth the hack (and perhaps change the test to assert a 10% gain), or whether you want to clean up your code (or at least schedule it).

The point is that you've put a system in place that allows you to have a clearer picture of whether your prior decisions with regard to performance hacks still make sense.
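Concretely, the margin check described above could be as simple as this (the timings here are hypothetical numbers for illustration):

```javascript
// Hypothetical benchmark results: clean version took 1.00s,
// the PERFHACK version 0.78s.
const NORMAL_SECONDS = 1.0;
const PERFHACK_SECONDS = 0.78;

// Assert we still get at least a 20% speedup; the 0.8 factor
// doubles as the tolerance for benchmark noise.
console.assert(NORMAL_SECONDS * 0.8 > PERFHACK_SECONDS,
  "PERFHACK no longer 20% faster - reassess or clean up");
```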


I think that kind of test would be more useful on CI hardware that’s usually pretty consistent but it’s a good point to raise.


Sure, the tmp-var approach could be slower in some weird edge cases, but probably only slightly, while the array-swapping approach could be much slower if the JS engine doesn't implement this optimization at all, or the code pattern runs in the interpreter or baseline compiler.

Not arguing you should do such micro-optimizations, but generally writing "simpler" code should also allow less sophisticated compilers to generate fast code.


Depends on if you want your code to go fast right now or if hypothetically going faster maybe sometime in the future perhaps is good enough for you.

There's a good argument for both situations depending on context.


Or there's a third option, which is that the difference in performance doesn't matter for your application, which is the case the vast majority of the time. Destructuring an array is not going to be a bottleneck for almost any piece of code.



