His team discovered Eris and many other trans-Neptunian objects, which fueled the discussion behind Pluto's demotion: greatly increase the number of planets, or demote Pluto? They're also behind the Planet 9 theory that's discussed in the article.
This limitation ensures that the two functions use exactly the same calling convention (input and output registers, and values passed via the stack), the details of which depend on the particular architecture.
Does it have to move or resize when one of the sides reaches the end of the array? I presume that would be slower than a ring buffer that only grows when it's completely filled?
Both are O(1) data structures, but indexing a ring buffer is slightly more costly than indexing this one, while insertion is slightly more costly for this than for a ring. It probably still works out in favor of this design for net performance in most cases?
They both have an offset, but ring buffers aren't contiguous, so they also need a branch or a modulus to handle wrap-around. Either can be cheap, but that is strictly more costly than not having the extra operation at all (even if only by a little). This only matters for random indexing, though; for mutation the situation is reversed.
There are many situations where those little differences completely vanish because of instruction pipelining. The only way to know is to actually measure it.
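The indexing difference being discussed can be sketched with two toy index functions (assumed layouts, not any specific library): a flat buffer with a start offset needs one add, while a ring buffer's add can run past capacity, so every random access also pays a modulus (or an equivalent branch).

```python
def flat_index(start, i):
    # Flat/offset buffer: one add, no wrap handling needed,
    # because the live elements are contiguous in the array.
    return start + i

def ring_index(start, i, cap):
    # Ring buffer: the elements may wrap past the end of the
    # backing array, so the sum must be reduced modulo capacity.
    return (start + i) % cap

print(flat_index(3, 2))      # 5
print(ring_index(6, 3, 8))   # 1: wrapped past the end of an 8-slot buffer
```

On most hardware the extra modulus is a single cheap operation (often a bitwise AND when the capacity is a power of two), which is why the comment below about pipelining and measurement applies.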
The pathway of untrusted/malicious input -> trusted command line argument seems to be a common problem, and one that could possibly be mitigated by better type/taint checking.
It looks like there is some prior work in this area, but it hasn't resulted in commonly available implementations (even something basic like a type/taint checking version of exec() etc. on one side and getopt() etc. on the other.)
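The pathway can be illustrated with a small, hypothetical sketch: an attacker-controlled "filename" reaching a getopt-style parser is interpreted as an option, and the conventional mitigation is the `--` end-of-options marker. The flag names here are invented for illustration.

```python
import argparse

def parse(argv):
    # Hypothetical program options; any getopt-style parser behaves similarly.
    p = argparse.ArgumentParser()
    p.add_argument("-f", "--force", action="store_true")
    p.add_argument("paths", nargs="*")
    return p.parse_args(argv)

untrusted = "--force"   # came from user input, not from the caller

# Without a separator, the untrusted string is parsed as an option:
print(parse([untrusted]).force)   # True: the input flipped a flag

# With the conventional "--" marker, everything after it is positional:
ns = parse(["--", untrusted])
print(ns.force, ns.paths)         # False ['--force']
```

This only papers over the problem at the last hop, though; a type/taint-checking approach as suggested above would track the untrusted provenance through the whole pipeline rather than relying on every call site to remember the separator.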
I could've sworn I remember something about bash and glibc cooperating to indicate which arguments come from expanded variables but I cannot find anything on the internet or in the sources. Either I'm going insane or it was an unmerged proposal.
Besides what has already been said in other comments, I think reality has already shown how it _is_ painful in C. It's painful to implement in a way that is both safe and ergonomic. The number of subtly incorrect and/or mutually differing implementations of easier things out there is just incredible.
That just sounds like you don't believe it's ever possible to change existing C code, which... is a position you can argue, but I'm pretty sure that bash and glibc are actively developed to the point where I wouldn't personally commit to that position.
1. I don't see why you need to solve Trusting Trust to make libc and the shell more robust.
2. If we are worried about Trusting Trust, then Rust is worse; at least C has the wide range of compilers needed for diverse double-compiling and as of https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-... we arguably have a working solution. Rust only has a single compiler, and that compiler is used to build itself, making it the poster child for Trusting Trust targets.
This article is one of my pet peeves. It always shows up in discussions as "proof" that 0-indexing is superior, but it sweeps under the carpet all the cases where it is not. For instance, backwards iteration needs a "-1" and breaks with unsigned ints.
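The unsigned-index pitfall referred to above is the classic C loop `for (size_t i = n - 1; i >= 0; --i)`, which never terminates: decrementing an unsigned 0 wraps around to SIZE_MAX, so `i >= 0` always holds. Python ints don't wrap, so this sketch models 64-bit unsigned arithmetic explicitly, and shows one conventional fix (test before decrementing).

```python
SIZE_MAX = 2**64 - 1

def size_t_dec(i):
    # Decrement as a 64-bit unsigned integer (e.g. C's size_t) would.
    return (i - 1) % (2**64)

# 0 - 1 wraps to SIZE_MAX, so a condition like `i >= 0` can never fail:
print(size_t_dec(0) == SIZE_MAX)   # True

def backwards(xs):
    # One standard workaround: count i from n down to 1 and index
    # with i - 1, so i never has to go below zero.
    out = []
    i = len(xs)
    while i > 0:
        i -= 1
        out.append(xs[i])
    return out

print(backwards([1, 2, 3]))   # [3, 2, 1]
```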
In theory, everyone would be able to make an informed decision. In practice, patients would rather take action than wait, resulting in harm from unnecessary exams and interventions.
Not to mention the cost of the exams themselves. Every time someone goes through an exam that serves no useful purpose, that's healthcare money that could be put to better use.
The big problem with the broad-spectrum exams is the "if appropriate" part. Often it's not clear whether further exams or interventions are warranted, and when in doubt people tend to err towards taking action.