This is a very good question. We actually did consider proxy certificates and name-constrained certificates first, and had a long discussion at the IETF about these different options. In the end the consensus was that it would be much better to have a very minimal structure which could do only one thing and nothing else. DCs also have the advantage that they are cryptographically bound to a particular end-entity certificate rather than to a particular public key only, and hence can only be used with that certificate's properties, so it really is the minimal possible thing you need and nothing more.
As a CA you can issue certificates for other domains as well, which might be undesirable. There are existing mechanisms, such as name-constrained CAs and proxy certificates, to reduce this scope, and while they were originally considered, there are issues with them: there is no widespread support for either, and there is no way to know whether both sides support them. DCs give you an extremely minimal subset of what you might need to issue credentials with your own lifetime, and it only affects you. DCs are also cryptographically bound to the leaf certificate. A bunch of this is documented in the draft.
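For a sense of how small the structure actually is, here is roughly its shape as I remember it from the draft, paraphrased as a TypeScript type (the draft itself uses TLS presentation-language structs, so take the field names and details with a grain of salt):

    // Rough shape of a delegated credential, paraphrased from the
    // draft's TLS presentation-language structs. Details approximate.
    interface DelegatedCredential {
      // Validity in seconds, relative to the delegating certificate's
      // notBefore; the draft caps the total lifetime at 7 days.
      validTime: number;
      // SignatureScheme code point the delegated key will use in
      // CertificateVerify.
      expectedCertVerifyAlgorithm: number;
      // DER-encoded SubjectPublicKeyInfo of the short-lived key.
      subjectPublicKeyInfo: Uint8Array;
      // SignatureScheme code point and signature made with the
      // end-entity certificate's key over the fields above together
      // with the certificate itself -- the binding mentioned above.
      algorithm: number;
      signature: Uint8Array;
    }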
Generalized ASN.1 parsing can be super tricky, and the industry is generally avoiding ASN.1 for new protocols. There has been some really interesting work from Microsoft on verified parsers for ASN.1: https://www.usenix.org/system/files/sec19-ramananandro_0.pdf
You're supposed to use parser generators for ASN.1. The reason ASN.1 is so difficult in TLS and other crypto standards is precisely because ASN.1 messaging is intermixed with ad hoc messaging (as in this case) and implicit state, which means you couldn't even use a parser generator for everything even if you wanted to.
The excellent open source ASN.1 compiler, asn1c (http://lionet.info/asn1c/compiler.html), can generate C data structures and a parser and composer for X.509 DER certificates from the formal ASN.1 description. But it's not widely used because, among other reasons, you end up having to write too much ad hoc parsing anyhow, which makes the investment in the parser generator seem not worthwhile. (AFAIU, asn1c is far more popular in the telecom industry, likely because telecom uses fully ASN.1-based messaging.)
Of course, if you're not going to use ASN.1 as intended, then the binary encoding (e.g. DER) can be quite tricky to parse using an ad hoc parser, including most parser combinators, mostly because TLV encodings aren't context-free. But I managed to write a full X.509 parser using LPeg. LPeg has an extension for match-time captures, which provides a way to invoke a subexpression parameterized on the value of a previous match (e.g. the decoded length context) and which can return match success or failure along with a resumption point to the parent expression. See http://lua-users.org/lists/lua-l/2019-04/msg00226.html and http://www.inf.puc-rio.br/~roberto/lpeg/#matchtime
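To make the context-sensitivity concrete: in DER you can't know how far a value extends until you've decoded its length field, so every value's extent depends on a previously parsed result. A minimal sketch in TypeScript (illustrative only; low-tag-number form and definite lengths, nothing else; readTlv is a name I made up):

    // Read one DER TLV element. Sketch: handles low-tag-number form
    // and definite lengths only.
    function readTlv(buf: Uint8Array, offset: number) {
      if (offset + 2 > buf.length) throw new Error("truncated header");
      const tag = buf[offset];          // low-tag-number form only
      let len = buf[offset + 1];
      let cursor = offset + 2;
      if (len & 0x80) {                 // long form: next n bytes hold the length
        const n = len & 0x7f;
        if (n === 0 || n > 4 || cursor + n > buf.length)
          throw new Error("unsupported or truncated length");
        len = 0;
        for (let i = 0; i < n; i++) len = len * 256 + buf[cursor++];
      }
      // The value's extent depends on the just-decoded length -- the
      // context-sensitivity that trips up naive parser combinators.
      if (cursor + len > buf.length) throw new Error("truncated value");
      return { tag, value: buf.subarray(cursor, cursor + len), next: cursor + len };
    }

LPeg's match-time captures let you express exactly that dependency inside the grammar instead of in driver code around it.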
I feel like there's simply no good answer here. The fundamental problem is the tension among 1) strictly specified, formalized protocols (which ASN.1 DER absolutely provides), 2) efficiency in time and space (ASN.1 DER does well; PER takes it to an extreme), 3) the need for forward compatibility so protocols can incrementally evolve (partly technical, partly a social management issue), and of course 4) ease of implementation. Context-free encodings help with #3 and #4, but fail at #2 (e.g. they carry field names that aren't strictly necessary, and variable-length values require a more complex encoding), and in a security context cause problems with #1 (better to have a failed parse than to successfully parse unknown elements that you then ignore).
Interesting, didn't know that. That trend is a bit sad then, as I've been quite enjoying the interactive ASN.1 viewer while developing rcgen, an ASN.1 library: https://lapo.it/asn1js/
This would make a great comparison. I'm not certain whether or not K8s mutual auth supports session-ticket resumption and distribution of short-lived ticket keys. The ticket-rotation design would probably make a great addition to K8s. There are a lot of intricate design details which can make a major difference not only in performance but also in whether or not the system wakes you up at night.
Yeah, they're similar in that they are all signed blobs of data, but different in the sense that they are specifically designed to send authentication information via several layers of proxies.
I'm actually interested in this subject, so I'll check out your links when I get a chance. At first sight this sounds like wrapping tokens, or like third-party caveats in Macaroons.
If you don't mind, I wouldn't necessarily agree with the comment about JWT by Yueting. JWT is just a format; querying a backend to get a new token is not necessary (that's only how people often use them). I actually built a small PoC that mints new JWTs on the client side (in the browser), signing them with a non-exportable key (through WebCrypto).
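Roughly along these lines (a browser sketch, not the actual PoC; mintJwt and b64url are names I made up; the key pair is generated with extractable set to false, so the private key can be used for signing but never read out):

    // Mint an ES256 JWT in the browser with a non-exportable key.
    const b64url = (buf: ArrayBuffer | Uint8Array) => {
      const bytes = buf instanceof Uint8Array ? buf : new Uint8Array(buf);
      return btoa(String.fromCharCode(...bytes))
        .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
    };

    async function mintJwt(claims: object): Promise<string> {
      const { privateKey } = await crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" },
        false,                          // extractable: false -> non-exportable
        ["sign", "verify"]
      );
      const enc = new TextEncoder();
      const header = b64url(enc.encode(JSON.stringify({ alg: "ES256", typ: "JWT" })));
      const payload = b64url(enc.encode(JSON.stringify(claims)));
      const sig = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" },
        privateKey,
        enc.encode(`${header}.${payload}`)
      );
      // WebCrypto's ECDSA output is raw r||s, which is exactly the
      // signature format ES256 JWTs expect.
      return `${header}.${payload}.${b64url(sig)}`;
    }

The relying party of course still needs the public half registered somewhere out of band; the point is just that renewing a token doesn't require a round trip to a token endpoint.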
As for Macaroons, I believe they could also be adjusted to resemble CATs as I understood them (with layers for different services). I do have other issues with Macaroons, though (https://news.ycombinator.com/item?id=17878845)...
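For anyone who hasn't looked at them: the layering in Macaroons is just HMAC chaining, so each hop can attenuate the token but never widen it. A minimal Node sketch of first-party caveats (function names are mine, not from any library; real code would also need a timing-safe compare and actual caveat evaluation):

    import { createHmac } from "node:crypto";

    const hmac = (key: Buffer, msg: string) =>
      createHmac("sha256", key).update(msg).digest();

    // A macaroon is an identifier, a caveat list, and a chained HMAC.
    interface Macaroon { id: string; caveats: string[]; sig: Buffer; }

    function mint(rootKey: Buffer, id: string): Macaroon {
      return { id, caveats: [], sig: hmac(rootKey, id) };
    }

    // Any holder can append a caveat (e.g. "expires < 2025-01-01"),
    // which re-keys the signature; removing one would mean inverting
    // the HMAC.
    function addCaveat(m: Macaroon, caveat: string): Macaroon {
      return { id: m.id, caveats: [...m.caveats, caveat], sig: hmac(m.sig, caveat) };
    }

    // The issuer re-derives the chain from the root key; real code
    // would use timingSafeEqual and evaluate each caveat predicate.
    function verify(rootKey: Buffer, m: Macaroon): boolean {
      let sig = hmac(rootKey, m.id);
      for (const c of m.caveats) sig = hmac(sig, c);
      return sig.equals(m.sig);
    }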