Would be interested to see threat models alongside these analyses.
Something lacking in a lot of these lower-level sandboxing discussions is attention to the direct impact they have on patterns of website development and on user priorities.
Take for example "site isolation":
> Site isolation runs every website inside its own sandbox so that an exploit in one website cannot access the data from another.
What is "one website" here? Is it a tab, including the 3rd-party resources & requests within that tab? How does one website access the data from another at an application level, and how would such an attack look if something (imaginary) like "origin isolation" were in place instead of site isolation?
What are the most frequent attacks of this type and who are the attackers? Which websites' data is being accessed in such attacks?
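To make the distinction I'm gesturing at concrete, here's a rough sketch in Python. It's deliberately naive: the site() function just takes the last two host labels, whereas a real browser derives the eTLD+1 from the Public Suffix List, and the function names are mine, not anything from a browser codebase.

    # Naive sketch of "origin" vs "site" as isolation units.
    from urllib.parse import urlsplit

    def origin(url: str) -> tuple:
        """Origin = (scheme, host, port) -- the unit the same-origin policy uses."""
        parts = urlsplit(url)
        return (parts.scheme, parts.hostname, parts.port)

    def site(url: str) -> tuple:
        """Site = (scheme, eTLD+1) -- the coarser unit site isolation keys on.
        Taking the last two labels is a simplification; real implementations
        consult the Public Suffix List."""
        parts = urlsplit(url)
        labels = (parts.hostname or "").split(".")
        return (parts.scheme, ".".join(labels[-2:]))

    a = "https://mail.example.com/inbox"
    b = "https://ads.example.com/track"

    print(origin(a) == origin(b))  # False: different subdomains, different origins
    print(site(a) == site(b))      # True: same site, so site isolation alone
                                   # would keep these in the same renderer process

In other words, "one website" under site isolation is coarser than an origin: two mutually distrusting subdomains can still share a process, which is exactly the kind of detail a threat model would have to spell out.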
So the gaps that he considers large holes haven't been used to mount successful attacks, while the mitigations he dismisses as "not substantial" cover the vectors that have been used for the real 0-days.
It seems the "threat model" was "if Chrome has it, it must be important" and "if Chrome doesn't have it, it must be useless". You cannot do a serious security analysis this way; it's like choosing a product by comparing feature checkboxes. But in this case it's even worse, because we only look at the checkboxes vendor G has ticked.
Again, checking reality, which "sandbox hole" was used in the last successful Firefox exploit in pwn2own?
Those Chrome CVEs are for single-digit version numbers from 10 years ago.
Not that this has any actual relevance to the argument (which is that media decoding is a huge risk vector for browsers, and the kind of thing a threat model looks at), but I'll humor you: https://nvd.nist.gov/vuln/detail/CVE-2013-0894
I'm sure they sandbox FFmpeg more extensively now.
You are making my argument for me.
Chrome sandboxes Vorbis -> important!
Firefox sandboxes Vorbis in a much tighter way -> "not anything substantial".
Considering where Firefox and Chrome overall stand on sandboxing, two libraries isn't substantial. Chrome has consistent, extensive sandboxing, and Firefox has sandboxing here and there as an afterthought. I looked into trying Firefox on Android but it apparently doesn't even have a sandbox.
I've already disproven this assertion by pointing out that those exact libraries are real-life exploit vectors, not the kind of theoretical weaknesses most of the rest of the page talks about. Repeating the wrong assertion doesn't make it true.
> Chrome has consistent, extensive sandboxing, and Firefox has sandboxing here and there as an afterthought.
This isn't really true either: the Chrome sandbox varies in what it blocks per process type, and the same goes for Firefox. The processes do have to talk to the operating system to accomplish anything useful! The original article also seems to completely miss this, at least in the Chrome case.
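To illustrate what I mean, a purely hypothetical sketch in Python; the process types and operation names here are made up for illustration and don't correspond to any browser's actual policy or API:

    # Hypothetical sketch: each process type gets a different sandbox policy,
    # and every policy still leaves some channel open to the OS or to a broker.
    SANDBOX_POLICIES = {
        # process type    operations still reachable from inside the sandbox
        "renderer":       {"read", "write", "mmap", "ipc_to_broker"},
        "gpu":            {"read", "write", "mmap", "ipc_to_broker", "gpu_ioctl"},
        "network":        {"read", "write", "mmap", "ipc_to_broker", "socket"},
        "media_decoder":  {"read", "write", "mmap", "ipc_to_broker"},
    }

    def is_allowed(process_type: str, operation: str) -> bool:
        """A sandboxed process either does the operation itself (if its policy
        allows it) or has to ask the broker over IPC to do it on its behalf."""
        return operation in SANDBOX_POLICIES[process_type]

    assert not is_allowed("renderer", "socket")   # renderers can't open sockets...
    assert is_allowed("network", "socket")        # ...but the network process can,
    assert all(is_allowed(p, "ipc_to_broker")     # and every process still talks
               for p in SANDBOX_POLICIES)         # to the broker / OS in some way

So "sandboxed" is never an all-or-nothing property; the interesting question is which operations each process type can still reach.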