The problem is this: Chrome, which, last time I checked, often ends up with >100 MB of private working set per tab.
And given the amount of dev time that's gone into Chrome / Chromium, it doesn't exactly set a good precedent.
Also, you're missing that there are a number of things that have to be duplicated in a multiprocess model but that you only need one of in a multithreaded model. For instance: the JS heap, which carries a nontrivial amount of per-instance overhead.
Some things can be shared between processes just as they can between threads, but not everything.
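To make that concrete, here's a minimal Node.js sketch of the distinction (node:worker_threads and the variable names are my illustration, not anything from the browsers themselves): threads in one process can alias the same memory, while separate processes only ever exchange copies.

const { Worker } = require('node:worker_threads');

// One buffer, one allocation: every thread in this process can alias it.
const shared = new SharedArrayBuffer(4);
const counter = new Int32Array(shared);

const worker = new Worker(`
  const { workerData } = require('node:worker_threads');
  // Same physical memory as the parent's view -- nothing is copied.
  Atomics.add(new Int32Array(workerData), 0, 1);
`, { eval: true, workerData: shared });

worker.on('exit', () => {
  // Prints 1: the worker mutated the parent's memory in place.
  console.log('counter:', Atomics.load(counter, 0));
});

// With child_process.fork(), by contrast, each child spins up its own V8
// instance and heap, and anything passed via child.send() is serialized
// and copied -- that duplication is the per-process overhead in question.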
Based on running the following script in the devtools console against a freshly generated about:memory report (i.e., after clicking "Measure"):
var max = 0;
var sum = 0;
// $$ is the devtools console alias for document.querySelectorAll.
// This grabs the top-level "explicit" allocation figure of each
// "Web Content" process in the report.
var vals = $$('span[id^="Web Content"][id$="explicit"] span.mrValue');
for (let vi = 0; vi < vals.length; vi++) {
  // parseFloat drops the trailing unit label; this assumes every
  // entry reports in the same unit.
  let v = parseFloat(vals[vi].textContent);
  sum += v;
  max = Math.max(max, v);
}
console.log('Content processes:', vals.length);
console.log('Sum:', sum);
console.log('Max:', max);
console.log('Average:', sum / vals.length);
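(If your report mixes units, parseFloat alone will skew the totals. Here's a hedged variant of the same loop that normalizes everything to MB; the unit labels are my assumption about how about:memory prints values.)

const UNITS = { KB: 1 / 1024, MB: 1, GB: 1024 };
let sum2 = 0, max2 = 0;
const entries = $$('span[id^="Web Content"][id$="explicit"] span.mrValue');
for (const el of entries) {
  // Entries presumably look like "211.52 MB"; split off the unit label
  // and normalize to MB, treating unknown units as MB.
  const [num, unit] = el.textContent.trim().split(/\s+/);
  const mb = parseFloat(num) * (UNITS[unit] ?? 1);
  sum2 += mb;
  max2 = Math.max(max2, mb);
}
console.log({ processes: entries.length, sum: sum2, max: max2, average: sum2 / entries.length });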
This is with ~400 tabs open, but in the "unloaded" state; only 5 pages were actually fully loaded.
50 processes caused stuttering when scrolling the tab bar.
10 and 5 processes felt smooth and let the tab titles of background tabs load in quickly during startup.
1 process took forever to load the tab titles of background tabs on startup.
Compartments are not quite what I'm talking about.
I'm talking more about metadata / caches. There's a fair bit of state that can be kept global in a single process but cannot be with multiple processes. Sometimes you can use shared-memory tricks, but not always.
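A tiny illustration of why (purely hypothetical names, not actual Firefox internals): within one process, a global cache of parsed objects is shared by every consumer for free, whereas shared memory between processes only transports raw bytes, so each process still re-parses into its own heap.

// Single process: one global cache, every caller gets the same object.
const parsedCache = new Map();
function getParsed(key, text) {
  if (!parsedCache.has(key)) parsedCache.set(key, JSON.parse(text));
  return parsedCache.get(key);
}

// Across processes: a SharedArrayBuffer can hold the *encoded* bytes once...
const sab = new SharedArrayBuffer(1024);
const bytes = new Uint8Array(sab);
const enc = new TextEncoder().encode('{"answer":42}');
bytes.set(enc);

// ...but every process still copies out and re-parses into its own JS heap,
// so the parsed structures (and any caches over them) exist N times.
const local = JSON.parse(new TextDecoder().decode(bytes.slice(0, enc.length)));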
Also: I wasn't aware there was that much of a memory penalty. Yow.