There are forks, but they're very limited in how far they can deviate from what Google wants. The Manifest v3 discussions show this. Extension APIs aren't a big part of browsers compared to all the other things they do, and there was clearly demand to keep Manifest v2 alive, so you'd expect at least one or two forkers to differentiate by doing that.
In practice the rebasing costs are so high that everyone shrugged and said they had no choice but to go along with it.
Chromium is open source, but not designed for anyone except Google to develop it. There's nothing malicious about it; it's just that building a healthy contributor community is a different thing from uploading some source code. If you've ever worked with the Chromium codebase you'll find you have to reverse engineer it to work things out. The codebase is deliberately under-architected (a legacy of the scars of working on Gecko), so many things you'd expect to be well defined re-usable modules that could be worked on independently of Google are actually leaky hacks whose APIs change radically depending on what platform you're compiling for, what day of the week it is, etc. Even something as basic as opening a window wasn't properly abstracted in a cross-platform manner, last time I looked, and at any moment Google might land a giant rewrite they were working on in private for months, obliterating all your work at a stroke.
There are reasons for all of this... Chrome is a native app that tries to exploit the native OS as much as possible, and they don't want to be tied down by lowest-common-denominator internal abstractions or API commitments. But if you view Chrome as an OS then the API churn makes Linux look like a stroll through a grassy meadow.
> The codebase is deliberately under-architected (a legacy of the scars of working on Gecko), so many things you'd expect to be well defined re-usable modules that could be worked on independently of Google are actually leaky hacks
I'm guessing you're specifically referring to Gecko's early over-use of XPCOM, which the Gecko team itself had to clean up in a long process of deCOMtamination [1].
I'm hopeful that if Servo ever gets enough funding to become a serious contender among browser engines (hey, KHTML was once an underdog too), it might walk a middle path between overuse of COM-ish ABIs and what you described about Chromium. Servo is already decomposed into many smaller Rust crates; this provides a pretty strong compile-time boundary between modules. Yet those modules are all statically linked, and in a release build, that combined program gets the full benefit of LTO. Of course, where trait objects are used, there's still dynamic dispatch via vtables, but the point is that one can get strong modularity without using something COM-ish everywhere.
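To make the contrast concrete, here's a minimal Rust sketch of that middle path (the module and type names are made up for illustration; a real workspace would use separate crates rather than modules). The same trait boundary can be crossed either via static dispatch, which LTO can inline across crate boundaries, or via a trait object, which goes through a vtable much like a COM interface but without the ABI ceremony:

```rust
// Hypothetical module standing in for a separate crate, e.g. `layout`.
mod layout {
    pub trait LayoutEngine {
        fn layout(&self, width: u32) -> u32;
    }

    pub struct BlockLayout;

    impl LayoutEngine for BlockLayout {
        fn layout(&self, width: u32) -> u32 {
            width / 2 // placeholder computation
        }
    }
}

use layout::{BlockLayout, LayoutEngine};

// Static dispatch: monomorphized per concrete type; in a release
// build with LTO the call can be inlined across the crate boundary.
fn run_static<E: LayoutEngine>(engine: &E) -> u32 {
    engine.layout(800)
}

// Dynamic dispatch: one compiled body, calls go through a vtable.
fn run_dynamic(engine: &dyn LayoutEngine) -> u32 {
    engine.layout(800)
}

fn main() {
    let engine = BlockLayout;
    println!("{}", run_static(&engine));
    println!("{}", run_dynamic(&engine));
}
```

Either way the caller only sees the trait, so the module boundary holds at compile time; the choice between monomorphization and vtables is a per-call-site decision rather than an architecture-wide commitment.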
Incidentally, the first time I built Chromium (or more specifically, CEF) from source in late 2012, I was impressed as I watched hundreds of static libraries being linked into one big binary. Then as I studied the code (though not deeply enough to learn the things you described), I saw that Chromium didn't use anything COM-ish internally (though CEF provided a COM-ish ABI on top). That striking contrast with Gecko's architecture (which I had previously worked with) stuck with me. I wonder how much the heavy reliance on static linking and LTO (meaning whole-program optimization), combined with the complete lack of COMtamination, contributed to Chrome's speed, which was one of its early selling points.
[1]: Mozilla used to have a dedicated wiki page about deCOMtamination, but I can no longer find it.
https://en.wikipedia.org/wiki/Chromium_(web_browser)#Browser...