That chart with the growth of the Linux kernel discredits everything. The Linux kernel continues to grow because they are obsessed with keeping all drivers in mainline instead of having a stable API for them as any sane project would.
You still have the source to the old drivers. It's just not a priority for others to support anymore, but you have the means to go add that support yourself, or pay a third party to do it for you.
Contrast with when Windows changed the video driver architecture in Vista, and the only option you had was to pound sand.
Seems like a pretty easy cost/benefit analysis to me:
I can either spend thousands of dollars worth of my time on updating the driver, or I can pay someone else thousands of dollars to do it. Or I can spend a fraction of that on a new video card.
Seems like the only solution for Vista users is the only solution I'd ever have gone with, anyway.
Sure the cost/benefit works out the same for you (which is why no one has done the work), but that's different from the full set of options available to you in both cases.
How expansive do we want to be about what the full set of options is, though?
I tend to fall on the side of the root poster - if having a stable driver API means Linux gets more commercial driver support, then that's the one that has the highest cost/benefit in my book.
In practical terms, being able to hack on open-source drivers isn't particularly useful to me, but having an easier user experience with Linux on the desktop (and, more generally, having Linux on the desktop be something more than an also-ran) would benefit me immensely.
Some XP drivers work on 7, but most don't. Also, unlike user space, you cannot run 32-bit drivers on 64-bit Windows, so unless you were running 32-bit Windows 7 you needed 64-bit XP drivers, which were rare. Are you sure you explicitly installed XP drivers, and didn't just happen to have hardware where Windows shipped matching drivers or could download them from the Windows driver database? (Or downloaded an installer bundle from the manufacturer's website that just contained drivers for all supported platforms?)
> My Asus Netbook no longer gets all the acceleration options that it used to have pre 16.04.
Which holds true, because /dev/radeon no longer offers the hardware video decoding it once did unless I force-enable it, and even then it usually leads to random X crashes when watching videos.
And then there is this,
"For one, AMD users can’t use applications that require OpenGL 4.3 or later without the fglrx/Catalyst drivers."
OK, that calls out X.org as the reason why the drivers aren't being supported rather than Linux. You can use fglrx with newer kernels just fine; it's just that user space went out of its way to break support.
I really fail to see how that has anything to do with Linux's unstable kernel driver API.
> OK, that calls out X.org as the reason why the drivers aren't being supported rather than Linux.
Ah, the Linux evangelist blame-game. It's a big advantage of a system being so haphazardly thrown together from separately developed components. Start by blaming the choice of distro, end up at "Linux is just a kernel".
I mean, I feel like that's valid in the context of this discussion, that context being "Linux's unstable driver API, designed to push drivers as source into mainline, causes ISVs headaches".
How does that apply here when Linux didn't change and you can still use the same kernel driver, but some other component decided to not work with the driver anymore?
... a brand new, mainline kernel will run fglrx just fine. It's user space (specifically the X server) that decided to break fglrx. So how is that the fault of the kernel's unstable API again?
I mean, everything runs on the kernel... but I assume you meant in the kernel.
Graphics drivers are split into three main pieces these days.
1) Kernel space that mainly sets up the GPU's MMU, and adds a context to the GPU's hardware thread scheduler. There'll be some modesetting too, but that piece is really simple (and fglrx never used the main kernel API for that anyway, so it doesn't really matter if it was stable or not. But it actually was pretty stable over the time frame we're talking about).
fglrx's version of this piece still works fine with a modern kernel.
2) A userspace component that lives with the window manager, setting up the actual display output and accelerating compositing. Stuff like EGL works by making IPC calls to this layer.
This is the piece that broke in your case, and only because the x server decided to change.
3) Another piece that runs in userspace and is ultimately what gets called by graphics APIs and is linked into every process making graphics calls. This is the vast majority of the driver.
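To make piece 1 concrete, here's a minimal sketch (my illustration, not anything fglrx-specific): userspace talks to the kernel-space piece through its DRM device node, and DRM_IOCTL_VERSION is one of the small, versioned entry points it uses. It assumes the libdrm headers are installed.

    /* Sketch only: query the kernel-space driver piece via its DRM node.
     * Build with: cc demo.c $(pkg-config --cflags libdrm) -o demo */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <drm.h>

    int main(void) {
        /* The device node exposed by the kernel-space driver piece. */
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

        char name[64] = {0};          /* zeroed so the result is terminated */
        struct drm_version v;
        memset(&v, 0, sizeof v);
        v.name = name;
        v.name_len = sizeof name - 1;

        /* Ask the kernel piece who it is and what version it speaks. */
        if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0)
            printf("kernel driver: %s %d.%d.%d\n", name,
                   v.version_major, v.version_minor, v.version_patchlevel);
        else
            perror("DRM_IOCTL_VERSION");

        close(fd);
        return 0;
    }

The point being: the kernel-side interface is small and versioned like this; the bulk of the driver (piece 3) lives above it in userspace.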
Linux on the desktop would have succeeded with a stable driver API. Supporting hardware with closed-source drivers could have been a walk in the park for OEMs. Bad 3D, wireless, etc. support is what killed Linux on the desktop. OEMs would probably have made more and better drivers if they didn't have to change them every few months or go through the effort of releasing and mainlining the source.
This is your opinion. The opinion of informed observers who are actually doing the work is the opposite - OEMs produce buggy, incomplete drivers and don't care to fix the bugs. In fact, given the choice, they don't want to allow anyone else to dig into their drivers because doing so can only lead to embarrassment. Plus exploits that are discovered are likely to be cross-platform exploits.
Any operating system that wishes stability therefore has to put barriers between themselves and the driver. And, preferably, should use drivers that they can audit themselves rather than trusting OEMs.
As an example of this outside of the open source world, the biggest reason why Windows used to have a reputation for BSODs is that they were dependent on third party drivers. Windows made it harder for third parties to take down their OS, shipped tools to audit drivers for bugs, and forced OEMs to clean up their act. (Which they didn't do voluntarily.)
Someone told me back then that the Sound Blaster drivers were responsible for more crashes than the next several causes combined, so I started watching, and I'll be damned if I didn't see the same thing.
If I remember correctly, there was a sound card that kept talking on the PCI bus after it should have stopped. That kind of bug can break any OS, microkernel or not.
If you control the desktops and take strong measures to stop horribly-buggy drivers from working, 95% of manufacturers will fix the drivers because they want to sell the product.
In that tradeoff, I'll easily pick the marginally smaller market with vastly better drivers.
On the flip side, there are vastly fewer drivers. I can plug a random USB device into a Linux system and the chances are it just works. Same with Windows, though it might require installing a driver first. macOS might not have any driver at all.
The actual choice is whether to have limited hardware choices or hardware that crashes regularly. Put that way, it is obvious to me that limited hardware is the right answer.
Why? Because what I care about is having a system that works well for me. Limited choices are fine as long as I can determine in advance whether the system that I'm considering will work. (I usually can.) So now my choice boils down to, "Do I want to be able to buy a reliable system, or be forced to put up with a buggy one?"
Put that way, who wants to be forced to put up with bugs?
So we should have two numbers: one for the actual kernel, and one for the whole tree including drivers.
I'm quite convinced that if they kept it that way, it's because a stable API caused some unanticipated problem in practice that made driver status worse: too many delays, improper usage, I don't know.
If there's no solid reason, then I'd be happy to have split codebases.
RHEL does provide a stable kernel ABI (kABI) that can be and is used by vendors to ship binary drivers. See https://elrepo.org/tiki/FAQ
When I worked for a NIC hardware vendor, we would ship our driver in 4 forms:
1) source tarball
2) upstream kernel
3) RHEL/Centos kABI compliant source and binary rpms
4) Debian pkg using dkms
The upstream kernel driver wasn't good enough for a variety of reasons. For example, on Ubuntu LTS and RHEL, the in-tree driver was often based on a kernel that was several years old and which lacked support for recent hardware or features.
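For context, form 4 boils down to shipping the module source and letting dkms rebuild it against each installed kernel via a standard Kbuild Makefile; the skeleton itself is ordinary kernel C. A minimal sketch, where "examplenic" is a hypothetical placeholder and not our actual driver:

    /* Skeleton of an out-of-tree module of the kind shipped as a source
     * tarball or dkms package (forms 1 and 4 above). Hypothetical name. */
    #include <linux/init.h>
    #include <linux/module.h>

    static int __init examplenic_init(void)
    {
            pr_info("examplenic: loaded\n");
            return 0;  /* a real driver would register with the net subsystem here */
    }

    static void __exit examplenic_exit(void)
    {
            pr_info("examplenic: unloaded\n");
    }

    module_init(examplenic_init);
    module_exit(examplenic_exit);

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Hypothetical NIC driver skeleton");

The kABI case (form 3) is the interesting one: the same binary .ko keeps loading across RHEL minor releases because Red Hat freezes the whitelisted symbols it's allowed to use.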
It's not misleading, because most Linux kernel drivers run in kernel space; compromising them can therefore compromise the whole system, which is exactly the article's point. The fact that they're often buggy and poorly supported, unlike the "real" kernel, makes things worse and doesn't invalidate anything.
Nobody seems to understand that it’s possible to have one source tree and multiple binaries.
You can have hybrid systems where the code runs in isolation but the pull requests are still self-contained, instead of split into multiple pieces that have to be coordinated.