I think that can be attributed to user skill in both cases: I render my Linux machines weird more often than my Windows machines, but I admit that's because I'm better at driving Windows. I have had zero problems with Windows since Win7 that couldn't be attributed to hardware failure.
Also by "just works" I meant mostly being able to get binaries from a random website and installing without having to use a package manager or failing to have the right deps. If you go to the 100 biggest app sites (Skype, Spotify, ...) and try to set up from a download, the "just works" is probably a lot better on win and Mac. This is of course a lot due to the size of the market, but it's no secret that standard cross-distro/desktop-env prebuilt binary installers for GUI apps are still not exactly a strong point on Linux.
Disclaimer: I work for SUSE, a Linux company that provides support for enterprises running SLES and contributes packages and knowledge to the openSUSE community.
I don't see how you could consider package management a bad thing. Why would you consider "downloading binaries from a random website" to be a good thing? Not to mention that those binaries almost never update themselves, and how well they deal with dependencies depends on which $500 installer builder they were made with.
Package managers allow you to always keep your system up to date, and you have a single database of all software that has been installed, what its dependencies are, and what files it installed (so you can uninstall it cleanly). They are definitely one of the awesome things about Linux. OS X has Homebrew, but it's not well integrated into the system because it's a third-party repository of software. The BSDs' package managers are at least 10 years behind Linux's (they're still working on packaging the base system). Windows has nothing comparable AFAIK. Things like OBS let you automate the release of new versions, and OpenQA lets you do automated QA testing to make sure there are no regressions in graphical or console sessions.
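To make that concrete, here's roughly what that single database buys you on an RPM-based distro (the package name is just an example):

    # list every file a package installed
    rpm -ql vlc
    # show what the package declares as dependencies
    rpm -qR vlc
    # see which installed packages have pending updates
    zypper list-updates
    # remove it cleanly, files and dependency records included
    sudo zypper remove vlc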
I especially don't understand why you think that not having a way to update the libraries on your system is a good idea. Packaging the same DLL in 30 different places is not a good thing.
1. Nothing wrong with packages, but central repos rarely contain up-to-date packages. I don't mind going to skype.com and downloading a .deb package for Skype (I wish it were the same format for desktop software across all flavors of Linux, but I digress).
2. Shared libraries are only good for saving space (largely irrelevant on desktop) and for security. In my experience, different side-by-side versions of libs work poorly once you reach the "system" level. Example: having apps that require different, incompatible glibc versions is painful; see the accepted answer to this question: http://stackoverflow.com/questions/847179/multiple-glibc-lib...
"The absolute path to ld-linux.so.2 is hard-coded into the executable at link time"
WAT?
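(It really is baked in, and you can see it yourself; readelf is standard binutils, and the binary and loader paths here will vary by distro and arch:)

    # the PT_INTERP program header of any dynamically linked ELF
    # binary names the dynamic loader by absolute path
    readelf -l /bin/ls | grep interpreter
    #   [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]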
I think the mindset about what desktop computing is, and what a "system" is, is very different between a Linux user and a Windows user (i.e. one who just wants an OS to be a dumb layer for running binary compiled shitware that must be built and distributed by its creator because it will never be in a repo).
> 1. Downloading binaries from websites is what Windows users know. Package management is vastly superior as long as the packages exist and are up to date with the "official" source such as Spotify. If the package is a week late, then I'm going to prefer the direct binary. Once half my apps are direct and half are packages, the benefits of a package system diminish.
"What Windows users know" doesn't mean that it's a good thing. Windows users also know to run everything with administrative privileges. For sufficiently sophisticated build systems (read: OBS) you can automatically rebuild packages. The reason why packaging takes time is because there is a testing process (which can also be automated with things like OpenQA), but there's lots of other maintainence that goes on when curating packages. Believe it or not, but sometimes upstream is downright irresponsible when doing version bumps and it's the maintainer's job to deal with it. It's fairly thankless work, to be honest, because you're not working on the new hot stuff. Sure, "just download a binary" works until you have multiple components that depend on each other.
> 2. Shared libraries are only good for saving space (irrelevant on desktop) and for security. In my experience, different side-by-side versions of libs work poorly once you reach the "system" level. Several libc versions etc. is painful.
"Only good for [...] security" is enough reason for me. Tell me how Windows programs deal with updates to critical libraries that everyone uses separately. I'm guessing the answer is "not well at all". And if you're going though your package manager, then no package should require a specific version of libc (besides, this problem can be mitigated somewhat with symbol versioning). The gains far outweigh the perceived costs IMO.
Argh your ninja response time meant my complete rewrite of my above post now looks silly, sorry :)
> "What Windows users know" doesn't mean that it's a good thing.
I know (I also removed it). It's patently stupid. Let me rephrase: if you want one way of distributing apps that anyone can use, downloading a binary from the creator's site is basically the only one that works. Otherwise you end up with the utterly broken method of "check if it's in a tree in some package repo; if not, add more package sources to your repo; if that fails, check if you can find a downloadable package for it; if not, build from source".
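(For comparison, the "add more package sources" step alone looks roughly like this on a Debian-ish system; the repo URL and package name are Spotify's as I recall them, and KEY_ID is a placeholder, so check the vendor's docs:)

    # trust the vendor's signing key (KEY_ID is a placeholder)
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys KEY_ID
    # register the third-party repo
    echo "deb http://repository.spotify.com stable non-free" | \
        sudo tee /etc/apt/sources.list.d/spotify.list
    # refresh and install
    sudo apt-get update && sudo apt-get install spotify-client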
> Tell me how Windows programs deal with updates to critical libraries that everyone uses separately
They don't. It's both a bug and a feature. OS libraries are updated, of course (by Windows Update), but I don't necessarily consider e.g. a C++ runtime to be an OS library, even if it's Microsoft's own redist. I prefer my applications to ship their own copy of their C++ runtime and keep it local, because it limits problems. Even at the cost of having an unpatched one somewhere.
> Windows users also know to run everything with administrative privileges.
Well, accidentally answering "yes" to the UAC prompt is about as likely as accidentally sudoing something imo.
I reinstall Windows every few years when something really bad happens, and I have never seen Windows turn weird. After a few years it is still as good as a fresh install.
On the other hand, sometimes on the internet I see "advice" like "Windows should be reinstalled every six months" and wonder what the hell these people are doing with their computers.
Use it for a month and all manner of oddities bubble to the surface.