Using the same Arch Linux installation for a decade (meribold.org)
526 points by meribold on Aug 16, 2022 | 411 comments



I can provide some details regarding the times things did break that I mentioned in the article.

* In September 2014, X broke, and I created an `/etc/X11/Xwrapper.config` file with the lines `allowed_users = anybody` and `needs_root_rights = yes` to get it to work again. I don't remember and don't have notes on why that helped. It sure does sound like a pretty terrible hack. I don't have that Xwrapper.config file anymore, and I also don't know when I deleted it.

* In June 2017, audio stopped working, but all I had to do was add my user to the `audio` group.

* In May 2018, X broke a second time. This time I downgraded the `xorg-server-common` and `xorg-server` packages. A few weeks later, I ran another system upgrade, and this one went fine.
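
For the record, the actual fixes were tiny. Reconstructed from my notes, roughly like this (the package version numbers are illustrative):

    # September 2014: the Xwrapper.config hack
    printf 'allowed_users = anybody\nneeds_root_rights = yes\n' | sudo tee /etc/X11/Xwrapper.config
    # June 2017: put my user in the audio group
    sudo gpasswd -a "$USER" audio
    # May 2018: downgrade Xorg from the local package cache
    sudo pacman -U /var/cache/pacman/pkg/xorg-server-1.19.6-2-x86_64.pkg.tar.xz \
                  /var/cache/pacman/pkg/xorg-server-common-1.19.6-2-x86_64.pkg.tar.xz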

These weren't the only problems, but they were the most disruptive. Generally, things like TrackPoint driver updates changing how the cursor responds or Firefox changing its UI have been far more annoying than Arch Linux issues :)


I had a similar run with Arch. A handful of boot-blocking issues, let's say five. Four of them were solved after joining #arch on the now-dead Freenode and realizing this was explained on the Arch main page. One was a deeper borkage that the Arch team didn't catch early and required a bit of surgery, but the problem was gone in 5 minutes.


Using an AUR helper is technically not the Arch way, but I use paru instead of pacman, and it shows Arch news before updating.

This makes sure I see any Arch news before potentially breaking updates.
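
For example (I believe the relevant knob is `NewsOnUpgrade` in /etc/paru.conf):

    # with NewsOnUpgrade enabled, a normal full upgrade prints unread
    # Arch news before it starts touching any packages
    paru -Syu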


> realizing this was explained on the Arch main page

That's not the place to explain it. If they're going to require user intervention on updates, the place to do it is during the pacman -Syu itself. Better yet have a shell script that'll fix it automatically and give the end user the ability to read the source before execution.
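
Even a dumb wrapper would be an improvement. A rough sketch (the feed URL is Arch's real news feed; everything else is illustrative):

    #!/bin/bash
    # print the most recent Arch news headlines, then ask before upgrading
    curl -s https://archlinux.org/feeds/news/ \
        | grep -oP '(?<=<title>)[^<]+' \
        | head -n 6 | tail -n +2        # skip the feed's own <title>
    read -rp "Proceed with pacman -Syu? [y/N] " ans
    [ "$ans" = "y" ] && sudo pacman -Syu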


The worst incident I can remember from several years of running an Arch VPS is when they relocated some /lib or /usr/lib files and it required intervention, and I did a pacman -Syu and rebooted without reading the news. Just a mild PITA to fix in the OOB console.


I definitely hit some rough edges with the PulseAudio, and then PipeWire, upgrades, and a few cases where almost everything broke because, in my infinite genius, I had compiled my own (insert dependency) for a bleeding-edge feature, forgot to revert when it made it to mainline, and then later down the road a major version bump meant some `.so` was missing and I had to boot a USB live image to fix it.

I've also been on Arch for over a decade, and it's almost never been broken, even when I was playing with some seriously bleeding edge components. Almost always, it's been surprisingly straightforward to un-screw the few screw-ups I've made.


> Almost always, it's been surprisingly straightforward to un-screw the few screw-ups I've made.

That's been my experience as well. With Arch, everything is exposed. There are usually no wrappers or layers of abstraction or weird modifications added to the upstream components it ships. That makes problems so much more straightforward to troubleshoot and fix.


Exactly. And having gone through the install process to the point of a usable desktop, I learned a ton about what I'd need later to fix the issues myself, quickly. And then when everything just worked, much better than Ubuntu, I installed Arch on a few other machines and got even more practice.

The learning curve may have been steep but it definitely paid off in my understanding of how to maintain and fix issues going forward. And just my understanding of Linux overall.


I yearn for my Arch days, when 'ls /etc' only yielded things I knew about.

These days I am stuck with WSL, and that sadly does not work with Arch. As far as I can tell the Arch community does not want to support WSL because of philosophical disagreement.


WSL2 should have no problems running Arch at all.

https://github.com/yuk7/ArchWSL


That is not the kind of packaging that inspires confidence. And the effort it takes to verify it isn't backdoored is too much. Definitely, my IT department is not going to appreciate me installing this. In this case, I actually think our IT department would be right.


The PipeWire thing has been a debacle on Gentoo as well: for at least this year it's been a coin toss whether a full software update will break audio and video or not. And I didn't write a note anywhere except in the Gentoo Matrix channel. And if you've ever tried to search a busy Matrix channel on a single-instance Matrix server...

If memory serves, deleting some file from /etc/ fixes the issue.


You didn't get bitten by a painful glibc upgrade? It's been long enough that I don't recall details, but I thought that was a big one.


The glibc upgrade which was painful (and essentially required recompiling everything) was much further back than 10 years. I think I was running LFS at the time, but I recall it was painful for all distros. I don't think there has been a disruptive glibc upgrade since then. There was the introduction of multiarch on Debian some years back which caused a bit of disruption (I was running Debian unstable at the time, IIRC), but pretty much everything else has been very minor since then. I've been running Arch for 2 years or so now; before that I was running Tumbleweed. I have to say that rolling releases have been much less eventful in general than release-based distros (I administer my partner's Ubuntu laptop and LTS upgrades are a bit more disruptive).


The current glibc version broke Easy AntiCheat support for Proton games, but that's the only break of note in recent memory, and it only affects people playing multiplayer games on Linux, which is a minority (multiplayer gamers) of a minority (Linux gamers) of a minority (Linux users).


glibc updates have recently broken lots of Electron software (and probably other stuff using similar sandboxing), by using a new syscall (clone3? or something) to implement some library methods.

Pretty much every glibc update breaks something, honestly.


That breakage is because of the dumpster-fire that is seccomp. Your seccomp policy (in this case, the one that comes with Electron) whitelists syscalls, but which syscalls glibc uses to implement things is considered an implementation detail, not part of the contract. So seccomp was designed in a way that makes it broken-by-design with the most popular libc.


In Arch, the glibc package upgrade associated with the `/lib` + `/usr/lib` merge in 2012ish was painful. I assume that's what the parent post was referring to. I assume you're referring to the libc5→libc6 upgrade?


Hmm, I remember UsrMerge being a non-event from a user POV. The official instructions seem quite short too: https://archlinux.org/news/the-lib-directory-becomes-a-symli...


For many users it was a non-event; but if you missed that news post, and didn't pass `--ignore glibc` to your `-Syu`, then your system broke. And a sizable minority of users missed the news post. (Shamefully, I was in that minority.)


I like my pacman helper, pikaur [0]. Not only does it allow me to edit/view changed AUR files, it also alerts me whenever there is archlinux.org news.

[0]: https://github.com/actionless/pikaur


Love pikaur, been happily using it for years.


Yes, I think you're correct: libc5 to libc6 is the upgrade I was talking about. I just had a look; that's more than 20 years ago. Funny how people still talk about "painful libc upgrades".


I remember libc5 to libc6 as a late 90s thing.


> The glibc upgrade which was painful (and essentially required recompiling everything) was much further back than 10 years.

Ah. I'm old. Somehow traumatic glibc upgrades were not how I expected to find out.


Glibc upgrades are how you become old.


I suspect you were thinking of an Arch Linux-specific glibc upgrade related to Arch's `/lib`+`/usr/lib/` merge in 2012ish, not a painful glibc-itself upgrade that other distros would have noticed?


The a.out -> ELF migration had some sharp edges, too, but that was before Ubuntu and Arch.


The glibc and filesystem layout updates were both horrible updates in Arch.

The systemd transition was annoying, as I liked Arch's old init system, but it was just an annoyance rather than a breaking change.

Recently I ran into an issue where I had to revert to an LTS kernel because the main kernel hangs during boot every time (I spent hours debugging and haven't found the culprit, but the LTS kernel is working fine, so I'm going to stick with that).
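
In case anyone hits the same hang, falling back is only a couple of commands (assuming GRUB; adjust for your bootloader):

    sudo pacman -S linux-lts                     # install the LTS kernel alongside linux
    sudo grub-mkconfig -o /boot/grub/grub.cfg    # pick up the new kernel in the boot menu
    # optionally remove the mainline kernel later: sudo pacman -R linux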


The biggest upgrade I remember on my similarly aged system was initscripts -> systemd, and that had some steps to follow but went off smoothly.


I had the exact same problems you mentioned!

Also, Nvidia GPU drivers are the worst. I was on Manjaro back then to see if I could get rid of Windows for gaming purposes. I used Linux for games for about 6 months and had to quit and go back to Windows.

I should retry now with all the Steam Deck fuss!


Proton is a game-changer, but Nvidia drivers remain the most unstable thing on Arch. Find a version that works well with your card and avoid upgrading it if at all possible. It's performant, but I have games that crash once every few hours, and only on certain machines.


This is true everywhere. I have some very expensive Lambda Labs GPU blades, and even their Lambda-stacked Ubuntu upgrades break CUDA stuff occasionally. I think Nvidia's driver ecosystem is held together with chewing gum and duct tape.


Or get an AMD GPU, like I did for my recent gaming PC.


I use NVidia on my single Windows gaming system, but every Linux desktop system around here has AMD. I actually end up playing just about any game that plays on Linux on one of my Linux machines, and only boot the Windows system when wanting to play an exclusive title or occasionally just to do OS updates to be ready to play later. I've been using almost exclusively ATI/AMD GPUs since the Mach32 days, but for Windows gaming systems sometimes NVidia has the performance crown for months or years at a time with reasonable stability.

My fastest laptop is an oddball. It uses an AMD mobile processor with integrated graphics and an NVidia discrete GPU. Windows 10 handles that fine. Several Linux distros didn't install correctly at the time I bought it, but Ubuntu did. I'd have been okay with it installing to only support the integrated GPU, but it supports both and has a menu option for every app to launch it on the dedicated GPU. So far the only problem I've had with that whole laptop is a Windows update breaking the bootloader and making Linux temporarily inaccessible.


Been on Manjaro+NVidia for a few years now. Gaming works very well, especially for recent games (2017+). Most "Windows only" games play perfectly. Older games are hit or miss, but misses are pretty rare.


Depends on what games you play. Games with anti-cheat seem to be broken right now.


I am glad I am not the only one. I have been running Arch on an "experiment" VPS for ~11 years now. Been `pacman -Syu`-ing every month :)


My first significant Arch install was also pretty long-lived (not 10y but def >5)

Did you go straight to full-on systemd when you installed it? Arch was transitioning to systemd by default around the time of your initial installation (default since Oct 2012 so you would have just missed it if you went with defaults).

Mine had a few years of cruft accumulation in it already, and my init-system understanding was not super deep at that point, so this (together with the glibc thing around the same time) is the most disruptive upgrade breakage I recall. If it doesn't show up in your top 5, I'm guessing you dodged it?


I had a couple of storage/media servers on Arch that I set up in 2008. In 2012 when systemd came, I experienced lots of pain. I eventually got things working but it took me a long time not to hate systemd. They both failed within a few months of each other in 2018, putting them right around 10 years of Arch as well.


I switched this installation to systemd after about two months with sysvinit, but the switch was quite painless.


I'd guess a couple of prior years of sometimes blindly messing around with random packages and configuration made the difference (:


I had similar, but more serious, X breakage with Debian testing. I used the apt feature to clean up orphaned packages, but the dependencies weren't listed quite correctly in the packages. I ended up spending an hour or two chasing dependencies manually to reinstall the missing packages. So Arch is definitely not alone in things sometimes breaking.


So, to recall the above, had you blogged about this, written in your journal, or just queried your damn memory??


I try to take a note whenever I fix some major issue with my system. Querying my memory doesn't yield much; definitely not specific dates :)


If only we could add some indexes ...


I always reinstall from scratch, although I could just do a Debian "dist-upgrade". My thinking is this: if, ten years ago, I somehow missed a security patch or some 0-day owned my machine before it was patched, then I'd potentially have been copying / dd'ing / rsync'ing a rootkit for ten years.

By installing from scratch at every new stable (or unstable) release, I get rid of a lot of potential security issues.

Now as an anecdote: ten years with the same install is nice but... I've got a dedicated server at OVH with 3400 days of uptime. You read that correctly. Nearly ten years of uptime. Once in a while I give temporary ssh access to people here and there just so they can type "uptime" and see. Kids: don't try this at home. Yes, it's insecure (although there's a firewall and only the SSH port open). No, it doesn't do much nor is it very useful. But it's fun to think I own one of the computers in the world with the biggest uptimes.

I plan to kill it once it reaches ten years \o/


I reinstall once a year. Nowadays, with PXE booting, preseed files, and automation like Ansible, everything is pretty much automated for me.

This goes for both my personal and work laptops. Both have been running some version of Debian for the last ten years. For the past four or five years, I've been on Debian Sid, and I run into an issue maybe every other year that requires me to use Timeshift to go back to yesterday's OS backup.

With the proper backup and automation strategies in place, I do not see a point in not doing a reinstall periodically. It definitely gives me peace of mind knowing I can be back online and 98% functional in under an hour in most cases on just about every device in the house.


> It definitely gives me peace of mind knowing I can be back online and 98% functional in under an hour in most cases on just about every device in the house.

It is also a good test of disaster recovery.

By wiping your devices and doing a fresh install, you catch hidden assumptions.


Sometimes I write a setup script or some other automation… but then the next re-install I do everything differently.


I have similar automation with Ansible. Can I see your config if it's open source?


That sounds awesome! Could you provide a few more details? I'd love to set up something like that too.


> But it's fun to think I own one of the computer in the world with the biggest uptime.

The top 500 entries on that list are almost guaranteed to be mainframes.


According to Guinness, the top is Voyager 2:

> The computer system that has been in continual operation for the longest period is the Computer Command System (CCS) onboard NASA's Voyager 2 spacecraft. This pair of interlinked computers have been in operation since the spacecraft's launch on 20 August 1977. As of 29 October 2020, the CCS has been running for 43 years 70 days.

https://www.guinnessworldrecords.com/world-records/635980-lo...


Not a lot of bugs in space, so it is hard for them to get in....


43 years without a single reset or revision to safe mode (a common feature on spacecraft)? The linked article says that they've updated the software many times, by live patching it, instead of rebooting?


Since there are 2 computers I'm guessing that they are letting one computer update and reset the other one, then repeat for the reverse. The system as a whole would stay up, the individual computers would not.


> The top 500 entries on that list are almost guaranteed to be mainframes.

That's very interesting. Your comment got me searching and I found a link here on HN: "Stratus: Servers that won’t quit – The 24 year running computer" with quite a discussion (123 comments) on the topic:

https://news.ycombinator.com/item?id=13514909


I reinstall every 2 years or so simply to re-evaluate what tools I have and decide if I really want them enough to install and configure them again. It's part of the reason I resist systems like Nix, Guix, or even dotfile management tools.


Funny, I've had the opposite experience. Because Nix is so stable and allows risk-free easy experimentation and rollbacks, I'm always re-evaluating and upgrading my system. An added benefit is that the changes you make stick, and you've got a paper trail of the changes in git. You can create a whole set of changes behind a single flag or config file, and flip back and forth between different setups nearly instantaneously.

With Ubuntu I was afraid to change anything because I might break things and not know how to get back to a working state.

With Nix, you are always on a fresh, pristine system, because the file system is mostly read-only and an exact reflection of what's in your config. It's impossible to get crufty.
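
To make that concrete, experimenting and backing out is roughly this (standard NixOS commands, nothing exotic):

    sudo nixos-rebuild switch              # apply whatever is in configuration.nix
    sudo nixos-rebuild switch --rollback   # didn't like it? back to the previous generation
    # list every system generation you can still roll back (or boot) into
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system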


That doesn't make much sense because Nix doesn't prevent you from reinstalling, and after you reinstall, you are not forced to copy your old configuration.nix to your new /etc/nixos/.


In fact I think Nix is much better here. Instead of reinstalling you can just audit your configuration for anything that you don't want. You don't need to worry about stray packages, config files or services like on other distributions. And of course if you accidentally clean too much it is easy to revert via source control of your config file.


This is the best I've got, almost 2 years:

    $ w
    15:25  up 725 days,  6:06, 1 user, load averages: 1.31 1.21 1.30
    USER     TTY      FROM              LOGIN@  IDLE WHAT
    andrew   s000     192.168.1.200    15:25       - w
But! This is a system in my office at home. Not in a data center somewhere.


I'm more impressed you didn't get a single power cut in that time.


Oh I have. We get power outages all the time where I live. At least a few a year that last 8+ hours. Have even had some multi-day outages.

I have all my machines plugged into UPSes and we have a standby whole home generator. So the only way any of my machines shut down is if I do it explicitly.


Coming from the author of ripgrep, which is a tool I use daily, I've got to say it's an honor to see you answering my silly comment!

Office at home certainly beats datacenter hosted but by 5x? Not sure... ; )


Hah. The only way it's possible where I live is with a UPS combined with a whole home standby generator.

(I got the generator not just for uptimes haha. Got it because we get a lot of outages and we were losing a lot of spoiled goods from our fridge. About half our neighborhood has generators of some kind.)


Refrigerators don't really use much power while running. As a jokey test I ran my refrigerator off of my truck via an inverter connected to the battery terminals and three 50' extension cords. I think a fridge draws less than a decent GPU while running. You do, however, need a decent inverter to start the compressor in the fridge.

Also, a top-loading deep freezer is best for preventing spoilage, especially if you keep all the available space full of bags or bottles of water.

I also have a makeshift "whole home" backup generator, 11.5kW/9.5kW peak on gas/propane, but only because I have a well, and the pump draws something like 4.5kW when running. The extra headroom allows me to run my 5-ton HVAC and have running water and lights and everything.


Yeah, our fridge has a top loading freezer. I think that stuff spoiled once during a particularly long outage. We do also have an upright freezer. We purchased that after getting the generator, so the advantage of it keeping stuff frozen longer was less important to us than the convenience of easier access. (And we were okay to sacrifice a bit on capacity too.)

We also have a hot tub and we're in New England. We were worried about a multi-day outage resulting in the tub freezing in the winter. A low probability event, but one that was conceivable.

Both the fridge and the hot tub problems can indeed be resolved with something less than our 16kW generac. The hot tub really just needs a small space heater to be put inside the cabinet to prevent the pipes from freezing. And as you say, fridges don't use much power. So powering those two things (and our upright freezer) would need a pretty small power source.

So absolutely, we overdid it a little. But it's super nice having it kick on when we get an outage. Otherwise I'll be playing the "is it really worth going out in the snow, lugging out a generator and getting it running if the outage is just an hour?" game. There's also the issue of maintenance, which I'm... not the greatest at, to be honest. (I have a 2-year-old at home and time is just absolutely slipping by. It's hard enough to keep up with normal chores.) The standby generator does a weekly test, so we know whether it's actually working or not.

Yeah, definitely a bit of a luxury, but worth it for us. :-)


Any particular model of the generator you can recommend?


I have a Generac. I'm a happy customer.

I am not an expert on generators though. If you're not looking to maintain it yourself, I'd say the most important bit is going with people that have good customer service. Which depends on your area. Might be Generac, might be Kohler.

See also: https://old.reddit.com/r/Generator/


Don't you need to reboot after kernel updates?


This particular machine is a mac mini. Otherwise, I have a lot of machines. I just don't get around to updating and rebooting them that often.

I very rarely use the mac mini. Basically just for testing. So at some point, it got to a crazy uptime and now I'm purposely trying to see how long I can go haha.

In theory the battery in the UPS will eventually need to be replaced. Otherwise I don't see it ever losing power, given that I have a generator.


Is that plugged to a UPS or is your power just that stable?


I get lots of outages. At least one a year that is 8+ hours. It's plugged into a UPS and I have a whole home standby generator.


If you have a rootkit that you're concerned about copying around, that can somehow persist through pretty much everything on the system being upgraded at some point or another... you should probably also be worried about the various vectors that the rootkit could use to persist across OS reloads.


It doesn't really need to be well hidden if you're not actively looking. A shell script and a crontab entry / bashrc exec / init system entry is very low tech.

Pair that with a slightly higher-tech (but still low overall) LD_PRELOAD libc shim so it hides itself, and you've got something just stealthy enough that you wouldn't find it if you don't look for it.

Remember, the easiest privilege escalation is aliasing sudo and patience.


I don't disagree with that, but some OS reinstalls also correspond with buying an entirely new machine. And I'm the kind of paranoid person who burns install DVDs and then checks the DVDs' checksums from an offline computer before doing an install on my desktop, for example. Now, sure, the rootkit may try to hide in my Git repos (but that's not the easiest trick to pull) or shell scripts (but they're versioned with Git) etc.

I still like it that way: a good old write-once DVD, checksum'ed, and a brand new install. Ideally on new hardware but that's not always the case.


I reinstall with every new version of Ubuntu for that, and to force myself to exercise my backups and install from scratch scripts.


> or some 0-day owned my machine before it was patched, then I'd potentially have been copying / dd'ing / rsync'ing a rootkit for ten years.

Isn't this a huge problem? Distro preferences and upgrade patterns aside, what's the state of the art in detecting a rootkit on the Linux desktop of an unsuspecting home user before they realize they've been pwned (i.e. data exfiltration, ransomware, etc.)? Assuming they aren't running any real-time scanners (and even if they wanted to, is there any real choice apart from ClamAV?)


Hate to hear your Hubble, but there are systems with decades of uptime. Great accomplishment though!


Burst your bubble


I stuck with Ubuntu (eventually Kubuntu) for many years because I thought it was the best for a no-fuss distro for folks who wanted things to just work (TM). I was afraid of things randomly breaking and interrupting my daily work. One day I eventually bit the bullet and installed Arch instead. The experience has been phenomenal, and if anything it feels more stable than Ubuntu. I was underestimating my own Linux knowledge, and it really isn't difficult to setup/configure at all if you have a reasonable understanding of how Linux works. The initial installation can be done by just following one of the many YouTube videos that break down the process step by step - you may even learn something if you don't already know it. I can't ever see myself going back.


I was in the same situation. My fresh-install adjustments to Ubuntu consisted of a long list of apt packages, removing Snap, deb packages, PPAs, and some make builds.

I finally tried Arch, and my long list of custom manual shenanigans was reduced to pacman + AUR. I'm seriously impressed, and it gave me renewed hope for OSS.

I've used Linux, Windows and macOS. For what an operating system does, the only flaw Linux has is the lack of software and driver availability, neither of which has anything to do with the operating system itself. Because for what it is, and what it does, Linux and GNOME are, imo, absolutely excellent.


> The initial installation can be done by just following one of the many YouTube videos

Even easier, the ArchWiki has detailed steps you can follow along at your own speed, together with links to each topic for a deep dive (which isn't required, but nice to have when you do need it). Reading through and following the "Installation Guide" + "General Recommendations" pages would take you 2-3 hours at most, and you'll end up with a fully installed and ready system :)


I ended up using Manjaro mainly because the arch installer was just a bit too much to figure out for me. Recent installer improvements make things much easier and I might consider switching from Manjaro to Arch at some point. But Manjaro got me up and running quickly.

Things I like about arch: it's stable and up to date. It's stable because fixes by upstream developers are not being blocked by package maintainers that second guess what they are doing. I never understood this being a thing. Mostly all the experts for any software are exactly those people involved with producing the upstream source code. If they say something is stable and ready for use, it generally is. Arch mostly just ships software as is. Nothing more nothing less. Unless the upstream developers release broken and untested software, it generally is the best version of that software available. Basically, Arch does most of the due diligence on ensuring things at least compile, install, and run fine. So generally, it's fine. And if it's not, you just wait and it will be fixed. You always have the option to downgrade. I've never had to do that actually.

Arch is mainstream enough that all the mainstream stuff just works. Docker, snap, steam, etc. Between those, I can run just about anything I need to and get the best experience possible.

Hardware support is constantly improving and with Arch you have early access to kernel releases. I generally am within a few patch releases of the latest stable linux kernel. So, I get the most out of my GPU, CPU, wifi, and other stuff I have. Mostly chances of hardware working decrease with the age of the kernel. Best case, stuff works but with more stability issues and less performance on an older kernel. Worst case, you have stuff that doesn't work at all that you can't get working until your distribution finally updates the kernel. I've updated the kernel mostly without issues. On 1 occasion I went back to the previous version for a week while they sorted out some issues with touchpad support. Not a big deal.


> Recent installer improvements make things much easier and I might consider switching from Manjaro to Arch at some point

This post made me play around with Arch last night, and specifically the installer. Having spent most of my last dozen years in Debian and Ubuntu, it wasn't as flashy or glamorous, but it got me a running system really quickly. One thing I missed was the network setup - I skipped it, and was left with a VM that had no network access. But after playing with it some, I spun up two VMs - one profiled 'server' and the other 'desktop' - and have really enjoyed the limited time I've had with them. Way back in the day, I spent a lot of time with Gentoo, and while building from scratch wasn't my jam every time (like when novice me installed `emacs` and not `emacs-nox` and lost two days of compute time), this scratches the itch I had for lower-level control of my system, but with a little bit of shiny knobs and levers too.


For me the big difference with Arch is that you choose all the different pieces (networking, disk setup, desktop environment, login manager, etc.), so when something goes wrong, you know exactly where to look. If something goes wrong in Ubuntu, especially as a beginner all you can do is google "Ubuntu wifi disconnecting" or whatever. It's definitely more work to get installed, but you come away with something very personal, that you understand very well.


Yeah, but then you’re on the hook for maintenance too. Maybe you installed a while ago with Pulse and WPA supplicant, but now you need to know they are replaced with Pipewire and IWD. And you have to swap them out manually yourself.

Installing Arch is the easy part. The maintenance is what gets ya.


Nowadays Arch install ISOs have an `archinstall` script that can get you going for basic setups.
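
From the live ISO it's roughly this (the wifi step only if you need it; device name and SSID are examples):

    iwctl station wlan0 connect "MyNetwork"   # wired networking usually just works
    archinstall                               # guided installer with sane defaults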


I think the old narrative that Ubuntu is the best “just works” distro is just plain wrong (and harmful). I’ve tried a huge variety of distros over the years and ironically I’ve always had the most problems with Ubuntu based distros. Fedora, and Arch have given me the least problems. Fedora in particular has been rock solid for me.


I’m gonna try moving from Arch to Silverblue this weekend. From what I’ve seen in a VM, it’s very impressive.


With the Arch Wiki and a little determination you don't need any prior knowledge of Linux.


Almost the exact same story here. I've been running versions of Ubuntu for years, and every time there was an upgrade something would inevitably break. I've only had Arch for a few months now on a computer I built myself, and it runs like a dream. I love KDE as well.


Which WM are you using?


I recently switched to an Arch flavor because the AUR had a lot of little utilities that make life better on Wayland. Also, it's quite easy to install the latest versions of Go, Rust, etc.

Pros: Documentation is excellent. Better than even Gentoo or Ubuntu's docs.

Cons: Arch doesn't have an installer, and seems almost militantly against providing one, or a lot of other little utilities that could improve the user experience. I get the same sort of "I suffered, therefore you must suffer; learn to RTFM, noob" elitism that I saw with Slackware 20 years ago. I'm a grizzled vet, I can figure this stuff out, but it doesn't help your average technically literate Joe.

Updates seem to break for odd reasons. I had to uninstall nodejs to be able to do a system update. Why? There should be some sort of automated way to address this.


> Arch doesn't have an installer, and seems almost militantly against providing one, or a lot of other little utilities that could improve the user experience

This just changed! The archinstall package is included on install media now and is not considered experimental (according to the wiki page history, that happened on 2022-07-08).

https://wiki.archlinux.org/title/archinstall


Huh, it took a while for the wiki to be updated to reflect that. The official announcement was on April 1st, 2021 (the date was almost certainly a joke about how unexpected it was to people not following its development, but the release was no joke): https://archlinux.org/news/installation-medium-with-installe...


Welp, looks like I'm switching back from Manjaro for my next Linux install.


I recently adopted Arch onto my laptop and didn't even realize 'archinstall' wasn't always a thing. I looked up a traditional Arch installation and think I might have skipped it had the installer not been present when I was distro hopping.


Yes, it does a quite decent job of getting you up and running with a bootloader and disk encryption all configured. From there, you can just bootstrap the rest. But that was the main reason for me to pick Manjaro half a year ago.


I would like to defend arch here.

In my experience, Arch's main focus is on upholding a simple consistent architecture. There are tools that do something like the bare minimum required work, and then excellent documentation so that the end user can correctly perform the remaining required work manually (or via their own scripts).

As a result, there are certain features that will probably never be implemented. For example, if you wait too long between upgrades, your signing keys will be out of date. The solution is to upgrade the "archlinux-keyring" package first. Should pacman automatically do this? It would be nice, but it would also introduce a special case into pacman. Would that special case be abused to do unexpected things?
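
For reference, the manual fix is essentially a one-liner, something like:

    # refresh the keyring first, then do the real upgrade
    sudo pacman -Sy archlinux-keyring && sudo pacman -Syu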

Another example is installers. Writing a basic installer for a single machine is easy. Writing an installer that covers any machine is very hard. Writing an installer that covers any machine with any configuration the user might want is impossible.

Put another way, everyone likes that Arch has up to date packages and an excellent wiki. Would either of these exist if there was a bunch of extra complexity that needed to be integrated? Is there any need for an excellent wiki if installers automatically resolve all your problems?


> Another example is installers. Writing a basic installer for a single machine is easy. Writing an installer that covers any machine is very hard. Writing an installer that covers any machine with any configuration the user might want is impossible.

This is a cop out. An installer doesn't need to cover every option a user might want. An installer only covering popular options/configurations is only a problem if the installer is the only way to install the system. If it's just an option itself during the install there's no issue. The weird corner case can still be handled manually while more common options can be handled by an installer.


I think the historical opposition of Arch devs to an installer is more about maintenance. Most advanced users end up scripting their own specific system or developing muscle memory for installing it (and that's if they install Arch frequently at all).

Most if not all Arch devs are likely advanced users, they didn’t need to maintain a user friendly installer for themselves so the old one got out of date and was removed.

What’s probably changed recently is that Arch has grown enough to get devs/trusted users who actually wanted to write and maintain an installer, so now it has one.

I find the Arch dev justification/perspective for most things much better than many users on the forums who tend to pick up the basic idea and cargo cult it whilst losing the context behind it.


I'd call it more likely a form of neurosis than a cop-out.

It can be hard to resist some analysis paralysis when faced with an incredibly broad problem. There's a strong urge to find one solution to cover all cases. Trying to do everything at once is so overwhelming that it's impossible to even get started on it.

Of course, you're correct that the best thing is to carve out the most common cases and then iterate. I've found that it takes time and experience to gain the wisdom to learn that the perfect is the enemy of the good.


Arch used to have an installer. It didn't work for everyone and caused a lot of complaining. It turned out most people could do the steps themselves.

It has an installer again. Maybe the user base will change.


Given that Arch is maintained with Arch users in mind[0], building an installer for that user base would have to entail a wide range of options for unique use cases, because that's what's expected of their users[1]. There isn't a "cop-out" or "plea" to users outside of the community because it was never a goal to appease them.

[0]https://wiki.archlinux.org/title/Arch_Linux#User_centrality

[1] https://wiki.archlinux.org/title/Arch_Linux#Versatility


No, it wouldn't, because that versatility can be attained by... not using the installer, the exact same as the situation without an installer.

An installer, if anything, improves "user centrality" because you're making it more accessible and usable to most users with just the most common few options.

Not having an installer improves "dev centrality" (the few users who matter are the devs and other advanced users), over and against user centrality.

You could use the same thinking to argue against having a package manager. You might have to install a package manually anyway, so why bother providing packages at all?


At the end of the day, as others have probably already said in this thread, the maintainers are unpaid volunteers and choose to focus on certain things for their own reasons and using their finite resources.

There are other distributions that focus on other things and people can choose. If Arch chose to implement lots of convenience functions, that choice could be to the detriment of other strong aspects of the distribution.


…which they chose to use to write an installer for Arch after all, so I find your arguments against it somewhat amusing.


I am aware of the fact that they wrote an installer. I do not think this invalidates my previous comment though.


>Arch's main focus is on upholding a simple consistent architecture

I thought it was all about doing it the upstream way.


That's the flip side of the coin.


They're talking about an installer for Arch itself (which could and should support the bare minimum options that most users want), which is what it didn't have until recently; not installers for packages within Arch, which it has always had.

So, would it add a bunch of extra complexity? Not really; it's actually tiny compared to making a package manager and maintaining its library.

Would it take away from maintaining the package library? I guess a bit.

Would there still be a need for documentation? Of course.


I love that Arch has no extra configuration system. I've been building Debian images at work for about three years now and still don't really get how debconf works.

Also, the amount of patches, workarounds, and custom configuration files in most packages and maintainer scripts is just wild. Coming from more than ten years of Arch, it took me a while to learn to check the Debian packaging source when upstream doesn't match the behaviour on my system.



There actually is an optional installer included with the official installation media now [1].

[1] https://wiki.archlinux.org/title/archinstall


Quite a good video about it - https://www.youtube.com/watch?v=OtWWLN3wGNE

Used it to create a dual boot on my main machine and works a charm.


Sounds like the installer is not ready for regular users yet.


That was a few months ago; there have been improvements since.


This must be new, but it's definitely a step in the right direction.


The previous installer was deprecated in 2012, so a 10 year period of no installer.


There are some Arch-derived distros that are basically just Arch + some choices made for you + an installer, like https://endeavouros.com/


Another vote for Endeavour.

I already run Ubuntu on my NUC, and when I was looking around for something to install on my former gamebox (i.e. a computer with a graphics card, unlike the NUC with onboard graphics only), I wanted something with good Steam/Nvidia integration. It came down to either Pop!_OS or Endeavour... because while I develop software, I also want to play games without futzing around too much.

Very happy with Endeavour: pacman or yay for nearly everything, no hassle graphics drivers updates (system always suggests a reboot after and I always do that).


Should have scrolled down before replying. I agree endeavour is excellent.


This is exactly what I'm running :^)


For me, Manjaro has been a fantastic base on top of Arch to start with: installer, desktop configured for the flavor you pick, and things Just Work out of the box, including the usual suspects like wifi and media keys (speakers, backlight, etc).

My only real complaint was that I wanted to do a fairly extensive change to the sway configs, and tracking down where they put all the config files took a bit of time.


But with Manjaro you don't have to personally babysit every computer in the house. Imagine not having your wife and/or child mad after the occasional update breaks things and they're helpless because they simply refuse to get intimate with the OS of their own computer! What kind of world would that be?


Well, the wife has a chromebook, and the kid is moved out on her own and pretty much only uses a phone anyway, so it's a pretty nice world, to be honest :)

At some point I'm tempted to get a fat server going so I can do my dev work remotely to it using a super light weight / long battery life laptop, but that's a complication too far at the moment.


> At some point I'm tempted to get a fat server going so I can do my dev work remotely to it using a super light weight / long battery life laptop, but that's a complication too far at the moment.

A setup like that would be quite cool -- if it weren't for one problem: All the "super light weight / long battery life" laptops have tiny (= max 13-14 inch) screens; there don't seem to be any 16(or more!)-inch ones. OK, just the bigger screen and housing means a machine like that couldn't be as super light weight as a smaller one, but they could still be quite light weight...

Only machines like that don't seem to exist. Or does anybody know of some?


I think you misunderstood the point of the distribution. There's no installer because Arch developers have no need of one. They're creating the distribution for themselves, and if you find it useful (I do), that's great. There's no expectation that an "average technically literate joe" is going to use it because he's not in its target group.


Arch isn't some small LFS-like niche distro anymore. At a certain point, you have to embrace at least some small amount of usability, or at least not reject PRs that add it. I found it quite embarrassing when one of the most popular distros out there had a worse installer than Debian or Slackware did 20 years ago. That's not "oops, we forgot", and it's not "it was never really a priority"; that's blatant user-hostile elitism.


It's "maintaining an installer is a tedious chore I don't want to do as my hobby." Not elitism, eventually they did attract someone who wanted to maintain an installer as their hobby and now they have archinstall.


> blatant user hostile elitism

...and that's a feature. There's nothing wrong with Arch doing its thing, there's always Debian/Ubuntu/Fedora/Manjaro/whatever - and Gentoo for those with taste.


Actually, no, you don't have to go change your project's philosophy because different people start using it and demand different stuff. This is the kind of attitude that makes people hate maintaining FOSS anything.


Any decent installer is non-trivial code and you cannot expect unpaid volunteers to write or maintain code that they don’t want to write or maintain.

I don’t see what the size of the distro has to do with it. Regardless of size, if there are users who have the time commitment and knowledge necessary to get commit rights and build and maintain an installer (as there is now), then it will have one. Otherwise it will not.

There’s no user hostility necessary for this sequence of events to happen.


I don't think you should be embarrassed. An offer to provide code review, continuing development, and community support for this new feature is quite generous of you. They might not have decided to prioritize it, but at least you can be proud of the fact that you aren't just nitpicking from the sidelines, right?


> that's blatant user hostile elitism

A small amount of elitism is okay. For example, the Arch Wiki requires users to run a simple command before it allows them to create an account:

  pacman -V|base32|head -1
That command more or less proves the user has successfully installed Arch Linux. People who can't provide that output probably shouldn't be editing the wiki. I realize it's nowhere near a perfect proof but it's probably fine for the wiki's purposes.


That is only true if the Arch developers seek user growth beyond the niche they currently have. You are not entitled to them spending their time to meet your demand, though you are free to build your own distribution on top of it (or use one of the many available ones).


You just proved parent's point.


Except there is an installer now, so your arguments against it are moot.


I don't know that it's elitism so much as not an interesting problem to solve. Like a lot of software, it's straightforward to automate the easy parts of an installer. But for it to work reliably, you have to account for so many corner cases and that it can become tedious and take a lot of dedication to get right. I think most Arch maintainers would rather spend their time working on Pacman or system stability issues that benefit themselves rather than new users.

I think most people who want easy access to Arch just go with Manjaro, which is close enough and gives you access to the AUR.


EndeavourOS lets you install barebones Arch if you untick enough checkboxes in the installer. I tried Manjaro, but they put in too much extra stuff.


As a Linux noob I actually picked Arch because it doesn't have an installer. Just installing Arch, I learned a lot about how computers work and how much is actually done by other operating systems like Windows. I just followed the installation article in the wiki and didn't have any issues I couldn't solve. There are plenty of nicer distros for people that just want to use Linux. In my field of work RTFM is pretty standard, so I don't mind it in the Arch community.


You should try Linux from Scratch.


Seconded. It's a terrible end-user experience, but a fascinating and extremely educational project for a developer/sysadmin.


> Updates seem to break for odd reasons. I had to uninstall nodejs to be able to do a system update. Why? There should be some sort of automated way to address this.

That usually happens when a package depends on a specific version of a dependency, with a newer (major) version in the repo.

It should only be an issue if you're using AUR packages or other package management systems. Official packages are updated for new dependencies afaik.


For the most part that does seem to be the case. Though I have had problems with qBittorrent and Qt6 not playing nicely unless I hold back Qt6. It seems to be resolved now.


I am in favor of gatekeeping out people who are completely unwilling to go read the extremely helpful and well-maintained wiki. It helps maintain a better atmosphere and keeps the forums clean(er) of the same 10 noob questions that everyone asks without looking for prior responses.



The thing is that it used to have an installer that was removed at some point. It was a simple menu that just covered the basics and would leave you with a working system within minutes. I don't know why it was ever removed.


> I recently switched to an arch flavor because the AUR had a lot of little small utilities that make life better on wayland.

Same for me after more than a decade on Ubuntu. Switched to Manjaro Sway so I could have a better life in Wayland.


If you want an installer and some utilities to make life a bit easier while staying on Arch, I can recommend EndeavourOS. It is really just vanilla Arch with an installer and some convenience packages.


Arch actually now has an installer script built into the installation ISO.


Installers often get configured for the average user, and the average user doesn't exist. Also remember to do a full system backup before you upgrade!


The best thing about the rolling releases is that the OS feels ageless. If I leave a Thinkpad sitting in a corner for 3 years and boot it up again to pacman -Syu, it will have basically the same software as my modern Thinkpad. This just tickles my sense of "this is how the computer _should_ behave".


I would be very surprised if pacman -Syu worked after three years of no upgrades.

That's generally not supported and will require manual intervention.


I literally let my server sit without upgrades for almost three years before realizing my autoupdater script had been failing me. The biggest issue was needing to manually download a statically linked version of pacman (found on the arch forums) to support the new mirror/signing features. Once the system pacman was upgraded, I just ran `pacman -Syu`, let it spin for a while, and rebooted, and it just worked. ¯\_(ツ)_/¯


The main problem you’ll have is that you’ll want to upgrade archlinux-keyring first. Beyond that, it wouldn’t surprise me at all, though it also wouldn’t surprise me if you did have problems. Early last year I updated a machine I hadn’t booted for 2½ years, and I messed up the procedure a bit and ended up rendering it unbootable (the way I did it, upgrades in the pacman hooks system meant they didn’t run properly, so mkinitcpio didn’t happen, so I couldn’t boot past grub), but when I came back to that machine almost a year later and fixed it up by booting from USB, chrooting into the local installation and I think just reinstalling the linux package as a convenient way to trigger the mkinitcpio hooks, then it was fine, and after just a manual archlinux-keyring update the year’s worth of updates were uneventful. (I think if I’d tried to do the whole three years’ worth in one go it might have failed, though, vague recollections of the .tar.xz transition or something.)


I routinely go over a year between updating some old crufty systems - usually it's all just fine, excepting a need to update the keyring first.

Periodically, some upgrades of some packages do require manual interaction when configured in certain ways. However, this is usually a general issue with a change in packaging which isn't actually any more onerous for systems that hadn't been updated in a long time.

Now, it's true that every once in a blue moon something really big does change. The big one I remember was updating a system that had been shut down for some years, and the compression used by pacman had changed in the meantime. That one did require some self-imposed manual intervention :)


It works well, at least for me. I haven't updated mine for many months.

If I have problems during the process, it's almost always a signing keys problem. Package verification fails because too much time's passed between updates: the keys stored on my computer are too old and don't work anymore. I made a script to refresh the keys and everything started working perfectly.
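
From memory, the script boils down to something like this (the refresh step is optional and slow):

    #!/bin/bash
    # refresh stale pacman keys before a long-overdue upgrade
    sudo pacman -Sy archlinux-keyring      # get the current keyring package
    sudo pacman-key --populate archlinux   # re-import the distribution keys
    sudo pacman-key --refresh-keys         # optional: refresh from keyservers (slow)
    sudo pacman -Syu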


That's what I was thinking. Good luck getting all the random crap you have installed to update without choking on EOL'd packages, or various packages that have new config formats, or changes to the updater itself, or apps which only support version-at-a-time migrations (or otherwise don't include the full history needed to migrate)...


Broke for me after a month or so on one laptop. Had to stop whatever I was doing and learn a bit how pacman and PGP keys interact or something.


That's expected; you'll have to pacman -Sy archlinux-keyring when a new developer key is added, which happens every couple of months or so.


If you boot from USB to use an external pacman to upgrade you can mitigate almost all of the common problems people have with this.


Ubuntu LTS updates worked incredibly smoothly for the last 3 releases. I am somewhat surprised.


And all the data in your `~/.config` and `~/.cache` will still be in the old format. Linux programs are notoriously bad at handling that kind of thing. There is no local data migration API like the one provided by the Android SDK.


s/Linux //

Windows isn't any better. I don't have enough Mac experience but I doubt it has some magic bullet.


I did recently boot a Windows 10 machine that I unplugged about 3 years ago. It took about 2 hours to fully update on my crap wired internet, but it now works fine. i3-7350K with NVMe. I was shocked that everything worked so well; I fully expected to have to boot into something, recover the Windows install key, and do a full reformat and reinstall.

Now, if I did that with something that had the original release of Win10, probably not. But I'd happily boot a Win7 machine that was >3 years offline.


Yes, that sucks. I'm not aware of any *nix besides Android that handles that. Would love to be proved wrong.


The keyring issue that other commenters mention has to be the dumbest misfeature ever, because upgrading is guaranteed to fail, yet nobody fixes it. To make things worse, one of the top Google suggestions is a pacman command that can cause libc to be upgraded without a full system upgrade, which will make your system unusable immediately. No process will launch after that. I had to do very desperate recovery steps when that happened to me. To be honest, I'm still not entirely sure how to safely fix the keyring issue when it happens.

Other than that, and this may sound very strange given the last paragraph, Arch is fantastic. It's the most usable and useful system for general software development I've ever owned. Basically, you follow the wiki, and everything just works, sound, graphics, wifi. I never had that with Linux, not with Ubuntu, not with SuSE.


Booting with a live USB, mounting the root dir and doing a manual downgrade of libc could work
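
Roughly like this; since the broken libc means binaries inside a chroot may not even start, it's safer to drive the live ISO's own pacman with --root (device name and package version are illustrative):

    # from the Arch live ISO
    mount /dev/nvme0n1p2 /mnt    # the broken root filesystem
    pacman --root /mnt -U /mnt/var/cache/pacman/pkg/glibc-2.33-5-x86_64.pkg.tar.zst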


Yeah, something like that. I don't remember the full steps I took, but it involved pacman-static from the AUR, which is a pacman build that works without a working libc. I think I used the USB boot method you mention, but with pacman-static, I was able to go forward and do a full system upgrade.


I'm a happy user of another rolling release distribution: Debian testing. It's been on my desktop and a laptop for a decade (even longer for the desktop, since I transferred the OS from my previous desktop PC).

I can't remember any problem during the upgrades of recent years. I apply partial updates through `aptitude` on a weekly basis, and full upgrades once in a while (maybe monthly). There were some rough times long ago, but I think that was related to the transition from sysvinit to systemd: for a major change like this, it would be surprising if a testing release were perfect.

I don't think I ever had problems as acute as having no Xorg or ALSA. If that kind of thing had happened several times, I would call my OS unstable.


Another Debian unstable user. It's hard to remember the exact installation date, but I have a `/var/log/firewalld.1` from Aug 25 2002. So, 20 years soon! And it has been migrated from 32-bit to 64-bit.


If you used d-i to install, it would have left logs in /var/log/installer/

Also, the installation-birthday package can figure out your installation's birthday.


No /var/log/installer (maybe it was not the case for Woody?) and installation-birthday says 2006 but this is the date of the cross-upgrade to amd64 (using debootstrap).


Maybe the filesystem or LVM/RAID creation date would give a better birthday. Also, these days one can cross-grade between architectures without using debootstrap, just apt.

https://wiki.debian.org/CrossGrading


Unfortunately, I migrated to LVM later (I think 2006 too), through a new disk (I suppose with tar | tar). At the time, amd64 was just a port.


I use sid/unstable, and like you I use aptitude, partially out of habit. I used to use stable exclusively. About a year ago I decided to try using sid permanently. I only ever had one issue (couldn't login to a gui), but that was resolved during the next update. It feels almost as stable as Debian stable, and I see no reason to try out something like arch or manjaro.


Debian unstable has been running here for about 7 years. I tend to reinstall when getting a new computer. Once I had X break and I had to set "GDK_BACKEND=x11" and "XDG_SESSION_TYPE=x11" to fix it. Something related to Wayland, I guess. But I use startx directly, no login manager, so probably not a common issue.

Sometime more than five years ago, there was always some trouble when a new version of GNOME trickled into unstable one package at a time: having some packages on the old version and some on the new caused a lot of small issues, like thumbnails not working in Nautilus.

I don't remember any other relevant issues. apt-listbugs helps avoid a lot of them. I use aptitude as my package manager (the commands, not the menu UI) and often use "aptitude forbid-version" to skip a bad version; I'm not sure if apt-get or apt provides something similar these days. apt-listchanges is nice too, although I rarely see anything there that's relevant to me specifically.
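For reference, forbidding a version looks like this, and the closest plain-apt equivalent I know of is a hold, which is blunter (it blocks all upgrades, not just one version; the package name and version below are made up):

    # aptitude: skip one specific bad version
    sudo aptitude forbid-version somepkg=1.2.3-1
    # apt: freeze the package entirely until you unhold it
    sudo apt-mark hold somepkg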


Me too! I run unattended-upgrades 4 times daily. I also auto-upgrade packages to unstable when there are security updates that haven't reached testing yet.

https://wiki.debian.org/DebianTesting#Best_practices_for_Tes...


I do it 5 times daily


The Debian archive is only updated 4 times daily, so one of those 5 times will do nothing.


Another Debian testing user reporting in. I think I’ve reinstalled once or twice in a decade.


Debian unstable is equally functional on desktops. 15 years without having to reinstall.


It's even better (to use unstable) since you don't have disappearing packages, you have access to packages that are only in unstable (Firefox instead of Firefox ESR), and you get timely security updates.

20 years ago, things would break a lot in unstable. But nowadays you just have to be careful to mostly stick to "apt upgrade" and only run "apt full-upgrade" from time to time (checking what gets removed, as it could remove everything).


You can also pin most packages to testing, while allowing packages from unstable/experimental to be installed at a lower pin priority, and pin security updates from unstable at a higher priority. I think this approach is the best of both worlds.
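A minimal sketch of that kind of setup (priorities are illustrative, and the security-update pin would need its own rule on top of this):

    # /etc/apt/preferences.d/mixed
    Package: *
    Pin: release a=testing
    Pin-Priority: 900

    Package: *
    Pin: release a=unstable
    Pin-Priority: 200

With both suites in sources.list, testing wins by default, and individual packages can still be pulled in with `apt install -t unstable somepkg`.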


This seems overly complicated. Nowadays, unstable almost never breaks (with the exception of files migrating from one package to another without the proper metadata), and I am updating daily. I am curious: are you using testing or unstable?


I am using primarily testing, but some packages from unstable due to security updates or where the package got autoremoved from testing due to release-critical bugs, which happens quite often. It feels a little more comfortable using testing, knowing that I won't hit any bugs that piuparts etc. block from entering testing. I think I initially downgraded to testing to avoid dealing with unfinished transitions, which make upgrades a bit more annoying.


I’ve been running Debian stable for about 10 years and reinstalling every so often. How do you run a partial upgrade on testing? Is it usable as a daily driver?


The software install on my daily driver dates from 1999 (though the oldest file in /etc is dated 1994) and was originally an old Red Hat release. It runs XFree86-4.x (last rebuilt 2004), which works fine with nvidia-390.132 (no more than a few years old) on a GTX780Ti and an i7-3770K (maybe a decade old itself now?). The desktop is currently Xfce-4.something (it was FVWM for many years); applications only get upgraded as needed (autoconf and make FTW). It's been triplehead pretty much forever (easier now that only one graphics card is needed; currently running 3x 27" Dell something-or-others).

I don't normally explain why it's so fast, just smile quietly when people comment on how instant the response is.


xfce is great


>my experience doesn’t match the common notion that Arch Linux is unstable

Arch is unstable, as in, package versions constantly change and can (will) introduce bugs and regressions. Debian is considered stable because, apart from security updates, package versions are set in stone until the next release, so there won't be any surprises.

This is the stable/unstable difference; it doesn't mean you can't break your OS and have to reinstall on either Debian or Arch.


> Debian is considered stable because apart from security updates package versions are set in stone til the next release, so there won't be any surprises.

which is not what the rest of the world means by "stable" when talking about software, so in practice there is a lot of surprise for users coming to Debian who hear "stable" and think it means "no bugs" when it actually means "no changes"


> which is not what the rest of the world means by "stable" when talking about software

[citation needed]

That's exactly what a lot of software projects mean when they make stable release branches, stable interfaces, stable protocols...


I'm not talking about the world of software developers, but the world of humans. Go ask your non-tech family members what "is your computer stable?" means to them


Go ask a software developer what that question means to them and you'll get the same answer. Ever heard about that thing called "context"? ;) You may very well argue that stable is where horses are living.

In context of software releases, "stable" has a clear meaning - and Debian releases are software releases.


> Ever heard about that thing called "context"? ;)

most people who use and talk about software aren't software developers - that's the context.


> which is not what the rest of the world means by "stable" when talking about software

But it is what the rest of the world means by "stable" in lots of ways: a stable climate, a stable government, a stable relationship, a stable heading. I don't think it's far-fetched for Debian to use it in that meaning about software as well.


Stable, by dictionary definition, means "not changing or fluctuating"; "no bugs" would be "bug-free", and there is no bug-free distro.


To be fair, "stable" also means "can stand up without falling over", which in a computer context gives it the secondary meaning (besides "unchanging") of "doesn't crash", which is very easily interpreted by the layman as "not beset by bugs that make it crash".


It's a problem of scale: "not changing or fluctuating" can mean that the software itself won't change over, say, a time scale of months (what Debian means), OR that, like a stable chair, it doesn't fluctuate, break, or tip over while you sit on it, i.e. at the time scale of a human interaction. I'd wager that people generally mean the latter by "stable".


But if they think they can have the second definition without the first, they're fooling themselves. The reason Debian Stable is both kinds of stable is that they test forever, eventually put the most finely tested software into a release, and then keep fixing any bugs in that release until the next one.

edit: your new, experimental chair released just yesterday may be stable enough to sit on, but it's nothing to bet on.


> The reason Debian Stable is both kinds of stable

I'll stop you right there - I ran Debian stable for years. Arch Linux has been so much more "stable", in terms of fewer bugs and issues in daily use, that it's not even funny.


As another Arch Linux user, I can attest that it is a rock-solid foundation for your computer.

I dual boot Linux and Windows; when I need to use Windows, I just put Linux into hibernation. This greatly extends the amount of time it can go without being rebooted.

Driver and X problems still cropped up occasionally, but things seem much more reliable now than they did a while back.

This is also the distro I'm familiar with from my time working on servers, where I've used it with ZFS with great success.


When I don't need Windows for gaming, I just boot the physical Windows partition from Linux using VirtualBox. This way I don't even have to hibernate and interrupt any work.
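For anyone wanting to try it: the raw-disk VMDK that makes this possible is created with something like the following (device and partition numbers are examples, and your user needs read/write access to the raw device, e.g. via the disk group):

    VBoxManage internalcommands createrawvmdk \
        -filename ~/win-raw.vmdk \
        -rawdisk /dev/sda -partitions 3

You then attach win-raw.vmdk to the VM as its hard disk.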


Isn't Windows "installing new hardware" every time you boot? And is the license still active? I was always afraid it would break the system after a few switches between virtual and bare-metal boot.


I've been running this setup since 2017, and while there were occasional graphical and sound glitches, usually after a Windows update, overall it works great. It did the "installing new hardware" thing only after updates as well, so I didn't run into it too often.

It does like to deactivate itself after a few reboots, but that's nothing a bit of mild piracy can't fix. I have at least two spare windows licenses in a drawer so I don't feel the least bit bad about it.

I also have a big ugly powershell script that runs during startup and does some things differently depending on where it's running. Things like not launching all my background stuff in the VM, remapping drive letters between physical disks and VM folder shares...

Another thing to look into is RemoteApp. I did some experiments with it in a VM and the performance was way better than "seamless mode" (which doesn't exist anymore anyways), but getting it to work on non-Server editions is a pain.


Seamless mode still exists in Virtualbox 6.1 which is the latest version.

(Windows did just crash when I was testing it but, as it also crashed twice before I got that far, I don't think it's related)


Did Windows 10 start to crash in VBox for you too? I had to move to libvirt/qemu as I was not able to resolve it. Neither downgrading nor upgrading VirtualBox helped. Removing the last Windows update helped a little, then it started crashing again.


Installing new hardware doesn't happen often and it shows the unlicensed watermark when booted from Virtualbox, but booting natively restores the activation status.


Nah, when Windows installs new hardware it doesn't uninstall the old hardware. Good luck on the license front though, since Microsoft has decided you can't move around the item you purchased from them...


I have a libvirt VM running Windows with GPU passthrough to the GTX1070 (the main GPU is the RTX 2070 Super) and use looking-glass-client and scream to pass the audio to Linux, which I use for gaming. No need to even multi boot, though I do still have the Windows boot available if I need it, which is rarely.
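In case it saves someone some digging: the Looking Glass side mostly comes down to an ivshmem device in the libvirt domain XML, roughly like this (a sketch; the size depends on the guest resolution):

    <!-- inside <devices> -->
    <shmem name='looking-glass'>
      <model type='ivshmem-plain'/>
      <size unit='M'>32</size>
    </shmem>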


How is the latency on audio with that setup? I've used qemu with GPU passthrough, a 1060 on Linux and a 1070 on Windows, for a few years now, and I had to buy a laptop to do music production because of the latency. The only other solution was to pass through a USB soundcard, but about 15 years ago I discovered I can't actually play live notes on a MIDI device with the latency of the USB subsystem.

I've heard of looking glass, but not scream. Will check out, regardless of reply, so thanks!


I don't /perceive/ any audio latency, the game sound seems fine to me. Talking about Elite: Dangerous specifically - I use the VM to develop my E:D 3rd party application.


Yes, I do it too, but getting it to work is a bit tricky with UEFI.


> "With Ubuntu, I would’ve had to upgrade (...) five times to end up with the latest LTS release.* And these release upgrades don’t always go smoothly either."

I read this at the exact moment I was upgrading to 22.04.1 and it stopped due to lack of space in /boot. A rare case of synchronicity in my life.


Ah, lack of space in /boot, the "bye-bye regular user" issue that was finally solved for apt upgrade some time in 201x, but apparently nobody bothered to ensure it does not happen in dist-upgrade. The year of Linux Desktop is just one decade away now.


You have to run apt autoremove or something. If you don't, Debian/Ubuntu updates will get slower and slower, as they have to rebuild some index for every kernel available on the system. Removing all of the kernels you're no longer using fixes the storage space issue, but I am unsure if apt will freak out if it can't find a kernel it thinks is installed. apt autoremove will uninstall and de-index all of the old kernels for sure, though.
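Concretely, something like this (the dpkg line is just to see what's still there):

    sudo apt autoremove --purge      # drop old kernels and their config files
    dpkg --list 'linux-image*'       # check which kernel packages remain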


Yes, I even added --purge to gain an extra notch, but it still fell 37MB short. The next alternative is to shrink /home and expand /boot, but (2) I would have to do it via a USB-booted Ubuntu, (3) my SSD has LUKS, so I don't know how gparted would deal with it, and I feel there's a risk of losing some data... I do have backups, but (4) neither the time nor the patience to go through all this.


I've had the same install going since January 2012:

    > head -1 /var/log/pacman.log
    [2012-01-22 14:55] installed filesystem (2011.12-2)
In that time I've converted the install in-place from x86 to x64, migrated from legacy boot to UEFI, replaced the entire RAID set twice, plus motherboards, CPUs, etc. It's my own Ship of Theseus. For many of these tasks people would say to do a reinstall, but I've always been able to find a guide on the Arch wiki to do it in-place without losing anything.


I am currently running Fedora 35. I haven't reinstalled since at least Fedora 21 (maybe older, I can't quite remember)! Every 6 months, I do the standard fedora upgrade and keep on going! This reminds me, it is time to update to Fedora 36!


I avoid reinstalls as well and one of my systems hasn't had a reinstall since Fedora Core 4. This system actually started on Red Hat Linux 8 (the precursor to Fedora, not RHEL). It was upgraded to RH9, then FC1, etc., but I had to reinstall once (I think disk failed during upgrade). I have been able to rescue other upgrades that failed.

I have upgraded literally every component multiple times so it's a Ship of Theseus.


I'm on a Fedora 36 laptop that started on Fedora 25, and at some point in the middle I moved from a Dell XPS to an LG Gram.

To migrate machines I didn't use rsync; I used dd instead.

Zero upgrade issues, however I generally wait 2-3 months after release before upgrading.


I get the best of both worlds. I run Fedora Silverblue and develop inside an Arch Linux container toolbox. I get the stability and user-friendliness of Fedora, an immutable system with Silverblue, and the AUR and all the Arch goodies to develop with.

This is the best workstation setup I've ever had, there's no comparison.


I use Fedora as a VM for development on other systems. I just love the stable and smooth experience.


ITT: people telling each other that they use archlinux.

BTW I also use archlinux


Well, I use Manjaro. Does it count as insufferable too?


No Tux is insufferable


Gentoo user here.


Your distro is literally named after a penguin, so what's your point? :D


[flagged]


Are they actually? Have you spent any time in the Gentoo community or talked to actual Gentoo users?

I see people parrot this around a lot but from my experience the insufferable thing is the constant complaining and sweeping accusations against a large and varied group of people who happen to use a certain distro.


The point is they're not as insufferable as made out - but there is a reason for the stereotype, for sure.

Though much of that may have been taken over by the Archies, I've not fought in the distro wars for years now.


The Gentoo wiki is top notch. I almost used Gentoo once, but 30 minutes into compiling vim I gave up...


Gentoo is really nice if you want complete customizability over your install.

I started using it long ago when RedHat 6 or something pulled in X just to install mpg123.


Because it is awesome!


It's certainly my favorite. Pragmatic, simple, and complete. What's not to love?


I actually switched to i3/sway on my Arch install ;-)


I have a similar experience, though only now am I faced with the existential crisis that "2013" is going to be "a decade ago" in a few months. My Arch Linux install started life as a VMWare Workstation image. It made it through two major init systems (sysvinit -> systemd), different audio subsystems (alsa -> pulseaudio -> pipewire), different WMs (gnome2 -> kde4 -> i3 -> sway), three filesystems (ext3 -> ext4, ext4 -> btrfs -> ext4, then ext4 -> zfs), several different versions of VMWare Workstation (7 through 14 I believe), different storage substrates, etc. It's also lived on three different uArchs (AMD Bulldozer c. 2012, Intel Skylake c. 2016, and Ryzen c. 2020), but VMWare abstracted most of that away, of course.

Eventually I got fed up with Windows and decided to `zfs send` the install to a real disk and booted it on bare metal. It has been my daily driver since then, for the last 2 years or so. (I did drop into the Arch installer last year to unfuck my bootloader while trying to get rEFInd & ZFS Boot Menu to work, but that was just building a new initramfs; I haven't run "pacstrap" since I built the image c. 2013.)

The flexibility this operating system has provided me with is nothing short of amazing. I do have to say though: since switching to Wayland + the in-kernel AMDGPU driver, I can't remember the last time my system was rendered unbootable. (Excepting the one time I tried to change my bootloader, but that's user error.) In hindsight I feel like the vast majority of Arch's reputation for breaking systems is overblown, and the blame rests mostly on DKMS + NVidia's proprietary drivers.


May I ask: what prompted you to change file systems? I recently, reluctantly, switched over to btrfs and I see no meaningful difference from ext4, so I am curious.


I've been using the same base install of Gentoo since like 2010. Switched from GNOME to Xfce some time ago; new machines just get the stuff `rsync`ed over. But when the root partition went from /dev/sda3 to /dev/nvme0p3 there was some switching required.

Back in my Windows days (1990-2001) I was NEVER able to do anything like that; copying that damned registry... You had to make special files, and apps would never work. It was a game changer to find a system where it was as simple as copying the files to the new hardware.


I haven't tried this in a while, but Windows 10 can be rsynced between different systems too. Fixing the bootloader is a major PITA though, and I had to relearn how to do that every time. Of course, it then spends half an hour installing new drivers for everything, but that is expected in that world.


In the Windows 2k era I would do it, but it involved somehow tricking the system into thinking it was doing "first boot after install" and using default drivers for everything (especially the chipset). With some trickery you could get it to work (sometimes you had to preinstall the new chipset driver where it could find it).

Wasn't worth it usually.


It was only worth it if you had to deploy a custom image to a whole fleet of machines.

IIRC there was a tool called "sysprep" you could run to reset the drivers, hostname etc. so they would reconfigure themselves on first boot.


Exhibiting the self-discipline to not distro-hop in 10 years is more commendable... but I guess that's Arch Linux for you.


I distrohopped a lot for my desktops and then stopped when I got to arch.

For servers, I've used CentOS, Ubuntu, and Debian. I never did get too into the CentOS/Red Hat side of things; I'm not sure why, except that I was just used to the Debian way of doing things... In general, my go-to is Debian or Ubuntu. Ubuntu is the only distro that, every damn time, I've managed to break in ways I can't figure out how to fix. Arch and Debian are the only distros where I've been able to fix something screwed up without a full nuke and reinstall. And I HAVE screwed some things up pretty bad.

Arch though is just so good. I just don't quite trust it to handle updates unattended so I don't use it on servers.


Yeah Arch is basically the final destination all that distro hopping gets you to.

No reason to hop once you've arrived.


That said, I strongly recommend letting the distro hopping take you on a journey before you land on Arch-- don't skip to the end. It's nice to see how things are done elsewhere (defaults, etc.) so that you've got some opinions by the time you get to Arch and configure your own take on things.


I thought the exact same thing until I switched to NixOS. I love Arch, but I'd never go back.


Same story. I love Arch, but having more than one computer with a constantly changing custom wm/shell/compiler setup makes it complex to replicate the same changes on all the machines. NixOS makes this just one git push/pull and switch away.
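Concretely, on each machine it boils down to something like this (a sketch, assuming a flake-based config repo with one nixosConfiguration named after each hostname):

    git pull
    sudo nixos-rebuild switch --flake .#$(hostname)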


I used Arch around 2008 or so, then distro-hopped to Slackware, where I remain to this day. Some of us escape.


I started on slack and hopped on over to arch eventually. But lately been considering going back to Slackware again. Circle of slack.


If you can get past the learning curve, Nix is the final destination.


For me Debian was that final destination. Cygwin -> Knoppix -> Debian unstable -> Debian testing.


s/Arch/Gentoo/

I did run Arch for a while around 2010 but it didn't take. It's nice to find a permanent home - I've been on Gentoo since 2013 and an acquaintance has been on Slackware since the 90s.

Those three seem to be where we tinkerers end up.


Those three do seem like the popular ones amongst the tinkerers.

Could I ask what your favorite things about Gentoo are?

For me, with Arch, it's how up-to-date the repos are and how it doesn't make me compile everything myself. Should something not be available in the repos, chances are I can still compile it myself and build a package via the AUR.

Another thing I like is the excellent wiki.


A large part of it is just that it's "home" after finding it at the right time in my skill curve - probably could've been Arch as well.

The Arch wiki might be a bit better, and we all benefit from it to some extent; the Gentoo wiki is also good, and honestly I don't use either that much anymore.

The great differentiator is portage. I never run into "maintainer built the package without xyz support" - sometimes I'll run into "stupid user disabled xyz support when installing" but that's just a config change and easier to accept since I'm the stupid user. In theory they might've missed adding a USE flag for that feature and I'd have to overlay my own ebuild or live without it. Custom source patching is built in and package versions can be (un)blocked via package.mask/.accept_keywords. Version 4.3 broke something? package.mask: =pkg-category/pkgname-4.3 to downgrade to 4.2 and still automatically get 4.4 when that shows up. Mask >=4.3 instead and nothing will change (but eventually dependencies might force you to fix it, of course).
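A concrete sketch of that mask (category/package made up):

    # /etc/portage/package.mask
    # 4.3 is broken: stay on 4.2 for now; 4.4 will still be picked up when it lands
    =app-misc/somepkg-4.3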

Compilation is pretty fast on a modern computer. Significantly slower than binary packages sure but it's mostly with browsers and maybe qemu/compilers that you really feel it (and several of those do have binary packages, e.g. www-client/firefox-bin, dev-lang/rust-bin). On my old laptop this turned into an excuse to experiment with distcc and nfs for speedups (Core2Duo with 4GiB RAM is a bit weak for encrypted ZFS + compiling gcc, and also it's fun).

Compilation time is still the strongest con, but having compilation as a first-class citizen is what enables most of this, and the big stuff you just run in the background.

(For the unaware, do note that "compiling myself" means "emerge pkgname" or "emerge -auvDNU @world" and includes dependency tracking, not "./configure && make" and manually hunting things down.)


+1 - Arch is fine, but it doesn't have Portage.


When I was first getting into Linux ~2006-2007 I distro hopped constantly until a friend told me about Arch and then there was no more reason to distro hop :)


Never had the urge to distro hop after settling on Gentoo, 12 years ago. I've not seen a better package manager in all that time, just amazing.


I'll be coming up on 10 years as well, mostly just because of the AUR. Whatever obscure/proprietary program you might need, there's a decent chance some helpful person has packaged it up. Also has the advantage of easily being able to uninstall the weird 10 year old perl/python/mono version the thing needed to run that would otherwise probably stay on my machine forever.


What I love about the AUR is that if a package doesn't exist, it is extremely easy to create one. Debian packaging and Red Hat packaging were just not very intuitive. PKGBUILDs are simple and effective. Alpine's packaging is similar to Arch's, too.


100% agreed.

I've spent quite a bit of time modifying deb source files and rpm source files to do upgrades/downgrades/patches.

I could never start from scratch in making a package. I'd be so incredibly overwhelmed by all the different options and obtuse syntaxes and level of background required.

Writing a bash script that installs everything into a `$pkgdir` directory is so much easier to understand and get started with. And if you publish to the AUR and have some small issue, someone will eventually show up and tell you what's wrong with your script.
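To give a flavor of it, a minimal PKGBUILD for a hypothetical single-script package looks roughly like this (every name here is made up):

    pkgname=hello-script
    pkgver=1.0
    pkgrel=1
    pkgdesc="Example script packaged for pacman"
    arch=('any')
    license=('MIT')
    source=('hello.sh')
    sha256sums=('SKIP')

    package() {
      # whatever lands under $pkgdir becomes the package contents
      install -Dm755 "$srcdir/hello.sh" "$pkgdir/usr/bin/hello"
    }

Run `makepkg -si` next to it and you have an installable, cleanly removable package.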


The Arch Linux install on my ThinkPad X230 is about twelve years old now. For most of the time it was my primary laptop, until a couple of years ago when I turned it into a sort of home server with built-in KVM and battery backup.

In contrast to my experience running Gentoo or Fedora, I never experienced significant breakage when doing a system update. Having said that, I've always run a fairly minimal desktop environment and I've been conservative with wifi and audio software. So maybe I wasn't pushing it very hard, but still full credit to Arch for having a rock solid foundation.


I push my Arch installs hard. I install a large number of applications. I even have Arch Linux running in a proot on my Android phone. I have had occasional breakage every 6 to 12 months. It is usually a vendored binary that depends on an older version of some library. And that library changed its ABI upon a minor version upgrade. Arch has packages for old major versions of libraries. For example Telegram Desktop and Ungoogled Chromium can break, but rarely. Breakage is resolved within days on stable.


I'm the same but with Gentoo, which is another rolling distribution. I've had it installed for over 12 years on multiple servers without any issues.


My preferred distro journey went: Mandrake -> Gentoo -> Debian -> Gentoo -> NixOS

I rage-quit Gentoo the first time (2002ish?) for Debian when stable portage got a broken version of gcc, making it very hard to recover. The second Gentoo was by far the longest, maybe 2003 through 2018? I'm 4 years into NixOS now and very happy with it. I actually run into issues with switching to new release channels almost as often as I did the few times I experimented with Ubuntu, but it's just so much easier to work around these issues by mixing-and-matching packages from different channels[1] that it just doesn't bother me.

It also got me to love systemd. Configuring systemd units (especially timers!) with nix is so much more ergonomic than the bare files that I pity anyone who has to do it by hand.

1: 95% of the time it's already fixed in unstable, the other 5% of the time I pull in the version from the previous release channel.


Windows -> Solaris -> Windows -> Gentoo -> Arch -> NixOS

I feel much more satisfaction pouring hours into NixOS over doing the same on Gentoo and Arch. The hours on nix are in a source file I can carry around. The hours on Gentoo and Arch I'm doomed to forget and have to repeat.

I do miss the AUR though. I haven't been able to package a rust program that has a build with a transitive dependency that expects internet access (https://github.com/foundry-rs/foundry). Something something sandbox, crate2nix. But a frivolous install of a little binary that isn't packaged is not necessarily an easy endeavour.

Overall I'm very happy. Nix unstable feels equivalent to Arch more or less. You can pull in master with flakes easily enough too.


Yes, builds that expect internet access are not friendly with nix. It's particularly annoying when such packages are in something like cargo:

A: "We built this nice tool named cargo that manages transitive dependencies for you and will automatically fetch and build them from the internet"

B: F-this, I'll just download a tarball from the internet.


Ironically, what cargo does is just downloading tarballs from the internet for you.


For me it was: Windows -> Xandros (1 year) -> Ubuntu (6 years) -> Arch (4 years) -> NixOS (4 years)

Except I never actually installed Xandros myself; my dad just brought it home from work one day and installed it on my computer.


Likewise, I think my oldest install is a Void Linux system (another rolling release). Unfortunately, I have no clue how old it is; musl doesn't keep login records (wtmp?), and this install has lived through not merely multiple machines (moving hard drives from one box to another) but multiple filesystems (rsynced from I think ext4 to f2fs to zfs) and I think one of those jumps lost timestamps because the oldest time I can get is younger than I think the system is. Regardless, there's something special about that degree of continuity - at work, I like cattle, but at home I actually enjoy having a pet around.


Hrm...

    $ qlop -tvm|head -n1

    2007-01-18T19:50:33 >>> x11-base/xorg-server-1.1.1-r4: 9′23″


Same here. My Gentoo installation on my previous machine was set up when I bought the PC, somewhere around 2008.

I tried Arch but it doesn't feel like home.


I have been using the “same” macOS installation since the first Intel Mac Mini (2006?). It has branched out to a dozen or so boxes (3 kids, a wife, a TV set, etc.). I had to make a clean install on one “leaf” only: a former main computer relegated to video watching (a lot of different video players, torrent clients and such) began crashing every couple of days. A clean install helped.

But to be fair upgrading a Mac computer consists now of adding your old stuff on top of a new operating system, not adding a new operating system on top of your old stuff. But well…

I had a problem with the USB daemon crashing sometimes (probably related to a famous rewrite of the daemon in C++). And most recently, the most shameful M1-related problem: an unkillable screen saver wrongly started while Remote Desktop is in use. (The bug is shameful because an unkillable screen saver means that the design review, code review, and QA processes are all broken, and the fact that it was not fixed in a year(!) means that the tech support process is broken too. Looks scary for things to come. The M1 is a great processor, though.)


I love Arch Linux. It just works. Even with video games. The Steam Deck is based on it.

Linux really did win desktops. Running Arch for 3 years now.


The Arch wiki is a real treasure too. I don't even run Arch but use its wiki all the time.


I switched from Ubuntu to Arch because I got sick of the inconsistencies between Ubuntu's hacks and the developer defaults described in the Arch wiki.


Precisely. The best part about it (linux distros) is that you can actually fix your issues, while with windows/apple you have to suffer until you get a bug fix, since you don't control the software that runs on your computer


The Arch wiki is what made me switch to Arch - I was tired of translating from the Arch wiki to whatever Debian flavor I was using.


I ran Arch Linux with the LTS kernel on my home laptop for 1.5 years and stopped because of instabilities. The last issue I had was the update to PipeWire: my Bluetooth headset stopped working after suspend. I got fed up with tweaking the configuration to make it work. I could have reverted to PulseAudio.

But to be honest, the only major issue was one with PAM login. I could not log in anymore after an update and had to search the internet for a workaround, which consisted of updating a pam.d config file from a single-user-mode boot. Many breaking updates were GNOME-related...

Switched back to Windows 10 then 11 for a year, tried WSL2 and found it unstable (some random crashes and tmux freezes), and slow sometimes.

Now I've been on Fedora for a few months, since I am a GNOME user. I am surprised there are quite frequent kernel updates there too. I am a little bit less worried that an update will break something, but I'll slowly move away from the bleeding edge.


When I say Arch is not stable, I don't mean that you can't leave it running for a long period of time. I mean that it changes. Debian is not stable because you can run it for a long time without crashing (You sure can, but you can also run Debian with daily crashes, depending on what you're running). Debian is stable because it doesn't change. You get security updates, but you don't get feature updates, because feature updates introduce change, and the way you thought something was done is not the way to do it anymore. Flags change, output changes, inputs change. None of this is bad, but with Debian you know it won't happen until you're ready to move on to the next version. "Unstable" distros can introduce these changes at any time, making it harder to review what will change.


While I agree with your definition of "stable", in my case its effects are reversed: I much prefer dealing with one change from time to time over applying a big update where possibly "everything changes" all of a sudden. Although, granted, the big-update model is much more likely to avoid "yo-yo changes", where a rollback is necessary because the new shiny is actually broken.

I didn't take notes and don't remember the specifics, but I have a small VM running on some cloud that only hosts LXC containers, so not much is installed on it. I did an update from Ubuntu 20.04 to 22.04: multiple dozens of packages were removed, and multiple new ones added.


Debian releases do what you say. But you can also run Debian/unstable, which is the bleeding edge rolling "release". It works quite well, and many people run that on their machines.


Now try to replicate that Arch setup on another PC. Even if you started from the exact same install, would it turn out the same?

I'd really like to see something like a rolling release take on Fedora Silverblue. Rolling release with versioning/immutability and easy rollbacks.


Yeah, this is a thing.

To enjoy years of stability on Arch[0]:

  - occasionally upgrade your system
  - before upgrading, glance at https://archlinux.org/news/ to see if anything requires manual intervention
[0] I use Arch btw
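If you want to bake the news check into the habit, even something crude like this works (a sketch; actually reading the posts is the point):

    curl -s https://archlinux.org/feeds/news/ | grep -o '<title>[^<]*' | head
    sudo pacman -Syu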


> - before upgrading, glance at https://archlinux.org/news/ to see if anything requires manual intervention

I simply have https://archlinux.org/ as my homepage when I open my browser on the desktop computer. Shows the same news in a slightly better format (personally), and also shows latest package updates on the right side, in case some favorite software of mine has been recently updated.



That's great! But what it doesn't say is what's actually relevant: how much time you spent maintaining it in that period.

Stuff being obviously/"disruptively" broken usually has an undue amount of weight given to it, even though it generally (a) occurs at a time the administrator has chosen and should be planned to minimize the effects of any disruption (i.e. when you're doing updates or potentially problematic config changes), and (b) usually takes significantly less time to deal with, overall, than regular maintenance (upgrades, config changes caused by them, etc).


The issue isn't so much whether a person can keep an Arch install stable; it's whether Arch is stable for most people, most of the time.

As modern hardware and DE choices change and conflict, Arch has to be manually tweaked to stay working. Those tweaks (aka config choices) are essentially the entire purpose of a normal distro.

Arch isn't designed to do the tweaks. It's just that simple.

Saying you kept Arch running is either a brag about how well you manage it, how minimalist your environment is, or how simple your hardware is. Not to mention whether your needs drive you to try any of the edgier stuff.

Congrats to this guy though.


I would say that Arch's philosophy is about Exposed simplicity vs Hidden complexity. The wiki is extensive and covers a lot of cases one might stumble upon. Sometimes it feels like Arch's goal is to teach you how it works end-to-end. As a byproduct you get a working OS.


How do you folks make sure your system is not compromised? That's my No. 1 concern about my system. Like OP, I've been using my Arch install for many years now; however, I cannot tell whether it's running vanilla Arch or not. Unlike with NixOS, I cannot verify the changes to the state of my system over time, and I haven't kept a log of the software I've installed or the config changes I've made. That's why I plan to replace it and keep a log of future changes.
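The closest thing I've found so far is pacman's own sanity checks, which are not a real integrity guarantee but at least show drift from the package database:

    pacman -Qkk   # verify installed files against pacman's database (presence, size, permissions, mtimes)
    pacman -Qm    # list foreign packages (AUR or manually installed)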


You probably can't update as much as you can with Arch, but any reasonable modern linux distro these days allows the user to have a very stable core and very up to date user space with snaps, flatpaks and AppImages.

I was jealous of windows because of exactly this. No longer now. It could still be better, e.g.: .AppImages could automatically be opened by something that mimics an installation wizard with checks for signatures, permissions, installations, periodic updates... it would be much more user-friendly than expecting the user to give "executable rights/flags" to a file that was just downloaded. Nevertheless, it broke the "good enough" barrier for me.

Coming from a time when "just released" software required compilation from source, and sometimes compilation of its dependencies - which could simply break the system, or worse, add lots of repos that could suddenly disappear or break your package manager so badly that it was simpler to reinstall your system from scratch than to fix it - the current situation is much, much better. It is good to have a mature and stable distro that one can expect to be maintained for years, alongside software that was released just yesterday.


Mine is around 6-7 years old. It started off as Antergos on a T450s, then moved to a T480, and later on I replaced the extra repos and the startup logo with EndeavourOS after the Antergos ones went dark.

However, I can see a noticeable slowdown and some services occasionally acting weird - once every few months I need to switch to a terminal and use loginctl to unlock my session. I am seriously considering a reinstall, no matter how humiliating that sounds.


I tried to do a reinstall a while back. Gave up around 4 hours in. Installing Arch was easy; getting it how I liked it, on the other hand, was going to take another couple of weekends of figuring out what I did 5 years ago to make X work just right.


I did a reinstall last year to get rid of the clutter. I created a document that describes every step. Whenever I install a new package that is going to be useful, I add it to the pacman -S command in the document.


I expect the same, but on the other hand many things related to X configuration have improved since then (dpi, multi-screen, etc), so I was hoping the result would be better. I need only xfce, appmenu and plank anyway.


I used the same Arch Linux installation from early 2016 until summer 2021, and had a very similar experience to the author's.

There was a significant initial setup investment, by modern standards, although not by historical ones; I had used Linux on the desktop continuously since I was a child, around 1997 (I think I started with RedHat 4.0 and kernel 2.0.29), so I remember all initial setup to be fairly burdensome in the late 1990s and early 2000s. Arch seemed a throwback to that. This is not a criticism, to be clear. I was very aware that this was part of the Arch philosophy, and I embraced that in my switch from Ubuntu.

However, once that was done, very little of interest happened over the next half decade. It's not that nothing ever broke, ever, but the rate of breakage was impressively low, and much lower than I had the previous 7 years on Ubuntu, or desktop Debian beforehand. Arch Linux was eminently stable.

In 2021, I switched to Mac/OSX -- the last of my social group of techies to do so, late to the party by a decade or so. While this has some advantages, my work kinetics will never come close to the raw efficiency and speed of my Arch Linux + i3wm setup.


For me, it was pretty similar...but I was more affected by everything related to mesa and their decision to drop old Intel core generations (basically everything up to Haswell is broken with Vulkan).

The worst upgrade bug was around 2016 when mesa had a buffer overflow bug, and I had to apply the patch manually...and well, recompile mesa every time an X update came out or anything related.

On top of that the almost monthly archlinux keyring messups that need a repopulate, a different keyserver, or a deletion on the filesystem for whatever reason...oh man, I just hate gpg so much. I wish this piece of crap software would be more reliable. It would make my life so much easier.

I always have issues with gpg and pacman complaining about outdated or wrong keys, and then shit happens even when you tell pacman to not delete the package download (and when you start the init, repopulate, and sync of archlinux-keyring again). It messed up my system so often with remnants of packages, where I debugged X for days just to realize that a file was missing and the package got uninstalled automatically.


Out of all the distributions I've tested, pure Arch has been the most stable, best documented, most fixable distro I've dealt with yet.


  $ head -1 /var/log/pacman.log 
  [2014-03-24 23:03] [PACMAN] Running 'pacman -S yaourt'
Eight years so far ...


My last clean Windows installation was in 2007 with Vista, and since then, I ran in-place upgrades to get to the next immediate version. It survived multiple mainboard changes and moved from MBR IDE HDDs to GPT SATA SSDs. (With the help of Acronis/Macrium images)

The next big move would be to change to a Mainboard+CPU with Windows 11 support and an NVMe disk. I wonder how feasible this will be.


In my experience, fresh no-bloat Windows installs boot in seconds, but booting gets slower and slower with every major software or hardware upgrade.

Is your boot still fast?

Nice username btw.


No, it is not booting up very fast, but it is still ok as I usually don't reboot that often. I remember one severe increase in the user login duration, and this was caused by the high number of files in a temp folder which Disk Cleanup somehow did not delete.

Manually deleting those solved this issue, and over the years, I have only used the standard tools included in Windows to maintain the system. No tuning or cleaning tools that defrag the registry, download some RAM or do some other magic.


I tried using Arch-based distributions, but the need for constant manual babysitting turned me off them.

Packages randomly break and require hours of work to fix. A non-rolling distribution will usually only break when there is a major upgrade, which can be scheduled for when you have the time to deal with it.

Pacman only works in interactive mode, so using it on a headless machine means at least a weekly session. Randomly, that will eat up an hour or two when something breaks - and it will. An Ubuntu-based LTS distribution will last a couple of years without needing manual intervention after the initial installation, and much longer if you don't upgrade to the next LTS until EOL.

Finally, if you need old software to build an old version of something, an Arch-based distribution is useless. On Ubuntu I can download and install 14.04 without much hassle (and even older with a bit of work) and build that old Android OS or SDK for a device that is no longer supported, but with Arch... don't waste your time, it's not happening.


> Pacman only works in interactive mode so using it on a headless means at least weekly session.

What do you mean by this?


This matches my experience as well; I've been using it for 6 or 7 years now, and it feels nice to know I can rely on my OS install. If something breaks, I have to learn how to fix it, and that knowledge builds up over time. Way better than other operating systems, where I'm totally screwed if something breaks and the only thing I can do is run in circles.

I also use i3wm btw


I've run Arch™ on all of my production servers since 2016 and never had any real problems or the need to reinstall either. The newer ones use Docker with Alpine images, but the hosts still run Arch. Packages are updated every few days or weeks. On most servers I use the LTS kernel, pin it, and only update it every couple of months to keep downtime low. That's it. The first ever server still runs fine with all its legacy apps thanks to a couple of custom packages.
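For the curious, pinning a package with pacman is just IgnorePkg in pacman.conf, something like:

    # /etc/pacman.conf
    IgnorePkg = linux-lts linux-lts-headers   # hold the kernel back until the next maintenance window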

One of the things I like about arch is that it keeps app config as upstream as possible. Debian's custom app config with all its magic scripts used to cause so much trouble. Yes, I only have <10 servers and the way I maintain them does not scale up too well. But that's something we have the "cloud" for these days anyway.


Hey, cheers. I bet I'm around a decade on the same install of Arch too. That spans 3 machines. The trick for me is hot-swap backups. I do an rsync backup of the drive to an identical disk (nowadays a 1TB 980 Evo) and then immediately swap the backup drive in as the main drive. I have little helper scripts to format drives, do backups, and automatically update fstab and the boot config. So, new machine, no problem: rsync the files onto it, boot it up, and I have everything exactly as it should be.

Now and again I'll do some package spelunking (pacman makes this straightforward) and clean out cobwebs. Next on my list is my emacs config, which is like 15 years old and a couple of generations out of date. I wouldn't care, but startup times are slowing down and there are a lot of great ideas and packages to solve this problem. It just needs time, a few hours here and there, but that's easily enough to keep Arch going forever!
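For the orphan part of that spelunking, the usual pipeline from the wiki is enough (it errors out harmlessly if there are no orphans):

    pacman -Qtdq                       # list packages nothing depends on anymore
    pacman -Qtdq | sudo pacman -Rns -  # remove them, configs included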


> A few months ago, I copied my complete installation to a ThinkPad X13 Gen 2 using rsync

How would something like this work? Is the target laptop running any random linux distro, and rsync replaces all system files etc. effectively "swapping" operating systems? Can the laptop boot into the new system as if it was installed normally?


You boot into a live CD (probably the Arch install disc) and rsync onto a mounted partition, then you chroot into that partition and reinstall the bootloader.

This is pretty much how you install archlinux normally, except instead of rsync-ing the initial filesystem you make a new initial filesystem based on the "filesystem" package.

https://archlinux.org/packages/core/x86_64/filesystem/


I do that routinely. Boot from a liveUSB, rsync data from the main system, fix the bootloader and a couple of configs like /etc/fstab. Works like a charm every time.


> I do that routinely. Boot from a liveUSB, rsync data from the main system, fix the bootloader and a couple of configs like /etc/fstab. Works like a charm every time.

Just be careful if you intend to keep the old system running. You probably don't want to clone /etc/machine-id and similar (see: the problems that come with exactly cloning VMs). But of course, if the old system is being destroyed, then no worries.


I did exactly this a few times, too.

My Arch is from 2007 or so and had only a few hiccups. The last thing happened when MD5 was getting deprecated for /etc/passwd and the automatic migration to another hash algorithm was not working, which is obviously directly related to the age of the installation.

It is running on its third mainboard/CPU/GPU combo with maybe the fifth HDD/SSD.


Boot from external media (USB disk, cd or dvd if you're kicking it old school, maybe even network) with no persistent storage (ramdisk or whatever only). Mount the hard drive. Format. Rsync the files over.

That's pretty similar to what installation media does anyway, really.

[EDIT] Oh you might need to install the bootloader too. But you can sort-of (your running kernel will still be the installation media's) do that from the rsync'd system once it's copied over, with some creative chrooting and mounting. Which, again, is something that Linux installers kinda do anyway, in ordinary operation, but you'd likely want to do it manually in this case. It's exactly what you used to do (probably still do?) installing Gentoo, even for a "stage 3" (least-painful) installation.


> How would something like this work?

Very well and very easily. You can do that even with Windows, even with different CPUs: I've ported my master Windows install from a Xeon to a regular Intel CPU on a laptop.

This laptop still downloads Xeon-specific updates from time to time, but every system peripheral is recognized in the device manager, and everything has been working great for over a year now.

For my next laptop, I'll do just the same: use bitlocker unlock, clone this Windows install into a larger SSD using Linux NTFS tools, plug the new SSD into the new laptop.


There's really not very much magic involved. The root filesystem is just a collection of directories and files. The kernel is just a binary blob, usually stored in /boot. You perhaps need an initial ramdisk tailored to your configuration, but that's usually just running a script. The only really arcane bit is bootloading the kernel. With modern x86 systems EFI does it, and so you just need a correctly formatted fat32 EFI partition.


I would boot from a live disk image, mount the built in hard disk, and then rsync the files to it from there and reboot I think.


I booted a live system from a USB thumb drive and then pulled down a backup using rsync. After that, I adjusted my `/etc/fstab`, `chroot`ed into the new (old) system, ran `mkinitcpio -P`, and generated a new GRUB configuration file.
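Roughly, from the live environment (device names and the rsync source here are just examples):

    mount /dev/nvme0n1p2 /mnt                      # new root partition
    mount /dev/nvme0n1p1 /mnt/boot
    rsync -aAXH --numeric-ids backup-host:/ /mnt/  # pull the backup
    # edit /mnt/etc/fstab to point at the new partitions' UUIDs
    arch-chroot /mnt
    mkinitcpio -P
    grub-mkconfig -o /boot/grub/grub.cfg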


They could have booted into a USB live CD and then manually mounted the internal hard drive to rsync the new root filesystem over. They would also need to make sure the boot partition is rebuilt/updated with the kernel, if necessary.


Interesting question. The only other ideas I can think of are that they pulled the drive out of the laptop to do the copy physically, or that they booted the laptop from a live distro and did the copy. But those are simply guesses!


I'm gonna guess you rsync onto a separate partition and then get rid of the "host" partition once you're done instead of replacing stuff on the fly.


I had a laptop in storage and went on a trip. I didn't want to bring an expensive MacBook, but I did want something that could take a note or two and check my email, so I took a very old Dell Latitude with me running Arch on an Intel 2xxx-something (ancient).

When I booted it up, I realized to my surprise that I had been running Arch, not Windows... so I checked my email and then decided to try updating it on the hotel wifi.

I will admit that after about 6 years there was quite a bit of fumbling with pacman mirrors and keyrings, and the delta to upgrade on a laptop that old over hotel wifi wasn't great, but after I left it alone for an hour it finished.

I rebooted it and there I was in i3 without a thing wrong. Wild

Maybe if I used GNOME it would have been a different story, but I think the point of the article holds: Arch is much more stable than people give it credit for if you are willing to learn a bit about how it works.


I left my computer in storage for a year while I lived elsewhere and I couldn't upgrade through pacman. There was no simple viable upgrade. The constraints just couldn't be met. Fortunately, all I had to do was reformat / and go again since I keep most user config in /home

Really enjoyed it otherwise


You probably can't test it anymore, but I think it might have had something to do with the keys. I had a similar problem with failing upgrades once, until I updated the keyring.


This.

The easy way, usually, is installing the previous keyring packages from the archive.

And maybe peruse the "news" page for any manual intervention required. There aren't many, so for a year of missed updates it should be quick enough.
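Roughly like this (the exact package version is just an illustration; pick one from archive.archlinux.org that your stale keyring can still verify):

    sudo pacman -U https://archive.archlinux.org/packages/a/archlinux-keyring/archlinux-keyring-20220424-1-any.pkg.tar.zst
    sudo pacman -Syu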


Ah. Good tip. Thank you. I'll keep that in mind for next time.


This. When I took my old Arch laptop out of the drawer after a few months, it was impossible to upgrade without removing half of the system.


Weird AUR packages?


I just had the one nvidia dkms one. But that one actually worked fine.


My home laptop Arch installation is about 8 years old. The only major breakage I've experienced with Arch and Debian in the last 10 years or so has been the switch to systemd, which is understandable. I don't necessarily recommend that everybody run Linux/Unix on the desktop – especially if you need specific hardware or software support – but it's definitely not less stable than Windows or OSX, from what I gather observing those around me running other operating systems. To me, the fact that I can likely fix things myself is worth any extra trouble of choosing the less beaten path.


I always love hearing these stories, especially because I'm too fickle to stick with something that long. I've mostly been a Mac user since maybe Puma or Jaguar, but I distro hop every few years. Spent a good while on Debian, Ubuntu, Arch, FreeBSD and OpenBSD. Still have an OpenBSD ThinkPad X1 Gen 6 around somewhere. That OpenBSD install could age like this, but I fat-fingered a dd command and nuked the system disk in mid 2020, so it got a fresh install then.

Had various Arch installs on desktop and Thinkpad that I used as my day to day for while, but I always just ended up back on my Mac. And I get a new Mac every few years.

Any Linux I've had always ends up getting torn down at some point. I get a new computer and do a fresh install.


I was wondering why their system didn't break with the migration to systemd, but it's possible that their system is new enough that it started on systemd. That one was a huge pain if I remember correctly. I think I needed to rescue it with a bootable USB.


I wonder how many, if any, original install components are still present in that Ship of Theseus.


I never quite made it to a decade, but I think I definitely hit 5 years.

Yes, there have been breakages, but none very bad and thus not very memorable. Interestingly, some breakages were due to windows update doing something bad since I was dual booting.

Before switching to Arch, I used Ubuntu for years, and that was not nearly as pleasant an experience. Upgrading Ubuntu versions always failed in some horrible way for me and I had to just reinstall the new version instead, and the way Ubuntu is put together made it a horrible experience if you ever needed to install software that didn't jive with it (like something that required an updated version of some GNOME dependency).


I had a similarly great experience with Arch, but sometimes a system upgrade would mess up libraries required for the screen locker and I had to use loginctl unlock. I also didn't like that every system upgrade would break a lot of AUR packages, leaving them waiting to be recompiled.

I solved both problems by moving to NixOS and I have been very happy since then. I haven't had any issues with it so far. I've also created a couple of Nix expressions for packages I was missing, but I found most of the software I use in Nixpkgs or NUR already.


The whole discussion over "rolling release is stable" never gets concluded because people are not agreeing upon what they talk about. Here are the basic facts that the discussion should be based on:

* "Stable" is not a binary property - it's a scalar. The exact level of stability assumed by the word can be different depending on the context.

* By design, rolling releases are less stable than regular releases. You simply can't beat the stability of something that doesn't change.

* People have different stability requirements. You don't expect something that works on your own laptop to work on every occasion.


All the issues mentioned in these comments would be way too much for me. Sure I could fix them and move on... unless they happened on a very busy day. Then they could tank a schedule.

And busy days can show up suddenly. What if I get a super last minute job right after I decided to update my packages and break something?

It seems like using Arch or any tech that involves any tinkering requires accepting that the system isn't a total point of trust for your whole life. You have to have an attitude of "I'll make it work" rather than "I know my gear is dependable".


Same stuff can happen with Windows too with unexpected updates. Not sure about Mac though, I've never used one.


That seems to be one of the big reasons people move to Linux. It's definitely part of why I stay.

Debian based stuff is very good at not breaking.


I've had that with a Debian desktop (for over a decade). My current OS on my laptop is Ubuntu, which I also installed a decade ago. Though something went really wrong with an upgrade about 5 years ago, so I had to reinstall. (Keeping all the data, of course.)

The only problem (with both debian and ubuntu) is that these old installs tend to drift from what a fresh install would be. And that the GNOME guys keep removing features I use with every release and then it takes a few months (sometimes a year) until someone adds it back as an extension (which will be broken with the next release, for sure).


I have the same with Debian testing/unstable on my laptop; I think the install is about 15 years old now. Most stable system I have. I also have an Ubuntu laptop from work, which freezes constantly.

I've switched laptops 3 or 4 times, and moved from an HDD to an SSD and then to another, bigger SSD, by copying the whole system and just enlarging the partitions.

The only issue I have right now is the lack of legacy boot on newer laptops; I'm not sure how to convert my disk to UEFI.


I've been using the same Archlinux install over the past 12 years or so. Initially installed in 2006 (version 0.7 or so), I only re-installed properly when I switched from 32-bit to 64-bit packages, and from ReiserFS to Ext4 in the same step. Since then I've been using the same install on my personal computer, and just rsync-ed the files between hdds, laptops, etc.

It's been pretty stable; had some hiccups with a dodgy kernel once, I think. Can't remember what it was specifically.

> zcat /var/log/pacman.log.1.gz-2018010214.backup | head -1

> [2009-02-23 18:14] installed filesystem (2009.01-1)


This is so good, but Arch only annoys me in one way: every week I need to download 500MB of updates. If I go away for a couple of weeks, my computer will most likely have 2 GB of updates pending.


How many packages (via pacman I assume?) do you have installed in total?


Not many. Standard dev tools like latest versions of VSCode, Chrome, JDK, Node, Ruby and some utils like Git, Gimp, Inkscape, etc. On Linux Mint I just have to do (maybe) an update of 100MB a week.


I have the same experience here. I installed Arch on 2011. I've switched laptops multiple times. Once I copied the entire disk as-is into the new one and grew the partition. Another time I rsync'd the entire disk.
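Roughly what the copy-and-grow looked like, as a sketch (device names are examples, filesystem assumed to be ext4, both disks attached while booted from a live USB):

    # clone the old disk onto the larger new one
    dd if=/dev/sdX of=/dev/sdY bs=4M status=progress
    # grow the last partition to fill the new disk, then grow the filesystem
    # (if the disk is GPT, let parted fix the backup header location when it asks)
    parted /dev/sdY resizepart 2 100%
    resize2fs /dev/sdY2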

Things broke probably one or two times throughout this time. I broke things more times while tinkering with internals or startup sequence myself. I've learnt an immense amount doing this too.

Arch requires a bit of effort getting into, but honestly, the KISS philosophy really pays off.


I use Ubuntu LTS to ease building software with Android, Yocto, some vendors' SDKs, etc., as they're all tested on Ubuntu LTS, as are many other projects from GitHub and elsewhere.

Every few years I do a full re-installation (instead of a dist-upgrade, as I have quite a few local installations that make things complex).

How does Arch cope with that? I like the live-update, never-need-a-full-reinstall side, but I don't want to spend time fixing those tested-on-ubuntu-lts-ready-to-go third-party software packages when using Arch.


So basically you use Arch Linux? (:

btw, I use openSUSE Tumbleweed - a more stable rolling release, and it's awesome. Never going back to regular releases with painful major updates every few years.


No offense, but how could Debian Sid and SUSE possibly have a more stable rolling release? For a rolling release you count on the default configuration working, and both Debian and SUSE are a hell of a lot of customization scripts. I don't think it's technically possible for that to work better.


They have https://openqa.opensuse.org/ for automated testing


Personally, I am using Debian Sid. The only somewhat painful part are the proprietary Nvidia drivers when a significant kernel upgrade occurs, but it's usually just a matter of selecting an older kernel in grub for a few days. Other than that, it's really up to date, and with an extensive choice of packages.


I have a similar experience. I have a 2005 Fujitsu LifeBook S2110 I keep for sentimental purposes. I installed Arch on it in ~2015, and I've been upgrading it ever since. I've had 2-3 breakages over the years, but every time I've just googled the issue and found an obvious fix. I've been so happy with the Arch rolling release that I now use it on my main daily-driver desktop. I switched over full time from macOS last year.


I'm fairly certain that my current Debian installation dates from ~2006, whenever I tired of running Gentoo (which I'd switched to for its amd64 support) and returned to Debian, which I had used from 2001-2004.

It is wonderful that these distributions maintain an ongoing upgrade path that lets us move smoothly through our computing lives with limited disruption. Support your local distros and package maintainers!

(Typing this comment reminded me that I hadn't donated to Debian in ages. Just did.)


I bought a Samsung NP900X3G about 5 years ago because it was so cheap, and put Arch Linux on it. I've used it as my main computer ever since. I've bought two Ryzen/RTX gaming laptops since then, but only use them to play some games. It's an 8-year-old computer, and I suspect that with Arch I can use it for at least several years more. I doubt many people here are still rocking an i5-4200U, but it works perfectly for me.


I don't get what the fuss is about.

I run several Arch boxes, a couple of which I believe are 15+ years old. One such installation has even been mirrored from one disk to multiple other disks and put into other machines, just because it was already set up as needed. Only once did a package I ran require manual intervention during an upgrade, and as always, that was clearly described on the Arch website.

This also goes for Debian, FreeBSD and OpenBSD.


I've been running Arch on my server (VPS) for over a decade and therefore haven't even noticed the X and audio issues. The only time something broke was when the Ethernet interface was suddenly named eth0 instead of the vendor-specific (?) enXsX or whatever it was, and I had configured systemd-networkd to use the absolute, exact name rather than some wildcard like e*. The error was located and fixed within five minutes.
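The fix was just loosening the match so a future rename couldn't bite again; roughly this (a sketch, the file name and DHCP setting are examples):

    cat > /etc/systemd/network/20-wired.network <<'EOF'
    [Match]
    Name=en* eth*
    [Network]
    DHCP=yes
    EOF
    systemctl restart systemd-networkd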


I had one of the worst debugging experiences of my life installing Arch (before I knew anything about linux, or programming), followed by probably the most delightful computing experience of my life using that installation for the next several years.

I don't use Arch anymore, but I think about going back to it all the time. Hopefully one of these days I actually pull up my socks and install it again.


Having to debug X or audio breaking several times even over 10 years is a non-starter. Had these issues 0 times over 10 years on mac, albeit on several machines, but I think it’s extremely rare for someone to reinstall Mac OS ever. Only time I did it was when I tried to setup a hackintosh and that really broke some things.


I get that some people don't like tinkering. They want something that will always Just Work. Some of my friends who were long-time Linux users switched to Macs and stayed there.

macOS wasn't that for me; brew was the cause of no shortage of pain, but even that could have been lived with. It's also not exactly stable: one company I worked for (about 7 years total) had a blanket request that people not update OS X to new versions for a few weeks or months while it was tested to make sure that the bugs had been ironed out; anyone who upgraded before then was on their own if they ran into issues.

Even that, I could (and did, when I had to) tolerate.

I hate the UI. It doesn't jive with how I want to operate. Settings exist for some things, but not others. It's just not my cup of tea, and I'm happier tinkering a bit to get exactly what I want.


I recently got a MacBook for work and I can't believe how many minor things just can't be changed. I don't think you can change the date format in the top right. It seems like you can't get rid of that damn Dock entirely without killing important processes and breaking things. (I'm able to hide it and put it on the left, so it's mostly out of the way, but it seems so anti-user to force this interface on everyone.)

I haven't yet looked for a "how to effectively use mac keyboard shortcuts" comprehensive guide, instead I've looked for things as I need them. I can see the benefits for introducing "cmd" where "Ctrl" is usually used on other operating systems.

But I'm very disappointed by cmd tab and cmd backtick. Often I want to press a single keyboard shortcut to switch between three windows or so: usually a few browser windows, a terminal, and an IDE. cmd backtick switches between windows of the same program, cmd tab switches between programs.

Can any more experienced mac user tell me the way to do this properly? How to switch between a few separate windows, like alt tab, without having to think about what program they are?


For the Command Tab issue, I created rcmd to fix it: https://lowtechguys.com/rcmd

It became really annoying to press tab 5 times just to get to the app I need.

If you’re interested in technical writings, I recently wrote about my journey to creating rcmd here: https://alinpanaitiu.com/blog/window-switcher-app-store/

The Dock stops being a problem once you set it to automatically hide and find ways to use the mouse less. Shortcat is another tool that helped improve my mouseless workflow, and is kind of like a Vimium for the whole system but with fuzzy search: https://shortcat.app


Thanks, this looks great. I'll definitely read this later.

But why doesn't the base macbook install support more of these features? I was led to believe (perhaps incorrectly) that I wouldn't have to tinker with a mac as much as I have with Linux. (I suppose that fine tuning keyboard shortcuts is very different from trying to desperately fix a video or wireless driver)

I assumed that apple optimized for a good user experience. Are "power users" (or even people that just want alt tab) not included in apple's UX goals?


Indeed, power users are not really what Apple optimizes for. They try to dumb everything down, and it helps them in their ultimate goal: get more market share.

You've actually stumbled upon the least configurable components of macOS: the Window Manager, and the Desktop Environment.

On Linux you can choose your own, and you have so many different paradigms. I still miss i3 wm..

On macOS you don't have this choice, and you have to use apps to get to the workflow you need.

I was a Windows power user for a few years, and I've now been using both Linux and macOS daily for about 6 years. In the end, I feel more productive on macOS nowadays, mostly because there are many quality apps to get anything I want done, I don't have to worry that basic OS functions will stop working when I update some dependency, and there are some macOS-native features that really improved my workflow.

For example I didn't know how useful Live Text would be until the first time I noticed that Command-F search in Safari also searches text in images, or when I double clicked on a phone number and I could just call it with my iPhone (which was in another room) but keep talking from the MacBook.

I can't even imagine how I would do that on Linux (surely doable, but nothing beats "already done and usable"), and it's just one of many features like that.

I will end with some more software recommendations: yabai for window management (https://github.com/koekeishiya/yabai) and skhd for hotkeys (https://github.com/koekeishiya/skhd)

They are more Linux-like, using config files, free and easy to forget they aren't native.


I used https://www.hammerspoon.org/ to write custom keyboard shortcuts for switching between specific applications. I don't have it anymore, but it didn't take too much perusing the docs to find a way to bind a keyboard command to look for a specific window, focus it if it was found, or open the application if it wasn't. SO much better than cmd+tab/backtick or mission control.


What I've discovered is you really need to buy and/or install a bunch of 3rd party applications to get that horrible UI to work in any usable fashion.


I've seen a lot of people recommend this approach when searching for solutions online. I was trying to embrace the apple way, rather than forcing it to match what I'm used to. Your comment might be the push I need to just give up and force it to match what I'm used to.

But if this is the case, why do so many developers buy and enjoy macbooks? It seems ridiculous that you have to pay such a premium for a nice laptop, and then find random 3rd party applications to make it work the way you want.

If I wanted to endlessly tinker then I'd be happy with Linux. I was under the impression that macbooks would "just work". I've also been disappointed by poor UX in some cases, like randomly showing "enter your password" dialogs.


> I was under the impression that macbooks would "just work".

It's just a matter of redefining "just works" as "just works, as long as I adapt my workflow and UI preferences to The One True Apple Way".


Your points are all valid, but I will point out that Homebrew not working isn't really an Apple issue. Also, this isn't quite the same level as X or drivers not working. It seems intuitive that less configurability is in fact a reason why macOS is more stable.


When comparing Linux to Apple's laptops, Homebrew is the de facto package manager. The App Store is pretty much GUI applications and tools, and I'm not putting my personal credit card info into a machine I don't own, so I never bother setting up the App Store anyway.

The primary package manager sucking is a very serious flaw for Apple when trying to get developers to convert from Linux.

FWIW, I personally don't ever recall an upgrade causing X or drivers to stop working, though I've been going through laptops at a rate of one every two or three years lately.

On the other hand, windows and Mac upgrades have certainly caused issues, and misbehaving applications can certainly cause issues on macs (cough teams cough).


For me, having to set up X or audio is a bigger deal breaker. Unless it's on NixOS, which provides a Nix config file that works like a translator between you and all the bazillion different system configuration files.


So what, I've likely had my Arch Linux install for a decade as well, just copying it to a new SSD every time I had a new device.

I even have two bootloader entries, one for Intel and one for AMD devices, so I don't need to reconfigure anything. ARM devices such as the PineBook or Olimex boards are a PITA, though. Never had patience with them.


Also a rolling release, I've used the same openSUSE Tumbleweed install for 4 years. The benefit is that if something breaks, I just reboot and rollback, and wait for a few days until they fix it. I've never had to worry about tinkering with failure due to update.
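For anyone who hasn't seen it, the rollback is roughly this (a sketch, assuming the default Btrfs + snapper setup Tumbleweed installs with):

    # boot a working read-only snapshot from the GRUB snapshot menu, then:
    sudo snapper rollback   # makes the booted snapshot the new default
    sudo reboot
    # the pre/post snapshots for each update are listed with:
    snapper list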


I switched between Arch and Debian, and the only really unstable thing I encountered was KDE. Even before I changed anything, my whole desktop crashed. I had to log in as root and kill my user session (as root, so that every process belonging to my user was stopped).
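For the record, that's roughly (a sketch; the username is an example):

    # from a text console as root: end every process in that user's session(s)
    loginctl terminate-user alice
    # blunter fallback if something survives:
    pkill -KILL -u alice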


Ok? I've been using the same Win10 installation since ~2016. It survived migration to another SSD, an upgrade from an Intel to an AMD CPU, motherboard, and RAM, and a migration from MBR to GPT...

The only times it crashed were when I OCed the CPU too much and when I was using Insider builds.


I've had the same Arch installation running since Spring 2020 on a desktop from 2012 or so. It's not my daily driver, but still works just fine. Even with some long periods between updates, I don't recall any serious stability issues.


I've had good luck once I get a system going with Arch. Only rarely are there manual interventions required for updates; I just run pacman -Syyu every few days. I try to install very few packages, only what I need for the server.


Did this with Gentoo for a decade, even across two computer changes. Boot into a rescue disk, set up GRUB, then for each partition do ssh oldcomputer "gzip</dev/sda5"|zcat>/dev/sda5, then resize the fs.
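Spelled out a bit more, it was roughly this per partition (a sketch; device names are examples and I'm assuming ext4):

    # on the new machine, booted from a rescue disk, with the old machine still up:
    ssh oldcomputer "gzip < /dev/sda5" | zcat > /dev/sda5
    # then grow the filesystem into the (larger) new partition
    e2fsck -f /dev/sda5
    resize2fs /dev/sda5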


Totally off topic... as a previous Arch user, I switched to Pop!_OS with my new laptop, simply because it worked out of the box with everything, and I had struggled to even make a proper SMB connection to a Windows network with Arch, not to mention automatic USB stick mounting, etc. However, as someone who would like to get back to Arch because GNOME uses so much power, and who misses the simplicity of Arch: is there some good tutorial for making it a decent distro right after installation, including sound, Nvidia graphics, network settings, and display (with an external USB-C connected display)?


I reinstalled Arch once since 2009, and it was to resolve a long-standing bug that I could not fix for the life of me. FWIW, it worked. (This was probably 6-ish years ago, I don't remember what the bug was.)


I don't think I've ever done an Ubuntu release upgrade without having at least one thing break. Admittedly, my sample size is rather small because I stopped using Ubuntu for that very reason.


Funnily enough my main box is an Ubuntu 22 machine that started life as an Ubuntu 16 machine.

The only problem I have at the moment is the root filesystem is on an early 64GB SSD and it's getting a bit cramped. Well, that and Ubuntu 22 really took a big steaming dump on Firefox. :( It has to run out of a container that cripples it.

I only update on even years though, so only 3 major updates. Not really all that exciting. I also have a FreeBSD box that started out as FreeBSD 8 something and is currently FreeBSD 13-RELEASE. This had some issues because the software RAID I was using became deprecated and I had to move all of the data off to a backup drive and rebuild the data drive at one point.


I tried to use Centos long term, that was a disaster. Ubuntu and Fedora both broke a few times but I was able to recover them. However, Manjaro / Arch feels the most stable in my experience.


> I tried to use Centos long term, that was a disaster.

What was a disaster about it? I would expect pain crossing major versions, but you could plausibly have installed CentOS on a machine and just stayed there for 5-10 years with nothing more than the odd `yum update && reboot` and I would have thought it would be fine.


Usually the pain comes when you find you need to use a feature in some software and the feature wasn't introduced until after the version that Centos ships. This happens more often than you might expect. Even fairly "modern" versions of Centos come with some really old and crusty packages.


Yeah, that's fair. I guess I was reading it as a complaint that the existing system had issues, but EL certainly has a painful tendency to not keep up with new stuff (which is rather the point, but that doesn't make it not painful :])


After a while I believe there were no more updates for CentOS 7. What is impressive about this story is that I was able to drag it into the modern era with Rocky Linux 9, but it was a... rocky transition. My biggest pain point was when the networking stack was broken and yum refused to do ANYTHING without loading sources from the network. I couldn't get around it without flash-driving RPMs to the server.
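(If anyone hits the same wall, the flash-drive route is roughly this; the package name and mount point are examples.)

    # on a machine that still has network (yumdownloader comes from yum-utils):
    yumdownloader --resolve --destdir=/mnt/usb NetworkManager
    # on the broken box, install purely from the local files:
    yum localinstall --disablerepo='*' /mnt/usb/*.rpm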

The uptime of the machine was great on Centos, but it came to a point I was afraid to reboot because of the package changes that could apply.

edit: I think some software I was using was EOL'd just for CentOS, which maybe also prompted the switch.


CentOS 7 "active support" ended in 2020 (which I'm guessing is what you mean?) but it's supposed to keep getting security support until 2024 (thus hilariously outliving both CentOS 8 and CentOS Stream 8) - https://endoflife.date/centos

Although certainly I can sympathize with the software eventually getting very long in the tooth, and jumping EL 7 to 9 could definitely be quite a change.


I stopped using Arch when I went to a meeting after upgrading the night before and realized my audio wasn't working during the call. You don't have time to RTFM then.


I've been using Arch for 7 years. I only destroyed it once, with Python packaging... and it took 5 minutes to repair. It's the most stable OS I've ever used.


I'm in the same boat. Never saw a need to reinstall. There have been hiccups where, after an untimely upgrade, the system wouldn't boot, but it's a 30-minute fix every 3-4 years.


One of my systems, a public-facing server:

    Server status at 2022-08-16 11:39:24
    System status: Database up for 1644.44 days.


How often did the upgrade process break? I've been using it for a few years and it broke a few times... but overall I'm very happy.


My install has been alive since 2014. I've never had the upgrade process outright break. Once in a while it requires some manual intervention like holding something back, or ignoring some package while upgrading. The only really "big" problem was when I switched from Nvidia to AMD and had to boot from a USB to restore the video drivers after failing to configure them completely.

This install has moved across 4 disks in this time. Never a reinstall, just an rsync and a new fstab.
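The move itself is pretty boring, which is kind of the point. A sketch of it from a live USB (device names are examples, and I'm assuming GRUB as the bootloader):

    # copy the old root onto the new disk, preserving hardlinks, ACLs and xattrs
    rsync -aHAX --info=progress2 /mnt/old/ /mnt/new/
    # update /mnt/new/etc/fstab with the new partition UUIDs (see blkid),
    # then reinstall the bootloader from a chroot:
    arch-chroot /mnt/new grub-install /dev/nvme0n1
    arch-chroot /mnt/new grub-mkconfig -o /boot/grub/grub.cfg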


> Once in a while it requires some manual intervention

that's what I meant by breaking


Kinda off-topic: how does Arch compare to Debian, in terms of UX and stability? (I'm a programmer and I use i3)


Running Arch for about 5 years. Then again, I'm dual booting Win10 and it's also chugging along just fine.


Similar for me: I got a new desktop in January 2015 and I've been dual booting ArchLinux and Windows 10 ever since. Everything is super stable and it has even survived several hard drive upgrades where I cloned the old drive onto a bigger one.

Soon I'll have to buy a new motherboard, because of Windows 11. I'm curious if I can still keep the same OS without reinstalling.


What is the advantage of keeping the installation instead of starting from scratch when buying a new computer?


No idea. I think of /home as the important part. Everything else is disposable.

If I could painlessly flip the FHS, I'd have something like /system, /data, and /config.

/system would be "Files you can download from your package manager verbatim". This is what apt and pacman create and update. If I lose it, who cares, just re-install the OS.

/data would be human-made and only human-made. Not even program preferences. This is the only thing I really care about backing up.

/config would be all the dumb little dotfiles that won't put themselves properly in $HOME/.config. This is stuff that might be important, but since I didn't choose the names of the files or for them to exist, I don't want them cluttering up /data, and I don't want a program complaining if I delete a file in _my_ /home that the _program_ thought belonged to it.

I think Android does something kinda like this. Android is right twice a day.


I a million percent agree with this. I've NEVER had major issues with Arch. It's my forever linux.


What a chad. I wish I was that consistent. I distro-hop about once every other week


Circa 2003 I used the same Debian install for 4 years without a reboot.

Fine


I have an Arch install that's been running for ~8 years. It's solid. Went through the same growing pains as others. Most recently the pipewire change.

Love Arch.


he uses arch, btw


Wow so arch works on thinkpads


Operating systems are not unstable. Users are unstable.


Of course OSs are unstable, in two senses: First, some OSs, somewhat dependent on hardware and software in play, just tend to crash. Second, OSs can be unstable in the sense of changing things; users are, in this sense, actually supremely stable and tend to be quite unhappy when the system decides one day to move menus around, rename programs, change shortcuts, shift around config files, require manual intervention for updates to work, etc. It is this second sense in which Arch tends to be less stable, and distros like Debian and RHEL are extremely stable.


I see you never used windows 95.


Windows had multiple issues. I recall multiple times having to basically do a reinstall after installing some update.



