> So to me this really looks like it applies neither to servers nor desktop.
It applies to both. We need desktops to boot up fast, because you said it yourself, sometimes they just need to. And no one likes waiting around for their machines to boot. Can you imagine the volume of complaints about long boot times that would come in to large-scale distros from annoyed users? That alone makes it a high priority.
And on top of that, we need servers to boot up fast, because nowadays they're virtualized and started/stopped constantly when services are scaled up and down. Can you imagine trying to scale up a fleet of servers and waiting a couple of minutes for each one to boot?
> I.e. sometimes computers just need to reboot, and there's nothing you can do about it.
And this is the attitude that brings us shitty software, and "I dunno, just reboot to fix the problem?", which is what we have now.
Short of kernel upgrades they really really don't.
But if you've bought into "oh, computers just need to reboot sometimes", then I guess you fall into the category of people who have given up on reliable software, or you don't know that there is an alternative and that no, this was never normal.
>>> We need desktops to boot up fast, because you said it yourself, sometimes they just need to
>> I didn't say that. Because they don't.
> Yes you did[…]
> people just Do. Not. Reboot.
Is that what I said? I don't believe you read what you quoted.
The main reason people reboot is because of shitty software that requires reboots. So if you want to go self-fulfilling prophecy, then systemd is optimizing for boot times because it's low quality software that requires periodic reboots?
But maybe you count forced reboots once a month (or every two months) for kernel upgrades (though the above argument applies there too, since those machines also run systemd and therefore need reboots). Fine.
So in order to save ten seconds per month (from a boot time of a minute or so, including "bios" and grub wait times, etc., so not even a large percentage) this fd-passing silently breaks heaps of services, wasting hours here and there? And that's a good idea?
And all for what? Because you chose to install services you don't need and don't use? And if you do use them, then no time was saved anyway; you just created a second wait period where you wait for the service to start up?
And ALL of these services could in any case be fully started while you were typing your username and password.
So what use case exactly is being optimized? The computer was idle for maybe half the time between power-on and loaded desktop environment anyway.
> If nobody cares, then why do people hate rebooting so much?
Because all their state is lost. All their open windows, half-finished emails, notepad, window layout, tmux sessions, the running terminal stuff they don't have in tmux sessions, etc… etc…
> And ALL of these services could in any case be fully started while you were typing your username and password.
This is the key point you are refusing to hear. No, all of the services on a modern Linux machine can't be started while you're typing in your credentials. So they're started lazily, on-demand, one of the classic techniques for performance optimization and a hallmark of good engineering.
Of course they can. How many services do you think there are installed, and how long do you think it takes to start them?
How long do you think it takes to start gpsd, or pcsd? Even my laptop has 12 CPU threads, all idle during this time. And including human reaction time (noticing that the login screen has appeared) this is, what, 10 seconds? 120 CPU-seconds is not enough? All desktops run on SSD now too, right?
In fact, how many services do you even think are installed by default?
And Linux, being a multitasking OS, doesn't even have just that window.
But you know, maybe it's a tight race. You could try it. How long does it take to trigger all those?
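If you want to put a rough number on the raw launch overhead, here's a toy measurement (my own stand-in, not a benchmark of real daemons: spawning short-lived interpreters is not the same as starting gpsd, but it bounds the pure fork/exec cost the init system itself pays per service):

```python
import subprocess
import sys
import time

# Toy stand-in for "starting 50 services": launch 50 short-lived
# processes concurrently and time the whole batch. Real daemons do
# more work at startup, but this bounds the per-process launch
# overhead that would eat into the ~10-second login window.
start = time.monotonic()
procs = [subprocess.Popen([sys.executable, "-c", "pass"]) for _ in range(50)]
for p in procs:
    p.wait()
elapsed = time.monotonic() - start
print(f"spawned and reaped 50 processes in {elapsed:.2f}s")
```

On any modern machine this finishes in a small fraction of the time it takes a human to notice the login screen and start typing, which is the point: the window is orders of magnitude longer than the launch overhead.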
> a hallmark of good engineering.
In the abstract, as a "neat idea", yes. In actual implementation when actually looking at the requirements and second order effects, absolutely not.
You know you could go even further. You could simply not spin up the VM when the user asks to spin up a VM. Just allocate the IP address. And then when the first IP packet arrives destined for the VM, that's when you spin it up.
That's also a neat idea, and in fact it's the exact SAME idea, but it's absolutely clearly a very bad idea[1] here too.
So do you do this, with your VMs? It's clearly "started lazily, on-demand, one of the classic techniques for performance optimization and a hallmark of good engineering".
[1] Yes, very specific environments could use something like this, but as a default it's completely bananas.
But no, it doesn't. Until your service is started, your service is NOT actually booted. That's what I said.
You are not paying per-second for the VM. The VM itself adds zero value to you. It's the service that's running (or in this case, not) that you're paying for.
Who cares how long it takes before systemd calls listen()? Nobody derives value from that. You're not paying for that. You're paying for the SERVICE to be ready. And if you're not, then why are you even spinning up a VM, if it's not going to run a service?
Starting services in parallel will reduce overall service startup time as well, even when services depend on each other, because services often do work before they connect to the services they depend on. Without socket activation that is a race condition.
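That queuing behaviour is easy to demonstrate with plain sockets (a minimal sketch of the principle, not systemd's actual mechanism): once a socket is bound and listening, a client can connect and send data before the daemon ever calls accept(), because the kernel parks the connection in the listen backlog.

```python
import socket

# "init" creates and binds the socket up front, before any daemon runs.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # ephemeral port for the demo
listener.listen(16)               # kernel backlog holds pending connections
addr = listener.getsockname()

# A dependent service connects immediately. No daemon has called
# accept() yet, but connect() still succeeds: the kernel queues it.
client = socket.create_connection(addr)
client.sendall(b"ping")

# Only now does the "slow" daemon get around to accepting. The early
# connection and its data were queued by the kernel, not dropped.
conn, _ = listener.accept()
data = conn.recv(4)
print(data)  # b'ping'
conn.close()
client.close()
listener.close()
```

This is why pre-creating the sockets removes the race: dependents can fire off their connections in parallel with the daemon's own startup work, and nobody has to poll or retry.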