Does fixing it require physically going to each machine? Given the huge number of machines affected, if that's the case this outage could last for days.
The workaround involves booting into Safe Mode or the Windows Recovery Environment, so I'd guess that means a personal visit to most machines unless you've got remote access to the console (e.g. KVM).
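For reference, the published steps boil down to: boot into Safe Mode or WinRE, delete the channel file matching C-00000291*.sys under C:\Windows\System32\drivers\CrowdStrike, then reboot. Here's a rough sketch of that one manual step; Python is just standing in for the couple of commands you'd actually type at the recovery prompt, and it assumes the OS volume is mounted as C: (in WinRE it sometimes gets a different drive letter).

```python
# Rough sketch of the manual workaround step, not an official tool.
# Assumes the affected Windows volume is mounted as C:; adjust the drive
# letter if WinRE has mounted it elsewhere.
from pathlib import Path

CS_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def delete_bad_channel_files(directory: Path = CS_DIR) -> list[Path]:
    """Remove channel files matching C-00000291*.sys and report what was deleted."""
    removed = []
    for f in directory.glob("C-00000291*.sys"):
        f.unlink()
        removed.append(f)
    return removed

if __name__ == "__main__":
    for f in delete_bad_channel_files():
        print(f"deleted {f}")
```

After the reboot, the sensor pulls down a clean copy of the channel file on its own.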
It gets worse if your machines have BitLocker active: lots of recovery-key typing required. And it gets even worse if the servers that store the BitLocker recovery keys also have BitLocker active and are also held captive by CrowdStrike lol
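For what it's worth, one way to avoid that trap is to keep an offline export of the recovery passwords before you need them. A rough sketch, assuming keys are escrowed to AD as msFVE-RecoveryInformation objects, with placeholder hostnames and credentials; ldap3 is just one way to run the query.

```python
# Rough sketch, not official guidance: dump BitLocker recovery passwords from AD
# to an offline copy ahead of time, so a dead key-escrow server / DC doesn't
# strand you. dc01.example.com, the account, and the base DN are placeholders.
import csv
from ldap3 import Server, Connection, SUBTREE, ALL

server = Server("dc01.example.com", use_ssl=True, get_info=ALL)
conn = Connection(server, user="EXAMPLE\\backup-admin", password="...", auto_bind=True)

conn.search(
    search_base="DC=example,DC=com",
    search_filter="(objectClass=msFVE-RecoveryInformation)",
    search_scope=SUBTREE,
    attributes=["msFVE-RecoveryPassword"],
)

# Each recovery object is a child of its computer object, so the DN tells you
# which machine the password belongs to.
with open("bitlocker-keys-offline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["object_dn", "recovery_password"])
    for entry in conn.entries:
        writer.writerow([entry.entry_dn, str(entry["msFVE-RecoveryPassword"])])
```

Obviously keep that export somewhere that isn't locked behind the same keys (printed, or on media whose passphrase doesn't live in AD).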
I've already seen a few posts mentioning people running into worst-case issues like that. I wonder how many organizations won't be able to recover some or all of their existing systems.
Presumably at some point they'll be back to a state where they can boot to a network image, but that's going to be well down the pyramid of recovery. This is basically a "rebuild the world from scratch" exercise. I imagine even the out-of-band management services at e.g. Azure are running Windows and thus CrowdStrike.
• Servers: you have to apply the workaround by hand.
• Desktops: if you reboot and get online, CrowdStrike often picks up the fix before it crashes again. You might need a few reboots, but that has worked for a substantial portion of systems. Otherwise, the workaround has to be applied by hand.
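If it helps with triage, here's a rough sketch for telling apart desktops that came back on their own from ones that still need hands-on work. Assumptions: you're on a Windows admin workstation with rights to each machine's C$ admin share, and the hostnames are placeholders.

```python
# Rough triage sketch, not an official tool. A host whose C$ share answers has
# Windows up and has most likely recovered; one that doesn't is probably still
# boot-looping (or just offline) and will need the hands-on workaround.
from pathlib import Path

HOSTS = ["desktop-001", "desktop-002", "branch-kiosk-07"]  # placeholders

def windows_is_up(host: str) -> bool:
    """True if the machine's C$ admin share is reachable, i.e. Windows booted."""
    try:
        return Path(rf"\\{host}\C$\Windows").exists()
    except OSError:
        return False

for host in HOSTS:
    status = "looks recovered" if windows_is_up(host) else "unreachable - likely needs hands-on fix"
    print(f"{host}: {status}")
```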