You'd think a well-designed distributed system would be able to handle the failure of a component and fall back to a different configuration to handle the problem. That's the fundamental feature of Internet routing systems: they expect components to occasionally fail. Building a distributed system that can't handle such failures just seems like sloppy engineering.
The joke is that no matter how many redundancies you add, the complexity is so far beyond a human's ability to map at once that you always miss a single point of failure.
Each failure scenario must be handled gracefully in each subpart, taking into account every possible impact on the whole system.
You'll always get some random crap you've never heard of destroying an assumption you made in your own crap.
I remember an instance where a national mobile phone provider in France had a single DB outage, but during a high-SMS-traffic day (think New Year's Eve). The failover happened, but it totally overloaded the new servers and created a chain reaction to other providers, to the point where you couldn't read your email for a day or two because you couldn't receive SMSes. Why does a random DB at a provider I don't even use impact my Google account? Because the system is distributed.
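That failure mode (a failover target getting crushed by the original load plus everyone's retries) is what load shedding and circuit breakers try to prevent. To be clear, I have no idea what that provider's architecture actually looked like; this is just a minimal sketch of the idea in Python, with a hypothetical `send_sms` backend and made-up threshold numbers:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: fail fast instead of letting retries
    pile onto an already-overloaded failover backend."""

    def __init__(self, failure_threshold=5, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (healthy)

    def call(self, fn, *args, **kwargs):
        # If the circuit is open, shed load until the reset timeout passes.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: shedding load")
            self.opened_at = None  # half-open: let one call probe the backend
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failures = 0  # a success closes the circuit again
            return result


# Hypothetical usage; send_sms is a stand-in for the real backend call.
breaker = CircuitBreaker(failure_threshold=5, reset_timeout=30.0)
# breaker.call(send_sms, recipient, message)
```

The point isn't this particular pattern; it's that without some deliberate back-pressure, a "successful" failover just moves the overload somewhere else.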
In our distributed systems lecture, my professor started the introductory lesson with a few definitions of "what is a distributed system." The one by Lamport was by far the most quoted in exams, as it is the most memorable, because it is funny.
After watching Lamport's introduction to TLA+, I got the impression he's just a really funny guy (while being a genius, too).
“A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.” — Leslie Lamport