> if the customer doesn't agree to Apple's or Google's TOS.
Or if Apple or Google arbitrarily decide that they don't like that customer. You don't have to have done anything wrong; they can decide that you're likely associated with someone who did.
When people ask for examples, I point to a NYT report about a man in San Francisco whose young son had redness on his penis and complained that it felt sore. The pediatrician asked for some photos to make a diagnosis remotely. Google flagged the images as child porn and notified the police. The police found no wrongdoing, but Google declined to restore his account.
In India, Google locked an engineer from Gujarat out of his Google account because it supposedly contained explicit content involving child abuse or exploitation. The engineer believes the trigger was images of him as a child being bathed by his grandmother.
I use these examples specifically because many in my government want "Chat Control", where snitchware scans messages for child porn and the like, and notifies the police. It will be full of false positives like these, especially if the scanning software continues to be built by puritanical American companies.
Another class is people whom the US deems a security threat. How long until the US extends its sanctions against the ICC by ordering Microsoft, Apple, Amazon, Oracle, and Google to shut down the accounts, work and personal, of the ICC and of anyone involved in its genocide investigation?