I would argue that the technical limitations of iOS are what accomplishes this, rather than app review. For a malicious actor, sneaking prohibited behavior past app review is incredibly easy - look at what Fortnite just did! The reason that apps on iOS can't damage your device is that apps are sandboxed, and that the OS requires user permission to access data and places limits on how many resources an app can use. There's no reason that the same sandboxing system couldn't be applied to apps from outside the store.
(VPNs and provisioning profiles are sort of an exception to this, because they can escape the sandbox, but a) the number of scary warnings presented by the system should be enough to limit their impact, and b) they will also continue to exist separately from the app store issue).
Sandboxing is important, but is only one part of the protections.
There's also:
- App store guidelines on what is and is not permissible in different cases
- Restrictions against using private APIs
- Restrictions against jailbreaking the device
There are a variety of VPN apps available on iOS. Why was Onavo blocked? Because it violated the guidelines on the use of the information, which is the kind of thing that is difficult to automate.
Restrictions against using private APIs are semi-automated, and would be difficult to completely automate.
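To illustrate why a fully automated check is hard: the automatable part is essentially a static scan of the symbols a binary references against a denylist, and that's easy to evade by building the symbol name at runtime. A minimal sketch (the selector names and the denylist here are hypothetical, not Apple's actual tooling):

```python
# Hypothetical denylist of private selectors, standing in for whatever
# list Apple's automated scanner actually uses.
PRIVATE_SELECTORS = {"_setStatusBarHidden:", "_terminateWithSuccess"}

def flags_private_api(imported_symbols):
    """Return any denylisted selectors that appear in a binary's symbol table."""
    return sorted(PRIVATE_SELECTORS & set(imported_symbols))

# A direct reference shows up in the symbol table and is caught:
print(flags_private_api(["_terminateWithSuccess", "viewDidLoad"]))

# But a selector assembled at runtime (the Objective-C equivalent would be
# NSSelectorFromString on a concatenated string) never appears as a static
# symbol, so a scan like this cannot see it -- hence the need for human review.
evasive = "_terminate" + "WithSuccess"
print(flags_private_api(["viewDidLoad"]))
```

The evasion trick is why the check can only ever be semi-automated: the full selector string only exists at runtime.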
The fact that you can't get an iOS app that jailbreaks the device in order to do whatever it wants is in part due to human review - if one existed in the store, it would get pulled, and the developer cert would get revoked. Jailbreaks exist, and human review in the app store is one way they are mitigated.
I for one remember the bad old days when playing a CD could (and did!) install a rootkit.
Here's another platform that does just fine with sandboxing. It's been running for 27+ years. It's called the web browser. Restrictions against using private APIs? You can't call any private APIs; it's impossible. Find an exploit? It's generally fixed in a few days.
Software isn't even installed; new versions are downloaded daily or more often, so the concept of sandboxing has been thoroughly tested and proven effective for those 27 years.
The difficulty with that argument is that Apple has gone out of their way to make webapps second-class citizens. PWAs can't do everything installed apps can.
But that's not the point - the point is that sandboxing largely solves these issues without the need for restrictions on side loading, restriction to a single app store, or similar abuses of consumers' rights.
Apple should build a better sandbox; the idea that "private APIs" exist and the only thing stopping them from being used is a basic string search during App Store review is pretty horrifying.
And then you notice that your browser demands almost the same amount of resources as a 60+fps 3D game just to present you a bunch of static images and some text. It's an apples-to-oranges comparison, because that kind of performance requires an unsandboxed, non-emulated native environment, which is hard to protect from exploits. Replace the browser with any OS in existence and see how secure it is to execute an arbitrary binary on it.
All iOS jailbreaks are a result of security vulnerabilities, which Apple tends to fix almost as soon as they're discovered - and ultimately, it's Apple's responsibility to make their sandbox secure, regardless of what's running in it. I also don't see how installing outside apps would make jailbreaks any easier, given that you can already connect your phone to a computer and temporarily install an app on it (and for people who are motivated to jailbreak, this isn't much of a hurdle).
I haven't done enough iOS development to know for sure, but I'm assuming Apple could prevent private API usage by apps through technical means, rather than just app review.
Kind of. They cannot prevent private API use by their own frameworks running in the same process as the app (when an app uses an Apple UI widget, for example). Things that apps should generally not be able to do have already started being locked down using entitlements, which prevent third-party apps from using those APIs regardless of whether they can sneak it past review.
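For illustration, entitlements are declared in a plist that gets baked into the app's code signature, and the OS enforces them at launch and at runtime, so they don't rely on a reviewer noticing anything. A sketch of such a file (the key below is the real Network Extension entitlement used by VPN apps; the surrounding framing is my summary, not from the thread):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Entitlement allowing the app to provide a packet-tunnel VPN.
         Without this (Apple-granted, signature-enforced) entitlement,
         the corresponding APIs simply refuse to work at runtime. -->
    <key>com.apple.developer.networking.networkextension</key>
    <array>
        <string>packet-tunnel-provider</string>
    </array>
</dict>
</plist>
```

Because the entitlement is part of the signature, an app can't add it to itself after review; that's what makes this a technical gate rather than a policy one.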
> and for people who are motivated to jailbreak, this isn't much of a hurdle
And also because, once you're jailbroken, you can set up software to automatically re-sign the app on-device every few days, so you never need a computer again.
I thought jailbreaking worked by using exploits to disable code signing. As in, there's no need to sign an app. Have things changed in the past few years?
Most Jailbreaks today are "tethered" in some way, which means the Jailbreak disappears (to varying degrees†) once the phone is turned off. For Jailbreaks like unc0ver, this means you need to re-run a bootstrap app every time you reboot your phone, in order to return to "Jailbroken" mode and allow unsigned code.
This, of course, is a catch-22. You need to run an app to allow unsigned apps, but that app can't run if it isn't signed.
---
† The community makes a distinction between "tethered", "semi-tethered", and "semi-untethered" jailbreaks. The jailbreak I described above is "semi-untethered". You really couldn't come up with terminology more prone to getting mixed up...
IIRC Apple has the ability to push a blacklist of apps that have slipped through review, preventing them from running, not just from being installed. To my understanding, they've only ever used it for actual malware though, not for apps that they've pulled from the App Store due to "regular" breaches of the rules.
Apple can just revoke the certificate of developers that sign malware, preventing them from running. They also have the ability to pull apps from your device, but have never used it.