I am not exactly surprised, but it is very sad to see, because whatever happens on locked-down mobile platforms, desktops seem to follow sooner or later. You can argue that power users and developers will always find ways around it, but what this does is effectively remove one more little bit of the freedom that lets users discover what their devices are actually doing, and I think that is a very bad thing in the long term.
Much of my knowledge about how computers work in various ways has been gained through exploring creatively and inspecting what things do. I use a MITM proxy on my PC that blocks ads, tracking, and rewrites webpages to my preference. I learned a lot of HTTP, HTML, and CSS just from doing that. But maybe that is exactly what those in power do not want --- users who can think and investigate things for themselves --- because such users are not easy to control.
Root users could use Xposed to inject the settings to trust user-added CAs into selected apps. More work, but power users do retain the same ability (for apps not using the NDK for networking).
Taking away the user's ability to manage their own security should bring with it the responsibility - and liability - for any problems that derive from the imposed settings.
The paternalistic attitude that users are and always will be ignorant is not only offensive, it is counterproductive. Security is not a product, and keeping people ignorant of the trust models they are relying on is a recipe for disaster in the long term.
Instead of pretending that users are always going to be ignorant of security and incapable of learning, the UI should be extended to make the chain-of-trust more visible in a way that helps the users understand their security situation.
Instead of pretending that one security model fits all situations, more control needs to be given and taught to the user, in a way that actually allows them to make decisions about their security situation.
I very much agree with the responsibility and liability argument that you're raising. I also fully believe self-driving car makers should be 100% responsible for accidents and hacking incidents involving their cars, and they should fully compensate the victims of such accidents and hacks.
However, I think this is generally a good thing. When Android has 50 different OEMs (or whatever the number is), then some standardization is a good thing for the user.
You don't want Samsung or Huawei or some local Turkish OEM, or perhaps even the retailers themselves (as we've often seen with malware on imported Chinese phones), to start loading up certificates on a device before selling it. Think about what Lenovo and Dell, or all the anti-virus companies, are doing to PC users with their own certificates, because Windows allows people to install their own certificates.
Also, in some countries, such as Kazakhstan, they want to start requiring users to download the "national security certificate" and install it in their devices.
This policy would prevent all of those things. That doesn't mean we still won't see CNNIC and Blue Coat/Symantec and other untrustworthy certificates loaded up by default in all Android devices (which you can still disable yourself), but I think overall this is still a good move from Google.
> This policy would prevent all of those things. That doesn't mean we still won't see CNNIC and Blue Coat/Symantec and other untrustworthy certificates loaded up by default in all Android devices (which you can still disable yourself), but I think overall this is still a good move from Google.
I'm ok with this. Even if I can't add a custom root certificate, I would like the ability to distrust any root certificate I explicitly do not like. That coupled with standardization of certificates loaded by default makes this sound like a welcome change.
Most people don't want to be taught something like this though, and why should they? The system can do what practically everyone wants it to do without that user education overhead, which is what they are doing.
This renders tools like mitmproxy unusable. But really, if it's your device, why can't you see your own traffic?
I can understand how this might improve security, but it locks you out of the conversation your own phone is having. Feels like reverse privacy; not even you can know what you're saying!
I believe Android is taking this approach because of hawkish network appliance vendors selling all-too-powerful gear to enterprises that don't care about what to decrypt and what not, causing too many weaknesses along the way.
I believe MITM decryption for enterprises is a flawed way of identifying intrusions; it doesn't stop or hinder any intrusions. It only provides a false sense of security.
Intruders will always be able to fool appliances by using encapsulation of multiple encryption protocols or using non-standard protocols.
I can't really say if Google is taking this approach to secure the device from heavy-handed enterprise admins; but if true, it's gone too far by allowing only the app to have private conversations with the API while not allowing the user to see what's being sent over the wire.
This isn't exactly new; we do, after all, have certificate pinning. But now this pinning is done at the OS level by DEFAULT, distrusting all certs except the ones that Google deems fit.
We know that there have been untrustworthy Certificate Authorities that all our machines trusted until they were deemed unworthy by our vendors... and eventually expunged! But this change explicitly distrusts us, the users of our own phones -- in the name of security.
User deftnerd (https://news.ycombinator.com/item?id=12061342) had an excellent suggestion that trusting user-added certs could be delegated to the TPM module (via password, passcode, fingerprint, etc.) -- rather than the heavy-handed approach of simply locking us out of our own phone's conversations.
This may be a conspiracy theory, but how probable is it that this move is actually motivated by app developers who want to make reverse-engineering harder? There have been cases in the past where hidden APIs were discovered that e.g. Twitter or WhatsApp were reserving for their own apps. (Not to mention privacy leaks.) This will certainly become harder with the new change.
Honestly, it shouldn't change that too much, since third parties like Cyanogenmod should be able to reverse the change. Of course, it's still a horrible move.
In my experience enterprise MITM decrypt is not usually deployed to identify or stop intrusions. It is deployed to enforce compliance with corporate usage rules and as part of a data loss prevention solution. Quite often these are both necessary to meet regulatory requirements.
MITMing the world won't do anything against data loss.
It might "protect" you against data exposure (though I wouldn't count on it), but that shouldn't be affected by any regulators unless your employees have access to WAY too much user data.
Data loss prevention refers to preventing unauthorized, purposeful or unintentional, access or transmission of sensitive or critical information. A comprehensive DLP solution covers data at rest, data in use and data in motion. Network DLP solutions help address the data in motion. They use MITMing in order to inspect data leaving the enterprise. They are often deployed in order to meet regulatory data protection requirements.
For example, I used to work in investment banking. It was well known, and expected, that we (and probably most other institutions) had network-level monitoring, to prevent, say, the leaking of a deal on some chat or web forum somewhere. Believe me, when there's lots of money involved, there are plenty of incentives to leak things.
You'd look pretty silly if you had to explain to the regulatory authorities that you took zero steps to secure your network perimeter, or prevent the exfiltration of privileged or confidential data.
And there are other legitimate use cases - educational institutions and schools come to mind.
You still can, it just involves a bit more work: grab the .apk, decompile it, insert your own network-security-config into the manifest XML, repackage and install onto the device.
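For anyone curious, the config you'd inject is just the new network security config format; a minimal sketch (the resource name here is made up, the elements are the documented ones):

```xml
<!-- In AndroidManifest.xml, point the <application> element at a config resource
     (the resource name "user_trust_config" is made up for this sketch): -->
<application android:networkSecurityConfig="@xml/user_trust_config" ... >

<!-- res/xml/user_trust_config.xml: re-enable user-added CAs alongside the system store -->
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="system" />
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>
```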
Xposed can do the same without modifying the apk file in storage: instead, it intercepts the calls and modifies the responses to match whatever you want the app's config to be.
There is already kind of a parallel to this - you can't see the raw data that the apps on your phone are storing unless you root the phone. It's all protected under the app's userid that you can't access. The only exception that I know of is for apps built in debug mode.
There are several ad blocking solutions for Android which involve creating a VPN connection that terminates at an app local to the device. This allows the app to filter all traffic for ads, including ads inside apps. These solutions depend on installing a user CA certificate in order to filter TLS connections. I wonder how much preventing this type of ad blocker played into this decision.
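For context, those blockers are built on Android's VpnService; a rough sketch of the capture side, assuming a made-up class name and addresses (only the VpnService/Builder calls are real API, and the user still has to approve the VPN via VpnService.prepare()):

```java
import android.content.Intent;
import android.net.VpnService;
import android.os.ParcelFileDescriptor;

// Sketch of a local "VPN" that never leaves the device: the app reads raw IP
// packets from the tun fd, drops or rewrites ad traffic, and forwards the rest.
public class AdFilterVpnService extends VpnService {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        ParcelFileDescriptor tun = new Builder()
                .setSession("ad-filter")       // label shown in the system VPN notification
                .addAddress("10.0.0.2", 32)    // virtual interface address (made up)
                .addRoute("0.0.0.0", 0)        // send all IPv4 traffic into the tunnel
                .establish();                  // fd the app reads/writes packets on
        // Filtering inside TLS traffic (rather than just DNS) is where the
        // user-added CA certificate comes in, which is what this change breaks.
        return START_STICKY;
    }
}
```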
I joked to a friend that if Google invented a time machine, the first thing they would do is travel back to remove the extension support in Chrome before it was launched - the ad blockers.
Chrome didn't support the proper method for ad blockers at launch (it could hide content but not stop the requests, if I remember it well), and they purposely added that in months after launch, clearly for the benefit of the ad-blocking extension writers who were complaining.
Google is a big company; I'm pretty sure not everyone there is against adblocking. Or perhaps they managed to sneak request blocking past those who were against adblocking by advocating a different use-case --- "reader mode" comes to mind.
Yes, until management finds out they are losing billions due to ad blocking. IMO, not bringing extension support to Chrome on Android is for the same reason.
Google doesn't allow these apps in the Play Store due to their ability to block ads in apps. Several can be found in the Amazon app store; AdClear and AdGuard are two examples. By utilizing a local VPN to redirect traffic for filtering, these apps are able to work without requiring root access, unlike other ad blocking solutions on Android.
My favorite is (open source) netguard. It is really easy to use, nice ui, nice default options. I will be super bummed if this will no longer work.
I guess I was wondering how Google would resolve the conflict between their basic function -- making money by serving ads -- and a user experience which is almost always improved by blocking ads.
Mobile Chrome will never have extensions, excuses aside, because Google had the opportunity, with a new platform and a new browser, to make sure ad blocking wasn't as easy as installing an extension.
I personally run mobile Firefox with uBlock Origin. Wonder if that will always be possible?
If I need to root my phone to get a system level VPN / firewall / adblocker, I might as well just get an iPhone and jailbreak.
Netguard uses a hosts file to provide incorrect DNS results to prevent ads and doesn't filter the data, so it doesn't need to install a user CA certificate. This approach, while generally effective, can't block certain types of ads or tracking data that the other solutions, or something like uBlock Origin in your browser, can.
I would rather see the OS let people load the cert but then require the user to enter their PIN, password, or unlock drawing. Then the cert can be signed by the PIN/etc. and trusted.
This would allow certs to be added, but prevent them from being silently side-loaded by an admin or malware. Changing your PIN would invalidate the certs, but you could just be prompted to re-sign them.
After reading the article, I think what you describe can easily be implemented -- at the _app_ level.
The big change is that Android no longer provides an ability to add a CA for all apps on the device ("device global CA"). There is only one "global CA store" now, the one shipped with Android.
Device updates can update the CA store, and the article talks about how to get your CA included.
And how can I add the CA of my university, for example, which is required for some university networks?
Especially because it has to be in the global store?
Or how am I supposed to use my legal right to reverse and understand the functionality and APIs of software I have installed?
EDIT (as I can’t create new comments for the next hour): The certificate is not used for HTTPS – but for TLS for IMAP, for example, and for some internal services, eduroam, etc. Obviously, that means the email app needs to use the cert too, in addition to the Android system and a bunch of other apps.
It seems likely the email app would opt in to user certificates.
Once the university (and their providers) have modified their apps to opt in, the user sees a net benefit. I guess it might be hilariously optimistic to expect the university and all of their providers to actually fix their software, but that's the other way of looking at that specific problem.
It’s not a university email app. I’m using K9 with that.
Others are using GMail, or AOSP mail, etc.
You can’t seriously expect every single Android app to add the feature back, do you?
And even then, I still get no benefit – I have to update my own fork of AOSP mail, too, I have to update a bunch of system apps, I can’t use eduroam properly anymore, and I can’t MitM the traffic of my own device anymore. The only thing this does is make my life worse.
Every single Android update since KitKat has done that. Made my life worse. Removed features, killed AOSP stuff, moved code into proprietary apps, and prevented people from modifying their own devices.
I only expect the apps where a user certificate is useful for normal operation to add the feature back. Other apps get better security by not enabling user certificates.
Do you trust your university more, or "TÜRKTRUST Elektronik Sertifika"?
Who is that, you ask? That is precisely my point. I have no idea, none at all. I do happen to know what my university was (and why it was running its own CA). Yet, this one is doubleplusgood for me. Trust Google, it knows best.
(I went to my Android's builtin trusted CA list, this was the very first one - out of a list of about 100, or maybe 200)
1. The university is explicitly asking to MITM your traffic, so that's an automatic no-go on any of my devices for me. At least other CAs will be punished if caught.
2. I'm fairly certain these university certs are not subject to certificate transparency, and even if they were, since they are explicitly designed to MITM traffic, I'm not sure that it would raise any red flags if someone other than the university was issuing these certs for inappropriate domains. At least TÜRKTRUST Elektronik Sertifika is much more likely to get caught doing anything fishy when issuing inappropriate certificates (for high-profile sites, anyway).
3. What you're really trusting is the vendor of whatever security gateway, not the university itself. If you look at the Security.SE thread I linked, there was an actual issue with at least one of the black box vendors. I don't think the other vendors are considerably better.
So, all-in-all, I would not accept an MITM certificate on any device that I used for anything personal, full stop.
I was talking about MITM certificates. I don't really understand why a university is not just getting a certificate from a CA in the trusted store, but I would have many fewer qualms about something that is not a general MITM certificate. That said, my experience is that generally applications that have a legitimate reason for a limited-use self-signed certificate don't need to be in the general store. For example, it's common to set up OpenVPN with a self-signed certificate, but I've never had to add a certificate to the general store for that - it's always app-specific.
That is already the requirement if you have a lock set up for your device, except for the last part about changing the lock code removing all the user-added certs.
This is a pretty annoying move for anyone who has their own CA they use to sign internal domains. I wonder if they will also have Chrome not trust them...
Keep in mind, this specifically affects apps. Not necessarily Chrome, which I would guess is going to use the Trust API to allow them. So unless you have custom apps that access internal domains, then you shouldn't be impacted at all.
Root and Xposed, yes. That's probably going to be the easiest solution.
Root doesn't inherently reduce security - it only adds as much attack surface as you choose to add. Adding root services with bugs is dangerous, but one-off tasks with secure code and a secure root manager aren't a problem.
Totally against this. Why not leave the choice to the user instead of Google enforcing this? MITM is good for some purposes, e.g., debugging, enterprise filtering, etc.
While Google collects all my info by default these days, what's wrong with letting me install my own CA locally? Google is becoming an online policeman more and more these days.
I now hope Firefox OS or Ubuntu Phone OS prevails. Also I'm hoping there is a strong google search competitor soon. I no longer favor Google, though it's difficult to find an alternative at this point.
> Why not leave the choice to the user instead of Google enforcing this?
Because often the choice isn't made by the user.
> MITM is good for some purposes, e.g., debugging
The linked page documents how to enable custom CA certificates for debugging. If you're trying to debug an app that doesn't want to be debugged, you're probably running a custom rooted version of Android anyway.
> enterprise filtering
Which the user should be explicitly aware of if it's happening.
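For reference, the debugging carve-out mentioned above looks roughly like this (a sketch assuming the documented debug-overrides element; it only takes effect when the app is built with android:debuggable="true"):

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <debug-overrides>
        <trust-anchors>
            <!-- trust user-added CAs (e.g. a mitmproxy cert), but only in debug builds -->
            <certificates src="user" />
        </trust-anchors>
    </debug-overrides>
</network-security-config>
```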
1. Then make some password-protected mechanism instead of taking that choice away. That's the key issue: Google is trying to make decisions for its users because Google may think it knows better.
2. OK thanks.
3. That is a corporate policy issue; the policy can state that the entire corporate network is monitored and that a local CA is installed, etc. Without a local CA, all HTTPS sites will pop up warnings, which leads to more problems. By the way, corporations normally tunnel a safe list of HTTPS sites without doing any MITM, such as banks and well-known "good" websites.
This is just terrible news. Android's treatment of user-added certificates is already terribly broken (why does it warn me that my network connexions may be monitored when I install my own certificate, when Android already trusts e.g. Symantec?), and this makes it worse.
What they should have done was gone the other direction entirely. If I — the owner of the phone — choose to trust a certificate authority, then every app should obey me, without complaint or warning. It's my phone, not Google's.
Right, but you are warned forever* if you have a user-installed CA on your phone and you go to a site or use an app that uses that cert.
At some point, it should recognize that you've continued using the cert XX times or for XX days or both and stop trying to guilt you into removing it via the use of some illusory 'you might be insecure' bogeyman, when Symantec, an already 'trusted' CA, could issue any cert for any site and you'd have no recourse or ability to tell if they should have.
Everyone 'might be insecure', that's just how it works.
*Maybe this goes away at some point months down the line, but it's at least 30 days and has to be some number in the 1000s of uses if the warning eventually quenches.
I feel this is a great win in terms of security for the average consumer. Sure, VPN connections within organizations may get affected, but they've got a workaround for that, and I'm sure IT at orgs that have an internal CA can figure it out.
The recommended path for large orgs (one of which I was recently employed at, and attached to a relevant project) is to:
1. Ensure all internal domains can also be registered externally (although the actual external registration step is optional), but this means no more .local or .companyname type TLDs internally.
2. Use an external CA to provide your certs, for both internal and external use.
Obviously, this means a bit of pain[1] for existing non-standard internal domains, but with the big shift to cloud, this is less of an issue going forward.
Also, external CAs will no longer provide you with a cert for internal domains that are not also able to be registered externally.
Are you familiar with any CAs that will issue certs for a few hundred servers (under the same ___domain) that aren't on the public internet, with an automation API and at a reasonable price?
Let's Encrypt isn't an option, because (a) the servers aren't on the public internet and (b) even if they were, LE is limited to 20 certificates per ___domain per week.
I'm aware that Plex has some sort of deal like that, but is that a one-off thing, and if it's something any company can get in on, how expensive is it?
I know some CAs offer 'enterprise services' but their websites don't seem to spell out what that means or what it costs - just that you should contact them to schedule a demo, which probably means the costs are eye-watering.
GlobalSign's CloudSSL will issue certificates with up to (IIRC) 200 SAN extensions for somewhere around $2k, but wildcard certificates will cover most use cases at a much lower price point.
A wildcard is a bad idea here since one compromised host would have a certificate that works on all other hosts. A better solution would be a sub CA that's restricted to issuing certificates for that ___domain only, but name constraints is an extension that some (obsolete) clients might not handle correctly.
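For illustration, the name-constraint idea can be expressed in an openssl extensions section along these lines (a sketch; the section and ___domain names are made up, and as noted, some older clients mishandle the extension):

```
# openssl.cnf fragment applied when the root signs the sub-CA's certificate
[ v3_constrained_sub_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage         = critical, keyCertSign, cRLSign
nameConstraints  = critical, permitted;DNS:internal.example.com

# e.g.: openssl x509 -req -in subca.csr -CA root.crt -CAkey root.key \
#   -CAcreateserial -extfile openssl.cnf -extensions v3_constrained_sub_ca -out subca.crt
```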
I was going to try writing an addon/extension for Chrome to make a visible warning indicator when a site was trusted only via user-added certs. That would've allowed me to see when my company was intercepting my traffic at the proxy (the proxy decides not to intercept some sites, mostly ones w/pinned certs). Unfortunately this info doesn't seem to be available to the framework. :(
I think this decision should be inverted - user/admin-installed CAs should be trusted by default (but not able to be preloaded by, say, a malicious employee at a phone store), while the entire public CA list (aka "people we expressly allow to MITM you") should be run past the user and require their approval.
But that's a problem everyone seems comfortable with ignoring...
A user would not be able to tell whether they can trust a public CA (they probably have nfi what it is).
A user-installed CA is more likely added by automated means (such as a jailbreak, or malware) than by the user themselves. This choice means they can at least stop some malware/crapware. Yes, the legitimate users with self-installed CAs got screwed over, but in Google's eyes, those people are going to put up with it, because they've already invested in the Android platform (and won't switch to iPhone).
Especially since Android 5.x already shows a huge obnoxious "Your network may be monitored" warning if you have installed a local CA.
It's a shame you can't configure certain domains to ONLY trust a specific LOCAL CA system-wide. It would completely eliminate the security problem that is untrustworthy or hacked global CAs, for example for your own/enterprise mail server.
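At the per-app level, the new config format does allow roughly that, for whatever that's worth; a sketch with made-up ___domain and resource names (the CA cert gets bundled as a raw resource in the app):

```xml
<network-security-config>
    <___domain-config>
        <___domain includeSubdomains="true">mail.internal.example.com</___domain>
        <trust-anchors>
            <!-- for this ___domain, trust ONLY the bundled enterprise CA -->
            <certificates src="@raw/enterprise_ca" />
        </trust-anchors>
    </___domain-config>
</network-security-config>
```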
The way I understand it, this only affects apps, not Chrome for Android.
Chrome on desktop follows the HPKP RFC on this matter[1]. Failing pins that chain up to local CAs would break many deployments. On mobile, apps that implement certificate pinning are usually already broken if a MitM proxy is used, so it's a different context and this move improves the situation for apps that haven't moved to cert pinning yet (it's not a full replacement as it only helps with the "malicious local CA" problem).
The implications of an attacker being able to import a CA certificate are different on mobile as well, IMO. On desktop, this usually implies administrative access, in which case anything Chrome could do would be pointless, as the attacker could also just replace/modify binaries, install a keylogger, etc. Android has a more fine-grained permission system, and an attacker who tricks a user into installing a CA certificate would not necessarily be able to do these things, so this would definitely make things harder for them.
Why would this impact that? Apps can still use whatever certs they want, they just won't automatically pick up certs that are added at the system level.
> their certs are more trustworthy than 90% of the certs that users would add.
Why? Why wouldn't I trust my own cert more than any Google-trusted certs? I'm sure the majority of CAs are responsible entities, but I trust my certs more, because I control them!
Exactly! I will absolutely not buy Android N (I've owned many Android devices so far). Google collects all user activities by default while not letting me trust my own CA; how ironic. I guess Google thinks only it is trustworthy and that it has the right to babysit whoever uses its devices, in the name of protecting its users... No thanks, Big Brother.
Android has always been wonky about private PKI. One of the big initial reasons why a past employer went all iOS.
Apple also lacked (I haven't looked in detail in a while, so ymmv) a global proxy capability, requiring stupid solutions like GRE tunneling or mandatory VPN.
Can this be bypassed with root access? (Note: stuff like patching binaries in memory doesn't count... I mean something that's reasonably simple for users to do, like editing a config file or something.)
> Note: stuff like patching binaries in memory doesn't count... I mean something that's reasonably simple for users to do, like editing a config file or something
If you write an app for it, that'll make it simple for users. (And then no doubt you'll get a bunch of "security" people complaining that your app is malware...)
If you plan on doing that, it might be worth looking into a similar project that uses the XPosed framework to bypass certificate pinning at a low level:
I share your frustration and I'm interested in your work on this, if you do proceed please consider making it easy for others to follow your lead by publishing, anonymously if necessary.
I get that the intention is to make the device more secure but Google's execution on this is disappointing and lazy, and the result is that my devices no longer trust me the owner, which I don't consider acceptable given the lack of liability.
https://news.ycombinator.com/item?id=9078741
https://news.ycombinator.com/item?id=9078762