Hacker News

> I don't see the threat model they are addressing.

The threat model they're addressing is the one where users have a small semblance of control over their devices and networks. I've been saying for years that HTTPS everywhere, DoH, eSNI (or its successor), etc. are part of a long-term plan for big tech to have absolute control over what users are and are not allowed to do with their own devices.

You can't see the threat model because, to them, you're the threat. From that GitHub issue:

> It does not prevent nor attempt to prevent you from doing those kinds of things.

The thing missing there is the "for now" qualifier. Once they know it won't impact anyone with the power to cause problems for Google, they'll remove the config flags and lock us all out. The same strategy has been used over and over and over in the last 10-15 years.




As a point of clarification, lest anyone think you are actually being serious...

The threat being addressed here is the proliferation of VPNs that also install a local trusted root[1], rogue roots being installed at border crossings[2], and entire countries mandating a MitM root to egress traffic[3].

I do believe they could have done a lot more to improve the developer experience, but this does address a legitimate security concern for millions of users.

1. https://www.techradar.com/news/new-research-reveals-surfshar...

2. https://www.vice.com/en/article/neayxd/anti-virus-companies-...

3. https://www.eff.org/deeplinks/2022/03/you-should-not-trust-r...


What gives me pause is that the above examples involve state actors. With no deep knowledge of the field, I don't know how much Chrome on Android can ever do to mitigate a sovereign state's policy, especially as phone systems all have local specificities introduced either through the carrier's software or through outright exceptions made to follow the country's regulations and culture.

[Edit: User-introduced VPNs are another issue, but that comes down to stopping a user from meddling with their own phone, which is also tricky in my opinion]


What about your phone vendor injecting a CA into your phone so they can decrypt HTTPS traffic and inject ads into the webpage?

When they have that capability it’s not a big leap to other things.


Yes.

This kind of stuff was already happening at so many levels. Before HTTPS everywhere, I saw a phone carrier auto-proxying requests and injecting additional ads into the responses. I can't imagine they just gave up on that revenue stream when pages switched to HTTPS.


These are legitimate concerns, but this is like beheading to treat a headache. Technically it works.

I'd suggest adding a notification dialog when a root cert is added, and a good, clear UI to manage the certs: when each was added, by which app, with options to disable, remove, etc.


Yeah. Just like they do now: those are the certs. Install them. Very good for security.


As a point of clarification, lest someone take you seriously: clear warnings are what is needed, and smart users. Instead of raising awareness, Google et al. have been trying to hide HTTPS indicators and parts of the URL, so that they can maintain control.

As someone who runs their own cloud top to bottom with custom CAs, adding a trusted root CA is a pain. Removing the ability for me to run my CA takes away control of my own device from me and puts it in the hands of the big companies.

You should hard reset your phone when crossing borders. You would do the same if someone borrowed your clothes.

Would you lend someone your clothes with your passport and wallet in them? Then why is a phone any different?


It sounds like you are the type of person that should just be rolling their own browser anyway.

Like I said, there are a lot of things they could have done better here. But the threat is real, and it's not some tinfoil conspiracy by "big tech." It is our job as technologists to first and foremost do what we can to protect the 99.999% of users who do not run their own CA.


Running your own CA is a pretty common thing for companies to do, to manage internal SSL certificates. And telling systems to trust it IS a pain. Even on desktop you can't just drop a file in a folder, because Chrome and Firefox don't trust the system CAs, so you have to configure those separately, and possibly other applications as well.

I don't think it is some big conspiracy, but it isn't a good situation.


I think there are other, more effective, mitigations in place against the threats you are describing. First, there's no automated way to install CAs on Android. Users must do this step manually (the articles you linked are about Windows). Second, if you have installed a user-added CA, you get a prominent and permanent notification – non-dismissable, reappears after reboot – that your network traffic may be monitored. All this stops the "secretly-added CA" threat.

Finally, the current implementation is not effective at protecting against country-level MITM. Attempts at country-level MITM have been thwarted by browser updates that blacklist the respective CA certs; the same can be done on Android.

I agree those are legitimate threats that need to be addressed, but there are better ways to do so, which don't come with the convenient side effect of killing privacy research.


> Users must do this step manually,

Good luck refusing if it is an edict from some governments.


I don't think the purpose of CT is to protect against anything on your local computer. It's to protect against CAs getting hacked or coerced by governments into issuing malicious certs.

CT's design really doesn't make sense if the goal is to protect against local malware. Why would it need public ledgers of Merkle trees containing every issued certificate if it was just to protect against local malware?

Anyways, local malware isn't in Chrome's threat model:

https://chromium.googlesource.com/chromium/src/+/master/docs...

Disclosure: Google employee


The purpose of CT is so that shenanigans have to be done out in the public view where they can be more easily detected. Depending on the logs you are using, an attacker may not be able to submit a self-signed certificate (I haven't looked at which logs Chrome is using these days).

Regardless of the documentation, Chrome does roll out changes to protect against local threats. I think they just don't want to be on the hook to address every local threat. Happy to give examples in private if you want to email me.

Disclosure: ex-Google employee


Wow your first link is very alarming. I was curious about this passage from that link however:

>"The installation of an additional root CA cert potentially undermines the security of all your software and communications. When you include a new trusted root certificate on your device, you enable the third-party to gather almost any piece of data transmitted to or from your device."

I understand how they could decrypt any communication between the VPN client and the VPN server, but if I was already encrypting my data using a browser, that wouldn't give them anything more than encrypted traffic. I do understand the overall threat of these companies installing a Root CA, but is that particular passage a little disingenuous, or am I missing something much more obvious?


> but if I was already encrypting my data using a browser

Encrypting against whose keys? The website you are visiting? The malicious VPN company?

The entire point of user-added root CAs is that they can place themselves between you and whoever you're communicating with and intercept/modify it all. And you're unlikely to be warned about it at all.


Encrypting with the public key of the site I'm visiting, for example google.com. A VPN provider that installed a Root CA without my knowledge still wouldn't be able to read traffic encrypted with Google's public key. They could see the SNI and tell I am visiting Google, that's understood. Perhaps that's what the author meant in the passage I quoted above.


And how do you know you're actually encrypting against google.com's public key, and not somebody else's key?

A VPN provider is in the perfect position to MITM all of your traffic, swapping out any site's public keys with their own in real time. If your VPN app has installed an alternative Root CA on your device, you'll get no warning that this has happened.
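The key-swap described above can be modeled abstractly: a client accepts any certificate that names the host and chains to a root in its local trust store, so one rogue root is enough to impersonate any site. A toy sketch (no real cryptography; all names here are made up for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str  # hostname the certificate claims to cover
    issuer: str   # name of the CA that signed it

def client_accepts(cert: Cert, hostname: str, trusted_roots: set) -> bool:
    """Toy trust check: the cert must name the host AND chain to a trusted root."""
    return cert.subject == hostname and cert.issuer in trusted_roots

# The legitimate cert, signed by a public CA:
real = Cert("google.com", "PublicCA")
# The middlebox's forgery, signed by the VPN app's injected root:
forged = Cert("google.com", "VPNRootCA")
```

With only public roots trusted, the forgery is rejected; once "VPNRootCA" lands in the trust store, the same forgery validates and the client never notices the swap.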


My understanding was that for Chrome the CA had to be in the Chrome root store, and that this is what is used rather than the OS-level root store, where the VPN providers would be installing theirs. Doesn't Mozilla also ship its own preferred root store as well?

https://www.chromium.org/Home/chromium-security/root-ca-poli...


From that document:

"If you’re an enterprise managing trusted CAs for your organization, including locally installed enterprise CAs, the policies described in this document do not apply to your CA. No changes are currently planned for how enterprise administrators manage those CAs within Chrome. CAs that have been installed by the device owner or administrator into the operating system trust store are expected to continue to work as they do today."

In other words, locally installed certificates are normally treated as trusted by Chrome.


Thanks. I completely misunderstood that. That makes total sense for the enterprise use case too; otherwise it would probably be a non-starter for many corporate IT departments.


Just like a normal root CA. Who should I trust more? Microsoft? Google? Facebook? The Netherlands' root CA, or Ghana's? I'm really sure Google only collects data to make better products for _me_.


The point is to use the injected root CA cert for TLS handshakes, then use the VPN to make sure all traffic goes through a node that can MITM the TLS connections (and I guess they just get the unencrypted traffic for free).


You know that sounds reasonable until you realize you could solve this 'issue' by not trying to lock out users from root in the first place.

Give everyone root, sure they can mandate some mitm whatever at the border but it won't matter once you disable it with your root...

I think the post you are responding to is far more salient than these bogeymen you are inventing...

Sure, there are going to be issues with some folk installing spyware, but honestly not having root hasn't solved this issue.


People should have the right to choose their own "threat model". Google has a threat model. A computer user may have a threat model. It is absurd to assume they will always be the same. The interests of Google may conflict with the interests of the computer owner.

Google wants to choose the threat model for everyone. There is no opt-in or opt-out. One size does not fit all.

Would enjoy more details on how ESNI/ECH/whatever will be used to exert "absolute control". SNI is certainly being used for censorship, but would like to know how ESNI can be used in similarly malicious ways.

Using DoH run by a third party is optional, as is using traditional DNS from a third party. One can still utilise these protocols with localhost servers. Computer owners have ample storage space today for storing DNS data. I use locally stored DNS data. I put the ___domain to IP address information in map files and load them into the memory of a localhost proxy.

Public scans from Rapid7 used to be a good public source of bulk DNS data, in addition to public zone file access through ICANN, e.g., czds.icann.org. (A variety of third-party DoH servers can be used to retrieve bulk DNS data as well.) Alas, Rapid7 has recently decided not to share the DNS data it collects with the public anymore.
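As a rough illustration of the map-file approach described above (the file format and function names here are assumptions for the sketch, not the actual setup):

```python
def load_dns_map(lines):
    """Parse "hostname ip" lines into an in-memory lookup table,
    skipping blank lines and # comments."""
    table = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        host, ip = line.split()
        table[host.lower()] = ip
    return table

def resolve(table, hostname):
    """Return the locally pinned IP, or None if the host is unmapped.
    A localhost proxy would consult this before (or instead of) any
    network DNS query."""
    return table.get(hostname.lower())
```

An unmapped host returning None is where the "redirect to archive.org or a search engine cache" fallback mentioned below would kick in. (IPs in any example map should be treated as snapshots, since they are only as fresh as the last manual update.)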


> People should have the right to choose their own "threat model". Google has a threat model. A computer user may have a threat model. It is absurd to assume they will always be the same. The interests of Google may conflict with the interests of the computer owner.

I think that you are correct.

However, there are also such things as dynamic IP addresses, which might also have to be considered if you want to store DNS data entirely locally.


I have been using local DNS data for over a decade. At the start I believed most DNS data was truly dynamic. Today, I believe that is false; I have the historical DNS data to prove it. It is actually only a small minority of websites I visit that change hosting providers frequently or periodically switch between a selection of hosting providers. I do not mind making occasional manual changes for that small minority, as I want to know if a website is changing its hosting. There are legitimate reasons to keep changing IP address, but there are illegitimate ones, too. If I am lazy and do not want to look at the details when something changes, I can just redirect requests to archive.org or something similar, or a search engine cache. This works surprisingly well.

I once had someone challenge me on HN arguing that the IP address for HN was dynamic, with no proof. However, I know it rarely changes, because I have the DNS data stored locally and have not changed it in years. It is baffling to me why some people refuse to accept that most DNS data can be, and in fact is, relatively static. It is too easy to test. Perhaps those who like to use DNS for load balancing do not appreciate the idea of the end user making the choice of which working IP address to use. However, they can, and in my case, they do.


The place I work uses AWS EC2 instances for everything. They get created and destroyed fairly frequently, and change public IP addresses as a result.

I wish this wasn't the case, because it includes all the things I need to access through the VPN, so several times per week I have to rerun the "DNS lookup this list of domains and static route the resulting IP addresses through the VPN" script.
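A dry-run sketch of such a script (the interface name, the use of `ip route`, and the structure are assumptions; it prints the commands for review rather than executing them):

```python
import socket

def route_commands(domains, vpn_interface="tun0", resolver=socket.gethostbyname_ex):
    """Look up the current A records for each ___domain and emit an
    `ip route` command pinning each address to the VPN interface.
    The resolver is injectable so the logic can be tested offline."""
    commands = []
    for ___domain in domains:
        _, _, addresses = resolver(___domain)  # (name, aliases, ip_list)
        for ip in addresses:
            commands.append(f"ip route add {ip}/32 dev {vpn_interface}")
    return commands

if __name__ == "__main__":
    # Hypothetical ___domain list; review output before piping to a shell.
    for cmd in route_commands(["internal.example"]):
        print(cmd)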


"They get created and destroyed fairly frequently, and change public IP addresses as a result."

That's half the story. A load balancer (static IP) will often offload the traffic to another IP. DNS is not doing much for you here.

Furthermore, DNS often has a significant lag time between changes, with switchovers usually measured in days; relying on DNS to cover your routing is usually only practical with a custom DNS resolver anyway.

Even in the case of websites with truly dynamic access like this, it's enough to run a targeted query from your local resolver, which is an argument for local resolvers over your custom-roll-a-script solution...


> The threat model they're addressing is the one where users have a small semblance of control over their devices and networks. I've been saying for years that HTTPS everywhere, DoH, eSNI (or it's successor), etc. are part of a long term plan for big tech to have absolute control over what users are allowed and not allowed to do with their own devices.

I had thought so too, along with "secure contexts" (apparently there is supposed to be a way to configure this, but it does not seem to be the case) and HSTS. I think that HSTS is very bad. (HTTPS is not bad, but all of these things that force it are a bad idea.)


IMO the downside of HSTS you’re referring to only applies to the specific HSTS-only TLDs. Otherwise, HSTS is useful to prevent a downgrade attack where someone types in http://bank.com on a new computer and poisoned DNS (via a rogue network operator or hacked router) means the initial page load for the bank goes over HTTP, so the server at the returned IP can show a fake login page, even to the point of the browser auto-filling login credentials.


> initial page load for the bank loads over HTTP and thus the IP returned can show a fake login page, even to the point of the browser auto-filling login credentials.

That problem would correctly be solved in a different way. "http://example.com/" is different from "https://example.com/" and so would have different cookies, auto-fill, etc. An indicator can be used in the ___location bar or status bar if needing to indicate the protocol and security clearly. The browser also should not auto-fill anything without the user's permission, regardless of protocol and TLS.


>different cookies

That protection essentially already exists with the secure bit in cookies and the __Secure- prefix.

The problem is that http:// links exist all over the web, and a website owner cannot force all incoming links to say https:// instead. Without HSTS, any time a user clicks one of those links, the user's ISP will be able to see the exact URL the user is visiting, and might even inject ads or other stuff[1] into the page. HSTS solves that.

Disclosure: Google employee, and my license plate is HSTS

[1] https://en.wikipedia.org/wiki/Great_Cannon
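For context, HSTS is delivered as a response header that the browser caches. A rough, non-production parser for the header value (the real grammar and edge cases are in RFC 6797):

```python
def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header value into
    (max_age_seconds, include_subdomains). Sketch only; ignores
    unknown directives rather than rejecting malformed headers."""
    max_age = None
    include_subdomains = False
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1].strip('"'))
        elif directive == "includesubdomains":
            include_subdomains = True
    return max_age, include_subdomains
```

So a site sending `Strict-Transport-Security: max-age=31536000; includeSubDomains` is telling the browser: for the next year, rewrite every http:// request to me (and my subdomains) to https:// before it leaves the machine.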


While it is true, I should think it would be better to leave it up to the end user to decide what they want, rather than assuming everyone else knows better. If the user writes "http" then it is http; if the user writes "https" then it is https; etc. (What it should do if the user does not specify the scheme is a different question, and different users may have different preferences (which should probably be configurable). My preference is that it would treat the URL as relative instead of absolute in that case, but that is probably a minority view.)


Most users have no idea what any of this means. Even I often don't know which one is best. I want to use https:// if it's available, and http:// otherwise. Should I, every time I want to click a link, instead right-click to copy the link ___location, paste it into the URL bar, modify it to https:// to try that, and then, if it fails to load, change it back to http:// ? That's a massive amount of work. I want it to just work, without all that hassle and without having to keep track in my mind of which websites support https:// and which don't.

>If the user writes "http" then it is http, if the user writes "https" then it is https

Should we also block the website from doing a redirect to https:// ? HSTS is basically just a redirect cache.
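The redirect-cache view can be sketched as a per-host upgrade table with an expiry (the names and structure here are illustrative assumptions, not any browser's actual implementation):

```python
import time
from urllib.parse import urlsplit, urlunsplit

def note_hsts(cache, host, max_age, now=None):
    """Record that `host` sent an HSTS header: cache the upgrade until
    expiry. Per the spec, max-age=0 clears any existing entry."""
    now = time.time() if now is None else now
    if max_age == 0:
        cache.pop(host, None)
    else:
        cache[host] = now + max_age

def upgrade(cache, url, now=None):
    """Rewrite http:// to https:// for hosts with a live HSTS entry,
    before any request leaves the machine."""
    now = time.time() if now is None else now
    parts = urlsplit(url)
    if parts.scheme == "http" and cache.get(parts.hostname, 0) > now:
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

The "cache" framing also shows why the first-ever visit is still exposed: until the host has been seen once over HTTPS, there is no entry to consult (which is what preload lists patch over).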


> Most users have no idea what any of this means.

OK, but users who do know what it means should be allowed to configure it differently. The computer software should be designed for advanced users, who do know what it means (and should include documentation in case you do not know).

> I want to use https:// if it's available, and http:// otherwise. Should every time I want to click a link, I instead right click to copy the link ___location, then paste it into the URL bar and modify it to https:// to try that, and then if it fails to load, change it back to http:// ?

No; it should be configurable to do it automatically, however you want it done.

> I want it to just work without all that hassle and having to keep a info in my mind of which websites support https:// and which don't.

There are also other ways to do that even without HSTS, though. Still, it should be configurable by the end user.

> Should we also block the website from doing a redirect to https:// ?

No; that isn't up to the client side to block. (I think that most web sites should not automatically redirect in this way, but that is not related to the client software.)

> HSTS is basically just a redirect cache.

Then it is deficient, since it is not the only kind of redirect. Furthermore, it is bad because it does so without the user specifying whether they want this cache, or whether they wish to override it for any reason, making some things difficult to do. It should not assume it knows better than the end user what the end user wants to do.


>OK, but users who do know what it means should be allowed to configure it differently.

Yeah, I agree. In order to avoid bloat, it might be best to offload this to extensions.

>No; it should be configurable to do it automatically how you want to do.

I agree, but I think complex configuration should be offloaded to extensions. By default it should do what the website owner wants (HSTS).

>Then it is deficient since it is not the only kind of redirect.

Other kinds of redirects aren't relevant for security, so they don't need the degree of cache guarantees that HSTS provides.

>Furthermore, it is bad because it does so without the user's specifying if they want this cache

Stuff should be secure by default. We don't want security to be opt in.

>or if the user wishes to override it for any reason, making some things difficult to do.

I agree there should be overrides.

>It should not try to think they know better than the end user what the end user is wanting to do.

Sometimes the end user actually doesn't know what the end user is trying to do and gets phished. I agree there should be overrides though, but they need to be carefully designed so that attackers can't abuse them.


> In order to avoid bloat, it might be best to offload this to extensions.

While it is a good idea to avoid bloat, there are some problems with this:

- The web browser is too bloated already, and its features are not offloaded to extensions. (If they were (offloaded to extensions which are then included with the browser by default), then it would be easier to customize those features, and the extension mechanism would be sufficient to add new HTML commands, file formats, protocols, character encodings, etc too.)

- WebExtensions is incapable of many things. (I don't know if it can affect the ___location bar behaviour, but XPCOM is capable, and is what I have done on my computer. One thing that WebExtensions definitely cannot do is load native .so files. Of course native code should not be available in the public extension catalog, but it would be useful for advanced users who can add it themselves.)

> I agree, but I think complex configuration should be offloaded to extensions. By default it should do what the website owner wants (HSTS).

I think it should be unnecessary. If you have a header overriding option, then it makes many other settings unnecessary, and the user can then make settings for cookies, languages, HSTS, referer, JavaScripts and other features (using CSP), user agent override, and many other things, without needing separate settings for those things.

> I agree there should be overrides.

Yes, but unfortunately the HSTS specification, and web browser authors, do not allow it.

> Sometimes the end user actually doesn't know what the end user is trying to do and gets phished. I agree there should be overrides though, but they need to be carefully designed so that attackers can't abuse them.

I think differently. It might need to be a separate program, an "advanced user's web browser", where you can override and set everything. I don't like this modern software that is not designed for advanced users. Software should be designed for advanced users.

One idea is that a better implementation can be a web browser engine which consists of a lot of independent components (HTTP, HTML, CSS, PNG, JavaScript, WebAssembly, key/mouse events, ARIA, etc; available as separate .so files perhaps (with their own source code repositories)) that can then be tied together by a C code (the main program); a programmer can modify or rewrite some or all parts of this code, to make a customized web browser with your own functions changed.


>and its features are not offloaded to extensions.

I think commonly used features should be built in, and rarely used features or features where people can't agree on how the UI should look, should be in extensions. I think HSTS customization would be a rarely used feature, so should be in an extension.

I think I agree that most of what you're saying would be ideal. I'm not sure how much of it is doable though when you consider programmer time constraints.


> I think commonly used features should be built in, and rarely used features or features where people can't agree on how the UI should look, should be in extensions.

Unfortunately, doing it that way makes things difficult to get working. Also, the extension mechanism is deficient.

If multiple kinds of web browsers are made, so that it is not a monopoly, then they do not all have to be made the same way.

They shouldn't all be Chromium or whatever; they can be something else. If you have separate components, then you can more easily replace them or tie them together differently, without having to use inefficient extensions, modify the source code (which is large and might take a long time and be complicated), waste disk space and memory on unused features, etc.


Yeah. Although, as you mention modifying the source might take a long time, similarly implementing multiple browsers would take a long time. I don't know what the solution is.


> Yeah. Although, as you mention modifying the source might take a long time, similarly implementing multiple browsers would take a long time. I don't know what the solution is.

That is why I think that separating out the components might make it easier for other people to independently build multiple browsers.


> Most users have no idea what any of this means.

That's a bad reason to do things. Users will learn what is made important to them. Red website for http, green website for https. Nice big lettering to translate to the non technical user 'Protected' mode versus 'Unsafe' mode.

Yeah, for power users show all the nitty-gritties, but this isn't about being technical or not: it's not being communicated properly, and instead it's just hidden.

When your spouse doesn't communicate well with you, starts having an affair, and then hides it, _that's NOT a good thing_.

All this talk of pasting URLs... that's not how users use the web. Nobody is keeping in mind which websites support https:// or not. They do care what happens when they browse.



