I wish there was a solution for those of us who develop web interfaces for embedded products designed to live on LAN, often without any internet access and no well defined ___domain name.
I'm all for HTTPS everywhere but right now for my products it's either: https with self-signed certificate, which basically makes any modern browser tell its user that they're in a very imminent danger of violent death should they decide to proceed, or just go with good old HTTP but then you hit all sorts of limitations, and obviously zero security.
I wish there was a way to opt into a "TOFU" HTTPS mode for these use cases (which is how browsers dealt with invalid HTTPS certificates for a long time), although I realize that doing that without compromising the security of the internet at large might be tricky.
If somebody has a solution, I'm all ears. As far as I can tell the "solution" employed by many IoT vendors is simply to mandate an internet connection and have you access your own LAN devices through the cloud, which is obviously a privacy nightmare. But it's HTTPS so your browser will display a very reassuring padlock telling you all is fine!
It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.
What happens with offline LANs? And the ideal IoT devices that we would all want to have? (I mean the ones we dream about in all the IoT HN posts, where the typical rant is that no internet connection should be needed for most of these kinds of devices.)
What about offline tutorials? I'd like to provide a ZIP with a plain HTML tutorial that shows how WebRTC works, but users cannot serve it up on their LAN and access it through their laptops or phones, because WebRTC requires HTTPS. What's even worse, a self-signed cert doesn't work either. iOS Safari does NOT work with self-signed certs at all!
It's maddening, obstacles everywhere once you don't walk the path "they" (industry leaders, focused on mainstream online web services) want you to follow.
EDIT: There are quite a few (lots, actually [0]) features that require a secure context, i.e. a web page served over HTTPS. So the HTTP defaults are not a silver bullet, and the security exceptions for localhost are not really that useful either, being limited to the same host.
Doesn't the post say they'll fall back to http if the https attempt fails?
> For sites that don’t yet support HTTPS, Chrome will fall back to HTTP when the HTTPS attempt fails.
The only change here seems like it's that, from the user's perspective, initial connections to http-only sites will be a bit slower (vs. the opposite which used to be true: initial connections to https-only sites were slower).
I'm talking about the general state of HTTPS implementation. If you develop an offline device which offers a web UI, and it happens to use any feature that is deemed to require a Secure Context, you're out of luck.
WebRTC is such a feature, but there are lots more, and they can change from one version of the browser to the next one.
The players who are pushing so hard to shove HTTPS down our throats are simply closing their eyes and ignoring the use cases that are not interesting to them. The maximum certificate lifetime is a good example: it used to be multiple years, it's now down to around one year, Let's Encrypt certificates last 90 days, and some would like to reduce that to mere weeks! Absolutely great for the grand scheme of things and the global security of the Internet, but dismaying for lots of other use cases.
I don't actually see the problem. If you're on a local network, there's no practical way to deal with certificates, so use http. Chrome will fall back. Problem solved.
If http support ever gets truly removed, I will be very upset. But that hasn't happened, so what is there to complain about?
HTTP is effectively considered legacy by the big web actors these days. More and more APIs are HTTPS-only (often for good reasons) and the "insecure" warnings you get from using HTTP become more intrusive every year.
The trajectory is pretty clear, the long term plan is to phase out HTTP completely. And I'm not against it, but I need a solution for LAN devices, and it doesn't exist at the moment because the big web actors do everything in the cloud these days and they don't care about this use case.
The authority who grants you your SSL certificate. There is more than one out there, sure, but you can't do it without them. And ultimately, they all answer to the same authority above them: the browser maker who populates the root trust store.
So, to summarize: one more way for the browser maker to control what the user can and cannot access without jumping through hoops.
The OP means that in using https (and being forced to use https) you are also being forced into paying a 'third party' an annual fee just to get a valid certificate.
That 'third party' is one of the recognized 'certificate authorities'.
But the OP's point is that by going https, you don't have a choice; you have to pay the certificate tax.
Right, and Let's Encrypt doesn't solve the problem, it just kicks the can to DNS, which is globally unique and costs money. Communicating between your computer and any device that you supposedly own without the slow, unnecessary, and increasingly intrusive permission of some cloud IoT stack will become more and more difficult.
I would like to trust a given root cert for only a specific ___domain (and its subdomains).
E.g. *.int.mycorp.com, but not www.mybank.com.
Browsers don't let me do that; it's all or nothing. X.509 name constraints aren't great either and don't give me, the browser operator, the power.
Don't think personal LAN, think e.g. industrial automation: Many sensible companies want modern sensor systems that provide REST APIs and so on, but don't want those to access the internet. The hosts in this case often are appliance-like devices from third parties.
But that’s my point, and many others’. Sure, we can self sign, but it’s useless for the WWW. You’re forced to pay up to one of the few certificate providers. Thankfully, Let’s Encrypt has made it free and easier, but it’s not a no-brainer.
That's OK then, if that's what we all have to do to run any devices inside our LAN/home network.
Want a NAS box for sharing family files/photos or some other IoT device at home? Just set yourself up some other device to run the docker image, get yourself a certificate from LetsEncrypt and then... install it on the NAS box? How does that happen?
Perfect time to radicalize the underground (say, by beginning to experiment with Gemini or other protocols); the mainstream, as usual, only knows how to follow.
The problem is that there is no way to deal with certs on a local network, but the OP would like to be able to use https anyway; http might be considered too insecure for their use case.
What I do is buy localme.xyz and get a wildcard cert via DNS validation. This way you get SSL for offline devices. But you need to update the cert periodically.
I wish there was a way to automate wildcard certs; at the moment I'm building a Python script that logs into my ___domain registrar's panel and updates DNS records.
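If the registrar exposes any kind of HTTP API for DNS records, a certbot --manual-auth-hook can replace the panel scraping. A rough sketch, assuming a hypothetical registrar endpoint and token (certbot really does export CERTBOT_DOMAIN and CERTBOT_VALIDATION to such hooks; everything about the API below is made up):

```python
#!/usr/bin/env python3
"""Sketch of a certbot --manual-auth-hook for a DNS-01 wildcard challenge.

Invoke with e.g.:
  certbot certonly --manual --preferred-challenges dns \
      --manual-auth-hook ./dns_hook.py -d '*.lan.example.com'
The registrar endpoint is hypothetical; substitute your provider's real API.
"""
import os
import sys

import requests

API_URL = "https://api.example-registrar.test/v1/dns/records"  # hypothetical endpoint
API_TOKEN = os.environ["REGISTRAR_API_TOKEN"]                   # hypothetical token

def main() -> int:
    # certbot exports these environment variables for manual auth hooks.
    ___domain = os.environ["CERTBOT_DOMAIN"]          # e.g. lan.example.com
    validation = os.environ["CERTBOT_VALIDATION"]  # token to publish in DNS

    record = {
        "type": "TXT",
        "name": f"_acme-challenge.{___domain}",
        "content": validation,
        "ttl": 120,
    }
    resp = requests.post(
        API_URL,
        json=record,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print(f"published TXT record for {record['name']}", file=sys.stderr)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```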
If your ___domain provider's API sucks, or doesn't exist, or requires generating a password/key with more permissions than you're willing to give a script, look at acme-dns [1] and delegated DNS challenges:
It shouldn’t be practical, that’s by design. Imagine if every captive portal had you install their root certificate to access the WiFi, with just the click of a button.
3 months? I must have updated Firefox / Discord / VS Code / etc. about a hundred times in the last 3 months. Plenty for them to add renewed SSL whatevers inside one of the updates.
> 3 months? I must have updated Firefox / Discord / VS Code / etc.
I think this state of affairs is nuts, with the exception of Firefox, because web browsers have an inordinate number of security issues to contend with.
I agree completely. But, I also think that in most cases, if a simplistic piece of software like an IM app needs a security patch every three months, regularly, it's a sign the attack surface is too large.
The article you linked to is kind of confused and I'm not sure I blame them. This stuff is really complex!
According to the proposal[0], leaf certificates are prohibited from being signed with a validity window of more than 397 days by a CA/B[1] compliant Certificate authority. This is very VERY different from the cert not being valid. It means that a CA could absolutely make you a certificate that violated these rules. If a CA signed a certificate with a longer window, they would risk having their root CA removed from the CA/B trust store which would make their root certificate pretty much worthless.
To validate this, you can look at the CA certificates that Google has[2] that are set to expire in 2036 (scroll down to "Download CA certificates" and expand the "Root CAs" section) several of which have been issued since that CA/B governance change.
As of right now, as far as I know, Chrome will continue to trust certificates that are signed with a larger window. I've not heard anything about browsers enforcing validity windows or anything like that, but would be delighted to find out the ways that I'm wrong if you can point me to a link.
Further, your home-made root certificate will almost certainly not be accepted by CA/B into their trust store (and it sounds like you wouldn't want that), which means you're not bound by their governance. Feel free to issue yourself a certificate that lasts 1000 years and certifies that you're made out of marshmallows or whatever you want. As long as you install the public part of the CA into your devices, it'll work great and your phone/laptop/whatever will be 100% sure you're made out of puffed sugar.
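To make that concrete, a rough sketch of minting exactly that kind of private root with Python's `cryptography` package (an illustration only, not any sanctioned tooling; the name and lifetime are arbitrary):

```python
"""Sketch: a private, self-signed root CA with a deliberately silly lifetime.
Only devices where you install ca.pem will trust anything signed by it."""
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "My LAN Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365 * 1000))  # ~1000 years
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

with open("ca.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```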
I guess I have to disclose that I'm an xoogler who worked on certificate issuance infrastructure and that this is my opinion, that my opinions are bad and I should feel bad :zoidberg:.
They might to you, but the browser doesn't agree. It will scream with all its force to all your users that accessing that product is a really, really dangerous idea.
I have tried to do just that but ran into all kinds of difficulties:
1. Overhead: I have 5 devices that I own, 3 of my wife's, and a smart TV. Setting all this up takes a lot of time, even if it all worked fine.
2. What about visitors to my home that I want to give access? They need the cert as well, together with lengthy instructions on how to install it.
3. How do I even install certs on an iPhone?
4. Firefox uses its own certificate database -- one for each profile. So I'll have to install certs in Firefox AND on the system (for e.g. Chrome to find them).
5. All these steps need to be repeated every year (90 days?!) depending on the cert expiration period.
Eventually I just gave up on this. It's not practical. There needs to be a better solution.
I bought a ___domain for use on my local network. I use letsencrypt for free certs, and I have multiple subdomains hosted under it. It works very well and wasn't that hard to set up. It's actually better organized and easier to use than my old system, since I had to take the extra step up front to organize it under a ___domain.
$15 a year for a ___domain, throw traffic through local [split?] DNS and Traefik with the Let's Encrypt DNS challenge, and call it a day? I have over 25 internal domains & services with 25 certs auto-renewing and no one can tell - it just works and is easier than self-signing certs and loading them into whatever rando service or device you're trying to secure.
I'm not sure I follow - what does split DNS have to do with DoH? I don't want my internal DNS addresses public; there is no need, plus security, and for some addresses I have different IPs internal vs external.
Browsers send DNS queries to an external provider like Google, rather than the network-provided server, which has the internal addresses (and which may override external addresses for various reasons).
I have been looking at using MDM for my iPhone so I can install trusted root certs and require it to be on vpn when not using a specific list of WiFi networks.
It is a hassle, and as far as I could see it requires resetting the device, and I'm not sure I can restore my backup over it and retain both.
Browsers accept a root CA with long lifetime. Certs signed by CAs installed by the user or admin also allow long lifetimes (probably will still for a while).
There is an important difference between (A) trusting "just this one" cert for a specific reason, and (B) installing a root cert that is able to impersonate any server.
It ought to be a practical to do (A) without doing (B), but due to a variety of deep human and technical problems, it isn't.
> initial connections to http-only sites will be a bit slower (vs. the opposite which used to be true: initial connections to https-only sites were slower).
More than a bit. For HTTPS-only sites, the site could serve a stub HTTP endpoint on port 80 that redirects to HTTPS. The redirect causes maybe some milliseconds to a few seconds (worst case) of latency.
HTTP-only on the other hand can't do a HTTPS stub as easily (as the primary reason you'd want HTTP-only is probably that you don't want/can't deal with the certificate management - if you can set up a redirect from HTTPS to HTTP, you might as well go full HTTPS)
So the only option for HTTP-only is to not open port 443 at all - meaning Chrome has to wait out the 30 seconds (!) network timeout until it can try the fallback. So pure-HTTP sites would become a lot more unpleasant to use.
(A site might be able to cut this short by actively refusing the TCP handshake and sending an RST packet - or by opening and immediately closing the connection. I don't know how Chrome would react to this. In any case, that's likely not what HTTP-only sites in the wild are doing, so they'll need to update their software - at which point, they might just as well spend the effort to switch to HTTPS)
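For illustration, a quick probe of the difference, with a placeholder hostname - a refused connection (RST) shows up immediately, while a silent DROP only fails after the full timeout:

```python
"""Sketch: measure how long a TCP connect to port 443 takes to fail.
A host that sends RST fails almost instantly (ConnectionRefusedError);
a firewall that silently DROPs leaves the client waiting for the timeout."""
import socket
import time

def probe(host: str, port: int = 443, timeout: float = 30.0) -> None:
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            outcome = "connected (something is listening on 443)"
    except ConnectionRefusedError:
        outcome = "refused (RST) -- fast fallback possible"
    except socket.timeout:
        outcome = "timed out (silent DROP) -- client waited the full timeout"
    except OSError as exc:
        outcome = f"other error: {exc}"
    print(f"{host}:{port} -> {outcome} after {time.monotonic() - start:.1f}s")

# Placeholder hostname: substitute a real HTTP-only host to test.
probe("http-only.example")
```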
I don't worry about port scanners. If your infrastructure becomes less secure because of a port scanner, it's not very well secured. Sending a REJECT is cheap, presents no DDoS opportunity, and helps browsers and other apps fail over fast.
The recommended practice I've heard everywhere is that an ICMP REJECT provides no opportunities for an attacker that they wouldn't have with 10 minutes of extra time (modern port scanners can be set aggressive enough that REJECT vs DROP doesn't matter if they have a known open port they can obtain an ACCEPT timeline from).
Yeah, that absolutely makes sense - as I said, a site could also avoid the timeout by sending a TCP RST packet. My point was more that I believe not many sites are doing any of this as in the past, "best-practice" firewall configuration was to be as silent as possible.
Being as silent as possible has only two effects: an attacker takes 2 seconds longer and you significantly degrade the network experience of all your users. It hasn't been best practice in a while (outside the greybeard firewall admin circles), i.e., almost a decade at this point. Anyone who hasn't figured that out should really consider not doing network admin if their knowledge is a decade out of date.
Hmm, OK. Might be that my knowledge was not really up-to-date there. I'd really appreciate it if network protocols designed for common benefit (such as ICMP) were not discouraged in the name of security, for a change. In this case, sorry for spreading FUD, that wasn't my intention.
> It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.
I constantly have issues with this address bar hiding the scheme and even the www.
One issue is when I quickly want to select some parameters or delete parts of the URL in order to go "up" one level.
What drives me absolutely insane is their inconsistent autocomplete functionality. Sometimes I end up googling "examp" instead of navigating to www.example.com, which was a page I had already visited and therefore showed up as the ___domain autocompleted inside the address bar. Sometimes pressing enter autofills the address, sometimes it googles the halfway-typed ___domain. If I tab the halfway-typed ___domain, it always completes it and an enter will navigate to it.
There is some strange difference in entering a ___domain and performing a search, something feels off. I can't exactly tell what it is, but sometimes I end up with submissions which I did not intend.
Also something with pressing the down-key in order to select the topmost entry, the one which gets selected with tab, it just gets skipped when I use the down-key.
Another thing: if I want to query "Raspberry Pi disable wifi" or something else which begins with "Raspberry Pi", I get suggested the URL raspberry.org, a ___domain which I have often visited, and am forced to type through the entire word Raspberry in order for the URL functionality to get aborted and have it switch over to googling mode. Maybe there is a keyboard shortcut or something which would help me out, but it simply isn't intuitive.
It's a silly hack, but if you install Google's "Suspicious Site Reporter" extension for Chrome, then the Chrome address bar retains the full URL all the time.
In Chrome you can right click the address bar and select "Always show full URLs".
I prefer how Firefox highlights the most important part and still shows the full URL.
That number was totally made up, of course :) that's why I quoted it... I didn't really want to be too pedantic and explicitly say it, but maybe I should have.
But the number matters to your point. Focusing on 70% of users to the detriment of 30% seems a lot less defensible than focusing on 99.9% of users to the detriment of 0.1%.
The claim you made was “chrome is hurting a large minority of users and they should make a browser in a more fair way” but this changes when you change your made up numbers to something more like “chrome is hurting a tiny minority of users with weird use-cases like me and I don’t like it”
FWIW, it feels like the problem is more that your use-cases don’t fit into web PKI, and I agree with you. But I don’t think harming the security of web browsing for the vast majority is the solution to those problems.
> Focusing on 70% of users to the detriment of 30% seems a lot less defensible than focusing on 99.9% of users to the detriment of 0.1%.
Ah, but now you’re measuring a different thing than the GP, users vs scenarios!
I’d hazard a guess that at least 30% of users need to log into a router at some point or another. I hope it’s more than 30%, because everyone else is likely paying ridiculous prices for a crappy router from their ISP.
The change doesn't affect routers (which I think most people don't know how to log into these days), as there is no HTTPS default for URIs outside the scope of normal PKI (IP addresses and single-label names). Are we disagreeing about the actual change or an imagined future one?
I wonder if a scheme could be invented where your router could be responsible for issuing certs to local devices. Forgetting about the impossibilities of industry adoption, would such a scheme be possible?
E.g. your router/DHCP controller/AD box gives an IoT device a DHCP ip and maybe a DNS address, and additionally it will provision a cert+key to that device by some standard protocol (keeping this secure might be impossible?). Router has an internal CA cert+key to do this.
Your PC then (handwavy) "knows" to retrieve the CA cert of your router by some standard protocol (dhcp extension?), and "knows" to trust it for devices on the router's subnet.
One problem is that it's easy to inject malicious DHCP on to any network you have access to, and you can then route all traffic to yourself (by telling clients that you are the gateway.) This kind of attack is partially mitigated because of TLS - redirecting all traffic to yourself isn't particularly useful if it's all encrypted. But if you could issue a cert along with the DHCP it'd be game over for everyone on the network.
If you're the network operator you can MITM network traffic, of course. If you aren't, how are you running a DHCP server when the switch will block the packets?
Switches don't typically block DHCP packets. You can literally just spin up your own DHCP server and plug it in to a switch port - if your fake server responds to a DHCP request faster than the legit DHCP server the client will get your lease instead of the right one. It's this way by design - it's not at all uncommon for the DHCP server to not run on the router itself, but on some other device elsewhere in the network, or even outside the layer 2 network using a DHCP Relay.
See my edit. There is lots of stuff browsers forbid you from doing if HTTPS is not in use, so the kind of defaults you quote are not really that useful. For example, the browser won't let you capture webcam video (MediaDevices.getUserMedia()) from a page which is not HTTPS or localhost (so, good for a computer where you are running some software, not so good for an embedded device you want to install in your home and access from within your LAN with e.g. your phone, for whatever reason).
Is it the case that self-signed certs don't work in iOS at all? I'm looking around, and I appear to see tutorials for how to properly configure one in iOS.
I'm talking about user access. At least other browsers still allow it (but that's also prone to change at the whims of the developers), but in Safari for iOS the page will fail silently and won't load, with absolutely no feedback as to why.
Having to install custom-made Root CAs into all and every client device doesn't sound to me like an ideal solution...
Unfortunately, since one is breaking the SSL trust model, that's probably the right solution. Not unlike having to explicitly enable "Developer mode" before a whole host of security-breaking options are available.
Actually, that's one solution Apple could consider: if a user has enabled Developer Mode on a given iOS device, allow the trust model to be broken with an "Are you sure you know what you're doing?" button instead of a silent failure.
At that point, isn’t it easier to send the user to chase.com.scammer.com?
The goal isn’t to make a 100% foolproof system (because you can’t), and needing to flip a switch called “developer mode”, which preferably also displays a warning message, should make it clear something is wrong.
...I think this whole discussion is kind of missing the point though. Developers are not the only people who need to log in to routers.
Yeah, I don't know what OP is talking about, I'm using one on my iPhone right now. Enterprises deploy them all the time.
It is true, that in recent versions of iOS (in the past five years or so), you have to install the certificate in Safari, then go to Settings->General->About, scroll all the way down, and manually trust the certificate (to ensure you really know what you're doing by enabling it). And iOS doesn't make this known anywhere outside of that special menu three levels deep, I suppose to not confuse people who had an attacker install a cert on their phone somehow.
If you are talking about installing the Root CA in the iPhone, yeah. That's how I do it in my development devices.
But for a user, iOS Safari (not Safari for MacOS) doesn't show any certificate warning that the user can accept, like other browsers. In fact, it just fails absolutely silently. You'd have to connect it to a Mac and open up the developer tools on the desktop's Safari, to see the errors that are being printed on the JS console.
Otherwise, you'd just be left wondering why it just doesn't work like all the other browsers.
Unfortunately, user behavior testing shows those certificate warnings are a threat vector. There's a reason the browsers have been moving towards the exits on trusting the user to understand the security model enough to override the trust breakage.
Chrome pops a warning, but (with a few exceptions) doesn't let you just navigate through it (there's a secret key sequence you can type to override it, but it's both purposefully undocumented and periodically rotated to make it something that you can only know if you have the chops to read the source code or consult the relevant developers' forums).
I'm using a self-signed cert on iOS/macOS and it works just fine with Safari. Safari is messed up in other ways with TLS, though. Like reusing HTTP/2 connections when making requests for a different Host that resolves to the same IP address as the host it connected to previously, which completely breaks client certificate selection; and unless you recompile nginx with custom patches it also doesn't work with nginx, because the SNI and the actual Host header differ, which nginx doesn't like by default.
>It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.
It's even less funny when you realize that the class of devices that gets effectively crippled includes the vast majority of all IIoT devices, including ones that run manufacturing and power generation. Yeah, your precious personal website is now secure from MITM attacks. The factory that made your car, however, uses critical infrastructure controlled by web interfaces with no encryption at all. Congrats.
> It's worrying how they are improving the case for "70%" scenarios, while crippling it for the other 30%, without recourse. It's not even funny any more.
70% of the scenarios impact 99.99% of users as well as the project's intended scenario.
> "I'm all for HTTPS everywhere but right now for my products it's either: https with self-signed certificate, which basically makes any modern browser tell its user that they're in a very imminent danger of violent death should they decide to proceed, or just go with good old HTTP but then you hit all sorts of limitations, and obviously zero security."
How about what Plex did for its self-hosted media servers?
"First they solved the problem of servers not having a ___domain name or a stable IP (they are mostly reached via bare dynamic IPs or even local IPs) by setting up a dynamic DNS space under plex.direct"
"Then they partnered with Digicert to issue a wildcard certificate for *.HASH.plex.direct to each user, where HASH is - I guess - a hash of the user or server name/id."
"This way when a server first starts it asks for its wildcard certificate to be issued (which happened almost instantly for me) and then the client, instead of connecting to http://1.2.3.4:32400, connects to https://1-2-3-4.625d406a00ac415b978ddb368c0d1289.plex.direct... which resolves to the same IP, but with a ___domain name that matches the certificate that the server (and only that server, because of the hash) holds."
That's very cool, but it still requires a working and reliable internet connection.
Also these ultra-long URLs are very clumsy and can't really be used directly, so you need some sort of cloud frontend where the device phones home in order to announce its LAN IP; then the user can go through there in order to connect to their LAN devices.
For something like Plex it makes a lot of sense since you're probably going to have an internet connection when you use it anyway, but for my devices it's a deal breaker.
And at any rate, that's a whole lot of infrastructure just to be able to expose a simple web interface.
Yes, why don't we come up with an extremely elaborate scheme to issue more or less faux certificates to these devices that still breaks in practice because it looks like DNS rebinding and requires an internet connection for the device and some VC funded service on the other end in perpetuity for correct operation?
Ultimately, the question is why the fuck my browser needs TurkTrust, Saudi CA or others to authenticate the device when I could do that right fucking now by turning it over and reading a label. No 3rd parties required.
But this requires the manufacturer of the IoT device to provide a central service like Plex does.
Maybe they can't afford it, or the device is expected to run for a very long time, like even after the manufacturer goes out of business. Or as some other commenters have said, it's a personal / hobby project and the "manufacturer" doesn't have the means nor the will to maintain some outside management server.
> IP addresses, single label domains, and reserved hostnames such as test/ or localhost/ will continue defaulting to HTTP.
I don't think this affects you. If you are accessing a device on your LAN, you either use its IP address, or if you use DNS, you must be using your own DNS resolver, in which case you can just use a single-label ___domain name such as http://media/ (and you can omit the "http://" in that case). This is also the case for many enterprise networks. Tip: you need to enter a trailing slash to tell Chrome it is a single-label ___domain, otherwise Chrome will think you want to search.
mDNS uses link-local multicast so it does not work if your local network is more than one (l2) network segment (e.g. separate segment for wired and wireless).
Here's an approach I've used before successfully. It's not perfect but it's better than nothing.
1. Create your own root Certificate Authority.
2. Create a script, using your favorite language and libraries, that will create a new certificate for each device, something along the lines of "myiotdevice-AABBCCDD.local". The AABBCCDD needs to be some sort of serial number that's assigned during manufacturing and won't be repeated between devices. (A sketch of this step follows below.)
3. Add to your product support for ZeroConf/mDNS/DNS Service Discovery and advertise an https web server at myiotdevice-AABBCCDD.local.
4. Provide instructions to your users on how to download and install the certificate for your root CA (this only needs to be done once).
5. Print the name "myiotdevice-AABBCCDD.local" on the device and instruct users to type that in to a browser's address bar.
I'm doing this from memory so I may have missed an intricacy here or there (like DNS SD is a weird story on Windows 10) but this approach should basically work well enough.
EDIT - good commentary in replies about the dangers of the CA being compromised. Also, good mention of X.509 Name Constraints and how they can be used to mitigate that danger somewhat. More info here: https://systemoverlord.com/2020/06/14/private-ca-with-x-509-...
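Here's a minimal sketch of step 2, using Python's `cryptography` package and assuming the root CA from step 1 sits in ca.pem/ca.key (all names and lifetimes are illustrative, not a hard recommendation):

```python
"""Sketch: sign a per-device certificate for myiotdevice-AABBCCDD.local
with a previously created private root CA (ca.pem / ca.key)."""
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

DEVICE_NAME = "myiotdevice-AABBCCDD.local"  # unique serial baked in at manufacturing

with open("ca.key", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)
with open("ca.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())

dev_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
now = datetime.datetime.now(datetime.timezone.utc)

dev_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, DEVICE_NAME)]))
    .issuer_name(ca_cert.subject)
    .public_key(dev_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    # Browsers match against the SAN, so put the mDNS name there.
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(DEVICE_NAME)]),
                   critical=False)
    .add_extension(x509.BasicConstraints(ca=False, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

with open(f"{DEVICE_NAME}.pem", "wb") as f:
    f.write(dev_cert.public_bytes(serialization.Encoding.PEM))
with open(f"{DEVICE_NAME}.key", "wb") as f:
    f.write(dev_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```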
2. Ensure that the security around your new root CA is watertight, so that if your environment ever gets compromised, someone can't generate a new *.google.com or *.yourbank.com certificate signed by your CA and then MITM your connection.
So then I have to install a root CA for every random IoT product I buy? Which also entails handing them the keys to my machine, since being a root CA means any certificate they generate will be trusted.
That should work, but trusting a root cert from a third party makes me a bit wary, depending on how it is done.
If the certificate is scoped to only that ___domain or to only domains used by that user then I suppose it's OK but there is currently no way to enforce this, that I am aware of, without the user understanding and inspecting the certificate.
Thinking out loud here: It would be neat if browsers supported some form of addresses which are public key hashes like is done in many distributed systems. Maybe, out of caution, it would only be supported on local networks. For ease of use this address could be discovered via QR code or a simpler local dns name.
I think that's where the solution lies - supporting scopes for certificates - but I am afraid big companies won't be keen on donating resources to implement that.
Any device manufacturer that does this should not be allowed to touch anything related to security. It's one thing to have a little article about why they get the big scary security warning and how to add their device's cert as a one-off exception but "let this random IoT manufacturer vouch for any website" is nuts.
I think the solution is TLS-PSK[0]. But browsers don't support pre-shared key mode. If they did, each IoT device (or consumer router, NAS, etc.) could ship with a unique key which your browser could prompt for on first use. These could even be managed by password managers, so you'd get the trust on all your devices.
The very partial solution that I've been experimenting with and trying to refine is to "abuse" DNS records and Certbot's DNS challenges so that I can have a bunch of public subdomains that point at intranet sites.[0] There's no rule that says you can't point a DNS record at a local IP.
This really isn't a full solution though because there are instances where you don't want a public DNS record at all. It's also not particularly plug and play, at least right now. And it requires you to have a static ___domain name and an Internet connection.
It's a step in the right direction, but not perfect, and not usable in every situation. Setting up a custom certificate server and configuring every device is a no-go for me, maybe that works for highly managed networks. I don't trust myself to do it, and even if I did, it's too time consuming and annoying to set up for new devices. At least with my current setup I don't need to transfer any information (other than the DNS lookup) out of my network, and anyone and any device on my network will immediately be able to connect to whatever service I have, and I don't need to worry about accidentally compromising every website I visit.
I'm not sure what the better solution would be. I'd also be interested in hearing ideas about this, it's a problem I'm running into as I try to figure out how to get HTTPS encryption on my local projects. And I do want HTTPS on those projects. I don't want my local network to be running everything unencrypted just because it's behind a LAN. But it's very tricky to try and set something up that's both robust and dependable, and that is fast enough to be usable within 1-2 minutes of me starting a new project that I'm just hacking on.
It's also something that we're thinking about at work. We'd love to manage HTTPS for software installations on our client networks, but we don't want to force them to reveal too much info about their networks on the public Internet, and we don't want to deal with trying to integrate with whatever weird, half-broken certificate servers they have running.
Your solution comes to closest to mine, so I'll comment under here:
- Register the ___domain names you'll use in your LAN, point them to a public VPS and generate a wildcard TLS certificate.
- Copy the certificates to the servers in your LAN. This is the annoying part, as it needs to be done after every renewal (90 days with LE); a deploy-hook sketch to automate it follows after this list.
- Have Pi-hole in your LAN. Every serious IT professional considers it necessary for security reasons anyway. It allows you to set local DNS records - configure your ___domain names to point to the LAN servers.
This way, you get valid certificates on private servers without exposing DNS records to the public and without having to configure each client individually. Other people here still using self-signed certificates are crazy...
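If it helps, a sketch of automating the copy step with a certbot deploy hook (certbot exports RENEWED_LINEAGE to --deploy-hook scripts; the hosts, remote path, and nginx reload are placeholders for whatever actually runs on each LAN server, and key-based SSH access is assumed):

```python
#!/usr/bin/env python3
"""Sketch: certbot --deploy-hook that pushes a renewed wildcard cert to LAN hosts.
Assumes passwordless (key-based) SSH/scp access to each host."""
import os
import subprocess

# Placeholder LAN hosts and destination path; adjust for your setup.
LAN_HOSTS = ["nas.lan.example.com", "pihole.lan.example.com"]
REMOTE_DIR = "/etc/ssl/lan"

def main() -> None:
    # certbot exports RENEWED_LINEAGE, e.g. /etc/letsencrypt/live/lan.example.com
    lineage = os.environ["RENEWED_LINEAGE"]
    for host in LAN_HOSTS:
        for name in ("fullchain.pem", "privkey.pem"):
            src = os.path.join(lineage, name)
            subprocess.run(["scp", src, f"{host}:{REMOTE_DIR}/{name}"], check=True)
        # Reload whatever serves TLS on that host (service name is a placeholder).
        subprocess.run(["ssh", host, "sudo", "systemctl", "reload", "nginx"],
                       check=True)

if __name__ == "__main__":
    main()
```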
It seems like we should have something like known_hosts for ssh, yeah. As long as it’s trusting one ___domain at a time (not a root CA), would it really be /that/ bad?
This and browser vendors being overbearing about extensions (I know they’re powerful) gets me down.
> As long as it’s trusting one ___domain at a time (not a root CA), would it really be /that/ bad?
I think the unhappy reality is that yes, it would. In security terms it's a good thing the major browsers are extremely hostile to invalid HTTPS certs, and do not give the user an easy way to proceed (as somehnguy mentioned).
If you give the average user a simple Click to proceed, they will use it unthinkingly. Then, an invalid cert would no longer be a website-breaking catastrophe (which it absolutely should be), instead it would just be a strange pop-up that the user quickly forgets about, and the door is opened to bypassing the whole system of certificate-based security.
If you have the technical know-how, you already have the ability to customise the set of trusted certificates on your machine (with the possible exception of 'locked-down' mobile systems like iOS). The rest is a matter of UI.
> This and browser vendors being overbearing about extensions (I know they’re powerful) gets me down.
Similar situation. Unless carefully policed, browser extension stores can be used as attack vectors, and whether it's fair or not, the browser gets a bad reputation.
That's not equivalent and easily dangerous. Random server's TLS cert could be a wildcard or even a CA, which you should not add to your trust store with a single click.
The certificate needs to be either restricted to specific domains (preferable) or validated to make sure there aren't any suspicious attributes (seems easy to get wrong or reject many certificates).
Isn't this the default behavior of most browsers? Access an https service with an untrusted tls certificate, the browser throws a warning and offers a way to permanently trust the certificate.
Neither Chrome nor Edge offer a simple way to permanently trust the cert. I’m sure there is a way to do it but they don’t make it obvious. It’s maddening as someone who develops and distributes local network apps with https.
Sounds like you're using Windows, and I believe both ultimately outsource that to the OS, so you'd need to look whereever windows manages certificate trust to find and remove any old trusted certs. HTH.
I'm on chromium right now and it does remember my decision to trust self signed cert once I click proceed to unsafe ___domain the first time. It's not ideal, and it shows an alert to inform the user of the fake certificate, but it works and unlocks the https only APIs on a local offline environment. It's permanent until you switch user profile or click on the alert on the left of the address bar and reenable the warnings for that ___domain.
You used to be able to add your own certificates to a device's certificate store. Nonadjustable certificate stores complement planned obsolescence, and help split the market into consumer and enterprise devices, that latter of which you can charge a premium for.
I can't do that with my Chromecast, though, which is my point. There are devices that depend on HTTPS to function, but are designed such that the user who owns the device cannot add their own certificates to their trust store.
Yeah, there really needs to be a "secure, but not trusted" mode. My suggestion would be to add a "trusted-only" TXT DNS entry that a browser could check when presented with an untrusted connection (a sketch of that lookup is below).
HTTP: gray broken padlock
HTTPS+Cert: green padlock
HTTPS+no cert: gray padlock
HTTPS+no cert+trusted-only: red broken padlock
Any complaints? No? Ok, let's make it a standard! Oh wait... we're not in control of the standard, it's the world's largest cloud hosting, service and product providers. Intranets and locally-accessible embedded devices are a threat to their business model. Fuck.....
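Purely as a thought experiment - nothing implements this today, and the record value is my own invention - the browser-side check could be as trivial as a TXT lookup, sketched here with dnspython:

```python
"""Sketch of the hypothetical 'trusted-only' TXT check proposed above.
No browser does this; dnspython is only used to show the lookup itself."""
import dns.resolver  # pip install dnspython

def requires_trusted_cert(hostname: str) -> bool:
    """Return True if the ___domain opted in to 'trusted certs only'."""
    try:
        answers = dns.resolver.resolve(hostname, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # no policy published: untrusted cert gets the gray padlock
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("utf-8", "replace")
        if txt.strip().lower() == "trusted-only":
            return True  # untrusted cert should get the red broken padlock
    return False

print(requires_trusted_cert("example.com"))
```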
There's no such thing as "secure, but not trusted". The security depends on the trust. That isn't just how TLS works; it's how all secure key exchanges work.
That's exactly how mail works between servers though. Granted, it's about semantics, but virtually all mail servers accept TLS connections without the need to check cert validity of their respective counterparts.
Yes: encryption does not work in multi-hop SMTP email. Email is not a secure messaging system, and it is difficult to even build a novel secure messaging system on top of it.
Client-server TLS has a goal of actually thwarting adversaries. SMTP encryption is mostly about raising the costs of adversaries (I think there's plusses and minuses with this strategy; to some threshold, increasing costs for the US IC is actually helping them, organizationally, because the IC's real primary goal is budget-seeking).
That is very obviously not the case. A self-signed certificate protects data from being intercepted and read just as well as a signed one. The only thing "valid" certs protect from that self-signed certs don't is impersonation. In the case of an intranet or local embedded device, if someone can MITM your connection, you're already screwed - and either way, you are just as screwed getting MITMd with an unchecked cert as you are with no encryption. The difference is that without any encryption, an attacker doesn't even need to MITM you - they can just sit quietly and snoop on your packets.
One of the basic jobs of a secure transport is to prevent MITM attacks. MITM attacks are the modal attack on TLS. It's not 1995 anymore. Nobody's using solsniff.c.
Obviously, SSH authenticates. But on that first connection, when it prints the "you've never seen this key before" warning? You get that you're not getting any real security on that connection, right?
I considered that, but I think at the moment there's no concept of IP address for web certificates, it's all based on ___domain names as far as I know.
It doesn't mean it's not doable of course, but I could understand if it makes people uneasy, since it means that the same ___domain and the same certificate would behave differently depending on what it resolves to.
It may be an interesting solution to consider though. That would definitely make my life easier.
> there's no concept of IP address for web certificates, it's all based on ___domain names as far as I know
Regardless of whether you can or can't issue a certificate with a CN of an IP address, the browser doesn't receive the certificate in isolation, it receives it from an IP address, and can handle certificate validation differently depending on what it's connected to.
This may be a terrible idea for reasons I haven't considered (it probably is), but I can't think of any off the top of my head right now.
EDIT: this is probably terrible because someone can just stick a MITM proxy on your lan, and poison your DNS to resolve google.com to a RFC1918 address and boom.
You absolutely can get a certificate for an IP address. Clients verify them against the subject alternative name (rather than just the common name), and a subject alternative name has various field types, including IP address.
A quick Google search shows various certificate authorities who will issue certificates for IP addresses.
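For what it's worth, the SAN mechanics look like this (a sketch with Python's `cryptography` package; the address is a documentation IP, and whether a public CA will actually sign such a CSR is a separate matter):

```python
"""Sketch: a CSR whose SubjectAlternativeName carries an IP-address entry."""
import ipaddress

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "203.0.113.10")]))
    .add_extension(
        x509.SubjectAlternativeName([
            x509.IPAddress(ipaddress.ip_address("203.0.113.10")),  # IP SAN entry
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

print(csr.public_bytes(serialization.Encoding.PEM).decode())
```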
Well, RFC 1918 addresses just specify which ranges should not be advertised into the default-free zone (aka the internet routing table). That says nothing about whether a network is a LAN or not.
One could totally build a network with globally routed addresses, and not announce those addresses to the rest of the world.
> "One could totally build a network with globally routed addresses, and not announce those addresses to the rest of the world."
Could, and people do; I've worked on networks where the original people must have misunderstood networking and did the equivalent of using 10.0.0.0/8, 11.0.0.0/8, 12.0.0.0/8 for internal networks, including public /8s they didn't own, so they lost access to one or two chunks of the internet - and it never seemed to cause all that many problems for them working this way (so no motivation to do a big involved risky rework-everything project). We added new private network subnets for new build things, but never swapped everything over. It'll phase out eventually I guess.
On a related note, pinning the public keys of TLS certificates in browsers used to be a thing (HPKP) and it did mitigate certain classes of attacks, with caveats (e.g., hijack a ___domain using an "incompetent" ___domain registrar and MITM clients that previously visited the site; this happens more than you think[1][2]).
Given that it was configured using HTTP headers, and that the average site has buggy webapps and such that could be used for header "injection" independent of the webserver, it was unfortunately considered a theoretical persistent DoS vector, and thus removed from browsers.
I'm not convinced other solutions (CAA, CT) are adequate replacements because at best they are reactive (versus preventative) solutions, and CAA assumes all CAs are properly checking DNS records at the time of issuance and that those DNS queries are not being intercepted, which is a big assumption in my book.
> https with self-signed certificate, which basically makes any modern browser tell its user that they're in a very imminent danger of violent death should they decide to proceed, or just go with good old HTTP but then you hit all sorts of limitations, and obviously zero security.
If you go the self-signed route, you'll encounter devices that simply won't work with it, especially IoT devices. If you go the HTTP route, you'll still encounter devices that simply won't work. For example, you can't cast an HTTP resource from Chrome or Chromium to a Chromecast, even if it exists on your LAN.
As long as there are devices that you can't insert your own certificates into their certificate stores, this will be an issue.
>I wish there was a solution for those of us who develop web interfaces for embedded products designed to live on LAN, often without any internet access and no well defined ___domain name.
Don't use the browser?
I understand the temptation to use the browser, but this is the price you pay for using someone else's platform: They're free to close whatever door they want.
HTML renderers are a dime a dozen. Electron is a thing. Serve the HTML using _another_ renderer.
JavaFX isn't bad either, but I'm not gonna sell anyone on that front because I don't even use it myself; the option is there, though.
Please stop using my browser in obscure and painful ways just so you can (understandably) avoid having to write a native UI.
I would much rather have a web UI, please. A router, webcam, network storage, firewall, etc. that responds on ports 80 or 443 and serves up a usable web interface is a dream: all you need is a username and password and you can guess the rest.
The distant past where you needed a desktop program, and that needed a login to the manufacturer's website and a support contract to download, and it's never native, it always needs Java and must be 3 versions out of date to work, then needs fiddling with Java's execrable and innumerable "security" prompts, then uncommon ports to be opened, and the older the device is the more likely it is to not work with UAC and need to run with Admin rights and depend on old versions of libraries, and then you end up with one carefully curated fossilised-in-amber management VM for that specific device; that time was much much much worse.
A 3D printer where you need CAD software to make much use of it, fine, have a desktop program. A thing which only needs an IP address for management and maybe it to talk to cloud services, browser web management absolutely any day please.
Telling users that, in order to connect their new router to the internet they must first download your native app from the internet... sounds like a non-ideal design?
To be fair, my current ISP (xFinity) and my previous ISP (Google Fi) both technically support some kind of web interface, but both push hard to get you to download their app and use that instead.
I had no problem (in either case) just downloading the app and using it to set up my networks. Most people know how app stores work now.
IMHO, what's needed is a way for appliances to obtain a real, permanent global ___domain name (and therefore certificates, email, etc.).
There's a bunch of mostly-obsolete rules that assume domains are held by organizations with full-time staffs (like whois contact records, per-___domain ICANN fees, UDRP, etc.). A TLD like ".thing" that allows non-humans to claim permanent global names on a first-come-first-served basis would let autonomous hardware devices integrate with the existing infrastructure without hacks and exceptions. Maybe a ccTLD could be convinced to do this?
> I wish there was a solution for those of us who develop web interfaces for embedded products designed to live on LAN, often without any internet access and no well defined ___domain name.
I don't know how it could work if you're truly disconnected from the Internet, want to connect arbitrary machines with no setup (not install local CA certs), and don't want any sort of prompt on first use (the TLS-PSK someone else mentioned). We might just be stuck with http in that case. Chrome isn't turning off http support, just changing the default behavior to try https first.
What I can imagine though is home LAN appliances being able to get certificates automatically when you have an Internet connection, have a ___domain name (they're pretty cheap), and set up your router for it. The router could present a (hypothetical) DHCP option saying "get certificates from me" (maybe via the standard ACME interface) and use the DNS-01 challenge with the upstream ACME server (letsencrypt) behind the scenes on each request.
This is certainly more complicated than just doing the DHCP request for a hostname and being done, and it makes your appliance hostnames public, but you wouldn't have to make appliances accept traffic from the Internet, much less have all your traffic proxied through some cloud service. And I can imagine it being a standard router feature some day with a wizard that walks you through the setup.
Probably the best solution (and I have found myself in the same situation, btw) would be a fork of one of the browsers that would, for example, only browse to addresses in the user's hosts file. That way, it could still piggyback on all of the web interface machinery of a modern browser, but things like the "omg that's a self-signed certificate, you will die now" warnings could be safely removed, since the browser would only be able/willing to go to addresses in the user's hosts file. Just a thought.
We've known the solution forever: public-key cryptography [using asymmetric keys to share a symmetric key]. Every single server admin in the entire world already depends on it to secure access to their servers. You might also know it as "ssh keys".
You connect once, it says "this is the first time you've visited this site. please confirm these numbers are legit???", you compare some numbers, then you save those numbers, and if they ever change, the browser screams. There are ways to make it more user-friendly, such as QR codes or serial numbers.
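Outside the browser, that flow is trivial to sketch (the host and pin file are placeholders; this is the SSH known_hosts model applied to TLS, not anything browsers actually expose):

```python
"""Sketch: trust-on-first-use pinning of a server certificate, known_hosts style.
The first connection stores the certificate's SHA-256 fingerprint; later
connections refuse to proceed if the fingerprint ever changes."""
import hashlib
import json
import pathlib
import socket
import ssl

PIN_FILE = pathlib.Path("known_certs.json")  # placeholder "known_hosts" for TLS

def fetch_fingerprint(host: str, port: int = 443) -> str:
    # Certificate verification is disabled on purpose: we pin the raw cert instead.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def connect_tofu(host: str, port: int = 443) -> None:
    pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
    seen = fetch_fingerprint(host, port)
    key = f"{host}:{port}"
    if key not in pins:
        print(f"First visit to {key}; pinning fingerprint {seen[:16]}...")
        pins[key] = seen
        PIN_FILE.write_text(json.dumps(pins, indent=2))
    elif pins[key] != seen:
        raise RuntimeError(f"Certificate for {key} CHANGED -- possible MITM, refusing.")
    else:
        print(f"{key}: fingerprint matches pin, proceeding.")

connect_tofu("device.local")  # placeholder host on the LAN
```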
You could do this a million different ways to differentiate it from the rest of the internet. They could require non-DNS-compliant names so the services could never route to the internet. They could dedicate a TLD like ".lan" or ".local" to it. They could add a new protocol prefix, like "local://" (but that doesn't jibe with their vision of completely eliminating the address bar). You could just create a new PKI cert attribute that specifies this is a local-only cert and to use public keys, and the browser could enforce that the IP address could only be RFC1918 (but this is a terrible idea, as a hacker could just proxy requests from your router to bankofamerica.com or something).
Good luck getting any browser vendor to accept it if it doesn't personally benefit them. You could try bribery.
This is a huge pain point for me as a user, too. I interact with a lot of managed network devices that have web UIs as their primary configuration method. All existing options for this stink. At least Firefox still lets me click through the security warnings - in Chrome you have to know the 'thisisunsafe' incantation, and who knows when that will just go away. There has to be a better way.
Seems like a browser could treat the scenario when a user types an IP address differently from a normal ___domain name resolution (eg even just changing the messaging to be less scary).
If you're actually using ___domain names on your LAN maybe you just have to bite the bullet and sign certificates too. You don't need internet access to have a properly signed certificate.
A hypothetical solution would be to stick with the DNS SD / avahi concept and use domains ending in .local.
That would have the benefit of being able to create a CA for that ___domain that can cross-sign your certificates, in order to avoid the snakeoil-cert workflow; it would have to stick to local IPv4 ranges and the fe80:: IPv6 prefix.
I've been digging through the DNS-SD specifications lately (and how AirPlay, AirPrint, AirScan and others work)... and I'm mind-blown by how simple all things IoT could be if everything supported the DNS service discovery RFC.
In a parallel world nobody has IP problems, and nobody has problems connecting to their printers.
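To give an idea of how little the device side takes, a sketch with the python-zeroconf package (the device name and address are placeholders):

```python
"""Sketch: advertise an HTTPS web UI via DNS-SD/mDNS with python-zeroconf."""
import socket
import time

from zeroconf import ServiceInfo, Zeroconf  # pip install zeroconf

info = ServiceInfo(
    "_https._tcp.local.",
    "My IoT Device AABBCCDD._https._tcp.local.",
    addresses=[socket.inet_aton("192.168.1.50")],  # placeholder LAN address
    port=443,
    properties={"path": "/"},
    server="myiotdevice-AABBCCDD.local.",
)

zc = Zeroconf()
zc.register_service(info)
print("Advertising https://myiotdevice-AABBCCDD.local/ ... Ctrl-C to stop")
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    pass
finally:
    zc.unregister_service(info)
    zc.close()
```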
The way it's currently going though, I don't see any legacy firmware working in the near future due to CORS. I mean, most of the admin interfaces will break, probably, because of how they use forms to submit configs and settings.
> I've been digging through the DNS-SD specifications lately (and how AirPlay, AirPrint, AirScan and others work)... and I'm mind-blown by how simple all things IoT could be if everything supported the DNS service discovery RFC.
It's amazing when you see it all, isn't it? IMO the slow process is communicating to product owners and IPv4-locked technical architects about how these things are actually supposed to work.
I wish we could get an industry standard making .local official; then we could get browser-level support for ".local is allowed to be self-signed". Of course, a crapload of software still recommends against .local. And consumer routers and the like don't create a nice .local ___domain for all your crap.
I wonder if you'd be able to use a .local name and somehow get a proper signed certificate for it, which you embed into your local device.
Alternatively (and this is pretty hacky, but should work), you get a full ___domain unique to your device, e.g. mydevice-admin.com, and you require users to run a small installation script on their local devices that'll map that hostname to the local IP of the device. If not setup, it'd show a webpage that has instructions on how to run said installer. Then you embed the certificate on your devices.
There's some obvious security risks on this (if someone could extract the certificate and MITM the website and then trick people into running a malicious installer…) but at least your default experience would have the desired green check-marks.
let's encrypt with *.lan.mydomain.com via DNS validation, installed all over where needed, and annoying to update every 90 days because it's in weird/internal/non-standard places :)
I develop broadcast TV equipment which is often rented all over the place for short amounts of time, often without any direct internet access, etc.
I simply cannot make any assumptions about the network these devices will run on, and can certainly not rely on any sort of DNS validation. Virtually 100% of the time the devices are addressed directly by IPv4. I really can't think of a solution for this situation.
For networks you control, your solution makes a lot of sense though.
That's quite a cool hack, but as I mentioned elsewhere in this discussion it's still a lot of infrastructure to let a user connect to a device on their own lan, and it still presupposes that the user will have a robust internet connection when they need to use the device.
That makes a lot of sense for Plex, but it's really not applicable for my equipment.
Interesting. I’m currently working on an “IoT” device and this seems like it could theoretically work. One concern I have is that there’s an initial step where the device creates an access point so that you can enter wifi credentials that it will use to connect to your home network. In this case, the device connecting to the local server will not have internet access, and would not be able to resolve the plex.direct ___domain. Maybe I can rely on the browser dns cache, but that seems pretty sketchy...
Perhaps you could distribute an Electron style client that has a self-signed certificate pre-configured and ask your clients to interface with the equipment via that?
If you were using Electron you wouldn't have to worry about browser support either as you'd just have to target Chrome/Blink.
Just brainstorming ideas here, someone will probably shoot me down.
DNS validation can entirely be done by a server on the internet, which does all the stuff necessary to get the certificate, and then gives the certificate to your end user device.
All the end user device needs is a connection to the internet once per 90 days. The vast majority of networks have sufficient network connectivity for this.
I think the more frustrating point is that you need all this internet infrastructure (with running costs) at all - even if your device has nothing to do with the internet at all.
If the vendor of your dumb device goes out of business, you can just keep using the device until it breaks down.
If the vendor of a "smart" device - or now anything with a modern web interface - goes out of business, the device will have a broken UX at best and at worst turns into a brick 90 days later.
In the end, browsers are now a platform and you have to register and pay a subscription fee to make use of them.
> All the end user device needs is a connection to the internet once per 90 days. The vast majority of networks have sufficient network connectivity for this.
I'm no expert on long-tail use cases, but I'd imagine that most networks either have internet connectivity or they don't. I can't think of many situations where you'd only have internet once every 90 days.
Of course one could argue that 90 days is long enough that an employee could go around with a memory stick and manually copy the certificate to every device - which is theoretically possible but sounds like a ridiculous thing to do just to keep a web interface workable. (And even then, you'd somehow need a unique certificate for every device.)
> I can't think of many situations where you'd only have internet once every 90 days.
Having worked in broadcast news, I can think of hundreds.
News doesn't happen in the newsroom. It happens in the field. And very often in places without internet access. Sometimes for weeks or months at a time. (Think siege at Waco, plane crashes, hurricanes, etc.)
I work in broadcast news, specifically in connectivity in the field. I can't think of any time we'd be without some form of Internet for more than a couple of days, depending on how you count China as having Internet.
There's not much point in doing broadcast news if you can't file, and you can only file if you have an IP connection (we do have some non-IP satellite, but without IP you wouldn't be able to do much in the way of production - no production system, no email, no phone).
Covering natural disasters is why we have BGANs and generators and MREs and water cleaning kits. Internet access is as essential as any other high-risk safety equipment, and there's no point deploying if you can't file back.
I worked in embedded radios for the broadcast industry and SCADA applications for a spell and dealt with the same problems that the GP is describing. Many of these systems are composed of networks built on top of radio modems. It is very common that these devices can't reach the Internet...
I've literally never seen anybody use a ___domain name to address my devices, only a simple IPv4. That already makes it a nonstarter, but let's entertain the idea. Maybe I can convince my clients to change the way they work; they generally love that.
Just going through the trouble of having the customer mess with their OS's DNS resolver to connect to the device is ludicrous. Can you even do it on Windows without having to deal with the hosts file manually?
Then they need to remember to update it when the IP inevitably changes because they've moved on to a new project with a different address plan, and they'll invariably forget to change it, or forget how to do it, or do it wrong, and then call me to help them.
And on top of all that, no, I can't really expect my devices to have internet access, not even once every 60 days. It's not uncommon for broadcast operations to use costly and critical satellite links for internet access, and they're not going to let random devices use those without a good reason. More generally, even if there's an internet connection available, they're likely not to configure the gateway correctly, then complain when, the night of the big event, their browser refuses to connect to my equipment, saying that they're about to be h4x0r3d.
My use case is certainly a niche, but I think it's probably a significant one given that everything and anything has a web interface these days. Cameras have web interfaces for configuration, mid-range smart switches and routers have web interfaces, I've even seen power plugs with web interfaces...
There are a couple of possibilities here, given that we're talking costly and critical satellite links.
1. You set up a private PKI using a private CA. AWS will sell you one for $400/month. Install the public root in the trust stores of the clients and then issue 5-year private certs, either for IP addresses or for .local names with DNS-SD (a rough sketch of this option follows below).
2. You create a private ___domain at private.example.com. You get Let's Encrypt to issue a wildcard for *.private.example.com. Then you set up private DNS for that zone. The server cert needs to be updated every 90 days, which comes down to downloading a single PEM file from your internal network.
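For option 1, the issuing side doesn't need much tooling. A minimal sketch using the Python 'cryptography' package; the names, lifetimes and the 10.0.0.50 address are made up, and a real deployment would keep the CA key offline and track serials/revocation:

    import datetime
    import ipaddress
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    now = datetime.datetime.utcnow()

    # self-signed private root (this is what goes in the clients' trust stores)
    ca_key = ec.generate_private_key(ec.SECP256R1())
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Broadcast Private CA")])
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name)
        .issuer_name(ca_name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    # 5-year leaf cert for a device addressed by a private IP
    device_key = ec.generate_private_key(ec.SECP256R1())
    device_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device-01")]))
        .issuer_name(ca_name)
        .public_key(device_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=5 * 365))
        .add_extension(
            x509.SubjectAlternativeName([x509.IPAddress(ipaddress.IPv4Address("10.0.0.50"))]),
            critical=False,
        )
        .sign(ca_key, hashes.SHA256())
    )

    with open("device-01.crt", "wb") as f:
        f.write(device_cert.public_bytes(serialization.Encoding.PEM))

The browsers only complain because the root isn't in their trust store; once you control that (option 1), IP-address SANs and multi-year lifetimes are entirely up to you.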
I don't think Let's Encrypt is going to be the right use-case for you then. I think your best bet is to work with another CA to get your company an intermediate cert that you can use to issue longer certs that include ip addresses in the SAN.
Then it's just a matter of the devices connecting to the internet at least once a year and doing a very simple "Hey, I'm $device and using $address. Issue me a cert plz."
> work with another CA to get your company an intermediate cert
I'm no expert, but wouldn't that make GP's company effectively a delegate CA? This seems like it would need a very close relationship with the original CA - and all just for a simple web interface.
> include ip addresses in the SAN.
Not sure if this may be different with intermediate certs, but you won't find any public CA that will add private IP addresses as a SAN - as this would undermine the whole security model. If any CA did this, Chrome would likely ban them quickly.
I'm sceptical a CA would let you do that with intermediate certs if there is any danger the leaf certs get into the wrong hands (e.g. because the devices are sold, someone reverse-engineers one and manages to talk to the back-end service).
Let's Encrypt doesn't really like that use case, by the way, because you'd have to do the validation and store the cert somewhere else and then download it to the box in question, since the service itself can only live inside an intranet.
A suggestion would be to treat destinations that do not need to traverse a gateway (i.e. the local network) differently.
Browsers would need to implement this.
Further, they could present an initial dialog to 'trust' the local network. Obviously we don't want to do this on public Wi-Fi networks, but the OS already has a concept of 'private' vs 'public' networks, and the browser can easily know whether the destination needs to be routed or is local.
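The "does this destination cross a gateway?" test itself is cheap. A toy sketch of the check a browser could make, with the interface prefixes hard-coded here instead of being read from the OS:

    import ipaddress

    # in reality these would come from the host's interface configuration
    local_networks = [
        ipaddress.ip_network("192.168.1.0/24"),
        ipaddress.ip_network("fd12:3456:789a::/64"),
    ]

    def needs_gateway(dest: str) -> bool:
        addr = ipaddress.ip_address(dest)
        return not any(addr in net for net in local_networks)

    print(needs_gateway("192.168.1.23"))   # False -> candidate for relaxed local rules
    print(needs_gateway("93.184.216.34"))  # True  -> normal public-internet rules

The hard part isn't the check; it's deciding when relaxing the rules is actually safe, which is exactly what the "trust this network" prompt would have to carry.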
You can get embedded devices with TLS running in 200K of RAM. Cypress has WiFi MCUs based on Cortex M4 running their WICED stack, which is a customized FreeRTOS, LwIP, and MbedTLS. I can't wholeheartedly recommend it as a platform because their modifications to reduce memory consumption break the MbedTLS API in subtle ways that make porting code more difficult than it needs to be, but it is possible to get secure networking on something less capable than an RPi.
It's funny how pushing stringent privacy and security defaults in one ___domain degrades the privacy and security experience in another ___domain. My jaded takeaway from the last 5 or so years is that the internet companies (understandably) don't care about non-internet experiences. I empathize with how annoying that reality is because internet technology certainly works locally if you configure everything correctly.. just not how things work by default.
The actual problem though is fascinating and there is a ton to unpack. For one, the internet was always supposed to be zero-trust. Security-by-NAT was an accident not a feature. And IPv6 enshrines this reality (and it breaks my heart when I see tech companies trying to make IPv6 work like IPv4 in the home). The privacy problem is not really an issue if everything is appropriately firewalled and communicating securely. And as you mention, half the IoT things out there only use a central server to get around what is ultimately a problem NAT introduced: devices don't have public IPs and aren't 1st class internet citizens.
Here's how it's supposed to work:
Your ISP delegates you an IPv6 prefix. As the gateway to your home site, your router advertises the public prefix, as well as a ULA prefix, to your home devices. Devices construct both permanent and temporary IP addresses from the public and unique local prefixes (at this point a device that needs public internet access has 4 IP addresses). That's just IPv6 so far, but the point is that devices have multiple addresses: ones for public communication and ones for local communication.
Once you have the setup above you can do this:
Your ISP provides you a ___domain (optionally you purchase your own vanity ___domain, of course) and their nameservers delegate to your gateway device (or possibly some other one, but conveniently your gateway) as the nameserver for your home site. After a device comes online it dynamically registers its desired hostname with the gateway, which will now respond to DNS queries with its address. The gateway should also serve PTR and SRV records for your site, unicast DNS-SD style. Your nameserver can serve the public record if the request comes from a public IP and the ULA record if the request comes from that prefix. All public traffic stays public and all local traffic stays local.
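The split-horizon part is just "answer based on where the query came from". A toy sketch with invented prefixes and addresses (a real resolver would do this per zone, e.g. with views):

    import ipaddress

    ULA_PREFIX = ipaddress.ip_network("fd12:3456:789a::/48")
    RECORDS = {
        "printer.home.example.net": {
            "public": "2001:db8:1::42",
            "local": "fd12:3456:789a::42",
        },
    }

    def answer(qname: str, client_addr: str) -> str:
        record = RECORDS[qname]
        client = ipaddress.ip_address(client_addr)
        return record["local"] if client in ULA_PREFIX else record["public"]

    print(answer("printer.home.example.net", "fd12:3456:789a::99"))  # -> ULA address
    print(answer("printer.home.example.net", "2001:db8:9::1"))       # -> public address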
Since your devices now are publicly routable, they get certs using ACME. If you want to run local ACME on your ULA network, go for it, but you'll continue to run into the original problem that browsers aren't configured to use your local CA by default and getting users to bootstrap that is essentially impossible. In that vein I do wish there was a way to start a browser window for "local" browsing where it only trust one CA (your local one) and thus isn't mixing public and local security ___domain concerns.
If devices don't want to communicate on the public internet because that's a privacy or security concern, then they simply don't provision themselves a public-prefixed address or add a public DNS entry, etc.
In short, NAT killed the internet and we're still recovering from it.
Picking specifically on the claim that IPv6 doesn't degrade user privacy.
In an IPv4 + NAT overload residential network, google can see 10 different accounts logging in from a single IP address.
In an IPv6 + privacy-extension-addressing residential network, Google can see the unique IPv6 address used for each login, and can tell that five of these 10 accounts are coming from the same address while the other five are coming from five distinct addresses.
That's more than it was able to glean before.
IPv6 was designed at a time when NAT was a hack to extend address space, before we had pervasive surveillance on the internet. IPv6 privacy extension addressing is a hack to try to address the fact that we now have pervasive surveillance on the internet.
The privacy properties of IPv4 NAT overloading came about by chance rather than by design, but tragically the IPv6 privacy extensions are worse by design than NAT was by chance.
You're not wrong.. but honestly I don't understand what people expect when _browsing the internet_. If "all my household devices come from the same IP" is a privacy requirement for you then you're always going to need NAT. And the advantage of your scenario disappears if any of those devices actually communicate with google's servers because then you can profile the requests, look at user agent, probably get different tokens, etc. Your nit seems so marginal to me I don't understand how championing this type of privacy paranoia is beneficial for internet technology. If you're worried about google seeing your traffic the answer is simple: don't send it to them.
Yeah I came here to say just that. It's really annoying when Firefox is stuck on https for some reason, maybe history? So I have to test if one of my LAN services works with curl.
I think it has to do with history, so I have to clear all history for that site and then start using it with http, after which it works fine. This only happens in Firefox.
Strict-Transport-Security perhaps?
My preferred method for testing is a clean profile
(firefox -no-remote -P) or just ctrl-shift-p to open private browsing which does not read SiteSecurityServiceState.txt
An arguably "not finished" solution for me has been to ramp up adoption of IPFS and use Brave as a default until other browsers support it. It is speeding up my transition to full fledged adoption.
Just host a certificate + private key at a well known ___location and have all the devices download / update it every month. /s (I'd bet someone has done this).
> I wish there was a solution for those of us who develop web interfaces for embedded products designed to live on LAN
There almost is! Instead of self signed certificates, use a certificate authority, and install that on the LAN's machines. https://github.com/devilbox/cert-gen
You can use macOS Server or Active Directory to push out the Certificate as trusted.
It's not perfect, but it's close enough for a LAN.
That makes a lot of sense. HTTPS adoption is now very high[1], and this might push it a little bit further for sites that don't redirect to HTTPS automatically. I've been using Firefox in the experimental HTTPS-only mode, and the web is quite usable without cleartext HTTP.
It's not a big change from security perspective though. HTTP requests shouldn't be getting cookies any more, and the redirect isn't revealing anything new (not until we get DoH + ESNI). You still need HSTS preload to defend against active downgrade attacks.
An interesting side effect of the change is that sites won't need a working HTTP redirect any more. Inevitably, there will be sites that let their HTTP versions rot and break, which will eventually force all other web clients to default to HTTPS for web compatibility.
HTTPS adoption is hard enough that the vast majority of shared hosting providers haven't automated cert provisioning and are delegating this process to their users.
The push towards forced HTTPS has significant costs, which most people in this filter bubble don't want to honestly discuss.
> The push towards forced HTTPS has significant costs, which most people in this filter bubble don't want to honestly discuss.
I agree, but I'll push back by saying that delaying HTTPS adoption and getting lax about it has a much higher cost -- and that is similarly a cost that most people pushing back against HTTPS either downplay or refuse to acknowledge.
And more than that, those critics have shown that they're not interested in solving the problems that they bring up or in finding ways to mitigate them. So it's not like we can wait a year and adoption will suddenly get easier. The critics of HTTPS aren't moving forward. They don't want HTTPS adoption to slow down while they catch up, they want it to stop so that they don't need to move forward at all.
We've seen significant improvements in usability for HTTPS for ordinary people, from LetsEncrypt, to Cloudflare, to Netlify. Holdouts like Gitlab and Github don't provide an easy way to provision certs by default. A lot of other smaller hosting providers are ignoring the problem entirely. This will get better over time as more hosting providers realize that this is a feature they have to provide to be competitive.
But that's the thing. Smaller hosts are ignoring the problem and they will continue to ignore the problem until they're forced to upgrade their infrastructure to support solutions like Certbot. Because MITM attacks aren't their problem, client privacy isn't their problem. They are not going to get better support until they literally don't have any other option.
That changes our calculations; where a decade or two ago we might honestly argue that immediate, harsh incentives to switch to HTTPS had too many downsides, we're now in a position where we realize that harsh incentives are the only way that HTTPS infrastructure is going to improve at all.
And that has significant implications for people's privacy and security online. We're at the point where even though it's a barrier of entry for some people, everyone should still be using HTTPS on any public site that they build, period. Honestly, I'm in the process of trying to find good HTTPS schemes for intranet sites too. We need to move forward on security.
To add to that https I think is one part of the puzzle.
This weekend I watched a bunch of scam-bait calls. Very amusing to watch the scammers get taken down. But there are people out there mailing $20k in FedEx boxes to scammers, all because their browser showed "your balance is 20000" and the scammer claimed "I transferred too much to the account, could you mail that back to me so I do not get in trouble?". It is a very human attack using a combination of manipulation, fear and compassion. The bottom line is these people unknowingly allowed someone to open the debug console in their browser and change things.
What I have been trying to figure out in my head is how do we do a digital watermark on data presented to the user? How do we at a minimum tell the user their data has been manipulated? You can have all the TLS on every level but at one of the most important ones there is nothing. A scammer can open a debug console on a https presented page just as much as a http page.
An explicit caching server that I integrate into my site on purpose is a totally different category of device from a random router sitting in an airport.
This is like arguing that because Linode can log into my server and examine/edit my files, then we might as well stop requiring a password when random people online try to SSH onto the machine. Trusted agents and untrusted agents are not the same.
In the same way that any webhosting company "does MITM", your load balancer "does MITM", ... The term makes little sense for a party that's explicitly part of the hosting infrastructure a site owner chose to handle HTTPS.
Google is not the only company pushing for these changes, every mainstream browser is trying to increase HTTPS adoption. And you shouldn't be waiting for them to do so, you should already be running the HTTPS everywhere addon[0] in your browser today.
The overwhelming consensus in the security industry is that end-to-end encryption increases security and privacy. Overwhelmingly, security professionals recommend using HTTPS.
Of course it's tied to user privacy, there are multiple examples of not just malware authors but also corporations like Verizon, public wifi administrators for large establishments, doing sniffing and MITM attacks on non-HTTPS traffic. It's absurd that E2E encryption in the browser has to be defended to people, it is absolutely a privacy and security issue.
I dislike Google's privacy stances too; I'm regularly on the Google hate train. But if you're leaving your Internet traffic unencrypted just because Google is one of the companies telling you to encrypt, then you are pointlessly cutting off your nose to spite your face.
I can tell you from personal experience that they are in the process of going out of business.
Traditional shared hosts got their lunch eaten starting almost a decade ago with a combination of site builders like Weebly on the user friendly side and AWS on the technical side.
In 2013 most of my social group was friends I made in the shared hosting industry. Now I don't know a single person still working for any MSP as they've all needed to find greener pastures as the companies get bought up by conglomerate vampires that will milk the remaining customers (there aren't many new ones) for what they're worth until the companies finally die.
Looking to shared web hosts for guidance is like looking to 2005 to decide what's cutting edge.
They're done for. Shared hosting is over. RIP cPanel, Plesk, and the whole lot
And for technical users who find AWS/GCP/Azure and friends too expensive for whatever reason, there's enough small bargain basement VPS providers around that still beat the prices of the shared hosting providers while providing way more flexibility. I run my personal blog using a mom-and-pop KVM VPS provider that costs $2 per month, and I get full control over whatever stack I want to run.
Shared hosting is awful, I don't know why anyone would ever want to go back. Here's an Apache server we set up, it's got every module under the sun enabled along with the associated security holes. You get one PHP version that we upgrade at our leisure, and a shared MySQL server that you pay per database for. Eugh.
For my use case, I upload a bunch of HTML files via SFTP, and it just keeps working. I don't have to deal with the server software, someone who can dedicate a lot more time does that for me for a nominal cost (because keeping the server for 10000 people updated is only marginally more difficult than me keeping my own server updated).
I pay the same or less than I'd pay for a small server, and someone who provides the benefit of a managed platform gets some profit for the value (hassle free website) they created. In exchange, I save an hour or two of fiddling with the server per year, which makes this a great deal. You're not paying for infrastructure, you're paying for the "managed" part of a managed service.
Could I just use a storage bucket? Probably. But I'd have to figure out how to make Let's Encrypt work with that, and if someone decides they hate me and downloads my site with a million bots several times per second, I'm getting a bill that costs me more than a lifetime of shared hosting.
If I were to use PHP and MySQL... they'd probably still update it more diligently than I would after a year when I get busy with other things.
That's probably worth a try for me. For e.g. some small nonprofit, they probably would be happy to pay $24/year more and have everything hosted at the same provider that handles their ___domain registration.
If you aren't technical enough to manage provisioning yourself, or don't want to, you are most likely throwing the site behind Cloudflare which automagically terminates TLS for free, or paying for some other CDN that does the same. Or if you really aren't technical at all, you would be using Wordpress.com or a similar platform that takes care of it for you.
Actually, I have seen several ___domain registrars that want to advertise "free SSL" implementing some sort of basic Let's Encrypt provisioning tool that automatically updates the TXT records to get them.
The biggest problem I'm having is that our edge firewall doesn't play nicely with it for some reason. I get these random websites that refuse to work in Firefox, but they always work fine in Chrome. And it's not certificate errors, it's just "connection reset by peer".
I'm not entirely sure how it's working, but I've seen a few other people with these issues at the mozilla bug tracker and it's always just sort of either ignored or dismissed or people say it's caused by antivirus (which I don't have). I can't figure out how to debug it because literally everything else I try works. curl/wget/etc. I asked IT about it and they just said that I should use Chrome.
Firefox just gives a "connection reset by peer" error and literally nothing more. Changing the new firefox security settings has no effect. The exact same url put in chromium or chrome or curl or wget loads fine. I cannot find anything that doesn't work except firefox. So either the firewall is specifically targeting firefox for some (and only some) https sites or there's something going on in firefox. It's literally the only reason I'm considering not using firefox at this point. I've just got no idea what to do or how to debug this. Thankfully, most major sites work fine. But it's annoying when some website blocks off like this.
> The biggest problem I'm having is that our edge firewall doesn't play nicely with it for some reason. [...] I'm not entirely sure how it's working, but I've seen a few other people with these issues at the mozilla bug tracker and it's always just sort of either ignored or dismissed.
This sounds like you have some expensive enterprise equipment that is doing funny things with your TLS connection, but instead of complaining to your enterprise vendor that you likely pay a lot of money to, you complain to an open source project that probably has nothing to do with it.
I mean that's fair, but it's really frustrating that for whatever reason all the other clients work. I just want to figure out what's going on and how the firewall knows to target firefox. Like I said IT's response is "well who cares just use Chrome".
Someone please correct me if I'm wrong, but I do think Firefox ships their own root certificates with their browser, while Chrome uses the system ones. It's possible fluidcruft's employer has installed new root certificates so they can analyze/inspect the traffic through their network and Chrome is happily rolling along, while Firefox does not like it because now the connection effectively has been broken.
I'm pretty confident that shows up as a certificate error with its own scare page, not a connection reset. Usually on cert errors you can click through all the warnings and accept the risks to connect anyway. Connection reset is just a complete dead end.
I have pondered whether there's some way for the firewall to have a whitelist of sites that it allows without SSL MITM and otherwise sends reset for unknown SSL sites unless the firewall's MITM certificates are used. Our IT might be paranoid enough to do that.
I don't have any evidence that the firewall does any MITM other than I found it listed as a feature that can be configured on the firewall's website when I've been googling. But even then the error people complain about is bad cert not connection resets. I have tested with Chrome on systems that IT cannot have injected certificates into and they do work. I do know that I can access my home server's let's encrypt site using firefox and chrome without problems and the keys are correct. So it's not really high in my suspicion list.
Anyway it looks like I need to learn how to packet sniff.
Sounds like an engineer's way of saying "I think it is like this, but I should double-check just in case", because who knows what the browser will show when something different happens?
"Connection reset by peer" usually means that either the remote host or an intermediate gateway/firewall sent a TCP RST packet, which is not used during normal connections. Wireshark should help figure this out -- it's a superpower for anyone working in dev or IT.
Huh? If everything but one browser works, the suggestion will be to avoid using that browser unless you can show the expensive equipment is doing something wrong.
So Firefox showing nothing more than a connection reset message does not help at all.
A trouble ticket that says chrome works, wget works, curl works, IE works, but my firefox browser with 10 privacy plugins does not work - is NOT going to get a good response from your support contact.
So, it sounds like FF is sending something that's causing an RST to be emitted from either the website or (more likely) your appliance. Next step would be to pcap/tcpdump a connection from both a working browser and FF, and see what the difference is. That kind of information is a lot more useful to FF devs than "something is happening that causes an RST from someone".
I've had exactly the same issue for 2 years and I have narrowed it to dropped packets on the network.
I'm connected to the internet through a WiFi bridge that periodically has transmission issues (over a 200-meter distance), and if the share of successfully transmitted packets (I can see it in the antenna panel) is anything other than 100% and the negotiated speed is anything lower than the maximum 54 Mbps, then neither Safari nor Firefox will be reliable and I will get a "connection reset by peer" error that goes away after a refresh. Chromium-based browsers work fine; I'm using Vivaldi with no issues. On Windows, tools like curl and wget work with no issues, but not on macOS - it looks like a deeper problem with its network stack that Chromium browsers somehow avoid.
It might be correlation without causation, but I also was unable to find a good way to debug this issue.
Oh interesting. Firefox says it's both enabled and set to use 1.1.1.1, and I figured that nothing would resolve if it wasn't... but if https://1.1.1.1/help is correct then it's not actually working and something else is happening.
I know I tried setting and disabling that when I was testing but I saw no change. I don't remember setting 1.1.1.1 but I may have enabled DoH. I'll see if changing the DNS server in firefox to whatever everything else is using helps.
Edit:
It looks like Firefox just silently falls back to non-DoH system defaults if it's not working. Good to know, I guess. Not really sure what the point of DoH is if networks can just silently override the setting.
> Mozilla has announced plans to enable DoH for all Firefox desktop users in the United States in 2019. DoH will be enabled for users in “fallback” mode. For example, if the ___domain name lookups that are using DoH fail for some reason, Firefox will fall back and use the default DNS configured by the operating system (OS) instead of displaying an error.
> Not really sure what the point of DoH is if networks can just silently override the setting.
You can explicitly configure the browser to insist on DoH whereupon it's your fault if that doesn't work. But the defaults changed only to try to do DoH if they can.
"If a user has chosen to manually enable DoH, the signal from the network will be ignored and the user’s preference will be honored."
Well I have already manually disabled and enabled it a few times and it still clearly always uses the fallback. So if disabling fallback is a setting, it's something beyond manually turning DoH on in the preference panel.
FWIW, I've only seen the issue you reported when resolving a Cloudflare hosted site through Cloudflare dns, with Firefox as the client. Refreshing multiple times seemed to work.
I haven't had the time to investigate it when it occurred; anecdata.
I have wondered if it's related to handing off from the CF balancer to the sites tls.
It seems like this is primarily a performance optimization, at least for now. One less round trip when navigating to a site by typing the ___domain name when that site redirects to HTTPS (and isn't on the HSTS preload list).
The info "is https available" is not secured either. The ISP can just block any packet on port 443 and force http that way. It would break links but wouldn't break people entering the address via the URL bar.
A real improvement in security would be Google caching the data, and either offering it via a custom API or just signing it and appending it to their 8.8.8.8 DNS responses. Per default, Chrome already sends the URL to Google as you type, you have to turn auto complete off if you don't want it to happen.
Right, as long as it falls back to HTTP, you don't really increase security. And if you have a side-channel like preloaded HSTS lists the change does not apply. So you're right, it just makes HTTPS sites load faster.
Hmm so I thought about it a little and I think the old way allows passive monitoring of the URL within the website while the new way requires active attacks to enable this.
We're not talking about preloaded HSTS. In such cases this change makes zero difference; Chrome already would have made the initial connection over HTTPS.
And it doesn't matter whether the legitimate server is refusing to serve plaintext HTTP if you're not talking to the legitimate server in the first place. The attacker can serve whatever they want.
Unfortunately, you can't be on the HSTS preload lists and have all newer browsers enforce HTTPS while still having an HTTP fallback for legacy systems. In general, the backwards compatibility story with HTTPS has been abysmal - it should never have been a new port or URL scheme in the first place.
Theoretically ISP can provide a transparent proxy to translate HTTPS to HTTP (but with some feature degradation). It could happen in restricted countries.
Funny thing, Linux users clearly lag behind in the adoption, from that graph. You can't even brush it off as ‘who even uses Chrome on Linux’, since it's just the share among whatever users Chrome has there. (Though perhaps those are some weird bozos.)
Come to think of it, these might be just devs testing their own non-production sites?
Also apparently Google supports the movement of barely readable text on the web.
As someone with a WordPress blog and a static personal site with zero security threats, these changes don't help me at all. They just force me to pay an additional $150 per year to my hosting provider or spend at least a full day figuring out how to reconfigure my SSL certificate to one of the cheap options.
This is simply because they told SEOs that HTTPS takes precedence and affects ranking - if you want mass adoption of anything, just tell a bunch of SEOs that rankings will be affected (AMP is one that has thankfully not won the fight).
Google passed it off as security but I cannot believe this to be the case when you see the shit that litters the Play Store.
We're finally getting there! Now just a few decades to go and we'll reverse the order to get
com.example
After which we'll wait a couple of decades to decide we don't need TLDs as ICANN is instead just given a free pass to print any amount of money they want without that silly distraction. Then we'll be at just:
example
which is just one step from the final solution which is that no URLs will be needed because the browser will just open up to Amazon who after final approval of their purchase of Google will at that point own not just the internet but the entire world.
Actually we should have switched to resource locators that are just plain strings ("ycombinator news threads 26558305"). It's what Google has been pushing everyone to do anyway (readable URLs), and what the centralization of the web led to (fb/twitter usernames). It's easy for people to parse and speak over the phone, and it would be a decentralized, natural revival of AOL keywords. It would also drop total Google searches by a few tens of percent.
It is the FQDN, but the .com.example notation specifically is generally called big-endian notation (it puts the most significant part first); the UK used to use this with JANET's NRS, as opposed to DNS.
We notate IP-addresses like that as well:
- (MSB) fe80::1 (LSB)
- (MSB) 10.0.0.1 (LSB)
However, while the TLD is the most significant tree-wise, it is probably not the most significant to the end-user, who is going to "news (in) ycombinator (in) com" instead of "com's ycombinator's news".
There's no link to more technical detail. What happens when the site I type in the URL bar doesn't support HTTPS? Will it error out? (with a timeout?) Or will it automatically fallback to trying HTTP? (In that case, could a MITM block HTTPS to force the browser to try to downgrade?)
EDIT: I see that the article says it will fall back, but Chrome Canary has options in chrome://flags, and it's not clear which option they picked.
> Omnibox - Use HTTPS as the default protocol for navigations
> Use HTTPS as the default protocol when the user types a URL without a protocol in the omnibox such as 'example.com'. Presently, such an entry navigates to http://example.com. When this feature is enabled, it will navigate to https://example.com if the HTTPS URL is available. If Chrome can't determine the availability of the HTTPS URL within the timeout, it will fall back to the HTTP URL.
The options are: Enabled, Enabled 3 second timeout, Enabled 10 second timeout, and Disabled.
I've found http://neverssl.com very helpful for captive portals. It does what you'd expect - hosts a HTTP-only page that allows captive portals to work correctly.
Since it's only ever HTTP, it sidesteps the certificate errors or HTTP downgrades that normal sites are hit with during captive portal interception.
I am curious what happens to captive portals as HTTPS adoption rises. Some OS's (Android, OSX) already detect captive portals and launch a lightweight webview.
Yep, I use neverssl.com, when I remember it. Though, mostly I use example.com, since that works without SSL as well. I guess now I'll need to remember to type http://example.com .
Based off my experience, more and more captive portals appear to be letting the requests the OS makes to captive.apple.com or whatever through but still try to present a captive portal to the user. I can guess as to the motivation, but as a user it's danged annoying.
It might still be an issue, but this is still a huge improvement. If the browser is going to pick a protocol on behalf of the user, it ought to choose the stronger protocol first.
If the browser chooses the stronger protocol it should force the user to opt-in to any fallback behavior which might downgrade security. The current behavior of the browser in most cases is if you enter an 'https' url and the request fails or the cert is invalid you get a failure or warning message. I'd like to see this behavior kept in this case. It communicates what the browser is doing instead of silently downgrading the protocol based upon fuzzy failure signals.
It's a shame, but I wish the insecure protocol name was not a prefix of the secure protocol name. It's so easy to miss the 's' and for things to just work for the wrong reason. I guess we have Netscape and Microsoft to blame for this one.
That is a stupid idea; forcing HTTPS down people's throats is a stupid idea generally speaking, but it wouldn't have been that bad if it hadn't been forced on many people by what is basically a monopoly at this point.
I really don’t get this opinion. The internet isn’t secure — the public internet is by definition an untrusted network assumed to be malicious and openly hostile. There is no safe way to use unencrypted connections on the public internet. None. Doesn’t matter if you are a good admin on your local network, doesn’t matter if you have a good ISP, you have zero guarantee the network path your packets will take and who has the opportunity to observe and modify them.
Look, I get it. It's no fun when the whole class is punished because bad actors exist. And it's no fun when someone pressures you into doing something for the benefit of others but not yourself. But the internet is really, really different from 20 years ago.
HTTPS is such a scam. It's an obvious ploy to conquer the last corners of the web not yet under corporate control. But apparently calling it "Let's Encrypt" instead of "Let's make your website technically dependent on a Google/Mozilla/Amazon/Facebook-controlled service" is enough to fool people.
It's totally obvious to me that, once HTTPS is mandatory, the next step will be that Let's Encrypt will stop supporting websites that disobey their "content policy" or whatever. It will follow the usual course. First outright illegal things. Then today's "deadly sins" like racism. Then unpopular politicians. And then, whatever they want.
You are literally making a random organization into the censor of the web.
While there are a finite number of trust providers in the HTTPS signed certificate model, "Let's Encrypt" isn't the only one.
But, yes, HTTPS is a trust-rooted security model. It's hypothetically possible to have every cert provider decide you're not trusted. That's kind of in the category of "It's hypothetically possible to have every DNS provider decide you're not trusted" though.
If this is a concern, the solution is probably to come up with an HTTPS modification that allows for decentralized trust, not to set up the average use case of the web to "The user is trusting every machine between their client and the server to be reading their traffic and not acting maliciously on it." Tradeoffs, right?
> What happens when the site I type in the URL bar doesn't support HTTPS?
The article says: "For sites that don’t yet support HTTPS, Chrome will fall back to HTTP when the HTTPS attempt fails."
Yes, a MITM could block this access. This would be an active attack and thus detectable, which is fine if you're the Great Firewall and your existence is government policy but likely to be a problem for some other types of attacker.
In the passive-attack case this is strictly better (previously a passive on-path attacker got to see every unprefixed URL typed into Chrome that didn't match an HSTS rule it knew; with this change they do not); in the active-attack case nothing changes.
"For sites that don’t yet support HTTPS, Chrome will fall back to HTTP when the HTTPS attempt fails."
I hope the slippery slope stops here though and HTTP will not be eradicated in browsers (in which case one would need corporate permission and approval to publish anything).
s_client or curl are not suitable workarounds for the masses ...
Huh. I didn't know that setting existed. And I make a habit of systematically going through the settings pages of every app I use (and in Chrome's case, also chrome://flags), to find out about things like this. It looks like that option exists only in the right-click menu, and not in chrome://settings. That's a problem!
You go through every Chrome setting/flag? That's very ... sporty. I mean, I'd like to do this as well, but Chrome has hundreds of settings and flags, so going through them takes serious time (some settings are fairly arcane "Temporarily unexpire M87 flags."). And they can be amended on every update, which might hit you weekly.
I agree! The protocol is not particularly interesting information, as long as it indicates secure vs. non-secure connection somehow. As long as it comes along when you copy the URL, it's fine.
The bad version of this trend is when you hide the path after the ___domain like Safari does. That's awful design, that's a very relevant part of the URL!
It's quite annoying when the https:// prefix is hidden but then appears in copy-paste. Quite often I'd like to run things like "host -t mx <paste>" or "whois <paste>" and then it also copies the invisible "https://" prefix which is incorrect for these use cases.
One thing that was quite annoying to me is the URL changing under my cursor on double-click if the protocol is hidden. However, I can see that editing the URL is a niche use case. Fair enough.
I hate that editing URLs is practically impossible in Chrome on iOS (not sure about Android or other iOS browsers). There seems to be no equivalent of arrow keys to navigate around inside the address bar. If I screen-mash enough I think I can sometimes get it to go to the very start or end of the URL, but anything in between is hopeless.
(Posting this partially in hopes that someone tells me how to do it to prove me wrong...)
Are there ways to get the keyboard to have cursor keys, or something similar?
I use the Microsoft Swiftkey keyboard on Android, and there's a setting which makes holding the space button activate gestures for up/down/left/right. (i.e. hold space and move small amounts left/right.)
If you have enough screen space, I think you can add real arrow keys.
But I'm not sure you can change the keyboard on iOS.
Wow -- life-changing, thanks! On the native iOS keyboard, hold down spacebar and (without lifting your thumb) swipe left/right to "scroll around" within a text field (at least in the Chrome address bar, but presumably anywhere). Amazing that after 5+ years using iOS I'm still discovering these hidden but basic features...
It's definitely got worse (on Android) because pressing the address bar now deletes the URL and you need to press the separate edit icon to get it back and modify it. Used to just put you straight into editing mode. But you can still touch to position the cursor, backspace to delete, etc. You can scroll the URL, too.
I agree. Making the protocol visible would confuse my parents less (I've recently seen them typing searches into the address bar but also pasting URLs into the search field on Google's website).
> Making the protocol visible would confuse my parents less
Presumably it's not accidental that Google used their chrome browser to make it easy to mix up urls and searches. That means more searches for them. To them the confusion is not a bug, it's a feature.
I feel like I'm asking the obvious, but.. is it that hard to mention a date when you're going to break a bunch of stuff, so those affected at least know how long they have to look for contingency plans? For those who don't know how long it takes for a Chrome version to move from dev (v90 is there now) to prod it would be nice to have an idea, is it a week/month/90 days?
Doing this helps in some cases against passive attackers, sidesteps the need for a redirect to send visitors to your secure site on first visit to have them pick up your HSTS but it doesn't offer any protection against an active attacker on first visits.
Firefox HTTPS mode gives you a (dismissable) interstitial if any site apparently doesn't do HTTPS, which is an opportunity to catch attacks, but less suitable for non-experts because they will find it hard to judge when to be surprised.
As a site owner, HSTS preload remains what you should do to protect visitors if you know you are going to do HTTPS.
Some day DANE and/or DPRIVE plus HTTPSVC will reliably ensure the visitors to any HTTPS site get HTTPS on contemporary browsers, we're years from that though.
i was amazed when i looked into it. brave did it, safari had it hidden in the developer menu, chrome: nope. i figured there was something to be gained for them.
ultimately there aren't too many sites you type out much of an endpoint for, but "old.reddit.com/r/ihaveembarrassingsecrets" or whatever is a big one.
Firefox offers HTTPS Mode, which converts all HTTP links to HTTPS first, A HREFs, stuff you type into the URL bar, everything, then if that fails it generates an interstitial page explaining what went wrong with a button to get the unencrypted HTTP site if that's available.
But that's optional (I wouldn't recommend it to anybody who doesn't seem clear on what HTTPS versus HTTP means for example) and far more invasive, though in exchange it delivers more practical security if you understand what's going on. I've enabled it, I have mentioned it to IT people I know socially, I wouldn't suggest my mother or sister try it.
This is cool, I get tripped up every day by accidentally navigating to HTTP sites just because I typed them into the browser, and there's literally no downside—if the site doesn't support HTTPS, the redirect is transparent to the user.
How does it behave when the site nominally supports https but only uses a self-signed cert? Many local network devices are like this. In most cases it might be better to fall back to http, but I am not sure.
I think what will happen is that visiting `vcap.me` will open `https://vcap.me`, which would fail with a cert error, so Chrome will automatically (without showing the error) open `http://vcap.me`, which would issue a redirect to `https://vcap.me`, which would fail with a cert error.
If you were navigating to `localhost`, Chrome would directly open `http://localhost` which would do the redirect.
It's unclear if Chrome is smart enough to know that `vcap.me` is a localhost service, but we can imagine a remote host which would behave the same way.
> For sites that don’t yet support HTTPS, Chrome will fall back to HTTP when the HTTPS attempt fails (including when there are certificate errors, such as name mismatch or untrusted self-signed certificate, or connection errors, such as DNS resolution failure).
The name mismatch is the key point for those captive wifi portals that respond on HTTPS with an invalid certificate.
Potential source of frustration: naked domains on http that upgrade to https after redirect to www or some other subdomain. It’s a problem to me now - if I name my naked .org site in GMail it assumes https. At some point I will have to host my own redirection service, just for this one issue.
Side note: it beats me why browsers couldn't agree on a way to specify at least the first redirect in DNS; no web server would be needed for that part.
You'd need to authenticate that redirect, so it isn't as easy as it looks. (something like DANE and DNSSEC)
The benefit doesn't seem big to me. You're already running a http server for the original ___domain, and an https server for the target ___domain, so extending the http server for the original ___domain to https doesn't sound like a big step.
I'm using AWS Elastic Beanstalk, and you're not supposed to point a naked ___domain at the load balancer. On reflection though, it's a problem Amazon could solve there.
None of this is super hard but it is annoying, some complexity that feels removable.
#1 thing I want for my security is a browser that makes the address bar text box very large for typing (and only when I am typing that address). I am tired of tiny address bars.
Also make it easy for me to cut and remove the path. Some bad sites don’t recover properly from a bad session or query argument
I've said it many times, but nobody seems concerned: Certificate authorities are being used as tools of censorship by oppressive regimes. Until a central-authority-free alternative exists, the move to HTTPS is bad for a free world.
Simple: The oppressive regime in question can deny you a certificate thereby censoring you. Some also require you to use a particular certificate, thereby creating the illusion of security while being able to intercept, interdict, or MiTM information any time they like or continually. Here's an article that touches upon it superficially from 2010, and also notes that private companies are already selling these capabilities to authoritarian governments: https://arstechnica.com/information-technology/2010/03/govts...
Is it better for a free world to have the traffic be unencrypted?
Feels like one head of the hydra is control of the cert-trust network, but another head is traffic-sniffing and monitoring one's online activity, yeah?
Please do not bring a straw man argument to the table. I clearly said "Until a central-authority-free alternative exists". Such alternatives are readily available and already used to secure e.g. ssh connections, but are not without drawbacks. One alternative that might be less likely to suffer from the same weaknesses as a certificate-free model is a distributed certificate authority that relies on a ledger like DNS.
Those are the same thing. The CA system is a distributed X.500 database. It's the most distributed database in the world since the entries that comprise it are the certs themselves.
Moving the infrastructure to DNS wouldn't change the nature of the system, just which entities are at the root. Hell, the CA system right now delegates authority to ___domain registrars and the DNS system.
The devcert tool (and its corresponding devcert-cli command-line interface) is very handy for creating a local root certificate authority that you control & your device trusts.
IP addresses and localhost are still exempt from these changes. For development purposes, nothing serious should change; localhost is already a secure context.
I'd very much watch out for using automated tools like these. Many browsers do not check certificate revocation, and this leaves your browser open to some extensive MitM attacks. If you use this, you should probably create a second profile (firefox -P to create and pick one) so your normal browsing won't be affected.
In my opinion, it's easier to just type "thisisunsafe" into the warning page once if you use it for developing. If you're demoing your application, putting it on a server with a Let's Encrypt cert shouldn't be too hard either.
I'm sure this has its uses, but I just can't figure out what they would be if you do not wish to take huge security risks.
Regarding the complaints about IOT devices and LAN local services: I wonder if it would be possible, and reasonable, to define some sort of LAN certificate authority protocol, which behaves a bit like Let's Encrypt, but only for services that are on LAN local subnets.
Normally something like your WiFi router would fulfil this role: local services would poll it to obtain signed certs, and browsers would poll it for a CA cert which they would apply as a trust root for LAN-local subnets only.
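The device side of such a protocol could look a lot like a stripped-down ACME enrollment. A sketch with an invented router endpoint and response format, using the Python 'cryptography' package for the key and CSR:

    import urllib.request
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    # generate a key and a CSR for the device's local name
    key = ec.generate_private_key(ec.SECP256R1())
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "thermostat.local")]))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("thermostat.local")]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    # hand the CSR to the router's (hypothetical) local CA endpoint
    req = urllib.request.Request(
        "http://router.local/ca/v1/sign",            # invented endpoint
        data=csr.public_bytes(serialization.Encoding.PEM),
        headers={"Content-Type": "application/x-pem-file"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        signed_cert_pem = resp.read()                 # PEM cert signed by the router's CA

    with open("/etc/device-ui/cert.pem", "wb") as f:
        f.write(signed_cert_pem)

The unsolved part is still the other half: getting browsers to trust the router's CA for LAN-local names only, which is exactly what there's no standard for today.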
I'm interested if that behavior will be the same when using web.dev.
Usually when I enter a site to test it there, it tells me to avoid redirects. I think HSTS would have also solved this, but our (managed) hosting provider does not offer it as a default, and doing it manually for the number of sites we have is not really practical. At least not for the sites that are already done.
All of .dev is covered by HSTS pre-loading. So even if you explicitly type http://web.dev/ you are going to actually navigate to https://web.dev/ because that's how HSTS is defined. If you want a site that isn't encrypted in browsers then an entire TLD which is specifically secured is the wrong place to build that site.
I think I explained myself badly. I'm entering sites INTO https://web.dev that we make at work. web.dev is basically Google Lighthouse and tests your website for basic performance, SEO, best practices and accessibility (a11y).
So for example I enter mycustomer.com and it tells me "avoid http redirects" because I didn't enter the https:// before.
HSTS is included in one of our packages, which also includes CSP settings and other security stuff, but barely anyone buys that.
I feel like something will be lost when the barrier to entry for the web goes from "host an HTML site on a ___domain" to "obtain a certificate, host an HTML site on a ___domain configured to use that certificate", but maybe it's not as bad as it sounds. I'm loath to let a random script modify my nginx config.
"Chrome will fall back to HTTP ... when there [is] ... DNS resolution failure" makes no sense to me. In what circumstance does DNS resolution succeed for HTTP after failing for HTTPS? DNS doesn't care what you're about to do with the IP address it provides.
Is there a way to get exactly this new Chrome address bar behavior in Firefox? I.e. I want Firefox to follow plain http links same as before, but if I type in example.com it should expand to https://example.com
AFAIK, Firefox has worked exactly like this for a long time. There is probably some way to disable that behavior, and when you type a site, it offers an http suggestion for you to reach easily, but the default is https.
Firefox 86 falls back on HTTP transparently (e.g. try it with sane-project.org). I'd like to disable that, but without also adding a (stateful, fingerprintable) confirmation step to follow HTTP links.
I've been prefixing https:// to fast-track this for as long as I can remember. I probably will still continue to do it though since Chrome isn't the only browser in the world.
There is this recurring joke (to put it nicely) in pop culture where some clueless person will spell out a URL as "H T T P COLON SLASH SLASH W W W DOT...".
Part of me now always goes "but we use https now..."
And here I am, thinking that this was already the case for the past few years ... Thinking about it, it's probably because "everyone" started using HSTS.
This is a great instance of https://xkcd.com/1172/ for me. "example.com" is the only ___domain name I intentionally load over HTTP in Chrome, so this change breaks my workflow.
Many for-pay wifi networks (e.g. on airplanes) are designed to intercept all HTTP requests from guest users, redirecting the browser to a login/signup page. Until you log in HTTPS is blocked, so you have to try to open a ___domain Chrome doesn't recognize as requiring HTTPS. There are fewer of these every year, so I use "example.com" because it autocompletes easily and will be around forever. Now there's no ___domain that works, and you have to type every character of "http://a" to get the same result.
That's not the issue. Example.com is just distinctive for supporting both http and https, whereas modern sites redirect you to the https version. Chrome remembers which domains support https (which is nearly all of them) for when the ___domain is typed naked, and apparently won't fall back to http if it recalls https working in the past. So, until now, domains like "example.com" would sometimes let you get to an airplane's login page when domains like "facebook.com" wouldn't.
Hey, take a look over the HN guidelines, specifically this one:
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
Absolutely. You need the HSTS-preload list so that even when HTTPS fails on the first visit to a HSTS-protected ___domain, the browser will still refuse to fall back to plain HTTP.
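For reference, the header a site has to serve over HTTPS (together with an HTTP-to-HTTPS redirect) before hstspreload.org will accept it looks roughly like this; the two-year max-age is just a commonly recommended value, the stated minimum is one year:

    # preload requirements: max-age of at least one year, plus the
    # includeSubDomains and preload directives
    HSTS_HEADER = ("Strict-Transport-Security",
                   "max-age=63072000; includeSubDomains; preload")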
What you've described is, of course, what Google wants in the long run. For corporations making money on the web HTTPS is required and it makes a lot of sense.
The problem is the web isn't just corporate sites involving money or private information. But if a browser, or individual, begins blocking HTTP sites, what they're effectively doing is saying, "Only sites approved by corporations should be visitable." This is a result of only corporations (no matter how non-profit or currently benevolent they are) being certificate authorities. A human person can't be a certificate authority. And that is a terrible blow to freedom on the web.
That's true, but I can still type in an IP address and communicate with any webserver I want on the entire internet.
I guess you don't remember in 2018 when Comodo revoked Sci-Hub's TLS certs under corporate political pressure. That style of revocation, combined with HTTPS-only browsers, is effectively a block that can't be bypassed. I am not saying that the DNS system cannot be used for a political attack. It obviously can, and it was used as such against Sci-Hub before they started attacking via the cert provider. What I am saying is that blocking HTTP in the browser makes the consequences much, much worse.
It greatly increases the incentives for revoking TLS certs for political reasons due to increased effectiveness. The cert authority doesn't even have to be malicious. All they have to do is be "law abiding" relative to some country with a bad set of laws.
Still not seeing how that's any different from DNS. I mean yes, obviously it's another possible point of failure. But I don't see MITM protection as being any less important than name resolution on the modern web. Seems no less reasonable for a site to break due to lack of MITM protection than for it to break due to failure of name resolution. Normal users aren't going to be manually looking up and navigating to IP addresses anymore than they're going to be manually installing TLS certificates.
I prefer that if the scheme is not entered, it is treated as a relative URL. (I have managed to customize Firefox to do this.)
I also entirely dislike the way "secure contexts" work. Whether those features are enabled should depend on whether the user has enabled them, not on whether the site uses HTTPS.
(I also think that a better web browser should be made, which would be mostly rewritten entirely rather than using the existing browsers, since they have so many problems.)
Did anyone else think: I want to be able to draw that kind of animation for architecture diagrams? No complex sequence diagrams, no numbering of the arrows, no confusion about arrow directions.
When the big push to HTTPS came around, I was all in favor of it. Now... I'm more skeptical. Not everything has to be HTTPS. And I've become aware that many of the sites I visit are HTTP only and will never become HTTPS because of their age, or the lack of technical ability of their owners. HTTPS also has the side effect of obsoleting older hardware for no real reason. I have devices that work perfectly fine, but can't be upgraded to the latest whiz-bang encryption du jour.
A vintage computer software archive doesn't need to be HTTPS. A local historical society doesn't need to be HTTPS. A color picker doesn't need to be HTTPS.
I still think HTTPS is a good thing, but I also see it as a way of marginalizing and eventually eliminating a very large portion of the content on the web. Much of it is the very content that made the web so popular in the first place.
I think I first became skeptical when Google said it would rank HTTP sites lower. This fits in with Google's general direction of making any content older than 4 years inaccessible, and creating its own information ecosystem in which to corral people. Google can now delist a huge swath of the internet and shrug its shoulders and say, "because security!"
I think a good compromise would be for browsers to make connecting to an HTTP web site a lot less scary. Yes, tell people the connection is old-fashioned, and may be public. But don't block the page and put up big red scary icons and make people enter the system password to view Aunt Harriet's Really Good Muffin Recipe. It just seems like the antithesis of what the web was built for.
Safari has been doing this for at least two years.
And I don't get what hardware age has to do with anything.
Not every browser gets upgraded to the latest HTTPS. There are millions of televisions, game consoles, older computers, and other devices that can only browse HTTP, or only support older versions of TLS. They will not be upgraded by their manufacturers. I don't think making those still-capable devices less functional is a good idea just because "technology moves on."
I just tried many http websites on Safari and am still not sure what you are talking about. They all work perfectly fine, without any warnings.
Then I will thank Apple for changing things and making them less scary already.
The fact is that this is exactly how Safari used to handle things. I know because I got many angry screenshots from C-level people in my company when some of our web sites started showing up that way due to a misconfiguration by the IT department.
Here is a picture of a similar warning. This is for an SSL error, but the ones Safari used to display for HTTP were nearly identical:
The point, I think, is that HTTP never changes, while HTTPS constantly evolves and deprecates hash functions, ciphers, and authorities. If your router's or printer's web management interface was written more than a couple of years ago, it will probably show up as insecure or even trigger a big warning screen.
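One way to see that drift in practice is to check what a given device will negotiate against a modern TLS stack; a default Python context typically refuses anything below TLS 1.2, which is exactly where a lot of older firmware falls over. A rough sketch, with the address as a placeholder for your own router or printer:

    import socket
    import ssl

    HOST = "192.0.2.1"  # placeholder: your router's or printer's address
    PORT = 443

    context = ssl.create_default_context()
    context.check_hostname = False       # embedded gear rarely has a valid cert
    context.verify_mode = ssl.CERT_NONE  # we only want to see what it negotiates

    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=HOST) as tls:
                print("negotiated:", tls.version(), tls.cipher())
    except OSError as exc:
        # Typical outcome for old firmware: no protocol or cipher overlap at all.
        print("handshake failed:", exc)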
All Chrome does when you visit an http site right now is put a subtle white warning icon and the words "Not Secure" next to the ___location bar, in lieu of the lock icon you get with https. There are no blocks, no big red scary icons, no password requests. The content on the page renders identically to a secure page.
Technology moves on, and encryption is important, even if there are some cases where it's not strictly vital. Sure, some recipe page isn't a big deal, but there are plenty of sites out there that accept personal information and still have to be dragged kicking and screaming into the present to secure that information. Sadly the only thing that's going to solve that problem is forcing https-only at some point in the future.
> All Chrome does when you visit an http site right now is put a subtle white warning icon and the words "Not Secure" next to the ___location bar
That's great. But Chrome isn't the only browser in the world.
My point was about browsers entirely blocking HTTP pages, putting up red scary icons and requiring a system password, behavior that currently exists in Safari, both desktop and mobile.
> Technology moves on
This is a lazy argument. And in my opinion, meaningless.
> encryption is important
You are correct. But it is not required on every web site any more than everyone needs to carry a gun into a library to look up recipes.
I like using http where appropriate and not wasting resources.
I publish a blog and there’s no need for https. Adding https just adds a little more effort and provides no benefit to the user.
I guess if you count the ISP not knowing, but Google knowing, that you’re visiting my blog, then that’s a reason. But that’s a user issue, not a server issue.
Practically, my host does all the cert stuff for me and it’s not hard. I just don’t like the gradual complexification of the web when there’s not a good reason.
Moving more stuff to SSL that doesn’t need to be just burns up extra compute.
I hope someone calculates the carbon footprint of SSLing all the stuff that doesn’t need SSL. While each action is tiny, there are trillions of CPU cycles wasted on encrypting stuff that doesn’t benefit from encryption.
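If anyone wants rough numbers, a crude place to start is comparing a bare TCP connect with a full TLS handshake to the same host. This is only a back-of-the-envelope Python sketch (the host is just an example, and it measures network round trips as much as CPU, so treat it as an upper bound on per-connection cost, not a measure of the crypto itself):

    import socket
    import ssl
    import time

    HOST, PORT = "example.com", 443  # example target
    ROUNDS = 5

    def timed(label, fn):
        start = time.perf_counter()
        for _ in range(ROUNDS):
            fn()
        elapsed = (time.perf_counter() - start) / ROUNDS
        print(f"{label}: {elapsed * 1000:.1f} ms per connection (avg of {ROUNDS})")

    def tcp_only():
        # Plain TCP connection, no encryption.
        with socket.create_connection((HOST, PORT), timeout=5):
            pass

    def tcp_plus_tls():
        # Same connection plus a full TLS handshake on top.
        context = ssl.create_default_context()
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=HOST):
                pass

    timed("plain TCP connect", tcp_only)
    timed("TCP + TLS handshake", tcp_plus_tls)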
You mean like someone throwing up a huge warning that this page is made by the devil himself, and unless you recite the right combination of holy words there is nothing you can do to access it? I ran into a few pages that were hijacked that way; some of them at least seemed to go to the expected content once I got rid of the https.
My comment was actually meant to be sarcastic and a bit over the top. The hijacker in this case would be the browser telling me that the page is not secure, and as far as I remember cert validity is enforced to the point where a user can't bypass the error in some cases. In my experience https has always been more of a hurdle when I was looking for some obscure information hidden away on barely maintained websites.
I remember when it used to be fun to do stuff like that, or reverse all the images in pages, etc., so I thought it was a reference to just general monkeying around with http traffic that won’t work with https.
I get maybe 20 hits a year so I would probably think it funny if someone was interested enough to do this.
No. It’s just static content, there’s no registration, and all usage is anonymous. I collect no data. Someone can MITM it if they wish, although there’s little motivation.
The risk of harm is that someone could try to misrepresent the content or something, but again, who has the motivation? And even if they did, it would eventually be repaired.