
> without recourse

Doesn't the post say they'll fall back to http if the https attempt fails?

> For sites that don’t yet support HTTPS, Chrome will fall back to HTTP when the HTTPS attempt fails.

The only change here seems to be that, from the user's perspective, initial connections to http-only sites will be a bit slower (versus the opposite, which used to be true: initial connections to https-only sites were slower).




I'm talking about the general state of HTTPS implementation. If you develop an offline device which offers a web UI, and it happens to use any feature that is deemed to require a Secure Context, you're out of luck.

WebRTC is such a feature, but there are lots more, and they can change from one browser version to the next.

The players who are pushing so hard to shove HTTPS down our throats are simply closing their eyes and ignoring the use cases that are not interesting to them. The mandatory renewal timing is a good example: it used to be more than a year; now it is 90 days (and some would like to reduce it to mere weeks!) Absolutely great for the grand scheme of things and the global security of the Internet, but dismaying for lots of other use cases.


I don't actually see the problem. If you're on a local network, there's no practical way to deal with certificates, so use http. Chrome will fall back. Problem solved.

If http support ever gets truly removed, I will be very upset. But that hasn't happened, so what is there to complain about?


HTTP is effectively considered legacy by the big web actors these days. More and more APIs are HTTPS-only (often for good reasons) and the "insecure" warnings you get from using HTTP become more intrusive every year.

The trajectory is pretty clear: the long-term plan is to phase out HTTP completely. And I'm not against it, but I need a solution for LAN devices, and it doesn't exist at the moment, because the big web actors do everything in the cloud these days and they don't care about this use case.


I AM against it, because it puts more centralized censorship power in the hands of the certificate authority.

Also, it completely cuts out "legacy" devices, basically anything more than 5 years old.

The Web is once again splitting into AOLized mainstream and "indie underground" that you have to make an effort to access.


Who is "the certificate authority" you're referring to here?


The authority who grants you your SSL certificate. There is more than one out there, sure, but you can't do it without them. And ultimately, they all answer to the same authority above them: the browser maker who populates the root trust store.

So, to summarize: one more way for the browser maker to control what the user can and cannot access without jumping through hoops.


The OP means that in using https (and being forced to use https) you are also being forced into paying a 'third party' an annual fee just to get a valid certificate.

That 'third party' is one of the recognized 'certificate authorities'.

But the OP's point is that by going https, you don't have a choice: you have to pay the certificate tax.


Right, and Let's Encrypt doesn't solve the problem; it just kicks the can to DNS, which is globally unique and costs money. Communicating between your computer and any device that you supposedly own, without the slow, unnecessary, and increasingly intrusive permission of some cloud IoT stack, will become more and more difficult.


This is not true; you can set your host to trust a self-signed certificate without much difficulty. Check out this tool for example: https://github.com/FiloSottile/mkcert (prev discussion at https://news.ycombinator.com/item?id=17748208)


I would like to trust a given root CA for only a specific ___domain (and its subdomains).

E.g. *.int.mycorp.com, but not www.mybank.com.

Browsers don't let me do that; it's all or nothing. X.509 name constraints aren't great either, and they don't give me, the browser operator, the power.
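
For what it's worth, the constraint itself is expressible; e.g. Python's cryptography package can build the extension. A minimal sketch with placeholder names (whether a given browser actually honors it is exactly the problem):

    from cryptography import x509

    # Hypothetical: the constraint to attach (as a critical extension) to a
    # private root so it may only issue certs under .int.mycorp.com.
    name_constraints = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName(".int.mycorp.com")],
        excluded_subtrees=None,
    )
    print(name_constraints)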


Self-signing doesn't let the world access my website without some scary warning.


That’s irrelevant to this discussion about hosting sites on a LAN with no internet access.

If you need https on the public internet you need a trusted cert.


Don't think personal LAN, think e.g. industrial automation: Many sensible companies want modern sensor systems that provide REST APIs and so on, but don't want those to access the internet. The hosts in this case often are appliance-like devices from third parties.


But that’s my point, and many others’. Sure, we can self sign, but it’s useless for the WWW. You’re forced to pay up to one of the few certificate providers. Thankfully, Let’s Encrypt has made it free and easier, but it’s not a no-brainer.


How long do you think it would take someone who has never been to HN to figure that out?

I don't think they would even know the option exists.


Let's Encrypt provides a really good service.

I can recommend the Docker image made by linuxserver in particular [0]. Makes HTTPS a (tax-free) breeze.

[0] https://docs.linuxserver.io/general/swag


That's OK then, if that's all we have to do to run any devices inside our LAN/home network.

Want a NAS box for sharing family files/photos or some other IoT device at home? Just set yourself up some other device to run the Docker image, get yourself a certificate from Let's Encrypt, and then... install it on the NAS box? How does that happen?


Perfect time to radicalize the underground (say, by beginning to experiment with Gemini or other protocols); the mainstream, as usual, only knows how to follow.


Gemini requires TLS 1.2 or higher.


But it doesn't rely on CAs. It relies on TOFU (trust on first use).


I prefer HTTP :)


Let's Encrypt exists; your argument is moot.


Can you use it on a microcontroller in a home network?


The problem is that there is no good way to deal with certs on a local network, but the OP would like to be able to use HTTPS anyway; HTTP might be considered too insecure for their use case.


What I do is buy localme.xyz and get a wildcard cert via DNS validation. This way you get SSL for offline devices. But you need to update the cert periodically.


I wish there were a way to automate wildcard certs. At the moment I'm building a Python script that logs into my ___domain registrar's panel and updates DNS records.
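
For what it's worth, a minimal sketch of that kind of script, assuming a registrar with a plain REST API (the URL, token, and payload shape below are entirely made up; if your registrar has a certbot or acme.sh DNS plugin, that's even less work):

    import requests

    # Hypothetical registrar API; substitute the real endpoint and schema.
    API = "https://api.example-registrar.test/v1"
    TOKEN = "your-api-token-here"

    def set_acme_challenge(___domain: str, validation_token: str) -> None:
        """Upsert the _acme-challenge TXT record for the ACME DNS-01 challenge."""
        resp = requests.put(
            f"{API}/domains/{___domain}/records/_acme-challenge",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"type": "TXT", "ttl": 60, "value": validation_token},
            timeout=30,
        )
        resp.raise_for_status()

    # A certbot --manual-auth-hook script would call something like:
    # set_acme_challenge("example.com", "<validation token certbot provides>")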


Let's Encrypt supports wildcard certificates: https://community.letsencrypt.org/t/acme-v2-and-wildcard-cer...


If your ___domain provider's API sucks, or doesn't exist, or requires generating a password/key with more permissions than you're willing to give a script, look at acme-dns [1] and delegated DNS challenges:

https://github.com/joohoi/acme-dns
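
The nice part of that model is that you delegate only the challenge record to acme-dns, so the script's credentials can't touch the rest of your zone. The one-time setup is a single CNAME in your real zone, something like this (___domain and UUID are placeholders):

    _acme-challenge.example.com.  CNAME  d420c923-bbd7-4056-ab64-c3ca54c9b3cf.auth.acme-dns.io.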


Make your own CA, install on each computer, install certificates, voila.
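
As a sketch of how little code that takes, here's roughly that flow with Python's cryptography package: a 10-year self-signed root plus a leaf for a LAN host (names and lifetimes are illustrative; distributing the root to every client is the part that stays painful):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def make_cert(subject, issuer, pubkey, signing_key, days, is_ca, san=None):
        builder = (
            x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(issuer)
            .public_key(pubkey)
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=days))
            .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None), critical=True)
        )
        if san:  # browsers match against SubjectAlternativeName, not the CN
            builder = builder.add_extension(
                x509.SubjectAlternativeName([x509.DNSName(n) for n in san]), critical=False
            )
        return builder.sign(signing_key, hashes.SHA256())

    # 10-year self-signed root: this is the cert you install on each computer.
    root_key = ec.generate_private_key(ec.SECP256R1())
    root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "my home CA")])
    root = make_cert(root_name, root_name, root_key.public_key(), root_key,
                     days=3650, is_ca=True)

    # Leaf for a LAN host, signed by the root ("nas.home.lan" is a placeholder).
    leaf_key = ec.generate_private_key(ec.SECP256R1())
    leaf_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "nas.home.lan")])
    leaf = make_cert(leaf_name, root_name, leaf_key.public_key(), root_key,
                     days=397, is_ca=False, san=["nas.home.lan"])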


Telling your clients to install your certificate in their computer/browser store is not very practical. And they will need to do that regularly.


It shouldn’t be practical, that’s by design. Imagine if every captive portal had you install their root certificate to access the WiFi, with just the click of a button.


Not regularly; my root is valid for 10 years.


Repeat every 3 months or whenever the root certs expire.


3 months? I must have updated Firefox / Discord / VS Code / etc. about a hundred times in the last 3 months. Plenty of chances for them to add renewed SSL whatevers inside one of the updates.


> 3 months? I must have updated Firefox / Discord / VS Code / etc.

I think this state of affairs is nuts, with the exception of Firefox, because web browsers have an inordinate number of security issues to contend with.


And other programs don't?


An instant messaging client shouldn’t be executing arbitrary remote code, no.


It's not really possible to prevent that. E.g. a well-crafted image can easily trigger an RCE on some older versions of Android: https://nakedsecurity.sophos.com/2019/02/08/android-vulnerab...

Issues like this exist at all layers of the stack, so anything touching the internet needs regular security patches.


I agree completely. But, I also think that in most cases, if a simplistic piece of software like an IM app needs a security patch every three months, regularly, it's a sign the attack surface is too large.


Why would the certs you create for this purpose be made to expire?


It needs to expire within 397 days, because otherwise the CA will not be valid, even if it is marked as trusted. https://www.zdnet.com/article/google-wants-to-reduce-lifespa...

edit: a word


The article you linked to is kind of confused and I'm not sure I blame them. This stuff is really complex!

According to the proposal[0], leaf certificates are prohibited from being signed with a validity window of more than 397 days by a CA/B[1]-compliant certificate authority. This is very VERY different from the cert not being valid. It means that a CA could absolutely make you a certificate that violates these rules. If a CA signed a certificate with a longer window, they would risk having their root CA removed from the CA/B trust store, which would make their root certificate pretty much worthless.

To validate this, you can look at the CA certificates that Google has[2] that are set to expire in 2036 (scroll down to "Download CA certificates" and expand the "Root CAs" section) several of which have been issued since that CA/B governance change.

As of right now, as far as I know, Chrome will continue to trust certificates that are signed with a larger window. I've not heard anything about browsers enforcing validity windows or anything like that, but would be delighted to find out the ways that I'm wrong if you can point me to a link.

Further, your home made root certificate will almost certainly not be accepted by CA/B into their trust store (and it sounds like you wouldn't want that) which means you're not bound by their governance. Feel free to issue yourself a certificate that lasts 1000 years and certifies that you're made out of marshmallows or whatever you want. As long as you install the public part of the CA into your devices it'll work great and your phone/laptop/whatever will be 100% sure you're made out of puffed sugar.

I guess I have to disclose that I'm an xoogler who worked on certificate issuance infrastructure and that this is my opinion, that my opinions are bad and I should feel bad :zoidberg:.

[0] https://github.com/cabforum/servercert/pull/138/commits/2b06... [1] https://en.wikipedia.org/wiki/CA/Browser_Forum [2] https://pki.goog/repository/


HTTP does not solve the problem if you still want your traffic encrypted in transit.


Yeah, I think we need a browser that isn't developed by companies with vested interests in having all your traffic go to them...


Then again I think Google would do just fine even if Firefox was the only browser.


Self-signed certificates seem reasonable in this context - unless I’m missing something.


They might to you, but the browser doesn't agree. It will scream with all its force to all your users that accessing that product is a really, really dangerous idea.


You can set up the users' machines so that they trust your certificate.


I have tried to do just that but ran into all kinds of difficulties:

1. Overhead: I have 5 devices of my own, 3 of my wife's, and a smart TV. Setting all this up takes a lot of time, even if it works fine.

2. What about visitors to my home, that I want to give access? They need the cert as well together with lengthy instructions on how to install it.

3. How do I even install certs on an iPhone?

4. Firefox uses its own certificate database -- one per profile. So I'll have to install certs in Firefox AND on the system (for e.g. Chrome to find them).

5. All these steps need to be repeated every year (90 days?!) depending on the cert expiration period.

Eventually I just gave up on this. It's not practical. There needs to be a better solution.


I bought a ___domain for use on my local network. I use Let's Encrypt for free certs, and I have multiple subdomains hosted under it. It works very well and wasn't that hard to set up. It's actually better organized and easier to use than my old system, since I had to take the extra step up front of organizing it under a ___domain.


I am as upset as you about this, and I cancelled IoT-related projects because of it.

But regarding #3, "How do I even install certs on an iPhone":

AFAIK (though I've never done it) you use a configuration profile

https://developer.apple.com/documentation/devicemanagement/c...

https://support.apple.com/guide/deployment-reference-ios/cer...


https://news.ycombinator.com/item?id=17748208

I found the steps described in https://github.com/FiloSottile/mkcert reasonable to follow. It describes an iOS workflow too.


$15 a year for a ___domain, throw traffic through local [split?] DNS and Traefik with the Let's Encrypt DNS challenge, and call it a day? I have over 25 internal domains & services with 25 certs auto-renewing and no one can tell - it just works, and it's easier than self-signing certs and loading them into whatever rando service or device you're trying to secure.


Split DNS is dying - DoH is sorting that. Sure, canary domains exist, but they won't forever.


I'm not sure I follow - what does split DNS have to do with DoH? I don't want my internal DNS addresses public (there's no need, plus security), and for some addresses I have different IPs internal vs. external.


Browsers send DNS queries to an external provider like Google, rather than to the network-provided server, which has the internal addresses (and which may override external addresses for various reasons).


Split DNS will never die; killing it would break far too many internal corp environments where there are no public DNS entries.


I have been looking at using MDM for my iPhone so I can install trusted root certs and require it to be on VPN when not using a specific list of WiFi networks.

It is a hassle, and as far as I could see it requires resetting the device; I'm not sure I can restore my backup over it and retain both.


If you keep your CA secure, no reason that you can't set the expiration of the root cert to something like 10 years.


Will browsers accept that?


Browsers accept a root CA with a long lifetime. Certs signed by CAs installed by the user or admin are also allowed long lifetimes (and probably will be for a while).


Maybe it is a dangerous idea. You could be snooping on them for all they know. A little truth never hurts.


There is an important difference between (A) trusting "just this one" cert for a specific reason, and (B) installing a root cert that is able to impersonate any server.

It ought to be practical to do (A) without doing (B), but due to a variety of deep human and technical problems, it isn't.


> initial connections to http-only sites will be a bit slower (vs. the opposite which used to be true: initial connections to https-only sites were slower).

More than a bit. For HTTPS-only sites, the site could serve a stub HTTP endpoint on port 80 that redirects to HTTPS. The redirect adds maybe a few milliseconds up to a few seconds (worst case) of latency.
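
Just as an illustration, that port-80 stub really is trivial - here's a complete one using Python's standard library (a sketch; real sites usually do this with a couple of lines of web-server config):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        """Answer every plain-HTTP request with a redirect to the HTTPS origin."""
        def do_GET(self):
            host = self.headers.get("Host", "")
            self.send_response(301)
            self.send_header("Location", f"https://{host}{self.path}")
            self.end_headers()
        do_HEAD = do_GET

    # Binding port 80 needs elevated privileges on most systems.
    HTTPServer(("", 80), RedirectToHTTPS).serve_forever()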

HTTP-only sites, on the other hand, can't do an HTTPS stub as easily, as the primary reason you'd want HTTP-only is probably that you can't, or don't want to, deal with certificate management - and if you can set up a redirect from HTTPS to HTTP, you might as well go full HTTPS.

So the only option for HTTP-only is to not open port 443 at all - meaning Chrome has to wait out the 30 seconds (!) network timeout until it can try the fallback. So pure-HTTP sites would become a lot more unpleasant to use.

(A site might be able to cut this short by actively refusing the TCP handshake and sending an RST packet - or by opening and immediately closing the connection. I don't know how Chrome would react to this. In any case, that's likely not what HTTP-only sites in the wild are doing, so they'll need to update their software - at which point, they might just as well spend the effort to switch to HTTPS)


You can simply send an ICMP reject message, which should direct the browser to immediately try any fallbacks or other hosts.

Timeouts occur when you incorrectly configure your firewall to drop packets instead of rejecting them.
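
The difference is easy to observe from the client side - a small Python sketch that distinguishes the two cases (the address below is a TEST-NET placeholder; point it at whatever you're testing):

    import socket, time

    def probe(host: str, port: int, timeout: float = 5.0) -> str:
        """Report whether a port is open, rejected, or silently dropped."""
        start = time.monotonic()
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return "open"
        except ConnectionRefusedError:
            # RST or ICMP unreachable: clients can fail over immediately.
            return f"rejected after {time.monotonic() - start:.2f}s"
        except socket.timeout:
            # Dropped packets: nothing to do but wait out the full timeout.
            return f"dropped, timed out after {time.monotonic() - start:.2f}s"

    print(probe("192.0.2.1", 443))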


True, that would work as well. Though I believe it has been recommended as a best practice to simply drop packets so as not to help port scanners.


I don't worry about port scanners. If your infrastructure becomes less secure because of a port scanner, it's not very well secured. Sending a REJECT is cheap, offers no DDoS opportunity, and helps browsers and other apps fail over fast.

The recommended practice I've heard everywhere is that an ICMP REJECT provides no opportunities an attacker wouldn't have with 10 minutes of extra time (modern port scanners can be set aggressive enough that REJECT vs. DROP doesn't matter if they have a known open port from which to obtain an ACCEPT timing baseline).


Yeah, that absolutely makes sense - as I said, a site could also avoid the timeout by sending a TCP RST packet. My point was more that I believe not many sites are doing any of this, since in the past the "best-practice" firewall configuration was to be as silent as possible.


Being as silent as possible has only two effects: an attacker takes 2 seconds longer, and you significantly degrade the network experience of all your users. It hasn't been best practice in a while (outside of greybeard firewall-admin circles), i.e. almost a decade at this point. Anyone who hasn't figured that out, and whose knowledge is a decade out of date, should really consider not doing network admin.


Hmm, OK. It might be that my knowledge was not really up to date there. For a change, I'd really appreciate it if network protocols designed for common benefit (such as ICMP) were not discouraged in the name of security. In this case, sorry for spreading FUD; that wasn't my intention.



