Looks like a case where there are tradeoffs to be made, but the people with authority over the decision have no incentive to consider one side of the trade.
Or maybe the endgame could be: creation of a centralized service that all web servers are required to be registered with and connected to at all times in order to receive their (frequently rotated) encryption keys. Controllers of said service then have kill switch control of any web service by simply withholding keys.
For extremely sensitive systems, I think a more logical endgame is 30 minutes or so. 30 seconds is practically continuous generation.
A semi-distributed (intercity) Kubernetes cluster can reasonably change its certificate chain every week, but it needs an HSM if it's done internally.
Otherwise, for a website, once or twice a year makes sense if you don't store anything snatch-worthy.
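For what it's worth, weekly (or even daily) rotation doesn't have to mean downtime. Here's a minimal Go sketch of the usual trick: a tls.Config.GetCertificate callback that hands out whatever cert was loaded last. The file paths and the SIGHUP suggestion are just assumptions about how your issuance tooling delivers the rotated files:

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "sync"
    )

    // certReloader hands out the most recently loaded certificate,
    // so a rotated cert/key pair is picked up without a restart.
    type certReloader struct {
        mu   sync.RWMutex
        cert *tls.Certificate
    }

    func (r *certReloader) reload(certFile, keyFile string) error {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return err
        }
        r.mu.Lock()
        r.cert = &cert
        r.mu.Unlock()
        return nil
    }

    func (r *certReloader) getCert(_ *tls.ClientHelloInfo) (*tls.Certificate, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        return r.cert, nil
    }

    func main() {
        r := &certReloader{}
        // "server.crt"/"server.key" are placeholder paths; point them
        // wherever your tooling drops the rotated files, and call
        // r.reload again (e.g. on SIGHUP) after each rotation.
        if err := r.reload("server.crt", "server.key"); err != nil {
            log.Fatal(err)
        }
        srv := &http.Server{
            Addr:      ":8443",
            TLSConfig: &tls.Config{GetCertificate: r.getCert},
        }
        // Empty paths are fine: the cert comes from GetCertificate.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }

Cluster tooling like cert-manager automates roughly this loop for you, with the HSM (if you have one) protecting the issuing key.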
Because most of the sites on the internet store much more sensitive information than the sites I gave as an example, which can afford one or two certificates a year.
90 days makes way more sense for the "average website" which handles members, has a back office exposed to the internet, and whatnot.
Give me examples of websites which don't have any kind of member system in place.
Forums? Nope. Blogging platforms? Nope. News sites? Nope. WordPress-powered personal page? Nope. Mailing lists with web-based management? Nope. They all have members.
What doesn't have members or users? Static webpages. How much of the web consists of completely static pages? A negligible amount.
So most of the sites have much more to protect than meets the eye.
Neglecting the independent web is exactly what led to it dying out and the Internet becoming a corporate, algorithm-driven analytics machine. Making it harder to maintain your own independent website, one that doesn't rely on any 3rd party to host or update, will just make fewer people bother.
I mean, a news site needs its journalists to log in. Your own personal WordPress needs a user for editing the site. The blog platform I use (mataroa) doesn't even have detailed statistics, but it serves many users, so it needs user accounts and support.
I'm not a web developer, and I don't do anything similar on my pages, blog posts, whatever, so I don't know.
The only non-predatory way to do this is to be honest and transparent and not pull tricks on people.
However, I think A/B testing can be used in a non-predatory way in UI testing, by measuring negative comments between two new versions, assuming that you genuinely don't know which version is better for the users.
1. Journalists shall be able to write new articles and publish them ASAP, possibly from remote locations.
2. Eyeball optimization: different titles, cutting summaries where they pique interest most, some other A/B testing (see the sketch after this list)... So you need a data structure which can be modified non-destructively and autonomously.
Plus many more things, possibly. I love static webpages as much as the next small-web person, but we have small-web, because the web is not "small" anymore.
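On the A/B point, the non-destructive part is easier than it sounds: the published article never changes, and the variant shown is just a pure function of who's looking. A minimal Go sketch (the experiment name, titles, and visitor IDs are all invented):

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // variantFor deterministically buckets a visitor into one of the
    // experiment's variants: the stored article is never modified,
    // the shown title is a pure function of (experiment, visitor).
    func variantFor(experiment, visitorID string, variants []string) string {
        h := fnv.New32a()
        h.Write([]byte(experiment + ":" + visitorID))
        return variants[h.Sum32()%uint32(len(variants))]
    }

    func main() {
        titles := []string{
            "Budget Passes After Marathon Session",
            "Lawmakers Clash in Late-Night Budget Vote",
        }
        // "headline-42" and the visitor IDs below are made up.
        for _, visitor := range []string{"alice", "bob", "carol"} {
            fmt.Println(visitor, "->", variantFor("headline-42", visitor, titles))
        }
    }

The same visitor always lands in the same bucket, so you can count clicks per variant server-side without storing any per-user state.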
I was thinking about this with my morning coffee... the asymptotic endgame would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?
But on a more serious note, can someone more familiar with these standards and groups explain the scope of TLS certificates they mean for these lifetime limits?
I assume only server certs, and not trust roots and intermediate signing certs, would get such short lifetimes? It would be a mind-boggling nightmare if they started requiring trust roots to be distributed and swapped out every few weeks to keep software functioning.
To my gen X internet pioneer eyes, all of these ideas seem like easily perverted steps towards some dystopian "everything is a subscription" access model...
> the asymptotic end game would be that every TLS connection requires an online handshake with Connection Authorities to validate the server identity synchronously, right?
The article notes this explicitly: the goal here is to reduce the number of online CA connections needed. Reducing certificate lifetimes is done explicitly with the goal of reducing the Web PKI's dependence on OCSP for revocation, which currently has the online behavior you're worried about here.
(There's no asymptotic benefit to extremely short-lived certificates: they'd be much harder to audit, and would be much harder to write scalable transparency schemes for. Something around a week is probably the sweet spot.)
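If you want to poke at this yourself, here's a small Go sketch that dials a server and prints the leaf certificate's validity window (example.com is just a stand-in host). The point of shorter lifetimes is that this window shrinks until a stolen key expires on its own, which is what lets the Web PKI lean less on online revocation checks:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Dial any TLS server and inspect the leaf certificate's
        // validity window; no OCSP round-trip is involved here.
        conn, err := tls.Dial("tcp", "example.com:443", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("NotBefore:", leaf.NotBefore)
        fmt.Println("NotAfter: ", leaf.NotAfter)
        fmt.Println("Lifetime: ", leaf.NotAfter.Sub(leaf.NotBefore))
    }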
I understand the optimization curve you're talking about. But my coffee and I think my answer is more accurate as the theoretical asymptote as you reduce certificate lifetimes... you can never really have a zero-lifetime certificate in a TLS connection, but you can reduce it to the handshake sequence necessary to establish the connection and its authenticated symmetric cipher.
"When they voiced objection, Captain Black replied that people who cared about security would not mind performing all the security theatre they had to. To anyone who questioned the effectiveness of the security theatre, he replied that people who really did owe allegiance to their employer would be proud to take performative actions as often as he forced them to. The more security theatre a person performed, the more secure he was; to Captain Black it was as simple as that."