Serving from a space-based CDN means at least 2x better ping than Starlink, for content the CDN serves directly (rough numbers sketched below). Then you upgrade to a space cloud and serve even more content with fewer hops.
Being an ISP, CDN, cloud provider, and content provider all at once gives serious advantages. It's a great way to outcompete rivals and collect some hefty fines at some point.
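A back-of-the-envelope sketch of the ping claim. The 550 km altitude roughly matches Starlink's main shell; the 1000 km of extra terrestrial path from gateway to origin server is an arbitrary assumption, and queuing/processing delay is ignored:

    C_KM_S = 299_792.458  # speed of light in vacuum, km/s

    def one_way_ms(km):
        # one-way propagation delay in milliseconds
        return km / C_KM_S * 1000

    LEO_KM = 550      # assumed orbital altitude (~Starlink's main shell)
    GROUND_KM = 1000  # assumed gateway-to-origin terrestrial distance

    # Bent pipe: user -> satellite -> gateway -> origin, then back again
    bent_pipe_rtt = 2 * (one_way_ms(LEO_KM) * 2 + one_way_ms(GROUND_KM))
    # Onboard CDN: user -> satellite and back, content served from orbit
    onboard_rtt = 2 * one_way_ms(LEO_KM)

    print(f"bent pipe ~{bent_pipe_rtt:.1f} ms, onboard CDN ~{onboard_rtt:.1f} ms")

With those assumptions the bent-pipe round trip comes out around 14 ms versus under 4 ms for content served from orbit, which is where the "at least 2x" intuition comes from.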
My first job was exactly that: selling Windows apps written in Delphi. I joined a new team working on .NET Windows apps, and we had an army of people clicking through the UI all day long.
They maintained their "test plan" in a piece of custom software where they could report failures.
TBH, it was well done for what it was, but it really called for automation and lacked unit testing.
I'm forced to use a custom KV store on my current project. That POS has a custom DSL, which can only be imported through a Swing UI by clicking five buttons. Also, the UI is designed for 1024px-wide screens, so everything is tiny on my 4K monitor.
You now have to build and self-host a complete CA/PKI.
Or request a certificate over the public internet for an internal service. Your hostname must be exposed to the web and will be publicly visible in Certificate Transparency logs.
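For a sense of scale, here's a minimal sketch of the self-hosted option using Python's cryptography package: generate a root CA and sign one leaf certificate for an internal host. The names and lifetimes are made-up placeholders, and a real PKI also needs secure key storage, root distribution to clients, revocation, and rotation.

    from datetime import datetime, timedelta, timezone
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    now = datetime.now(timezone.utc)

    # Self-signed root CA (subject == issuer, CA basic constraint set)
    ca_key = ec.generate_private_key(ec.SECP256R1())
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Internal Root CA")])
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name)
        .issuer_name(ca_name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    # Leaf certificate for an internal host, signed by the root
    # ("service.internal" is a placeholder hostname)
    leaf_key = ec.generate_private_key(ec.SECP256R1())
    leaf_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "service.internal")]))
        .issuer_name(ca_name)
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=90))
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("service.internal")]),
            critical=False,
        )
        .sign(ca_key, hashes.SHA256())
    )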
> Or request a certificate over the public internet for an internal service. Your hostname must be exposed to the web and will be publicly visible in Certificate Transparency logs.
That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal ___domain names would not actually leak any information of value.
It's a real shame. OCSP with Must-Staple seemed like the perfect solution to this; it just never got widespread support.
I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.
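For context, Must-Staple is just the X.509 TLS Feature extension from RFC 7633 asserting status_request, so checking whether a certificate carries it takes a few lines. A sketch with Python's cryptography package:

    from cryptography import x509

    def has_must_staple(pem_bytes: bytes) -> bool:
        """True if the cert carries the TLS Feature (Must-Staple,
        RFC 7633) extension requesting OCSP status_request."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            ext = cert.extensions.get_extension_for_class(x509.TLSFeature)
        except x509.ExtensionNotFound:
            return False
        return x509.TLSFeatureType.status_request in ext.value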
Also, Certificate Transparency is moving to a new standard (Sunlight CT) with effectively immediate merges. Google requires the maximum merge delay to be 1 minute or less, but they've said on Google Groups that they expect merges to be much faster in practice.
Interesting, and definitely something platforms must take into consideration.
Now, back to the post: implementing a custom cache is not what Netlify is strongly complaining about. They are mostly asking for some documentation and reasonably stable APIs. Other frameworks seem to provide that.
I think that in practice, in their randomized tests, almost all samples were above the recommended threshold, so you can skip the test, save the money, and assume it's not going to be good.
> In the old days you had to run a script or install a package to hook into their monitoring... but with IPMI et al. being standard, they don't need anything from you to do their job
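To illustrate the out-of-band part: the BMC answers IPMI-over-LAN by itself, so a monitoring box can poll sensor readings with nothing installed on the host OS. A sketch that shells out to the standard ipmitool CLI (hostname and credentials are placeholders):

    import subprocess

    def read_ipmi_sensors(host: str, user: str, password: str) -> str:
        """Read sensor data records from a BMC over the network.
        The monitored machine's OS is never involved."""
        cmd = [
            "ipmitool", "-I", "lanplus",   # IPMI v2.0 over LAN
            "-H", host, "-U", user, "-P", password,
            "sdr", "list",                 # dump the sensor data repository
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    print(read_ipmi_sensors("bmc.example.internal", "monitor", "secret"))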