Even worse, there has been at least one "security program" that installed its own CA and went on to prevent any further modification to the CA list.
It was probably meant to prevent malware from adding its own CAs. In practice, though, it also stopped Windows from keeping its official CA list up to date.
In October 2021, the root certificate used by Let's Encrypt expired. All existing certs were cross-signed by another broadly supported (but more recently included) CA, so this should have been a non-issue on any reasonably up-to-date device. Except that a lot of South Korean PCs had had their CA lists frozen for several years. Suddenly, random people all over the country were unable to connect to websites using Let's Encrypt despite being on the latest version of Windows 10. It was nearly impossible for ordinary users to track down the offending program and uninstall it, and there was no guarantee that the CA list would be restored upon uninstallation. A lot of website owners just switched to ZeroSSL or some other CA because of this clusterfsck.
Unfortunately there isn't a definitive source, only a bit of noise in the blogs for a few days. AFAIK it was difficult to pinpoint exactly which version of which program was the culprit, since people aren't free to install an arbitrary version of a government-mandated security program. Most website owners just got tired of the bullshit and bought a cheap Sectigo cert instead.
But the symptoms were very clear. Some program, at some point before ISRG Root X1 became included in Windows, had set the registry value HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot\DisableRootAutoUpdate to 1, when it should have been 0 by default.
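If you suspect you're affected, a quick way to check is to read that value; here's a minimal Python sketch (the key path and value name are the ones described above, everything else is illustrative):

    import winreg

    # Check whether automatic root certificate updates have been disabled.
    KEY_PATH = r"SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot"

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "DisableRootAutoUpdate")
            print("DisableRootAutoUpdate =", value)  # 1 means auto-update is blocked
    except FileNotFoundError:
        # Key or value absent is the default: root auto-update is enabled.
        print("DisableRootAutoUpdate not set (default: updates enabled)")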
I remember having seen a name being mentioned, but can't find the name now. The program was probably from a healthcare and/or welfare related website, since most of the complaints that came across my desk were from physicians, pharmacists, and welfare workers. No complaints at all from casual shoppers and bank users.
Why is it even possible for an application to install root certificates that other applications have to accept? An individual application accepting its own certificates is one thing (as Firefox does), but I can't think of a good reason why apps would need to modify the OS list of certs.
There are semi-legitimate use cases. One is intranets, where you might want HTTPS on non-public resources. More commonly, however, the company installs its own root CA so that it can monitor all outgoing traffic. The other use case is antivirus applications – same reasoning here, they want to monitor all traffic.
And: yes, adding this certificate should normally be a user-initiated action. There is, however, no way of preventing applications from automating it. E.g. Firefox doesn't have an API to install root CAs, so these applications package up NSS Tools in their installer, search for Firefox profiles during installation and add their root CA to any certificate database they can find.
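Mechanically, that amounts to something like this sketch, assuming NSS's certutil is bundled with the installer (the file name, CA nickname and profile path are placeholders):

    import glob
    import os
    import subprocess

    CA_FILE = "vendor-root.pem"  # the vendor's root CA, shipped in the installer

    # Find every Firefox profile's certificate database and add the CA to it.
    for db in glob.glob(os.path.expanduser("~/.mozilla/firefox/*/cert9.db")):
        profile_dir = os.path.dirname(db)
        subprocess.run(
            ["certutil", "-A",        # add a certificate to the database
             "-n", "Vendor Root CA",  # nickname shown in the certificate manager
             "-t", "C,,",             # trust flags: trusted CA for TLS servers
             "-i", CA_FILE,
             "-d", "sql:" + profile_dir],
            check=True,
        )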
On a somewhat (un)related note: What has been the reception, if any, from the Korean side in regards to your research? I've lived in Korea on and off for around ten years, and their online 'security' has always both bothered and worried me. It's nice to see someone actually opening up this horrendous and intrusive mess of software and investigating it.
On the individual level, the reception is remarkably positive. I’ve had lots of people thank me for this research, including people working for government agencies.
The news coverage is mixed. Some articles are positive about this and are asking about how to improve this sad state of affairs. Others are parroting statements from the affected companies: not actually bad, difficult to exploit, misunderstanding of the domestic market.
And as to politics, it’s too early to tell I think. This definitely generated much attention. Whether all this attention will actually lead somewhere is impossible to tell at this point.
That's great to hear. Hopefully your Korean colleagues will help bring awareness to this issue as well. I imagine it would probably be hard for politicians to ignore or argue against the consensus of the top minds of the country.
HTTPS on non-public resources doesn't need to use DIY certs, normal certs work fine. If you are afraid of hostnames showing up in CT logs, you can use wildcard certs.
Also TLS traffic inspection and antivirus can be done at endpoints.
>More commonly the company installs their own root CA however so that they can monitor all outgoing traffic. The other use case are antivirus applications – same reasoning here, they want to monitor all traffic.
How does this work with HSTS? There's a growing number of websites that will not load unless their legitimate certificate is present.
You're confusing HSTS with HPKP (key pinning). HSTS only mandates HTTPS; it never addresses which certificate the server uses. HPKP is the standard detailing certificate pinning (ensuring that only the correct certs are accepted), but browsers ultimately decided against its implementation because of the difficulties associated with a hacked site being effectively ransomed by sending an HPKP header with attacker-controlled certs. (See the header examples after the list below.)
Edit: since it might not be obvious why HPKP can ransom a site, consider this case:
0. The owner of the website doesn't know about HPKP or decides against implementing it due to its burden.
1. Either the web server, the DNS service or the BGP routing for the IP address of the server is hacked. The attacker now controls the site.
2. The attacker then issues a new certificate. Since they control the keys, they can send an HPKP header that pins only the keys the attacker controls.
3. Unknowing users visit the site. The site can either be still active (web server hack) or proxied back to the legitimate site (DNS or BGP hijack). The HPKP pins are remembered.
4. The owner now realises that their site is hacked and tries to restore it. Let's assume they have revoked their old cert and issued a new and shiny one, which was never in the HPKP header in the first place.
5. Users visit the supposedly now-restored site, but instead of successfully connecting it errors out with an HPKP error. The owner can't do anything but migrate to a new ___domain name.
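For reference, the two headers look roughly like this (the pin value is a placeholder):

    Strict-Transport-Security: max-age=31536000; includeSubDomains

    Public-Key-Pins: pin-sha256="bEJs2x+attacker+key+hash+placeholder="; max-age=5184000

HSTS only says "keep using HTTPS"; the (now-deprecated) HPKP header tells the browser to remember specific public keys, which is exactly what the attacker abuses in step 2.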
Thank you for the clarification. As a site operator, how should we ensure that a visitor's connection cannot be MitM'd? Is this why some apps only use their own internal trusted certificate list, in lieu of offering a webpage?
Can't they inform the HPKP list provider to revoke the old key after they regain control of the ___domain?
HPKP isn't a centrally managed list, it's remembered by clients who've received HPKP headers when visiting your site (or in this case, the attacker controlled site at your ___domain).
As dwattttt said, it's not a central list at all. Having a central list defeats the purpose of HPKP (since governments could just force them to add another authorized key, and even if they only had remove-only policies, I'm pretty sure that stripping HPKP defeats its own purpose).
It is frustrating that the ecosystem remains the way it is. I suppose mostly to support internal self-signed certificates where you don't want all your internal hostnames being exposed via certificate transparency.
And, also, the crappy corporate monitoring MITM industry that desperately feels the need to spy on employees.
It is a complex mess, where platforms (android, iphone, windows, linux, etc), languages (python, java, node, etc), browsers, web servers, and other pieces all handle client and server certificates in completely different ways.
An application doesn't even need to use the system's root CAs; it can ship its own. The problem starts when you try to make the system browser part of your app.
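As a sketch of what "ship its own" looks like in practice, here's an application trusting only its bundled root CA instead of the system store (the file name and URL are placeholders):

    import ssl
    import urllib.request

    # Trust only the application's bundled root CA, not the system store.
    ctx = ssl.create_default_context(cafile="corp-root.pem")

    with urllib.request.urlopen("https://intranet.example.com/", context=ctx) as resp:
        print(resp.status)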
I'm talking about secure networks, where all traffic has to be monitored to ensure nothing is accidentally leaked, that malware doesn't easily spread, or that proxies aren't used to circumvent filtering. MITM devices configured by the user are very much friendly actors, and an important component of defense in depth.
To be clear, the way these normally work is that they have their own root CA that devices participating on the network add to their trust stores voluntarily.
What about all the people who get their root certificates from Mozilla? The system root certificates should not be tied to any single application, and yet how many people get their system root certificates from a single application developer. A browser developer. Go figure.
> The system root certificates should not be tied to any single application and yet how many people get their system root certificates from a single application developer.
What alternative do you propose? The most common alternative is OS vendors, who don't always respect the CA/B guidelines (which, overall, have been a positive force for CA ecosystem change).
As far as points of failure/dependency go, Mozilla is not an unreasonable one.
Why not obtain certificates from their sources instead of a third party.
Maybe intermediaries ("middlemen") are trustworthy, and using a third party is not a single point of failure. Maybe people love the convenience. Maybe it is just laziness. Who knows.
(I suspect the OS vendors may in some cases use the Mozilla bundle.)
Personally I find that many of the certificates in the Mozilla bundle or in browsers are ones I never use. I certainly do not need them all. Sometimes when experimenting with TLS I download root certificates from the companies that provide them. It's certainly possible to get them from their source instead of Mozilla.
At least with system certificates, the user can remove the ones she does not want. With certificates included in browsers, the user would have to edit the source code and re-compile. The so-called "modern" browsers are extremely cumbersome in that regard. Huge size and slow, resource-intensive compilation.
And for some of the popular "modern" browsers modification and re-compilation is not even possible because the source code is unavailable.
> Why not obtain certificates from their sources instead of a third party.
You can't do this sustainably. We're talking about hundreds of certificates that get cross-signed and rotated on varying bases.
Nothing about this boils down to laziness: CA and bundle management is very difficult. Mozilla does a good job given the complexity, and arguably does a better job (including on perceived conflicts of interest) than anybody else who could be tasked with the responsibility.
What does "You" refer to in this comment. And what does "sustainably" mean. Sustainable by who. And for what purpose. Every computer user is different and each may have different needs.
“You” means an end user, and “sustainably” in this context means “for ordinary Internet usage.”
If you want to maintain your own CA bundle, absolutely nothing is stopping you from doing so. But it would not be reasonable of us to expect ordinary users, including people who just want to connect to their banks securely, to do so. And even if we were to make such an unreasonable imposition, it’s not clear that it actually improves their security posture in any way.
Agreed. The point I was raising originally is why other options besides sourcing certificates from third parties are not considered. Using a Mozilla bundle is one option. Relying on hardcoded certificates in a web browser or other application is another option. IMO, these are not the only options. The "user" should have a choice.
With respect to computers and the internet, there is substantial history of problems with third party intermediaries. Deliberately excluding, or even just failing to recognise, the option for a user to eliminate a third party intermediary is highly suspect given that history, IMHO.
At least we can say that South Korean web security has moved on a little bit since the truly dark days of SEED.
> In the late 1990s, the Korea Internet & Security Agency developed its own 128-bit symmetric block cipher named SEED and used ActiveX to mount it in web browsers. This soon became a domestic standard, and the country's Financial Supervisory Service used the technology as a security screening standard. ActiveX spread rapidly in Korea. In 2000, export restrictions were lifted, allowing the use of full-strength SSL anywhere in the world. Most web browsers and national e-commerce systems adopted this technology, while Korea continued to use SEED and ActiveX.
I lived in Korea in 2008-2010 and can confirm the ActiveX thing was still in widespread use then. It was a pain because every website effectively only worked in IE, meanwhile I had a Mac and couldn't log into any secure websites such as my local bank account from it.
What is missing here: localhost has been a trusted origin for at least 5 years. You can call localhost without HTTPS from all major browsers now without any cross-origin warnings.
Yes, that’s what I also assume. I didn’t bother verifying however that mixed content warnings are generally not an issue with localhost in any browser that South Korea cares about (e.g. Internet cough Explorer). Keep in mind that they have some rather adventurous ways to access the local web server, such as JSONP or communicating by posting to a frame.
The localhost issue is very interesting. Of course, there's no chance that someone is impersonating localhost, but there is a chance that someone is eavesdropping on it (although this implies a level of privilege that may make encryption moot). Should browsers just accept self-signed certs for localhost on the assumption that you can't actually impersonate it? Maybe. I used to run internal PKI for a large company and "can I get a cert for localhost" was one of the top 3 arguments I used to find myself in (for the record, the answer was always no).
Interception of localhost traffic is in fact a non-issue; someone able to do it can do worse. So TLS really shouldn't be necessary on localhost, and that's it.
Certainly seems likely to be the case, but I'm open to there being a theoretical attack that involves being able to inspect the data flowing through the TCP stack without being able to inspect the process spaces of the two endpoints. E.g. if the TCP stack was able to be put in a debug mode that was logging packets somewhere, you'd prefer those to be encrypted. It's pretty far-fetched in terms of an attack surface, but "this data never exists unencrypted outside these two processes' memory spaces" is objectively stronger in a specific way than plaintext transiting kernel buffers.
> The reason for all these certificate authorities seems to be: the applications need to enable TLS on their local web server. Yet no real certificate authority will issue a certificate for 127.0.0.1, so they have to add their own.
Modern browsers will treat http://127.0.0.1/ as a secure origin (i.e. you can load resources from it from within a secure website without triggering mixed content warnings) specifically to make hacks like this unnecessary. (Depending on the exact browser and version, you may need to use either the IP or "localhost" - one of them isn't universally supported in older versions).
There would be other workarounds, but the correct solution is this, if this kind of portal is in fact required/a good idea in the first place.
I think the whole design of PKI is bad. Instead of trusting a handful of CAs that have to charge for or severely limit access to their signing servers, PKI could be based on the DNS hierarchy. CA-type certificates do have an extension for specifying at which DNS level and below they are valid, so the root DNS operators would hold the root cert and would only issue signatures for TLD CA certs that are limited to those TLDs. A TLD administrator would then sign CA certs for second-level domains that are time-limited to ___domain validity and constrained to that SLD and below.
An owner of an SLD would automatically get a CA cert and wouldn't need to depend on third party issuers (and their OCSP servers to which data leaks), complicated ACME protocols, ...
The bank in question could then sign certs for each user and point domains like user58284u2874localhost.bank.example to 127.0.0.1. Even better, the bank would be much more secure because there would be no other CA that could sign certs for its ___domain, apart from THE SINGLE root CA and its TLD CA.
This entire process of using DNS as a security chain already exists in the form of DANE/TLSA, which is IMO an even simpler protocol than CAs and cert chains. With DANE, TLSA records containing TLS public keys (or cert hashes), trusted for a specific ___domain, are published in DNS zones, which must be signed with DNSSEC. That way, TLS certificates don't even need to be signed by a CA. No browser currently implements DANE, however; its major users are currently mail servers.
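For the curious, a TLSA record is an ordinary DNS record; a sketch of what one could look like for an HTTPS server (the digest is a placeholder):

    ; Pin the server's own public key for TCP port 443:
    ;   3 = DANE-EE (match the end-entity certificate itself)
    ;   1 = match against the SubjectPublicKeyInfo
    ;   1 = compare SHA-256 digests
    _443._tcp.example.com. IN TLSA 3 1 1 d2abde240d7cd3ee6b4b28c54df034b97983a1d16e8a410e4561cb106618e971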
A DNS PKI would also be controlled by a handful of unaccountable companies (you can't generally get signed name directly off the root), with the fun additional wrinkle of the most important of those companies being de jure controlled by state actors. It's hard to think of a worse option than DANE.
What's more, browsers have no real influence with DNS registries. They can kill misbehaving CAs, and have done so, to some of the largest CAs. But they can't kill a DNS registry; they can't even move their own names off a registry they don't trust (customers are going to keep hitting Google.com no matter what Google says).
That's important because browser influence over CAs has drastically modernized the WebPKI, most notably by creating an append-only cryptographic ledger (CT) that tracks every certificate issuance. That system doesn't exist for DANE and will never exist for DANE, because the browser vendors can't force the issue.
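As an aside, CT is also what makes it cheap for ___domain owners to watch for unexpected issuance themselves. A sketch using crt.sh, a public CT search frontend (the endpoint and JSON field names are what crt.sh exposes today, so treat this as illustrative):

    import json
    import urllib.request

    # List certificates logged in Certificate Transparency for a ___domain.
    ___domain = "example.com"
    url = f"https://crt.sh/?q={___domain}&output=json"

    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)

    for entry in entries[:10]:
        # Alert here if the issuer or the names look unexpected.
        print(entry["not_before"], entry["issuer_name"], entry["name_value"])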
That's not why browsers don't use DANE; they don't use DANE because it doesn't work (for instance: large numbers of middleboxes won't pass DANE queries). But it's the reason nobody should support DANE.
You don't have to like the WebPKI or support it (I respect it more than I like it). By all means propose alternatives. Just don't make those alternatives a DNS PKI.
I think the central thing here is you need to trust the DNS anyway. If you control a domains DNS records you can get a Cert issued for that name under the existing WebPKI.
Given that scenario the current CA setup just adds many more entities that need to be trusted on top of that.
> (you can't generally get signed name directly off the root)
Yea you can (unfortunately IMO.) Just stump up the money. Most people have no need for that though.
> browsers have no real influence with DNS registries.
I’m not sure why the browser vendors being in control of this is so much better than ICANN. Google has undue influence here IMO.
> They can kill misbehaving CAs, and have done so, to some of the largest CAs. But they can't kill a DNS registry;
I think the difference is that the DNS hierarchy means only one set of published records exists for a ___domain. It's not like the Web PKI, where other certs can be issued in parallel by other CAs.
> they can't even move their own names off a registry they don't trust (customers are going to keep hitting Google.com no matter what Google says).
I’m not that familiar with all the processes here, but this seems like an abuse of a registrars power. Surely the gtld servers can stop delegating zones to bad actors? Have there been historical examples of something like google getting hijacked by their DNS registrar?
> append-only cryptographic ledger (CT) that tracks every certificate issuance. That system doesn't exist for DANE
As conflicting records can’t exist it’s trivial for an entity to monitor the published records for their ___domain, and alert on changes.
> it doesn't work (for instance: large numbers of middleboxes won't pass DANE queries).
That is true for many things. I don’t find it a good argument against protocols themselves.
What is needed is less centralization, not more. I'll take US society as an example: having Congress, the Senate, the president, courts, the military, police, states, a free press (with multiple participants), universities etc. means that to compromise democracy, you need to subvert many elements simultaneously.
Likewise, having DNS and PKI separate is a plus, as the cert is linked to DNS and finding a web server is done through DNS. To completely fool an attentive user, you would need to fool both their DNS and their pool of trusted certificates (or a single CA).
To increase security, we need more cross-linked layers that each require the other. We need to have more steps and more elements that a crooked player needs to control simultaneously in order to achieve an exploit.
For example, we could require multiple certs, so that you'd need to compromise multiple CAs. Although, if you can add one CA to the trusted pool, then you can add 3, so this does not protect against actors that have access to the user's OS. But, on that front, if you have root access to a user's OS, you can do anything. Once you MITM them, you can make a user divulge their credentials. Only a physical fob with its own certs, DNS resolver and hardware-level private key can protect against that.
> What is needed is less centralization, not more.
I don't think depending on more CAs counts as constructive decentralization. Since every CA is ultimately trusted, the security of our PKI depends on the weakest link in the chain. A single exploited CA is enough to completely invalidate PKI security, whereas separating CAs based on DNS hierarchy means that a compromised CA for .SI would only affect domains under .SI.
> To completely fool an attentive user, you would need to both fool their DNS and their pool of trusted certificate (or a single CA).
Currently, due to the way ___domain ownership is verified, a compromised TLD registry leads not only to DNS compromise but to TLS compromise as well, since ___domain ownership is verified via DNS by the currently trusted CAs (most notably Let's Encrypt).
> We need to have more steps
Fewer steps forming a reliable, easily monitored security chain are IMO better for securing infrastructure than having to trust each and every obscure CA.
> we could require multiple certs
But this would significantly increase deployment complexity. And the ___domain registry can still lie about ___domain ownership so it still must be ultimately trusted.
One thing that tends to happen in these discussions is that people start with a set of axioms based on the circa-2002 era WebPKI, and just build an argument from first principles. That doesn't work. There has been a sea change since then in how browser root programs are managed.
Can't agree more. We got to a point where state actors control the public names that we use to refer to locations on the Internet. That's fine, and the logical conclusion of divvying up natural monopolies. When you have a namespace, someone needs to be the central authority for it. On a global system, you need to at least have as many namespaces as countries (plus whatever extra corporate fiefdoms you want to spin up, which I assume was what gTLDs were trying to do).
Ensuring you are communicating with the resource that you intended is a feature that does not need to be centralised, so it shouldn't be. Requiring a random third party on top of your central entity seemed like a bad idea when DANE was proposed, as CAs seemed like rent-seeking middlemen that offered nothing. In fact, it's actually a feature, as you said. Because there are multiple, free and, crucially, interchangeable third parties that can do this, the browsers are free to kick any misbehaving party out AND keep raising the bar of admission, like imposing CT requirements.
Right now, I don't have to place any trust on the state entity behind the hypothetical .xy ___domain when talking to a .xy website. They can return whatever DNS records they want to any user at any time, but assuming a TLS connection on top of WebPKI, they cannot silently perform a MitM on their own – while they have the power to fool a CA to give them a ___domain-validated certificate for any ___domain they control, doing that would be visible in CT logs for that well-behaved CA, meaning they would get caught. The incentive to do that is much less than if you were both the source of truth for the name system and the PKI infrastructure.
I've found Moxie Marlinspike's talk [1] to be a great overview of these sorts of issues. IIRC, he built an alternate system discussed in the talk. But I believe that's defunct. (Tangent, anyone know the postmortem on that?)
The feature is called 'RFC 5280 Name Constraints' and nobody will issue you such certificates.
This is because some clients don't support the constraints, so if they gave you a CA certificate that can sign any subdomain of evil.com, you could use it to sign MITM certificates for good.com – and although you wouldn't fool modern web browsers, you might fool smart fridges and ancient Android phones.
You can, however, use it to constrain your in-house corporate CA if you like.
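For example, a name-constrained in-house intermediate is only a few lines of OpenSSL config; a sketch (the section name and ___domain are placeholders):

    # Extensions for signing a name-constrained intermediate CA certificate.
    [ v3_constrained_ca ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage = critical, keyCertSign, cRLSign
    nameConstraints = critical, permitted;DNS:internal.example.com

Certificates chaining to such an intermediate only validate for names under internal.example.com – in clients that actually enforce the constraint, which is exactly the catch mentioned above.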
Well our private one does lmao. Not useful in the slightest because of lack of client support.
We had an idea that our dev machines could live under *.dev.example.com and just have a sub-CA generating certs for it, so any vulnerability or misgenerated cert would be limited to that environment, but lack of client support means that wouldn't work very well.
>You can already do this by effectively getting a pool of wildcard certs.
That is absolutely not "effectively" the same.
If I had a constrained CA cert for my ___domain, that CA cert would be the high value target which could potentially compromise my entire ___domain, but I could keep it as secure as I wanted to while issuing certificates for my world-facing machines specific to their own names.
Wildcard certs instead force me to put the high value cert on the world-facing machine. Suddenly every one of my web servers has the ability to impersonate my entire ___domain, and if compromised the attacker then has the keys to the kingdom.
Wildcard certs are a sorry excuse for a constrained CA cert, and the fact that we're stuck using them is terrible.
The only case where I consider wildcard certs acceptable is on a frontend machine that serves an entire dedicated ___domain, for example one of my systems has a lot of customername.___domain.com and that particular ___domain is only used for that system, so everything under that ___domain would be those machines exclusively, making wildcard certs no additional risk.
If your ___domain use allows a wildcard-equipped web server to impersonate anything unrelated, it's probably a bad idea.
I do not know why the parent got voted down, but I do echo the parent's sentiment that wildcard certs on public-facing TLS/SSL servers remain a weak point of security (from the server-side security POV).
Reducing wildcard usage in the Subject Alternative Name (SAN) to just the portion of the network that you control entirely still remains a valid security posture.
There are many types of TLS/SSL servers that can fork off of a public-facing wildcard FQDN; the thing is that ANY other CA (other than the one that signed the wildcard cert) can be used to hang something off a subdomain while the parent ___domain's cert was issued by this wildcard-permitting CA.
The many types of CA end-nodes that can be created (unbeknownst to the holder of this wildcard certificate) and used as subdomain servers include, but are not limited to:
1. Non-Leaf Intermediary CA
2. Cross-Certificate CA
3. TLS-Signing CA
4. Email Protection CA
5. Code Signing CA
6. Timestamping Intermediate CA
7. Identity CA
8. TLS Server Certificate
9. TLS Client Certificate
10. OCSP Responder Certificate
11. CRL Certificate
12. Code Signing Certificate
13. Email Protection Certificate
14. Timestamping Certificate
15. Encryption Certificate
The above remains true even if your DNSSEC setup is perfectly sound, for you still have web (and OS) security to contend with; you also have to contend with correct OpenSSL settings for this wildcard CA certificate (setting details in the link below).
Details on the OpenSSL settings for the above are given in this:
(Note: my website shall be accessible only by non-Chrome web browsers, it is not accessible to Chrome-based clients due to its hardcoded client-side HTTP/2 mandate ignoring and overriding my TLS server’s must have HTTP/1.2-only, ChaCha-only. Chrome is broke, remains my assertion).
> (Note: my website shall be accessible only by non-Chrome web browsers, it is not accessible to Chrome-based clients due to its hardcoded client-side HTTP/2 mandate ignoring and overriding my TLS server’s must have HTTP/1.2-only, ChaCha-only. Chrome is broke, remains my assertion).
I guess everyone needs a hobby lmao. Also, what exactly is the problem? On Wireshark I see Chrome sending a hello with TLS 1.2/1.3 and TLS_CHACHA20_POLY1305_SHA256 (0x1303), then getting rejected: https://imgur.com/a/HjZormT
Correct. TLSv1.2 is getting obsoleted due to several CVEs, so block that.
I would be talking about my server mandating HTTP/1.3 over Chrome mandating HTTP/2.0 of which Chrome does poorly in concise error messaging. Since my server decides, it fails.
On an unrelated note, stick with TLS v1.3 whenever you can.
I suggest you fix the HTTP TLS-less version of the site, so that chromium browsers may access the plaintext version.
Currently I get a 404 Not Found for the linked HTML file when connecting via plaintext.
TLS only complicates things (: I don't really need to verify the website's integrity and privacy when reading an article from an HN commenter I do not know (: Maybe it's only about cosmetics – a green padlock is more pleasing than a red triangle.
With ___domain-control certificates, it basically is. If you control the ___domain name, you're getting a cert. Sure, there's an intermediary, and yeah, they can sign bad certs; but now that certificate transparency is mandatory, bad issuances end up recorded, and that will lead to CAs getting cancelled.
Does this offer a single CA per TLD? Or a single mandated CA per whatever DNS subtree?
This makes such a CA a SPOF and a juicy target for attacks or governmental coercion. Delisting it for bad practices, even if these practices are well known, becomes infeasible.
If you are perplexed by choice, enjoy the fact that the choice exists.
It's not so much a juicy target for governmental coercion as it is an active arm of governmental authority, as you've seen every time DOJ has taken down a site and replaced it with their goofy logo.
Authorities can issue certificates for IP addresses. They merely cannot issue certificates for non-public IP addresses.
And: yes, maybe certificates for 127.0.0.1 should be disallowed altogether. But this creates a backwards compatibility issue, and I’m not sure browser vendors are willing to do it.
I see that it is indeed allowed to issue certificates for IP addresses (however not for private addresses). In my opinion this should not be allowed; there is no real justification for issuing them.
I don’t believe in too big to fail theories about certificates for local addresses, they are not allowed and should not be accepted. If your application breaks, you get to keep the pieces. I doubt there is a lot of real use though, apart from misuse as in the article.
As long as they don't MITM, that should be fine. And if they MITM, hopefully browser vendors will blacklist their certificates like they did with similar attempt from Kazakhstan.
And what if somebody else gets hold of their private key and uses it to MITM? I mean, it’s only South Korea, it’s not like they have a neighbor country which would be willing to abuse this kind of thing in targeted attacks…
This is absolutely not true, as I wrote in a different reply.
For about 5 years it has been possible to call localhost from an https page without any warning. Localhost is a secure context, precisely so that you don't need to do this.
I cannot find the docs that say that, but... it is true. You can try it yourself.
There absolutely is: it's called native messaging hosts. It allows webextensions in your browser to talk to executables installed on your local machine. The locally-installed application simply needs to install its appropriate extension in your browser.
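For context, a native messaging host is registered with a small JSON manifest roughly like this (every name and path is a placeholder); the browser launches the listed executable, and the extension talks to it over stdin/stdout:

    {
      "name": "com.example.native_host",
      "description": "Bridge between the extension and a local binary",
      "path": "/usr/local/bin/example-host",
      "type": "stdio",
      "allowed_origins": ["chrome-extension://abcdefghijklmnopabcdefghijklmnop/"]
    }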
That isn't the fault of the native messaging host transport mechanism though. I'm just saying there are transport alternatives that are more secure than the Root CA installation nonsense the apps in your submission employ.
Neither is root-CA-installation nonsense a fault of communicating via a local web server. HTTPS isn't required here, but they either have this hack in place for compatibility with decade-old browsers – or they simply failed to revisit it.
Just don't do it then. Either ship something Electron-based (or similar) with an embedded browser and IPC communication with the bundled "backend", or communicate via a trusted server.
Listening on localhost is a security nightmare, also because it is accessible from other user accounts on the same machine. The server probably can do some privileged tasks (otherwise you wouldn’t need it) and could be hijacked by malware.
No, really. Let's stick to the example of online banking: if you need to install software to use it, why don't you just deliver an Electron(-like) application and host the whole banking application inside of it? You get the additional benefit that you can configure the security inside your application's browser as you wish, and pin certificates to make MITM even less likely. Electron lets you ship native code too, comes with an auto-updater, etc.
And there are numerous alternatives to Electron if you don't want to ship a full Chrome browser. It gets a bit more complicated then, but it's possible; take a look at Tauri for example.
To be honest, I've had multiple projects where an http://localhost solution was proposed, and we found a better solution without it. It is very rarely the best thing to do.
It is possible to secure that, sure. Another possibility is MITM. Because you usually don't have HTTPS, no verification of the server is possible. If the server runs on an unprivileged port >1024, any other application on the same computer (even on another user account) could hijack that port.
If you use http://localhost, you need to authenticate both server and client. It can be done, but often it is not done properly. This authentication needs to be present on every request. And that's tricky: you would need to track every HTTP TCP connection and perform authentication at the beginning of the connection. Once you open a new TCP connection, you would need to do that again. As far as I know, you can't do that with JavaScript inside a browser.
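A minimal sketch of the per-request variant, assuming the legitimate client learned a secret token out of band (the port, header name and token handling are all illustrative):

    import http.server
    import secrets

    # Secret the legitimate client must present on every request.
    TOKEN = secrets.token_urlsafe(32)

    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Reject any caller that doesn't know the shared token.
            if self.headers.get("X-Auth-Token") != TOKEN:
                self.send_error(403)
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    # Bind to the loopback interface only, never 0.0.0.0.
    http.server.HTTPServer(("127.0.0.1", 8631), Handler).serve_forever()

Note that this only authenticates the client to the server; it does nothing against the port-hijacking scenario above, where a fake server is the problem.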
Another preventable mistake is that the server somehow becomes accessible not only from localhost, but gets exposed on a public network interface.
So all in all a lot of possible headaches, that can be prevented by choosing a different architecture.
What does he mean by CAs that don't belong there? When does a CA belong there and when doesn't it? Does it mean that if it's from Korea, it automatically doesn't belong there? The article also fails to explain what's malicious about these CAs.
Also, I think you have to manually confirm as a user to install such certificates.
Maybe I am missing something, but this smells like "Korea Bad" without explaining why.
In fact, the article does explain this. The CAs that are on this list by default have to comply with strict criteria making sure they cannot be abused. Anything that has been added externally, avoiding the usual processes of Microsoft/Mozilla/Apple, is suspect.
I know very little about certificates and online security, but I'm also kind of baffled by the expiration time of the iniLINE certificate (2018-10-10 to 2099-12-31). I feel that's also a poor practice, right? What should a regular expiration time be for a proper root certificate?
There's no authority above root certificates,* able to sign new certificates - that's what it means to be a root certificate. So root certificates will often have super long durations.
For example, the certificate HN uses is signed by "DigiCert Global Root CA" - valid from 2006 to 2031.
* Unless you count the power of OSes/browsers to push updates with new certificates.
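If you want to inspect a certificate's validity window yourself, here's a quick sketch using the third-party cryptography package (the PEM file name is a placeholder):

    from cryptography import x509  # pip install cryptography

    # Print a certificate's subject and validity window from a PEM file.
    with open("DigiCertGlobalRootCA.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.subject.rfc4514_string())
    print("valid from", cert.not_valid_before, "to", cert.not_valid_after)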
[1] https://palant.info/2023/01/02/south-koreas-online-security-...
[2] https://news.ycombinator.com/item?id=34516013
    https://palant.info/2023/01/25/ipinside-koreas-mandatory-spy...