FYI, if you don't want to install the official Let's Encrypt client on your production server, I created a simple python script that will automate the ACME process for you. It doesn't have to run on your server, and it doesn't ask for your private keys.
The only thing that has to be run as sudo on your production server is a simple python one-liner that temporarily serves the required file. The script prints out this one-liner so you can copy and paste it into your server's terminal, then kill it when the challenge is done.
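The one-liner it prints looks roughly like this (the response body comes from the CA's challenge, so the token here is just a placeholder):

    sudo python -c "import BaseHTTPServer; \
        h = BaseHTTPServer.BaseHTTPRequestHandler; \
        h.do_GET = lambda r: (r.send_response(200), r.end_headers(), r.wfile.write('<challenge-token>')); \
        BaseHTTPServer.HTTPServer(('0.0.0.0', 80), h).serve_forever()"

It needs sudo only because binding port 80 is a privileged operation; once the challenge has been verified, you Ctrl-C it and nothing is left running.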
The standalone authenticator still requires (1) installing/running the letsencrypt client on your production server, (2) letting it run as root, and (3) letting it access your private keys. My script requires none of these things.
I would like to use this service, but I do not want a program to do things for me on my server unless it's part of the OS distribution, so this will help.
Why do you need sudo to serve a file though? Is it a restricted port?
The overview speaks of a "here's my key!" / "you are registered!" step that letsencrypt-nosudo doesn't seem to have. It seems that your first communication with the CA is during step 2 (requesting challenges). Does this affect your script's ability to do things like subject alternative names?
This script takes certificate signing requests (CSR) as input. If you want a subject alternative name, it's up to you to add that to your CSR file before you run the script.
The official Let's Encrypt client is mostly focused on ease of use. This script is mostly focused on not knowing your secrets or requiring privileged access, and it assumes you know what a CSR is and how to install a signed certificate on your webserver.
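For example, a CSR with extra names can be generated in one shot with openssl (this needs bash for the process substitution; the openssl.cnf path varies by distro, and the ___domain names are placeholders):

    openssl req -new -sha256 -key ___domain.key -subj "/" \
        -reqexts SAN -config <(cat /etc/ssl/openssl.cnf \
        <(printf "[SAN]\nsubjectAltName=DNS:example.com,DNS:www.example.com")) \
        > ___domain.csr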
It used to, but that meant you had to scp the generated DVSNI cert to your server[1], which is more complex than asking the user to serve a string at a specific URL (which is what the SimpleHTTP challenge requires).
This is a good idea, but the Docker daemon runs as root, so it's at least theoretically possible to break out of the container and start messing with the host as root.
> the address of the CA is hardcoded in the program so be sure to use the official lets-encrypt
Now, a nosudo letsencrypt is a good idea, but if such a thing is to exist it should live in the official repo and be audited like Let's Encrypt was (note that this is just my opinion).
* Renewal does work already, it's just not documented anywhere. There is a script that can check when your cert is a specified amount of time from expiry, get a new cert, and then deploy it by updating symlinks. This mechanism works now if the script is run from cron (see the sketch after these bullets).
* You can use the client on a machine that doesn't have Nginx or Apache, it just won't be able to install the resulting cert. But the client's standalone mode can obtain the cert and save the associated files in the current directory. (In this case, they also won't be enrolled for automated renewal by the renewer script.)
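Wiring the renewer up would be a one-line cron entry along these lines; the script path and log ___location are illustrative, since none of this is documented yet:

    # run weekly; the renewer only acts when the cert is close to expiry
    0 4 * * 1   /usr/local/bin/letsencrypt-renewer >> /var/log/letsencrypt-renew.log 2>&1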
How would it work on, for instance, App Engine, where you can't really run a script on the server? I guess you'd have to do the validation step manually, but would it still do the other steps automatically?
Depending on the nature of the control that you have over the site, you might need to get the hosting provider to do it for you (some of them are probably going to integrate Let's Encrypt) or you might be able to make the changes manually to prove control of the ___domain.
If people think of useful integration features for particular platforms, we can take patches, or those platform developers can write their own clients. :-)
>One of the common one is to upload a certain file at a certain address of that ___domain.
I wonder if this can be socially engineered/tricked into granting a certificate for someone else's ___domain. I've seen at least one service (Majestic SEO) that asks you to upload a document to your ___domain to get certain services. Now that this exists (or soon will), any such service that someone uses could generate a cert and MITM the site.
Also, if they're verifying over HTTP, couldn't anyone just MITM the verification connection? Well, anyone with access to a computer along the route between Let's Encrypt and whatever ___domain they want.
I think that this is a common way of verifying ___domain ownership, but something harder to spoof without actually owning the ___domain, like setting a custom TXT record or being able to respond to a WHOIS contact email would seem to make more sense than file upload.
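(The ACME drafts do include a DNS challenge along these lines: the CA looks for a token in a TXT record under the _acme-challenge label, which you can inspect yourself with something like

    dig +short TXT _acme-challenge.example.com

where example.com stands in for the ___domain being validated.)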
I haven't heard of it being used in the wild yet, and it's not a MITM attack. It just shows how weak this verification method is: against a PHP-built site, a relatively simple hack would be to find a page that allows uploads. Most hosts don't write uploads to the document root, but I imagine it'd be trivial to turn the upload process against itself.
A MITM can easily spoof IP addresses. You can't trust unsigned data, and IP address headers aren't signed.
You also can't trust signed data if you don't trust the signature. That's the real problem here. This whole protocol is an attempt to establish trust, but it's based only on temporary control of a server's network traffic. Probably that's the legitimate owner of the ___domain, but maybe it's somebody malicious who merely had access to their network for a time. You can't really be sure.
If someone MITMs the connection between the CA and the website itself, then yes. I believe LetsEncrypt uses a variety of proxies around the world to measure the website, so there'd have to be a lot of successful simultaneous MITMs for that to work.
Ultimately you have to bootstrap trust from somewhere. Perhaps in future DNSSEC can be used to solve this problem (though DNSSEC is of course itself just another PKI).
People have been able to get SSL certs for webmail domains before, as the webmail providers hadn't blacklisted some of the emails CAs can use for ___domain validation by email. e.g. see http://www.entrust.com/what-happened-with-live-fi/
Even if a MITM can intercept HTTP requests to your server, they still can't hijack your registration attempt.
The real problem is that if somebody can intercept your HTTP traffic, they can register their own account key. After all, they can control your ___domain, and that's all letsencrypt cares about! If you haven't already registered an account key, I can't see how anything would stop them. If you already use a different CA and don't want to bother with letsencrypt, you might never find out either!
I guess that’s why they will probably not use HTTP for the verification. Why would they? The client already generated a certificate and sent it to Let’s Encrypt; they can already use TLS.
Except the cert used in that process is one uploaded by the user, and could easily be provided by an attacker. As long as the attacker can MITM the path to the ___domain's web server, they can provide that cert for TLS and successfully spoof the site.
Edit: Ok, technically it's a CSR signed by a private key, but you could still use the key to self-sign a cert or something... But none of that mitigates the MITM attack described above.
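(Producing such a stand-in from an existing key is a one-liner, e.g.

    openssl req -new -x509 -key ___domain.key -subj "/CN=example.com" -days 7 -out selfsigned.pem

so presenting a certificate proves nothing beyond holding some private key.)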
I've never heard conducting a MITM attack on the Internet backbone or hosting provider as "easy" before (other than by governments—but they probably already have plenty of vectors for generating fake certs). Now, if you're serving HTTP over an unknown wifi access point...
Question: is the X.509 system set up in such a way that one of these issuers could give me a free code-signing cert, or a free CA cert for signing client certificates?
Because, as far as I understand, the answer is no: signing certs have no sense of signing "for an origin". If a CA issued me a CA cert, I could use it to create a signed cert for microsoft.com. Which seems nonsensical.
All I want is to get issued a CA cert that lets me sign arbitrary things within my ___domain's scope. Basically, the X.509 equivalent of a DNS NS record, delegating responsibility for making assertions about that ___domain (and only that ___domain) to "my" CA. Why is that so hard?
It's hard because for the longest time (say, 1995 to about 2010) nobody really cared about SSL. It was a box you ticked when implementing a login form or a credit-card purchase page. So the infrastructure was neglected, progress was lethargic, and generally everything bitrotted.
This can be seen quite clearly in the state of OpenSSL, in the number of exploits in TLS stacks that started appearing in recent years once academic attention was focused on them, etc.
The feature you want is called name constraints and for the longest time was not really implemented anywhere. So that's why CAs don't do it. Also: this would be a highly niche feature and CAs, like any businesses, weigh up implementation cost vs potential market size and risk when choosing what to support.
That doesn't mean it will never happen. It could, now there's a lot more focus on the SSL infrastructure in the wake of Snowden.
Also, OpenSSL didn't (or doesn't) appear to care about high-reliability software engineering, and the IETF TLS Working Group didn't care about producing a minimum-featured spec that isn't overly difficult to implement, maintain, and support in the real world. Instead, TLS has become a kitchen-sink, feature-hoarding, experiment-in-production jambalaya.
X509 extensions can be marked as critical. Certificates must be rejected if the stack encounters a critical extension it doesn't understand. (In theory at least, I haven't looked at real implementation behaviour.)
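(For reference, here's roughly how a name-constrained sub-CA would be expressed in OpenSSL's x509v3_config syntax; the section name and ___domain are illustrative:

    [ v3_subca ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage         = critical, keyCertSign, cRLSign
    nameConstraints  = critical, permitted;DNS:example.com

Marking nameConstraints critical means a client that doesn't implement it should reject the cert outright rather than treat it as an unconstrained CA.)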
As icebraining wrote, it's technically possible with X.509, but nobody within the public CA industry will do it for business reasons.
You could approach the problem by creating your own root CA for the ___domain and then issue subordinate CA certs, but obviously for clients to validate them they would need to import your root into their trusted store (in a corporate IT scenario you would push out the root as part of the ___domain policy).
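A minimal sketch of that approach, assuming an extensions file like the v3_subca example elsewhere in this thread (all names and lifetimes illustrative):

    # self-signed root for the organisation
    openssl req -new -x509 -newkey rsa:2048 -nodes -keyout root.key \
        -subj "/CN=Example Internal Root" -days 3650 -out root.pem

    # subordinate CA, constrained to your ___domain via the extensions file
    openssl req -new -newkey rsa:2048 -nodes -keyout subca.key \
        -subj "/CN=Example Sub CA" -out subca.csr
    openssl x509 -req -in subca.csr -CA root.pem -CAkey root.key -CAcreateserial \
        -extfile subca.cnf -extensions v3_subca -days 1825 -out subca.pem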
That's one of the several elephants in the room with the CA system: they won't issue non-leaf certs with any frequency, even to people like the GP who have an honest reason to want them.
I love that the author of this article actually got out a pen and paper, drew out that diagram and photographed it. Great summary, looking forward to using this service!
I'm really excited about this project, but a lack of steady updates and communication on release date are really hurting it. Hopefully they release soon and make this a moot point.
They just released a blog post about generating the certs. That means they are getting there. From looking at the code, they have most of it in place; they just lack a few features and testing. IMO it shouldn't take too long now. The first commits date from October of last year.
If you divide the year into three equal parts, early, mid, and late, then early is January through April, mid is May through August, and late is September through December.
Therefore, they still have approximately two and a half months until "mid 2015" is over.
We are desperately looking forward to this being available so we can SSL all our dev sites and still be able to test them on our phones (which are a bit hard to get one's own CA cert onto).
It sounds like Lets Encrypt is going to require a site to be publicly available for them to give out a cert for it. They'll tell you to put a certain file at a certain ___location to demonstrate that you control the ___domain.
So unfortunately it'll only help with your dev sites if they're available on the public internet.
However, the methods described there that don't involve a ___domain validation step all assume prior ACME usage; that is, at least the first time you get a cert from an ACME CA with policies akin to those described here, you must complete a DV step with DVSNI or Simple HTTP (both of which require a publicly visible server with a publicly visible ___domain name).
I don't expect the Let's Encrypt CA will be willing to help keep servers (or certs issued to them) a secret -- for example, the certs are likely to be published in Certificate Transparency! -- but you're right that the ACME DNS challenge doesn't require the server to be publicly accessible and doesn't even require the underlying subject name to exist in the publicly-visible DNS.
Interesting but will there be a way to generate the certs and download them myself versus running an application as root on my servers? I would much prefer not to run their application as root on my servers :)
Agree. I love what they're doing, but it seems crazy that installing a new daemon and running it as root is being advertised as a key part of the solution. I get that they're trying to make it super simple for the hobbyist at home with a LAMP VPS, but it feels like this should be the alternative to a simple, composable command-line tool that's more in the Unix philosophy and less invasive.
The good thing is the ACME protocol looks pretty straightforward so I'm sure someone will write that tool, even if Lets Encrypt themselves aren't interested in providing it.
Alternatively, it could have a non-sudo mode that would give you the certs and configs in a directory and tell you where to copy them (or generate a simple, five-line bash script to copy them).
I'm a bit confused on the issue of certs because I've heard that they don't work for subdomains, so can anyone tell me:
If I get a cert from someone like letsencrypt for a ___domain I control, for example "xyz.com", will that same cert also work for "wiki.xyz.com", "blog.xyz.com", "someotherthing.deepdomain.xyz.com" and so on?
I'm interested because I use jwilder/nginx-proxy and then have a docker container for each of my webapps, each connected to a different subdomain (video, music, rss, vm, desktop, files, etc).
1. Get a separate certificate for each subdomain. Let's Encrypt makes this feasible since the certs are free and require a minimum of human overhead to obtain.
2. Get a single certificate that covers each subdomain by using Subject Alternative Names, which Let's Encrypt currently supports (see the check at the end of this comment).
3. Get a wildcard certificate for "*.xyz.com". Let's Encrypt does not support this and I do not think it plans to, at least not in the initial release.
The main question is: is the set of subdomains known beforehand, or is it highly dynamic? If it is fairly stable, then options 1 or 2 will work well for you; otherwise, you probably want option 3.
Choosing between options 1 and 2 usually depends on the details of your infrastructure and your threat model. For example, are all these subdomains served from the same server or different servers? If it's the same server, you might be more inclined to use one cert for all of them (option 2), since it will be easier to manage.

Do all the domains have the same threat model? Suppose you run blog.xyz.com on one server and billing.xyz.com on a separate, hardened server. In that case, you would want option 1, because if you used the same cert on both servers, obtaining the private key from the less-secured blog.xyz.com server could be used to attack traffic for billing.xyz.com.
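(For option 2, you can check which names a cert actually covers with

    openssl x509 -in cert.pem -noout -text | grep -A1 'Subject Alternative Name'

cert.pem being whatever file your client saved.)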
This is just for my home system, so that I can access everything via https without that annoying ssl cert warning. It's all run from a single server in the closet. I have an nginx instance facing outside, with http redirecting to https, and then each individual app accessed based on what subdomain you used (rss.xyz.com to get to my rss reader, vm.xyz.com to access my virtual machine management console, etc). At the moment I have 7 webapps running, and am in the process of adding an 8th.
The subdomains are somewhat dynamic, since I'll add more whenever I add a new webapp to the server.
In that case, sounds like you probably want a wildcard. Also, if this is just your home system and you are the only person who needs access to these machines, why not use a VPN instead? Makes more logical sense, easier to configure, and you can do more than just webapps.
For a browser to accept a cert while connecting to "wiki.xyz.com", the cert needs to be for "wiki.xyz.com" or for "*.xyz.com". Note that "*.*.xyz.com" is invalid. So-called "wildcard certs" are usually more expensive than single- and multi-___domain certs.
Yeah. And there are cases where a single wildcard isn't enough, like stackexchange's meta sites. If you type in https://meta.<site name>.stackexchange.com, it will return an SSL error.
I saw in an earlier thread that wildcard certs won't be supported at first, but possibly in the future. An additional comment was made that wildcard certs wouldn't be necessary once the cert issuing/renewal process is automated.
My question is: wouldn't wildcard certs be faster if your site's behavior hops around different subdomains? One certificate for all vs. individual certs for each subdomain. I may be way off; I'm a novice when it comes to the SSL/TLS handshake, so please correct my ignorance.
Maybe the question is wildcard vs SAN certs. With automated cert issuing/renewal, they could theoretically talk to e.g. Cloudflare's API to get your DNS records, if you're with them, and then regenerate a new SAN cert containing exactly the subdomains you really have whenever that list of subdomains changes. Which is probably more secure in some way I don't understand.
You're right about the "just using a bunch of separate certs" solution being slower, though, especially with HTTP/2 (which, thanks to connection coalescing, can reuse a single socket for every origin covered by one TLS cert).
Let's say you host your site on your own server at www.example.com. You want that to be served by HTTPS. You also have a subdomain cdn.example.com for static assets which is CNAME'd to a proper CDN that you want to use HTTPS with also.
With a wildcard cert, the third-party CDN would need your private key for your self-hosted www.example.com site. With different keys & certs per-site, the process is more modular.
I agree that wildcard certs would be great and I'll default to using them for the simplicity (e.g. temporarily activate a quick test.example.com site to check something works) but the ability to keep things modular is also great.
This is a really useful breakdown of Let's Encrypt and its scripted process for procuring certificates. It's ironic though that cryptologie.net is not run over https! They should use Let's Encrypt.
Good question from the page's comments: will this work for different sites run on one server? I'm thinking of all my side projects that have sites up; they're all run off the same $9/month Apache server. Can I get individual certs for each ___domain run off the server?
You can get certs that are valid for multiple domains, using subjectAltName (SAN). The client can request multiple names at once and the resulting cert will be valid for each.
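With the official client, that would look something like the following; I'm assuming certbot-style flags here, since the CLI is still pre-release and may change:

    letsencrypt certonly -d example.com -d blog.example.com -d shop.example.com

with each -d adding one name to the resulting cert's SAN list.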
Has anyone tried a Ruby client library implementation yet? I've looked at the spec and it's pretty complicated. I'm not sure I'll have the time to devote to it, unfortunately.
The current implementation is fine for people running an nginx server with a single ___domain, but it's not sufficient tooling for anyone trying to drive this from a web application that manages more than one ___domain.
Can someone explain why the CSR is exchanged over https? The whole point of a CSR is that an adversary can't fake it and get a key that works for you and the adversary.
The CSR is signed by the key that is going to be used for HTTPS on the website requesting a certificate. If the CSR weren't transmitted over HTTPS, then, since the Let's Encrypt CA has no prior knowledge of your key, a network adversary could replace your CSR with one containing an attacker-controlled key.
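(Note that a CSR's own signature only binds the request to the key embedded in it; you can check it with

    openssl req -in ___domain.csr -noout -verify

so it prevents tampering with a given CSR, but not wholesale substitution of a different one, which is why the channel itself needs to be authenticated.)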
HSMs can typically duplicate keys to other HSMs (though usually only other HSMs from the same vendor). All of the Let's Encrypt private keys are stored on multiple HSMs.
https://github.com/diafygi/letsencrypt-nosudo