I love Tailscale, too. Also, I read this article before it had any upvotes and learned absolutely nothing new or insightful. Wish the author had kept going.
It was news to me that Tailscale lets DNS lookups for particular domains hit specific resolvers, and that those resolvers can serve up the internal VPC addresses, so you need nothing except a subnet router inside the VPC to work against your secured AWS resources.
It means you can close all the open ports on your VPC security groups without changing how your external systems access the internal AWS services.
It was probably obvious to everybody else, but after I worked that out, Tailscale became my network.
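For anyone curious, a minimal sketch of that setup, assuming a Linux EC2 instance inside the VPC (the CIDR, resolver address, and ___domain below are placeholders, and the split-DNS part lives in the admin console's DNS settings rather than on the CLI):

    # Let the instance forward traffic for routes it advertises.
    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
    sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

    # Advertise the VPC CIDR as a subnet route (placeholder CIDR).
    sudo tailscale up --advertise-routes=10.0.0.0/16

    # Then, in the admin console: approve the route, and under DNS add a
    # restricted ("split DNS") nameserver, e.g. the VPC resolver 10.0.0.2,
    # scoped to your internal ___domain so only those lookups go to it.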
Behind the scenes they’ve obviously got a lot of crazy to deal with, but it seems to work well from the outside (just using it for Tailscale lookups, at least).
Have you had issues with it otherwise?
EDIT: actually I do have one gotcha. There’s a switch in the admin panel to override local DNS. In theory, if you flipped that option while your machines were using private DNS on Route 53 to find each other, you might be in trouble (don’t ask me how I know).
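If you ever need to sanity-check what a machine is actually resolving with after touching that switch, something like this works on a systemd-resolved Linux box (the ___domain is a placeholder; other platforms will look different):

    # Per-interface resolvers and routing domains; on the tailscale0 entry,
    # check whether 100.100.100.100 is handling all queries or only specific domains.
    resolvectl status

    # End-to-end check that a private Route 53 name still resolves.
    dig +short internal.example.com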
There is no open-source equivalent of the PITA that is MagicDNS, so I don't use it. That way, if Tailscale goes rogue, I can replace it as easily as possible.
I thought Headscale added support for MagicDNS? There's at least one other comment in the thread mentioning this. It didn't always have that support, but it has since been added and the gap has narrowed.
And I'm just as happy to use it for free. I haven't set up Headscale either; as attractive as owning my own infrastructure sounds, I'd rather have someone else do it for me...
Tailscale is a really great service and it's so easy to teach someone else how to use it, compared to like every other VPN ever!
The only thing worse than “It was DNS” is “It was DNS, but only in this rare and weird edge case, so it never showed up when you tried to debug it”…
I mean, I’m impressed by that capability. But I’m horrified by the potential future support implications. Who’d want to be debugging a problem with “magic DNS” at 2am on a Sunday morning while prod is down and the entire C-suite is half drunk, tired and angry, and breathing down your neck?
Sadly, almost none of that is complex or surprising if you’re used to dealing with DNS deployments on Linux or BSD. What’s new is bundling a custom system resolver for machines that otherwise can’t forward queries for matching domains to specific name servers; users who don’t run systemd-resolved or NSS are often left to roll their own solution.
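For comparison, this is roughly the by-hand version of that per-___domain forwarding with systemd-resolved (interface, resolver address, and ___domain are placeholders); Tailscale’s bundled resolver is essentially doing the equivalent for you on systems where nothing like this exists:

    # Route queries for *.internal.example.com (and only those) to 10.0.0.2.
    sudo resolvectl dns tailscale0 10.0.0.2
    sudo resolvectl ___domain tailscale0 '~internal.example.com'

    # Confirm the routing ___domain took effect.
    resolvectl status tailscale0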