I job hop a bunch, and a common theme across my jobs has been going in, understanding the legacy, and bringing it up to a more reasonable state. It's been interesting being the one who has to deal with the consequences of someone else's decisions.
I'll sometimes chat with the people who have had to maintain whatever groundwork I laid, and each time around I get a little better at putting down a framework of infrastructure and operations that actually stays sustainable.
I had the pleasure of briefly working with the author of this post within the last few years. Jeff was one of the most enlightening and positive people I've ever learned from. He was refreshingly honest about what challenges he was having, and delightfully accessible for mentorship and advice.
Depot also does remote Docker builds using a remote BuildKit agent. It was actually their original product. If you could feasibly put everything into a Dockerfile, including running your tests, then you could use that product and get the benefits.
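Roughly, "everything in a Dockerfile" means something like this (Maven here purely as an illustration; the image tag is a placeholder, nothing Depot-specific):

    FROM maven:3.9-eclipse-temurin-21
    WORKDIR /app
    COPY . .
    RUN mvn -q verify   # compiles, packages, and runs the tests; a failing test fails the image build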
I actually didn't know this. We've had some teething issues _building_ in docker, but we actually run our services in containers. I'm sure a few hours of banging my head against a wall would be worth it here.
> including running your tests,
"thankfully", we use maven which means that our tests are part of the build lifecycle. It's a bit annoying because our CI provider has some neat parallelism stuff that we could lean on if we could separate out the test phase from the build phase. We use docker-compose inside our builders for dev dependencies (we run our tests against a real database running in docker) but I think they should be our only major issues here.
> It’s simple to create a Dockerfile that containerizes your app. It’s even easier to create one that has terrible build performance, is chock-full of CVEs, is north of 20GB in size, and whatever foot gun you trip over when using Docker
Writing Dockerfiles has been common practice for years now. Years. And yet it's still so common for people to write poorly optimized Dockerfiles.
I think we should start admitting that it's too much overhead for people to learn how to write a Dockerfile properly.
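To be concrete, most of what "poorly optimized" means in practice (huge images, build tools shipped to production, busted layer caching) comes down to skipping a multi-stage structure along these lines; a rough sketch, with the image tags purely illustrative:

    # build stage: full JDK + Maven, with dependencies as their own cached layer
    FROM maven:3.9-eclipse-temurin-21 AS build
    WORKDIR /app
    COPY pom.xml .
    RUN mvn -q dependency:go-offline
    COPY src ./src
    RUN mvn -q -DskipTests package

    # runtime stage: slim JRE only, so the build toolchain and sources never ship
    FROM eclipse-temurin:21-jre
    WORKDIR /app
    COPY --from=build /app/target/*.jar app.jar
    CMD ["java", "-jar", "app.jar"]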
That being said, I've known Kyle for a while now. The team at Depot have consistently shown the deepest possible understanding of the container ecosystem. I'm very excited to see what else they do.
That's true. We have a slightly different focus - CI workloads.
However, the goodness of depot.dev comes from BuildKit remote builders and remote caching. That'll be natively integrated into our runners in ~2 weeks.
So you'll get that goodness when running CI with zero changes needed to your actions.
As opposed to keeping all of your servers independent of each other, supercomputers are used any time you want to pretend that an entire cluster of machines is one computer.
In other words, they're used when you want to share some kind of state across all of the machines, without the potential overhead of going through some other system like a database.
Physics simulations and molecular modeling come to mind as common examples.
In the case of ML training, that shared state is the model parameters and the deltas that get calculated and broadcast during training.
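A toy sketch of that in code, using data-parallel training in JAX (purely illustrative; real supercomputer-scale training shards things far more aggressively):

    import jax
    import jax.numpy as jnp
    from functools import partial

    def loss(params, batch):
        x, y = batch
        return jnp.mean((x @ params - y) ** 2)   # toy linear model

    # one replica of the training step runs on every device
    @partial(jax.pmap, axis_name="devices")
    def train_step(params, batch):
        grads = jax.grad(loss)(params, batch)
        # each device computes gradients on its own shard of the data, then pmean
        # averages them across all devices and gives every replica the same result;
        # that averaged delta is the shared state being broadcast around
        grads = jax.lax.pmean(grads, axis_name="devices")
        return params - 0.01 * grads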