The graph seems a bit off. It says GPS 110 corresponds to ~125 on the odo, but from personal experience I'd put the delta closer to 5 at those speeds, and around 3 at lower speeds.
Border Force looked into a later order of thorium, and that caused them to go over all his past imports, which contained the plutonium. The plutonium had been sitting on the guy's shelf for months at that point.
14 and 15 when doing it relatively quickly. 20/20 when I looked away from the screen for roughly 5-10 seconds after each difficult set, from the 10th one onwards.
It usually depends on ___location; for example, Cloudflare has a setting somewhere along the lines of "always show captchas for non-western traffic", and a lot of people enable it.
I'm your average skilled introverted engineer. But after more than a dozen years of experience and problem solving, I'd go with #2. I feel I'd be able to explain myself much more easily, have to do much less work, and probably have many more ways to impress the interviewers face to face. I have also been on the receiving side of take-home tests, and I know how hard it is to impress someone with those.
When I hire I don't need someone to impress me; they have to present themselves as able to do the job.
I think a lot of devs think they have to hire someone who impresses them, and we end up with insane tests and adversarial interviews.
Yeah, of course there will be people who blow through the task much faster than others, with better-than-average quality; they will impress me and get the offer first. But often they already have multiple offers, and after negotiation they'll take a higher one, and we have to get someone we can afford who can do the job.
> When was the last time you had to debug an ancient codebase without documentation or help from a team?
I wonder why the author thinks this is something unthinkable and unrealistic. Small teams with strict ownership and skilled engineers. If you join a company like that you will inevitably find yourself in that situation.
Podman actually works really well. Out-of-the-box virtually-no-configuration-needed rootless containers. It's also usable via docker-compose with a single env variable. (podman-compose wasn't up to par for us)
We've been using it for a couple of years running and managing hundreds of containers per server - no feeling of flakiness whatsoever. It's virtually zeroconf and even supports GPUs for those who need it. It's like docker but better, IMO.
Hope it gets a popularity boost from CNCF. Rooting for it.
I vastly prefer it to Docker, especially buildah over buildx. Instead of inventing yet another DSL, buildah lets you simply use shell scripts (though it does also support Dockerfiles). Another thing buildah is really good at is not doing much automatically: you can really optimize layers if you care to.
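For anyone who hasn't tried it, a buildah build really is just a shell script. A minimal sketch (the image, package, and paths are only placeholders):

```
#!/bin/sh
set -eu
# start a working container from a base image; buildah prints its name
ctr=$(buildah from docker.io/library/alpine:3.19)
# each step is an explicit command, no DSL in between
buildah run "$ctr" -- apk add --no-cache python3
buildah copy "$ctr" ./app /opt/app
buildah config --entrypoint '["python3","/opt/app/main.py"]' "$ctr"
# nothing is committed until you say so, which is what makes layer
# optimization easy: the steps above end up as a single layer here
buildah commit "$ctr" localhost/myapp:latest
buildah rm "$ctr"
```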
The Podman ecosystem has given me a strong disliking of the Docker ecosystem, so I'm also rooting for it.
I think I might be the only one that prefers Docker for building Docker containers using CI.
I use Drone, but instead of using the Docker plugin I start a detached (background) Caddy server to act as the DOCKER_HOST endpoint. That lets me proxy to the local Docker socket to take advantage of caching, etc. while I'm iterating, but gives the option of spinning up docker-in-docker to get a clean environment, without any caching, and running a slower build that is virtually identical to what happens on the CI server.
I find that having the daemon available solves a ton of issues that most of the CI-provided builder plugins have. For example, with the builder plugins I'd always end up with a step like build-and-tag-and-push, which didn't work very well for me. Now I can run discrete build steps like build, test, tag, push, and it feels far more intuitive, at least to me.
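In case it's useful, a rough sketch of what that proxy looks like; the port, socket path, and Caddyfile contents here are my assumptions, not an exact config:

```
# Caddyfile: expose the host Docker socket over TCP for CI steps
cat > Caddyfile <<'EOF'
:2375 {
    reverse_proxy unix//var/run/docker.sock
}
EOF
caddy start --config Caddyfile      # runs detached in the background
export DOCKER_HOST=tcp://localhost:2375
docker build -t myimage .           # reuses the host daemon's cache
```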
I only dislike Podman because some distributions used it as an alias for docker, which made a lot of docker-compatible software not work on that distribution without workarounds. I wouldn't normally blame the application for this, but in this case both the application and the distribution come from the same dev.
Agreed, the `podman` command is 95% drop-in compatible with the `docker` command, but those edge cases are annoying and I would rather just use the docker cli backed with podman running the containers.
Podman has a docker frontend. On Fedora, it is packaged as podman-docker, I believe. I recently went through the pain of getting testcontainers working on Fedora 41 with Podman. After enabling the Podman socket and setting a couple of environment variables, I was off to the races!
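For the curious, it boiled down to roughly this (exact values depend on the setup):

```
# enable the user-level Podman API socket
systemctl --user enable --now podman.socket
# point Docker-API clients such as testcontainers at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
# Ryuk (testcontainers' cleanup container) often needs to be disabled
# or given socket access when running rootless
export TESTCONTAINERS_RYUK_DISABLED=true
```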
I completely agree and have had the same experience as you with docker-compose working better than the alternatives.
Past versions of podman were flaky, but since version 4, which is now a couple of years old, I haven't had any issues whatsoever. I'd recommend anyone using containers on Linux try it out instead of installing Docker out of habit.
+1, Podman is great. I have been running it for a while on NixOS.
But Compose doesn’t mesh well with the overall NixOS configuration system. So I ended up building a custom tool that can convert your existing Compose project into a NixOS config.
If podman compose parsed env var strings correctly, it would be on par. Not sure why that hasn't been fixed; probably because it's meant as a stepping stone rather than a well-thought-out replacement.
The IO through fuse-overlayfs is performance-limiting, though. It's almost half the speed of native overlay for layers with many tiny files.
Note that Linux allows you to mount overlay within a user namespace if you are root within the user namespace.
In other words, if you are root within a container, even though that is not root on the host, Linux allows you to mount overlay filesystems (most filesystems are not allowed). See `man user_namespaces`.
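A quick way to see it in action (assumes a reasonably recent kernel; unprivileged overlay mounts in user namespaces landed around 5.11):

```
mkdir -p lower upper work merged
echo hello > lower/file.txt
# -r maps the current user to root inside a new user namespace,
# -m gives it its own mount namespace
unshare -rm sh -c '
  mount -t overlay overlay \
    -o lowerdir=lower,upperdir=upper,workdir=work merged
  cat merged/file.txt
'
```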
For posterity, there have been some issues when destroying containers - errors about "inconsistent state of container" or such. But these have always been about non-running containers, so the answer has been destroy/recreate, with no measurable impact for the business. After spawning and destroying thousands of containers in a high-load live environment (across half a dozen servers), I consider podman pretty stable.
And assuming my own comment is high up, this is the env variable we automatically load:
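```
# the standard rootless Podman socket ___location; the exact path may differ
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
```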
On macOS it creates a Fedora CoreOS VM to run containers in. Rootless simply means that the root user in a container maps to the runner outside, not to the actual system root.
Edit: ...because the runner doesn't need to run as root.
Both should do exactly the same thing; they are just installed differently. docker compose is installed as a docker CLI plugin (Linux only), and docker-compose is installed as a standalone binary.
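Concretely (the plugin discovery paths vary a bit by distro package):

```
# Compose v2 as a docker CLI plugin, discovered from e.g.
# ~/.docker/cli-plugins/ or /usr/libexec/docker/cli-plugins/
docker compose version
# the same Compose v2 shipped as a standalone binary on PATH
docker-compose version
```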
Previously, docker-compose was a separate program, written in Python if I remember correctly; people usually referred to it as v1. Later Docker incorporated it into the docker binary itself as a subcommand, so that's v2.
v2 is still a separate binary; it can just be installed in different ways (on Linux). If GGGP was referring to v1, that's been legacy software for 2+ years and they probably shouldn't use it.
> The Docker compose CLI plugin has no stable output format (see for example https://github.com/docker/compose/issues/10872 ), and for the main operations also no machine friendly output format. The module tries to accomodate this with various version-dependent behavior adjustments and with testing older and newer versions of the Docker compose CLI plugin. Currently the module is tested with multiple plugin versions between 2.18.1 and 2.23.3. The exact list of plugin versions will change over time. New releases of the Docker compose CLI plugin can break this module at any time.
Honestly, every time I wanted to use Podman I hit a bug. Most of the time it was already fixed upstream, but I couldn't get the new version because they don't provide any direct repos to get updates from, whereas Docker does - which helps a lot.
Also, networking in rootless containers still sucks.
The end result is that I just go back to Docker: less hassle and better stability.
I've asked the same question and still haven't got a concrete answer as to why anyone would use devcontainers.
As I understand it, devcontainers use some base image and then instructions in a JSON file describing what to add for that specific app. Why not just make a Docker image with everything your app needs and use it?
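From what I can tell, a minimal one looks something like this (the image, feature, and command are just placeholders):

```
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "my-app",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    "ghcr.io/devcontainers/features/node:1": { "version": "20" }
  },
  "postCreateCommand": "npm install"
}
EOF
```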
Yeah I’ve spent a lot of time digging into devcontainers, and I think the basic answer is that they play nice with VScode/codespaces and not much else.
If you are using those, it gives you a nice push button way to get up and running. If you’re not using those, they’re pretty awkward — you just end up with images/containers that can only be built/run/interacted with using the devcontainer-cli tooling.
Would be nice if there was a simple standard way to point to a dockerfile in your repo (or even an image name) and say “build and run this for the dev env.” And the tools ecosystem could then reference and use standard container tools to build and run.
This sounds better if you never use cloud editor platforms; devcontainer can be useful if you jump between local coding and say GitHub Codespaces where devcontainer.json is automatically recognized.
IME a hybrid of your approach and devcontainers works best in that situation though, because I don't want to rely on or even figure out how to use the devcontainer recipes, and building them every time the thing spins up takes forever. I like a workspace container defined in a Dockerfile that publishes to a registry and a very basic devcontainer config to use it.
I think a lot of it is ide integration. You can see devcontainer details in a tab in vs code for example.
But I think the biggest thing is code reuse - there are a lot of plugins for dev containers so it is easy to spin up a new project with things like git, node, defining users, etc etc
IDE integration. For example, I use the Jetbrains RustRover IDE on Fedora Atomic, which is an immutable Linux distribution that is based on keeping the OS setup as clean as possible. I use DevContainer to install the toolchains I need for my project and the IDE takes care of setting up my development environment.
I imagine that is a bit tricky to optimize for build speed in a monorepo. Say you have 3 things to build: you either need to pick the order, or you could use multiple targets and copy stuff around, I suppose.
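Something like this, roughly (service names and build commands are made up):

```
# one Dockerfile, several targets; build order is whatever you invoke
cat > Dockerfile <<'EOF'
FROM node:20 AS base
WORKDIR /repo
COPY . .

FROM base AS service-a
RUN npm run build --workspace=service-a

FROM base AS service-b
# "copy stuff around": reuse an artifact built in another target
COPY --from=service-a /repo/service-a/dist /repo/shared/dist
RUN npm run build --workspace=service-b
EOF
docker build --target service-a -t service-a .
docker build --target service-b -t service-b .
```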
I don't really understand the premise of these types of write-ups — the software has a license, and people and companies use it accordingly.
I understand most core software was started long ago as a one-person project and given a FOSS license. Due to the license, it grew from the work of hundreds or thousands who contributed, but the license no longer serves the authors' worldview.
It seems to me all the contributors implicitly approve of this situation, as they contribute labor while knowing what the license is.
I think the article clearly states the problem: It encourages contributors to stop contributing and become "takers", and once there are not enough makers, the product and entire ecosystem dies. A classic tragedy of the commons.
Except software is infinitely reproducible once written. There's no tragedy of the commons if the commons' resources are infinite.
"But code needs to constantly change and update all the time! Who's going to do that!?" -- well, maybe that's the problem. Maybe if we want to make a real, lasting contribution to OSS, without being stuck maintaining it forever, we should focus on making software that doesn't have to change. Code is basically math, and we get lots of use out of polynomials and complex numbers and Galois theory without anyone actively "maintaining" them. Galois died in 1832!
Maybe the software we're writing is trying to do too much; maybe we should stop expecting perpetual updates and maintenance of OSS? Maybe a small, focused, reliable library that does one thing really well and never gets updated is actually the perfect OSS?
> maybe we should stop expecting perpetual updates and maintenance of OSS? Maybe a small, focused, reliable library that does one thing really well and never gets updated is actually the perfect OSS?
This is something that took some getting used to when working with Clojure. You'll hear it a lot: a lot of libraries are simply "done". They do their thing and they do it well. The language itself prioritizes not making breaking changes, so there is rarely a need to "maintain" many libraries that were last updated years ago.
Habit still makes me pause when seeing it, but looking through the code will usually be reassurance enough or tell you that it was abandoned and needs work. There is also CLJ Commons[0] that takes useful/popular libraries that are done/mostly done and no longer maintained by the original maintainers. Usually the only changes are some performance updates with new JVM/Clojure features. Many of them are incredibly useful and haven't been updated in months or years.
It's definitely not a tragedy of the commons problem. Open source doesn't get used up by more people using it.
Takers actually have an inherent interest in supporting the open source software they use, in direct proportion to the long-term value they derive from it.
You actually need some countervailing force to have significant takers. E.g., with WordPress I think there's an acrimonious and competitive relationship between the for-profit company controlling the open source project and one of the big for-profit users of the project.
The tragedy for OSS here is that an OSS project is being used as a lever in a struggle between business competitors over who gets the dollars. (I suspect WordPress was always designed and intended to support a commercial enterprise, though, so this kind of thing was probably always going to be part of it.)
> once there are not enough makers, the product and entire ecosystem dies
Once there are not enough makers, willing to license products to corporations for free, the corporations either have to write their own software or die.
The “tragedy” as commonly interpreted is so wrong-headed and ill-framed. The problem at the heart of it was always the intermingling of private interests and common goods. The biggest problem with OSS is exactly that: private corporations can take those commons and get rich based on them.
So what is the tragedy? Really? It’s the tragedy of private interests. But it’s of course not named that because Economists championed The Problem. In turn we have to pretend that The Commons have a problem. Because Private Interests are axiomatic and are not to be questioned.