
Containers are cool for big things. But imagine every single program stuck inside a container, without proper management tools to track dependencies. Now imagine a Heartbleed-like scenario where you need to patch a big security hole ASAP. With containers you'd have a hard time.



"Container" is a rather vague term and the implementation details vary. For example, Guix containers (though not fully baked yet) do not have the downsides that we've come to expect from Docker and friends like being based on opaque images with no useful provenance, relying upon complicated overlay file systems, duplication of the same software amongst different container images, and not doing anything to solve the very crucial reproducible builds problem. Instead of using disk image layers, Guix can simply bind-mount the required package builds (we call them "store items") from the host into the container. One of pleasant consequences of this design is that software shared amongst N containers is on the file system in exactly one place. No need to rely on complicated file systems that may or may not actually deduplicate things depending on the circumstances.

Let's use Heartbleed as an example. With Guix it is easy to walk the dependency graph of any package or system configuration (container or otherwise) and check it for known-vulnerable software. Docker simply does not, and cannot, know the level of detail that Guix knows about the composition of software and systems.
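For instance (a sketch, with openssl standing in for the vulnerable package):

    # Check a package against the CVE database (run with no package
    # argument to check everything):
    guix lint --checkers=cve openssl

    # List the packages that would need to be rebuilt after patching it:
    guix refresh --list-dependent openssl

    # Or inspect the dependency graph directly (needs Graphviz):
    guix graph openssl | dot -Tpdf > openssl-deps.pdf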

Furthermore, the Guix tools that build containers use a functional, declarative programming interface. Contrast this with Dockerfiles, which are an imperative sequence of steps that mutate a disk image in a specific order. To use Docker's caching abilities, you have to be very careful about the order in which the directives in the Dockerfile are evaluated. In Guix, the order doesn't matter and the "cache" (it's really memoization) is used effectively whenever any component of the system has been built before.
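A quick way to see the memoization at work (hello is just an example package; the path is a placeholder):

    # The first build does the work and records the result in the store:
    guix build hello
    # prints something like /gnu/store/<hash>-hello-<version>

    # Asking again returns the same store path immediately. Nothing is
    # rebuilt, because the result is keyed on the inputs themselves
    # rather than on the order of build steps.
    guix build hello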

I've picked on Docker a lot here because it's the most popular, but you can swap in any other image-based container system and the same points more or less apply. Docker is great for making things easier right now, when the status quo package managers are weak and you need, on average, 3 to 4 package managers to deploy any given web application. But it's not the future. Docker is based on binaries, whereas functional package/configuration management is based on source code. We can do much better (and we are doing it already) by building on the functional package management paradigm.


Depends on how those "containers" interact with the system at large.

In the Windows world, it's common for each program to keep the libraries it needs under Program Files\Program_name\. Each program dumps its own copy of the library into its program home, simply because it cannot guarantee the library will be present on the system.

In Linux, we deal with having different major versions of a library side by side, but minor versions are clobbered within a major version. So when someone wrote their stuff against Python 2.7.2 and then 2.7.3 (a bugfix release) comes out, there is invariably something that breaks.

A container that encompasses "Python" and holds all of its versions would be a big deal. Same with other programs. They could still interact, but their install and environment would be encapsulated as one complete "Python", regardless of version or bugfix.


You are describing NixOS/Guix rather than containers.
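For example, Guix lets you keep several Python versions side by side, each in its own store item. A sketch (the version specs below are placeholders; use whatever the listing shows):

    # List the Python packages and versions Guix knows about:
    guix package -A python

    # Throwaway environments pinned to different versions
    # ("python@2.7" and "python@3.5" are placeholders):
    guix environment --ad-hoc python@2.7 -- python --version
    guix environment --ad-hoc python@3.5 -- python3 --version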


Yeah, I don't want to run every single program in a container... and I don't plan to. That said, containers have their advantages even in that security scenario. For example, I was once running into obscure segfaults that seemed related to some interaction between Sidekiq and a particular Ruby version. Once I figured out the fix, I just changed the top line of a couple of Dockerfiles to change their Ruby versions.

Also, containers are generally built on existing Linux distributions and use their packages. I still haven't figured out exactly what I want to do, but my vague future plan involves containers bootstrapped from Nix expressions. That's just to get a level of indirection that abstracts away NixOS so I can run app services on whatever distribution I like.


Depends on how stuff is implemented, no? You change a library, you rebuild all the packages that use that library, the user downloads package deltas, and the filesystem does deduplication so the same file is used in all the containers. That would only cause a problem with packages from third-party sources. But that's a possible problem now already, isn't it?
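That's essentially how the Guix/Nix store already works. A rough illustration (openssl stands in for the patched library):

    # List every store item that refers to this openssl build; they all
    # share the single on-disk copy instead of bundling their own.
    guix gc --referrers $(guix build openssl)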



