The sad state of sysadmin in the age of containers (vitavonni.de)
82 points by Xam_Orpheus on March 12, 2015 | 13 comments



Yes, we live in the generation of containerised "apps" and especially "curl blah | sh". Add to the mix a hip and clueless web developer and there's your secret sauce for disaster.

Indeed it is a sad, sad state and there is no cure in sight. Perhaps a whip...


This really does seem to echo my feelings as of late about the recent Linux-related debacles.

Sysadmins and "power users" are being sidelined by developers who don't want them interfering with their precious creations. Oh, and users are "children" best locked in padded cells.

Writing that, I find myself thinking of the Greek pantheon and their supposed attitudes towards humanity.


> Nobody seems to know how to build hadoop from scratch

That's not true. It's fairly well documented in FreeBSD ports.

https://svnweb.freebsd.org/ports/head/devel/hadoop2/Makefile...


It seems the core of this argument comes down to the diversity and complexity of today's software environment. That's understandable, but there are clear reasons why it is this way, and equally apparent upsides. Without elaborating too much: few people build software the way they did in the 80s anymore; most software glues together libraries that others have written. This leads to more complicated build processes.

While this is true, I think on balance the author's case is unjustified. First of all, it's not like you could just type 'make' to build something back in the day either, unless you were compiling a simple library that was only supported on one platform. Secondly, sysadmins should not be in the business of building source code from scratch. What is the security upside of that? Are sysadmins expected to audit the code?
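
Even a "well-behaved" autoconf-style build was several commands, with flags that varied per platform (the flags below are purely illustrative):

    ./configure --prefix=/usr/local --with-ssl   # probe the platform, hope it works
    make                                         # compile
    sudo make install                            # scatter files across the filesystem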

I believe containerization is an incredibly exciting technology to have hit ops, and old-school sysadmins should emerge from their nostalgic depression, embrace it, and adapt to change. I have been using everything from CFEngine to Puppet to Ansible over the past decade. Containers are an amazing breakthrough, because they standardize the build process. We have our own container repository that essentially acts as a private package repository. The contract of what the container is allowed to do to the outside world and the host environment is clear. As a result, the hosts are spartan, simple, and secure. This is miles from the old world of large servers where dozens of services shared the same file system.
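
As a rough sketch of that workflow (registry.example.internal stands in for our private registry, and myservice is a made-up image name):

    docker build -t myservice:1.4 .                        # build the image from a Dockerfile
    docker tag myservice:1.4 registry.example.internal/myservice:1.4
    docker push registry.example.internal/myservice:1.4    # publish to the private registry
    docker run -d registry.example.internal/myservice:1.4  # any host runs the exact same artifact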

I am so happy that we finally live in the age of containerization. I would never wish to go back to that old world the author describes.


Containers have been in Solaris and FreeBSD forever... and before containers you could use virtual machines. Nobody has a problem with containers...

The problem is that you download and run arbitrary, unverified binary code on your system by design, e.g. from the Docker registry or when installing a major "big data" project...

It's opaque, and checking integrity is a lot of work; updating becomes a nightmare when you have randomly downloaded 150 jars, of which 1 or 2 have a serious security flaw. I'm not even talking about attacks from secret services or skilled attackers.

I've read it as a rant in favor of establishing something like signed apt repositories, with public-key signatures, for the world of Java/Maven and containers. And that is a very valid point: that infrastructure just isn't there.
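
To make the comparison concrete (the package and jar names below are hypothetical):

    # Signed apt repository: verification is automatic, and failure aborts
    sudo apt-get update               # checks Release.gpg against trusted keys
    sudo apt-get install somedaemon   # refuses packages that fail verification

    # Pile of fetched jars: verification, if any, is by hand
    sha256sum some-library-1.2.3.jar  # compare by eye against a web page,
                                      # 150 times, after every update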


> Containers are in Solaris

Yes.

> and FreeBSD forever..

Sort of...

> before containers you could use virtual machines.

You could, if you were sufficiently masochistic. Container layers improve the situation so much that they make a qualitative difference over VMs.
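
As a crude illustration of why (image name made up): the layer cache means a rebuild reuses every unchanged step, where a VM image would be reprovisioned wholesale.

    docker build -t myapp .   # first build: every instruction runs
    docker build -t myapp .   # rebuild: unchanged steps report "Using cache"
    docker history myapp      # inspect the resulting layers and their sizes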

> The problem is that you run and download arbitrary unverified binary code in your system by design. E.g. in the docker registry or when installing a major "bigdata" related project...

Is this a bigger problem with containers than with packages from, e.g., an apt repository? I'm not sure whether Docker supports image signing at the moment, but I find it hard to believe this is a problem with containers in general.

> It's opaque, and checking integrity is a lot of work; updating becomes a nightmare when you have randomly downloaded 150 jars, of which 1 or 2 have a serious security flaw.

I think the author mixes two separate points: (1) The complex state of modern software development, which is valid but is like complaining about the existence of violence, and (2) How this prevents him from compiling from source, which is plainly bogus.

Reading the piece favorably, I can sympathize with the plight of sysadmins tasked with ensuring security amid modern software development, with its many small libraries and diversity of tools. Frankly, this should not be on sysadmins' shoulders; it should be the responsibility of developers. Sysadmins should instead be responsible for securing the environment around what developers build. And this is made easier by deploying containers, because they provide a better shrink-wrap around the code delivered from dev. In fact, Dockerfiles in particular offer a much more reasonable contract between dev and ops than the ambiguous one that existed previously, where hosts had to be polluted with all these Maven packages.
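
As a sketch of that contract (the image and jar names are invented): everything the service needs is declared in one file, instead of being hand-installed onto the host:

    cat > Dockerfile <<'EOF'
    FROM java:8
    COPY target/myservice.jar /opt/myservice.jar
    EXPOSE 8080
    CMD ["java", "-jar", "/opt/myservice.jar"]
    EOF
    docker build -t myservice .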

So while I can understand the author's frustrations, I think the diagnosis is flawed.


I think the make example was aimed more at the sprawling number of specialized "downloaders", starting with Perl's CPAN and moving on to Ruby gems and similar.

It was not so much that there was a unified command, but that you could reasonably expect either that the software was self-contained or that its dependencies were spelled out somewhere. Now it is "let cpan/gems/whatever grab all the magic pixie dust while you get coffee".
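
Roughly the difference between (gem name made up):

    gem install somegem       # resolve and fetch whatever the index serves today

and pinning and vendoring everything up front:

    bundle package            # cache every dependency into vendor/cache
    bundle install --local    # later installs use only the vendored copies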

Sure, that works fine for a lone dev at their desktop, but in a production environment handling perhaps millions of users and subject to a mass of regulations?


> To build Apache Bigtop, you apparently first have to install puppet3. Let it download magic data from the internet. Then it tries to run sudo puppet to enable the NSA backdoors (for example, it will download and install an outdated precompiled JDK, because it considers you too stupid to install Java.)

https://lh3.googleusercontent.com/-O0Ztn2Fhj4A/AAAAAAAAAAI/A...


Spending a decade as a sysadmin then another 6 as a developer:

- cmake, autoconf, etc. provide consistency for the admin building OS packages. That said, in this case Hadoop should already come as an rpm/deb package (a real one that owns its own files and can be verified after installation; see the sketch after this list), rather than one that pulls things in from the internet.

- build tools and language-specific package managers give the developer full-fidelity access to that ecosystem's packages

Neither is right or wrong. The main thing is to delineate who's responsible for managing what - something containers help with.
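
For the verification point in the first bullet (assuming the distro actually ships a hadoop package; the name is illustrative):

    rpm -V hadoop    # rpm systems: flags files whose checksum, size, or perms changed
    debsums hadoop   # deb systems: compares installed files against recorded md5sums
                     # (debsums ships as its own package)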


"cmake, autoconf etc provide consistency for the admin building OS packages."

I have to laugh any time I see the words cmake and consistency in the same sentence.


A hollow, bitter, laugh. Followed by tears.


and blood.


Why?



