
It's cliché, but reading The Art of Unix Programming really did give me a good sense, early on, of how to walk this line carefully. Unix programs are high quality - but they're also, ideally, small, and written with an eye to composing with the rest of the ecosystem. The best architecture usually sits somewhere in the middle, and often the best first draft is a simple series of pipes.

https://www.catb.org/~esr/writings/taoup/html/




(Honest question)

What is the difference between this and the microservice architecture that gets a lot of hate here?


I'm not the person you're replying to, but here's my take: even though the two look conceptually similar, Unix programs are just a lot simpler. All programs run on the same machine, they read their input, execute, produce output which is piped to the next program, and terminate. Want to change something? Change one of the programs (or your command line), run the command again, that's it.
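
For instance, here's what that edit-and-rerun loop looks like in practice (the file name and data are made up, just to illustrate):

    # most common values in column 2 of a CSV
    cut -d, -f2 orders.csv | sort | uniq -c | sort -rn | head

    # requirement change: group case-insensitively. Swap in one stage, rerun.
    cut -d, -f2 orders.csv | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head

No images, no deployment, no API versioning: the "redeploy" is pressing Enter again.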

Microservices are a lot more complicated. You'll need to manage images for each of the services, a fleet of servers on which the services will be deployed, an API for services to communicate together, etc. In many (most?) cases, a monolith architecture will be a lot simpler and work just fine. Once you reach the scale at which you'd actually benefit from a microservice architecture (most companies won't ever reach this scale), you can start hiring devops and other specialists to deal with the additional complexity.

What actually gets hate, I think, is not microservices themselves, but the fact that microservices are often used in contexts where they are completely unnecessary, purely because of big tech cargo culting.


We only think of Unix programs as simple because we have many more abstractions nowadays. But you should compare a Unix program with DOS programs of the time (probably CP/M too, but I never wrote those myself): poking directly at the hardware, using segmented memory, dealing with interrupts. The idea that a program should be well behaved, should accept things as inputs, should push outputs, and should have a virtual address space are actually huge abstractions over what could just be a block of code run on a bare OS. I'm not saying that microservices are better than monoliths, just that Unix programs aren't as simple as we think they are in a world where we're managing servers like cattle and not like pets.


    > should have a virtual address space
I'm pretty sure that most of the GNU POSIX command line tools were written before virtual address spaces (VAS) were common. And, as I understand it, the VAS is hidden from the programmer behind magical malloc().


That's a great question. Some might say it's because of the network, which makes microservices messy, and so on. But I don't think so: from what I remember of Plan 9 (the OS, Unix's successor), Rob Pike wanted to make it so that there is no difference between an object on the network and a local one. In the Unix philosophy, things have the same interface, so it's easy to communicate. For microservices, that interface would be a REST API, which is specific to the network. I honestly see a direct link between these ideas. Unix projects a much nicer, simpler image here, but nonetheless they seem to overlap a lot. The result in both cases is a hard-to-debug network of small utilities working together. The saving grace for Unix is that you are mostly using stable tools (like ls and cat), and everything is on your system, so you don't get to experience the pain of debugging 5 different half-working tools.
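
The closest everyday Unix approximation of that transparency is probably a network filesystem: remote objects get the same file interface as local ones. A sketch, assuming sshfs is installed (the host and paths are made up):

    # mount a remote directory so it looks local
    sshfs alice@example.com:/var/log /mnt/remote-logs

    # ordinary local tools now work on network objects, unchanged
    grep -c ERROR /mnt/remote-logs/app.log

    # unmount when done (Linux/FUSE)
    fusermount -u /mnt/remote-logs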


Everyone wants to make network objects the same as local objects. Nobody's ever succeeded.


This is the first time I've heard that. Can you give me some other examples?


CORBA

https://en.wikipedia.org/wiki/Common_Object_Request_Broker_A...

This predated REST. And the rest is history.


Microservices provide encapsulation and an API interface, but they are not composable the way Unix programs are when called by, e.g., a Unix shell on a Unix OS.

Either microservice A calls into microservice B or there's a glue service calling into both A and B. Either way there's some deep API knowledge and serious development needed.

Compare with `ls -1 | grep foo | sed s/foo/bar/g` (admittedly trivial, but even that is orders of magnitude less complex than web APIs), plus exit codes, self-served (--help et al.) or pluggable (man) docs, the other "things are file-ish" aspects, signals (however annoying or broken they can be), and whatnot. There's a whole, largely consistent operating system, not just in the purely software sense but in the "how to operate the thing" sense, that enables composition. The closest thing in HTTP land would be REST, maybe, and even that is not quite there.
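
Just to make the contrast concrete, here's roughly what the same shape looks like in HTTP land (the service URLs and JSON fields are entirely made up, and this assumes curl and jq):

    # the Unix version: three tools compose with zero coordination
    ls -1 | grep foo | sed s/foo/bar/g

    # HTTP land: every hop needs each service's endpoint, verb, and payload shape
    curl -s http://list-svc.internal/v1/items \
      | jq '[.items[] | select(.name | contains("foo"))]' \
      | curl -s -X POST http://rename-svc.internal/v1/rename \
             -H 'Content-Type: application/json' -d @-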


>> Microservices provide encapsulation and an API interface, but they are not composable the way Unix programs are when called by, e.g., a Unix shell on a Unix OS.

Because the Unix programs all use pipes as their interface. When you simplify and standardize the "API", composition becomes easy. Microservices are more like functions or modules, each running as a separate process - if you use the same language for the services and the glue, you can just compile them all together and call it a program, right?
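
In shell terms (a contrived sketch, with made-up names), the "compile them together" version of two tiny services is just two functions in one script, glued by a pipe:

    #!/bin/sh
    # two would-be "services" as plain functions in one program
    extract()   { cut -d, -f1; }      # service A: pull out the first field
    normalize() { tr 'A-Z' 'a-z'; }   # service B: lowercase everything

    # the "glue" is a pipe inside a single program
    extract < users.csv | normalize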


My take:

Unix utilities are standalone and used as needed for specific tasks. They hardly change, have no data persistence, usually no config persistence other than runtime params, and don't know about or call each other directly.

Microservices are moving parts in a complex evolving machine with a higher purpose that can and do affect each other.


The problem is that they are microservices in name only. Where a Unix utility is a few hundred or a thousand lines of C code in its entirety, a microservice will depend on a complex software stack, with a web server, an interpreter, an interface with a database, and so on.

It's easy to forget this complexity, but it comes at a cost in terms of performance and, above all, technical debt. The microservice will probably have to be rewritten in 5 years' time because its software stack will be obsolete, whereas some Unix utilities are over 40 years old.


In theory, only the network boundary, which allows you to independently scale different parts of the system.

In practice, it's a way of splitting work up between teams. It also makes it easier to figure out who to blame when something breaks, and it's a convenient unit for managerial politics.

So because manager X "owns" microservice Y, it's going to stay around so that they have something to manage. Over time, the architecture mirrors the organization (Conway's law).


If somebody created a complex system of one hundred small Unix utilities that all talk to each other over pipes, I am sure it would get, and deserve, a lot of hate. Unix utilities are nice for very small, simple things, but there is a limit.


Composability.

This does not mean "it's a small component in a dedicated pipeline". It means "this is a component that's useful in many pipelines".
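
Contrived examples, but the point is that the same component slots into pipelines it was never designed for:

    # grep was not written with any of these pipelines in mind, yet fits all of them
    ps aux | grep nginx
    git log --oneline | grep fix
    dmesg | grep -i usb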


Microservices require Kafka (or a Kafka equivalent)


You can do something like 0MQ, but you still need something like etcd to coordinate configuration, i.e. to know where service-x is.
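
A minimal sketch of that coordination step using etcd's v3 CLI (the key layout and address are made up):

    # service-x registers where it can be reached
    etcdctl put /services/service-x 10.0.0.17:5555

    # a client looks it up before opening its 0MQ socket
    etcdctl get /services/service-x --print-value-only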


That's a really good book. Thanks for mentioning it.



