I work in a very iterative fashion; my favorite language is Ruby, which allows me to work as iteratively as my heart desires. Write one line of code, execute it, make sure it's doing exactly what I want it to be doing, then write another line of code.
In order for an iterative coding workflow to be fun, I can't be jumping all over the place - one line at a time only, please. But golang refuses to make that fun for me. I can turn the linter off, but the compiler is constantly, noisily, judging me, and the core golang team absolutely refuses to let me turn inconsequential errors off.
So I'm constantly having to jump around the file to make the compiler happy with its incessant complaints. I'm not using this variable, or I don't have any new variables declared, so I gotta remove the ':' from ':=' until it's time to add it back. Shut the heck up, compiler! These are linting concerns!
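To make the two complaints concrete, here's a minimal Go sketch - `readCount` is a hypothetical stand-in for whatever I'm actually iterating on:

```go
package main

import "fmt"

func main() {
	n, err := readCount()
	fmt.Println(n, err)
	// Comment out the Println above and the build fails with
	// "declared and not used" - a hard compiler error, not a
	// warning, and there is no flag to demote it.

	// And since n and err both already exist here, writing
	// `n, err := readCount()` again fails with "no new variables
	// on left side of :=", so the ':' has to go:
	n, err = readCount()
	fmt.Println(n, err)
}

// readCount is a hypothetical stand-in for code under iteration.
func readCount() (int, error) {
	return 42, nil
}
```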
I wish desperately for language makers to start caring about workflow. It never seems to rate any kind of concern at all. I wish I could just stop using languages that don't respect coders.
Most of these are fine, but the U (the Unix philosophy) is a massive mistake.
The Unix philosophy is fine for tiny systems, but absolutely terrible for building any nontrivial system, because it causes the total complexity of the system to grow super-linearly with the number of features - O(n^2), where n is the intrinsic complexity of the application (roughly, the number of features you want) and O(n^2) is the total complexity of the resulting implementation.
Why? Super quick breakdown: code has to be broken up into modules in order to prevent human brains from exploding. If the modules are too small, then inter-module communication complexity dominates (because you then need a large number of modules to implement your features); if the modules are too large, then intra-module implementation complexity dominates. The Unix philosophy requires erring on the side of too-small modules.
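(Back-of-the-envelope version, under the simplifying assumption that any pair of modules may end up interacting: splitting a system into k modules creates up to k(k-1)/2 = O(k^2) potential pairwise interfaces. If the modules are tiny, k grows roughly in proportion to n, which is where the O(n^2) above comes from. A sketch, not a measurement.)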
For empirical evidence: none of the non-trivial applications you use daily (bash, firefox, chrome, windows, linux, macos, blender, krita, gcc, cargo, llvm, npm, node, vim, emacs, vscode, sublime - whatever you want) are composed of a collection of Unix utilities, or the equivalent (flat program hierarchy consisting of thousands of call sites to hundreds of tiny functions).
The evidence is very clear: the Unix philosophy does not work for anything but small systems.
Contrary opinion: the Unix philosophy advocates a level of granularity larger than you posit - grep has a single responsibility, but is not a "tiny function"
The "do one thing and do it well" philosophy can be taken to bad extremes, but the tools you list mostly fit the philosophy as it was intended. Vim is a text editor, it does that one thing and does it well. Firefox is a web browser, it does that one thing and does it well, notwithstanding attempts to cram everything into web browsing.
This is factually wrong. Vim is emphatically not just a text editor - it is a full-fledged IDE, weighing in at over a million lines of code, with syntax-highlighting, HTML rendering, a completion system, complex multi-window and multi-buffer support, a multi-language plugin API, regular expression engine, complex key rebinding system, jump-to-definition functionality (tags system), and its own scripting language and interpreter for such.
A "text editor" in the Unix tradition edits text, nothing more - something like nano or pico, composed with other utilities for things like regexp matching (using grep externally) and shelling out to tmux or the X window manager for multi-window and tabs/split support. It certainly does not contain its own embedded scripting language.
Firefox is also definitely not a "does that one thing and does it well" tool - it's incredibly complex, at over 21 million lines of code, and contains its own cryptography, web rendering engine, PDF reader, developer console with its own IDE, remote development tools, tab system, UI toolkit, JS runtime, JS API implementation, and plugin API. It used to even be able to interact with FTP servers - and all this in a single, monolithic tool.
Firefox is exactly the opposite of the Unix philosophy in almost every way imaginable. Saying that it "does one thing and does it well" is about as true as saying that the Boost C++ libraries or the Windows OS "do one thing and do it well" - it's only true if you define that "one thing" to encapsulate all of the stuff that the tool happens to do, which is not the way that the Unix philosophy uses it.
Claiming that "x isn't relevant" is completely meaningless unless you describe how it isn't relevant.
Lines of code is a decent heuristic for complexity. An actually-used (so you aren't generating spurious lines of code) 21 Mloc program is more complex than a 21 Kloc one. So, it seems pretty darn relevant to me.
This sounds like the same discussion I always see about the SRP. I think the root of it is that whether you're following the SRP can always be true, depending on how you're looking at your code. If the one thing your code is supposed to do is "run the whole application" then putting everything into one file called "Application" sounds just fine. If the one thing your code is supposed to do is "add 2+2" then abstracting a module that only adds 2+2 seems just fine.
I always found the SRP too vague to be useful for actual decision making, it seems to just lead to arguments where everybody (and nobody) is correct. "Unix philosophy" sounds like it is just about as vague, or am I just really missing something that everyone else gets?
"SRP mandates that you separate the rendering and business logic of the component. As a developer, having these living in different places leads to an administrative chore of chaining identical fields together. The greater risk is that this may be a premature optimisation preventing a more natural separation of concerns emerging as the codebase grows, and as components emerge that “do one thing well” and that are better suited to the ___domain model of the problem space."
Vim is not only a text editor, it's a fully extensible and programmable IDE. It's only perceived as light in relation to the Eighty Megabytes And Constantly Swapping monster.
>Firefox is a web browser, it does that one thing and does it well
Can't you justify anything that way? You could as easily say that the rendering engine, JavaScript engine, etc. should all be split into separate programs that do that one thing well.
This logic is invalid. That many complex pieces of software exist does not prove that composition along different boundaries does not work at a larger scale.
I've built large platforms out of small, simple components. Works just fine.
The text editor I use daily is composed of small, simple components. Both internally, and by separating out many things most editors do internally: file selection, theme selection, and others are delegated to rofi, for example. Splitting into frames is delegated to my tiling wm. Separating this functionality out changes where the boundaries are and how they are reused, not whether they exist.
> That many complex pieces of software exist does not prove that composition along different boundaries does not work at a larger scale.
That logic that you stated is invalid; you strawmanned my argument.
My actual argument was that "none of the non-trivial applications you use daily (...) are composed of a collection of Unix utilities, or the equivalent", which still holds.
I guess that I could have added the qualifier "popular" to be more precise, but it's still roughly correct. Certainly, there does not exist a popular, featureful tool that is composed entirely out of Unix utilities or the equivalent.
> The text editor I use daily is composed of small, simple components.
Which, I notice, you didn't name. Why is that? Perhaps because it's not very popular, or because it's not a non-trivial piece of software?
The point stands - large systems aren't built using the Unix philosophy - or, none that are popular (I'm sure there exist large systems made using Unix tools, but no one uses them, because they're brittle and slow).
> My actual argument was that "none of the non-trivial applications you use daily (...) are composed of a collection of Unix utilities, or the equivalent", which still holds.
You made a logical leap to this:
> the Unix philosophy does not work for anything but small systems.
Which does not follow from the argument you made.
> I guess that I could have added the qualifier "popular" to be more precise, but it's still roughly correct. Certainly, there does not exist a popular, featureful tool that is composed entirely out of Unix utilities or the equivalent.
That is not relevant to the claim you made. You're now massively moving goalposts. Nothing in your original comment supports the notion that popularity was in any way relevant to your argument, which in effect was largely implying that doing a single thing means doing a small, trivial thing. The fact that you're suggesting that a criterion would be that such a tool need be composed entirely of "Unix utilities or the equivalent" suggests you miss the point of arguing for such tools. Part of the point is to have composable, reusable tools - not to reject using additional code when building something else, but to avoid having to write the common parts over and over.
> Which, I notice, you didn't name. Why is that? Perhaps because it's not very popular, or because it's not a non-trivial piece of software?
I didn't name it because it doesn't have a name. It's my personal editor. I was able to put it together as easily as I did exactly because applying the Unix philosophy meant I did not have to write an editor from scratch, but could avoid writing large chunks of what people usually write, and instead write just the bits that are custom to how I want things to work. This is a central part of the Unix philosophy: providing reusable tools allows you to build on them without reinventing the basics. This goes to the point above that the Unix philosophy is not about avoiding custom code, but about avoiding custom code for things that can be done with existing tools; and to maximise the ability to reuse tools, it helps when your tools have a focus and provide a reasonably complete feature set.
The point is not for the tools to be small, but for them to be cohesive, whatever size that requires.
Popularity is irrelevant to the claim I responded to. That you're now making an entirely different argument does not change that.
> The point stands - large systems aren't built using the Unix philosophy - or, none that are popular (I'm sure there exist large systems made using Unix tools, but no one uses them, because they're brittle and slow).
Here you are stating two entirely different arguments that are not remotely comparable.
A whole lot of large distributed systems are built using the Unix philosophy. I've built and worked on many over my career. The vast majority of my last 25 years have been spent building systems following those principles. That these platforms rarely get published is not proof of their non-existence.
> Which does not follow from the argument you made.
It absolutely does follow - the facts that you can't name a single popular non-trivial utility that is implemented using the Unix philosophy, and that none of the above applications I listed are implemented using it, are strong evidence that it doesn't "work".
Let me clarify what "work" means here: it means that it is a software design philosophy that is repeatably effective, scales to large systems, and quickly produces high-quality software. It clearly does not mean "you are able to design a piece of software using it", because you can design a large piece of software using unstructured assembly-language and get it to function, assuming an infinite amount of resources. A design strategy "working" means that it is effective.
So, the fact that there aren't any reasonably-popular counterexamples means that the Unix philosophy doesn't work for building medium-sized (and up) software systems, because if it worked, it could be repeatedly used to build them.
> Nothing in your original comment supports the notion that popularity was in anyway relevant to your argument
It should be pretty obvious that if someone builds a piece of software with the Unix philosophy but then doesn't tell anyone about it, then it can't be used as a piece of evidence for this argument, because neither of us will know about it.
It should also be pretty obvious that if someone claims to build a large piece of software with the Unix philosophy but nobody uses it, that's a strong indicator that the software is deficient in some way, and therefore that you can't claim that it indicates that the Unix design philosophy "works" unless you examine it carefully.
Therefore, it should be pretty obvious that popularity is relevant.
> largely implying that doing a single thing means doing a small, trivial thing
That's because it's true. If you're doing a big thing, then you're not doing "a single thing" anymore, because your large task is composed of smaller sub-tasks, which the Unix philosophy says need to be separate programs. Saying that Windows "does one thing" and that that thing is "general computing things" is pretty obviously wrong, for instance.
> The fact that you're suggesting that a criteria would be that such a tool need be composed entirely of "Unix utilities or the equivalent" suggests you miss the point of arguing for such tools.
The topic under discussion is whether the Unix philosophy can be productively applied to build large systems. If there doesn't exist a tool that is built entirely out of "Unix utilities or the equivalent", then it means that the Unix philosophy is just bad at that thing. It doesn't matter if I "miss the point", because I'm only interested in building large systems, and that's the only thing I'm interested in discussing.
> Part of the point is to have composable, reusable tools, not to reject using additional code when building something else, but to avoid having to write the common parts over and over.
"Write reusable software" is a basic and widely-accepted tenet of software engineering. The Unix philosophy is different, because it specifically requires that your modules be separate programs that communicate using text streams[1].
That is the thing about the Unix philosophy that's both unique and bad. I'm not arguing against modularity in software engineering - I'm arguing against the Unix philosophy's specific implementation of it.
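To be concrete about what that specific requirement means, here's a minimal sketch of driving that kind of composition from Go - the moral equivalent of `ls -l | grep go`, assuming both utilities exist on the host:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Two separate programs; the only interface between them is an
	// untyped byte stream, exactly as the philosophy prescribes.
	ls := exec.Command("ls", "-l")
	grep := exec.Command("grep", "go")

	pipe, err := ls.StdoutPipe()
	if err != nil {
		panic(err)
	}
	grep.Stdin = pipe
	grep.Stdout = os.Stdout

	if err := ls.Start(); err != nil {
		panic(err)
	}
	_ = grep.Run() // grep exits non-zero on no match; not fatal here
	_ = ls.Wait()
}
```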
Finally, you wrap up your comment with the fallacy that just because someone did build something with the Unix philosophy, the philosophy therefore works - see my earlier point that it doesn't "work" unless you can repeatedly and effectively apply the principle, and that if the artifacts aren't visible or popular, that's a strong indicator that they're deficient...and if all of the artifacts produced using a design philosophy are deficient, then what does that say about the philosophy?
(tl;dr - if your evidence for the effectiveness of Unix is that you've seen a bunch of unnamed, non-public systems that I oh so conveniently can't inspect myself, then you don't really have any evidence at all)
> It absolutely does follow - the facts that you can't name a single popular non-trivial utility that is implemented using the Unix philosophy, and that none of the above applications I listed are implemented using it, are strong evidence that it doesn't "work".
Doesn't work like that. This is a pointless discussion, as you're trying to infer something not even by lack of existence, but from lack of public popularity. Logic does not work this way.
> It should be pretty obvious that if someone builds a piece of software with the Unix philosophy but then doesn't tell anyone about it, then it can't be used as a piece of evidence for this argument, because neither of us will know about it.
That is true but, ignoring the fact that I do know of such examples, it is irrelevant: arguing that something is not possible requires justifying why it is not possible, not pointing to a purported absence of evidence that it is possible.
> It should also be pretty obvious that if someone claims to build a large piece of software with the Unix philosophy but nobody uses it, that's a strong indicator that the software is deficient in some way, and therefore that you can't claim that it indicates that the Unix design philosophy "works" unless you examine it carefully.
Your conclusion does not follow from your argument.
> That's because it's true. If you're doing a big thing, then you're not doing "a single thing" anymore, because your large task is composed of smaller sub-tasks, which the Unix philosophy says that need to be separate programs. Saying that Windows "does one thing" and that that thing is "general computing things" is pretty obviously wrong, for instance.
This, to me, just demonstrates that you don't understand the point of the argument. The point is about cohesion. This kind of pedantic interpretation of "a single thing", taken to its full extent, would mean no programs longer than a single instruction. Clearly that is not the point. The fact that the set of generally accepted standard Unix tools includes multiple Turing-complete tools clearly illustrates that the intent is not this kind of ultimate reductionist thinking.
> The topic under discussion is whether the Unix philosophy can be productively applied to build large systems. If there doesn't exist a tool that is build entirely out of "Unix utilities or the equivalent", then it means that the Unix philosophy is just bad at that thing.
Your conclusion does not follow from your argument.
> That is the thing about the Unix philosophy that's both unique and bad. I'm not arguing against modularity in software engineering - I'm arguing against the Unix philosophy's specific implementation of it.
Your original argument was premised on arguing that communication between components would cause a combinatorial explosion. That's not something affected by the specific Unix approach.
In other words: you're again trying to change the argument into something very different from your original claim.
> Finally, you wrap up your comment with the fallacy that just because someone did build something with the Unix philosophy, the philosophy therefore works - see my earlier point that it doesn't "work" unless you can repeatedly and effectively apply the principle, and that if the artifacts aren't visible or popular, that's a strong indicator that they're deficient...and if all of the artifacts produced using a design philosophy are deficient, then what does that say about the philosophy?
Again, your conclusions do not follow from your arguments.
> (tl;dr - if your evidence for the effectiveness of Unix is that you've seen a bunch of unnamed, non-public systems that I oh so conveniently can't inspect myself, then you don't really have any evidence at all)
This would be relevant if you'd supported your arguments in a logically sound way so that there was something meaningful to refute. As it stands, my direct personal experience serves to explain why I have no reason to believe your unsupported arguments, as I've seen first hand that you're wrong. Whether or not you choose to believe that is of no consequence to me.
> you're trying to infer something not even by lack of existence, but from lack of public popularity. Logic does not work this way.
The world absolutely does work that way - if there's no popular public example of a large system built with an old, well-known, widely-advocated design philosophy in a field with millions of engineered artifacts, that's extremely strong evidence that that design philosophy doesn't work for large systems.
This is not physics, where "absence of evidence is not evidence of absence", nor is it logic, where you can't affirm the consequent. This is architecture, where you have a bunch of design philosophies that you can try out, and if there's a well-known philosophy that's been around for a period of time, with a large and vocal following, and yet there isn't a single public artifact of its success (for large systems), that's a very strong signal that it simply does not work (for large systems).
> ignoring the fact that I do know of such examples
...which you've presented no evidence for, so I straight-up don't believe you (and, any other HN readers reading this post, I would encourage you to think very hard about whether they're worth believing without providing any evidence whatsoever).
> this is irrelevant as arguing that it is not possible requires justifying why it is not possible
When did I ever argue that it was not possible? This is a straw-man argument.
> not pointing to your purported absence of evidence that it is possible
Weasel-words. I provided strong evidence of absence, not "purported".
> Your conclusion does not follow from your argument.
You know the pyramid of arguments? You're around the middle, at "stating the opposing case without providing evidence".
> Clearly the fact that tools generally accepted as standard unix tools includes multiple Turing-complete tools illustrates that the intent is not this kind of ultimate reductionist thinking.
It's obviously not that kind of ultimate reductionist thinking - that's not what I'm arguing. I'm arguing that the Unix philosophy is not useful for building large systems, given whatever the most commonly-held definition of "the Unix philosophy" is, regardless of whether I defined it right or not. In a previous comment, you said:
> Part of the point is to have composable, reusable tools, not to reject using additional code when building something else, but to avoid having to write the common parts over and over.
...and yet, you still can't name a large, popular system that is made with components that fit those criteria, your own criteria, even as you nitpick mine!
> Your conclusion does not follow from your argument.
Stating the opposite case without providing evidence.
> Again, your conclusions do not follow from your arguments.
And here.
> This would be relevant if you'd supported your arguments in a logically sound way so that there was something meaningful to refute.
Here I'm going to point out that you claimed that my arguments were invalid without actually providing any reasoning or explanation for why you thought so, and then point out that you incorrectly tried to apply logic to a ___domain where it doesn't apply (architecture/design).
To future readers: notice the consistent pattern of claims without evidence, strawmanning of my points, and contradiction without explanation, and ask yourself if you want to believe this random person on the internet who claims, without any evidence whatsoever, that they have seen some mythical systems out there created using the Unix philosophy, while not a single public example exists.
Originally it's about avoiding the quadratic explosion of the number of things that must be implemented. The method is to replace M * N separate pieces of code with
- a single extra X thing
- M + N pieces of code
- a way to compose these M + N pieces of code to get the M * N you wanted in the first place
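Go's io.Reader/io.Writer pair is arguably a textbook instance of this move; a minimal sketch, with the interface pair playing the role of the "single extra X thing":

```go
package main

import (
	"compress/gzip"
	"io"
	"os"
	"strings"
)

func main() {
	// One of M possible sources (files, sockets, buffers, ...),
	// each implemented once against io.Reader.
	var src io.Reader = strings.NewReader("hello, composition\n")

	// One of N possible sinks (compressors, encoders, stdout, ...),
	// each implemented once against io.Writer.
	gz := gzip.NewWriter(os.Stdout)
	defer gz.Close()

	// The composition step: any source into any sink, giving
	// M*N combinations from M+N implementations.
	if _, err := io.Copy(gz, src); err != nil {
		panic(err)
	}
}
```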
I think what you describe is preferable and is how I like to code in general, but then I find that the complexity comes from how these things communicate. It's not actually true in general that the whole is greater than the sum of the parts, but it's good to have a system that possesses this property.
But that complexity is even larger without this encapsulation into components. Reducing the complexity by reducing the potential communications surface is a large part of the point.
Modules can be built compositionally from smaller modules, even in the *nix shell - that's what a shell script is! Similarly, libraries can depend on other lower-level libraries. No flat structure is required, hence no huge increase in either implementation or communication complexity.
Great article! The properties described are definitely something I look for in the software I create. I can also see a lot of these properties in our open-source project, especially Domain-based. The work we are doing goes a long way toward closing the gap between the code, the developers, and the non-technical stakeholders.
As Dan North is the creator of BDD, I have great respect for his work.
I like the idea of aligning on properties. It does not exclude valuing certain principles as well.
However, as with all abstract "code quality" properties (modularity, predictability, robustness, encapsulation, etc.), how can you quantify or measure how modular an application is, or how predictable a code base is?