As a relatively new Linux user, I often find the "versioning" of bundled system utilities also to be a bit of a mess, for lack of a better word.
A classic example, at least from my experience, is `unzip`. On two of my servers (one running Debian and the other an older Ubuntu), neither of their bundled `unzip` versions can handle AES-256 encrypted ZIP files. But apparently, according to some Stack Overflow posts, some distributions have updated theirs to support it.
So here is what I ran into:
1. I couldn't easily find an "updated" version of `unzip`, even though I assume it exists and is open source.
2. To make things more confusing, they all claim to be "version 6.00", even though they obviously behave differently.
3. Even if I did find the right version, I'm not sure if replacing the system-bundled one is safe or a good idea.
So the end result is that some developer out there (probably volunteering their time) added a great feature to a widely used utility, and yet I still can’t use it. So in a sense, being a core system utility makes `unzip` harder to update than if it were just a third-party tool.
I get that it's probably just as bad if not worse on Windows or macOS when it comes to system utilities. But I honestly expected Linux to handle this kind of thing better.
(Please feel free to correct me if I’ve misunderstood anything or if there’s a better way to approach this.)
In the specific case here, 7z is your friend for all zips and compressed files in general, not sure I've ever used unzip on Linux.
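For example, extracting an AES-256 encrypted archive with 7z looks something like this (assuming the p7zip package is installed; the archive name is made up):

  # 7z prompts for the password, or you can supply it with -p (no space after -p)
  7z x encrypted-archive.zip
  7z x -p"$ZIP_PASSWORD" encrypted-archive.zip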
Related to that, the Unix philosophy of simple tools that each do one job and do it well also applies here a bit. A more typical workflow would be one utility to tarball something, another to gzip it, and finally another to encrypt it, leading to file extensions like .tar.gz.pgp, all from piping commands together.
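A rough sketch of that kind of pipeline (hypothetical paths; gpg -c does symmetric encryption and prompts for a passphrase):

  # create: tar the directory, compress the stream, then encrypt it
  tar -cf - ~/documents | gzip | gpg -c -o documents.tar.gz.pgp
  # restore: decrypt, decompress, and untar, in the reverse order
  gpg -d documents.tar.gz.pgp | gunzip | tar -xf -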
As for versioning, I'm not entirely sure why your Debian and Ubuntu installs both claim version 6.00, but that's not typical. If this is for a personal machine, I might recommend switching to a rolling release distro like Arch or Manjaro, which at least give up-to-date packages on a consistent basis, tracking the upstream version. However, this does come with its own set of maintenance issues and an increased expectation of managing it all yourself.
My usual bugbear complaint about Linux (or rather OSS) versioning is that people are far too reluctant to declare v1.00 of their library. This leads to major useful libraries and programs being embedded in the ecosystem while only reaching something like v0.2 or v0.68 and staying that way for years on end, which can be confusing for people just starting out in the Linux world. They are usually very stable and almost feature complete, but because they aren't finished to perfection according to the original design, people hold off on that final v1 declaration.
Info-Zip Unzip 6.00 was released in 2009 and has not been updated since. Most Linux distros (and Apple) just ship that 15-plus-year-old code with their own patches on top to fix bugs and improve compatibility with still-maintained but non-free (or less-free) competing implementations. Unfortunately, while the Info-Zip license is pretty liberal when it comes to redistribution and patching, it makes it hard to fork the project; furthermore, anyone who wanted to do so would face the difficult decision of either dropping or trying to continue to support dozens of legacy platforms. Therefore, nobody has stepped up to take charge and unify the many wildly disparate mini-forks.
The "Unix Philosophy" is a bankrupt, romanticized, after-the-fact rationalization to make up excuses and justifications for ridiculous ancient vestigial historic baggage like the lack of shared libraries and decent scripting languages, where you had to shell out THREE heavyweight processes -- "[" and "expr" and a sub-shell -- with an inexplicable flurry of punctuation, `[ "$(expr 1 + 1)" -eq 2 ]`, just to test if 1 + 1 = 2, even though the processor has single-cycle instructions to add two numbers and test for equality.
??? This complaint seems more than 20 years too late
Arithmetic is built into POSIX shell, and it's universally implemented. The following works in basically every shell, and starts 0 new processes, not 2:
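  $ bash -c '[ $((1 + 1)) = 2 ]; echo $?'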
20 years doesn't even get you back to the last century; it's more like 48 years, since 1977 when Bourne wrote sh. As one of the authors of the Unix Haters Handbook, published relatively recently in 1994, and someone who's used many versions of Unix since the 1980s, of course I'm fully aware that those problems are a hell of a lot more than 20 years old, and that's the whole point: we're still suffering from their "vestigial historic baggage", arcane syntax and semantics originally intended to fork processes and pipe text to solve trivial tasks instead of using shared libraries and machine instructions to perform simple math operations, and people are still trying to justify all that claptrap as the "Unix Philosophy".
Care to explain to me how all the problems of X-Windows have been solved so it's no longer valid to criticize the fallout from its legacy vestigial historic baggage we still suffer from even today? How many decades ago did they first promise the Year of the Linux Desktop?
The X-Windows Disaster: This is Chapter 7 of the UNIX-HATERS Handbook. The X-Windows Disaster chapter was written by Don Hopkins.
Why it took THREE processes and a shitload of context switches and punctuation that we are still stuck with to simply test if 1 + 1 = 2 in classic Unix [TM]:
[ "$(expr 1 + 1)" -eq 2 ]
Breakdown:

- expr 1 + 1 -- an external program used to perform arithmetic.
- $(...) (command substitution) -- runs expr in a subshell to capture its output.
- [ ... ] -- in early shells, [ (aka test) was also an external binary.

It took THREE separate processes because:

- Unix lacked built-in arithmetic.
- The shell couldn't do math.
- Even conditionals ([) were external.
- Everything was glued together with fragile text and subprocesses.
All of this just to evaluate a single arithmetic expression by ping-ponging in and out of user and kernel space so many times -- despite the CPU being able to do it in a single cycle.
That’s exactly the kind of historical inefficiency the "Unix Philosophy" retroactively romanticizes.
> The X-Windows Disaster: This is Chapter 7 of the UNIX-HATERS Handbook. The X-Windows Disaster chapter was written by Don Hopkins.
This gave me a big laugh, I love the UNIX-haters Handbook despite loving UNIXy systems. Thank you for decades of enjoyment and learning, especially in my late-90s impressionable youth.
UNIX is dead, no one cares anymore. It's just Linux now. Your examples and complaints are both outdated and not in good faith.
For all the weirdos smashing that downvote button: How about you name me some UNIX distros you have run in the past year? Other than Linux, OpenBSD (~0.1% market share btw) and ostensibly MacOS (which we all know has dropped any pretense of caring to be UNIX-like many years ago), that is.
macOS is absolutely Unix, and a lot more like mainstream Unix than many of the other vastly different Unix systems of the past and present, so exactly when did the definition of Unix suddenly tighten up so much that it somehow excludes macOS? And how does your arbitrary gatekeeping and delusional denial of the ubiquity and popularity of macOS, and ignorance of the Unix 03 certification, the embedded, real time, and automotive space, and many other Unix operating systems you've never heard of or used, suddenly change the actual definition of Unix that the rest of the world uses?
Have you ever even attended or presented at a Usenix conference? Or worked for a company like UniPress who ports cross platform software to many extremely different Unix systems? Maybe then you'd be more qualified to singlehandedly change the definition of the word, and erase Unix 03 certification from existence, and shut down all the computers and devices running it, but you're not. Who do you think you are, one of Musk's DOGE script kiddies? Because you sound as overconfident and factually incorrect as one.
>The "no true Scotsman" fallacy is committed when the arguer satisfies the following conditions:
>1) not publicly retreating from the initial, falsified a posteriori assertion: CHECK
>2) offering a modified assertion that definitionally excludes a targeted unwanted counterexample: DOUBLE CHECK
>3) using rhetoric to signal the modification: TRIPLE CHECK
macOS, AIX, HP-UX, Solaris (still technically certified), Inspur K-UX, EulerOS, etc.
POSIX-compliant and Unix-alike OSes (e.g., FreeBSD, QNX, etc.) are very active in many common domains (networking, firewalls, embedded, industrial).
Mission-critical infrastructure, telco, financial systems, military/spacecraft, automotive, and embedded still widely use non-Linux Unix or Unix-like systems.
QNX in cars, AIX in banks, Illumos in storage, RTEMS in space systems.
You have no clue what you're talking about, you're completely incapable and afraid to respond to any of my points, and you've been just making shit up and throwing around random buzzwords you don't understand for quite some time now, incoherently unable to complete a sentence, like you're on ketamine. Nobody's falling for any of it. All you've done is make ad hominem attacks, no true Scotsman defenses, move the goalposts, then hypocritically accuse other people of doing exactly what you just did: textbook psychological projection. Every single leaf of this argument is you unable to take the L, counter any of the valid arguments other people have made, and implicitly admitting defeat that you can't defend anything you said or counter anything anyone else has.
macOS is certified Unix, widely used and extremely popular, and there's absolutely nothing you can do or say that will change that fact, and everyone knows it.
I'll update my examples when your examples of how it's been fixed don't use the same arcane syntax and semantics as the 48 year old Bourne shell. That's the whole point, which you're still missing.
> $ bash -c '[ $((1 + 1)) = 2 ]; echo $?'
Not even Perl uses that much arcane punctuation to test if 1 + 1 = 2. As if [] isn't enough, you've got to throw in two more levels of (()), plus enough grawlix profanity for a Popeye comic strip. And people complain Lisp has too many parens. Sheez.
I prefer ITS DDT (aka HACTRN), with its built-in PDP-10 assembler and disassembler, that lets you do things like your login file customizing your prompt in assembly code to print the time by making system calls, without actually spawning any sub-jobs to merely print the time:
..PROMPT
holds the instruction which DDT uses to type out a "*".
You can replace it with any other instruction.
To use "%" instead of "*", deposit $1#%$> in that ___location
($> to avoid clobbering the opcode without having
to know what it is)
If you have to use arcane syntax and grawlix profanity, you should at least have direct efficient access to the full power of the CPU and operating system.
I keep submitting PR’s to get my assembler extensions in Fish and ZSH but so far no avail. Ideally all scripting should be done in single-clock-cycle assembly statements.
I mean it makes write-only languages like Perl look like beautiful prose but it’s hard to argue about efficiently setting the 20 environment variables used by my terraform jobs with a mere 20 clock cycles. It may seem silly but every clock cycle truly matters.
I love "the Unix Haters Handbook", just as I love "Worse is Better", but this ship sailed 30 years ago, as you mentioned. Your "old man yelling at clouds" rant reminds me of Bjarne Stroustrup's quip, "there are two types of languages, those everyone complains about and those nobody uses". I mean, run your nice, coherent, logical LISP machine or Plan9 system or whatever it is that you prefer, but let us enjoy our imperfect tools and their philosophy :)
The Unix philosophy really comes down to: "I have a hammer, and everything is a nail."
ESR's claptrap book The Art of Unix Programming turns Unix into philosophy-as-dogma, where flaws are reframed as virtues. His book romanticizes history and ignores inconvenient truths. He's a self-appointed and self-aggrandizing PR spokesperson, not a designer, and definitely not a hacker, and he overstates and over-idealizes the Unix way, as well as his own skills and contributions. Plus he's an insufferable unrepentant racist bigot.
Don't let historical accident become sacred design. Don’t confuse an ancient workaround with elegant philosophy. We can, and should, do better.
Philosophies need scrutiny, not reverence.
Tools should evolve, not stagnate.
And sometimes, yelling at clouds stirs the winds of change.
>In a 1981 article entitled "The truth about Unix: The user interface is horrid" published in Datamation, Don Norman criticized the design philosophy of Unix for its lack of concern for the user interface. Writing from his background in cognitive science and from the perspective of the then-current philosophy of cognitive engineering, he focused on how end-users comprehend and form a personal cognitive model of systems—or, in the case of Unix, fail to understand, with the result that disastrous mistakes (such as losing an hour's worth of work) are all too easy.
Donald A. Norman: The truth about Unix: The user interface is horrid:
>In the podcast On the Metal, game developer Jonathan Blow criticised UNIX philosophy as being outdated. He argued that tying together modular tools results in very inefficient programs. He says that UNIX philosophy suffers from similar problems to microservices: without overall supervision, big architectures end up ineffective and inefficient.
>Well, the Unix philosophy for example it has been inherited by Windows to some degree even though it's a different operating system, right? The Unix philosophy of you have all these small programs that you put together in two like Waves, I think is wrong. It's wrong for today and it was also picked up by Plan Nine as well and so -
>It's micro services, micro services are an expression of Unix philosophy, so the Unix philosophy, I've got a complicated relationship with Unix philosophy. Jess, I imagine you do too, where it's like, I love it, I love a pipeline, I love it when I want to do something that is ad hoc, that is not designed to be permanent because it allows me- and you were getting inside this earlier about Rust for video games and why maybe it's not a fit in terms of that ability to prototype quickly, Unix philosophy great for ad hoc prototyping.
>[...] All this Unix stuff, it's the sort of the same thing, except instead of libraries or crates, you just have programs, and then you have like your other program that calls out to the other programs and pipes them around, which is, as far from strongly typed as you can get. It’s like your data coming in a stream on a pipe. Other things about Unix that seemed cool, well, in the last point there is just to say- we've got two levels of redundancy that are doing the same thing. Why? Get rid of that. Do that do the one that works and then if you want a looser version of that, maybe you can have a version of a language that just doesn't type check and use that for your crappy spell. There it is.
>[...] It went too far. That's levels of redundancy that where one of the levels is not very sound, but adds a great deal of complexity. Maybe we should put those together. Another thing about Unix that like- this is maybe getting more picky but one of the cool philosophical things was like, file descriptors, hey, this thing could be a file on disk or I could be talking over the network, isn't it so totally badass, that those are both the same thing? In a nerd kind of way, like, sure, that's great but actually, when I'm writing software, I need to know whether I'm talking over the network or to a file. I'm going to do very different things in both of those cases. I would actually like them to be different things, because I want to know what things that I could do to one that I'm not allowed to do to another, and so forth.
>Yes, and I am of such mixed mind. Because it's like, it is a powerful abstraction when it works and when it breaks, it breaks badly.
No tool is perfect. The Unix philosophy is a philosophy, not a dogma. It serves well in some use cases, and in other use cases you're perfectly fine to put the whole ___domain in a single program. The hammer has been there for millennia, but once we invented the screw, we had to invent the screwdriver.
The point is that Unix philosophy is mostly a retroactive justification of why things are the way they are, and not really a coherent philosophy that drove the design of those things, even though it is now often represented as such.
> And sometimes, yelling at clouds stirs the winds of change.
> "The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man."
George Bernard Shaw.
Man, I'm with you, but I'll put my efforts elsewhere :)
Based on the account name, bio, and internal evidence you should assume this is Don Hopkins. His Wikipedia entry at https://en.wikipedia.org/wiki/Don_Hopkins includes:
> He inspired Richard Stallman, who described him as a "very imaginative fellow", to use the term copyleft. ... He ported the SimCity computer game to several versions of Unix and developed a multi player version of SimCity for X11, did much of the core programming of The Sims, ... He is also known for having written a chapter "The X-Windows Disaster" on X Window System in the book The UNIX-HATERS Handbook.
I hope this experience helps you realize that jumping immediately to contempt can easily backfire.
If you're going to emphasize that it's two processes, at least make sure it's actually two processes. `[` is a shell builtin.
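Easy to check from an interactive shell (bash shown here; the exact wording and the expr path vary by shell and system):

  $ type [ expr
  [ is a shell builtin
  expr is /usr/bin/expr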
> `eval` being heavy
If you want a more lightweight option, `calc` is available and generally better-suited.
> inexplicable flurry of punctuation
It's very explicable. It's actually exceptionally well-documented. Shell scripting isn't syntactically easy, which is an artifact of its time plus standardization. The Bourne shell dates back to 1979, and POSIX has made backwards-compatibility a priority between editions.
In this case:
- `[` and `]` delimit a test expression
- `"..."` ensure that the result of an expression is always treated as a single-token string rather than splitting a token into multiple based on spaces, which is the default behaviour (and an artifact of sh and bash's basic type system)
- `$(...)` denotes that the expression between the parens gets run in a subshell
- `-eq` is used for numerical comparison since POSIX shells default to string comparison using the normal `=` equals sign (which is, again, a limitation of the type system and a practical compromise)
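Side by side, the historical construct being complained about and the equivalent using POSIX arithmetic expansion, which the shell evaluates itself without spawning anything:

  # old style: external expr, a command substitution, and (originally) an external test/[
  [ "$(expr 1 + 1)" -eq 2 ]
  # modern POSIX: arithmetic expansion, no external processes
  [ "$((1 + 1))" -eq 2 ]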
> even though the processor has single cycle instructions to add two numbers and test for equality
I don't really understand what this argument is trying to argue for; shell scripting languages are, for practical reasons, usually interpreted, and in the POSIX case, they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance. Their main priority is ease of interop with their ___domain.
If I wanted to test if one plus one equals two at a multi-terabit-per-second bandwidth I'd write a C program for it that forces AVX512 use via inline assembly, but at that point I think I'd have lost the plot a bit.
I was quite clear that this is HISTORICAL baggage whose syntax and semantics we're still suffering from. I corrected it from TWO to THREE and wrote a step by step description of why it was three processes in the other comment. That's the whole point: it was originally a terrible design, but we're still stuck with the syntactic and semantic consequences even today, in the name of "backwards compatibility".
> they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance
Even now you're bending over backwards to make ridiculous rationalizations for the bankrupt "Unix Philosophy". And you're just making my point for me. Does the Unix Philosophy say that the shell should be designed to be slow and inefficient and syntactically byzantine on purpose, or are you just making excuses? Maybe you don't think YOUR shell scripts have to be fast, or easy to write, read, and maintain, or perform simple arithmetic, or not have arsenals of pre-loaded foot guns, but speak for yourself.
When my son was six he found a girly magazine at a friends house and was sneaking away to look at it. When my wife caught him she told him the magazine was bad and he should not be looking at it. His simple reply was "But I like it Mom."
I actually didn't mention the Unix philosophy once in my comment, I just explained why the shell snippet you posted is the way it is. As far as I can tell, nobody in this thread's making long-winded ideological arguments about the Unix philosophy except you.
I think it's a perfectly reasonable assessment to think of shell scripts as a glue layer between more complex software. It does a few things well, including abstracting away stuff like pipelining software, navigating file systems, dispatching batch jobs, and exposing the same interface to scripts as you'd use to navigate a command line as a human, interactively.
> Maybe you don't think YOUR shell scripts have to be fast, or easy to write, read, and maintain, or perform simple arithmetic, or not have arsenals of pre-loaded foot guns, but speak for yourself.
This is the opinion of the vast majority of sysadmins, devops people, and other shell-adjacent working professionals I've encountered during my career. None of them, including myself when I'm wearing a sysadmin hat, deny the shortcomings of bash and friends, but none of us have found anything as stable or ubiquitous that fits this ___domain remotely as well.
I also reject the idea that faster or more full-featured alternatives lack footguns, pre-loaded or otherwise.
- C has a relatively limited type system by modern standards, no memory safety, no bounds checking, a slew of non-reentrant stdlib functions, UB, and relies on the user to account for all of that to benefit from its speed.
- C++ offers some improvements, but, being a near superset of C, it still has the footguns of its predecessor, to say nothing of the STL and the bloat issues caused by it.
- Rust improves upon C++ by miles, but the borrow checker can bite you in nontrivial ways, the type system can be obtuse under some circumstances, cargo can introduce issues in the form of competing dependency versions, and build times can be very slow. Mutable global state is also, by design, difficult to work with.
- Python offers ergonomic and speed improvements over POSIX shells in some cases, and a better type system than anything in POSIX shells, but it can't compete with most serious compiled languages for speed. It's also starting to have a serious feature bloat issue.
Pick your poison. The reality is that all tools will suck if you use them wrong enough, and most tools are designed to serve a specific ___domain well. Even general-purpose programming languages like the ones I mentioned have specializations -- you can use C to build an MVC website, yes, but there are better tools out there for most real-world applications in that ___domain. You can write an optimizing compiler in Ruby, but if you do that, you should reevaluate what life choices led you to do that.
Bash and co. are fine as shell languages. Their syntax is obtuse but it's everywhere, which means it's worth learning, because a bash script that works on one host should, within reason, work on almost any other *nix host (plus or minus things like relying on a specific host's directory structure or some such). I'd argue the biggest hurdle when learning is the difference between pure POSIX shell scripting idioms and bashisms, which are themselves very widely available, but that's a separate topic.
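A small illustration of that gap (hypothetical variable name; the first form is a bashism, the second is portable POSIX sh):

  # bashism: [[ ]] with pattern matching works in bash/zsh/ksh, but not plain POSIX sh
  if [[ $name == backup-* ]]; then echo "looks like a backup"; fi
  # POSIX: case does the same glob match portably, even under dash or busybox sh
  case $name in backup-*) echo "looks like a backup" ;; esac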
C was already limited by 1960's standards when compared to PL/I, NEWP and JOVIAL, 1970's standards when compared to Mesa and Modula-2, .....
It got lucky riding the UNIX adoption wave, an OS that got adopted over the others thanks to having its source available at almost the symbolic price of a tape copy, plus a book commenting its source code; had it been available as a commercial AT&T product at VMS, MVS, et al. price points, no one would be talking about the UNIX philosophy.
> - C has a relatively limited type system by modern standards, no memory safety, no bounds checking, a slew of non-reentrant stdlib functions, UB, and relies on the user to account for all of that to benefit from its speed.
That is a feature, not a bug. Add your own bounds checks if you want them, or use Ada or other languages that add a lot of fluff (Ada has options to disable the addition of bounds checks, FWIW).
I am fine with Bash too (and I use shellcheck all the time), but I try to aim to be POSIX-compliant by default. Additionally, sometimes I just end up using Perl or Lua (LuaJIT).
I never said it wasn't a feature. There was a time, and there are still certain specific domains, where bit bashing the way C lets you is a big benefit to have. But bug or not, I think it's reasonable to call these limitations as far as general-purpose programming goes.
My argument was that C puts the onus on the user to work within those limitations. Implementing your own bounds checks, doing shared memory management, all that stuff, is extra work that you either have to do yourself or know and trust a library enough to use it, and in either case carry around the weight of having to know that nonstandard stuff.
We’re stuck with plenty of non-optimal stuff because of path dependency and historical baggage. So what? Propose something better. Show that the benefits of following the happy path of historical baggage don’t outweigh the outrageously “arcane” and byzantine syntax of…double quotes, brackets, dollar signs, and other symbols that pretty much every other language uses too.
>I don't really understand what this argument is trying to argue for; shell scripting languages are, for practical reasons, usually interpreted, and in the POSIX case, they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance. Their main priority is ease of interop with their ___domain.
DDT is a hell of a lot older than Bourne shell, is not interpreted, does have full efficient access to the machine instructions and operation system, and it even features a built-in PDP-10 assembler and disassembler, and lets you use inline assembly in your login file to customize it, like I described here:
And even the lowly Windows PowerShell is much more recent, and blows Bourne shell out of the water along so many dimensions, by being VASTLY more interoperable, powerful, usable, learnable, maintainable, efficient, and flexible, with a much better syntax, as I described here:
>When even lowly Windows PowerShell blows your Unix shell out of the water along so many dimensions of power, usability, learnability, maintainability, efficiency, and flexibility, you know for sure that your Unix shell and the philosophy it rode in on totally sucks, and self imposed ignorance and delusional denial is your only defense against realizing how bankrupt the Unix Philosophy really is.
>It's such a LOW BAR to lose spectacularly to, and then still try to carry the water and make excuses for the bankrupt "Unix Philosophy" cargo cult. Do better.
Shell != Unix (philosophy) as I’m sure you are aware. The unix philosophy is having a shell and being able to replace it, not its particular idiosyncrasies at any moment in time.
This is like bashing Windows for the look of its buttons.
I realized the hype for the Unix Philosophy was overblown around 1993 when I learned Perl and almost immediately stopped using a dozen different command-line tools.
I realized the hype for composing $thing$s was overblown around 1993 when I learned I could just have "A Grand Unified $thing$" and almost immediately stopped using a dozen different $thing$s.
Then, a decade or two later, I realized the Grand Unified $thing$ was itself composed, but not by me so I had no control over it. Then I thought to myself, how great would it be if we decompose this Grand Unified $thing$ into many reusable $thing$s? That way we can be optimally productive by not being dependent on the idiosyncrasies of Grand Unified $thing$.
And so it was written and so it was done. We built many a $thing$ and the world was good, excellent even. But then one of the Ancients realized we could increase our productivity dramatically if we would compose our many $thing$s into one Grand Unified $thing$ so we wouldn't have to learn to use all these different $thing$s.
And so it was written and so it was done. Thus goes the story of the Ancients and their initiation of the most holy of cycles.
There is a world outside of Perl. There really is.
It's a general observation of how we get infatuated with composability, then tire of it and unify, and then learn to love it again because the unifications grow stale and weird, of which Perl is an excellent example.
I switched to Python in 1998, and I haven't gone back to the Unix philosophy of decomposition into small command-line tools which interoperate via text and pipes, nor the COM/DCOM/CORBA approach, nor microservices, nor even Erlang processes, so I'm really not the target audience for your joke.
Ken Thompson and the Unix folks agree with you. The point is... Perl was a solution to the earlier Unix (BSD/GNU) bloat.
When you have a look at Plan 9 (now 9front) with rc as a shell, awk, and the power of rio/acme scripting and namespaces, along with aux/listen... Perl feels bloated and with the same terse syntax as sh-derived shells.
Not much; what makes AWK shine is the I/O in Plan 9; it's trivial to spawn sockets (literally from the command line), either plain text or encrypted.
Also, rc is much simpler than Bash.
I don't see what crusty implementation details have to do with a philosophy. In fact, UNIX itself is a poor implementation of the "UNIX" philosophy, which is why Plan 9 exists.
The idea of small composable tools doing one thing and doing it well may have been mostly an ideal (and now pretty niche), but I don't think it was purely invented after the fact. Just crippled by the "worse is better".
The "Unix Philosophy" is some cargo cult among FOSS folks that never used commercial UNIX systems, since Xenix I haven't used any that doesn't have endless options on their man pages.
Well, were we set by your "Windows philosophy", and forgetting that NT is a VMS rehash, we would still be using the crappy W9x designs with DOS crap back and forth.
Even RISC OS seems to do better, even though it doesn't have memory protection either (I think it doesn't; I didn't try it for more than a few days).
Thing is, there is no "Windows philosophy" cargo cult, and I don't worship OSes or languages; all have their pluses and minuses, I use any of them when the situation calls for it, and it is a disservice to oneself to identify with technology stacks like football club memberships given at birth.
Neither am I solely a Unix user; I have RISC OS Open (Apache 2.0?) on an RPi to experiment with something else beyond Unix/C.
But Windows is too heavyweight; from 8 onward it has been a disaster. And the NT kernel + Explorer can be really slim (look at ReactOS, or XP, or a debloated W7).
The problem is that Apple and MS (and Red Hat) are just selling shiny turds wasting tons of cycles to do trivial tasks.
Worse, you can't slim down your install so it behaves like a sane system with 1GB of RAM.
I can watch 720p@30FPS videos on an N270 netbook with MPV, something even native players for WXP can't do well enough with low-level DirectDraw calls.
The post-XP Windows philosophy among Red Hat and Apple is: let's bloat and crap out our OSes with unnecessary services and XML crap (and interpreted languages such as JS and C#) for the desktop until hardware vendors idolize us, so the average user has to buy new crap to do the same tasks over and over.
Security? Why the fuck does GNOME 3 need JS in the first place? Where's Vala, which could shine here, so Mutter could get a big boost and memory leaks could be a thing of the past?
C# is a compiled language at all levels (source into bytecode, then bytecode into machine code either JIT or AOT). V8 has JIT compilation for hot paths. As a result, JS is significantly faster than the interpreted languages like Python, Ruby and Erlang/Elixir/Gleam.
No one under GTK/GNOME uses plain C; they use GLib as a wrapper. Plain ANSI C might be 'unusable' for modern UI needs, but, as I said, just have a look at WebKitGTK4.
GLib everywhere; WebKitSettings is a breeze to set up.
Vala is a toy because Miguel de Icaza went full MS with C# since Ximian. If Vala had more support from Red Hat, GNOME 4 could support Vala as its main language. JS? Lua and LuaJIT would be a better choice for Mutter scripting. If you have a look at how Luakit and Vimb behave, the difference is almost nil.
Even an operating system as brain damaged as Windows still has PowerShell, which lets you easily and efficiently perform all kinds of operations, dynamically link in libraries ("cmdlets") and call them directly, call functions with typed non-string parameters, pipe live OBJECTS between code running in the SAME address space without copying and context switching and serializing and piping and deserializing everything as text.
PowerShell even has a hosting api that lets you embed it inside other applications -- try doing that with bash. At least you can do that with python!
When even lowly Windows PowerShell blows your Unix shell out of the water along so many dimensions of power, usability, learnability, maintainability, efficiency, and flexibility, you know for sure that your Unix shell and the philosophy it rode in on totally sucks, and self imposed ignorance and delusional denial is your only defense against realizing how bankrupt the Unix Philosophy really is.
It's such a LOW BAR to lose spectacularly to, and then still try to carry the water and make excuses for the bankrupt "Unix Philosophy" cargo cult. Do better.
>PowerShell implements the concept of a pipeline, which enables piping the output of one cmdlet to another cmdlet as input. As with Unix pipelines, PowerShell pipelines can construct complex commands, using the | operator to connect stages. However, the PowerShell pipeline differs from Unix pipelines in that stages execute within the PowerShell runtime rather than as a set of processes coordinated by the operating system. Additionally, structured .NET objects, rather than byte streams, are passed from one stage to the next. Using objects and executing stages within the PowerShell runtime eliminates the need to serialize data structures, or to extract them by explicitly parsing text output.[47] An object can also encapsulate certain functions that work on the contained data, which become available to the recipient command for use.[48][49] For the last cmdlet in a pipeline, PowerShell automatically pipes its output object to the Out-Default cmdlet, which transforms the objects into a stream of format objects and then renders those to the screen.[50][51]
>Because all PowerShell objects are .NET objects, they share a .ToString() method, which retrieves the text representation of the data in an object. In addition, PowerShell allows formatting definitions to be specified, so the text representation of objects can be customized by choosing which data elements to display, and in what manner. However, in order to maintain backward compatibility, if an external executable is used in a pipeline, it receives a text stream representing the object, instead of directly integrating with the PowerShell type system.[52][53][54]
> Hosting
>One can also use PowerShell embedded in a management application, which uses the PowerShell runtime to implement the management functionality. For this, PowerShell provides a managed hosting API. Via the APIs, the application can instantiate a runspace (one instantiation of the PowerShell runtime), which runs in the application's process and is exposed as a Runspace object.[12] The state of the runspace is encased in a SessionState object. When the runspace is created, the Windows PowerShell runtime initializes the instantiation, including initializing the providers and enumerating the cmdlets, and updates the SessionState object accordingly. The Runspace then must be opened for either synchronous processing or asynchronous processing. After that it can be used to execute commands. [...]
9front is the truest Unix philosophy since Unix v6. It makes it much better. Proper devices and network connections as files, plus namespaces and aux/listen and friends. It makes AWK better than Perl, and rc is much simpler without the bullshit of sh. You only have functions, not aliases, and the syntax is much saner.
On PowerShell/C#: TCL/Tk might not be as powerful, but it works under Windows XP with IronTCL, unlike MS's own newest C# implementations (>= 4.5). Double irony there.
TCL can help to write some useful software such as a Gopher/Gemini client with embedded TLS support.
And the resource usage will still be far lower.
On embedding, TCL wins here, hands down. It's everywhere.
If we forget that the authors moved on to Inferno and Limbo, redoing all the Plan 9 decisions they had to roll back, like Alef as the main userspace language.
>Because all PowerShell objects are .NET objects, they share a .ToString() method,
Congrats, PSH, you did what TCL did ~30 years ago, but worse. With TCL everything is a string, even numbers. Yes, it sucks that you need to [ eval ] math operations, but the advantages outnumber the quirks.
If you come from Lisp, you will feel right at home. Use the l* functions as you would with Lisp lists, but without juggling car, cdr, caar, cddr and so on.
And there's Expect which is utterly underrated.
Yes, I hate upvar sometimes, but with namespaces you can almost avoid that issue.
On TCL done for serious stuff... if people have been using Excel with millions of rows for covid patients and census, TCL/Tk with SQLite would outperform these by a huge margin.
PowerShell is the opposite of TCL and bash. You pass objects directly, NOT strings. I have no idea what you're trying to say. And yes I've written and shipped and open sourced shitloads of TCL/Tk.
Objects are not my thing; they are just good for Inform6, since a Z-Machine game maps really well to OOP, because a text adventure based on events tied to attributes is ideal for it.
Now you're making even less sense than before, with incoherent grammar and random buzzwords, which is an impressive leap. I don't think "your thing", whatever that is, has any bearing on this conversation. Are you an LLM?
I played the original Zork on MIT-DM, and read the original source code written in MDL, which is essentially Lisp with angled brackets and data types, and it's neither object nor text oriented, so I have no idea what point you're trying to make about its descendent ZIL, because it makes no sense and has no bearing on this discussion.
You're arguing with a well vetted factually correct evidence based wikipedia page, so if you disagree, go try to edit it, and see how long your hallucinations and vandalisms last without citations to reality or coherent sentences.
At least my code doesn't shit its pants when you pass it a filename with a space in it.
I am not an LLM. I am talking about Inform6, an OOP language born in the '90s, in which they created games far more powerful than the Infocom ones. Inform6 maps pretty well to MDL. Both compile to Z-Machine games, but Inform6 is far easier.
On games, have a look at Anchorhead, Spider and Web, Curses, Jigsaw... in these kinds of games OOP makes tons of sense.
Wow it's really sad that you're not an LLM. That would have been a great excuse. Too bad you've been superseded and displaced by computers. My condolences.
> Related to that, the Unix philosophy of simple tools that each do one job and do it well also applies here a bit. A more typical workflow would be one utility to tarball something, another to gzip it, and finally another to encrypt it, leading to file extensions like .tar.gz.pgp, all from piping commands together.
I do this for my own files, but half of the time I zip something, it’s to send it to a Windows user, in which case zip is king.
Was there any problem with 7z some years ago? I feel like I've been actively avoiding it because I have the feeling I read something bad about it, but I can't remember what. I could have mixed it up with something else; that sometimes happens to me.
Ah, I think I might remember a couple of RCEs they had... [0]
So for Windows use I then started to recommend a fork called NanaZip [1] that enabled some Windows security features (CFG, CET, Package Integrity Check...) and added support for additional formats that other forks already had [2] [3].
If you use `atool`, there is no need to use different tools either – it wraps all the different compression tools behind a single interface (`apack`, `aunpack`, `als`) and chooses the right one based on file extensions.
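For instance (assuming atool is installed; the archive names are made up):

  aunpack photos.zip            # extract, dispatched to unzip/bsdtar/7z/... by extension
  apack backup.tar.gz ~/docs    # create an archive
  als photos.zip                # list contents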
Debian and Ubuntu tend to want to lock the version of system tools to the version of the OS.
Debian tends to have long release cycles, but is very stable. Everything will work perfectly together on stable (in fact, testing tends to be almost as good at stability vs other OSes).
Ubuntu is basically Debian with "but what if we released more frequently?".
If you want the latest tools, then you'll have to settle for a less stable OS (sort of). Nix and Arch come to mind. Neither are super user friendly.
If you want stable and the latest tools, Gentoo is the way to go. However, it's even more intimidating than Arch.
If you want stability and simplicity, then the other way to go is sacrificing disk space. Docker/podman, flatpak, appcontainers, and snap are all contenders in this field.
Windows and Mac both have the same problem. Windows solved this by basically just shipping old versions of libraries and dynamically linking them in based on what app is running.
I find it funny calling Arch “less stable”, because I’m inclined to find it more stable, for my purposes, skills and attitudes.
I’ve administered at least one each of: Ubuntu server (set up by another; the rest were by me), Ubuntu desktop at least ten years ago, Arch desktop, Arch server.
The Arch machines get very occasional breakages, generally either very obvious, or signposted well. I did have real trouble once, but that was connected with cutting corners while updating a laptop that had been switched off for two years. (I’ve updated by more than a year at least two other times, with no problems beyond having to update the keyring package manually before doing the rest. The specific corners I cut this one time led to the post-upgrade hooks not running, and I simply forgot to trigger them manually in order to redo the initcpio image, because I was in a hurry. Due to boot process changes, maybe it was zstd stuff, can’t remember, it wouldn’t boot until I fixed it via booting from a USB drive and chrooting into it and running the hooks.)
Now Ubuntu… within a distro release it’s no trouble, except that you’re more likely to need to add external package sources, which will cause trouble later. I feel like Ubuntu release upgrades have caused a lot more pain than Arch ever did. Partly that may be due to differences in the sorts of packages that are installed on the machines, and partly it may be due to having used third-party repositories and/or PPAs, but there were reasons why those things had to be added, whether because software or OS were too old or too new, and none of them would have been needed under Arch (maybe a few AUR packages, but ones where there would have been no trouble). You could say that I saw more trouble from Ubuntu because I was using it wrong, but… it wouldn’t have been suitable without so “using it wrong”.
Correct, another way of looking at it is from a programming angle. If Debian fixes a bug that breaks your tool, then Debian is unstable. Therefore, to maintain stability, Debian must not fix bugs unless they threaten security.
The term "stable" is the most polluted term in Linux, it's not something to be proud of. Similar to how high uptime was a virtue, now it just means your system probably has been pwned at some point.
unzip is a special case: upstream development has basically stopped. The last release was in 2009[0]. (That's the version 6.0.) Since then there were multiple issues discovered and it lacks some features. So everybody patches the hell out of that release[1]. The end result is that you have very different executables with the same version number.
I maintain a huge number of git mirrors of git repositories and I have some overview of activity there. Many open source projects have stopped activity and/or do not make any new releases. Like syslinux, which seems to be in a similar situation as unzip. And some projects like Quagga went completely awol and don't even have a functional git remote.
So unzip is not really that special, it's a more general problem of waning interest.
I wasn't trying to imply that unzip is the only one.
But the way I learned that unzip is unmaintained was pretty horrible. I found an old zip file I created ages ago on Windows. Extracting it on Arch caused no problem. But on FreeBSD, filenames containing non-ASCII characters were not decoded correctly. Well, they probably use different projects for unzip, this happens. Wrong, they use the same upstream, but each decided to apply different patches to add features. And some of the patches address nasty bugs.
For something as basic as unzip, my experience as a user is that when it has so many issues, it either gets removed completely or it gets forked. The most reliable way I found to unzip a zip archive consists of a few lines of python.
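Not necessarily the script the parent is referring to, but as one example of that approach, Python's standard library can do the extraction by itself from the command line, independent of whatever patches the system unzip does or doesn't carry:

  # stdlib only: -e extracts the archive into the given directory
  python3 -m zipfile -e old-windows-archive.zip extracted/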
I agree completely. I also know that distros patch packages.
But for unzip the situation is particularly bad because it has no maintainer. Normally, you would raise feature requests for basic functionality upstream and once added, the maintainer would cut a new release. So software with the same version number generally, but not always, behaves similarly across distros.
But for unzip, because upstream is unmaintained, distro maintainers started to add features while keeping the version number. So in the end you end up with different behavior for what looks like the same release.
Distros are independent projects, so that's to be expected IMO. Though some level of interoperability is nice, diverse options being available is good.
That said, most distros have bsdtar in their repositories so you might want to use that instead. The package might be called libarchive depending on the distro. It can extract pretty much any format with a simple `bsdtar xf path/to/file`. AES is also supported for zips.
macOS includes it by default and Windows too IIRC, in case you're forced to become a paying Microsoft product^Wuser.
It is a mess. My suggestion is to just rely on the built-in stuff as little as possible.
Everything I do gets a git repo and a flake.nix, and direnv activates the environment declared in the flake when I cd to that dir. If I write a script that uses grep, I add the script to the repo and I add pkgs.gnugrep to the flake.nix (also part of the repo).
This way, it's the declared version that gets used, not the system version. Later, when I hop from MacOS to Linux, or vice versa, or to WSL, the flake declares the same version of grep, so the script calls the same version of grep, again avoiding whatever the system has lying around.
It's a flow that I rather like, although many would describe nix as unfriendly to beginners, so I'm reluctant to outright recommend it. The important part is: declare your dependencies somehow and use only declared dependencies.
Nix is one way to do that, but there's also docker, or you could stick with a particular language ecosystem. python, nodejs, go, rust... they all have ways to bundle and invoke dependencies so you don't have to rely on the system being a certain way and be surprised when it isn't.
A nice side effect of doing this is that when you update your dependencies to newer versions, that ends up in a commit, so if everything breaks you can just check out the old commit and use that instead. And these repos, they don't have to be for software projects--they can just be for "all the tools I need when I'm doing XYZ". I have one for a patio I'm building.
This is the way, system packages are for the system. Everything you need lives in .local or in your case /nix. The amount of tooling headaches I've had to deal with is pretty close to zero now that I don't depend on a platform that by design is shifting sand.
I use Arch on my personal laptop daily but have Debian installed on a VPS, and this is one aspect of Debian that bugs me (though I totally understand why they do it). I am so used to having the latest version of everything available to me very quickly on Arch, I am quite commonly stung when I try to do something on my VPS only to find that the tools in the Debian repos are a few versions behind and don't yet have the features I have been happily using on Arch. It's particularly frustrating when I have been working on a project on my personal laptop and then try to deploy it on my VPS only to find that all of the dependencies are several versions behind and don't work.
Again, not a criticism of Debian, just a friction I noticed moving between a "bleeding edge" and more stable distro regularly.
Compressing and encrypting as separate operations would bypass this issue.
A symmetrically encrypted foo.zip.gpg or foo.tgz.gpg would work in a lot more places than a bleeding edge zip version. Also you get better tested and audited encryption code
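Concretely, something along these lines (file names are hypothetical; gpg -c prompts for a passphrase, and recent GnuPG versions default to AES for symmetric encryption):

  zip -r foo.zip somedir/ && gpg -c foo.zip   # writes foo.zip.gpg
  gpg -d foo.zip.gpg > foo.zip                # decrypt anywhere gpg is available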
If I want to mess around with something without endangering the system I put it in ~/bin. You could compile unzip from source and rename it something like ~/bin/newunzip. If it doesn't work just delete it.
You need to understand that you are now in Unix land, which means you compose this pipeline using programs that perform each step of the process. So when creating an encrypted backup you would use `tar -c /home/foo | gzip | aescrypt >backup.tgz.aes` or something to that effect. This lets you use whatever compression program you like in the pipe.
Breaking this composability leads to the kind of problem you are complaining about. It also removes the ability of splitting this pipeline across machines allowing you to distribute the compute cost.
Settle down, Beavis. Not everyone is running Linux in a 24/7 production environment. I hear some people even fart around with it at home for fun.
I've been in pager rotations for most of the last 20 years so I'm sympathetic to that. If some genius symlinked unzip to 7z with no testing in production and caused an incident I'd be real mad. But uh I don't think that's remotely what OP was suggesting here.
"if no critical service depends on it, just update and see"
It did not sound like OP was running a hospital infrastructure. And I never did either, nor intend to. I try to have a linux that does what I want on my computer. 7z was helpful to me, so I shared it, that's it.
This guy has been ranting and raving here longer than I can remember or thought to make an account so I assume he is HN royalty and that's why it's tolerated. That said it doesn't really bother me if I understand the circumstances.
“This guy” is Don Hopkins who, amongst a long list of achievements in the field of computer science specializing in human computer interaction and computer graphics, is one of the authors of the UNIX haters handbook - specifically the extremely prescient chapter 7 "The X-Windows Disaster", published when Linux was in its infancy. You don't have to like what he is saying, but he has decades of experience and research behind what he says. Know where your field came from. The longer you can look back, the farther you can look forward - sadly something a vocal minority of the community refuses to do.
When you've been saying the same thing for the last 40 years, and seeing the same responses, more often than not made by people who don't understand where this all comes from, and which do not really counter what you're saying, you'd be rude and dismissive too - especially with the dogma that surrounds the "UNIX philosophy", which, in case you aren't aware, wasn't actually put forward by anyone heavily involved with UNIX development. Some empathy with the protagonist would help.
I once got a mail back from dang, about why my account got restricted. (Limited posts per day)
"I made plenty of good comments, but this one thread counts as flame war and he has to go by the worst, not by the best comments"
I thought about replying with some Don Hopkins comments, that were way worse than what was here and he is clearly not restricted. But I didn't, as I don't do Kindergarten, I just took a time off from HN.
But it definitely is not equal standards.
So I respect Don Hopkins for his knowledge and experience, but not his style of communication sometimes.
unzip 6.0 is from 2009 (see the manpage or https://infozip.sourceforge.net/UnZip.html). I suspect there are patches floating around (so YMMV as to which patches are applied), or someone has aliases/symlinked some other implementation as "unzip" (like Apple has done here, though unlike unzip rsync is maintained).
Try using atool (which wraps the various options for different archives and should hopefully fix your problem) or the tools provided by https://libzip.org/documentation/.
Practically, what you're hitting is the problem when upstream is dead, and there is no coordination between different distros to centrally take over maintenance.
I feel there is an opportunity for a modern Go or Rust utility that does compression/decompression in a zillion different formats with a subcommand interface: "z gzip -d" or "z zstd -9" or "z zip -d" or "z cpio -d" or similar.
It is even worse on MacOS, because Apple bundles the BSD versions of common Unix utilities instead of the (generally more featureful) GNU versions. So good luck writing a Bash script that works on both MacOS and Linux...
First thing anyone doing dev on MacOS should do is install brew. Second is to use brew to install the coreutils and bash packages to get a Linux-compatible GNU environment.
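Roughly (the GNU coreutils install with a g prefix, so you prepend their gnubin directory to PATH if you want the unprefixed names):

  brew install coreutils bash
  # gives gls, gcp, gdate, ...; put gnubin first in PATH for plain ls, cp, date
  export PATH="$(brew --prefix coreutils)/libexec/gnubin:$PATH"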
Also because the vast majority of MacOS users never open a terminal. Unix utilities are something they don't even know they have, and they don't care what versions they are.
Anyone using MacOS as a unix platform is installing updated tooling with brew or similar.
MacOS userspace was forked from FreeBSD, that's why it bundles non-GNU extensions. Also the FreeBSD userspace has since then incorporated GNUisms.
Why they went with Bash 2 as the default shell is beyond me. I always switched to and used Zsh, which had a more recent version. Now I'm also using it on Linux and FreeBSD, because I want a consistent shell.
The macOS userspace was never forked from FreeBSD or any other BSD. If anything, it was forked from NextSTEP. In actual practice, it is a collection of individual components taken from a variety of sources. When development of Mac OS X began in 1999, most command-line tools and a large part of libc were derived from either NetBSD or OpenBSD via NextSTEP. Over the years, there has been a shift toward FreeBSD. Apple maintain a collection of GitHub repositories of their Open Source components where you can see the evolution from one release to the next. Most of them have XML metadata indicating the origin of each individual component.
Incorrect. They default to zsh for interactive use, but their /bin/sh is bash 2. They also ship a copy of dash, but it's not sufficiently POSIX-conforming to replace bash.
Huh weird, I remember many years ago getting a notification in the terminal that bash would be deprecated so I assumed that would have happened by now. I no longer use macs so I wasn't up to date, sorry.
As an OpenBSD developer who frequently fixes portability issues in external software, this doesn’t match my experience. Upstream developers are typically happy to merge patches to improve POSIX compliance; often the result is simpler than their existing kludges attempting to support desired platforms like MacOS, Alpine/Musl, Android, Dash-as-sh, and various BSDs. It turns out a lot of people find value in relying on an agreed‐upon behavior that’s explicitly documented, rather than “this seems to work at the moment on the two or three distros I’ve tested.”
Forthright point of view, and more power to that... however, in this case the weight falls on one small bit there: the same version number. There is information missing somehow, someway.