If you're going to emphasize that it's two processes, at least make sure it's actually two processes. `[` is a shell builtin.
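You can check this directly; a quick sanity check in bash (exact output and paths will vary by system):

```sh
# Ask bash what `[` resolves to; the builtin shadows the external binary.
type -a [
# typical output:
#   [ is a shell builtin
#   [ is /usr/bin/[
```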
> `eval` being heavy
If you want a more lightweight option, `calc` is available and generally better-suited.
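For what it's worth, a minimal sketch of both options (this assumes `calc` is installed; `$(( ))` arithmetic expansion is handled by the shell itself and spawns no extra process at all):

```sh
# external calculator: one extra process
calc '1 + 1'

# POSIX arithmetic expansion: no extra process
echo "$((1 + 1))"
```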
> inexplicable flurry of punctuation
It's very explicable. It's actually exceptionally well-documented. Shell scripting isn't syntactically easy, which is an artifact of its era plus standardization. The Bourne shell dates back to 1979, and POSIX has made backwards compatibility a priority between editions.
In this case (a small worked example follows the list):
- `[` and `]` delimit a test expression
- `"..."` ensure that the result of an expression is always treated as a single-token string rather than splitting a token into multiple based on spaces, which is the default behaviour (and an artifact of sh and bash's basic type system)
- `$(...)` is command substitution: the command between the parens runs in a subshell and its output replaces the expression
- `-eq` is used for numerical comparison since POSIX shells default to string comparison using the normal `=` equals sign (which is, again, a limitation of the type system and a practical compromise)
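Put together, the idiom looks roughly like this; the exact snippet upthread isn't reproduced here, so `expr` is just an illustrative stand-in for the arithmetic step:

```sh
#!/bin/sh
# [ ... ] is the test expression; "$(...)" captures the command's output as a
# single word; -eq forces numeric comparison instead of the string comparison
# that = would perform.
if [ "$(expr 1 + 1)" -eq 2 ]; then
    echo "one plus one is two"
fi
```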
> even though the processor has single cycle instructions to add two numbers and test for equality
I don't really understand what this argument is trying to argue for; shell scripting languages are, for practical reasons, usually interpreted, and in the POSIX case, they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance. Their main priority is ease of interop with their ___domain.
If I wanted to test if one plus one equals two at a multi-terabit-per-second bandwidth I'd write a C program for it that forces AVX512 use via inline assembly, but at that point I think I'd have lost the plot a bit.
I was quite clear that this is HISTORICAL baggage whose syntax and semantics we're still suffering from. I corrected it from TWO to THREE and wrote a step-by-step description of why it was three processes in the other comment. That's the whole point: it was originally a terrible design, but we're still stuck with the syntactic and semantic consequences even today, in the name of "backwards compatibility".
> they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance
Even now you're bending over backwards to make ridiculous rationalizations for the bankrupt "Unix Philosophy". And you're just making my point for me. Does the Unix Philosophy say that the shell should be designed to be slow and inefficient and syntactically byzantine on purpose, or are you just making excuses? Maybe you don't think YOUR shell scripts have to be fast, or easy to write, read, and maintain, or perform simple arithmetic, or not have arsenals of pre-loaded foot guns, but speak for yourself.
When my son was six he found a girly magazine at a friend's house and was sneaking away to look at it. When my wife caught him she told him the magazine was bad and he should not be looking at it. His simple reply was "But I like it Mom."
I actually didn't mention the Unix philosophy once in my comment, I just explained why the shell snippet you posted is the way it is. As far as I can tell, nobody in this thread's making long-winded ideological arguments about the Unix philosophy except you.
I think it's a perfectly reasonable assessment to think of shell scripts as a glue layer between more complex software. It does a few things well, including abstracting away stuff like pipelining software, navigating file systems, dispatching batch jobs, and exposing the same interface to scripts as you'd use to navigate a command line as a human, interactively.
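To make that concrete, here's a hypothetical sketch of that glue role (the paths and commands are made up for illustration, not taken from anyone's comment):

```sh
#!/bin/sh
# Glue script: walk a directory, pipe existing tools together, and let them
# do the real work; the shell only wires things up.
set -eu

for f in /data/incoming/*.csv; do
    sort "$f" | uniq -c | sort -rn > "/data/processed/$(basename "$f").counts"
done
```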
> Maybe you don't think YOUR shell scripts have to be fast, or easy to write, read, and maintain, or perform simple arithmetic, or not have arsenals of pre-loaded foot guns, but speak for yourself.
This is the opinion of the vast majority of sysadmins, devops people, and other shell-adjacent working professionals I've encountered during my career. None of them, including myself when I'm wearing a sysadmin hat, deny the shortcomings of bash and friends, but none of us have found anything as stable or ubiquitous that fits this ___domain remotely as well.
I also reject the idea that faster or more full-featured alternatives lack footguns, pre-loaded or otherwise.
- C has a relatively limited type system by modern standards, no memory safety, no bounds checking, a slew of non-reentrant stdlib functions, UB, and relies on the user to account for all of that to benefit from its speed.
- C++ offers some improvements, but, being a near superset of C, it still has the footguns of its predecessor, to say nothing of the STL and the bloat issues caused by it.
- Rust improves upon C++ by miles, but the borrow checker can bite you in nontrivial ways, the type system can be obtuse under some circumstances, cargo can introduce issues in the form of competing dependency versions, and build times can be very slow. Mutable global state is also, by design, difficult to work with.
- Python offers ergonomic and speed improvements over POSIX shells in some cases, and a better type system than anything in POSIX shells, but it can't compete with most serious compiled languages for speed. It's also starting to have a serious feature bloat issue.
Pick your poison. The reality is that all tools will suck if you use them wrong enough, and most tools are designed to serve a specific ___domain well. Even general-purpose programming languages like the ones I mentioned have specializations -- you can use C to build an MVC website, yes, but there are better tools out there for most real-world applications in that ___domain. You can write an optimizing compiler in Ruby, but if you do, you should reevaluate the life choices that led you there.
Bash and co. are fine as shell languages. Their syntax is obtuse but it's everywhere, which means it's worth learning, because a bash script that works on one host should, within reason, work on almost any other *nix host (plus or minus things like relying on a specific host's directory structure or some such). I'd argue the biggest hurdle when learning is the difference between pure POSIX shell scripting idioms and bashisms (a quick example below), which are themselves very widely available, but that's a separate topic.
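As a quick illustration of that POSIX-versus-bashism gap (one common case, not an exhaustive list):

```sh
name="foobar"

# bashism: [[ ]] with pattern matching works in bash/ksh/zsh, not in strict POSIX sh
if [[ $name == foo* ]]; then
    echo "matched (bash)"
fi

# POSIX equivalent: case patterns are portable to any /bin/sh
case $name in
    foo*) echo "matched (POSIX)" ;;
esac
```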
C was already limited by 1960s standards when compared to PL/I, NEWP, and JOVIAL, and by 1970s standards when compared to Mesa and Modula-2, ...
It got lucky riding the UNIX adoption wave: an OS that got adopted over the others thanks to having its source available for almost the symbolic price of a tape copy, plus a book commenting its source code. Had it been available as a commercial AT&T product at VMS, MVS, et al. price points, no one would be talking about the UNIX philosophy.
> - C has a relatively limited type system by modern standards, no memory safety, no bounds checking, a slew of non-reentrant stdlib functions, UB, and relies on the user to account for all of that to benefit from its speed.
That is a feature, not a bug. Add your own bounds checks if you want them, or use Ada or other languages that add a lot of fluff (Ada has options to disable the addition of bounds checks, FWIW).
I am fine with Bash too (and I use shellcheck all the time), but I try to aim to be POSIX-compliant by default. Additionally, sometimes I just end up using Perl or Lua (LuaJIT).
I never said it wasn't a feature. There was a time, and there are still certain specific domains, where bit bashing the way C lets you is a big benefit to have. But bug or not, I think it's reasonable to call them limitations as far as general-purpose programming goes.
My argument was that C puts the onus on the user to work within those limitations. Implementing your own bounds checks, doing shared memory management, all that stuff, is extra work that you either have to do yourself or know and trust a library enough to use it, and in either case carry around the weight of having to know that nonstandard stuff.
We’re stuck with plenty of non-optimal stuff because of path dependency and historical baggage. So what? Propose something better. Show that the benefits of following the happy path of historical baggage don’t outweigh the outrageously “arcane” and byzantine syntax of…double quotes, brackets, dollar signs, and other symbols that pretty much every other language uses too.
>I don't really understand what this argument is trying to argue for; shell scripting languages are, for practical reasons, usually interpreted, and in the POSIX case, they usually don't have to be fast since they're usually just used to delegate operations off to other code for performance. Their main priority is ease of interop with their ___domain.
DDT is a hell of a lot older than Bourne shell, is not interpreted, has full, efficient access to the machine instructions and operating system, and it even features a built-in PDP-10 assembler and disassembler, and lets you use inline assembly in your login file to customize it, like I described here:
And even the lowly Windows PowerShell is much more recent, and blows Bourne shell out of the water along so many dimensions, by being VASTLY more interoperable, powerful, usable, learnable, maintainable, efficient, and flexible, with a much better syntax, as I described here:
>When even lowly Windows PowerShell blows your Unix shell out of the water along so many dimensions of power, usability, learnability, maintainability, efficiency, and flexibility, you know for sure that your Unix shell and the philosophy it rode in on totally sucks, and self-imposed ignorance and delusional denial is your only defense against realizing how bankrupt the Unix Philosophy really is.
>It's such a LOW BAR to lose spectacularly to, and then still try to carry the water and make excuses for the bankrupt "Unix Philosophy" cargo cult. Do better.