
Yes, I don't understand how people end up with tools that assert the filename is a required argument. At least we've got /dev/stdin or /proc/self/fd/0 as workarounds.
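A hedged sketch of the workaround, with `some_tool` standing in for whatever utility insists on a filename:

    # hand the tool a pathname that is really our stdin
    printf 'a,b\n1,2\n' | some_tool /dev/stdin

    # equivalent on Linux:
    printf 'a,b\n1,2\n' | some_tool /proc/self/fd/0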



Much Unix software today is written by people who really don't appreciate or understand Unix. It leads to things like Homebrew (early on, at least) completely taking over and breaking /usr/local; to command-line utilities using a single dash (-) preceding both short and long option names (granted, --long-opts is a GNUism, but it's a well-established standard); to commands that output color by default even when the output isn't a tty; etc.
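(Color-by-default is particularly easy to get right; a minimal sketch in shell, assuming a color-capable terminfo entry:)

    # emit color only when stdout is actually a terminal
    if [ -t 1 ]; then
        red=$(tput setaf 1); reset=$(tput sgr0)
    else
        red=''; reset=''
    fi
    printf '%serror:%s something went wrong\n' "$red" "$reset"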

It's not hard to fix things like this, but it exemplifies a lack of familiarity with the Unix command line. There are an enormous number of tools out there that only exist because people don't know how to chain together basic 1970s Unix text-processing tools in a pipeline.
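(For instance, Doug McIlroy's classic word-frequency pipeline covers what plenty of bespoke tools reimplement; a sketch assuming an input.txt:)

    # top 10 most frequent words, classic tools only
    tr -cs '[:alpha:]' '\n' < input.txt |
      tr '[:upper:]' '[:lower:]' |
      sort | uniq -c | sort -rn | head -10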


Anybody around long enough to remember Unix from the beginning, or even just the last 25 years... which is a tiny percentage of this site... should know that a unifying Unix or Unix "tradition", as noted in a follow-up comment, is pretty much a myth. The tradition is whatever system you grew up on and whatever tribal biases you subscribe to; the only true Unix traditions are mostly trivialities like core shell syntax, a handful of commands, and a woefully underpowered API for modern purposes. And long option names are definitely not part of any tradition.

Myths like "everything is a file" (or a file descriptor) are complete bollocks, mostly retconned recently with Linuxisms. Other than pipes, IPC on Unix systems did not involve files or file descriptors. The socket API dates to the early 80s, and even it couldn't follow along, with its weird ioctls. Why are things put in /usr/local anyway? Why is /usr even a thing? There's a history there, but these days I don't see much of anything go into /usr/local on most Linux distributions.

It's also ironic to drag OS X into a discussion of Unix, because if there was ever one system that broke with Unix tradition (for the best, in some ways) -- no X11, launchd, a multi-fork FS, weird semantics to implement Time Machine, a completely non-POSIX low-level API, etc. -- that would be it.

All this shit has been reinvented multiple times; the user-mode API on Linux has had more churn than Windows -- which never subscribed to a tradition. There's no issue of lack of familiarity here: the original Unix system, meant to run on a PDP-11 minicomputer, only meets modern needs in an idealized fantasy-land. Meanwhile, "worse is better" has been chugging along for 50 years while people try to meet their needs.


> more churn than Windows -- which never subscribed to a tradition.

My understanding is that Windows has always had a very strong tradition of backwards compatibility -- even to the point of making prior bugs that vendors rely on keep behaving the same way for them (e.g. detecting that it's Photoshop calling a buggy API and serving it the buggy code path while everyone else gets the fixed one).

That's just as much a tradition as "we should implement this with file semantics because that's traditionally how our OS has exposed functionality".


> no X11

XQuartz if you want it

> completely non-POSIX low-level API

macOS has a POSIX layer.


> XQuartz if you want it

There are X server implementations for Windows, Android, AmigaOS, Windows CE!!, etc... I don't think this is relevant.

> macOS has a POSIX layer.

So do many systems, again including Windows in varying forms through the years. I think the salient issue is that BSD UNIX and "tradition" are conflicting. The point of the original CMU Mach project was to replace the BSD monolithic kernel.


Tangent: Homebrew itself doesn't really choose to take over /usr/local; rather, it just accepts that there exists POSIX software that is way too hard for most machines to compile, and so must be distributed precompiled; and that precompilation implies burning in an installation prefix at build time, which therefore cannot be customized at install time. So that software must be compiled to assume some installation prefix; and Homebrew may as well assume the same installation prefix, so as to keep all the installed symlinks and their referents in the same place (i.e. on the same mountable volume).

You have always been able to customize Homebrew to install at a custom prefix, e.g. ~/brew. It's just that, when you do that, and then install one of the casks or bottles for "heavy" POSIX software like Calibre or TeX, that cask/bottle is going to pollute /usr/local with files anyway, but those files will be symlinks from /usr/local to the Homebrew cellar sitting in your home directory, which is ridiculous both in the multiuser usability sense, and in the traditional UNIX "what if a boot script you installed relies on its daemon being available in /usr/local, which is symlinked to /home, but /home isn't mounted yet, because it's an NFS automount?" sense. (Which still applies/works in macOS, even if the Server.app interface for setting it up is gone!)
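(For reference, the custom-prefix setup is roughly the "clone anywhere" style install, with ~/brew as an illustrative prefix:)

    git clone https://github.com/Homebrew/brew ~/brew
    export PATH="$HOME/brew/bin:$PATH"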

The real ridiculous thing, IMHO, is that Homebrew doesn't install stuff into /usr, like a regular package manager. But macOS considers /usr part of its secure, immutable OS base-image, so /usr can't be written to outside of recovery mode.

I guess Homebrew could come up with its own cute little appellation — /usr/pkg or somesuch — but then you run into that other lovely little POSIXism where every application has its own way of calculating a PATH, such that you’d need to add that /usr/pkg directory to an unbounded number of little scripts here and there to make things truly work.


Or use `/opt`, which is a standard ___location (per the filesystem hierarchy conventions). Every managed macOS laptop I've gotten from "Big Corp" has had `/usr/local` owned & managed by IT with permissions set to root, meaning you're fighting Chef (or whatever your IT department prefers) if you use the default Homebrew ___location.


But again, you'd have that problem whether you used Homebrew or not, as soon as you tried to (even manually!) install the official macOS binary distribution of TeX, or XQuartz, or PostGIS, or...

Homebrew just acknowledges that these external third-party binary distributions (casks) are going to make a mess of your /usr/local — because that's the prefix they've all settled on burning in at compile-time — and so Homebrew tries to at least make that mess into a managed mess.

And, if some other system is already managing /usr/local, but isn't expecting the results of these programs unpacking into there, it's going to be very upset and confused — again, regardless of whether or not you use Homebrew. So it'd be better for those other systems to just... not do that.

/usr/local isn't supposed to be managed. It's supposed to be the install prefix that's controlled by the local machine admin, rather than by the ___domain admin. Homebrew just happens to be a tool for automating local-admin installs of stuff.


Where are ___domain admins supposed to put installations?


The REAL ridiculous thing is that Homebrew was needed in the first place.

Mac OS X had some of the sexiest ways to install and uninstall application software that we'd seen on any platform at that time.

But Apple's stubborn refusal to include a useful package management system was one of the most horrible oversights in computing history.


> I guess Homebrew could come up with its own cute little appellation — /usr/pkg or somesuch —

/opt/homebrew would be a somewhat traditional place to put it.

> but then you run into that other lovely little POSIXism where every application has its own way of calculating a PATH, such that you’d need to add that /usr/pkg directory to an unbounded number of little scripts here and there to make things truly work.

What? You should be able to add it to the system PATH that's set for sessions and call it a day on a POSIX system. PATH is an environment variable and is inherited. If macOS is in the habit of overriding PATH in system scripts, I have to imagine that's because they completely screwed it up at some point in the past. Generally, you just add it to your user session variables in whatever way your system supports (.profile, etc.) if you want it for your user, or at a system level if you want it system-wide (I could see Apple maybe making this hard).

The only times in over 20 years I've ever had to deal with PATH problems are when I ran stuff through cron, because it specifically clears the PATH. More recent systems just specify a default PATH in /etc/crontab for the traditional / and /usr bin and sbin dirs.
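(For example, a stock Debian /etc/crontab ships with something along these lines:)

    SHELL=/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin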

Maybe you're thinking of the shared-library load path? That should also be easily fixed.


> and yet where that precompilation implies a burning-in of an installation prefix at build time

Not necessarily. Plenty of software uses relative paths that work regardless of prefix. Off the top of my head, Node.js is distributed in this way.

> you’d need to add that /usr/pkg directory to an unbounded number of little scripts here and there to make things truly work.

How so? Are there that many scripts that entirely replace the PATH environment variable? In Linux, I just include my system-wide path additions in /etc/profile, which will be set for every login. For things like cron jobs or service scripts, which don't inherit the environment of a login shell, you will need to source the profile or use absolute paths, but that's about the only caveat I can think of.
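A minimal sketch, using the hypothetical /usr/pkg prefix from upthread:

    # /etc/profile (or a snippet under /etc/profile.d/) -- seen by all login shells
    export PATH="/usr/pkg/bin:$PATH"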


> You have always been able to customize Homebrew to install at a custom prefix, e.g. ~/brew. It’s just that, when you do that...

...and then try to build something entirely sensible like Postgres, only for hours of fiddling with different Xcode versions and compiler flags to lead to a dead end of errors. You're stuck, because you're running an unsupported configuration.

I still don't understand how the PG bottles for Mojave can be built.


The `go` program uses -longopts and I think it would be hard to argue that Rob Pike lacks an appreciation of Unix traditions.


I would argue that Go's design as a whole is characterized by an attitude of ignoring established ideas for no other reason than that they think they know better.


Something being established is not a grand argument for its usage. The reasons it got established are relevant, and if you feel the end result of said establishment is obtuse or inane, why would you use it?

That's not to say Go's decisions to toss some established practices are "wise" or "sagely", just that broad acceptance is not a criterion they seemed concerned with. Which is fine.

>they think they know better.

It's safe to say Rob Pike is not clueless or without experience in Unix tooling. You should listen to some of his experiences and thoughts on designing Go [0]. I don't always agree with him, [but it's quite baseless to suggest he makes decisions on the grounds that they were his, rather than that they have merit.]

Edit to clarify: [He makes decisions on merit over authority]

[0]: https://changelog.com/gotime/100


Sure, there's nothing that says established practice is better. That is not, in my opinion, a good defense of Go which makes many baffling design decisions. Besides, an appeal to the authority of Rob Pike is surely not a valid defense if mine is not a valid criticism.

I'm (perhaps unfairly) uninterested in writing out all the details, but “they think they know better” is because I see Go as someone's attempt to update C to the modern world without considering the lessons of any of the languages developed in the meantime. And because of the weird dogmatic wars about generics, modules, and error handling.


Rob Pike explained it thusly in a 2013 Google Groups reply:

> Rob 'Commander' Pike

> Apr 2, 2013, 6:50:36 AM

> to rog, John Jeffery, [email protected]

> As the author of the flag package, I can explain. It's loosely based on Google's flag package, although greatly simplified (and I mean greatly). I wanted a single, straightforward syntax for flags, nothing more, nothing less.

> -rob


Rob Pike lacks an appreciation of GNU traditions.

I'm actually surprised by this; I would have expected Pike to go with single-letter options only.


This is a fair point, although ironically it's probably because Pike predates GNU and still has a problem with all the conventions those young upstarts introduced. Conventions change, usually for the better. I think this is one the Go team got wrong, regardless of the reason.


"There are an enormous number of tools out there that only exist because people don't know how to chain together basic 1970s Unix text-processing tools in a pipeline."

Arguably that is why the original implementation of Perl was written. If I remember the story correctly, we can never know for sure whether, e.g., AWK would have sufficed, because the particular job the author wrote Perl for as a contractor was confidential.

Are people using jq most concerned about speed, or are they more concerned about syntax?

JSON suffers from a problem to which line-oriented utilities are generally immune: a large enough and deeply nested JSON structure will choke or crash a program that tries to read all the data into memory at once, or even in large chunks. The process becomes resource-constrained as the size of the data increases, and there are no limits placed on the size or depth of JSON files.

I use sed and tr for most simple JSON files. It is possible to overflow the sed buffer, but that rarely ever happens. sed is found everywhere and it's resource-friendly. Others might choose a program for speed or syntax, but the issue of reliability is even more important to me. jq alone is not a reliable solution for any and all JSON: it can be overkill for simple JSON and resource-constrained for large, complex JSON.

https://stackoverflow.com/questions/59806699/json-to-csv-usi...
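For the simple, flat case, what I do looks roughly like this (a sketch -- only safe when the JSON's shape is known in advance, and "name" is an illustrative key):

    # break on delimiters, then pull out the "name" values
    tr ',{}' '\n' < simple.json |
      sed -n 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'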

netstrings (https://cr.yp.to/proto/netstrings.txt) do not suffer from the same problem as JSON.
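(A netstring length-prefixes its payload, so a reader never has to buffer unbounded input to find the end; e.g. the 12-byte string "hello world!" encodes as:)

    printf '12:hello world!,'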


> It leads to things like Homebrew (early on, at least) completely taking over and breaking /usr/local

Fully agree with you, but oh well -- most of it, if not everything, is available on MacPorts anyway.

> There are an enormous number of tools out there that only exist because people don't know how to chain together basic 1970s Unix text-processing tools in a pipeline.

Speed. A specialized tool you need often beats manually wrangling the dozen or so Unix tools it replaces; plus, many good options are only available in the GNU/Linux coreutils and don't work on Macs (sed -i, my most common annoyance) or busybox.
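(The sed -i case, for the record: GNU sed treats the backup suffix as optional, while BSD/macOS sed requires the argument, even if empty.)

    # GNU sed (Linux):
    sed -i 's/foo/bar/' file

    # BSD sed (macOS) -- the backup-suffix argument is mandatory:
    sed -i '' 's/foo/bar/' file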


People often complain about Homebrew's use of /usr/local without articulating what is lost.


files might be faster, because you can mmap them?


Most people work on compressed JSON-lines files. Sometimes they are stored on S3. Plain files do not give that flexibility.

When using jq, I can do a lot of things:

    aws s3 cp s3://bucket/file.json.gz - | zcat | head | jq .field | sort


Another common thing you can do is accept a generic stream as input, but have some code that penetrates the abstraction a bit to see what kind of stream it is, and if it is a file or something, do something special with it to go even faster. This way, you start with something maximally useful up front, and easy to use, but you can optimize things based on details as you go.

That's how Go's static file web server works. It serves streams, but if you happen to io.Copy to that stream with something that is also an *os.File on Linux, it can use the sendfile call in the kernel instead. (A downside of making it so transparent is that if you wrap that stream with something, you may not realize that you've wrecked the optimization, because it no longer unwraps to an *os.File but to whatever your wrapper is; but, well, nothing's perfect.)


Which is sorta-kinda the same thing as

  jq .field <(aws s3 cp s3://bucket/file.json.gz - | zcat | head) | sort

which is more annoying to type but works.


Redirection is not a parameter, meaning that would still not work with this Q tool.


This is not a simple redirection. cmd <(subcmd) is a bashism that presents the output of command subcmd as something that looks like a regular file to command cmd. Command cmd receives a path at the place the <(subcmd) syntax is used. This is different from cmd < f, which redirects the contents of file f to cmd's standard input.

So, this should work :-)
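You can even see the path it hands over (bash on Linux; the exact fd number varies):

    $ echo <(true)
    /dev/fd/63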


Works in other shells, not just bash

  % cat /proc/self/cmdline <(echo $SHELL) | tr '\0' ' '
  cat /proc/self/cmdline /proc/self/fd/11 /bin/zsh


Yep! Zsh supports a lot of bashisms :-)

It won't work in dash though, and you should not use this in a script that targets POSIX sh.



