In defense of Unix (leancrew.com)
110 points by ingve on March 5, 2016 | 185 comments



The complexity of the `find` command is the least of Unix's problems. How about defending these?

1. Unnecessary and confusing directory structure. `/etc`? Why not `/config`? `/usr` instead of `/system`, `/var` instead of ... well who knows. The maximum directory name length is no longer 3 characters.

2. Programs are mushed together and scattered through the filesystem rather than stored in separate locations. This basically means applications are install-only. Yeah, package managers try to keep track of everything, but that is just hacking around the problem, and most developers don't want to spend hours creating 5 different distro packages.

3. Not strictly Unix, but the mess of glibc with respect to ABI compatibility, static linking, etc. is ridiculous. Musl fixes most of this fortunately.

4. Emphasis on text-based configuration files. This is often ok, but it does make it hard to integrate with GUI tools, hence the lack of them.

5. Emphasis on shell scripts. Fortunately this is starting to change, but doing everything with shell scripts is terribly bug-prone and fragile.

6. X11. 'nuff said about that. When is Wayland going to be ready again?

7. General bugginess. I know stuff works 90% of the time, but that 10% is infuriating. Windows is a lot more reliable than Linux at having things "just work" these days.


Regarding #5, I quite enjoy text-based configuration files, and can't stand systems that force me to use a GUI to change settings. If I have a text-based config file, I know that it will play nicely with git. If there are many related settings, users can change them all quickly with their preferred text editor.


Agreed, text-based config files are good and not the problem. (Though binary formats don't imply GUI-only tooling.)

I think the real problem is config files either in hard-to-parse-correctly custom ad-hoc formats or even "config files" written in a scripting language (-> impossible to parse).

All config files should use the same standard format. I'd say "like YAML", but I'm not aware of a widely-used format with standard features like file includes or data types beyond "int" and "string" (e.g. for time intervals; these really shouldn't be "just a string... with a custom format").


That works fine when config files are simple, straightforward text data. But config files can grow increasingly complex over time, and eventually become Turing-complete languages of their own.

I think it would be better to just start with a Turing-complete language, and I think they should use Lua. It has very simple, self-explanatory general data structures, and it's very lightweight and sandboxable.

The only issue is combining config files with other programs. You don't want to strip the comments or formatting when you modify a file with another program. I also wish there were a way to specify metadata, like what values a variable is allowed to take, or descriptions and help info. With that you could easily convert config files into GUIs.


> I think it would be better to just start with a Turing complete language. I think they should use Lua. .......... when you modify it with another program ...

No! Turing complete config files are even worse than ad-hoc config files.

If your config format is Turing complete, you can't reliably modify config files automatically. (You can't even know how long evaluating one will take.)

If you need more logic, put it in your program or have a plugin system or write a program that generates the config file, but don't put it in the config file.


I don't see anything wrong with adding the option to do scripting. No one is making you use it. But when you need it, there's no alternative.

Many projects start out with just simple config files. But then they realize they need to do logic, and hack that into it. Then they realize they need more complex logic, and hack even more stuff in. And it's just a mess. It would have been much cleaner if they just started out with a scripting language.

Whether you should be putting logic in the config file is a different issue, but as long as people do it or have a need to do it, it's much better than crazy ad hoc solutions.

See these discussions on the issue: https://stackoverflow.com/questions/648246/at-what-point-doe... https://medium.com/@MrJamesFisher/configuration-files-suck-6...


I do understand your point.

But as soon as scripting is supported, it's impossible to write tools that process the config files and always work, especially with untrusted config files. You can't have both.

So the question is: can code be separated from config (like I've proposed above) (the code can still be inlined in the config file, as long as the "root" of the config file is declarative and the boundaries are well-defined)?

If no, which is more important: parseability or flexibility? It's a tradeoff.


What you could do is go the other way around. The program's canonical configuration format is pure data in a well-defined format (XML, JSON, Protocol Buffers, etc.). However, the top-level user-facing configuration is a script, written in a well-known (and ideally easily sandboxed) scripting language, whose output is the configuration data. The script can still load pure data files, which can be automatically analyzed and transformed, and with enough discipline most of the rapidly changing parts of your configuration will live in these pure data files. Even without this discipline, the final output of the configuration script is pure data that can easily be stored separately for tests, diffs, analyses, etc.
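A tiny sketch of that split in shell (the script name, keys, and the idea that the program reads JSON are all made up for illustration):

    #!/bin/sh
    # gen-config.sh -- hypothetical top-level "config": a script whose only
    # job is to print the canonical pure-data (JSON) configuration.
    workers=$(nproc)    # any logic lives up here, not in the emitted data
    printf '{ "listen_port": %s, "workers": %s, "log_level": "info" }\n' \
        "${PORT:-8080}" "$workers"

The program (or a test, or a diff tool) only ever consumes the emitted JSON, e.g. `./gen-config.sh > app.json`.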


The problem with this approach is that there's no way for my program to parse my config file and tell me if I've screwed something up without actually executing the config file, which may be expensive or infeasible to do at program start.

My personal opinion is almost exactly the opposite. If the program's config file requires anything more complicated than a regular language to specify, it's doing too much.


>All config files should use the same standard format. I'd say "like YAML", but I'm not aware of a widely-used format with standard features like file includes or data types beyond "int" and "string" (e.g. for time intervals; these really shouldn't be "just a string... with a custom format").

Not sure how this compares http://raml.org/



I thought of HOCON; it's quite usable for users, but it's not a solution:

1. Lack of adoption: it's not widely used yet, and implementations exist for only a few languages.

2. No formal spec; the "spec" is very imprecise and seems to try very hard to leave as much as possible to the implementation.

3. The spec is very, very Java specific. No clear separation between "core HOCON" and "Java extensions".

4. It probably doesn't lend itself well to automatic changes in a way that preserves structure of an original file (-> without just rendering out JSON).

It's clear that HOCON's only real focus is being easy to edit manually.

It's a nice idea, but it's definitely not the long term solution we need.


> All config files should use the same standard format.

I can't think of a single format that would lend itself to all cases, but I agree that config files should be in a standard format (i.e. a format that is supported by parsing tools).

YAML++ for being very readable.


XML has includes and you can have data types like string, int, time intervals, strings with regex, etc. with an XSD, if you want.

The complexity this brings can be overwhelming compared to an ad-hoc config file format.


XML is too verbose for manual editing and I've yet to see an XML library that isn't cumbersome to use (compared to JSON). It's probably too complex, alright. But I'm convinced a simpler format could be specified.

But your complexity comparison is unfair. Ad-hoc file formats are overwhelming. Users don't see the complexity because there is no spec and they just write config files by example. Developers generally either give up or write something that doesn't behave quite the same. A fair comparison would be:

  """"
  XML looks like this:

  <section>
    <subsection>
      <key>value</key>
    </subsection>
  </section>
  """
That's about the level of detail in the documentation of most ad-hoc file formats I've seen. Followed by hundreds of examples that happen to show (but not specify) special cases.


The emphasis on doing everything with text is not a problem; it is a major feature of Unix. Configuration management, portability, and interoperability are all a lot easier to do thanks to Unix's dedication to text as the lingua franca of the operating system.


+1 for GUI-based configuration being a pain, having a dotfiles repo that I can edit, clone, fork, etc. is magical (and far more flexible).


Never mind being able to put comments in line with the settings to potentially document why the seemingly crazy changes have been applied.


Text config files plus the command line make explanations and documentation a lot simpler, more concise, and more precise. You can exactly duplicate a series of commands and config changes: "Type this command to edit this config file, change the value of this setting to this other setting, then run this command to reload." Making documentation is often simply a matter of copying from your .bash_history.
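For example (a hypothetical sshd tweak; the exact service name varies by distro):

    # the documentation can literally be the commands themselves
    sudo sed -i 's/^#Port 22/Port 2222/' /etc/ssh/sshd_config
    sudo systemctl reload ssh    # "sshd" on some distros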

Making documentation for graphical programs often requires screenshots and sentences like "click the third radio button on the right-hand section" that add nothing to the documentation and are easily misunderstood by hapless users. Then the developer changes the layout of the dialog box and you need a new set of screenshots. I'll grant that GUIs are more discoverable to a casual user.

I seem to recall that it's possible to automate GUI software but it seems fraught with peril in a way that automation of a commandline and text config system simply is not.


Check out AutoIt for an example of GUI automation. I've used it before as a hacky way to extend GUI programs.


Totally with you. I HATE dealing with Windows server for this very reason. Whereas, even before Puppet and Chef and the like you could mostly automate the deployment and configuration of a new *nix server with shell scripts and config file templates.

Try automating the deployment of a new IIS server 8 years ago. Hell, try it today.


    > Try automating the deployment of a new IIS server 8
    > years ago. Hell, try it today.
I've tried manually setting up a new local development server a couple of times; gave up both times.

"Download this, and this. Then run the Installation Wizard and select this and this and this, and this if you want it but you might not need it. Then install this and reboot. Then run the New Server Configuration Wizard Utility Package. Then ..." (cont. 94 pages) -- every tutorial.


@7 - is that so? I just gave up installing F# developer tools on Windows last night after two hours, 3 general install methods (check the F# Foundation site) and some 6+ installer packages (some of which wanted to eat 8GB of disk space). And yes, my copy of Windows is reasonably modern (8.1) and legal.

Contrast that with Ubuntu, where installing F# took all of 5 minutes, with one command, 200MB and I had a full IDE and F# support.

The one concession I'll make is the one you yourself seem ignorant of - ease of use pertains to your expertise with the system. If you grew up on Windows, you may get its idiosyncrasies.


> Contrast that with Ubuntu, where installing F# took all of 5 minutes, with one command, 200MB and I had a full IDE and F# support.

Unless you want the latest version, in which case you are cloning a few repos, compiling, and hunting dependencies, because building mono + IDE isn't straightforward.


There's always the rolling release route, which I have been happily running for 4+ years without any issues.


Same here (Arch), but I just wanted to point out that Ubuntu has PPAs for almost every piece of software out there. Including new versions of Mono, Monodevelop and F#.


There isn't any middle ground though. I want most of my software to be stable, with a few packages at newer versions. Windows handles this scenario. Also, I have trust issues (after Mint, who could blame me?): is there any rolling release that is backed by a company?


Disclaimer: I work at SUSE.

OpenSUSE Tumbleweed is rolling-release and we use all of the same (free software) QA and build systems we use for OpenSUSE Leap and SLE to build and test it. Not to mention that SUSE essentially mirrors packages between OpenSUSE Leap and SLE (our enterprise distribution).

I've been told by some of the people working on Tumbleweed that there have been a lot of people switching from Arch to OpenSUSE Tumbleweed because the packages are released much faster (which appears to be the case from my usage of Arch and TW). But if you're looking at having minimalist installs, there's still some work left to do (minimal "server" installs are still a bit too bloated, and --no-recommends isn't the default).

But yes, there is a rolling-release distribution backed by a company.


There's SuSE rolling, but I can also attest to Arch being super stable.


You have the latest Mono and MonoDevelop available from an Ubuntu/Debian repo on the official web site.


These days, yes. I remember not that long ago when I was fixing files by hand to make them compile on Ubuntu. And it was just an example, topical to F#/.NET. Ubuntu is great when you are fine with the version in the repos, but it can be more of a pain when there aren't any third-party repos, because Linux distros are fractured.


Is this an issue with Windows itself, or with developers poorly supporting Windows? I've had similar problems with installing stuff, but it's always programming-related stuff. Most software just works, but try installing pip and you need to set aside the whole day for reading bad documentation that just assumes you use Linux.


Since F# is by Microsoft themselves, I would hope that is not the fault of the developers in this instance.


8. Lack of a proper, well-integrated, easy-to-use, expressive permissions system, ideally with a notion of complete isolation by default. Right now most users rely on the benevolence of software writers not to mess with their personal files, but sometimes things go awry (that Steam home-folder deletion disaster comes to mind).

Imagine mobile OSs with just the Unix permissions system; the malware spread on those would be so humongous it'd almost be funny again (arguably this was a long-time problem anyway privacy-wise, with software requiring privileges that couldn't be faked (e.g. giving the application a fake address book instead of your own), but at least apps couldn't easily nuke/hijack all your personal files.)


This is coming with wayland and xdg-app. I say this not to try to refute your point but to give you something to Google for if you're curious about how things will probably work in the future.


Android is just that, and what they do there is run each app as its own user.


Though, Android also uses SELinux. I am not sure I would consider SELinux part of standard Unix permissions.


That is something introduced in recent versions.

And I suspect they did it more to get onto government approval lists than anything else (though it may also placate the *AAs).


> Windows is a lot more reliable than Linux at having things "just work" these days.

As long as you only do the things you're allowed to do. I just replaced my 6-year-old Windows gaming machine, the only Win machine I have. I can't even change the windowing theme - it has to be MS's preselected graphics. I wanted to turn off all the 'phone home' stuff except updates and Windows Defender; these are spread through half a dozen locations. I hadn't even gotten to installing my first bit of software (Firefox) and already I was limited in what can normally be done with a desktop.

"Just works" isn't really an argument when it's paired with "but you can only do these things".

Not to mention that, back in the server world, lightweight virtual servers are an impossibility in the Windows arena. *nix servers are small in volume size, can run on fumes, and are largely disposable. Windows servers are (relatively) huge, slow to launch, require much more in the way of system resources, and require licensing. That isn't "just works" for me.

> [text config] This is often ok, but it does make it hard to integrate with GUI tools, hence the lack of them.

Only if your GUI tools are written from the viewpoint that nothing else should touch the config file. After all, if I can write a bash script that upserts a config value in a text ini file, why can't a GUI programmer?
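Something like this sketch is all it takes (assumes GNU sed and a flat key=value file; the function name is made up):

    # set_opt FILE KEY VALUE -- update KEY in place if present, append otherwise
    set_opt() {
        if grep -q "^$2=" "$1"; then
            sed -i "s|^$2=.*|$2=$3|" "$1"
        else
            printf '%s=%s\n' "$2" "$3" >> "$1"
        fi
    }

    set_opt app.conf log_level debug

A GUI settings dialog could do the equivalent through any language's standard library and leave the rest of the file untouched.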


>Windows is a lot more reliable than Linux at having things "just work" these days.

Ha, I'm server tech at a hosting company and this is spit-take worthy.

I really can't see how anyone could possibly think this.

Unless you're talking about end users' local PCs, this is wrong, and even then it generally isn't 'Linux bugginess', it's 'user ineptness'.



The two are completely different, and "just works" is definitely Windows' strong side on the desktop.

On my desktop system I want apps installed in isolated directories, with few or no central dependencies. Even if it means I have some unpatched vulnerability in 10 places.

For a server system I don't mind having to tinker, and central libraries can even be a security win.


Windows "just works" when the box is freshly unwrapped.

Use it for a month and all manner of oddities bubble to the surface.


I think that can be attributed to user skill for both: I render my Linux machines weird more often than my Windows machines, but I admit it's because I'm better at driving Windows machines. I have had zero problems with Windows machines since Win7 that can't be attributed to hardware failures.

Also by "just works" I meant mostly being able to get binaries from a random website and installing without having to use a package manager or failing to have the right deps. If you go to the 100 biggest app sites (Skype, Spotify, ...) and try to set up from a download, the "just works" is probably a lot better on win and Mac. This is of course a lot due to the size of the market, but it's no secret that standard cross-distro/desktop-env prebuilt binary installers for GUI apps are still not exactly a strong point on Linux.


Disclaimer: I work for SUSE, a Linux company which provides support for enterprises running SLES, as well as contributing our packages and knowledge to the openSUSE community.

I don't see how you could consider package management a bad thing. Why do you consider "downloading binaries from a random website" to be a "good thing"? Not to mention that those binaries almost never update themselves, and how well they deal with dependencies depends on what $500 installer builder they used.

Package managers allow you to always keep your system up to date, and you have a single database of all software that has been installed, what its dependencies are, and what files it installed (so you can uninstall it). They are definitely one of the awesome things about Linux. OS X has Homebrew, but it's not well integrated into the system because it's a third-party library of software. The BSDs' package managers are at least 10 years behind Linux's (they're still working on packaging the base system). Windows has nothing AFAIK. Things like OBS allow you to automate the release of new versions, and OpenQA allows you to do automated QA testing to make sure there are no regressions in graphical or terminal services.

I especially don't understand why you think that not having a way to update the libraries on your system is a good idea. Packaging the same DLL in 30 different places is not a good thing.


1. Nothing wrong with packages, but central repos rarely contain up-to-date packages. I don't mind going to skype.com and downloading a .deb package for Skype (I wish it were the same format for desktop software for all flavors of Linux, but I digress).

2. Shared libraries are only good for saving space (largely irrelevant on the desktop) and for security. In my experience, different side-by-side versions of libs work poorly once you reach the "system" level. Example: having apps that require different incompatible glibc versions is painful. For example, see the accepted answer to this question http://stackoverflow.com/questions/847179/multiple-glibc-lib... "The absolute path to ld-linux.so.2 is hard-coded into the executable at link time" WAT?

I think the mindset about what desktop computing is, and what a "system" is, differs greatly between a Linux user and a Windows user (i.e. one that just wants an OS that is a dumb layer for running binary compiled shitware that must be built and distributed by its creator because it will never be in a repo).


> 1. Downloading binaries from websites is what Windows users know. Package management is vastly superior as long as the packages exist and are up to date with the "official" source such as Spotify. If the package is a week late, then I'm going to prefer the direct binary. Once half my apps are direct and half are packages, the benefits of a package system diminishes.

"What Windows users know" doesn't mean that it's a good thing. Windows users also know to run everything with administrative privileges. For sufficiently sophisticated build systems (read: OBS) you can automatically rebuild packages. The reason why packaging takes time is because there is a testing process (which can also be automated with things like OpenQA), but there's lots of other maintainence that goes on when curating packages. Believe it or not, but sometimes upstream is downright irresponsible when doing version bumps and it's the maintainer's job to deal with it. It's fairly thankless work, to be honest, because you're not working on the new hot stuff. Sure, "just download a binary" works until you have multiple components that depend on each other.

> 2. Shared libraries are only good for saving space (irrelevant on desktop) and for security. Different side by side versions of libs work poorly when you reach "system" level is my experience. Several libc versions etc is painful.

"Only good for [...] security" is enough reason for me. Tell me how Windows programs deal with updates to critical libraries that everyone uses separately. I'm guessing the answer is "not well at all". And if you're going though your package manager, then no package should require a specific version of libc (besides, this problem can be mitigated somewhat with symbol versioning). The gains far outweigh the perceived costs IMO.


Argh your ninja response time meant my complete rewrite of my above post now looks silly, sorry :)

> "What Windows users know" doesn't mean that it's a good thing.

I know (I also removed it). It's patently stupid. Let me rephrase: if you want one way of distributing apps that anyone can use, it's basically the only working way: download a binary from the creator's site. Otherwise you end up with the utterly broken method of "check if it's in a tree in some package repo; if not, you can add more package sources to your repo; if not, check if you can find a downloadable package for it; if not, you build from source".

> Tell me how Windows programs deal with updates to critical libraries that everyone uses separately

They don't. It's both a bug and a feature. OS libraries are updated of course (by Windows Update), but I don't necessarily consider e.g. a C++ runtime to be an OS library, even if it's Microsoft's own redist. I prefer my applications to ship their own copy of their C++ runtime and keep it local, because it limits problems. Even at the cost of having an unpatched one somewhere.

> Windows users also know to run everything with administrative privileges.

Well, accidentally answering "yes" to the UAC prompt is about as likely as accidentally sudoing something imo.


Nope. Linux turns weird usually when someone is fiddling with it outside of daily usage. Windows seems to turn weird no matter what.


I reinstall Windows every few years when something really bad happens and have never seen Windows turn weird. After a few years it is as good as a fresh install.

On the other hand, sometimes on the internet I see "advice" like "Windows should be reinstalled every six months" and wonder what the hell these people are doing with their computers.


> On my desktop system I want apps installed in isolated directories, with few or no central dependencies. Even if it means I have some unpatched vulnerability in 10 places.

Why?

We aren't there yet, but Linux is trending towards xdg-app and appstream-esque projects producing a "common" nomenclature for software. Then you can write once install anywhere sandboxed app packages. All you really need are abstractions for both the package manager specific naming conventions and the system specific MAC filter.


If you want isolated directories, install to /opt then. That's what it's for.


Some comments on some of your questions:

1. Legacy and convention. Why do my 64-bit system files live in C:\Windows\system32? Why is the first volume on my system C and not A? Why are there multiple 'global' window stations on my system? Why is the real path to my disk drive \GLOBAL??\PhysicalDrive0, but for some reason I have to use a different path (\\.\PhysicalDrive0) in my programs or else it won't work; a path that I can't discover by myself but have to be told to use by scouring the darkest depths of MSDN? What the heck is ipv6-literal.net and why do I have to refer to some weird third-party domain to connect to a file share on IPv6 within my own network? Making these changes would be disruptive for the software that has to make the change, and impossible for the software that can not be modified. Distributions exist that try to improve the hierarchy (GoboLinux) but no one adopted them. We're currently seeing a push to unify the / and /usr hierarchies, so at least in the future things will get a bit simpler here.

2. Legacy and convention. In the days when storage was scarce, you could keep the contents of /usr/share and /usr/lib on central file servers; a single export for the former could serve all your clients, and you'd only need a single instance of the latter for each architecture in use, rather than having a separate copy on each machine. Besides, even in a world where each program lives in its own directory, as soon as you want one program to install a component for the other to consume, you have to bring in a package manager to remember the fact that /app/A installed a plugin into the /App/B/Plug-Ins directory... not to mention the unusably long PATH environment variable that would result... I find the package manager approach is overall superior to the unreproducible crap-fest you get when applications arbitrarily dump files all over the system.

3. Perhaps I'm in the minority, but I've never had problems relying on glibc via dynamic linking; you just have to build against the oldest version that you want to support, which is a bit of a pain but it's hardly the end of the world. The tradeoff with musl is that you can no longer rely on dynamically loaded NSS and gconf modules. If you don't need these, fine, knock yourself out--but you should be aware of the tradeoffs when you switch your libc implementation out; namely that you can no longer use mdns, myhostname, ipv6literal, winbind, LDAP, etc. for looking up hosts, users, groups and so on.

4. System-wide configuration is better kept in text form where it can be read by a human, contain comments, and be kept in Git. Desktop programs can store their config in dconf, or otherwise do whatever they want as long as it lives in ~/.config and I don't have to care about it.

5. As long as you use “set -eu” with an understanding of its shortcomings, and know how to quote variables properly, writing small and medium systems in shell is a great tradeoff between development speed and robustness. Components can be rewritten in a real systems programming language once they stabilize or require better integration than can be had by parsing the output of other commands.
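A sketch of that baseline (the backup task itself is just an illustration):

    #!/bin/sh
    set -eu                        # abort on errors and on unset variables

    src=${1:?usage: backup.sh DIRECTORY}
    dest="/tmp/backup-$(date +%F).tar.gz"
    tar -czf "$dest" -- "$src"     # quote everything; -- guards odd names
    echo "wrote $dest"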


> What the heck is ip6-literal.net

So, I was looking this up as I haven't used Windows in forever. I then did a `whois ipv6-literal.net` and was surprised that Microsoft doesn't own it. That seems weird for them to use it in such a way?


I made a typo, it should have been ipv6-literal.net, which they do own. Still sucks for the rest of us who have to interoperate, though at least there is an NSS module available to make it a bit easier (https://www.samba.org/~idra/code/nss-ipv6literal/README.html). Not that IskKebab will be using it with musl... :_)


This is what I see:

    Domain Name: Ipv6-literal.net
    Registry Domain ID: 1915314004_DOMAIN_NET-VRSN
    Registrar WHOIS server: whois.NameBright.com
    Registrar URL: http://www.NameBright.com
    Updated Date: 2015-09-26T00:00:00.000Z
    Creation Date: 2015-03-31T18:16:33.000Z
    Registrar Registration Expiration Date: 2016-03-31T00:00:00.000Z
    Registrar: DropCatch.com 577 LLC
    Registrar IANA ID: 2057
    Registrar Abuse Contact Email: [email protected]
    Registrar Abuse Contact Phone: +1.720.496.0020
    Domain Status: clientTransferProhibited
    Registry Registrant ID:
    Registrant Name: lirong shi
    Registrant Organization: www.Juming.com


The answer for most of these: legacy. Changing all these is very hard, as it'd be difficult to change the \ to / under Windows, etc.


Where does using / in paths rather than \ not work in Windows? Sorry to pick on your example, but from what I can see, they've already done that.


If you are using / in paths you are limited to paths of max 260 characters.

If you want paths up to 32k you need to use back-slashes.

I think this is the reason they are not switching e.g. Visual Studio over to max 32k paths.


It clashes with command line switches. dir /w does not list the contents of the directory called w.


On Windows, you've really got to be quoting your paths everywhere anyway, since spaces abound, and if you do that, then dir "/w" works as expected.


Considering that most file systems these days allow spaces in paths, I'd guess you can safely remove the »On Windows« there. The amount of shell and build scripts on Unix-likes that die horrible deaths when encountering spaces is not funny. And well, yes, in a way that probably means that spaces in paths are not »supported« there, but you could then say the same about Windows. As well as using non-ASCII in paths.


> The amount of shell and build scripts on Unix-likes that die horrible deaths when encountering spaces is not funny.

That's so accurate. I used to write shell scripts with

    command $1
until I got a few too many nasty surprises with dashes in filenames.


Always use "$@"

    foo() {
        for arg in "$@" ; do
            echo "arg is \"${arg}\""
        done
    }

    foo 'bar baz' 'spaces in filename.txt'
> dashes in filenames.

When writing shell scripts it's a good idea to use the -- option whenever possible

    stupid_backup() {
        cp -a -- "$@" /stupid/backup/dir/
    }
 
    # copies 2 files
    stupid_backup --files '-with -leading -dashes'


What does the -- option do? I haven't encountered that before.


It means "everything past here should not be parsed as a - or -- flag". If you have a file named "-l", then ls -- -l will show you that file, instead of doing a long listing.


It should be noted that not all flag parsers support it. But most people use getopt so it's not a big deal.


Thanks- that's really useful.


Not everything's a script though. I avoid using spaces, so I'm in the habit of (outside of a script) not quoting; if I bump into a space within a path then by that point it's just quicker to escape it.


I've been able to successfully use / in the Win32 API with the exception of CreateProcess. I think the reason is that when you start an executable you may pass command line arguments starting with /


Respecting legacy decisions is very important. Often in software design there are many ways to do the same thing. Unless there is an important reason to favor one of the possibilities, the correct answer is almost always to pick the legacy version.

Compatibility is important, and not just for existing tools. Choosing something different has a learning cost, and if you have to care about both versions you have ongoing mental-effort costs as well.

There may be better names for "/etc", but it's not worth the effort.


>Programs are mushed together and scattered through the filesystem rather than stored in separate locations. This basically means applications are install-only. Yeah, package managers try to keep track of everything, but that is just hacking around the problem, and most developers don't want to spend hours creating 5 different distro packages.

What about things like Encap/GNU Stow, which symlink files from one single package dir?


> 1. Unnecessary and confusing directory structure.

"Unnecessary" for you maybe, but have you considered the possibility that there is a reason why the directory structure is the way it is - and that the problems originally addressed may still be relevant? Here is a hint: partitions. Partitioning the files roughly by usage pattern allows you to tune performance, safety and security in a way that you would have a very hard time doing otherwise. Your suggested names make me think that you aren't very clear on the directory's actual purpose [0]. While longer names may have helped you out in understanding their purpose, once you actually learn it you're stuck with an unnecessarily long PWD that wraps each prompt.

> 2. Programs are mushed together and scattered...

Again, consider why it is the way it is. Do you really want to have a PATH that includes every directory for every binary on the system, or manage individual file permissions within all those directories? You want to do that with libraries as well? Just consider the complexity of what you're proposing and how you'd address: system defaults, setuid, per user preferences, dependencies, build environments... Years ago I basically did what you're suggesting, when Redhat was my daily driver. I'd build from source and install into ~/bin. Try it out for a while, you'll hate it.

> 4. Emphasis on text-based configuration files... hard to integrate with GUI tools...

What are you suggesting, a windows like registry? The text-based configs are no more difficult to use through a GUI than a binary representation, they're both a library function call away. Unless you are suggesting a windows like registry... but you've already pointed out how developers don't want to spend time on portability - so that can't be it.

> 7. General bugginess... Windows is a lot more reliable than Linux...

Ah, well try out an OS that is closer to Unix than Linux - maybe one of the BSDs. I'd put my Freebsd workstation against any flavor of Windows in a contest of uptime and performance, that is a bet I'd be happy to take. As far as Windows just working, that is true for the majority of tasks for the majority of users. But if you fall outside of that happy band of the target market, you are SOL. Consider the whole Windows telemetry issue. Also, I just noticed in my network logs that Windows update is sending out IPv6 dns requests despite the fact that I've disabled it on the network interface and tweaked several registry variables... there is nothing I can do about it. It would be a pretty simple fix for any opensource OS though.

[0] https://www.freebsd.org/doc/handbook/dirstructure.html


> Do you really want to have a PATH that includes every directory for every binary on the system, or manage individual file permissions within all those directories?

No way. But I'd love to get rid of the plain PATH, replacing it with a hierarchical PATH with a convention. Maybe with every 'bin' directory inside that path getting into the search path, or maybe something that lets me nest things deeper.

One can not do this in Unix, and that's the point.


> One can not do this in Unix, and that's the point.

One certainly can, very easily actually: edit your shell rc file in /etc to modify your PATH based on PWD, boom - hierarchical PATH the Unix way. Don't want it system wide? Edit your shell rc in the user home directory. Want something more complex? The POSIX shell source code is a lot simpler than you'd think. I wanted the same fancy git repo status PS1 stuff found in bash rc scripts, but without the performance impact - and in tcsh. It only took an hour of work to integrate libgit2. I don't think it would have been as easy with cmd.exe.
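For instance, a few lines in a POSIX-style shell rc get you the "every bin directory under a prefix" behaviour asked for above (the ~/apps prefix is made up):

    # add every ~/apps/*/bin directory to PATH at shell startup
    for d in "$HOME"/apps/*/bin; do
        [ -d "$d" ] && PATH="$d:$PATH"
    done
    export PATH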


In fact, you are right.

And I'm hierarchising my ~/bin :)


I personally implemented my PROMPT generation's git commit and branch checks using zsh (just shell scripting) because compiling and dealing with a divergent version of my shell is just too much of a pain given how many machines I have to deal with.


That would have been the easy way to do it, if tcsh allowed dynamically generated prompts (outside of a few stock flags). On Freebsd it makes sense to run your own packaging build server once you start running custom compiles on more than a few machines, so it is no big deal to compile once and `pkg install` everywhere you want it. I guess the downside of that is that it makes it easy to be lazy and not push upstream, which I'm pretty sure is the primary motivation for a lot of code contributions :)


> `/etc`? Why not `/config`?

/etc contains more than just config files so /config would be a misleading (or at least overly specific) name. If we went that route we'd need dozens of top level directories to cover everything. I do prefer OSX's more user friendly directory layout but the traditional directory structure has been around for decades. It works fine.


/etc contains more than just config files because it's called '/etc' and not '/config'.

And why would you need dozens of top level directories? I can't imagine that you could name even one dozen completely orthogonal aspects of program and system configuration that can't be put into _some_ meaningful hierarchy.


Linux != UNIX


As for 1. and 2., I kind of like how Apple solved it. When moving Xcode from one Mac machine to the other, I only needed to copy /Applications/Xcode.app directory to the other machine and everything magically worked, configuration files are kept to their applications rather than littering the filesystem.


  > How about defending these?
  > 1. Unnecessary and confusing directory structure. `/etc`? Why not `/config`? 
  > `/usr` instead of `/system`, `/var` instead of ... well who knows. The maximum
  >   directory name length is no longer 3 characters.
I know yrro already explained the ludicrously inconsistent nature of the OS you're apparently defending, but I'll add in:

Why do I need to edit c:\Windows\System32\Drivers\etc\hosts (in what way is hosts related to bit-length or a driver?)

Why .htm rather than .html?

Why programiwanttorun.exe rather than programiwanttorun

/system is as overloaded a word as any in IT. /config doesn't accurately reflect what /etc is about (but in any case, the latter is not a great barrier to entry)

If you're complaining about limits on directory entries .. you're skating around on very thin ice if you're on the NTFS lake (compared to any of xfs, btrfs, reiserfs, ext2/3/4fs, etc)

  > 2. Programs are mushed together and scattered through the filesystem rather
  > than stored in separate locations. This basically means applications are
  > install-only. Yeah, package managers try to keep track of everything, but
  > that is just hacking around the problem, and most developers don't want to
  > spend hours creating 5 different distro packages.
14,000 registry entries for one suite of software ... how is that not 'mushed together and scattered'.

Applications are not install-only. Because package managers (or, rather, distributions) managed to solve this problem elegantly more than a decade ago, I don't know how you can credibly make this claim.

Is 'mushed together and scattered' a contradiction?

Building packages for a variety of distros is a solved problem (again, it has been for a decade or more).

  > 3. Not strictly Unix, but the mess of glibc with respect to ABI compatibility,
  >  static linking, etc. is ridiculous. Musl fixes most of this fortunately.
It's hard to not sarcastically comment with the observation that the phrase DLL Hell did not originate within the nix world.

More pragmatically, I rarely (in twenty years) have had glibc issues. I think perhaps 3 times. All easily solved.

  > 4. Emphasis on text-based configuration files. This is often ok, but it does
  >  make it hard to integrate with GUI tools, hence the lack of them.
The biggest complaint with Win95 was the move away from .ini files (text-based).

GUI tools do not intrinsically have an issue dealing with text-based configuration files.

I posit that the problem you're describing is that text-based configuration files often contain useful human-readable components (which non-text config systems, such as 'the registry', lack), and these are slightly harder to maintain using automated tools. But only slightly. As noted elsewhere, they are typically only a library call away.

  > 5. Emphasis on shell scripts. Fortunately this is starting to change, but
  > doing everything with shell scripts is terribly bug-prone and fragile.
Doing anything badly is fragile. Shell scripts aren't intrinsically bad - as evinced by the success of shell scripts.

I don't know many people who have significant experience with apt|rpm && sccm (for example) ... but I know a couple, and the fragility of shell scripts leads to fewer expletives than the alternative.

  >  6. X11. 'nuff said about that. When is Wayland going to be ready again?
Are you suggesting that a system that completely disavows any network-awareness is preferable to one, designed >20 years ago, that does it fairly well?

What problems have you had with X11 that aren't dwarfed by citrix / rdp / single-user consoles / etc?

  > 7. General bugginess. I know stuff works 90% of the time, but that 10% is
  > infuriating. Windows is a lot more reliable than Linux at having things
  > "just work" these days.
Sounds like hyperbole. If things in the GNU/Linux world broke 10% of the time there'd be a lot fewer people using it - and if Microsoft Windows was more reliable than GNU/Linux, there'd be a lot more people moving towards it rather than away from it.


X11 is inherently insecure. With Wayland, the compositor itself is privileged but all clients get access to only their frame buffers and event streams.

Network transparency is mostly irrelevant on modern desktop systems, X11 remotes modern apps poorly at best, and if you really need remote desktop access you know where to find RDP and SPICE.

The Wayland switch will be a win because X11 is almost pure cruft. The good parts are what was kept in Wayland.


> X11 is inherently insecure.

That depends on your perspective. Why are you accepting connections from malicious X clients?

> Network transparency is mostly irrelevant on modern desktop system

For you, maybe. That's an opinion that many of us do not share.

> X11 is almost pure cruft

It's only cruft if you limit your use cases to stuff like GTK+ that decided to only use X as a dumb framebuffer.

> The good parts are what was kept in Wayland.

Except for a long list of features, such as network transparency, support for copying PRIMARY selections in addition to CLIPBOARD, or overriding input events of arbitrary programs. Until Wayland supports these, it isn't compatible with a lot of my tools.

Just because something is "old" or has features that you personally don't use doesn't mean they are "bad" features that should be removed. There are more use cases than those in your definition of "desktop".


>it does make it hard to integrate with GUI tools, hence the lack of them.

I don't understand this point. Care to explain, please?


My guess is that GUI tools often clobber configuration files once they touch them, because it's easier to write code that stores configuration as a hashmap; when it saves it back, the output is not going to preserve things like user comments, the order of things, etc.


Is it really though? And why are we just calling out GUI tools? Command line tools also must read in a config file, and they manage with text. JSON, INI, YAML, and XML parsers and writers exist in every language. There really isn't an excuse not to use a text-based config.


If the Windows registry + INI files + config files anyway is the alternative, I'll take the text files please.


I think command line tools (like say, git-config) manage one option at a time.

GUI tools tend to manage everything at once. Imagine the giant options box from MS Word 2003.


It still has to read in the current config, mutate it, and write it out.


I guess they mean that because GUI tools often have, well, a GUI to change options, they tend to write their configuration files. Whereas for command-line tools it's rare that they provide methods for changing their options and thus they tend to only read their config, thus never running into the problem of clobbering user edits.

Mind you, there are plenty of GUI applications that get this right anyway. But usually for the vast majority of users there is never a need of mangling configuration files by other means.


OS X is actually certified as a Unix, unlike many Linux distros.


Do you mean POSIX? In which case you should be aware that it actually costs money to get licensed as a POSIX-certified OS. If you actually meant UNIX, then of course GNU (GNU is not UNIX) isn't UNIX. It's in the name. At best the Linux kernel is a cousin of UNIX.


I meant Unix, but I should have said unlike all Linux distros except Inspur K-UX. I thought there were more but WP doesn't list any and seems comprehensive otherwise.

My point was really that some/most of those complaints about "Unix" don't apply to OS X for most people and sound more like the pain that users running Linux have to deal with, even though it's not typically an official Unix. I hope this was more clear.


The post linked from this article says:

    Great. It is spread everywhere this kind of complication for everyday tasks. Want to install something, need to type:
    apt-get install something
    
    Since we only use apt-get for installing stuff, why not?
    apt-get something
I will give a response I didn't see in any comment there, in the original post, or here:

Because installing like this could fail miserably for any package named like a subcommand.

    apt-get remove     # is installing package remove?
                       # or is failing the remove subcmd without args
You can work around that with `apt-get install remove` in that case, but the error on the first try is counter-intuitive.

Edit: fix my last example


For some reason, “Since we only use apt-get for installing stuff, why not? apt-get something” really pushes my buttons.

Perhaps I'm reading too much of my own biases into my interpretation, but this sounds like it's written by a developer who has only ever used apt to blindly install a list of packages in a Dockerfile, rather than someone who has any system administration experience.

And that's fine, except that maybe they should have taken two seconds to read the apt-get manual, and realise that apt doesn't just _install_ packages, but it also, shockingly, allows for them to be removed and upgraded too...

Now I've gotten that off my chest, perhaps a more favourable interpretation would be that they are trying to say that they would prefer separate 'apt-get', 'apt-remove', 'apt-search' commands, in which case they'd have a point. Fortunately there is now a new 'apt' command that can perform the most common operations that users commonly invoke via the apt-get and apt-cache commands.


Nuno Brito, original post author.

Sorry to disappoint, but I've been professionally administrating Linux machines since 2004, started as an end user in 1998, and I work with whatever machines are available. My apologies if the blog post reads as an attack on apt-get; that is not the case.

It is just an example. I'm not saying that we can/should change apt-get; there was apt already made for that purpose. This example is only to raise attention so that authors of upcoming command line tools think about the most frequent use-case scenarios and then make them as straightforward as humanly possible.


Debian isn't the only system on the block, and apt isn't the only package manager. OpenSUSE has zypper (which IMO has a much better interface and supports patches to packages). Arch has pacman (which has fewer features, but is great for normal use on your local machine; I wouldn't recommend it for administrating a server -- not just because it's rolling release). apt has a very janky interface overall; there are better alternatives IMO.


No worries; as I say I am clearly reading too much into the original throwaway quote!


This has actually been made a lot easier. You can now just use:

    apt install something
    apt remove something
    apt search something
instead of

    apt-get install something
    apt-get remove something
    apt-cache search something
The new apt command also has a progress bar. So I've started using it exclusively now instead of the different apt-x commands.


Link to docs for that? Not seeing it on the Debian or Ubuntu sites

https://www.debian.org/doc/manuals/debian-handbook/apt.en.ht...

The Ubuntu 16.04 docs mention using "aptitude install" but the official docs still mention "apt-get install":

https://help.ubuntu.com/16.04/installation-guide/amd64/apds0...

https://help.ubuntu.com/12.04/serverguide/apt-get.html


Yeah I don't know why it is not introduced more. But here are some references from 2 years ago when it was released:

https://mvogt.wordpress.com/2014/04/04/apt-1-0/

https://lists.debian.org/debian-devel/2014/04/msg00013.html

And the HN post I made because the previous comment got so many upvotes that I thought lots of people didn't know about it. https://news.ycombinator.com/item?id=11229406


Now, I was wrong, apt is indeed already in the handbook.

"6.2. aptitude, apt-get, and apt Commands"

https://www.debian.org/doc/manuals/debian-handbook/sect.apt-...


There is a manual page. Nobody updated those fancy doc things.

http://manpages.debian.org/cgi-bin/man.cgi?query=apt


Original post author here, Nuno Brito

Except that no package named "remove" exists today in mainstream Linux... So we are forcing every single user across decades to use "install" because a "remove" package is prophesied and we couldn't reserve this keyword. Well, that's XKCD material... :-)

But joking aside: if you see the current syntax as good enough, that is OK. The point is usability in upcoming command line tools.


You'd just be replacing one inconsistency with another: you couldn't use any command names as package names or the reverse without some kind of special case, which would be just as bad.

The real solution would be to get rid of the split between apt-get, apt-cache, &c., which is a real UI issue, and just use a single 'apt' command. Then you'd have 'apt install package' - two of the usability issues with 'apt-get' go away if you do that: the need to use a bunch of other tools for no good reason and the '-get' wart at the end.

And guess what: that happened in Debian and its derivatives a fair while ago. It's just that nobody made a big deal of it. Here's the relevant section of the handbook: https://debian-handbook.info/browse/stable/sect.apt-get.html


Agreeing with you.

  apt-get purge
  apt-get autoremove
  apt-get update
  apt-get dist-upgrade
Plenty of other uses for apt-get besides install.


Furthermore, current versions of Debian/Ubuntu have the newer

    apt
command


It's the Start button story again. <get> + <remove> is absurd somehow, and longer. For discovery and ergonomics, all this should be put behind a facade like sys or deb.

    deb install
    deb remove
    deb search | info
    ...
dpkg is still on point, it's less about system integration, and more about archives themselves.


Just use apt.

    apt install
    apt remove
    apt search
    apt update

etc.


After opening the link I expected to see an article from the 90s; unfortunately it's from 2016... I can't believe that people are still going into such debates.


Not all of us were around (as admins) in the 90s.

If it was bad then, and it still (after 20 years) hasn't improved, then what do you expect?


The original article is about thinking on the end-user when designing new command line tools.

Yet the replies seem to focus on why the current "ls" works so well. Ironically, very few can ever find a file in a subfolder without first googling for the syntax... :-/


The article could have been enhanced by highlighting that some shells (e.g. zsh) provide expansion patterns that recurse into subdirectories. e.g.

   ls **/*.txt


or in bash, with the globstar option set:

    shopt -s globstar # e.g. in your .bashrc

    ls **/*.txt


It's probably worth noting that the option is new in 4.0 -- relevant as OSX users are likely on 3.0.


3.0 is ancient, but 4.0 is GPLv3....


This thread perfectly sums up what's wrong with Unix.


Apple not wanting to distribute software that they're perfectly able to distribute for paranoia/anti-GPLv3 reasons is a problem with Unix?


We're blaming "Unix", whatever means, for the stupid decision by one UNIX® vendor to stop shipping updated versions of the tools that form the core of the operating system? :)


In fish at least, I'd just use

        ls **.txt


Thanks for the tip!


Ask your parents or non-geek friends: Given the task of finding the files with the name ending with .txt which of the following two commands would you choose?

  find -name "*.txt"

  dir *.txt /s


How about if you just give them the man page for "find" and the /? page for "dir" and see who can get the command correct first?


Looks like "find" is yet another command with the Examples section at the very bottom. I suppose that's the convention or something, but sometimes I do run into a man page with the examples at the top, and it takes a lot of effort not to jump up and shout "Thank you!" loud enough for the author to hear me.

The brain is a powerful inference engine. A few well-chosen examples can teach 95% of normal people 95% of what they need to know (numbers pulled out of my butt, 2016). Put good examples at the top and marvel at how many more people actually seem like they RTFM.


And if we're going to be using the commandline, I'd much rather the unixy

    find PATH -name FOO
than the powershelly

    Get-ChildItem -Path PATH -Filter FOO -Recurse
I mean "dir FOO /s" is simple and all, but powershell was created because cmd was deficient in many areas.

The blogpost referenced in the article is also stacking the deck a bit, as some of the 'complex' commands are normal commands, but with the verbosity turned up - the rsync command has three flags for increasing verbosity...


    ls -r PATH FOO
if you want things to be short. Since parameters can often be given positionally instead of by name, you can shorten them as long as they remain unambiguous, and there are aliases – it seems like you're stacking the deck a bit as well.

PowerShell has over cmd (and WSH):

- Consistency in command handling, naming and discoverability

- A robust scripting language

- A modern shell

- An embeddable scripting environment (most GUI configuration stuff on Windows Server these days is PowerShell in the background; PowerShell is also the NuGet console in VS)

- Extensible

- The core of the language is built up from mostly orthogonal commands which work completely the same regardless of the context

- Interoperable with native commands, .NET, COM, WMI, arbitrary item hierarchies (file systems, registry, etc. – comes back to consistency and the point above)

- SSH-like capabilities built-in. Running a command or a series of commands over hundreds of machines is no harder than doing it locally.

The (perceived) verbosity can usually be tamed quite a bit with aliases and shortening parameters (or using them positionally), which is what you'd do most of the time when working with the shell once familiar with it. I guess you're not yet familiar with PowerShell or never used it, and that's okay. Because the long commands are in many, many cases self-describing to the point that you don't have to guess at all what they mean or do. This also helps with learning or communicating with others.


> I guess you're not yet familiar with PowerShell or never used it

This is correct - my example wasn't intentionally complex, but the result of googling and looking at the top answers (cmd was deficient, and I was wondering how you'd do the same thing in the 'proper' Windows shell). I'm glad I got the responses I did - I didn't mean to deride PowerShell, but to show that if you want more power, you end up with more complex commands - and at the CLI, I'd rather be typing the Unix example than the PowerShell one.

In any case, powershell should be better than the other shells we're talking about - it's from 2006, and the others are considerably older.


> powershell was created because cmd was deficient in many areas

And we didn't just copy bash, because bash was deficient in many areas.


On the other hand, the PowerShell version returns objects that can be manipulated in a safe way, whereas the find version relies on "-exec" being a feature of find (talk about bloat), which is string-based (if you're not extra careful, it breaks with spaces/tabs/newlines/semicolons/dollars/backslashes/... depending on what you use it for) and spawns one process per file (really inefficient).
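
For what it's worth, find itself has safer spellings for this; a quick sketch, with grep standing in for whatever processing you'd actually do:

    # let find batch the filenames into as few processes as possible
    find . -name '*.txt' -exec grep -l 'pattern' {} +

    # or NUL-delimit the names so spaces/newlines in them can't split anything
    find . -name '*.txt' -print0 | xargs -0 grep -l 'pattern'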


or

    ls -Recurse -Filter FOO PATH


I actually do this instead:

    find . | grep "\.txt$"


Now that you mention it, the original commands being compared are not equivalent: dir does a case-insensitive search while find is case-sensitive. The equivalent would be:

  find . | grep -i "\.txt$"


You can do a case insensitive search with -iname: find . -iname "*.txt"


Neither, they would search in Finder or Explorer.


It's funny how minds differ. As a kid I was mesmerized by globbing and pipe operators. GUIs didn't have the same magic appeal. The List monad was calling me from the future past.


I cannot see how this article is defending UNIX when it only talks about a single utility. It would be a better "defense" if it mentioned, for example, the power of compositionally combining various commands through pipes, each of which does one thing well. Inputs and outputs, remember?

The post the author is responding to is uninformed and reads rather like a rant. The DOS vs UNIX comparison does not work, and I don't know how to take the `apt-get` example seriously: you could easily define something like `alias ai='sudo apt-get install'` to achieve the `ai something` magic with nothing more than aliases, which DOS does not even provide.
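
For instance, a minimal sketch (the alias names are purely illustrative):

    # illustrative shortcuts; pick whatever names you like
    alias ai='sudo apt-get install'
    alias au='sudo apt-get update'

    ai htop    # runs: sudo apt-get install htop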


Powershell also has pipes, and works with objects rather than with text. It can be very powerful and much easier than having to use awk, sed, etc.


The mentioned blog post is horribly ignorant, and lacks almost any valid points.

find, in particular, very much has a fine UI, and I dare you to process, and not just list, the files with cmd.


> find, in particular, very much has a fine UI, and I dare you to process, and not just list, the files with cmd.

I'm not really sure why he is comparing find with cmd in the first place. Nobody thinks cmd is good; most anybody actually doing anything in a shell on Windows would be doing it in Powershell.

In that case, I would do something like:

    Get-ChildItem pathname -Filter test.txt -Recurse

Any processing I want to do is easy, because I'm getting back objects and not just text. Say that I want to get a hash of each file named test.txt.

    Get-ChildItem C:\Users\Amezarak -Filter test.txt -Recurse | Get-FileHash


Nuno Brito, blog post author here.

Let's make a little test. You pick ten Linux users at the next FOSDEM and ask them how to find a file in a subfolder. This will be called the "brito test".

If only 2 people don't know the answer, you are correct and simplifying the command line is not really needed.

If 5 of them don't know how to do it, they will be branded as "horribly ignorant" Linux users.

If 8 of those people fail, we keep closing our eyes and repeating that everything is OK.

If 10 out of 10 people that you ask fail this question, well, time to ask another 10 until you get a positive ratio of non-horribly-ignorant answers.. ;-)

And btw, the blog post is about the design of future (upcoming) command line tools, not about changing "ls" or any of the other examples given. For example, thinking about the most used function of a tool and making that as simple as possible to reach.


I do like find's UI as well.

As an aside, I remember reading an article about how "the original Unix guys" found the syntax somewhat odd when find came about. But the command was useful enough that they kept it.

(The article was about how inconsistent the Unix commands are regarding syntax, and I think its conclusion was that it is much more important to design syntax to fit the problem than to maintain a superficial consistency with other commands).


    for /r "C:\some path" %F in (*.exe) do process "%F"
forfiles also exists, which works similarly to find with regard to passing the list of files to another command.


ls -Recurse -Include "*exe" $PATH | % { something $_}


Use -Filter instead of -Include unless you need fancy wildcards. It's much faster because it gets passed to the FileSystem provider and filtering is applied at that level already.


In my opinion, ls is one of the more broken bits of Unix. But that aside, an arguably more unixy way to solve this (even if no proper UNIX supports it out of the box) is a recursive wildcard, i.e.

    ls **/*.txt


I'd say this is less unixy, because it relies on the shell to walk the file tree, rather than delegating to a utility whose job is to do just that.

No comment on whether it's actually better or worse, mind!


I was going to make a similar argument, but then I lost faith in what ls' job is, if we want to walk this really pedantic path.

After all, `echo **/*` is going to give you an ugly dump of space-separated files; while `ls **/*` will give you a pretty list with colours, ownership, permissions, date modified, is directory/exec/symlink, etc. with their respective flags or according to your alias.

I don't think it's cheating to argue that `ls`'s job is to format the files given, or the contents of a given directory.

Edit: Although possibly it is, because `man ls` tells us "list directory contents".


bash supports that, just do

  shopt -s globstar
to enable it.


Regarding #7: I see this point repeated often without further clarification. What exactly is more buggy?

I can tell you from personal experience that with a reasonably modern Linux distro, my laptop works out of the box without any problems and I've not experienced any significant system-level bugs in a long time.

Meanwhile on Windows, my sound doesn't work at all when I wake the laptop from sleep, Windows Update tries to override the GPU driver with an older version, and every time I have > 10 Chrome tabs open the whole system locks up. Not to mention that the Windows registry is still a complete mess and trying to COMPLETELY remove a piece of software is an impossible task.

I am not saying Linux is perfect and yes, Windows works better in the games department, but I am not sure I'll call Windows less buggy.


I can never remember find's strange command line arguments, so I end up writing the easy replacement:

  find . |grep \\.txt
The backslashes are not very intuitive, but if you miss them out entirely, you'll still likely get good enough results.
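
To make the difference concrete (my own illustration, with the $ anchor added for strictness):

    find . | grep txt          # also matches txtfiles/, notes-txt, context.txt.bak ...
    find . | grep '\.txt$'     # only paths that literally end in ".txt"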


Haha! Yes! +1 for this. I do exactly the same thing, or even just grep txt. The find command is just something I can't get into my muscle memory (despite using Linux for more than 15 years).


For files already on disk (not an external disk) and older than a day, you can run

`locate` or `locate --regex`

It's faster since it uses a path database. (If you have root, you can run sudo updatedb to update it beforehand.)
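
A quick sketch of that (assuming an mlocate-style locate is installed):

    sudo updatedb               # refresh the path database first, if you have root
    locate --regex '\.txt$'     # list indexed paths ending in .txt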


    dir() {
      # make the Windows guy happy
      find ./ -type f -name "$1"
    }



hrmph. I never knew about it.


Not trying to be cheeky, but I have genuinely found it very illuminating to read the _proper_ documentation for all the GNU software that I use. Not the man pages, but the documentation available 'online' via info, and 'offline' on the web for those who are info-phobic. :)

When I find my mind wandering and I am tempted to waste time on reddit/Hacker News, I try to discover a new feature, or a new aspect of one with which I thought I was already familiar.

The info program (hated it when I first used it, now I prefer to use it to access the documentation of stuff that I have locally installed): https://www.gnu.org/software/texinfo/manual/info-stnd/info-s...

Readline User Manual (that thing you use when you press Ctrl-R to search your Bash history): https://tiswww.case.edu/php/chet/readline/rluserman.html

Findutils (the find and locate commands): https://www.gnu.org/software/findutils/manual/html_node/find...

Coreutils (ls, rm, sort, tail, tee, the good stuff): https://www.gnu.org/software/coreutils/manual/coreutils.html

glibc (an incredible manual that is very readable): https://www.gnu.org/software/libc/manual/html_node/index.htm...

Bash (actually has a good man page, but it's nowhere near as in depth as the reference manual, nor does it explain features with language designed for those who don't know that they already exist): https://www.gnu.org/software/bash/manual/html_node/index.htm...

Groff (have you written a man page for the last program you wrote?): https://www.gnu.org/software/groff/manual/html_node/man.html...

Gcc: https://gcc.gnu.org/onlinedocs/gcc/

GNU Make (bet you didn't know you could extend it with Guile and native code...): https://www.gnu.org/software/make/manual/html_node/index.htm...

Automake (for many years the autotools were something deeply mysterious to me that I thought I'd never understand, then I read the manual for them and now I can't get enough of them, and the shortcomings of other build systems cause serious pain): https://www.gnu.org/software/automake/manual/html_node/index...

Autoconf: https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/a...

libtool: https://www.gnu.org/software/libtool/manual/html_node/index....

The C Preprocessor (has its own manual, who knew!): https://gcc.gnu.org/onlinedocs/cpp/


Use pinfo instead of info.


    > This is not available in the version of bash that comes
    > with OS X
Almost nobody, apart from the people who'd comment "what's Terminal?" on an article like this, should be using "the version of bash that comes with OS X".
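
One common route (an assumption on my part, not something from the article) is to install a current bash from a package manager and make it your login shell:

    brew install bash                                    # e.g. via Homebrew
    echo /usr/local/bin/bash | sudo tee -a /etc/shells   # register it as a valid shell
    chsh -s /usr/local/bin/bash                          # switch your login shell to it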


I don't think it's fair to mention DOS in 2016. There is PowerShell, VBS scripting, etc...


1) The directory names could be seen as a running gag. If you know them, you know them; if you don't, why not learn them?

2) "Mushed together": /bin, /sbin, /usr/bin, /usr/sbin, /usr/local... Well, yes, that may be hard to grasp if your executables are scattered around as on Windows, in every other directory. What a "big" difference.

3) Yes, DLL hell never ever happened to Windows users - never.

4) Oh yes, it's much better to have one registry where nobody knows which key is for what. And if the registry is broken, the whole system doesn't even run any more - yes, that sounds so much better. And no, there's no graphical frontend for any of this in webmin, of course.

5) Shell scripts are programs, and you can use them for scripting. What problem do you have with that?

6) X11? As far as I can tell it runs here without trouble, and updates/upgrades are just an apt-get upgrade away.

7) The IT backbone is servers, and most servers run Linux. That should give you a hint.

Your arguments aren't arguments; they are just your opinion, not backed by any knowledge. So welcome to the land of do-gooders.

Reliability is a word Windows users have only learned in the last few Windows incarnations. Long-running servers are the norm with Unices; that's hardly the case for any Windows.

And nearly all malware works on Windows. But hey, who needs reliability when it's all so nice and colourful?


"Since we only use apt-get for installing stuff, why not?

apt-get something instead of apt-get install something"

wat? How dumb do people get?


If you read the blog post carefully, you'll notice that the topic is the usability of upcoming command line tools.

A hypothetical `apt something` follows the pattern found in curl, ping, unzip, ... Correcting badly typed user input is something git already serves as an example of. So labeling one of the parties in a discussion about intuitive command line switches as dumb, as if those switches were immutable, should be done with more substance.

btw, when reading your reply this was the first thing coming to my mind: http://dilbert.com/strip/2007-11-16

Have fun. :-)


Is Bash really better at wildcard expansion than cmd? I mean, sometimes you don't want to do wildcard expansion in the shell. `copy *.txt *.bak` would be much harder to write in Bash, I suppose.


For the record, I would probably write this:

    for i in *.txt; do
        cp "$i" "${i%txt}.bak"
    done


People are still having windoze vs. unix fights? In this day and age?


There are good things about the Unix shell, but the fact that the shell, not the program being called, expands wildcards, ain't one of them.


I like consistent and predictable rules for how wildcard arguments are expanded. There's no way that thousands of programs would all get it right if it were up to the developers themselves!

That said, it would perhaps have been nice if, in some other timeline, glob expansion had been performed by a separate command, so that «echo 🞳» would print out a literal 🞳 character, and «glob echo 🞳🞳» would print out the result of the expansion.

glob could then accept flags to modify the rules for expansion, such as enabling the common shortcut for recursive expansion, rather than the user having to modify the behaviour of wildcard expansions by setting global variables.
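
A rough, purely hypothetical sketch of what such a `glob` front-end could look like as a bash script (the name and behaviour are invented for illustration):

    #!/usr/bin/env bash
    # usage: glob COMMAND PATTERN...    e.g.  glob echo '**/*.txt'
    # (the pattern is quoted so the calling shell leaves it alone)
    shopt -s globstar nullglob    # expand ** recursively, drop patterns with no match
    cmd=$1; shift
    matches=()
    for pat in "$@"; do
        matches+=($pat)           # unquoted on purpose: this is where expansion happens
    done
    exec "$cmd" "${matches[@]}"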


This. I can't understand why it's handled at the shell level rather than either at the OS level (some API to expand) or in a separate system tool.

What happens if I make a shell that does expansion slightly differently to the existing shells? That can't work well, can it? So there is already a set of rules for expansion and all shells must implement them exactly? That sounds only slightly better than the apps trying to do the same thing.


If programs had to use an API to expand paths then developers would screw it up. Just look at the clusterfuck that is the command prompt on Windows!

In theory this applies to shell developers as well; however POSIX specifies how expansions should work, and there are far fewer shells than there are programs that those shells launch; and shells that don't conform to POSIX are less likely to find adoption because they will break user expectation.


Yes, I know shells are fewer than programs, but shouldn't the OS provide an implementation if something is specified by POSIX? I can't see the downside of at least several shells using the same OS-provided function for it.


GNU/Linux kinda does.

https://www.gnu.org/software/libc/manual/html_node/Pattern-M...

Don't know if they are commonly used by shells however.


> What happens if I make a shell that does expansion slightly diffrently to the existing shells? That can't work well?

We have different shells with different expansions (and a myriad other differences) and it doesn't work well.

In practice, the result is that people write bash scripts, not shell scripts.

(Though they often don't know, and so use #!/bin/sh anyway, and so the scripts break whenever the default shell isn't bash.)
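
A tiny example of that breakage (my own illustration): this runs fine under bash but dies under a POSIX /bin/sh such as dash.

    #!/bin/sh
    files=(*.txt)         # arrays are a bashism; dash says: Syntax error: "(" unexpected
    echo "${files[0]}"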


Do you think the programs themselves should do the rest too? Tilde expansion? Quoting and escaping arguments? Word splitting? Variable substitution? Etc?

I would go mad working in such an environment.


A lot of what this article talks about is why systems like msys2 exist: to bring a *nixy command line to Windows.


Only useful for compiling software that depends on Unix toolchains.

Modern Windows has PowerShell, which is leagues ahead of bash in terms of functionality and power.


Not really. I use msys2 every day, and I've never used it to compile things.

And PowerShell is kind of useless since I can't use it on any other computer I interact with on a daily basis.

I use a github-synced environment that includes things such as my zsh and vim configs.


And in the last example he gives, I think you don't have to wrap things in quotation marks if the alias is done like this:

   alias lsr='find . -name $1'


As jstimpfle mentioned, that doesn't work, which is why the argument is left off the alias. In general, you can do that in a function instead of an alias:

    lsr() {
        find . -name "$1"
    }
Wrapping variable expansion with double quotes is a good habit, so spaces are handled properly.

Also, if you are using a modern-ish bash, you almost always want to use "$@" when there could be multiple arguments. The double-quoted @ special variable is guaranteed to always expand as multiple args, but with spaces handled correctly:

    foo() {
        bar --quux=42 "$@"
    }

    foo "a b c" "Spaces in my filename.txt"


That doesn't work the way you think. Aliases don't receive arguments. The $1 will be expanded to the first positional argument of the surrounding environment when you call the alias -- not the first argument of the alias invocation.
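
A quick way to see it (illustrative):

    alias lsr='find . -name $1'
    set -- '*.md'      # give the *current shell* a positional parameter
    lsr '*.txt'        # runs roughly: find . -name *.md '*.txt'  -- not what you wanted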


Seriously? I can't think of anything more pointless.


Trying to improve our command line interfaces is not pointless.


The equivalent of "dir *.txt /s" in UNIX is "ls -R|grep txt$".


I find that syntax "good enough" for my own usage. YMMV, considering it outputs not just files with txt as their extension, but any file whose name merely ends in txt.

Someone else mentioned a workaround by adding the escaped dot in order to be fully equivalent to the dir syntax.



