We’re Ben, Aanand, Carl, Eva, and Mark, and we made the Command Line Interface Guidelines.
Earlier this year, I was working on the Replicate CLI [0]. I had previously worked on Docker so I had a bunch of accumulated knowledge about what makes a good CLI, but I wanted to make Replicate really good, so I looked for some design guides or best practices. Turns out, nothing substantial had been published since the 1980s.
On this search I found a superb blog post by Carl about CLI naming [1]. He must be the only person in the world who cares about CLI design and is actually a good writer, so we teamed up. We also were joined by Aanand, who did loads of work on the Docker CLIs; Eva, who is a technical writer, to turn our scrappy ideas into a real piece of writing; and Mark, who typeset it and made a super design.
We love the CLI, but so much of it is a mess and hard to use. This is our attempt to make the CLI a better place. If you’re making a tool, we hope this is useful for you, and would love to hear your feedback.
Some of it is a bit opinionated, so feel free to challenge our ideas here or on GitHub! [2] We’ve also got a Discord server if you want to talk CLI design. [3]
If a user of your CLI tool has explicitly requested the built-in help text via --help (or an equivalent switch that requests help) then that help text shall be output on stdout.
It always peeves me to do:
command --help | less
only to find that the explicit request for help (the --help switch) has output the help text on stderr, and I then have to redo the invocation:
command --help 2>&1 | less
to cause the help text to actually be piped into less.
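A minimal sketch of that convention in Python, for a hypothetical tool with hand-rolled flag handling: an explicit --help request goes to stdout and exits 0, while a usage error prints the same text to stderr and exits non-zero, so `mytool --help | less` behaves as expected:

import sys

HELP = """usage: mytool [-h|--help] FILE

Do something useful with FILE.
"""

def main(argv):
    if "-h" in argv or "--help" in argv:
        sys.stdout.write(HELP)       # explicit request for help: stdout, success
        return 0
    if len(argv) != 1:
        sys.stderr.write(HELP)       # usage error: stderr, failure
        return 2
    print(f"processing {argv[0]}")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))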
The result was so convenient that I added -man to the other command line tools. I thought about adding clickable URLs to the error messages, but it just made the output too cluttered.
Yes, but many apps show help text when incorrect arguments are provided, so it's an error in those cases. The dev probably doesn't want to bother with using different output channels for the same help text.
Your CLI should also have a man page, installed in the standard way. IMO, -? or --help should be fairly terse, ideally one screenful or less. More complete documentation should go in the man page.
Edit: wrote this before I read "Don’t bother with man pages." Strongly disagree. Relying on web docs leaves your users in the lurch if they are working on a system with no external network.
> "Don’t bother with man pages." Strongly disagree.
Me too. I don't want to jump to a different program, plus man pages are greppable. And you want the man pages that apply to the machine you're using (versions, etc) — especially important when you're using a remote machine (think how common the idiom is to do `ssh foo man blah | less`).
> "Don’t bother with man pages." Strongly disagree.
Me as well. Though I am not against well-linked documentation on the web. But man pages provide several unique advantages.
- Because man pages are installed with the same package the software comes in, their version matches the version of the installed software exactly.
- Man pages don't require an internet connection or a running server. The internet connection can be a problem in certain kinds of enterprise networks, on embedded systems, or during my commute on the train. A running server costs money that somebody might not be willing to spend for as long as I would like to keep using the software.
- Linking from the executable to the web is tricky. Does the terminal support clickable links? Which browser should I start? Does it help the user in any way to start a browser on a far away machine that he connected to through ssh, mosh, telnet or morse code over avian carrier?
- It is very easy and safe to open a man page for an unknown command. You can be sure the command is not executed by accident. If I have to type `some-command --help` I am never sure if this is one of those commands that doesn't accept --help and does something stupid instead.
Despite all these advantages of man pages or offline available documentation I also like good online documentation. https://www.postgresql.org/docs/current/ comes to my mind when thinking about excellent software documentation. Granted it is not a one-minute tutorial for the latest newbie. But for someone using that piece of software for years, its value cannot be overstated imho.
While I regularly rely on `man` (and would agree with you for systems that have it), I'd also like to add that you should consider people on other systems, as well.
I'm still stuck on Windows for work, and e.g. the way Git (at least the Windows version from git-scm.com) handles this is problematic.
Something like `git --help` will open the URL to a help page (that also takes ages to load) in the browser. Manpages don't exist and there is no useful substitute. And this in a Git distribution that ships with its own GNU environment (MSYS2). And then, there's Git LFS, for which I haven't even found a working help command yet.
While the original idea might have been that the terminal way of looking things up is too confusing for Windows users (which I find ironic, given that said users installed a terminal program), I find it still makes Git even more arcane on this platform.
Edit: Coming back to `man`: Git is in the useful position of shipping their own GNU environment. That means, they could still introduce manpages. Other, small CLI utilities usually don't have that luxury. In those cases, some kind of access to the information in the manpage would still be very handy, even if it is just in `tool --help`.
If Git on Windows can open a browser on a URL, why not just have that be a local HTML file? At least that works if there's a GUI and browser installed (rarely not the case on Windows, but if not you could include lynx and use that in the terminal as a fallback).
I think it does do that, actually. Definitely better in that it can be used without internet access, but I will admit that I have been guilty of thinking "if I'm going to be opening a browser anyway, might as well just google the question and get more targeted help."
Pandoc has the ability to output manpages from .md or .rst documents, so if you already have a web version rendered from those, I see no reason not to include them. I absolutely love software with manpages, and leaving them out just because other platforms (or simply just Windows) don't have them is a bit of an insult.
Ouch. I don't know if anything changed in the last ~10 years, but opening .md files used to start the Windows help, with an indexing window that never ended. And if you waited long enough (it could be hours), it would open a small fragment of your help, with navigation that could lead only to the system help.
I’ve been learning Perl lately, and was surprised and pleased to learn that the built-in pod2usage() functionality for converting POD documentation in your program into --help output also automatically supports manpage output. So the Perl programs I’ve been writing all include this as part of their arg-parsing loop:
use Pod::Usage;                                # module import (outside the loop)
pod2usage(-verbose => 1) if /^(-h|--help)$/;   # -h/--help: short usage text
pod2usage(-verbose => 2) if /^--man$/;         # --man: full manpage-style output
That last pod2usage call is all it takes to produce a manpage too. Then it's just a matter of writing the POD documentation for it.
It is admittedly a little unusual to consume a manpage by typing `someprog --man` instead of typing `man someprog`, but it’s very convenient for self-contained scripts.
I write docs as a plain text file (in /usr/share/doc); it works everywhere, you can view it with anything in any way you want, locally or on the web, no converters needed.
An excellent book! It was one of the first software development books I read and still shapes my programming style. I probably ought to read it again though, I suspect I would understand a lot of it a lot better now...
Since you bring up Docker, this little behavior has always baffled me:
$ docker image ls -h
Flag shorthand -h has been deprecated, please use --help
But:
$ docker image -h ls
Flag shorthand -h has been deprecated, please use --help
Flag shorthand -h has been deprecated, please use --help
Flag shorthand -h has been deprecated, please use --help
What's up with that? I'm guessing Cobra nonsense, because everyone ends up wrapping Cobra's weird API differently...
I know you're talking about how it's repeated three times, but something else here really bothers me.
I understand if they're deprecating it because they plan to replace it with something else, but if they don't then I'll be pretty disappointed. From their docs, it looks like some of their other subcommands have shorthands which conflict with `-h'? Fine, I guess, but that just means when `-h' does work it'll shoot people in the foot with that other behavior.
Software that clearly knows what I want, but refuses to, annoys me so much. For example,
$ python3
Python 3.9.0 (default, Dec 2 2020, 10:34:08)
[Clang 12.0.0 (clang-1200.0.32.27)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> quit
Use quit() or Ctrl-D (i.e. EOF) to exit
because they added `quit' as a top-level variable with a `__repr__' defined which returns that string. Do anything else! Make the `__repr__' definition quit the program or something, test if it's being run at the top level in the REPL and quit, special-case the REPL input so that if only `quit' is entered, it quits. Heck, maybe just prefill `quit()' on the next line so I just have to hit return. Do anything but instruct me to do what you should have done in the first place.
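For reference, the message comes from the Quitter object that the site module installs as the quit builtin; a quick check in a recent CPython (exact wording varies by platform):

>>> type(quit)
<class '_sitebuiltins.Quitter'>
>>> quit.__repr__()
'Use quit() or Ctrl-D (i.e. EOF) to exit'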
I don't agree. Fail fast, fail hard, or you deliberately lie to and confuse your users.
If I make a mistake - tell me. If you allow me to do it slightly wrong then suddenly in my universe there is no consistency between commands and that is way worse than being told what I did wrong.
$ python3
Python 3.8.6 (default, Nov 18 2020, 23:56:33)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> quit
Use quit() or Ctrl-D (i.e. EOF) to exit
>>> q = quit
>>> quit = "foo"
>>> quit
'foo'
>>> quit()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable
>>> q()
$
He doesn't claim that the Docker CLI is well designed, only that he worked on it. The Docker CLI is pretty bad indeed, tar-tier. Its web docs are also an example of butchered Web 2.0.
This is great, thank you! Under pagers, maybe add a note to respect PAGER if set? I prefer relying on terminal scrollback in some situations, and e.g. in psql often set PAGER=cat. Git respects this, too.
Most of what you say is useful common sense, but the bit about not bothering with manpages is plain evil. If anything, the man page must be written before the program interface!
Is there a usage statistic for how often man pages are read? I violently agreed with the article here. There are zero times I wouldn’t rather switch to a browser. Even if I had to dig up my phone from my pocket and search for help there, I’d much rather do that.
Also, more and more tools are cross-platform and man pages aren't a thing on Windows (are they?).
I don't have any statistic, but I use them all the time.
Under vim's default configuration, manpages are a single keypress away. If you press "K", vim opens a window with the manpage of the word under the cursor. It happens instantaneously, and it uses the same font and color scheme as your code. I would really hate it if it opened a browser!
Really? So if I make a man page for my CLI tool you would assault me physically?
You should take your own description of your beliefs as a red flag to reassess them.
I use a few tools that don’t have man pages and I always find it jarring that I have to leave the CLI to get some basic usage info.
There is already tooling to generate manpages and online/HTML docs from a single source. The reasonable thing seems to be “do both”. I’m not persuaded by “Windows is leaking.”
It always seems to fall through the cracks in these discussions, maybe because there's not a lot of folks who run the distro, but Gentoo has very nice command line tools.
Do you mean emerge et al? I've been running Gentoo on my main machine for >1 year now, and I'm still having to check the manpage for flags constantly... I guess I wish that it was more explicitly "subcommandy."
The "commands" I create are, most often, small tools for my own use, so I don't worry about making the names short.
I always try to make the name a verb, or include a verb, e.g. "fetchbookmarks", "deleteduplicates"; one exception would be in the case of converters, in which case "convert" is implicit, e.g. "csv2xml" or whatever; and there may be other cases where the verb is implied.
Huffman coding is important, though, and I have made some tools I use a lot, so I gave them short names. I use gvim a lot, and the wrapper I wrote for it (which does smart finding - it will load foo.cpp wherever it is under your cwd, as long as there's only one) I named g. I don't expect to distribute this tool, so I don't care. :-)
I strongly disagree with the “ Don’t bother with man pages” advice. It’s extremely annoying when tools don’t provide a manpage. The overhead of opening up a browser to reference command is cumbersome and highly annoying.
There are tools such as pandoc that make generating manpages from various source formats very easy.
Absolutely - I can't buy into the "don't write man pages" idea.
For many years I've had this function in my RC files for macOS:
pman() { man -t "$@" | open -f -a Preview; }
It opens the given man page in Preview, typeset beautifully. No such thing is available for random text printed to the console, nor is a console pager (as recommended by this article) an acceptable substitute for this.
... replacing '{some-pdf-viewer}' with whatever PDF viewer works for you. There is probably one that will take the document on stdin, but I am not aware of one.
I realize I'm late to the party here, but thought it was worth posting anyway. (edited for typo)
Maybe I'm suffering from similar syndrome, but I'm under 30 in human Earth years. However, I very much agree with you and would take it a step further to say that programmatically opening the browser with the help flag is also very unwelcome (I'm looking at you, git >.> )
I also consider this terrible advice. I don't want to jump to a different program, plus man pages are greppable. And you want the man pages that apply to the machine you're using (versions, etc) — especially important when you're using a remote machine (think how common the idiom is to do `ssh foo man blah | less`).
The sentiment was meant to be: "Make `command help` as good as a man page. More people will read that." We didn't mean to suggest to use web pages instead.
We also didn't mean not to use man pages at all. We just find more people use the built-in help and web pages, so if you have limited time/resources, it's better spent on those things.
In retrospect, perhaps it was worded a bit strongly. I am enjoying the debate, though. The meta point, and part of the reason for this document to exist, is that perhaps it's a good idea to question our 30-year-old traditions and see if we can come up with better ways of doing things. This seems to have got people talking again. :)
Regarding web pages, some people work in highly controlled environments where you only have access to the man pages because you are working on a limited intranet designed for sensitive data, e.g. defense contractors or people working with sensitive health data. So having colocated resources can be extremely valuable for a small subset of users and shouldn't be completely discouraged. Although I agree with your assessment that command help and web pages are where the priority should be, since that meets the needs and expectations of a much larger proportion of CLI users.
Having good `command help` is super important. Agree with the assertion that in-built help is the first and most important.
There have been so many tries to make something better than man pages, and all of them tend to fall down because the solution is usually super complex. Usually, you are dealing with HTML or some archeo-crontastic typesetting format (which, honestly, gets you man + hyperlinks and some better formatting). I guess the dream of GNU info is still alive.
They're not saying you shouldn't provide command-line help. Just that you should deliver them through a `help` subcommand and/or through a `--help` flag (like `git` does) because people don't find man pages and because man pages don't work on every platform.
Right. I still prefer terse help text from the command line, and more in-depth in the man pages.
I find commands with copious -h output and having to pipe that into a pager or using scroll back a far worse UX than just opening up a man page that’s easily searchable / scrollable.
Yea that's reasonable, but there are pros and cons and it's also definitely an opinion :).
I don't see the problem with piping help text through a pager (that's literally all `man` is doing). Alternatively, a tool can use a pager automatically (as `git help ...` does). So `man` isn't required to have a pager. Whether you should use a pager by default is a different argument.
That said, I find pagers annoying because it's hard to switch back and forth between typing a command and the docs. That's way easier to do with scrollback. Although someone in this comment thread also mentioned the `PAGER` env var, and that you can set `PAGER=cat`, which is a good pro-tip. So, TIL.
Still, if you're concerned with command-line ergonomics for a broad audience I think there are more people that either 1) know how to `| less` or 2) prefer or don't care about a pager than people who are gonna automatically just know they can do `PAGER=cat man <some-subcommand-of-my-cli-tool>`.
Plus, there's still the cross-platform argument.
Oh, and OP doesn't mention this, but man pages also complicate install, uninstall, versioning, and having multiple versions of a tool installed simultaneously. Built-in help just works and is distributed with the standalone binary. To use man pages you have to place additional artifacts on the system and make sure they're managed correctly (e.g., versioned). It's additional complexity with basically zero benefit.
So yea. Seems like, as a rule, their guidance is reasonable. I don't see a lot of benefit to using man pages other than "because that's how it's always been done".
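Picking up the $PAGER point from a couple of paragraphs up, here is a minimal sketch (the page() helper and its fallbacks are my own assumptions, not anything from the guidelines) of paging long output only when it makes sense:

import os
import shutil
import subprocess
import sys

def page(text):
    # Never page redirected or piped output.
    if not sys.stdout.isatty():
        sys.stdout.write(text)
        return
    pager = os.environ.get("PAGER", "less")
    # PAGER=cat (or a missing pager) degrades to plain scrollback.
    if pager == "cat" or shutil.which(pager.split()[0]) is None:
        sys.stdout.write(text)
        return
    # shell=True so values like PAGER="less -FRX" keep working.
    subprocess.run(pager, input=text, text=True, shell=True)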
> I don't see a lot of benefit to using man pages other than "because that's how it's always been done".
Man pages are organized, categorized, indexed and searchable. It's a common format that can be read by multiple pagers (including dedicated GUI browsers for those so inclined). They're also available when the command might not be. There are definitely benefits over command --help other than the syntax of the invocation.
> I find pagers annoying because it's hard to switch back and forth between typing a command and the docs.
less -X
If people don't know about man, why not teach them? A simple line in the summary that -h gives, saying: For more information, read the man page: man foo.
> Plus, there's still the cross-platform argument.
This just makes me sad. I don't know exactly where we'd be if every computer system didn't feel it needed to be garbage just because Windows is. But I don't think it'd be quite as bad as where we are.
> That said, I find pagers annoying because it's hard to switch back and forth between typing a command and the docs.
I tend to live inside byobu on the command line, or "pure" tmux/screen depending on availability. When opening a man page or similar I do it in a fresh pane so there is less back & forth.
Or of course usually you are using the CLI within a GUI, so you have the option of a separate terminal too, or a web browser to the online version or other docs, though when using an external resource rather than the local man page there is the slight gotcha of making sure versions match (the main docs online may be for a newer revision than the one your OS currently has in its package repo).
Man pages improve discoverability. You only find --help contents if you already know which command you want. Man pages are searchable (with `man -k` or `apropos`).
This is most relevant when it's a tool that isn't going to be immediately obvious anyway (if I'm trying to figure out how to do XYZ on heroku, it's not unreasonable to expect me to look at the heroku program I installed), and especially when it's a tool that might be installed by default or installed and forgotten about.
Perhaps prioritize a --help flag over a man page, but add both if time permits. In any case, a man page is static text and much easier to implement than --help, especially if you have multiple help modes (e.g. verbose or different sets of commands).
Also manpages are hyperlinked to each other. You can easily say "see some other page (3)" and an html viewer will automatically make that into a hyperlink. Try the man: protocol in Konqueror. They are an extremely useful resource where you can put a lot more information than in the output of "-h".
I found it funny how just a few lines down, there's a recommendation to use formatting in your help text, and to "try to do it in a terminal-independent way". This is followed by a Heroku example closely following the manpage style.
I think distros should disallow binaries that do not have man pages. I remember on SunOS 4.x, for example, pretty much everything in /bin and /usr/bin had man pages. You wonder what something does? Read the man page. If you produce a binary and fail to produce an accompanying man page, you have committed a crime against humanity.
These are some pretty good guidelines! Many thanks to the authors for writing them up.
Some nitpicks (what would HN be without nitpicks?):
> If your command is expecting to have something piped to it and stdin is an interactive terminal, display help immediately and quit.
I disagree with this advice: being able to spoon-feed input into a program is extremely useful, and is part of the "conversational" CLI paradigm that the authors mention above. Commands that silently block on `stdin` are frustrating, but the right solution is to check `isatty(3)` and print an informational message rather than killing the program entirely.
> Check general-purpose environment variables for configuration values when possible
Consider adding `$IFS` to this! Commands that support different output models based on `isatty(3)` often neglect to support `$IFS`, making them more difficult to use in pipelines. This is especially handy in pipelines that need to deal with messy or untrusted inputs; I regularly use `IFS` with the ASCII field separator bytes.
> Don’t bother with man pages.
Please do bother with them! Nobody needs to write raw roff or troff in 2020; there are plenty of high quality manpage generators[1][2][3] that will turn your Markdown/ReST/whatever documentation into sensibly formatted manpages. manpages are much easier to search than the medley of pseudo-formats that CLI tools choose to render their `--help` outputs with.
> being able to spoon-feed input into a program is extremely useful, and is part of the "conversational" CLI paradigm that the authors mention above. Commands that silently block on `stdin` are frustrating, but the right solution is to check `isatty(3)` and print an informational message rather than killing the program entirely
This was the one thing I was coming here to nitpick. I use the "blocking stdin" behavior regularly to pipe copy/pasted text through various commands (various openssl subcommands, for example) and would be very annoyed if a program decided on its own that I shouldn't be doing that. At most, it should print a message like "Reading from stdin..." or something on stderr, but even that seems like introducing noise to hold the hands of people who don't understand the pipe paradigm.
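A sketch of that middle ground for a tool that filters stdin: keep reading, so spoon-fed input still works, but drop a hint on stderr when stdin is an interactive terminal:

import sys

def read_input():
    # Hint, don't quit: interactive users may be about to paste or type input.
    if sys.stdin.isatty():
        print("reading from terminal; type or paste input, then press Ctrl-D",
              file=sys.stderr)
    return sys.stdin.read()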
The man macros for roff/troff/groff are not even particularly complicated. Anyone with half an hour to spend learning them could write a man page without needing additional tools.
I agree with this (and I maintain a bunch of manual troff, in both senses of the word "manual"), but I also think it isn't the point: the point is that you don't need to learn an additional markup/macro language to produce high-quality manpages.
IME, the tools that are missing nice manpages are less than a decade old and have very good online documentation, particularly in the form of community-assisted ReST or Markdown docs. Most projects would rather just compile those docs into another format than introduce a split maintenance load for manpages.
- if you want to make it look nice, use ANSI escape codes for color rather than emojis. Even then, don't use color alone to convey meaning because it will most likely get destroyed by whatever you're piping it to.
- please take the time to write detailed man pages, not just a "--help" screen
- implement "did you mean?" for typos (git style) and potentially dangerous commands
- separate the interface into a tree of subcommands (Go/Docker/AWS style) rather than a flat assortment of flags
- if you are displaying tabular data, present an ncurses interface
- (extremely important) shell completion for bash and zsh
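On the last point, one low-effort route for a Python CLI is the third-party argcomplete package; a sketch, assuming argcomplete is installed and the script is on PATH as `mytool`:

#!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK
import argparse
import argcomplete

parser = argparse.ArgumentParser(prog="mytool")
parser.add_argument("--format", choices=["json", "table", "csv"])
argcomplete.autocomplete(parser)   # must run before parse_args()
args = parser.parse_args()

Bash users can then enable it with eval "$(register-python-argcomplete mytool)"; argcomplete supports zsh as well.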
No, please don't use escape codes in your output. Use the library that is designed for this purpose: terminfo.
Explicit escape codes are problematic when using any terminal that isn't fully compatible with the xterm control codes, and they don't let me turn those codes off by setting TERM to dumb.
Far too many times have I redirected output from a program to a file only to be bombarded with escape codes, breaking grep and other tools that process the output.
Not to mention the fact that using terminfo is much easier than manually outputting the control codes.
> Not to mention the fact that using terminfo is much easier than manually outputting the control codes.
Strong disagree. I already know many control codes by heart and can write them inside the printf strings. For terminfo, I have to include it, link against it, call bizarre functions, and then the library must be available at compile time, at runtime, and also the shared files at runtime. I have seen each one of these constraints fail for different reasons, and now I simply ignore the existence of terminfo. It is better to not abuse color, make color optional, but if you need some color you just use the xterm control codes that work everywhere.
No, don't bother with terminfo. It just gives you support for terminals that haven't been in use since the Apollo program. Nowadays it's perfectly fine to use ANSI escapes directly, which are, after all, a standard.
isatty and checking for TERM=dumb are sensible (as well as flags to enable/disable colors). Terminfo however is a meh dependency, and since common tools (e.g. git) don't bother either, there is no point in anything beyond that.
I like emojis quite a bit, and I suspect many other people do too. I'd be sad if the CLI tools I use today stopped outputting them, and I feel their output would lose a lot of clarity.
Many people are more visually oriented, and are greatly aided by images and color. A standard `NO_EMOJIS` environment variable could perhaps be used to help both camps, just like `NO_COLOR` is available today.
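A sketch of that kind of opt-out check; NO_COLOR is a real informal convention (no-color.org), while NO_EMOJIS here is, as suggested above, hypothetical:

import os
import sys

def want_decoration(var, stream=sys.stdout):
    # var is e.g. "NO_COLOR" or the hypothetical "NO_EMOJIS".
    if var in os.environ or os.environ.get("TERM") == "dumb":
        return False
    return stream.isatty()

use_color = want_decoration("NO_COLOR")
use_emoji = want_decoration("NO_EMOJIS")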
git’s behavior is really nice: any program on $PATH named `git-foo` can be accessed as a sub command `git foo`. I’ve personally taken this a step further, and wrapped git in a shell function so that `git cd bar` navigates to the directory `bar` relative to the repository root.
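A sketch of that dispatch pattern for a hypothetical tool called mytool, where any executable named mytool-foo on $PATH becomes the subcommand `mytool foo`:

import os
import shutil
import sys

def main():
    if len(sys.argv) < 2:
        sys.exit("usage: mytool <subcommand> [args...]")
    sub = sys.argv[1]
    exe = shutil.which(f"mytool-{sub}")
    if exe is None:
        sys.exit(f"mytool: '{sub}' is not a mytool command")
    # Replace the current process, git-style, passing the rest of argv along.
    os.execv(exe, [exe, *sys.argv[2:]])

if __name__ == "__main__":
    main()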
It's not portable. It looks really bad on a terminal emulator that does not support emojis (at least half of the ones out there esp on Linux). There are dozens of emoji fonts which all look different. You never know what your user is going to see. If your users are only using macOS and Terminal.app, then it may not be so bad, but if you are building a command line application, then I should be able to use it from a text-only console on an old system, VM, or embedded device. Don't assume all your users are going to be using it from a Macbook Pro or Ubuntu.
Personally, I don't. I quite like things to be text, because text can be understood. Icons can be... learned, I suppose, but then they tend to be inconsistent between apps and even change depending on themes and whatnot - so in general, it's mostly like playing a game of Memory where someone keeps shuffling the pieces.
But the main reason not to use emojis would be that you have no idea how they'll look to the user. I tried to paste the example output from yubikey-agent into my terminal, and all I got was a bunch of differently sized squares. Very informative...
You should try to configure your terminal to use UTF-8.
Symbols are universally used for quickly warning/informing in the real world and, if done well, are very intuitive. Ignoring them in the digital world would be going against human UX (but no one ignores them, of course; even very old CLIs already used them, but with the widespread use of UTF-8, and emojis with that, it just became much easier and better).
My terminal has been configured to use UTF-8 for nearly twenty years. Apparently I do not have any font with these glyphs installed though, and I don't think it's reasonable to expect that people do.
Icons in GUIs are commonly used for interactive elements. Most CLI tools are not interactive, they just produce some output and the user expects that output to be easy to parse and compatible with as many terminals as possible.

You can easily output tables, bullet lists and many other things just with basic symbols supported everywhere. If your CLI program requires installing fontawesome or breaks in a terminal multiplexer etc. I'm probably not going to use it.
I merely mentioned fontawesome as one of many possible examples. And as already said, a symbol having its place in unicode does not mean it is available on the computer or in a certain program. For example, in Linux terminals it's not uncommon that at least one optional font installation is required in order to get various emoji to display correctly, let alone other non-western symbols.

Many people use the terminal exactly because it displays fewer kinds of content than e.g. a web browser, which as a side effect simplifies many situations.
Not all terminals support emojis. AFAIR, xterm doesn’t. I was stuck on xterm at my last job (only terminal that really worked on that system). Emojis are for SMS, and that should be it. Use emoticons.
Or just don't use a terminal color scheme where any foreground color except 30 has bad contrast with color 40, and don't use a default background color that has worse contrast than 40 with all foreground colors other than 30.
Hah. This is a topic that's dear to my heart and I have blogged occasionally about it... but never in such a comprehensive form. Great job! Now... I gotta read through everything.
And here is another single post that touches upon a single guideline I came across while skimming through the text: https://jmmv.dev/2020/08/config-files-vs-directories.html . It might be helpful in providing more details about the _whys_ behind each guideline.
Lastly, I'll also mention the "Producing open source software" book by Karl Fogel, which provides a lot of useful advice too, especially on how to ship the tools: https://producingoss.com/
> Traditionally, UNIX commands were written under the assumption they were going to be used primarily by other programs
Is there any real factual basis for these claims? I find them hard to believe, considering the origins of UNIX and the fact that shell was the primary (or even only) user interface for the system, and it was pretty much from day 1 designed as an interactive system (contrasted to the more batch/system oriented mainframes of IBM etc)
I think the common terseness of many of the core suite of original unix tools actually reflects a strong focus on human, not machine, ergonomics. I still appreciate the speed and ease of typing them, and like many other aspects of the CLI, it's optimized for users who know it well and use it heavily. Once you're familiar with the names, it's not challenging to remember that mv = move, wc = wordcount, etc. Terminals of the era also still actually printed mechanically, so keeping command length short was a major ergonomic win for round-trip speed.
As a sibling comment mentions, these commands were (are) commonly composed into scripts. As the name implies, however, a script is just a playbook for a series of commands to run. Given the terminals of the era, I'm sure short commands/variables/etc. were appreciated in scripts as well, but it seems to me that the primary motivation for optimizing input speed would be the use of these commands in an interactive environment.
A few examples of these core short program names: ls, cat, cp, rm, wc, uniq, cmp, diff, od, dd, tail, tr, etc.
I find it interesting how GUI and CLI drift apart so far in this area. Powerful GUI software for specialised tasks is often overloaded with buttons and toolbars everywhere because the user needs to be able to click them. The terminal is the complete opposite, instead of clicking through menus to find the right option (or use a ton of keyboard shortcuts) you have to know what to type. But it's also very efficient and flexible, and in exchange for more difficult discoverability of features it circumvents menus completely.
My recollection is that non-programmers followed the instructions in the binder, which told them how to log in, and then how to start the program they needed. Very basic stuff. Actual commands were the province of programmers.
I'd say there's some truth to it, but maybe in an accidental way.
Shell scripts have been part of UNIX since day 1. A lot of early UNIX commands were implemented as scripts, and shell was always intended to be one of the main extension mechanisms for users. Scripting is fundamentally based on the assumption that a script will run various commands, glued together in a user-specified way.
So it's true to say that most early UNIX commands were designed to be useful in a script, which implies that they were designed for use by programs. Maybe not designed _exclusively_ for use by programs, but definitely an important design consideration.
This is a great resource. I will return to read it closely next time I am designing a CLI.
One thing that puzzled me: `git push` is given as an example of the principle "If you change state, tell the user". Its output is:
$ git push
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Delta compression using up to 8 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (10/10), 2.09 KiB | 2.09 MiB/s, done.
Total 10 (delta 8), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (8/8), completed with 8 local objects.
To github.com:replicate/replicate.git
+ 6c22c90...a2a5217 bfirsh/fix-delete -> bfirsh/fix-delete
The state change information only comes after many lines of (IMO) unnecessary detail about the inner workings of `git push`. I think this would be much better:
$ make --version
GNU Make 4.1
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
> Use standard names for flags, if there is a standard.
In the interest of consistency across tools, I recommend consulting the Table of Long Options from the GNU Coding Standards.
Some long options that are particularly helpful, when appropriate:
‘dry-run’
‘-n’ in make.
‘null’
‘-0’ in xargs.
Also, options that increase safety:
‘no-clobber’
‘-n’ in mv
# do not overwrite an existing file
‘overwrite’
‘-c’ in unshar
# even better than --no-clobber, require explicit
# permission before overwriting existing files
edit:
Never try to be "smart" or "magic" and guess what the user intended! DWIM can appear useful at first, but we've known for a long time that DWIM behavior is dangerous[1].
I disagree about abandoning manual pages. I always reach for `man foo` even before `foo --help`.
Missing detail: Exit codes should be restricted to a range of 0..255 inclusive. Many POSIX system calls only forward the low-order 8 bits of the exit code to a parent process.
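A quick way to see the truncation from Python, assuming a POSIX system (the parent only ever sees the low 8 bits of the child's exit status):

import subprocess
import sys

# A child that exits with 256 is indistinguishable from one that exited with 0.
child = subprocess.run([sys.executable, "-c", "raise SystemExit(256)"])
print(child.returncode)   # prints 0 on POSIX, not 256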
I also appreciate man pages, hate it when they are missing. The (Free)BSD man-pages are truly great, I’d never have to google anything if others held the same standard.
> Let the user escape. Make it clear how to get out. (Don’t do what vim does.) If your program hangs on network I/O etc, always make Ctrl-C still work. If it’s a wrapper around program execution where Ctrl-C can’t quit (SSH, tmux, telnet, etc), make it clear how to do that. For example, SSH allows escape sequences with the ~ escape character.
I find it confusing that vim is considered less discoverable here than SSH. In either case, you can find the information in the manual (not ideal, but a baseline). In the case of vim, when you hit ctrl-c it tells you how to leave for real. So far as I'm aware, there's no real way to discover the magic ~ invocations in SSH except resorting to the manual or exhaustively trying every key combo with no evidence that you're on the right track or that there's even a track to be on.
Also, vim is (in part) a wrapper around program execution.
I'm not who you replied to, but I hardly ever need to use tilde escape sequences to quit ssh. I just terminate the session from inside itself with the `exit` command.
I've occasionally had things crash in ways that have swallowed ctrl-c. More recently I've learned to still try ctrl-z in those cases, as that often still works (but not always).
None of this is very common, but I found myself opening another terminal and killing the ssh process probably tens of times before I learned about the ~ escapes.
Yeah, that's the only time I've had to use the escape sequences. However that happens to me only once a year or so (but maybe I don't use ssh as often/in the same ways as others).
> Use a command-line argument parsing library where you can.
> ...
> Python: Click, Typer
What's the problem with using the Python standard library's argparse?
I would frown upon adding a dependency for such a core feature as argument parsing, unless it brings strong benefits. And even then, I'd recommend to use the standard lib and to switch to the other libs only when necessary.
argparse is really good and it's built-in. I was also wondering why it wasn't listed as the default option for Python, let alone it not being mentioned at all.
I'd use click over argparse for the same reason I prefer the requests lib over built-ins for standard network communication: it smoothes out the rough edges, cuts down on boilerplate and makes the code easier to read, write and maintain.
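For what it's worth, a bare-bones argparse sketch (tool name and flags made up here) already follows the conventions this thread keeps coming back to:

import argparse

parser = argparse.ArgumentParser(prog="mytool", description="Do the thing.")
parser.add_argument("path", help="input file to process")
parser.add_argument("-n", "--dry-run", action="store_true",
                    help="show what would be done without doing it")
args = parser.parse_args()
# argparse prints --help to stdout and exits 0; usage errors go to stderr
# and exit with status 2, matching the stdout/stderr discussion above.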
You couldn't disagree that "not enough people use man pages"? Do you have a source for that belief?
It's really weird to me that a few people here seem to really like their man page... when you can just do `cmd --help` and it works in pretty much any program, and you can visit the website for in-depth information.
"Do you have a source for that belief?" - well, you have this HN thread where about half of the comments speak about why the man pages should stay. You can find great arguments, for instance man pages are searchable inside, but also you can search for manpages containing given keywords (man -k). My shell (fish) autocompletes man pages: when I enter `man gi`, it tells me that I can see the git manual, or for instance, GIMP manual. The first thing I do with a program is to open its manpage. When it isn't available, all I can think is that it was written carelessly. If you want to have interoperable software, you absolutely need to provide manpages.
To be fair, it depends on your exact definition of manpages, which is not clear from the article.
If by "manpages" you mean locally stored documentation which goes beyond the simple usage and available options that you'd expect from a --help switch, then I completely disagree. I believe locally available documentation in an easily accessible form and compatible with a terminal is a must.
If by "manpages" you mean specifically "it needs to be generated via groff or troff or whatever it is that generates a manpage, and is specifically for use with `man`", then fine, I can see the point. But the rebuttal here wouldn't be "because webdocs are better", and it certainly wouldn't be "because you can always bloat your --help switch to dump a mountain of text". If you have another structured way of delivering local documentation for your project, then so be it (tex does this with texdoc for instance).
Having said that, as far as standards go for local documentation with a useful and consistent structure and interface, manpages is definitely #1 and worth considering in your project, and most definitely before "webdocs".
Why would you not want access to a program's documentation within its own environment? Requiring a working internet connection and a web browser just to look up plain text docs sounds a bit like needing a fax machine to receive a receipt for online payments.

`cmd --help` is usually only a shortened version and rarely includes notes about specific behaviour, for example. Man pages are very comprehensive and can be navigated without needing to switch between windows and different input methods.
That's why I think both are important: -h/--help for a short description, man pages for more detailed "handbooks". Ideally there would also be a "compatibility" rendering of man pages bundled for systems like Windows that don't have anything like man.
The software I will use to invoke your command immediately sends EOF to your process's stdin. Almost all traditional Unix commands do something sensible in response. "mv -i", for example, refrains from moving anything. The software I prefer to use to run your command has no provision for your command's trying to engage me in a conversation.
If I am sufficiently motivated to run your command, I might choose to re-run your command in Terminal.app. For example, I ran whatever command needed to install Command Line Tools in Terminal.app so that I could indicate my agreement to its terms and conditions by my typing "y". But your command is probably not important enough for me to bother running it in Terminal.app.
If I use your command incorrectly (e.g., if I misspell a flag), and your command tells me so, then I am willing to issue a new, corrected command line, and in fact I have the patience to try a command over and over, varying the command's arguments and the command's environment variables till I get it right.
But I no longer have the patience for things like vim or mutt or lynx -- i.e., programs that have what one might call a terminal user interface. (What about pagers? Well, I set the environment variable PAGER to cat. Similar to how Plan 9's terminal works, the software I will use to invoke your command does not automatically scroll to the bottom of the process's output: the window does not scroll unless I tell it to scroll, e.g., by hitting the page-down key.)
lynx was my primary browser for about 10 years, so it is not like I am ignorant of "full screen" terminal interfaces; I am just weary of them.
It's kind of crazy that we still don't have a standard, computer-readable way for a program to describe its argument structure, something that could be used for command line autocomplete. E.g. something like an extra section in the binary that describes how arguments are processed.
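In the absence of a standard, some tools approximate this themselves; a sketch of what a hypothetical --describe flag could emit for an argparse-based tool (it reads argparse's private _actions list, so treat it as illustrative only):

import argparse
import json

def describe(parser):
    # Dump the registered options as JSON that a completion script could consume.
    return json.dumps(
        [
            {
                "flags": action.option_strings,
                "takes_value": action.nargs != 0,
                "help": action.help,
            }
            for action in parser._actions
        ],
        indent=2,
    )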
There still needs to be some out-of-band signal that it accepts that option, otherwise you would pass it to tools that don't and cause undefined behavior.
The general problem is that typing for these sorts of systems sucks. If it works it's a nice little safety feature, but when the types don't line up it's infuriating. Treating everything as a lowest common denominator makes compatibility trivial.
> If your help text is long, pipe it through a pager.
> Use formatting in your help text.
So reimplement man in a non-standard way? No thanks. Keep --help short and concise and then have a separate man page with all the details. Don't just ignore standards and conventions because not all operating systems agree on them.
The languages of the argument parsing libraries listed (Go, Node, Python, Ruby) are also kinda suspect when many such tools are written in C.
> Use symbols and emoji where it makes things clearer.
Eww. And the example is pretty bad too with a number of meaningless icons.
> By default, don’t output information that’s only understandable by the creators of the software.
Except people will post the default output if they are having problems, and having all the useful information there saves having to ask for it. Also, don't underestimate your users' ability to understand stuff.
> Use a pager (e.g. less) if you are outputting a lot of text.
> A good sensible set of options to use for less
Stop right there. If you're going to automatically use a pager, at least use the preferred one the user has specified in their environment. $PAGER is even mentioned later in the environment variable section.
And even if that were not the case, I wouldn't avoid doing something useful because of a deficiency in a particular platform. We have enough issues with that via needing to support IE11 still in the day job! Include the man page with the other documentation. You probably already have it written, and can convert it from other docs, so it might be very little work.
The command line, in terms of the terminal emulator, really needs to evolve to more of a Meta-X mini-buffer pattern.
People keep circling around the sweet spot which is a fixed ___location command input area that outputs, in addition to text streams, any variety of multi-media.
On one end you've got a set of people doing heroic designs in the terminal with all variety of UTF-8 characters. On the other end you've got things like Jupyter Notebook.
What I'm looking for is something like the Mattermost UI without the chat aspect being front and center. Or Emacs, but with the ability to embed Youtube videos in a buffer.
> If your help text is long, pipe it through a pager.
I disagree that this is a good practice. This prevents me from using my terminal's scrollback to read and/or copy the text. If it comes up in a pager, I either have to re-run the help command every time I want to read it (super annoying when I'm trying to build a command line!) or open yet another window on my screen. And depending on the terminal, this might overwrite a part of the scrollback buffer I had important context in.
If I want to page the help output, I will pipe it to a pager myself.
If the help text is really so unreasonably long that it's unsuitable for --help, hide it behind some other option that --help points me to.
Git does this (effectively treating "-h" the way every other CLI treats "--help", and "--help" as a request to open a man page), and it's incredibly annoying, because no other CLI I use works this way, so it screws up my muscle memory in a way that impacts both my usage of Git and of everything else.
I think one emoji is one emoji too many. Many terminals do not support them. This also addresses the point of terminal independence you brought up elsewhere. The YubiKey example looked like using emojis for the sake of using emojis. Several of them were redundant or only slightly relevant (test tube, hand, key).
In all honesty, I just really dislike emojis
The guidelines were all helpful otherwise. It shows that I could only find one nitpick.
^Like here. I put laugh-crying emojis ironically at the end but they were filtered out. Many terminals wouldn't do me this courtesy of shielding my eyes and instead regurgitate gibberish characters.
Nice read, makes me happy to see people invest in CLI.
"Humans first"... How about "Humans with Tools first" ? :)
I would say that ALL logging should go to stderr (level should be --configurable of course), so that when running a container for example I can capture all the logging easily on stderr.
Also, [WARN], [ERROR], etc. tags are quite nice to have in there when grepping over it. Just saying :)
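A sketch of that setup with Python's standard logging module: diagnostics go to stderr with a greppable level tag, leaving stdout free for actual output:

import logging
import sys

logging.basicConfig(
    stream=sys.stderr,
    level=logging.INFO,
    format="[%(levelname)s] %(message)s",
)
logging.warning("disk nearly full")   # stderr: [WARNING] disk nearly full
print("actual program output")        # stdout stays clean for pipelines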
I would very much recommend The Art of UNIX programming, by Eric S. Raymond [0]
I used it extensively when developing Space.sh [1] (which was an insane shell script adventure purifying for the soul, but taxing on the brain).
Going full termie is the best choice I've made, thanks for a good read! \o
It would be nice to see an example of space.sh usage somewhere on the landing page. I was intrigued by “through 10 firewalls” but not quite enough to go to the documentation and look it up.
Note that server jumping (over firewalls) is perfectly viable in the last example too, by adding more hosts (comma separated). Note that it can be important to balance host names to user names (see the doc [0]).
In the examples space wraps module functionality where the last module file is the one finally running inside the container, wrapped first by the `ssh` module then the `docker` module.
It is totally agent-less so nothing is uploaded to the server, it's just like SSHing in and running commands.
If you add the `-d` flag to space it will dump the script to output, which you can save to a .sh file and run later (without needing space.sh at all).
Space itself requires bash, but the output runs with POSIX shells, so it also works with dash/ash/busybox, etc., which is good because servers/containers often don't come with bash, but simpler ash/dash instead.
Thanks for showing interest, this makes me keen on pushing a few fixes and updates I have pending. Let me know if it doesn't work and I'll fix that :)
I find it very interesting that the sources and "further readings" are more often than not SO or stack exchange links. It is just a recognition that the collective knowledge, as informal as it is on forums, is often used as reference material like one of those dusty algorithm textbooks or a software engineering design book.
npm is terrible. Why? Because the color and formatting outputs all kinds of escape characters. Guess what happens when Jenkins or other build systems try to render spinning ASCII characters? If color and spinning punctuation are allowed, then developers who use configuration files for things like npm promote them to builds with color escape character output enabled. It creates a mess of configuration files keeping build and development separate just for escape character configuration. My advice is:
no color and no escape characters if your CLI will ever be part of a build chain or is intended to output to logs. All it takes is one usage writing to a log without '--no-color' and the binary characters will trip up diff and other tools that only work on text.
Quit and help are the two most important and basic commands and should have priority. The rest you can learn; these two are hard: exit, quit, Ctrl-D, Ctrl-Z... And help is even more important, especially if you give an error response.
The most important part for me when creating a new CLI is the ability for the end user to debug the CLI themselves and step through it. This is much easier with e.g. Python, but considerably more complex with a binary built by Go.
This is a good resource with a lot of good points. But,
> Display output on success, but keep it brief.
Strong disagree. If everything went OK, or something is in progress, I don't need to be notified; it's distracting and makes me instinctively think something has gone wrong and needs my attention. The Art of Unix Programming got it right, "Silence is golden."[1]
I'm disappointed that `git push` is their example of good CLI output, because I've truly hated git's entire CLI design and philosophy for years. Git's CLI is the opposite of what good CLI design should be in almost every way: a noisy, confusing, inconsistent, staggeringly complex misery. Do I really need to know that `git push` is enumerating objects? That it's using delta compression using 8 threads? That it's counting objects? That it's compressing objects? That information is useless plumbing output that clutters and distracts for no purpose. Just do the work and keep quiet, with at most a progress bar for a long-running command--and even then it should be optional.
If people really want chatty output, include an optional --verbose flag, or even levels of verbosity with -v, -vv, -vvv, etc.
> Use symbols and emoji where it makes things clearer.
Please, no. This is a fad right now and it's also really distracting. Symbols and emoji don't have common meanings and a symbol next to a header or plain text is not helpful, because the plain text already says what I need to know without the symbol. Does having an insect icon next to a header that says "Error" really improve clarity, or does a colorful icon in a sea of plain text needlessly distract? Should the icon have been an insect, or a stop sign? Or an exclamation mark in a circle? Or a traffic cone? Couldn't all of those mean "Error"?
Their example of good emoji use includes a red X next to informational--not error--text (leading me to believe that an error occurred or something stopped unexpectedly) and some kind of duck-beak icon next to the text "When the YubiKey blinks, touch it to authorize the login." Huh?
Despite what Unicode wants you to believe, emojis are not "plain text" and translate poorly to professional workstation-based communication. They are difficult to type on a regular keyboard, are tedious to copy and paste, and don't belong in other plain text contexts like data storage or CLI pipes. Additionally emojis render differently for different users so you never truly know what your CLI might be displaying.
What is the current best practice for making machine readable documentation of methods and options available in a CLI program? Similar to how OpenAPI works for RESTful services.
There's some excellent content here. Unfortunately it is hidden behind an excessively wordy document. You should consider bringing in a ruthless editor to cut this down to about a quarter of its current length.
What do you think is wordy? It's detailed, but as an occasional editor, I'm not seeing much wordiness. It also makes good use of headers and bolding so readers can skim and skip if they want.
There's definitely imagery, history, and definitions that technically could be cut, but the guidelines would be poorer for it—especially for people with less familiarity with the commandline. I do think they should consider releasing a cheat sheet alongside the full document with just the bolded guidelines.
As for consistency between programs, that's what vim keys accomplish for keymaps. I used to be skeptical about these despite being a vim user. But viewed as a way to make programs more consistent, it's good.
I think we should abolish different prefixes for short/long flags except for core POSIX programs (like ls, cp, rm, mkdir, etc.) In other words, "-flag" should be interchangeable with "--flag". The ONLY reason I can think of why you would not recognize "-flag" as equal to "--flag" is because you want to recognize it as "-f -l -a -g", which makes sense for programs like ls, but for 99% of newer programs, don’t do it. Just make "-flag" and "--flag" equivalent.
Strongly disagree. Be consistent, avoid ambiguity, reduce risk of uncertainty. The convention has the feature of combining flags, but there's more. Making -flag and --flag equivalent means combining flags is not possible without potential collisions, and the distinction would become less obvious overall. Usage in practice would be split between -flag and --flag. Enable such inconsistency for what benefits?
Yes, I agree… be consistent and avoid ambiguity. Combined short flags are the most ambiguous of all.
Usage in practice would be split between -flag and --flag… so what? They shouldn’t mean different things, because it is too easy for humans to miss the extra -. And in the end, CLIs are for humans first.
I don't think that CLIs are more suited to either humans or computers. CLIs are flexible enough to be effectively used by both.
> Usage in practice would be split between -flag and --flag. So what?
Well, you're breaking everybody's expectations and many years of conventions by doing it. Saving one dash with every flag and losing the ability to combine short options is just not a good trade off.
> I don't think that CLIs are more suited to either humans or computers. CLIs are flexible enough to be effectively used by both.
Either a human is sitting at a terminal typing commands, or a human is sitting at a text editor writing a shell script, or a human is writing a program which runs other commands. In all cases, it's a bunch of strings pasted together and the computer can't typecheck it, so we need to make sure that the strings are readable by humans.
> Well, you're breaking everybody's expectations and many years of conventions by doing it. Saving one dash with every flag and losing the ability to combine short options is just not a good trade off.
If you've named your account on HN "gnubison", it's safe to say that you've got some biases. It might surprise you to know, then, that plenty of programs already don't let you combine short options, and that not everybody uses getopt_long or something compatible.
Combining short flags saves you a small amount of typing sometimes, but it adds extra confusion. So in general, I would say that most commands should not allow you to combine short flags.
Perhaps short options are unreadable. But if I see a shell script that contains a long option with only one dash, I will definitely be confused... I don't think that saving a dash makes a shell script more readable. It definitely doesn't improve the situation enough to justify messing with people's preconceived ideas.
I don't even know what to say about your assumption that I'm biased because of my username. One might even claim that you're the narrow minded person to be judging people's biases based on three letters in their username.
Which programs don't allow you to combine short options? Other than perhaps hastily written shell scripts. getopt(3) is POSIX AFAIK so there's not really any excuse for POSIX programs to not handle combined short options.
Right, "don't do it" is imperative mood, not indicative mood. If it were indicative mood, the clause would need a subject but in a casual internet forum I can see why people would omit the subject anyway ¯\_(ツ)_/¯
This is one of the reasons why I don't use getopt_long() -- I don't WANT short flags to be combined, for most commands I write. Fortunately, getopt_long() is one of those library functions that is truly trivial to reimplement and not something that has arcane behavior or edge cases.
To increase readability in scripts, one can use comments. Long options only increase the likelihood of going over someone's preferred column count, forcing one to use newline escapes, splitting the command over multiple lines - especially if the command in question is indented. Does that really increase readability? I don't think so personally.
I do the same, along with longer variable names. Every time I went back to look at old code, I am so grateful that I used the more descriptive information.
[0] https://replicate.ai/
[1] https://smallstep.com/blog/the-poetics-of-cli-command-names/
[2] https://github.com/cli-guidelines/cli-guidelines
[3] https://discord.gg/EbAW5rUCkE