My opinionated design principle for RESTful APIs is to keep in mind that REST is just a style guide, a collection of good practices and patterns: do not apply it blindly, just use whatever makes sense.
From there, I have mixed feelings about this:
Although the web generally works on HATEOAS type principles (where we go
to a website's front page and follow links based on what we see on the
page), I don't think we're ready for HATEOAS on APIs just yet.
I do not think "being ready" is an argument for or against HATEOAS. Sure, HATEOAS as defined by Roy Fielding carries a lot of clutter, but there is nothing wrong with assuming a client that does not know how to construct URLs, where that assumption makes sense. Generally I find it nice to have something similar to this:
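(a minimal sketch; the field names and ids are illustrative)

    {
      "id": 1337,
      "name": "Some foo",
      "href": "/fooes/1337",
      "bars": { "href": "/fooes/1337/bars" }
    }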
There's a sliding scale of HATEOAS-ness. I've seen a much more extreme form of it described elsewhere. Mark Seemann (http://blog.ploeh.dk/2013/05/01/rest-lesson-learned-avoid-ha...) describes an approach that encrypts all URLs so that clients have no choice but to follow links because generating URLs from templates is impossible.
And you'd have to get that URL out of the response to another request you'd already made. I never liked this idea, and the linked article provides a really strong counterargument to this approach to HATEOAS.
When browsing a website, decisions on what links will be
clicked are made at run time. However, with an API, decisions
as to what requests will be sent are made when the API
integration code is written, not at run time. Could the
decisions be deferred to run time? Sure, however, there isn't
much to gain going down that route as code would still not be
able to handle significant API changes without breaking.
About the counterargument: yes, it does raise a good point (in fact, one that I glossed over), especially about changes and breaking. For instance, in the following document, what could change in the href attribute?
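(the same kind of document as above; the id and field names are illustrative)

    {
      "id": 1337,
      "name": "Some foo",
      "href": "/fooes/1337"
    }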
Surely, not much. Either /fooes/ changes to /foos/, in which case the call to /fooes is broken anyway, or the id changes, which needs to be handled with an HTTP 301 anyway. So no, there is not much to gain there.
But then again, I was just saying that the href attribute might make sense in some cases, with not much extra effort. Nobody is forcing clients to consume the href attribute; it just makes the whole API easier to understand.
As for images, which would make more sense: adding an endpoint like /images/10/view, or directly using an href attribute pointing to http://s.foo.bar/1337.png?
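That is, something like this (a sketch with made-up field names), instead of making the client know about a separate view endpoint:

    {
      "id": 10,
      "title": "Some image",
      "href": "http://s.foo.bar/1337.png"
    }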
My point is, in some cases it makes sense, so use it when it does. Just as one should not encrypt URLs just because Mark Seemann feels it belongs in the holy grail of the Level 3 API as defined by Leonard Richardson [1], we should not avoid making our APIs navigable.
[1]: Don't get me wrong, what they say might make sense for non-trivial APIs. As always, REST is not a specification.
I think we spend too much time trying to craft beautiful, descriptive URLs, when we should be spending our time crafting beautiful, descriptive responses. We try to make the URL describe to the developer what is being returned, when in reality this should be dictated by media types and link relations.
Take for instance the example above. Whether the URL is /images/10/view or /images/10.png, the developer has no idea what kind of resource that is without media types or link relations (the file extension is not the media type). In other words, you could use both URLs to return the exact same resource, and the client shouldn't care. You could even switch back and forth between those URLs and the client should never flinch.
This is actually a good example. Say you are serving a static PNG file at /images/10.png, linked in your server response (i.e. in an href like the one above). Now you want to log how many times the image has been viewed, so you write some code, serve the image from /images/10/view, log the view, and return it with an image/png media type. You just made a fairly big API change and the client did not have to change at all.
Now, if your clients were dumber and crafted URLs like /images/{id}.png themselves, the change above would break every one of them. And this is really just the tip of the iceberg of the flexibility HATEOAS can bring.
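A rough sketch of the difference, assuming a hypothetical api.example.com and Python's requests library:

    import requests

    BASE = "https://api.example.com"  # hypothetical host

    # Hypermedia client: follow whatever href the server handed back.
    # It keeps working whether the image lives at /images/10.png,
    # /images/10/view, or on a CDN somewhere else entirely.
    resource = requests.get(BASE + "/images/10").json()
    image = requests.get(resource["href"])

    # URL-crafting client: bakes the server's URL layout into the client.
    # Moving the image to /images/10/view breaks this immediately.
    image = requests.get(BASE + "/images/10.png")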
His argument makes sense in the current world of very stupid clients (as in, fragile to changes) and non-standard formats.
Current clients are like those factory robots that can only perform a fixed series of movements: it's not really worth adding ARTags and computer vision to the system if the arm joints can't do any other movement anyway.
But on the other hand, if we don't add those tags (or in the API world, use HATEOAS and standard data formats) now, how are those smarter robots ever going to be made?
What about adding a field, removing a field, or changing the type of data stored in a field? HATEOAS doesn't offer anything for that. This is why I feel we need more experimentation, more standardization, and more time before some practices stand out enough to be considered 'best practices'.
Isn't HATEOAS more about URL changes, though? It is about letting the responses tell the client which requests are available next. That requires a commitment to creating media types or using existing ones, along with using link relations. Not using REST properly is what leads to 'best practices' as complicated as recommending that you append .json to a URL to get the JSON representation. You shouldn't have to go to the documentation to figure out how to craft a URL just to get a representation of a resource.
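For example (a sketch against a hypothetical URL), the representation is picked by content negotiation rather than by the path:

    import requests

    # Same URL; the JSON representation is requested through the Accept
    # header instead of a ".json" suffix bolted onto the path.
    resp = requests.get("https://api.example.com/images/10",
                        headers={"Accept": "application/json"})
    data = resp.json()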
Then maybe it's too soon to have any kind of best practices. We agree that more experimentation is needed, but when is that going to happen while everyone is building according to the REST-but-not-really best practices and the only systems trying new things are toy services?
Mutations to the resource are addressed by changes in the media type. You're talking about it as if it were rocket science, when it is actually well covered.
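For example (a sketch using a hypothetical vendor media type), a client pins the representation it understands, and field changes ship as a new media type version:

    import requests

    # v1 clients keep getting the v1 representation; the server is free to
    # add, remove, or retype fields in a v2 media type without breaking them.
    resp = requests.get("https://api.example.com/fooes/1337",
                        headers={"Accept": "application/vnd.example.foo.v1+json"})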
While the author of that post is advocating a level 3 API, you could still have unguessable URLs without any hypermedia involved. If the client and server shared a secret key, or if you used random UUIDs as resource IDs, then you could achieve the same thing.
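A quick sketch of that idea (the names and key are made up), with no hypermedia at all:

    import hashlib
    import hmac
    import uuid

    SECRET = b"shared-secret"  # hypothetical key shared by client and server

    # Option 1: random, unguessable resource IDs
    url = "/images/%s" % uuid.uuid4()

    # Option 2: sign the numeric ID so clients cannot forge URLs on their own
    sig = hmac.new(SECRET, b"10", hashlib.sha256).hexdigest()
    url = "/images/10?sig=" + sig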