
HATEOAS is for Humans

8 May 2016

TLDR

HATEOAS is actually a simple concept: a server sends both data and
the network operations on that data to the client, which has no
special knowledge about the data.

This simple but powerful idea has been lost because it is typically
examined in terms of machines and APIs, rather than in terms of
humans and HTML.

Humans are uniquely positioned to take advantage of the power of
HATEOAS in a way that machines are not (yet) because they have
agency.

HATEOAS - Wat?
HATEOAS (https://en.wikipedia.org/wiki/HATEOAS) is perhaps the
least understood aspect of REST
(https://en.wikipedia.org/wiki/Representational_state_transfer) and
is often an object of outright hatred
(https://jeknupp.com/blog/2014/06/03/why-i-hate-hateoas/) for
perfectly reasonable, competent developers. This is too bad, and it
is mainly due to the fact that the HATEOAS and REST concepts have
been applied to APIs (originally XML, then JSON) rather than to
where they originally arose: humans and HTML on the web.

Let’s see if I can give a simple (if incomplete) explanation of what
HATEOAS is and then explain why it works so well with humans and
HTML and why it works poorly for machines and JSON.

To begin at the beginning, what does the acronym HATEOAS mean?

Simple!
‘Hypermedia As The Engine Of Application State’

OK, maybe not so simple. So what does that mean?

What it means is: all “state” (that is, both the data and the network
actions available on that data) is encoded in hypermedia (e.g.
HTML) returned by the server. Clients know nothing specific about
any particular network end point: both the data and the network
operations available on that data come from the server.

The crucial point, to repeat again: both the data and the network
operations on that data come from the server, together, in
hypermedia.

Sounds a bit object oriented, doesn’t it?

HTML - HATEOAS for Humans


Let’s look at an example to help make this idea concrete. Rather
than using an API example (which, unfortunately, even the
HATEOAS Wikipedia article does), let’s consider something even
simpler: just a bit of HTML. HTML, after all, is the most ubiquitous
and successful hypermedia in the world.

Consider the following snippet of HTML, retrieved from a server at,
say, the following end point: /contacts/42 :

<div>
  <div>
    Name: Joe Blow
  </div>
  <div>
    Email: joe@blow.com
  </div>
  <div>
    <a href="/contacts/42/edit">Edit</a>
    <a href="/contacts/42/email">Email</a>
    <a href="/contacts/42/archive">Archive</a>
  </div>
</div>

This bit of HTML, you will notice, encodes both the data for the
contact, as well as the actions available on that data (Editing,
Emailing and Archiving) in the form of links. The client (a browser)
knows nothing about contacts, it knows only how to take this HTML
and render it as some UI for a human to interact with. It’s certainly
not the most efficient encoding of this data, and it is intermixed
with some other junk as well, but that’s OK. That other junk has
proven to be pretty useful on the client side, so let’s let it slide for
now.

This means that a web application that communicates in terms of
HTML naturally satisfies the HATEOAS constraint of REST, without
anyone needing to think very hard about it.

If you have ever built a traditional web app, congrats, you have
implemented HATEOAS better than 99% of all API developers.

It must be noted that, unfortunately, traditional HTML itself is
somewhat limited in the number of HTTP methods (mainly GET and
POST) and user actions (clicks and form submissions) that it allows,
which made it difficult to realize the complete benefits of REST.
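To make the limitation concrete, here is a sketch of the common workaround: since a plain HTML form can only issue GET or POST, a delete action has to be tunneled through POST. The `_method` hidden field shown here is a framework convention (used by Rails and others), not part of HTML itself:

```html
<!-- A plain HTML form cannot issue DELETE directly, so frameworks
     commonly tunnel the intended method through a hidden field that
     the server inspects and reinterprets. -->
<form action="/contacts/42" method="post">
  <input type="hidden" name="_method" value="delete">
  <button type="submit">Delete Contact</button>
</form>
```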

Fortunately, intercooler.js (/), in addition to much else (/docs.html),
rectifies both of these issues with the ic-put-to (/attributes/ic-put-
to.html), ic-delete-from (/attributes/ic-delete-from.html), etc. and
the ic-trigger (/attributes/ic-trigger.html) attributes, respectively.
This gives you a much richer and more complete programming
infrastructure for building your HTML-based REST-ful web
application.

You should use it. 😉
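As a sketch of what this looks like (the endpoint path is illustrative, borrowed from the contact example above), a single attribute turns a button into a true HTTP DELETE, no form tunneling required:

```html
<!-- ic-delete-from issues an HTTP DELETE request to the given URL
     when the button is clicked, and swaps the response into the page. -->
<button ic-delete-from="/contacts/42">Delete Contact</button>
```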


Why The Gnashing of Teeth?
Anyway, we can see that, despite all the fancy verbiage, HATEOAS is
almost idiotically easy to implement by just using HTML.

Why all the hate and confusion around it, then?

To understand why, let’s look at the example from the HATEOAS
Wikipedia page (https://en.wikipedia.org/wiki/HATEOAS):
<?xml version="1.0"?>
<account>
  <account_number>12345</account_number>
  <balance currency="usd">100.00</balance>
  <link rel="deposit" href="http://somebank.org/account/12345/deposit" />
  <link rel="withdraw" href="http://somebank.org/account/12345/withdraw" />
  <link rel="transfer" href="http://somebank.org/account/12345/transfer" />
  <link rel="close" href="http://somebank.org/account/12345/close" />
</account>

This is an XML API satisfying HATEOAS by encoding all the actions
on the account as link elements. And that’s great as far as it goes.
You get the Gold REST Star for this API.

But consider what is consuming this data: some client code,
probably on behalf of yet another (thick or web) client further down
the line, or perhaps an automated script. Regardless, it is code,
rather than a human, that is likely dealing with it.

What can it do with all those actions? The actions, note, are
dynamic, but the script itself probably isn’t: it needs to either handle
all possible actions or forward them along to a human to deal with,
right?

And that gets to the crux of the issue: the code doesn’t (yet) have
agency (https://en.wikipedia.org/wiki/Agency_(philosophy)).
It can’t reasonably decide what to do in the face of new and
unexpected actions. The coder writing the code could have it handle
all possible actions (tough) or pass them along to a human
somewhere else (also tough).

Realistically, the code will likely handle a few of the actions and just
ignore the rest, so all that work for a Gold REST Star is,
unfortunately, wasted.

Agency As A Service (AAAS)


Now, humans aren’t good at much, but (but!) one thing we are
pretty good at doing is agency. We can make decisions given new
and novel situations, making sense of somewhat chaotic
environments and learning new things. When a new action shows up,
associated with some data, we can figure out whether we want to
take that action.

It’s just a thing that we do.

I like to turn the client-server relationship around, and consider the
human users of a software system as providing Agency As A
Service (AAAS) for the server.

The server software knows all about the data and what actions are
available on that data, but has no idea what the heck to do.
Fortunately, these otherwise bumbling humans show up and will
poke and prod the server to provide the agency the server so
desperately needs. The server, of course, wants to speak with the
humans in a language (hypermedia) that the humans find pleasant,
or at least tolerable.

And that language is HTML.

So, you can see: a system satisfying HATEOAS is wasted if the
hypermedia isn’t being consumed by something with agency.
Humans are that thing, and, therefore, for HATEOAS to be effective,
the hypermedia needs to be humane.

Again, that’s HTML. I didn’t realize just how special it was until year
20 of writing web apps.

Once we have strong AI, maybe the situation changes. But that’s
what we’ve got today.

OK, So How Should We Speak To The Machines?


Well and good. But what about the machines? There are
integrations and scripts and scrapes and thick clients to be written,
and they all need to talk to servers as well, right?

That’s of course correct and, I’m not ashamed to admit, I’m not
sure what the right answer is here, or if there is a single one.
REST-minus-HATEOAS seems like it works OK in many cases. RPC-
style end points were once popular and appear to be getting
popular again. They all seem reasonably workable to me.

But what I am convinced of, and what I hope to convince you of, is
that HATEOAS is largely wasted on machines.

HATEOAS is for humans.

The Comments Section



Kevin Duffey • 3 months ago


I like this take and Darrel's responses as well. Reading this, and elsewhere, my thought
was "when AI gets good enough, it will know how to respond to links like a human". In the
mean time, the best we client human developers can do is have detailed documentation
of the API, and build consumers, typically UI, but possibly some headless automated
stuff. I would love to build and see more HATEOAS based APIs, but as I mention
elsewhere, it is a lot of work on the developer side to implement all the possible response
links that account for things like API consumer roles, query parameters, versions and
more. I suspect you will continue to see /v1/ /v2/ etc type of APIs because developers are
typically a lazy bunch and managers/execs want features yesterday... so the time needed
to develop good HATEOAS based APIs complete with tests... it's something the majority of
the business world has no interest in spending time doing.

Darrel Miller • 3 years ago


Unfortunately, I believe this argument is a strawman. I don't believe that anyone who has
had any experience building hypermedia driven systems would argue that a client is
expected to be able to perform any kind of interaction that it hasn't already been explicitly
coded to perform, regardless of what is returned by the server.

A web browser knows how to render HTML because it has been coded to do so. The fact
that it can render HTML pages that it did not know about before, is the only clever thing
happening. If you teach a hypermedia driven client how to pay for things, then it may be
capable of buying all kinds of things as long as they can be represented as a resource.
With or without human interaction.
In order for hypermedia to work, a client needs to have "special knowledge" of the media
types and all the link relation types it will interact with.

Additionally, hypermedia as the engine of application state is not just about returning links
in representations. It is also about building clients whose state can be manipulated by the
responses that are returned from a server. A client expresses intent by following a link
and the server changes the client state appropriately by sending a response.

Brent Arias > Darrel Miller • 2 years ago


I see no straw man. Everywhere I look I see people bashing HATEOAS because
there are no "smart clients" to make use of it. So this article is truly a benefit for
redirecting the discussion.
Also, you said "hypermedia...is about...client state?" That sounds incorrect. I
would say the client is allowed to be stateless when the server is supplying
hypermedia responses; it is the server that is keeping state. I like hypermedia so
that my client (e.g. a SPA) can be as dumb as possible, not as smart as possible.
The essential example is a "withdraw funds" button that is not rendered in the SPA
because the server did not include the pertinent hypermedia link, because the
account has a zero balance.

Darrel Miller > Brent Arias • 2 years ago


The server maintains resource state. The client has application state. It is
called "hypermedia as the engine of application state" because the client
follows links to change its application state. Both application state and
resource state are updated via the transfer of representations. That's why it
is called REST. Clients need to understand media types and link relations
in order to do anything useful. Whether you call that dumb or smart is up to
you.

dotnetchris > Darrel Miller • 3 years ago


A software client is just as capable of processing <a href= as it is <link rel=

I'd argue that the <a href= is even more interoperable as it's been a standard for
decades now.

Darrel Miller > dotnetchris • 3 years ago


From my perspective <a href= is just HTML's shorthand encoding of <link
rel="anchor", so yes they are equivalent.


Alexandre Matos • 3 years ago


congrats for not trying to shoe-horn hateoas into json, like most people do. to me, its main
concept is that your website should be your web api, meaning that it should be consumed
by both humans (site) and machines (api). I think it's a cool idea, but honestly, never
seen it implemented. I don't even know if it's possible. I surely wouldn't want to work with
an html-api. maybe separating data from presentation, like we do today, just shows how
academic dissertations can't always be practical...

Carson Gross Mod > Alexandre Matos • 3 years ago


I'm now convinced that making the human API and the machine API the same is a
mistake. Rails tried to go down that route, with limited success on the machine
API side.

As I say in the blog post: the human API can and should be designed with human
agency in mind. That is: it can be much more dynamic and self-explanatory
whereas the machine API needs to be more regular and more generally
expressive.

© IntercoolerJS.org (http://intercoolerjs.org) 2013-2019
