Hypermedia API design session

Proposed and run by Andreas Schmidt, Nokia

Based on his design of the Nokia Places API

Notes

  • Picked JSON, no support for XML

  • Added ?accept=application/json to the URL in the browser for a raw response

  • Headers can be passed as query params (sketch below)
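
A minimal sketch of this convention from the client side, assuming an endpoint URL and an `accept` query parameter that behave as described above (both are illustrative):

```python
import requests  # third-party HTTP client

PLACES_URL = "https://places.example.com/v1/discover/explore"  # hypothetical endpoint

# Conventional content negotiation: the media type goes in the Accept header.
r1 = requests.get(PLACES_URL, headers={"Accept": "application/json"})

# The convention from the notes: the same hint passed as a query parameter,
# which is handy when poking at the API from a browser.
r2 = requests.get(PLACES_URL, params={"accept": "application/json"})

print(r1.status_code, r2.status_code)
```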

  • Lots of serialized objects

    • Introduced linked objects
    • They have an href attribute
    • Included a "type" param on the serialized object (eg: urn:nlp-types:category); see the sketch below
  • It's an API for local discovery

    • Design goal to make the API match the UI when possible
    • Every link followed by a user has an id and a name
  • Question: Why id? Isn't the href the id?

    • href is an actionable request
    • Don't ids break the HATEOAS principles?
    • ids are offered so clients know whether they visited that element before (caching key); sketch below
      • hrefs are not unique since they might contain context info
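
A minimal sketch of why the id works as a revisit/cache key when hrefs carry per-request context (names and URLs are assumptions):

```python
# Two responses can link to the same element with different hrefs, because the
# href carries request context; the id is the stable identity.
link_a = {"id": "place-42", "href": "https://places.example.com/v1/places/42?context=search-1"}
link_b = {"id": "place-42", "href": "https://places.example.com/v1/places/42?context=nearby-7"}

visited = set()

def seen_before(link):
    """Use the stable id, not the context-laden href, to detect revisits."""
    if link["id"] in visited:
        return True
    visited.add(link["id"])
    return False

print(seen_before(link_a))  # False: first time we see place-42
print(seen_before(link_b))  # True: same element, even though the href differs
```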
  • Localization: supports roughly 60 languages

    • Titles are in the user's language (eg: "Nearby" places)
  • Biggest advantage noticed is that the API itself becomes its own documentation

    • Without HATEOAS, developers would get data, go to the documentation, and figure out how to build the next request
    • With HATEOAS, developers learn which next actions are available
  • Question: How do you build the next available actions?

    • Suggestion on how to create an element (sketch below):
      • Have a method, eg POST
      • Have a header, eg array of the supported types (eg: [urn:nlp-types:image-post])
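
A sketch of how such a "create" affordance might be serialized, following the suggestion above (a method plus the list of accepted types); the field names are assumptions:

```python
# Hypothetical affordance attached to a resource: the client learns the method
# to use and which semantic types the endpoint accepts.
create_action = {
    "href": "https://places.example.com/v1/places/42/media",
    "method": "POST",
    "accepts": ["urn:nlp-types:image-post"],  # supported types, per the session
}

def can_post_image(action):
    """Check whether this affordance lets us POST an image."""
    return action["method"] == "POST" and "urn:nlp-types:image-post" in action["accepts"]

print(can_post_image(create_action))  # True
```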
  • Another design goal was to minimize the number of requests from the clients (mobile)

    • In the beginning the API was more "academic", but application developers felt it was painful to consume
    • Noticed developers enjoyed playing around with the HATEOAS APIs to learn, but once it was time to code they fell back to fixed object models/URIs
      • Maybe the right tools for consuming HATEOAS APIs aren't available?
  • Question: What tooling is missing?

    • Client side: support for something like JSON Schema link objects (example below)
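
For context, JSON Hyper-Schema style link objects pair a rel with an href template, and a generic client could drive its available actions from them instead of hardcoded URIs. Shown as a Python dict, with made-up resource names:

```python
# A Hyper-Schema style "links" section: the kind of client-side tooling target
# mentioned above. Paths and rels are illustrative only.
schema_fragment = {
    "links": [
        {"rel": "self", "href": "/places/{id}"},
        {"rel": "media", "href": "/places/{id}/media", "method": "POST"},
    ]
}

for link in schema_fragment["links"]:
    print(link["rel"], "->", link["href"])
```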
  • Clicking on a type takes the developer to a description of it

  • Inline JSON Schema in the response, or link to it?

    • Would be referenced from the Content-Type header; clients could then fetch and cache it (sketch below)
    • A lot of concerns related to the size of responses (again, re mobile clients)
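
A sketch of the fetch-and-cache flow discussed here, assuming the schema URI travels in a `profile` parameter of the Content-Type (the parameter name and URLs are assumptions):

```python
import re
import requests

schema_cache = {}  # schema URI -> parsed schema, kept across requests

def get_with_schema(url):
    """Fetch a resource; if its Content-Type references a schema, fetch and cache that schema once."""
    resp = requests.get(url)
    content_type = resp.headers.get("Content-Type", "")
    # e.g. application/json; profile="https://example.com/schemas/place"
    match = re.search(r'profile="([^"]+)"', content_type)
    if match:
        schema_uri = match.group(1)
        if schema_uri not in schema_cache:
            schema_cache[schema_uri] = requests.get(schema_uri).json()
    return resp.json()
```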
  • Clients need different representations of the resource

    • Link in href is static, doesn't give a lot of flexibility
    • They worked on a mix of HATEOAS and URI templates
    • Example: address is in html, but not all clients support it
      • So clients can append a param to the object href and get the address rendered in plain text, for instance (sketch below)
      • The response is still JSON
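
A sketch of the parameter-append approach described above; the `render` parameter and its value are invented for illustration:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_param(href, **extra):
    """Append query parameters to an href handed out by the API, keeping what's already there."""
    parts = urlparse(href)
    query = dict(parse_qsl(parts.query))
    query.update(extra)
    return urlunparse(parts._replace(query=urlencode(query)))

href = "https://places.example.com/v1/places/42?context=abc123"
# Ask for the address rendered as plain text instead of HTML; the response stays JSON.
print(with_param(href, render="plain"))
```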
  • Question: Shouldn't that be conveyed in the Accept header, as an extension to the content type?

    • We have a hard time convincing developers to use different media types
    • JSONP was a requirement.
  • Question: by deviating from HATEOAS, doesn't this seem more confusing from the developer perspective? Links, params, href, etc.

    • But it's easier to consume (easier to support a new param than to parse a bunch of links and pick the right one)
    • It seems like a lot of people in the room are coming from the API server/backend perspective; the presenter had to take client-side demands into account
  • They had many sessions with ~20 different teams; the teams didn't care about links and semantic types!

    • There's no tooling for it
    • They also offer SDKs in C++, where it's hard/expensive to parse semantic info
  • Question: What kind of semantic markup would you add to the JSON to make it more identifiable?

    • The rel attribute, for instance, is not present there
    • JSON by itself doesn't handle links; there are extensions to the media type that support them (HAL-style sketch below)
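
For comparison, this is roughly how a link with a rel looks in HAL, one of the JSON media type extensions alluded to (shown as a Python dict; the values are invented):

```python
# HAL-style representation: links live under "_links", keyed by rel.
hal_place = {
    "name": "Some Café",
    "_links": {
        "self": {"href": "/places/42"},
        "urn:nlp-types:media": {"href": "/places/42/media"},  # custom rel, hypothetical
    },
}

print(hal_place["_links"]["self"]["href"])
```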
  • Netflix had a big HATEOAS API, but is now going for device-specific APIs

  • Clients need to understand the object types, eg what is a urn:nlp-types:place?

    • Introduced a nightmare: they told developers to always check the type and ignore link objects whose type they didn't recognize (sketch below)
    • Whenever clients saw a new type they freaked out
    • Is this because there was no content negotiation?
      • If they're having a hard time convincing clients about different types, pushing content negotiation would be really hard
    • One idea they had was to introduce noisy data into their API so developers are forced to respect different types/constraints
      • Like chaos monkey for APIs
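
A minimal sketch of the "check the type, ignore what you don't recognize" rule; the type URNs beyond those in the notes are made up:

```python
KNOWN_TYPES = {"urn:nlp-types:place", "urn:nlp-types:category"}

def usable_links(items):
    """Keep only link objects whose type this client understands; skip the rest silently."""
    return [item for item in items if item.get("type") in KNOWN_TYPES]

results = [
    {"type": "urn:nlp-types:place", "name": "Some Café", "href": "/places/42"},
    {"type": "urn:nlp-types:did-you-mean", "name": "cafes", "href": "/search?q=cafes"},  # unknown: ignored, not an error
]

for link in usable_links(results):
    print(link["name"])
```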
  • href includes a binary context so they can do things like change the relevance of results

    • Like a Google Analytics cookie, but carried in the URI space instead
    • When developers hardcode URIs they bypass this context (sketch below)
    • They're considering making every request start through one or two specific public endpoints
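
A sketch contrasting the two client behaviours discussed here; the URLs and the `context` parameter are illustrative:

```python
response_item = {
    "id": "place-42",
    "href": "https://places.example.com/v1/places/42?context=Zmlyc3QtcGFnZQ",  # opaque context added by the server
}

# Following the href the API handed out preserves the context...
good_request = response_item["href"]

# ...while rebuilding the URL from a hardcoded pattern silently drops it.
bad_request = "https://places.example.com/v1/places/" + response_item["id"].split("-")[1]

print(good_request)
print(bad_request)
```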
  • Why are developers not going for HATEOAS-aware clients?

    • Pragmatism/urgency: they just want to get their stuff done
    • But that doesn't take into account the cost of maintaining this client over years
    • Some people miss SOAP: it had better conventions for handling client specification and API changes
    • One idea to handle this is to take the Google approach: generate clients based on a JSON spec
  • They extend the API by adding new link object types

    • Clients broke, for instance, when they introduced a "did you mean?" type in search results
    • Again, they tell clients to ignore types they don't know
  • Some objects are dictionaries

    • Like contacts (label: phone, value: "(123) 456789" / label: email, value: foo@bar.com / etc); sketch below
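
A sketch of that label/value shape (the values come from the note above, the structure is assumed):

```python
# Contact details serialized as a list of label/value pairs rather than fixed fields.
contacts = [
    {"label": "phone", "value": "(123) 456789"},
    {"label": "email", "value": "foo@bar.com"},
]

# A client renders the labels it recognizes and can ignore the rest.
phone_numbers = [c["value"] for c in contacts if c["label"] == "phone"]
print(phone_numbers)
```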
  • Currently there's no way to fetch all type definitions from the API

    • There is a domain model and all documentation is generated from it, but they couldn't expose the type definitions through the API yet
    • Domain model is in Scala, gets rendered in different representations
  • How do we classify this API in relation to HATEOAS?

    • It's a hybrid; there is tight coupling between concepts, etc.
      • How so? Aren't all the resources provided in the responses?
        • Because of ids and URL patterns: clients must fill template URIs with ids
    • Someone invested 4 months to get a proper client written against their HATEOAS API
  • Filling ids in URI templates is harder than picking a URL from a list rendered by the API (comparison sketch below)

    • But in practice there's still effort involved in picking the right link
    • Say the API has 700 links for relevant information
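
A sketch of the two consumption styles being compared, using plain string substitution rather than any particular URI-template library:

```python
# Style 1: fill a URI template with an id the client has to track itself.
template = "https://places.example.com/v1/places/{id}"
by_template = template.format(id="42")

# Style 2: pick the right link out of the links the API already rendered.
links = {
    "self": "https://places.example.com/v1/places/42?context=abc123",
    "media": "https://places.example.com/v1/places/42/media?context=abc123",
}
by_link = links["self"]

print(by_template)
print(by_link)
```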
  • What's the example of the perfect HATEOAS API?

    • Hard question - there are two examples in Steve Klabnik's blog
    • Twilio
    • GitHub is close
    • But all of them with reservations
    • This is a cutting-edge concept; such APIs are still being built right now