rmccue/ Secret

Created May 28, 2014 03:16
API Centralization Issues


Currently, WP API is entirely designed around the concept of decentralization: each site's API is entirely separate from the others. This is in contrast to many APIs on the web, which are built around a single central server. Decentralization brings problems:

  • Developers cannot access all sites without registering on each one. Due to the nature of OAuth's key/secret based authentication, developers must register a unique client on each site. This is not feasible for clients that expect to connect to a large number of sites.

  • Discovery of sites can mean extra work for developers. Given an arbitrary URL, clients want to know with certainty whether they can use the API on it.


The discovery problem is solved in WP API through an autodiscovery system. Every page on a site with the API enabled sends a Link header back to the client pointing to the API. This enables quick, low-effort checking of whether a site supports the API with a single HEAD request, a method supported by all major browsers and HTTP clients:


< HTTP/1.0 200 OK
< Link: <http://example.com/wp-json/>; rel="https://github.com/WP-API/WP-API"

This same method is already used by WordPress core for providing the shortlink for a post.

As a fallback, autodiscovery is also indicated both by a <link> tag in the page's <head> and via the normal RSD support already used by XML-RPC.
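The single-request discovery check described above can be sketched as follows. This is an illustrative sketch, not code from the document: the `API_REL` value and the `/wp-json/` root shown in the test data are assumptions based on the plugin's conventions.

```python
import re
from urllib.request import Request, urlopen

# Hypothetical rel value identifying the WP API; the exact string is an
# assumption here, not specified in this document.
API_REL = "https://github.com/WP-API/WP-API"

def parse_link_header(value):
    """Parse an HTTP Link header value into a {rel: url} mapping."""
    return {rel: url
            for url, rel in re.findall(r'<([^>]*)>\s*;\s*rel="([^"]*)"', value)}

def discover_api(site_url, rel=API_REL):
    """Issue a single HEAD request and return the advertised API root, or None."""
    response = urlopen(Request(site_url, method="HEAD"))
    return parse_link_header(response.headers.get("Link", "")).get(rel)
```

Because the check is a bare HEAD request, a client can probe an arbitrary URL without fetching or parsing any page content; the `<link>` tag and RSD fallbacks only matter when the header is stripped by an intermediary.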

In contrast, WordPress.com's API currently only supports discovery by making a request to the centralized API, and only allows querying by domain. Sites with paths are only available via a separate set of API calls, and require manual work by the client.

Client Registration

Client registration is the largest issue with a decentralized API. There are several approaches to handling this, including both centralized and decentralized approaches.

Dynamic Client Registration

Rather than requiring clients to pre-register themselves on sites, dynamic client registration uses a set of unauthenticated API calls to create the client. A draft standard for dynamic client registration with OAuth 2.0 is currently under development at the OAuth IETF Working Group, and would likely be simple to backport to an existing OAuth 1.0a system. A specification already exists for OpenID Connect, built as a layer on top of OAuth 2.0; however, this may be more work to backport to OAuth 1.0a.
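Under the OAuth dynamic registration draft, the exchange would look roughly like the following; the endpoint path and field values are illustrative, not defined by this document:

```
> POST /wp-json/oauth/register HTTP/1.1
> Content-Type: application/json
>
> {"client_name": "Example Client",
>  "redirect_uris": ["https://client.example.com/callback"]}

< HTTP/1.1 201 Created
< Content-Type: application/json
<
< {"client_id": "...", "client_secret": "..."}
```

The request is unauthenticated by design, which is exactly what makes the accountability and impersonation concerns below possible.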

Dynamic client registration allows any client to register, which may present a security and privacy issue. Client registration is used by most API providers for client accountability, and usually requires developers to have a user account along with contact details. Dynamic client registration removes this requirement, which means that clients are less accountable for their actions.

Dynamic client registration could also be used maliciously to impersonate a well-known client. There would be nothing stopping a malicious application from impersonating the WordPress mobile apps, for example. This could be solved by verifying clients against a central server; however, that would most likely require building another authentication layer on top of the existing system.

Centralized Client Registration

Clients wishing to access all WordPress installations would be able to register on WordPress.org. This avoids any issues with accountability, as clients would be directly linked to WordPress.org accounts, which is already the case for plugins and themes released there.

Accessing a site using this authentication could be done in multiple ways:

  • WordPress.org would proxy API requests to a site. Authentication would be handled on the WordPress.org end, with WordPress.org authenticating with the site itself and proxying through this authentication.

    This leaves open the question of how WordPress.org would authenticate with the site. The obvious solution would be to sign the request using a private key on WordPress.org's end, and to check this against a public key on the site. When a client wants to access the site's API, it would ask WordPress.org for the data, and WordPress.org would in turn request it from the site's API. The site's API would then check the signature on the request and, once validated, execute it and return the response, which WordPress.org would then pass back to the client.

    This would, however, require the ability to check signatures on sites, which may not be possible, depending on whether PHP is built with the necessary cryptographic primitives. The availability of these is yet to be determined, but would likely depend on the availability of the OpenSSL extension. While SSL itself could be used for this, it would require the site to send a request to WordPress.org to validate each request, which would significantly increase round-trip times.

    Proxying requests would also increase both round-trip times and server load on WordPress.org, as all requests through these clients would need to pass through it.

    Malicious or compromised clients could be revoked on the WordPress.org side, and sites would cease to proxy their requests effective immediately.

  • WordPress.org would register the client on the site, on the client's behalf. The client would ask WordPress.org for a client key/secret for a site, at which point WordPress.org would ask the site to register a new client. The site would then validate the request with WordPress.org, create a client key/secret, and pass these back to WordPress.org, which would then pass them through to the client. The client would then authorize with the site using these credentials without contacting WordPress.org again.

    This system would still require the ability to check signatures; however, this could be done over HTTPS, as it would only be needed for the initial registration handshake. While this would increase the round-trip time for registration, it would be a one-time operation.

    Malicious or compromised clients could be revoked on either the WordPress.org side or the site side, if sites want to revoke specific clients. Clients revoked by WordPress.org would need to be revoked on each site as well. This could be done by checking WordPress.org on a timed interval (as with existing update checks), or by WordPress.org pinging each site manually.

    One approach here would be to combine the two revocation policies: normal revocations (deleted clients on WordPress.org, for example) could be handled by checking on a timed interval, while critical revocations (for example, compromised high-profile clients) could be achieved by pinging the sites directly.

The suggested approach here would be the latter, combined with both revocation policies.
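A minimal sketch of the suggested registration handshake follows. Everything in it is illustrative: the field names are invented for this example, and HMAC with a shared key stands in for the public-key signature the document describes (a real deployment would verify, e.g., an RSA or Ed25519 signature against a published WordPress.org public key).

```python
import hashlib
import hmac
import json
import secrets

# Stand-in for the asymmetric key pair; purely illustrative.
CENTRAL_KEY = b"hypothetical-shared-key"

def sign_registration(payload, key=CENTRAL_KEY):
    """Central-server side: serialize and sign a client-registration payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body, signature

def handle_registration(body, signature, key=CENTRAL_KEY):
    """Site side: verify the signature, then issue client credentials."""
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise PermissionError("invalid signature; registration rejected")
    client = json.loads(body)
    # Issue a fresh key/secret pair for this client.
    return {
        "client_key": "k-" + client["client_name"],
        "client_secret": secrets.token_hex(16),
    }
```

Because the signature is only checked during this one-time handshake, the per-request cost of signature verification disappears, which is the main advantage of the second approach over full proxying.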
