@ktk
Last active August 23, 2016 15:43
Content-Location instead of 303

Ceci n'est pas une pipe

I got asked on Twitter how I would use Content-Location with Linked Data, see https://twitter.com/elfpavlik/status/605693139119153153.

I'm referring to my remark from the Trifid-LD Readme:

Trifid-LD does not care about HTTPRange-14 and neither should you. We consider the extra 303 redirect round-trip a waste of precious response time. You get back what you asked for in content-negotiation.

As you can see later in the text, we did think about the Content-Location header but haven't implemented it so far. The reason is pretty simple: while I understand the problem (up to a point at least) on a meta-level, I fail to see where this is an issue in the real world. I'm just an engineer and maybe I'm oversimplifying things here, but as long as no one gives me a clear explanation of why this will lead to real-world problems, I will continue to ignore HTTPRange-14. Also, I'm not a native English speaker, so some of the discussions were quite hard for me to understand; there is a fair chance I missed the point somewhere.

The 303 solution has constantly annoyed me in practice, as it causes more confusion than anything else for people with little or no Linked Data background. If you don't believe me, try giving an introduction to Linked Data to "normal" people and explain why the URI they just copy/pasted from the DBpedia web interface does not work in the SPARQL query. When I tell them that /page/ is not the same as /resource/ because some assume this could lead to issues, they probably think I'm insane. Besides this, I am interested in speed, and doing an extra round-trip is, as I mentioned, just a waste of time.
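To make the extra round-trip concrete, here is a minimal client-side sketch using Python's requests library and the DBpedia setup mentioned above; the exact status codes and redirect targets are whatever the server decides, so treat it as an illustration rather than a spec:

```python
import requests

# The "thing" URI as copy/pasted from the DBpedia browser interface.
thing = "http://dbpedia.org/resource/Berlin"

# Ask for RDF but do not follow redirects, so the 303 hop becomes visible.
first = requests.get(
    thing,
    headers={"Accept": "text/turtle"},
    allow_redirects=False,
)
print(first.status_code)              # typically 303 See Other; some deployments add other hops first
print(first.headers.get("Location"))  # a different URI: the describing document

# Only a second request actually returns the data: one extra round-trip.
second = requests.get(first.headers["Location"], headers={"Accept": "text/turtle"})
print(second.status_code)             # 200 OK
```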

So one day (and a long evening/night) I started to go through endless amounts of mail around the topic in the quest for a better solution. Unfortunately I can't find the notes I took back then anymore, but I did find some postings where people propose to use Content-Location instead. IIRC some of the LD gods like Kingsley agreed that this might be a viable alternative. Googling for it, I found an older post by Ian Davis, see here and a related post here.

This is pretty much what I propose as well, with the only difference that I wonder whether Step 3 in his description, which says "include a triple in the body of the response whose subject is the URI of your thing", is really necessary.

And again, I doubt it matters in the real world. I will ignore it until someone gives me a really good reason why I should care.
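Concretely, here is a minimal sketch of the Content-Location variant on the server side, as plain Python WSGI with made-up URIs (this is not Trifid-LD's code; I include the Step 3 triple as well so both variants are covered):

```python
from wsgiref.simple_server import make_server

BASE = "http://example.org"  # hypothetical namespace, for illustration only

def app(environ, start_response):
    thing_uri = BASE + environ["PATH_INFO"]   # e.g. http://example.org/resource/berlin
    doc_uri = thing_uri + ".ttl"              # the document that describes the thing

    # One triple whose subject is the thing (Ian Davis' Step 3),
    # one triple about the describing document itself.
    body = (
        f'<{thing_uri}> <http://www.w3.org/2000/01/rdf-schema#label> "Berlin" .\n'
        f'<{doc_uri}> <http://xmlns.com/foaf/0.1/primaryTopic> <{thing_uri}> .\n'
    ).encode("utf-8")

    # Answer 200 directly, no 303 round-trip; Content-Location names the document.
    start_response("200 OK", [
        ("Content-Type", "text/turtle"),
        ("Content-Location", doc_uri),
        ("Content-Length", str(len(body))),
    ])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```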

ktk commented Jun 7, 2015

elf Pavlik pointed me to this post from Kingsley: https://lists.w3.org/Archives/Public/public-webid/2013Mar/0057.html

pietercolpaert commented Aug 20, 2016

There is not always a one-to-one relation between real-world objects and the documents that describe them (multiple documents can describe one resource, and one document can describe multiple resources). Imagine, for instance, creating an ontology for all terms within GTFS, where one document is used to introduce the new identifiers for those terms. The Linked Data principles state that these identifiers have to be HTTP URIs, and that when you resolve such a URI, you actually get useful information about the resource. It seems like a feasible solution to have all URIs just redirect to this document, or to use hash URIs with the URL of the document as the base, as implemented by http://vocab.gtfs.org/terms (the two solutions to Range-14).
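As a sketch of why the hash-URI variant needs no special server setup: the fragment never reaches the server, so every term URI leads to the same vocabulary document (the term name below is chosen for illustration; check the vocabulary for the actual identifiers):

```python
from urllib.parse import urldefrag
import requests

# A term URI in the GTFS vocabulary (illustrative; the real term names may differ).
term = "http://vocab.gtfs.org/terms#Route"

# The client strips the fragment before sending the request...
document_url, fragment = urldefrag(term)
print(document_url)  # http://vocab.gtfs.org/terms
print(fragment)      # Route

# ...so the server only ever sees one URL: the vocabulary document.
response = requests.get(document_url, headers={"Accept": "text/turtle"})
print(response.status_code, response.headers.get("Content-Type"))
```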

This end result is pretty straightforward for a data publisher and does not require special server infrastructure: I only hosted a file and/or configured redirects. As a reuser I can now look up these identifiers by just resolving a web address and following links to more interesting resources. Furthermore, in the GTFS example, I can now say in RDF that the document I created is created by me and that it has a CC license attached to it, which would not be possible to say about the terms or the ontology (that's why we need the disambiguation and create different URIs instead of just one URL).
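A small rdflib sketch of that disambiguation, with made-up URIs modelled on the hash-URI pattern: the license and authorship attach to the document, the definition attaches to the term, and this only works because the two have different URIs:

```python
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS, RDFS

doc = URIRef("http://vocab.example.org/terms")         # the document (one URL)
term = URIRef("http://vocab.example.org/terms#Route")  # a thing defined in it

g = Graph()

# Statements about the document: who made it, under which license.
g.add((doc, DCTERMS.creator, Literal("Pieter Colpaert")))
g.add((doc, DCTERMS.license, URIRef("http://creativecommons.org/licenses/by/4.0/")))

# Statements about the term itself: these must not be confused with the document.
g.add((term, RDFS.label, Literal("Route", lang="en")))
g.add((term, RDFS.isDefinedBy, doc))

print(g.serialize(format="turtle"))
```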

Another example with 303 instead of # can be found in LDF for identifiers for all of Bach's works: http://data.linkeddatafragments.org/bwv/classifications/10
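For completeness, a quick sketch of how a client sees that 303 (the exact redirect target is whatever the server returns):

```python
import requests

work = "http://data.linkeddatafragments.org/bwv/classifications/10"

# Stop at the first hop to make the 303 and its target visible.
response = requests.get(work, allow_redirects=False)
print(response.status_code)              # expected: 303 See Other
print(response.headers.get("Location"))  # the document describing the work
```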

In your case, your project explicitly creates a page per resource. As long as you do not run into the problem of having to say something about the document describing what your URI identifies, you will not need any special engineering solution, and nobody will complain (http://schema.org does what you do as well). However, stating that nobody should care about Range-14 will just add confusion about a rather simple issue.

Regarding the extra round trip: I have not yet seen an end-user application that needs fast response times resolve URIs for things in order to find the next document. In a user agent you would generally use the links/affordances in documents to get to your next document. You will only use the URIs as unique identifiers for things, solving semantic interoperability for the identifiers used in the documents you have just discovered.

Regarding the error in your explanation: it's not the URI and the representation that are the problem, it's the real-world object and the representation. Both of the latter are resources that are identified by a web address.
