Tom Crane (tomcrane)

tomcrane /
Created Jan 24, 2017
IIIFManifestLayouts, UV and Mirador

(trying to clarify my understanding)

iiifManifestLayouts (IIIFML) is a library that does several different things. The package as a whole has both the logic for organising the visibility and position of IIIF resources, and an implementation of this logic on top of OpenSeadragon (OSD). It makes two powerful features available to rich clients like UV and Mirador:

  1. It understands what a IIIF sc:Canvas is, and handles the rendering of it for you. This means that it understands all possible image annotation bodies on a canvas - dctypes:Image, oa:Choice and oa:SpecificResource, with or without IIIF Image services attached. It also makes use of thumbnails on sc:Canvas when present. It provides an API so that a client can generate UI to allow the user to control the visibility of all the available image resources - e.g. layers of multispectral views, foldouts, use of parts of other images and not just the whole. It is a realisation of what I was scrabbling
tomcrane /
Last active Feb 7, 2017
The IIIF AV API and its relationship with the Presentation API

This gist has been superseded by this Google Doc:

What are Audio and Video Content APIs?

This document starts with a quick recap of IIIF Presentation and Image APIs, to help probe what is different for time-based media in the context of use cases and existing community practice for A/V today. It assumes that there is only one Presentation API, otherwise we can't have mixed-media manifests (a recurring use case and aspiration). It is based on the discussions in The Hague, in the AV Github issues and use cases, and on AV working group calls.

Introduction/recap - access to content via the Presentation API today


We’re about ready to start running all of the images into an IDA instance of the DLCS, so that they have IIIF endpoints and we can generate initial manifests for them.

One decision to make before we do this is the sizes of optimised thumbnail derivatives. When we build UI we can take advantage of the fact that the DLCS can provide some sizes more efficiently (i.e., quicker) than others.

You can see the effect of this here: This UI makes use of the fact that Wellcome’s material has 100, 200, 400 and 1024 pixel-bound derivatives available faster than an arbitrary sized derivative, so it works entirely off those sizes.
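The idea above can be sketched as a small helper. This is a hypothetical illustration, not DLCS code: it assumes the 100, 200, 400 and 1024 pixel-bound derivatives mentioned in the text, and requests the smallest of those that covers what the UI needs, rather than asking the image server for an arbitrary size.

```javascript
// The pre-generated, "fast" pixel-bound sizes described above (assumed).
const fastSizes = [100, 200, 400, 1024];

// Pick the smallest fast size that is at least as big as the size the
// UI needs; fall back to the largest available if nothing is big enough.
function pickThumbnailSize(requiredPx) {
  const match = fastSizes.find(s => s >= requiredPx);
  return match !== undefined ? match : fastSizes[fastSizes.length - 1];
}

// Build a IIIF Image API request confined to that size, using the
// "!w,h" (best fit within) size syntax.
function thumbnailUrl(imageServiceId, requiredPx) {
  const size = pickThumbnailSize(requiredPx);
  return `${imageServiceId}/full/!${size},${size}/0/default.jpg`;
}
```

A UI that only ever asks for these sizes stays on the optimised path; `pickThumbnailSize(350)` yields 400, for example, rather than a slow arbitrary 350-pixel derivative.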

(If you have a play with this you will notice a bit of a delay in loading a new work – this is because either the UAT server has gone to sleep, or it’s taking a second or two to generate a new manifest – the UAT server doesn’t have any manifests cached so most things are n


Some examples about how your objects in IIIF (the bridge to the Human Presentation API) link to semantic description (your model, or other shared models). The IIIF manifests for all "catalogue" objects would link to a semantic description via the seeAlso property:


A link to a machine readable document that semantically describes the resource with the seeAlso property, such as an XML or RDF description. This document could be used for search and discovery or inferencing purposes, or just to provide a longer description of the resource. The profile and format properties of the document should be given to help the client to make appropriate use of the document.
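A sketch of what this might look like on a manifest (Presentation API 2.x style; the URLs, format and profile values here are illustrative, not taken from any actual data):

```json
{
  "@id": "https://example.org/iiif/object-1/manifest",
  "@type": "sc:Manifest",
  "label": "Catalogue object 1",
  "seeAlso": {
    "@id": "https://example.org/rdf/object-1.ttl",
    "format": "text/turtle",
    "profile": "https://example.org/profiles/catalogue-model"
  }
}
```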

While your catalogue Manifests link to some experimental RDF, your IIIF Collections currently don't link to seeAlso resources, but they should, especially when you start making ad hoc curated collections. And similarly for ad hoc manifests.

This strong link between the Presentation API object that people "see" and the semantic descrip


Updated 2018 - Please see for later developments. The conversation on this gist is kept here for its usefulness to posterity...

There is a reusable component that sits between tile renderers like OpenSeadragon (OSD) and manifest-level viewers. This component can be reused in very simple IIIF-powered applications - a few lines of JavaScript - but is sophisticated enough to form the basis of custom applications with complex layout and annotation rendering requirements.

I'm not quite sure what its external interface looks like yet - but it's the component I'd reach for to make an ad-hoc IIIF powered mini-application instead of OSD because it understands canvases directly. Under the hood it's wrapping OSD (or another tile renderer).

This is a component that provides a surface for rendering one or more sc:Canvas objects and the image annotations on them. So it's both a library (or combination of libraries) and a UI component that takes up screen space. It takes

auth-flow.txt
participant user
participant CAS
user->CAS: select something, GET info.json
note left of CAS: HTTP 401
tomcrane /
Last active May 9, 2016
Serendipitous Collection Browsing

Reqt - serendipitous discovery via membership of IIIF collections

(tag-based browsing - this kind of thing:

This is not part of the IIIF spec, as it isn't presentation semantics. It would be really useful to add an extra piece of metadata to tell me why a resource is a member of a particular collection.
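One possible shape for that extra piece of metadata, sketched on a collection member: the `x-reason` key here is invented for illustration and is deliberately outside the IIIF spec, since this isn't presentation semantics.

```json
{
  "@id": "https://example.org/iiif/collection/dogs",
  "@type": "sc:Collection",
  "label": "Dogs",
  "members": [
    {
      "@id": "https://example.org/iiif/object-1/manifest",
      "@type": "sc:Manifest",
      "label": "A walk in the park",
      "x-reason": "Tagged 'dog' by a curator"
    }
  ]
}
```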

tomcrane /
Last active Apr 1, 2016
UV / Shared Canvas

The current 2-up implementation was developed from the British Library's materials - but in those cases the facing pages were already cropped and framed to give a consistent viewing experience, so the problem wasn't seen. I suspect it would have been before long.

What's happening currently: the UV code is positioning the recto tile source on the canvas so that it aligns with the top of the verso image, and adds a small gutter between them (the size of this gutter is configurable, the current value is the result of user testing at the BL). What happens next is down to the OpenSeadragon library [0] that we use to render the deep zoom images. Currently this is what is scaling the images to be the same width. We need to prevent it doing that.
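One way to prevent it: compute explicit widths for the pair so both pages render at the same height, instead of letting OSD default them to the same width. This is a sketch of the geometry only, with illustrative pixel dimensions; the resulting `{x, y, width}` objects map onto the options OpenSeadragon's `addTiledImage` accepts (width is in viewport coordinates).

```javascript
// Given full pixel dimensions of a verso/recto pair, compute placements
// (verso displayed at width 1.0 in viewport units) so that the recto is
// scaled to the SAME HEIGHT as the verso, not the same width.
function twoUpLayout(verso, recto, gutter = 0.05) {
  // Verso is displayed at width 1.0, so its displayed height is
  // height/width in viewport units.
  const targetHeight = verso.height / verso.width;
  // Scale the recto's width so its displayed height matches the verso's.
  const rectoWidth = (recto.width / recto.height) * targetHeight;
  return {
    verso: { x: 0, y: 0, width: 1.0 },
    recto: { x: 1.0 + gutter, y: 0, width: rectoWidth },
  };
}
```

For a tall verso of 1000×2000 and a recto of 1500×2000, this gives the recto a viewport width of 1.5, keeping both page images the same displayed height with the gutter between them.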

Quick fix - The only metadata the UV has to go on is that the pair of images are the verso of one page and recto of the following page - and even that information is inferred from the sequence rather than explicit in the metadata (but see

  • Does OSD handle an info.json with sizes only (no tiles)?
  • If so, use shimmy to generate info.json from Flickr sizes
  • Requires a proxy to rewrite IIIF -> Flickr image requests
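A sizes-only info.json of the kind the first bullet asks about might look like this (level 0 profile, no tiles block; the dimensions and sizes here are illustrative stand-ins for whatever Flickr exposes):

```json
{
  "@context": "http://iiif.io/api/image/2/context.json",
  "@id": "https://example.org/shimmy/flickr/12345",
  "protocol": "http://iiif.io/api/image",
  "width": 2048,
  "height": 1536,
  "sizes": [
    { "width": 320, "height": 240 },
    { "width": 1024, "height": 768 },
    { "width": 2048, "height": 1536 }
  ],
  "profile": ["http://iiif.io/api/image/2/level0.json"]
}
```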

Make shimmy -> DLCS integration
