
@tomcrane
tomcrane / wiki-iiif.md
Last active January 20, 2021 17:09
Wikimedia and IIIF


Why should Wikimedia implement IIIF? And what does that mean? There are several things they could do, some possibly quite straightforward.

Clarify two different things:

  1. Bring Wikimedia assets (images, audio, video) into shared IIIF annotation space, so that they become part of the IIIF Universe, can be annotated, referenced, mashed up, reused as Presentation API
  2. Allow image reuse via the IIIF Image API:
  • statically, via level 0 (extreme level 0) service
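A level 0 service exposes only pre-generated derivatives, so the whole thing can be served as static files with no image server at all. A minimal sketch of what such an info.json might look like (the URL and the particular sizes here are invented for illustration):

```json
{
  "@context": "http://iiif.io/api/image/2/context.json",
  "@id": "https://example.org/iiif/wiki-image",
  "protocol": "http://iiif.io/api/image",
  "width": 4000,
  "height": 3000,
  "sizes": [
    { "width": 400, "height": 300 },
    { "width": 1024, "height": 768 }
  ],
  "profile": [ "http://iiif.io/api/image/2/level0.json" ]
}
```

A client reads the `sizes` list and requests only those derivatives, which can be pre-baked files on a plain web server or CDN.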

@aeschylus - "I think as someone writing a more complex renderer, I would want to provide my own timer/ticker/clock/game loop and use eventedCanvas as a container for the information that comes from that process" - That makes sense to me, as far as my understanding of implementation goes. Whether it's Manifesto or eventedCanvas or a combination of both, something needs to hold the "scene graph" and support the renderer in the right place at the right time, where time is relevant both to the rendering of a static scene (Presentation 2.1, where timing is about marshaling requests and responses) and to the rendering of a scene with a temporal extent (Presentation 3/AV, where timing is also about when a resource is annotated onto a canvas, as well as where). In Presentation 2.1, state changes come from the initial setup's requests and responses, and then user actions (switching between items in oa:Choice, for example). In Presentation 3, all that still applies, but state changes can also come from timed events.
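To make that concrete, here is a hedged sketch (EventedCanvas here is a stand-in name, not the real library's API): the canvas holds the scene graph of annotations, each with an optional temporal extent, and an externally owned clock or game loop calls `tick` to drive state changes.

```javascript
// Hypothetical sketch, not a real library API: the canvas is a container
// for annotations; whatever loop the renderer owns (rAF, game loop, test
// clock) calls tick(t) and is told which annotations are active at time t.
class EventedCanvas {
  constructor(annotations) {
    this.annotations = annotations; // [{ body, region?, start?, end? }]
    this.listeners = [];
  }
  onChange(fn) { this.listeners.push(fn); }
  tick(t) {
    // Annotations with no temporal extent (Presentation 2.1 style) are
    // always active; timed ones (Presentation 3/AV) only within [start, end).
    const active = this.annotations.filter(
      a => a.start === undefined || (t >= a.start && t < a.end)
    );
    this.listeners.forEach(fn => fn(active, t));
  }
}

// A mixed scene: one static image, one timed annotation.
const canvas = new EventedCanvas([
  { body: 'image-1.jpg', region: 'full' },   // static, always visible
  { body: 'caption.vtt', start: 2, end: 5 }  // only active from t=2 to t=5
]);
canvas.onChange((active, t) => console.log(t, active.map(a => a.body)));
canvas.tick(0); // only image-1.jpg active
canvas.tick(3); // both active
```

User actions (such as switching an oa:Choice item) would mutate the annotation list or visibility flags between ticks; the renderer only ever consumes the filtered `active` set.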

@tomcrane
tomcrane / iiifml.md
Created January 24, 2017 09:29
IIIFManifestLayouts, UV and Mirador

(trying to clarify my understanding)

iiifManifestLayouts (IIIFML) is a library that does several different things. The package as a whole has both the logic for organising the visibility and position of IIIF resources, and an implementation of this logic on top of OpenSeadragon (OSD). It makes two powerful features available to rich clients like UV and Mirador:

  1. It understands what a IIIF sc:Canvas is, and handles the rendering of it for you. This means that it understands all possible image annotation bodies on a canvas - dctypes:Image, oa:Choice and oa:SpecificResource, with or without IIIF Image services attached. It also makes use of thumbnails on sc:Canvas when present. It provides an API so that a client can generate UI to allow the user to control the visibility of all the available image resources - e.g. layers of multispectral views, foldouts, use of parts of other images not just the whole. It is a realisation of what I was scrabbling
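The body-handling logic in point 1 could be sketched roughly like this - a simplification for illustration, not IIIFML's actual code; the property names follow IIIF Presentation 2.1 / Open Annotation:

```javascript
// Flatten a sc:Canvas's image annotations (including oa:Choice and
// oa:SpecificResource bodies) into a list of drawable image resources,
// each with a visibility flag the client's UI can toggle.
function imageResourcesForCanvas(canvas) {
  const results = [];
  for (const anno of canvas.images || []) {
    collect(anno.resource, results, true);
  }
  return results;
}

function collect(resource, results, visible) {
  if (!resource) return;
  switch (resource['@type']) {
    case 'oa:Choice':
      // The default item is shown; alternatives start as hidden layers.
      collect(resource.default, results, visible);
      for (const item of [].concat(resource.item || [])) {
        collect(item, results, false);
      }
      break;
    case 'oa:SpecificResource':
      // e.g. a selected part of another image painted onto this canvas.
      collect(resource.full, results, visible);
      break;
    default: // dctypes:Image
      results.push({
        id: resource['@id'],
        service: resource.service ? resource.service['@id'] : null,
        visible
      });
  }
}
```

A renderer built on this list only has to draw `visible` entries, while the UI layer exposes the rest (multispectral layers, foldouts) for the user to switch on.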
@tomcrane
tomcrane / av-api.md
Last active July 2, 2021 12:00
The IIIF AV API and its relationship with the Presentation API

This gist has been superseded by this Google Doc:

http://bit.ly/av-issues

What are Audio and Video Content APIs?

This document starts with a quick recap of the IIIF Presentation and Image APIs, to help probe what is different for time-based media in the context of use cases and existing community practice for A/V today. It assumes that there is only one Presentation API, otherwise we can't have mixed-media manifests (a recurring use case and aspiration). It is based on the discussions in The Hague, in the AV GitHub issues and use cases, and on AV working group calls.

Introduction/recap - access to content via the Presentation API today

We’re about ready to start running all of the images into an IDA instance of the DLCS, so that they have IIIF endpoints and we can generate initial manifests for them.

One decision to make before we do this is the sizes of optimised thumbnail derivatives. When we build UI we can take advantage of the fact that the DLCS can provide some sizes more efficiently (i.e., quicker) than others.

You can see the effect of this here:

http://tomcrane.github.io/wellcome-today/thumbs.html?manifest=http://library-uat.wellcomelibrary.org/iiif/b21978426/manifest

This UI makes use of the fact that Wellcome's material has 100, 200, 400 and 1024 pixel-bound derivatives available faster than an arbitrarily sized derivative, so it works entirely off those sizes.
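The size-picking trick behind that UI can be sketched as follows - the function name is invented, and the sizes are the Wellcome ones mentioned above:

```javascript
// Pick the smallest pre-generated derivative that is at least as large as
// the box we want to fill, so every request hits an optimised (fast) size
// rather than asking the image server to resize on the fly.
const FAST_SIZES = [100, 200, 400, 1024]; // pixel-bound derivatives, ascending

function bestThumbnailSize(targetPx) {
  const fit = FAST_SIZES.find(s => s >= targetPx);
  // Nothing big enough: fall back to the largest fast size.
  return fit !== undefined ? fit : FAST_SIZES[FAST_SIZES.length - 1];
}

bestThumbnailSize(150);  // → 200
bestThumbnailSize(2000); // → 1024
```

The decision we need to make up front is what goes in that `FAST_SIZES` list, since it fixes which requests the DLCS can answer quickly.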

(If you have a play with this you will notice a bit of a delay in loading a new work – this is because either the UAT server has gone to sleep, or it’s taking a second or two generating a new manifest – the UAT server doesn’t have any manifests cached so most things are n

Some examples of how your objects in IIIF (the bridge to the Human Presentation API) link to semantic descriptions (your model, or other shared models). The IIIF manifests for all "catalogue" objects would link to a semantic description via the seeAlso property:

seeAlso

A link to a machine readable document that semantically describes the resource with the seeAlso property, such as an XML or RDF description. This document could be used for search and discovery or inferencing purposes, or just to provide a longer description of the resource. The profile and format properties of the document should be given to help the client to make appropriate use of the document.

While your catalogue Manifests link to some experimental RDF, your IIIF Collections currently don't link to seeAlso resources, but they should, especially when you start making ad hoc curated collections. And similarly for ad hoc manifests.
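For illustration, a manifest carrying such a link might look like this - the URLs and the profile value are invented, but the shape follows Presentation 2.1:

```json
{
  "@context": "http://iiif.io/api/presentation/2/context.json",
  "@id": "https://example.org/iiif/object-1/manifest",
  "@type": "sc:Manifest",
  "label": "A catalogue object",
  "seeAlso": {
    "@id": "https://example.org/rdf/object-1.ttl",
    "format": "text/turtle",
    "profile": "https://example.org/profiles/my-model"
  }
}
```

The same `seeAlso` block on a sc:Collection would give a curated set its own semantic description.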

This strong link between the Presentation API object that people "see" and the semantic descrip

Updated 2018 - Please see http://canvas-panel.digirati.com/#/about for later developments. The conversation on this gist is kept here for its usefulness to posterity...


There is a reusable component that sits between tile renderers like OpenSeadragon (OSD) and manifest-level viewers. This component can be reused in very simple IIIF-powered applications - a few lines of JavaScript - but is sophisticated enough to form the basis of custom applications with complex layout and annotation rendering requirements.

I'm not quite sure what its external interface looks like yet - but it's the component I'd reach for to make an ad-hoc IIIF powered mini-application instead of OSD because it understands canvases directly. Under the hood it's wrapping OSD (or another tile renderer).

This is a component that provides a surface for rendering one or more sc:Canvas objects and the image annotations on them. So it's both a library (or combination of libraries) and a UI component that takes up screen space. It takes

```
participant user
participant client.org
participant content.org
participant CAS
user->client.org: select something
client.org->content.org: GET info.json
note left of content.org
HTTP 401
unauthorized
```
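The client's side of that 401 step can be sketched as a pure function - a simplification for illustration; the profile URIs are the ones defined by the IIIF Auth 1.0 specification, everything else here is invented:

```javascript
// On a 401, the image server still returns an info.json body; the client
// reads the auth service description from it to discover where to send the
// user (login) and where to fetch an access token afterwards.
function authServicesFromInfo(info) {
  const services = [].concat(info.service || []);
  const login = services.find(
    s => (s.profile || '').indexOf('http://iiif.io/api/auth/1/login') === 0
  );
  if (!login) return null; // no auth service advertised
  const token = [].concat(login.service || []).find(
    s => s.profile === 'http://iiif.io/api/auth/1/token'
  );
  return {
    loginService: login['@id'],
    tokenService: token ? token['@id'] : null
  };
}
```

With those two URLs in hand, the client opens the login service for the user, then calls the token service and retries the info.json request with the token.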
@tomcrane
tomcrane / collections-browse.md
Last active May 9, 2016 11:51
Serendipitous Collection Browsing

Reqt - serendipitous discovery via membership of IIIF collections

(tag-based browsing - this kind of thing: http://iangilman.com/openseadragon/flickr/)

This is not part of the IIIF spec, as it isn't presentation semantics. It would be really useful to add an extra piece of metadata to tell me why a resource is a member of a particular collection.

{