
Tom Crane (tomcrane)

tomcrane / mosaics.md
Created Mar 21, 2019
Mosaics content

As an editor, you have been given the task of making the slideshow seen here:

https://www.vam.ac.uk/articles/gilbert-mosaics

The materials you have been given to work with are a list of IIIF Image API endpoints. The thumbnails below are not to be used; they are only there so you can see which image is which. The only data to use is the Image Service URL.

It's a better test of the tools if you type at least some of the text by hand, creating links and styling where appropriate. However, there is quite a lot of text, so you could also copy and paste bits from the manifest directly. Either way, treat this text as if it were in a document given to you by a content editor.

https://iiif.vam.ac.uk/slideshows/gilbert/manifest.json
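
Below is a minimal Python sketch of pulling just the Image Service URLs out of that manifest, assuming it follows the usual IIIF Presentation 2.x shape (sequences, then canvases, then images, each with a resource carrying a service):

```python
import json
from urllib.request import urlopen

# Sketch, assuming IIIF Presentation 2.x structure:
# sequences -> canvases -> images -> resource -> service.
MANIFEST = "https://iiif.vam.ac.uk/slideshows/gilbert/manifest.json"

with urlopen(MANIFEST) as resp:
    manifest = json.load(resp)

for canvas in manifest["sequences"][0]["canvases"]:
    for image in canvas.get("images", []):
        service = image["resource"].get("service", {})
        # The service "@id" is the Image Service URL - the only data to use.
        print(canvas.get("label"), service.get("@id"))
```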

tomcrane / identifying_images.md
Last active Sep 7, 2018
Harvesting illustrations with nearby text

Many Wellcome works are printed books with illustrations, figures, tables and so on that are identified by OCR during digitisation.

Many of those will have nearby text that is likely to be a caption describing the image.

The IIIF representation offers a quick way of getting pixels for the region of the page occupied by the image, and could be a way of finding nearby text.

Example:
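
A minimal sketch of the idea, using the Image API 2.x URL syntax ({service}/{region}/{size}/{rotation}/{quality}.{format}); the service URL, coordinates and caption margin below are all hypothetical:

```python
# Hypothetical service URL and OCR-reported region, for illustration only.
service = "https://example.org/iiif/b12345678_0042"
x, y, w, h = 210, 340, 1480, 960  # pixel box the illustration occupies

# Pixels for just the illustration:
region_url = f"{service}/{x},{y},{w},{h}/full/0/default.jpg"

# Widening the box downwards is one crude way to pick up a caption
# that sits beneath the image (the 100px margin is an assumption):
caption_url = f"{service}/{x},{y},{w},{h + 100}/full/0/default.jpg"

print(region_url)
print(caption_url)
```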

tomcrane / keybase.md

Keybase proof

I hereby claim:

  • I am tomcrane on github.
  • I am tomcrane (https://keybase.io/tomcrane) on keybase.
  • I have a public key whose fingerprint is B254 DAA7 0D2A 9B8C 7697 4158 05D4 84F2 921F F0E8

To claim this, I am signing this object:

tomcrane / uv-discuss.md
Last active Mar 11, 2019
Universal Viewer Discussion Documents
tomcrane / wiki-iiif.md
Last active Sep 10, 2018
Wikimedia and IIIF

Wikimedia and IIIF

Why should Wikimedia implement IIIF? And what does that mean? There are several things they could do, some possibly quite straightforward.

Clarify two different things:

  1. Bring Wikimedia assets (images, audio, video) into shared IIIF annotation space, so that they become part of the IIIF Universe, can be annotated, referenced, mashed up, reused as Presentation API
  2. Allow image reuse via the IIIF Image API:
  • statically, via a level 0 (extreme level 0) service (see the sketch below)
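
For the static option, here is a hedged sketch of the kind of minimal level 0 info.json that could sit alongside pre-generated derivatives; the identifier and dimensions are invented:

```python
import json

# Invented identifier and dimensions; the point is the level0 profile
# plus an explicit "sizes" list, so every legal request can be served
# as a pre-rendered static file.
info = {
    "@context": "http://iiif.io/api/image/2/context.json",
    "@id": "https://upload.example.org/iiif/some-image",
    "protocol": "http://iiif.io/api/image",
    "profile": ["http://iiif.io/api/image/2/level0.json"],
    "width": 4000,
    "height": 3000,
    "sizes": [{"width": 250, "height": 188},
              {"width": 1000, "height": 750}],
}

with open("info.json", "w") as f:
    json.dump(info, f, indent=2)
```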
tomcrane / canvas-timing.md

@aeschylus - "I think as someone writing a more complex renderer, I would want to provide my own timer/ticker/clock/game loop and use eventedCanvas as a container for the information that comes from that process" - That makes sense to me, as far as my understanding of implementation goes. Whether it's Manifesto or eventedCanvas or a combination of both, something needs to hold the "scene graph" and support the renderer in the right place at the right time, where time is both relevant to the rendering of a static scene (Presentation 2.1, where timing is about marshaling requests and responses), and the rendering of a scene with a temporal extent (Presentation 3/AV, where timing is also about when a resource is annotated onto a canvas, as well as where). In Presentation 2.1, state changes come from the intial setup's requests and responses, and then user actions (switching between items in oa:Choice for example). In Presentation 3, all that still applies, but state changes can also come from timed events

tomcrane / iiifml.md
Created Jan 24, 2017
IIIFManifestLayouts, UV and Mirador

(trying to clarify my understanding)

iiifManifestLayouts (IIIFML) is a library that does several different things. The package as a whole has both the logic for organising the visibility and position of IIIF resources, and an implementation of this logic on top of OpenSeadragon (OSD). It makes two powerful features available to rich clients like UV and Mirador:

  1. It understands what a IIIF sc:Canvas is, and handles the rendering of it for you. This means that it understands all possible image annotation bodies on a canvas - dctypes:Image, oa:Choice and oa:SpecificResource, with or without IIIF Image services attached. It also makes use of thumbnails on sc:Canvas when present. It provides an API so that a client can generate UI to allow the user to control the visibility of all the available image resources - e.g. layers of multispectral views, foldouts, use of parts of other images, not just the whole (see the sketch below). It is a realisation of what I was scrabbling …
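
A hedged sketch of one piece of that understanding, with invented helper names: walking a Presentation 2.1 canvas and flattening its image annotation bodies, including oa:Choice, into a list a client could build layer-toggling UI from:

```python
def image_options(canvas):
    """Every image resource a user could toggle on this canvas."""
    options = []
    for anno in canvas.get("images", []):
        resource = anno["resource"]
        if resource.get("@type") == "oa:Choice":
            # The default layer plus its alternatives.
            options.append(resource["default"])
            options.extend(resource.get("item", []))
        else:
            # dctypes:Image or oa:SpecificResource, with or without
            # an attached IIIF Image service.
            options.append(resource)
    return options
```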
tomcrane / av-api.md
Last active Feb 7, 2017
The IIIF AV API and its relationship with the Presentation API

This gist has been superseded by this Google Doc:

http://bit.ly/av-issues

What are Audio and Video Content APIs?

This document starts with a quick recap of the IIIF Presentation and Image APIs, to help probe what is different for time-based media in the context of use cases and existing community practice for A/V today. It assumes that there is only one Presentation API; otherwise we can't have mixed-media manifests (a recurring use case and aspiration). It is based on the discussions in The Hague, in the AV GitHub issues and use cases, and on AV working group calls.

Introduction/recap - access to content via the Presentation API today

tomcrane / ida-thumbs.md

We’re about ready to start running all of the images into an IDA instance of the DLCS, so that they have IIIF endpoints and we can generate initial manifests for them.

One decision to make before we do this is the sizes of optimised thumbnail derivatives. When we build UI we can take advantage of the fact that the DLCS can provide some sizes more efficiently (i.e., quicker) than others.

You can see the effect of this here: http://tomcrane.github.io/wellcome-today/thumbs.html?manifest=http://library-uat.wellcomelibrary.org/iiif/b21978426/manifest

This UI makes use of the fact that Wellcome's material has 100, 200, 400 and 1024 pixel-bound derivatives available faster than an arbitrarily sized derivative, so it works entirely off those sizes.
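
A hedged sketch of the size-picking logic that UI implies (the function name and service URL are invented; the sizes come from the text above):

```python
PREFERRED = [100, 200, 400, 1024]  # longest-edge sizes the DLCS serves fast

def best_thumb_size(needed_px):
    """Smallest fast size that still covers the box we want to fill."""
    for size in PREFERRED:
        if size >= needed_px:
            return size
    return PREFERRED[-1]  # nothing bigger is fast; take the largest

service = "https://dlcs.example.org/iiif-img/some-image"  # hypothetical
size = best_thumb_size(240)  # -> 400
# Image API 2.x best-fit request within a size x size box:
url = f"{service}/full/!{size},{size}/0/default.jpg"
print(url)
```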

(If you have a play with this you will notice a bit of a delay in loading a new work – this is because either the UAT server has gone to sleep, or it's taking a second or two to generate a new manifest – the UAT server doesn't have any manifests cached, so most things are n…
