Tom Crane (tomcrane)

tomcrane / storagemap.md
Created Oct 15, 2019
StorageMap efficiency

(The answer to this might be completely different for the DDS on AWS - e.g., a massive location-resolving key-value store.)

Given a relative file path in a METS file (e.g., a JP2 or ALTO file), the DDS needs to resolve the S3 location of that file.

The simple way to do this is to load the storage manifest, find the entry corresponding to the path in METS, and deduce the S3 URI from it. But doing that for every request would be very inefficient.

So the DDS caches a StorageMap for each storage manifest, which is this object, serialised:

    [Serializable]
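
A minimal sketch of that idea, with hypothetical names rather than the actual DDS type: a serialisable dictionary keyed on the METS-relative path, holding enough to build an S3 URI without re-reading the storage manifest.

    // Hypothetical sketch only - not the real DDS class.
    // Built once from the storage manifest, serialised and cached; afterwards a
    // METS-relative path resolves straight to an S3 location.
    using System;
    using System.Collections.Generic;

    [Serializable]
    public class StorageMapSketch
    {
        public string Identifier { get; set; }   // e.g. the b number the map was built for
        public string Bucket { get; set; }       // S3 bucket holding the preserved files
        public Dictionary<string, string> Files { get; set; }  // relative METS path -> key in the bucket

        public string ResolveS3Uri(string relativePath)
        {
            if (Files.TryGetValue(relativePath, out var key))
            {
                return $"s3://{Bucket}/{key}";
            }
            throw new KeyNotFoundException($"No storage entry for {relativePath}");
        }
    }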
tomcrane / sorty-presley-omeka.txt
Created Oct 10, 2019
Sequence: Sorty, Presley and Omeka
title Sorty, Presley and Omeka
participant User
participant Sorty
participant Presley
participant Omeka
note over User, Sorty
Sorty instance is configured with a module for selecting
appropriate source manifests for the project. In the IDA case,
a module that pulls in the list of prepared reels.
end note
tomcrane / mosaics.md
Created Mar 21, 2019
Mosaics content

As an editor, you have been given the task of making the slideshow seen here:

https://www.vam.ac.uk/articles/gilbert-mosaics

The materials you have been given to work with are a list of IIIF Image API endpoints. The thumbnails below are not to be used; they are only there so you can see which image is which. The only data to use is the Image Service URL.

It's a better test of the tools if you type at least some of the text by hand, creating links and styling where appropriate. There is quite a lot of text, though, so you could also copy and paste bits from the manifest directly; either way, treat this text as if it were in a document given to you by a content editor.

https://iiif.vam.ac.uk/slideshows/gilbert/manifest.json
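
If you want to pull the Image Service URLs (and labels) out of the manifest programmatically rather than by hand, here is a rough sketch, assuming the manifest follows IIIF Presentation 2 conventions (sequences > canvases > images > resource.service) and using Newtonsoft.Json:

    // Rough sketch: list the canvas labels and image service endpoints from the
    // slideshow manifest, assuming a IIIF Presentation 2 structure.
    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Newtonsoft.Json.Linq;

    class ListImageServices
    {
        static async Task Main()
        {
            var url = "https://iiif.vam.ac.uk/slideshows/gilbert/manifest.json";
            using var http = new HttpClient();
            var manifest = JObject.Parse(await http.GetStringAsync(url));

            foreach (var canvas in manifest["sequences"][0]["canvases"])
            {
                var label = canvas["label"];
                var service = canvas["images"][0]["resource"]["service"]["@id"];
                Console.WriteLine($"{label}: {service}");
            }
        }
    }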

tomcrane / identifying_images.md
Last active Sep 7, 2018
Harvesting illustrations with nearby text

Many Wellcome works are printed books with illustrations, figures, tables and so on that are identified by OCR during digitisation.

Many of those will have nearby text that is likely to be a caption describing the image.

The IIIF representation offers a quick way of getting pixels for the region of the page occupied by the image, and could be a way of finding nearby text.

Example:
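
For instance (identifier and coordinates made up), an OCR-reported bounding box maps directly onto the Image API's region parameter, so a single request returns just the illustration:

    // Illustration only: the image service identifier and coordinates are made up.
    // The region OCR reports for an illustration becomes the {region} part of a
    // IIIF Image API request: {service}/{region}/{size}/{rotation}/{quality}.{format}
    using System;

    class RegionRequest
    {
        static string CropUrl(string imageService, int x, int y, int w, int h) =>
            $"{imageService}/{x},{y},{w},{h}/full/0/default.jpg";

        static void Main()
        {
            var service = "https://iiif.example.org/image/b12345678_0042.jp2";
            Console.WriteLine(CropUrl(service, 310, 420, 1650, 1180));
            // -> https://iiif.example.org/image/b12345678_0042.jp2/310,420,1650,1180/full/0/default.jpg
        }
    }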

tomcrane / keybase.md

Keybase proof

I hereby claim:

  • I am tomcrane on github.
  • I am tomcrane (https://keybase.io/tomcrane) on keybase.
  • I have a public key whose fingerprint is B254 DAA7 0D2A 9B8C 7697 4158 05D4 84F2 921F F0E8

To claim this, I am signing this object:

tomcrane / uv-discuss.md
Last active Mar 11, 2019
Universal Viewer Discussion Documents
tomcrane / wiki-iiif.md
Last active Sep 10, 2018
Wikimedia and IIIF

Wikimedia and IIIF

Why should Wikimedia implement IIIF? And what does that mean? There are several things they could do, some possibly quite straightforward.

Clarify two different things:

  1. Bring Wikimedia assets (images, audio, video) into shared IIIF annotation space, so that they become part of the IIIF Universe, can be annotated, referenced, mashed up, reused as Presentation API
  2. Allow image reuse via the IIIF Image API:
  • statically, via a level 0 (extreme level 0) service (see the sketch below)
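
On that last point: a level 0 ("extreme level 0") service can be nothing more than pre-generated files laid out on the Image API URL pattern, so no image server is needed at request time. A sketch of the URLs such a static service might expose for a fixed set of widths (identifier and sizes made up):

    // Sketch: a "level 0" IIIF Image Service can just be static files, pre-generated
    // at fixed sizes and arranged on the Image API URL pattern.
    // The identifier and widths below are made up for illustration.
    using System;

    class StaticLevel0
    {
        static void Main()
        {
            var serviceId = "https://upload.example.org/iiif/File:Some_Image.jpg";
            var widths = new[] { 100, 200, 400, 800, 1600 };

            foreach (var w in widths)
            {
                // Canonical Image API 2.1 form for a width-constrained size is "{w},"
                Console.WriteLine($"{serviceId}/full/{w},/0/default.jpg");
            }
        }
    }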
tomcrane / canvas-timing.md

@aeschylus - "I think as someone writing a more complex renderer, I would want to provide my own timer/ticker/clock/game loop and use eventedCanvas as a container for the information that comes from that process" - That makes sense to me, as far as my understanding of implementation goes. Whether it's Manifesto or eventedCanvas or a combination of both, something needs to hold the "scene graph" and support the renderer in the right place at the right time. Time is relevant both to the rendering of a static scene (Presentation 2.1, where timing is about marshaling requests and responses) and to the rendering of a scene with a temporal extent (Presentation 3/AV, where timing is also about when a resource is annotated onto a canvas, as well as where). In Presentation 2.1, state changes come from the initial setup's requests and responses, and then from user actions (switching between items in oa:Choice, for example). In Presentation 3, all that still applies, but state changes can also come from timed events.
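
A toy sketch of the kind of structure this implies (invented names, not Manifesto's or eventedCanvas's actual API): each annotation in the canvas's scene graph records where its resource is painted and, for AV, when, so a renderer driven by its own clock/ticker can ask what should be on the canvas at time t.

    // Toy sketch only: invented types, not the Manifesto / eventedCanvas API.
    // A scene graph entry records the spatial target of an annotation and, for
    // Presentation 3 / AV content, its temporal extent on the canvas.
    using System.Collections.Generic;
    using System.Linq;

    public record SceneAnnotation(
        string ResourceId,      // the image/audio/video/text being painted
        string SpatialTarget,   // e.g. "xywh=0,0,1000,750", or null for the whole canvas
        double? Start = null,   // seconds into the canvas duration, if timed
        double? End = null);

    public class CanvasScene
    {
        public List<SceneAnnotation> Annotations { get; } = new();

        // What a renderer, driven by its own clock/game loop, needs at time t:
        public IEnumerable<SceneAnnotation> ActiveAt(double t) =>
            Annotations.Where(a =>
                (a.Start is null || a.Start <= t) &&
                (a.End is null || t < a.End));
    }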
