What are we doing here?
Combo of text from the Google doc and text from the process recipe/index.md
https://docs.google.com/document/d/1DEwDTU_PGV_1mXYJlAOtADxwfnqfov2gRO7NbMfl0t8/edit https://preview.iiif.io/cookbook/master/recipe/
```json
{
  "@context": "http://iiif.io/api/presentation/4/context.json",
  "id": "https://example.org/iiif/3d/manifest",
  "type": "Manifest",
  "label": { "en": [ "I am 3D" ] },
  // all your favourite IIIF props are exactly the same
  "behavior": [ "some", "new", "behaviors", "probably" ],
  "items": [
    {
```
This gist has been superseded by this Google Doc:
This document starts with a quick recap of IIIF Presentation and Image APIs, to help probe what is different for time-based media in the context of use cases and existing community practice for A/V today. It assumes that there is only one Presentation API, otherwise we can't have mixed-media manifests (a recurring use case and aspiration). It is based on the discussions in The Hague, in the AV Github issues and use cases, and on AV working group calls.
Why should Wikimedia implement IIIF? And what does that mean? There are several things they could do, some possibly quite straightforward.
Clarify two different things:
New version in Google doc
https://docs.google.com/document/d/1l7tjSrn7CDeYWp_Z_Id3BAIhWktwReXvkqhzPhu-sp0/edit#
We want to move the Wellcome Library away from the Wellcome Player and onto the IIIF 2.0 Universal Viewer (UV).
This allows us to move all the Wellcome Library's image API endpoints to the prototype DLCS (Digital Library Cloud Services) that we have started building.
We have a problem. We have video, audio and born-digital content, besides image sequence content. We don't want to maintain the Player and the UV together. This non-image content is a tiny fraction of the total, but an important one.
Other institutions share this problem, and everyone agrees that IIIF will need to extend to handle non-image-sequence resources - "IxIF". We want to inherit all that we can from IIIF - the JSON-LD, the Open Annotation model, the manifest wrapper and general approach to metadata ("presentation not semantics"). Shared Canvas may be appropriate for some media but not others.
Rendering thumbnails may be the activity that puts the most stress on servers if it is not given some care, both on the server and in the viewer.
We have to remember that IIIF consumers are under no obligation to do what you want them to do. It’s not necessarily your viewer looking at the endpoints. While you can arrange a careful honourable agreement between client and server if it’s your viewer looking at your server, other viewers won’t know the rules – or may choose not to play by them.
An uncaring viewer app takes no notice of any hints, recommended sizes or explicit thumbnails declared in the manifest and goes straight to the first image service it finds for a canvas and requests an image in the size it thinks appropriate for a thumbnail. It doesn’t know whether this is an optimum size for the server to produce, or whether the server has this cached.
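One defence is to declare an explicit, pre-sized (and therefore cacheable) thumbnail on each canvas, so that well-behaved viewers never have to guess a size. A sketch in the style of IIIF Presentation 3 (the example.org URLs and the 200-pixel size are illustrative, not a recommendation):

```json
{
  "id": "https://example.org/iiif/book1/canvas/p1",
  "type": "Canvas",
  "thumbnail": [
    {
      "id": "https://example.org/iiif/book1/page1/full/!200,200/0/default.jpg",
      "type": "Image",
      "format": "image/jpeg",
      "service": [
        {
          "id": "https://example.org/iiif/book1/page1",
          "type": "ImageService3",
          "profile": "level1"
        }
      ]
    }
  ]
}
```

A server can pre-generate and cache exactly this one derivative; an uncaring viewer can still ignore it and hit the image service directly, which is the problem described above.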
This is what our Viewer is doing currently. If generated images are cached then over time the performance of thumbnails for an already
(the answer to this might be completely different for DDS on AWS - e.g., a massive location-resolving key-value store.)
Given a relative file path in a METS file (e.g., a JP2 or ALTO file), the DDS needs to resolve the S3 location of that file.
The simple way to do this is to load the storage manifest, find the entry corresponding to the path in the METS file, and deduce the S3 URI. But doing that for every request would be very inefficient.
So the DDS caches a StorageMap for each storage manifest, which is this object, serialised:
[Serializable]
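As a sketch of that caching idea (the class and field names below are hypothetical, as is the shape of the storage manifest; the real DDS object is a serialisable .NET type): build a lookup from relative METS file paths to S3 URIs once per storage manifest, then resolve each path in constant time.

```python
# Hypothetical sketch of a StorageMap: a per-storage-manifest cache that
# resolves relative file paths from METS (e.g. JP2 or ALTO files) to S3
# URIs. Names and the manifest shape are illustrative, not the real DDS model.

class StorageMap:
    def __init__(self, identifier: str, entries: dict[str, str]):
        # entries maps a relative path (e.g. "alto/b12345678_0001.xml")
        # to its full S3 URI.
        self.identifier = identifier
        self.entries = entries

    @classmethod
    def from_storage_manifest(cls, manifest: dict) -> "StorageMap":
        # Walk the storage manifest once and index every file by its
        # relative path, so later lookups avoid re-reading the manifest.
        entries = {
            f["path"]: f"s3://{manifest['bucket']}/{manifest['prefix']}/{f['path']}"
            for f in manifest["files"]
        }
        return cls(manifest["identifier"], entries)

    def resolve(self, relative_path: str) -> str:
        # One dictionary lookup instead of a scan of the storage manifest.
        return self.entries[relative_path]


# Usage with a made-up storage manifest:
manifest = {
    "identifier": "b12345678",
    "bucket": "wellcome-storage",
    "prefix": "digitised/b12345678/v1/data",
    "files": [
        {"path": "alto/b12345678_0001.xml"},
        {"path": "objects/b12345678_0001.jp2"},
    ],
}
storage_map = StorageMap.from_storage_manifest(manifest)
print(storage_map.resolve("objects/b12345678_0001.jp2"))
# → s3://wellcome-storage/digitised/b12345678/v1/data/objects/b12345678_0001.jp2
```

The cached map trades a little memory per manifest for avoiding repeated parsing, which matters when thumbnail and ALTO requests arrive in bursts.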
```
title Sorty, Presley and Omeka
participant User
participant Sorty
participant Presley
participant Omeka
note over User, Sorty
Sorty instance is configured with a module for selecting
appropriate source manifests for the project. In the IDA case,
a module that pulls in the list of prepared reels.
end note
```