@rwb27
Last active May 30, 2024 22:52
Thing Descriptions for the OpenFlexure Microscope (v3 pre-release) (all TDs in a single JSON file)
{
"/camera/": {
"title": "StreamingPiCamera2",
"properties": {
"analogue_gain": {
"title": "analogue_gain",
"type": "number",
"forms": [
{
"href": "/camera/analogue_gain",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"camera_configuration": {
"description": "The \"configuration\" sets the resolution and format of the camera's streams.\nTogether with the \"tuning\" it determines how the sensor is configured and\nhow the data is processed.\n\nNote that the configuration may be modified when taking still images, and\nthis property refers to whatever configuration is currently in force -\nusually the one used for the preview stream.",
"title": "The \"configuration\" dictionary of the picamera2 object",
"type": "object",
"forms": [
{
"href": "/camera/camera_configuration",
"op": [
"readproperty"
]
}
]
},
"capture_metadata": {
"description": "Return the metadata from the camera",
"title": "Return the metadata from the camera",
"type": "object",
"forms": [
{
"href": "/camera/capture_metadata",
"op": [
"readproperty"
]
}
]
},
"colour_correction_matrix": {
"title": "colour_correction_matrix",
"type": "array",
"items": [
{
"type": "number"
},
{
"type": "number"
},
{
"type": "number"
},
{
"type": "number"
},
{
"type": "number"
},
{
"type": "number"
},
{
"type": "number"
},
{
"type": "number"
},
{
"type": "number"
}
],
"maxItems": 9,
"minItems": 9,
"forms": [
{
"href": "/camera/colour_correction_matrix",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"colour_gains": {
"title": "colour_gains",
"type": "array",
"items": [
{
"type": "number"
},
{
"type": "number"
}
],
"maxItems": 2,
"minItems": 2,
"forms": [
{
"href": "/camera/colour_gains",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"exposure": {
"description": "An alias for `exposure_time` to fit the micromanager API",
"title": "An alias for `exposure_time` to fit the micromanager API",
"type": "number",
"forms": [
{
"href": "/camera/exposure",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"exposure_time": {
"description": "The exposure time in microseconds",
"title": "exposure_time",
"type": "integer",
"forms": [
{
"href": "/camera/exposure_time",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"lens_shading_is_static": {
"description": "This property is true if the lens shading correction has been set to use\na static table (i.e. the number of automatic correction iterations is zero).\nThe default LST is not static, but all the calibration controls will set it\nto be static (except \"reset\")",
"title": "Whether the lens shading is static",
"type": "boolean",
"forms": [
{
"href": "/camera/lens_shading_is_static",
"op": [
"readproperty"
]
}
]
},
"lens_shading_tables": {
"description": "This returns the current lens shading correction, as three 2D lists\neach with dimensions 16x12. This assumes that we are using a static\nlens shading table - if adaptive control is enabled, or if there\nare multiple LSTs in use for different colour temperatures,\nwe return a null value to avoid confusion.",
"title": "The current lens shading (i.e. flat-field correction)",
"oneOf": [
{
"title": "LensShading",
"type": "object",
"properties": {
"luminance": {
"title": "Luminance",
"type": "array",
"items": {
"type": "array",
"items": {
"type": "number"
}
}
},
"Cr": {
"title": "Cr",
"type": "array",
"items": {
"type": "array",
"items": {
"type": "number"
}
}
},
"Cb": {
"title": "Cb",
"type": "array",
"items": {
"type": "array",
"items": {
"type": "number"
}
}
}
},
"required": [
"luminance",
"Cr",
"Cb"
]
},
{
"type": "null"
}
],
"forms": [
{
"href": "/camera/lens_shading_tables",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"mjpeg_bitrate": {
"description": "Bitrate for MJPEG stream (None for default)",
"title": "mjpeg_bitrate",
"oneOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"forms": [
{
"href": "/camera/mjpeg_bitrate",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"sensor_modes": {
"description": "All the available modes the current sensor supports",
"title": "All the available modes the current sensor supports",
"type": "array",
"items": {
"title": "SensorMode",
"type": "object",
"properties": {
"unpacked": {
"title": "Unpacked",
"type": "string"
},
"bit_depth": {
"title": "Bit Depth",
"type": "integer"
},
"size": {
"title": "Size",
"type": "array",
"items": [
{
"type": "integer"
},
{
"type": "integer"
}
],
"maxItems": 2,
"minItems": 2
},
"fps": {
"title": "Fps",
"type": "number"
},
"crop_limits": {
"title": "Crop Limits",
"type": "array",
"items": [
{
"type": "integer"
},
{
"type": "integer"
},
{
"type": "integer"
},
{
"type": "integer"
}
],
"maxItems": 4,
"minItems": 4
},
"exposure_limits": {
"title": "Exposure Limits",
"type": "array",
"items": [
{
"oneOf": [
{
"type": "integer"
},
{
"type": "null"
}
]
},
{
"oneOf": [
{
"type": "integer"
},
{
"type": "null"
}
]
},
{
"oneOf": [
{
"type": "integer"
},
{
"type": "null"
}
]
}
],
"maxItems": 3,
"minItems": 3
},
"format": {
"title": "Format",
"type": "string"
}
},
"required": [
"unpacked",
"bit_depth",
"size",
"fps",
"crop_limits",
"exposure_limits",
"format"
]
},
"forms": [
{
"href": "/camera/sensor_modes",
"op": [
"readproperty"
]
}
]
},
"sensor_resolution": {
"description": "The native resolution of the camera's sensor",
"title": "The native resolution of the camera's sensor",
"type": "array",
"items": [
{
"type": "integer"
},
{
"type": "integer"
}
],
"maxItems": 2,
"minItems": 2,
"forms": [
{
"href": "/camera/sensor_resolution",
"op": [
"readproperty"
]
}
]
},
"stream_active": {
"description": "Whether the MJPEG stream is active",
"title": "stream_active",
"type": "boolean",
"forms": [
{
"href": "/camera/stream_active",
"op": [
"readproperty"
]
}
]
},
"stream_resolution": {
"description": "Resolution to use for the MJPEG stream",
"title": "stream_resolution",
"type": "array",
"items": [
{
"type": "integer"
},
{
"type": "integer"
}
],
"maxItems": 2,
"minItems": 2,
"forms": [
{
"href": "/camera/stream_resolution",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"tuning": {
"title": "tuning",
"oneOf": [
{
"type": "object"
},
{
"type": "null"
}
],
"forms": [
{
"href": "/camera/tuning",
"op": [
"readproperty"
]
}
]
}
},
"actions": {
"auto_expose_from_minimum": {
"description": "Starting from the minimum exposure, we gradually increase exposure until\nwe hit the specified white level. We use a percentile rather than the\nmaximum, in order to be robust to a small number of noisy/bright pixels.",
"title": "Adjust exposure to hit the target white level",
"forms": [
{
"href": "/camera/auto_expose_from_minimum",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "auto_expose_from_minimum_input",
"type": "object",
"properties": {
"target_white_level": {
"title": "Target White Level",
"default": 700,
"type": "integer"
},
"percentile": {
"title": "Percentile",
"default": 99.9,
"type": "number"
}
}
},
"output": {
"title": "auto_expose_from_minimum_output"
}
},
"calibrate_lens_shading": {
"description": "This method requires an empty (i.e. bright) field of view. It will take\na raw image and effectively divide every subsequent image by the current\none. This uses the camera's \"tuning\" file to correct the preview and\nthe processed images. It should not affect raw images.",
"title": "Take an image and use it for flat-field correction.",
"forms": [
{
"href": "/camera/calibrate_lens_shading",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "calibrate_lens_shading_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "calibrate_lens_shading_output"
}
},
"calibrate_white_balance": {
"description": "This calibration requires a neutral image, such that the 99th centile \nof each colour channel should correspond to white. We calculate the\ncentiles and use this to set the colour gains. This is done on the raw\nimage with the lens shading correction applied, which should mean\nthat the image is uniform, rather than weighted towards the centre.\n\nIf `method` is `\"centre\"`, we will correct the mean of the central 10%\nof the image.",
"title": "Correct the white balance of the image",
"forms": [
{
"href": "/camera/calibrate_white_balance",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "calibrate_white_balance_input",
"type": "object",
"properties": {
"method": {
"title": "Method",
"enum": [
"percentile",
"centre"
],
"default": "centre",
"type": "string"
},
"luminance_power": {
"title": "Luminance Power",
"default": 1.0,
"type": "number"
}
}
},
"output": {
"title": "calibrate_white_balance_output"
}
},
"capture_array": {
"description": "This function will produce a nested list containing an uncompressed RGB image.\nIt's likely to be highly inefficient - raw and/or uncompressed captures using\nbinary image formats will be added in due course.",
"title": "Acquire one image from the camera and return as an array",
"forms": [
{
"href": "/camera/capture_array",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "capture_array_input",
"type": "object",
"properties": {
"stream_name": {
"title": "Stream Name",
"enum": [
"main",
"lores",
"raw"
],
"default": "main",
"type": "string"
}
}
},
"output": {
"title": "capture_array_output",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
}
},
"capture_jpeg": {
"description": "The JPEG will be acquired using `Picamera2.capture_file`. If the\n`resolution` parameter is `main` or `lores`, it will be captured\nfrom the main preview stream, or the low-res preview stream,\nrespectively. This means the camera won't be reconfigured, and\nthe stream will not pause (though it may miss one frame).\n\nIf `full` resolution is requested, we will briefly pause the\nMJPEG stream and reconfigure the camera to capture a full\nresolution image.\n\nNote that this always uses the image processing pipeline - to\nbypass this, you must use a raw capture.",
"title": "Acquire one image from the camera as a JPEG",
"forms": [
{
"href": "/camera/capture_jpeg",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "capture_jpeg_input",
"type": "object",
"properties": {
"resolution": {
"title": "Resolution",
"enum": [
"lores",
"main",
"full"
],
"default": "main",
"type": "string"
}
}
},
"output": {
"title": "capture_jpeg_output",
"type": "object",
"properties": {
"media_type": {
"title": "Media Type",
"const": "image/jpeg",
"default": "image/jpeg"
},
"href": {
"title": "Href",
"type": "string"
},
"rel": {
"title": "Rel",
"const": "output",
"default": "output"
},
"description": {
"title": "Description",
"default": "The output from this action is not serialised to JSON, so it must be retrieved as a file. This link will return the file.",
"type": "string"
}
},
"required": [
"href"
]
}
},
"flat_lens_shading": {
"description": "This method will set a completely flat lens shading table. It is not the\nsame as the default behaviour, which is to use an adaptive lens shading\ntable.",
"title": "Disable flat-field correction",
"forms": [
{
"href": "/camera/flat_lens_shading",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "flat_lens_shading_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "flat_lens_shading_output"
}
},
"flat_lens_shading_chrominance": {
"description": "This method will set the chrominance of the lens shading table to be\nflat, i.e. we'll correct vignetting of intensity, but not any change in\ncolour across the image.",
"title": "Disable flat-field correction",
"forms": [
{
"href": "/camera/flat_lens_shading_chrominance",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "flat_lens_shading_chrominance_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "flat_lens_shading_chrominance_output"
}
},
"full_auto_calibrate": {
"description": "This function will call the other calibration actions in sequence:\n\n* `flat_lens_shading` to disable flat-field\n* `auto_expose_from_minimum`\n* `calibrate_white_balance`\n* `calibrate_lens_shading`",
"title": "Perform a full auto-calibration",
"forms": [
{
"href": "/camera/full_auto_calibrate",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "full_auto_calibrate_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "full_auto_calibrate_output"
}
},
"grab_jpeg": {
"description": "This differs from `capture_jpeg` in that it does not pause the MJPEG\npreview stream. Instead, we simply return the next frame from that\nstream (either \"main\" for the preview stream, or \"lores\" for the low\nresolution preview). No metadata is returned.",
"title": "Acquire one image from the preview stream and return as an array",
"forms": [
{
"href": "/camera/grab_jpeg",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "grab_jpeg_input",
"type": "object",
"properties": {
"stream_name": {
"title": "Stream Name",
"enum": [
"main",
"lores"
],
"default": "main",
"type": "string"
}
}
},
"output": {
"title": "grab_jpeg_output",
"type": "object",
"properties": {
"media_type": {
"title": "Media Type",
"const": "image/jpeg",
"default": "image/jpeg"
},
"href": {
"title": "Href",
"type": "string"
},
"rel": {
"title": "Rel",
"const": "output",
"default": "output"
},
"description": {
"title": "Description",
"default": "The output from this action is not serialised to JSON, so it must be retrieved as a file. This link will return the file.",
"type": "string"
}
},
"required": [
"href"
]
}
},
"grab_jpeg_size": {
"description": "Acquire one image from the preview stream and return its size",
"title": "Acquire one image from the preview stream and return its size",
"forms": [
{
"href": "/camera/grab_jpeg_size",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "grab_jpeg_size_input",
"type": "object",
"properties": {
"stream_name": {
"title": "Stream Name",
"enum": [
"main",
"lores"
],
"default": "main",
"type": "string"
}
}
},
"output": {
"title": "grab_jpeg_size_output",
"type": "integer"
}
},
"reset_lens_shading": {
"description": "This method will restore the default \"adaptive\" lens shading method used\nby the Raspberry Pi camera.",
"title": "Revert to default lens shading settings",
"forms": [
{
"href": "/camera/reset_lens_shading",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "reset_lens_shading_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "reset_lens_shading_output"
}
},
"snap_image": {
"description": "This action cannot run if the camera is in use by a background thread, for\nexample if a preview stream is running.",
"title": "Acquire one image from the camera.",
"forms": [
{
"href": "/camera/snap_image",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "snap_image_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "snap_image_output",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/stage/": {
"title": "SangaboardThing",
"properties": {
"axis_names": {
"description": "The names of the stage's axes, in order.",
"title": "The names of the stage's axes, in order.",
"type": "array",
"items": {
"type": "string"
},
"forms": [
{
"href": "/stage/axis_names",
"op": [
"readproperty"
]
}
]
},
"moving": {
"description": "Whether the stage is in motion",
"title": "moving",
"type": "boolean",
"forms": [
{
"href": "/stage/moving",
"op": [
"readproperty"
]
}
]
},
"position": {
"description": "Current position of the stage",
"title": "position",
"type": "object",
"forms": [
{
"href": "/stage/position",
"op": [
"readproperty"
]
}
]
}
},
"actions": {
"abort_move": {
"description": "Abort a current move",
"title": "Abort a current move",
"forms": [
{
"href": "/stage/abort_move",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "abort_move_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "abort_move_output"
}
},
"flash_led": {
"description": "This is intended to be useful in situations where there are multiple\nSangaboards in use, and it is necessary to identify which one is \nbeing addressed.",
"title": "Flash the LED to identify the board",
"forms": [
{
"href": "/stage/flash_led",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "flash_led_input",
"type": "object",
"properties": {
"number_of_flashes": {
"title": "Number Of Flashes",
"default": 10,
"type": "integer"
},
"dt": {
"title": "Dt",
"default": 0.5,
"type": "number"
},
"led_channel": {
"title": "Led Channel",
"const": "cc",
"default": "cc"
}
}
},
"output": {
"title": "flash_led_output",
"type": "null"
}
},
"move_absolute": {
"description": "Make an absolute move. Keyword arguments should be axis names.",
"title": "Make an absolute move. Keyword arguments should be axis names.",
"forms": [
{
"href": "/stage/move_absolute",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "move_absolute_input",
"type": "object",
"properties": {
"block_cancellation": {
"title": "Block Cancellation",
"default": false,
"type": "boolean"
}
}
},
"output": {
"title": "move_absolute_output"
}
},
"move_relative": {
"description": "Make a relative move. Keyword arguments should be axis names.",
"title": "Make a relative move. Keyword arguments should be axis names.",
"forms": [
{
"href": "/stage/move_relative",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "move_relative_input",
"type": "object",
"properties": {
"block_cancellation": {
"title": "Block Cancellation",
"default": false,
"type": "boolean"
}
}
},
"output": {
"title": "move_relative_output"
}
},
"set_zero_position": {
"description": "This action does not move the stage, but resets the position to zero.\nIt is intended for use after manually or automatically recentring the\nstage.",
"title": "Make the current position zero in all axes",
"forms": [
{
"href": "/stage/set_zero_position",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "set_zero_position_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "set_zero_position_output"
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/auto_recentre_stage/": {
"title": "RecentringThing",
"properties": {},
"actions": {
"recentre": {
"description": "Autofocuses at multiple points around the sample to\nfind the overall maximum (or minimum) height, which\ncorresponds to the centre of the stage. This exploits the\nfact that the OpenFlexure stage moves in an arc, i.e. its\nheight will vary with X and Y. The point where the variation\nof Z with X and Y motion is smallest is the centre of its \nXY travel. This routine moves in X and Y, monitoring the\nZ value of the focal plane, and attempts to find the point\nwhere Z does not vary with X and Y, which is where it stops.\n\nmax_steps: The maximum number of moves in x or y before\naborting due to a poorly positioned stage or hard to focus\nsample\n\nlateral_distance: The xy distance between areas to check.\nBelow 3000 becomes less reliable, as focus shouldn't shift\nmuch between these sites, making the procedure more sensitive\nto noise or a failed autofocus.",
"title": "Recentre the stage, based on the focal plane",
"forms": [
{
"href": "/auto_recentre_stage/recentre",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "recentre_input",
"type": "object",
"properties": {
"max_steps": {
"title": "Max Steps",
"default": 15
},
"lateral_distance": {
"title": "Lateral Distance",
"default": 5000
}
}
},
"output": {
"title": "recentre_output"
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/autofocus/": {
"title": "AutofocusThing",
"properties": {},
"actions": {
"fast_autofocus": {
"description": "This method will will move down by dz/2, sweep up by dz, and then evaluate\nthe position where the image was sharpest. We'll then move back down, and\nfinally up to the sharpest point.",
"title": "Sweep the stage up and down, then move to the sharpest point",
"forms": [
{
"href": "/autofocus/fast_autofocus",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "fast_autofocus_input",
"type": "object",
"properties": {
"dz": {
"title": "Dz",
"default": 2000,
"type": "integer"
},
"start": {
"title": "Start",
"default": "centre",
"type": "string"
}
}
},
"output": {
"title": "fast_autofocus_output",
"type": "object",
"properties": {
"jpeg_times": {
"title": "NestedListOfNumbersModel",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
},
"jpeg_sizes": {
"title": "NestedListOfNumbersModel",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
},
"stage_times": {
"title": "NestedListOfNumbersModel",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
},
"stage_positions": {
"title": "NestedListOfNumbersModel",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
}
},
"required": [
"jpeg_times",
"jpeg_sizes",
"stage_times",
"stage_positions"
]
}
},
"looping_autofocus": {
"description": "This action will run the `fast_autofocus` action until it settles on a point\nin the middle 3/5 of its range. Such logic can be helpful if the microscope\nis close to focus, but not quite within `dz/2`. It will attempt to autofocus\nup to 10 times.",
"title": "Repeatedly autofocus the stage until it looks focused.",
"forms": [
{
"href": "/autofocus/looping_autofocus",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "looping_autofocus_input",
"type": "object",
"properties": {
"dz": {
"title": "Dz",
"default": 2000
},
"start": {
"title": "Start",
"default": "centre"
}
}
},
"output": {
"title": "looping_autofocus_output"
}
},
"move_and_measure": {
"description": "This method will will make a series of relative moves in z, and\nreturn the sharpness (JPEG size) vs time, along with timestamps\nfor the moves. This can be used to calibrate autofocus.\n\nEach move is relative to the last one, i.e. we will finish at\n`sum(dz)` relative to the starting position.\n\nIf `wait` is specified, we will wait for that many seconds\nbetween moves.",
"title": "Make a move (or a series of moves) and monitor sharpness",
"forms": [
{
"href": "/autofocus/move_and_measure",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "move_and_measure_input",
"type": "object",
"properties": {
"dz": {
"title": "Dz",
"type": "array",
"items": {
"type": "integer"
}
},
"wait": {
"title": "Wait",
"default": 0,
"type": "number"
}
},
"required": [
"dz"
]
},
"output": {
"title": "move_and_measure_output",
"type": "object",
"properties": {
"jpeg_times": {
"title": "NestedListOfNumbersModel",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
},
"jpeg_sizes": {
"title": "NestedListOfNumbersModel",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
},
"stage_times": {
"title": "NestedListOfNumbersModel",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
},
"stage_positions": {
"title": "NestedListOfNumbersModel",
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
},
{
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"oneOf": [
{
"type": "integer"
},
{
"type": "number"
}
]
}
}
}
}
}
}
},
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {
"type": "array",
"items": {}
}
}
}
}
}
}
}
]
}
},
"required": [
"jpeg_times",
"jpeg_sizes",
"stage_times",
"stage_positions"
]
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/camera_stage_mapping/": {
"title": "CameraStageMapper",
"properties": {
"image_resolution": {
"description": "The image size used to calibrate the image_to_stage_displacement_matrix",
"title": "The image size used to calibrate the image_to_stage_displacement_matrix",
"oneOf": [
{
"type": "array",
"items": [
{
"type": "number"
},
{
"type": "number"
}
],
"maxItems": 2,
"minItems": 2
},
{
"type": "null"
}
],
"forms": [
{
"href": "/camera_stage_mapping/image_resolution",
"op": [
"readproperty"
]
}
]
},
"image_to_stage_displacement_matrix": {
"description": "Note that this matrix is defined using \"matrix coordinates\", i.e. image coordinates\nmay be (y,x). This is an artifact of the way numpy, opencv, etc. define images. If\nyou are making use of this matrix in your own code, you will need to take care of\nthat conversion.\n\nIt is often helpful to give a concrete example: to make a move in image coordinates\n(`dy`, `dx`), where `dx` is horizontal, i.e. the longer dimension of the image, you\nshould move the stage by:\n```\nstage_disp = np.dot(\n np.array(image_to_stage_displacement_matrix),\n np.array([dy,dx]),\n)\n```",
"title": "A 2x2 matrix that converts displacement in image coordinates to stage coordinates.",
"oneOf": [
{
"type": "array",
"items": {
"type": "array",
"items": {
"type": "number"
}
}
},
{
"type": "null"
}
],
"forms": [
{
"href": "/camera_stage_mapping/image_to_stage_displacement_matrix",
"op": [
"readproperty"
]
}
]
},
"last_calibration": {
"description": "The results of the last calibration that was run\n ",
"title": "The results of the last calibration that was run",
"oneOf": [
{
"type": "object"
},
{
"type": "null"
}
],
"forms": [
{
"href": "/camera_stage_mapping/last_calibration",
"op": [
"readproperty"
]
}
]
},
"thing_state": {
"description": "Summary metadata describing the current state of the Thing",
"title": "Summary metadata describing the current state of the Thing",
"type": "object",
"forms": [
{
"href": "/camera_stage_mapping/thing_state",
"op": [
"readproperty"
]
}
]
}
},
"actions": {
"calibrate_1d": {
"description": "Move a microscope's stage in 1D, and figure out the relationship with the camera",
"title": "Move a microscope's stage in 1D, and figure out the relationship with the camera",
"forms": [
{
"href": "/camera_stage_mapping/calibrate_1d",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "calibrate_1d_input",
"type": "object",
"properties": {
"direction": {
"title": "Direction",
"type": "array",
"items": [
{
"type": "number"
},
{
"type": "number"
},
{
"type": "number"
}
],
"maxItems": 3,
"minItems": 3
}
},
"required": [
"direction"
]
},
"output": {
"title": "calibrate_1d_output",
"type": "object"
}
},
"calibrate_xy": {
"description": "This performs two 1d calibrations in x and y, then combines their results.\n ",
"title": "Move the microscope's stage in X and Y, to calibrate its relationship to the camera",
"forms": [
{
"href": "/camera_stage_mapping/calibrate_xy",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "calibrate_xy_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "calibrate_xy_output",
"type": "object"
}
},
"move_in_image_coordinates": {
"description": "NB x and y here refer to what is usually understood to be the horizontal and\nvertical axes of the image. In many toolkits, \"matrix indices\" are used, which\nswap the order of these coordinates. This includes opencv and PIL. So, don't be\nsurprised if you find it necessary to swap x and y around.\n\nAs a general rule, `x` usually corresponds to the longer dimension of the image,\nand `y` to the shorter one. Checking what shape your chosen toolkit reports for\nan image usually helps resolve any ambiguity.",
"title": "Move by a given number of pixels on the camera",
"forms": [
{
"href": "/camera_stage_mapping/move_in_image_coordinates",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "move_in_image_coordinates_input",
"type": "object",
"properties": {
"x": {
"title": "X",
"type": "number"
},
"y": {
"title": "Y",
"type": "number"
}
},
"required": [
"x",
"y"
]
},
"output": {
"title": "move_in_image_coordinates_output"
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/system_control/": {
"title": "SystemControlThing",
"properties": {
"is_raspberrypi": {
"description": "Checks if we are running on a Raspberry Pi.",
"title": "Checks if we are running on a Raspberry Pi.",
"type": "boolean",
"forms": [
{
"href": "/system_control/is_raspberrypi",
"op": [
"readproperty"
]
}
]
}
},
"actions": {
"reboot": {
"description": "Attempt to reboot the device",
"title": "Attempt to reboot the device",
"forms": [
{
"href": "/system_control/reboot",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "reboot_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "reboot_output",
"type": "object",
"properties": {
"output": {
"title": "Output",
"type": "string"
},
"error": {
"title": "Error",
"type": "string"
}
},
"required": [
"output",
"error"
]
}
},
"shutdown": {
"description": "Attempt to shutdown the device",
"title": "Attempt to shutdown the device",
"forms": [
{
"href": "/system_control/shutdown",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "shutdown_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "shutdown_output",
"type": "object",
"properties": {
"output": {
"title": "Output",
"type": "string"
},
"error": {
"title": "Error",
"type": "string"
}
},
"required": [
"output",
"error"
]
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/settings/": {
"title": "SettingsManager",
"properties": {
"external_metadata": {
"description": "External metadata stored in the server's settings",
"title": "External metadata stored in the server's settings",
"type": "object",
"forms": [
{
"href": "/settings/external_metadata",
"op": [
"readproperty"
]
}
]
},
"external_metadata_in_state": {
"description": "A list of strings that are included in the \"state\" metadata",
"title": "A list of strings that are included in the \"state\" metadata",
"type": "array",
"items": {
"type": "string"
},
"forms": [
{
"href": "/settings/external_metadata_in_state",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"hostname": {
"description": "The hostname of the microscope, as reported by its operating system.",
"title": "The hostname of the microscope, as reported by its operating system.",
"type": "string",
"forms": [
{
"href": "/settings/hostname",
"op": [
"readproperty"
]
}
]
},
"microscope_id": {
"description": "A unique identifier for this microscope",
"title": "A unique identifier for this microscope",
"format": "uuid",
"type": "string",
"forms": [
{
"href": "/settings/microscope_id",
"op": [
"readproperty"
]
}
]
}
},
"actions": {
"delete_external_metadata": {
"description": "The key may contain forward slashes, which are understood to separate\nlevels of the dictionary - i.e. `'a/c'` will remove the `c` key from a\ndictionary that looks like: `{'a': {'c': 1}, 'b': {'d': 2}}`",
"title": "Delete a key from the stored metadata.",
"forms": [
{
"href": "/settings/delete_external_metadata",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "delete_external_metadata_input",
"type": "object",
"properties": {
"key": {
"title": "Key",
"type": "string"
}
},
"required": [
"key"
]
},
"output": {
"title": "delete_external_metadata_output",
"type": "null"
}
},
"get_things_state": {
"description": "Metadata summarising the current state of all Things in the server",
"title": "Metadata summarising the current state of all Things in the server",
"forms": [
{
"href": "/settings/get_things_state",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "get_things_state_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "get_things_state_output",
"type": "object"
}
},
"save_all_thing_settings": {
"description": "Normally, each Thing has a `thing_settings` attribute that can be\nused to save settings. That dictionary is saved to a JSON file\nwhen the server shuts down cleanly.\n\nThis action causes all the `thing_settings` dictionaries to be\nsaved to files immediately, meaning that settings will not be\nlost in the event of a server crash, power outage, etc.",
"title": "Ensure all the Things sync their settings to disk.",
"forms": [
{
"href": "/settings/save_all_thing_settings",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "save_all_thing_settings_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "save_all_thing_settings_output",
"type": "null"
}
},
"update_external_metadata": {
"description": "The data supplied will be merged into the existing dictionary\nrecursively, i.e. if a key exists, it will be added to rather than\nreplaced.\n\nIf a key is supplied, we will treat `data` as being relative to that\nkey, i.e. calling this action with `data={\"a\":1 }, key=\"foo/bar\"` is\nequivalent to calling it with `data={\"foo\": {\"bar\": {\"a\": 1}}}`.",
"title": "Add or replace keys in the external metadata.",
"forms": [
{
"href": "/settings/update_external_metadata",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "update_external_metadata_input",
"type": "object",
"properties": {
"data": {
"title": "Data",
"type": "object"
},
"key": {
"title": "Key",
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
}
},
"required": [
"data"
]
},
"output": {
"title": "update_external_metadata_output",
"type": "null"
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/smart_scan/": {
"title": "SmartScanThing",
"properties": {
"autofocus_dz": {
"description": "The z distance to perform an autofocus",
"title": "The z distance to perform an autofocus",
"type": "integer",
"forms": [
{
"href": "/smart_scan/autofocus_dz",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"latest_preview_stitch_time": {
"description": "This will return `null` if there is no preview image to return.\n ",
"title": "The modification time of the latest preview image",
"oneOf": [
{
"format": "date-time",
"type": "string"
},
{
"type": "null"
}
],
"forms": [
{
"href": "/smart_scan/latest_preview_stitch_time",
"op": [
"readproperty"
]
}
]
},
"latest_scan_name": {
"description": "The name of the last scan to be started.",
"title": "The name of the last scan to be started.",
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"forms": [
{
"href": "/smart_scan/latest_scan_name",
"op": [
"readproperty"
]
}
]
},
"max_range": {
"description": "The maximum distance from the centre of the scan before we break",
"title": "The maximum distance from the centre of the scan before we break",
"type": "integer",
"forms": [
{
"href": "/smart_scan/max_range",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"overlap": {
"description": "The z distance to perform an autofocus",
"title": "The z distance to perform an autofocus",
"type": "number",
"forms": [
{
"href": "/smart_scan/overlap",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"scans": {
"description": "Each scan has a name (which can be used to access it), along with\nits modified and created times (according to the filesystem) and\nthe number of items in the `images` folder. Note that the number\nof images reported may be confused if non-image files are present\nin the `images` folder.",
"title": "All the available scans",
"type": "array",
"items": {
"description": "\"Summary information about a scan folder",
"title": "ScanInfo",
"type": "object",
"properties": {
"name": {
"title": "Name",
"type": "string"
},
"created": {
"title": "Created",
"format": "date-time",
"type": "string"
},
"modified": {
"title": "Modified",
"format": "date-time",
"type": "string"
},
"number_of_images": {
"title": "Number Of Images",
"type": "integer"
}
},
"required": [
"name",
"created",
"modified",
"number_of_images"
]
},
"forms": [
{
"href": "/smart_scan/scans",
"op": [
"readproperty"
]
}
]
},
"skip_background": {
"description": "This uses the settings from the `background_detect` Thing.\n ",
"title": "Whether to detect and skip empty fields of view",
"type": "boolean",
"forms": [
{
"href": "/smart_scan/skip_background",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"stitch_automatically": {
"description": "Should we attempt to stitch scans as we go?",
"title": "Should we attempt to stitch scans as we go?",
"type": "boolean",
"forms": [
{
"href": "/smart_scan/stitch_automatically",
"op": [
"readproperty",
"writeproperty"
]
}
]
}
},
"actions": {
"create_zip_of_scan": {
"description": "Generate a zip file that can be downloaded, with all the scan files in it.",
"title": "Generate a zip file that can be downloaded, with all the scan files in it.",
"forms": [
{
"href": "/smart_scan/create_zip_of_scan",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "create_zip_of_scan_input",
"type": "object",
"properties": {
"scan_name": {
"title": "Scan Name",
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"download_zip": {
"title": "Download Zip",
"default": true
}
}
},
"output": {
"title": "create_zip_of_scan_output",
"type": "object",
"properties": {
"media_type": {
"title": "Media Type",
"const": "application/zip",
"default": "application/zip"
},
"href": {
"title": "Href",
"type": "string"
},
"rel": {
"title": "Rel",
"const": "output",
"default": "output"
},
"description": {
"title": "Description",
"default": "The output from this action is not serialised to JSON, so it must be retrieved as a file. This link will return the file.",
"type": "string"
}
},
"required": [
"href"
]
}
},
"get_files_in_zip": {
"description": "List the relative paths of all files and folders in the zip folder specified",
"title": "List the relative paths of all files and folders in the zip folder specified",
"forms": [
{
"href": "/smart_scan/get_files_in_zip",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "get_files_in_zip_input",
"type": "object",
"properties": {
"zip_path": {
"title": "Zip Path"
}
},
"required": [
"zip_path"
]
},
"output": {
"title": "get_files_in_zip_output"
}
},
"sample_scan": {
"description": "The stage will move in a pattern that grows outwards from the starting point,\nstopping once it is surrounded by \"background\" (as detected by the \nbackground_detect Thing).\n\nInput:\n\n* `overlap` is the fraction by which images should overlap, i.e. \n `0.3` means we will move by 70% of the field of view each time.",
"title": "Move the stage to cover an area, taking images that can be tiled together.",
"forms": [
{
"href": "/smart_scan/sample_scan",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "sample_scan_input",
"type": "object",
"properties": {
"scan_name": {
"title": "Scan Name",
"default": "",
"type": "string"
}
}
},
"output": {
"title": "sample_scan_output"
}
},
"stitch_scan": {
"description": "Generate a stitched image based on stage position metadata",
"title": "Generate a stitched image based on stage position metadata",
"forms": [
{
"href": "/smart_scan/stitch_scan",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "stitch_scan_input",
"type": "object",
"properties": {
"scan_name": {
"title": "Scan Name",
"oneOf": [
{
"type": "string"
},
{
"type": "null"
}
]
},
"overlap": {
"title": "Overlap",
"default": 0.0,
"type": "number"
}
}
},
"output": {
"title": "stitch_scan_output",
"type": "null"
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/background_detect/": {
"title": "BackgroundDetectThing",
"properties": {
"background_distributions": {
"description": "The statistics of the background image",
"title": "The statistics of the background image",
"oneOf": [
{
"title": "ChannelDistributions",
"type": "object",
"properties": {
"means": {
"title": "Means",
"type": "array",
"items": {
"type": "number"
}
},
"standard_deviations": {
"title": "Standard Deviations",
"type": "array",
"items": {
"type": "number"
}
},
"colorspace": {
"title": "Colorspace",
"default": "LUV",
"type": "string"
}
},
"required": [
"means",
"standard_deviations"
]
},
{
"type": "null"
}
],
"forms": [
{
"href": "/background_detect/background_distributions",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"fraction": {
"description": "How much of the image needs to be not background to label as sample",
"title": "How much of the image needs to be not background to label as sample",
"type": "number",
"forms": [
{
"href": "/background_detect/fraction",
"op": [
"readproperty",
"writeproperty"
]
}
]
},
"tolerance": {
"description": "How many standard deviations to allow for the background",
"title": "How many standard deviations to allow for the background",
"type": "number",
"forms": [
{
"href": "/background_detect/tolerance",
"op": [
"readproperty",
"writeproperty"
]
}
]
}
},
"actions": {
"background_fraction": {
"description": "This action will acquire a new image from the preview stream, then\nevaluate whether it is foreground or background, by comparing it\ntoo the saved statistics. This is done on a per-pixel basis, and\nthe returned value (between 0 and 100) is the fraction of the image\nthat is background.",
"title": "Determine what fraction of the current image is background",
"forms": [
{
"href": "/background_detect/background_fraction",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "background_fraction_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "background_fraction_output",
"type": "number"
}
},
"image_is_sample": {
"description": "Label the current image as either background or sample",
"title": "Label the current image as either background or sample",
"forms": [
{
"href": "/background_detect/image_is_sample",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "image_is_sample_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "image_is_sample_output",
"type": "boolean"
}
},
"set_background": {
"description": "This should be run when the microscope is looking at an empty region,\nand will calculate the mean and standard deviation of the pixel values\nin the LUV colourspace. These values will then be used to compare\nfuture images to the distribution, to determine if each pixel is\nforeground or background.",
"title": "Grab an image, and use its statistics to set the background",
"forms": [
{
"href": "/background_detect/set_background",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "set_background_input",
"oneOf": [
{
"title": "StrictEmptyObject",
"type": "object",
"properties": {}
},
{
"type": "null"
}
]
},
"output": {
"title": "set_background_output"
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
},
"/api_test/": {
"title": "APITestThing",
"properties": {
"counter": {
"title": "counter",
"type": "integer",
"forms": [
{
"href": "/api_test/counter",
"op": [
"readproperty"
]
}
]
}
},
"actions": {
"count_slowly": {
"title": "count_slowly",
"forms": [
{
"href": "/api_test/count_slowly",
"op": [
"invokeaction"
]
}
],
"input": {
"title": "count_slowly_input",
"type": "object",
"properties": {
"n": {
"title": "N",
"default": 100
},
"dt": {
"title": "Dt",
"default": 0.1
}
}
},
"output": {
"title": "count_slowly_output",
"type": "null"
}
}
},
"base": "http://microscope1.local:5000/",
"securityDefinitions": {
"no_security": {
"description": "No security",
"scheme": "nosec"
}
},
"security": "no_security",
"@context": "https://www.w3.org/2022/wot/td/v1.1"
}
}
@jaller94

What's the meaning of the relatively complex array types `jpeg_times` and `stage_times`?

@jaller94

`calibrate_lens_shading` has an input which accepts `null` or `{}` (an empty object).
Has this been done to satisfy different Web of Things clients which only support one of the input types?

@rwb27
Author

rwb27 commented May 30, 2024

What's the meaning of the relatively complex array types `jpeg_times` and `stage_times`?

Sorry for the long delay - I am not good at keeping up with GitHub notifications...

Those arrays are simply typed as `np.ndarray`, i.e. a multidimensional array. Python's type system does not (by default) distinguish between a 1d numpy array of numbers (which is what these are) and a many-dimensional array.

I have currently rendered these in JSON as either a number, or a list of numbers, or a list of lists of numbers, etc. - thus allowing a many-dimensional array. It's ugly and I'd like to get rid of it, but it works. My preferred solution would be to implement something that allows Python type hints to specify the dimensionality, and then generate the JSON type accordingly.
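For illustration, a minimal sketch of that preferred approach (this is hypothetical, not the server's actual code): the dimensionality could be carried in an `Annotated` type hint, and the schema generator could then emit a fixed-depth JSON schema instead of the "number, or list, or list of lists, ..." union above.

```python
# Hypothetical sketch: record an array's dimensionality in an Annotated
# type hint, then generate a JSON schema with exactly that nesting depth.
from typing import Annotated


class NDim:
    """Marker recording the number of dimensions of an array annotation."""

    def __init__(self, ndim: int):
        self.ndim = ndim


# e.g. a 1d array of timestamps (illustrative name, not from the server)
JpegTimes = Annotated[list, NDim(1)]


def ndarray_schema(ndim: int) -> dict:
    """Build a JSON schema for an ndim-dimensional array of numbers."""
    schema: dict = {"type": "number"}
    for _ in range(ndim):
        schema = {"type": "array", "items": schema}
    return schema
```

With this, `ndarray_schema(1)` would produce `{"type": "array", "items": {"type": "number"}}` rather than the eight-branch `oneOf` in the TD.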

@rwb27
Author

rwb27 commented May 30, 2024

`calibrate_lens_shading` has an input which accepts `null` or `{}` (an empty object). Has this been done to satisfy different Web of Things clients which only support one of the input types?

Again, this is partly to make it more accepting of clients, and largely because it was hard to differentiate between `null` and `{}`. I'm generating the input schema from the arguments of a function: it therefore wasn't clear to me whether a function with no arguments should accept `{}` or `null`. Similarly, if there are only optional arguments, it felt right to accept `null` rather than `{}`. So, it's one part making client code easy, one part me not being totally sure what the "right" thing to do was.
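To make that concrete, here is an illustrative sketch (assumed, not the server's implementation) of deriving an input schema from a function signature with `inspect`: a function with no required parameters gets a schema accepting either `{}` or `null`, matching the `StrictEmptyObject`-or-null pattern in the TDs above.

```python
# Illustrative sketch: derive a minimal JSON input schema from a function
# signature. Actions with no required arguments accept {} or null, so
# clients may POST an empty object or an empty body interchangeably.
import inspect


def input_schema(func) -> dict:
    params = inspect.signature(func).parameters
    properties = {name: {"title": name.title()} for name in params}
    required = [
        name
        for name, p in params.items()
        if p.default is inspect.Parameter.empty
    ]
    if not required:
        # No required arguments: accept an empty object or null.
        return {
            "oneOf": [
                {"type": "object", "properties": properties},
                {"type": "null"},
            ]
        }
    return {"type": "object", "properties": properties, "required": required}


def calibrate_xy():  # hypothetical no-argument action
    pass
```

Here `input_schema(calibrate_xy)` yields the `oneOf` of an empty object and `null`, while a function like `def move(dz, wait=0)` would yield an object schema requiring only `dz`.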
