I always end up getting this wrong; the steps below worked for Linux Mint 19.3 (based on Ubuntu 18.04).
Build/installation order is important: JPEG 2000 support in ImageMagick only works if OpenJPEG is
found at build time, so we have to start with that. (Note that an 'openjpeg-dev' Debian package
exists for OpenJPEG. As I'm not entirely sure this is the most up-to-date version, and JPEG 2000
support is important to me, I'm compiling the library from source here. Otherwise, everything under
the 'OpenJPEG' heading could probably be substituted with the one-liner
sudo apt-get install openjpeg-dev.)
<?xml version="1.0" ?>
<isolyzer>
  <toolInfo>
    <toolName>cli.py</toolName>
    <toolVersion>1.4.0a1</toolVersion>
  </toolInfo>
  <image>
    <fileInfo>
      <fileName>BOOKSHELF.iso01.iso</fileName>
      <filePath>/home/johan/kb/iso-identification/HSF/BOOKSHELF.iso01.iso</filePath>
import sys
import os
import glob
import platform
import codecs
import argparse

# Create parser
parser = argparse.ArgumentParser(
    description="Test CLI input with wildcards, multiple platforms")
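The fragment above only sets up the parser. A sketch of the wildcard handling such a test presumably exercises is shown below; the argument name `filesIn` and the sample invocation are assumptions, not part of the original script. The point is that POSIX shells expand `*` before the script ever runs, while `cmd.exe` on Windows passes the pattern through literally, so the script must call `glob` itself there:

```python
import glob
import platform
import argparse

parser = argparse.ArgumentParser(
    description="Test CLI input with wildcards, multiple platforms")
# Hypothetical positional argument accepting one or more names/patterns
parser.add_argument("filesIn", nargs="+", help="input file(s), wildcards allowed")
args = parser.parse_args(["*.py"])  # example invocation

files = []
for pattern in args.filesIn:
    if platform.system() == "Windows":
        # cmd.exe does not expand wildcards, so do it here
        files.extend(glob.glob(pattern))
    else:
        # A POSIX shell has already expanded the pattern into file names
        files.append(pattern)
```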
#!/bin/sh
# Analyze file with jpylyzer, display result with
# default text editor

# I/O stuff
fileIn=$1
fileOut=/tmp/"$fileIn".xml

# Viewer - default text editor (should work across most Linux flavors)
viewer=xdg-open

# jpylyzer writes its XML report to stdout
jpylyzer "$fileIn" > "$fileOut"
"$viewer" "$fileOut"
#! /usr/bin/env python3
#
"""
Save web pages to Wayback Machine. Argument urlsIn can either be
a text file with URLs (each line contains one URL), or a single
URL. In the first (input file) case it will simply save each URL.
In the latter case (input URL) it will extract all links from the URL, and
save those as well as the root URL (useful for saving a page with all
of its direct references). The optional --extensions argument can be used
to limit this to one or more specific file extensions. E.g. the following
cdparanoia -B -L
cdparanoia III release 10.2 (September 11, 2008)

Ripping from sector       0 (track  1 [0:00.00])
          to sector  152169 (track 17 [2:40.35])

outputting to track01.cdda.wav
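As a side note, the sector numbers in this output convert directly to playing time: audio CDs use a fixed rate of 75 CDDA sectors (frames) per second. A quick sanity check on the end sector above:

```python
CDDA_SECTORS_PER_SECOND = 75  # fixed rate for Red Book audio

def sectors_to_msf(sectors):
    """Convert a CDDA sector count to (minutes, seconds, frames)."""
    minutes, rem = divmod(sectors, CDDA_SECTORS_PER_SECOND * 60)
    seconds, frames = divmod(rem, CDDA_SECTORS_PER_SECOND)
    return minutes, seconds, frames

# End sector reported by cdparanoia above: total disc length just under 34 minutes
print(sectors_to_msf(152169))  # → (33, 48, 69)
```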
An EPUB is just a ZIP container, but using a ZIP tool directly on a directory with content documents won't usually result in a valid EPUB. This is because the standard requires that:
- The mimetype resource must appear as the first file in the container
- The mimetype resource must be uncompressed
So to meet these requirements we must ZIP the files in a special way. This gist describes how to do this with InfoZip (which is the default ZIP tool on most Linux systems).
Let's suppose all content files are in a directory called /home/johan/epubPolicyTests/content/epub20_minimal/.
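The gist itself uses InfoZip for this, but the same two requirements can be checked or reproduced with Python's standard-library zipfile module. The sketch below is a cross-check, not part of the InfoZip recipe; the throwaway directory and file names stand in for the real content directory:

```python
import tempfile
import zipfile
from pathlib import Path

def write_epub(content_dir, epub_out):
    """ZIP content_dir into epub_out: 'mimetype' first, stored uncompressed."""
    content_dir = Path(content_dir)
    with zipfile.ZipFile(epub_out, "w") as zf:
        # Requirements from the text: first entry in the container, no compression
        zf.write(content_dir / "mimetype", "mimetype",
                 compress_type=zipfile.ZIP_STORED)
        # Everything else may be compressed normally
        for f in sorted(content_dir.rglob("*")):
            rel = f.relative_to(content_dir)
            if f.is_file() and str(rel) != "mimetype":
                zf.write(f, str(rel), compress_type=zipfile.ZIP_DEFLATED)

# Throwaway stand-in for /home/johan/epubPolicyTests/content/epub20_minimal/
content = Path(tempfile.mkdtemp())
(content / "META-INF").mkdir()
(content / "mimetype").write_text("application/epub+zip")  # no trailing newline
(content / "META-INF" / "container.xml").write_text("<container/>")  # placeholder
epub_path = Path(tempfile.mkdtemp()) / "epub20_minimal.epub"
write_epub(content, epub_path)
```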
#! /usr/bin/env python3

from warcio.capture_http import capture_http
import requests

def main():
    # Existing warc.gz file (created with wget, then compressed using warcio's
    # 'recompress' command)
    with capture_http("ziklies.home.xs4all.nl.warc.gz"):
        for indexOnder in range(1, 8):
            for indexMidden in range(1, 8):
Use omSipCreator for the tests on copies of batches, and NOT on the original storage locations! This is mainly because omSipCreator in 'prune' mode (its cleanup function) modifies batches and deletes data in the process!

In the examples below I'll assume Python is installed under the following folder:

C:\Python37\
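To make the "copies only" rule concrete, a minimal sketch of preparing a working copy before running omSipCreator; the batch layout and paths here are hypothetical stand-ins (a real source_batch would be the original storage location, work_batch a scratch directory):

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical stand-in for an original batch location
root = Path(tempfile.mkdtemp())
source_batch = root / "batch0001"
source_batch.mkdir()
(source_batch / "file0001.wav").write_bytes(b"")  # placeholder batch content

# Run omSipCreator (and certainly its 'prune' mode) against the copy only,
# so the original batch can never be modified
work_batch = root / "work" / "batch0001"
shutil.copytree(source_batch, work_batch)
```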