Configuration

Configuration files for gallery-dl use a JSON-based file format. For a (more or less) complete example with options set to their default values, see gallery-dl.conf. For a configuration file example with more involved settings and options, see gallery-dl-example.conf.

This file lists all available configuration options and their descriptions.

Contents

  1. Extractor Options
  2. Extractor-specific Options
  3. Downloader Options
  4. Output Options
  5. Postprocessor Options
  6. Miscellaneous Options
  7. API Tokens & IDs

Extractor Options

Each extractor is identified by its category and subcategory. The category is the lowercase site name without any spaces or special characters, which is usually just the module name (pixiv, danbooru, ...). The subcategory is a lowercase word describing the general functionality of that extractor (user, favorite, manga, ...).

Each one of the following options can be specified on multiple levels of the configuration tree:


-   Base level: extractor.<option-name>
-   Category level: extractor.<category>.<option-name>
-   Subcategory level: extractor.<category>.<subcategory>.<option-name>


A value in a "deeper" level hereby overrides a value of the same name on a lower level. Setting the extractor.pixiv.filename value, for example, lets you specify a general filename pattern for all the different pixiv extractors. Using the extractor.pixiv.user.filename value lets you override this general pattern specifically for PixivUserExtractor instances.
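As a quick illustration of this layering, here is a minimal configuration sketch; the format strings are placeholders, and the exact replacement keys available for pixiv should be checked with `-K`:

``` json
{
    "extractor": {
        "pixiv": {
            "filename": "{id}_{title}.{extension}",
            "user": {
                "filename": "{id}_p{num}.{extension}"
            }
        }
    }
}
```

With settings like these, every pixiv extractor would use the first pattern, while PixivUserExtractor runs would use the second.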

The category and subcategory of all extractors are included in the output of gallery-dl --list-extractors. For a specific URL these values can also be determined by using the -K/--list-keywords command-line option (see the example below).

extractor.*.filename

Type

: - string
  - object (condition -> format string)

Example

: "{manga}_c{chapter}_{page:>03}.{extension}"

``` json
{
    "extension == 'mp4'": "{id}_video.{extension}",
    "'nature' in title" : "{id}_{title}.{extension}",
    ""                  : "{id}_default.{extension}"
}
```

Description

: A format string to build filenames for downloaded files with.

If this is an `object`, it must contain Python expressions mapping
to the filename format strings to use. These expressions are
evaluated in the order in which they are specified on Python 3.6+, and in an
undetermined order on Python 3.4 and 3.5.

The available replacement keys depend on the extractor used. A list
of keys for a specific one can be acquired by calling *gallery-dl*
with the `-K`/`--list-keywords` command-line option. For example:

``` 
$ gallery-dl -K http://seiga.nicovideo.jp/seiga/im5977527
Keywords for directory names:
-----------------------------
category
  seiga
subcategory
  image

Keywords for filenames:
-----------------------
category
  seiga
extension
  None
image-id
  5977527
subcategory
  image
```

Note: Even if the value of the `extension` key is missing or `None`,
it will be filled in later when the file download is starting. This
key is therefore always available to provide a valid filename
extension.

extractor.*.directory

Type

: - list of strings
  - object (condition -> format strings)

Example

: ["{category}", "{manga}", "c{chapter} - {title}"]

``` json
{
    "'nature' in content": ["Nature Pictures"],
    "retweet_id != 0"    : ["{category}", "{user[name]}", "Retweets"],
    ""                   : ["{category}", "{user[name]}"]
}
```

Description

: A list of format strings to build target directory paths with.

If this is an `object`, it must contain Python expressions mapping
to the list of format strings to use.

Each individual string in such a list represents a single path
segment, which will be joined together and appended to the
[base-directory](#extractor..base-directory) to form the complete
target directory path.
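To make the joining concrete, a hedged sketch; the category and metadata keys are only illustrative and should be checked with `-K` for the site you use:

``` json
{
    "extractor": {
        "base-directory": "~/Downloads/gallery-dl/",
        "danbooru": {
            "directory": ["{category}", "{search_tags}"]
        }
    }
}
```

With these settings, files from a danbooru tag search would end up under something like ~/Downloads/gallery-dl/danbooru/<search_tags>/.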

extractor.*.base-directory

Type

: Path_

Default

: "./gallery-dl/"

Description

: Directory path used as base for all download destinations.

extractor.*.parent-directory

Type

: bool

Default

: false

Description

: Use an extractor's current target directory as base-directory for any spawned child extractors.

extractor.*.parent-metadata & extractor.*.metadata-parent

Type

: - bool
  - string

Default

: false

Description

: If true, overwrite any metadata provided by a child extractor with its parent's.

If this is a `string`, add a parent's metadata to its children's
under a field named after said string.
For example, with `"parent-metadata": "_p_"`:

``` json
{
    "id": "child-id",
    "_p_": {"id": "parent-id"}
}
```

extractor.*.parent-skip

Type

: bool

Default

: false

Description

: Share number of skipped downloads between parent and child extractors.

extractor.*.path-restrict

Type

: - string
  - object (character -> replacement character(s))

Default

: "auto"

Example

: - "/!? (){}"
  - {" ": "_", "/": "-", "|": "-", ":": "_-_", "*": "_+_"}

Description

: A string of characters to be replaced with the value of path-replace, or an object mapping invalid/unwanted characters to their replacements, for generated path segment names.

Special values:

-   `"auto"`: Use characters from `"unix"` or `"windows"` depending
    on the local operating system
-   `"unix"`: `"/"`
-   `"windows"`: `"\\\\|/<>:\"?*"`
-   `"ascii"`: `"^0-9A-Za-z_."` (only ASCII digits, letters,
    underscores, and dots)
-   `"ascii+"`: `"^0-9@-[\\]-{ #-)+-.;=!}~"` (all ASCII characters
    except the ones not allowed by Windows)

Implementation Detail: For `strings` with length >= 2, this option
uses a [Regular Expression Character
Set](https://www.regular-expressions.info/charclass.html), meaning
that:

-   using a caret `^` as first character inverts the set
-   character ranges are supported (`0-9a-z`)
-   `]`, `-`, and `\` need to be escaped as `\\]`, `\\-`, and `\\\\`
    respectively to use them as literal characters
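For example, an inverted character-set value along the lines described above (a sketch, not a recommendation):

``` json
{
    "extractor": {
        "path-restrict": "^0-9A-Za-z_.\\-",
        "path-replace": "_"
    }
}
```

The leading `^` inverts the set, so every character other than ASCII digits, letters, underscore, dot, and the escaped hyphen would be replaced with `_`.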

extractor.*.path-replace

Type

: string

Default

: "_"

Description

: The replacement character(s) for path-restrict.

extractor.*.path-remove

Type

: string

Default

: "\u0000-\u001f\u007f" (ASCII control characters)

Description

: Set of characters to remove from generated path names.

Note: In a string with 2 or more characters, `[]^-\` need to be
escaped with backslashes, e.g. `"\\[\\]"`

extractor.*.path-strip

Type

: string

Default

: "auto"

Description

: Set of characters to remove from the end of generated path segment names using str.rstrip()

Special values:

-   `"auto"`: Use characters from `"unix"` or `"windows"` depending
    on the local operating system
-   `"unix"`: `""`
-   `"windows"`: `". "`

extractor.*.path-extended

Type

: bool

Default

: true

Description

: On Windows, use extended-length paths prefixed with `\\?\` to work around the 260-character path length limit.

extractor.*.extension-map

Type

: object (extension -> replacement)

Default

: json { "jpeg": "jpg", "jpe" : "jpg", "jfif": "jpg", "jif" : "jpg", "jfi" : "jpg" }

Description

: A JSON object mapping filename extensions to their replacements.

extractor.*.skip

Type

: - bool
  - string

Default

: true

Description

: Controls the behavior when downloading files that have been downloaded before, i.e. a file with the same filename already exists or its ID is in a download archive.

-   `true`: Skip downloads
-   `false`: Overwrite already existing files
-   `"abort"`: Stop the current extractor run
-   `"abort:N"`: Skip downloads and stop the current extractor run
    after `N` consecutive skips
-   `"terminate"`: Stop the current extractor run, including parent
    extractors
-   `"terminate:N"`: Skip downloads and stop the current extractor
    run, including parent extractors, after `N` consecutive skips
-   `"exit"`: Exit the program altogether
-   `"exit:N"`: Skip downloads and exit the program after `N`
    consecutive skips
-   `"enumerate"`: Add an enumeration index to the beginning of the
    filename extension (`file.1.ext`, `file.2.ext`, etc.)
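For instance, a sketch combining a global policy with a per-category one (the category and the abort threshold are arbitrary examples):

``` json
{
    "extractor": {
        "skip": true,
        "twitter": {
            "skip": "abort:10"
        }
    }
}
```

Here most extractors simply skip known files, while a twitter run stops after 10 consecutive skips.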

extractor.*.skip-filter

Type

: string

Description

: Python expression controlling which skipped files to count towards "abort" / "terminate" / "exit".

extractor.*.sleep

Type

: Duration_

Default

: 0

Description

: Number of seconds to sleep before each download.

extractor.*.sleep-extractor

Type

: Duration_

Default

: 0

Description

: Number of seconds to sleep before handling an input URL, i.e. before starting a new extractor.

extractor.*.sleep-429

Type

: Duration_

Default

: 60

Description

: Number of seconds to sleep when receiving a `429 Too Many Requests` response before retrying the request.

extractor.*.sleep-request

Type

: Duration_

Default

: - "0.5-1.5": [Danbooru], [E621], [foolfuuka]:search, itaku, newgrounds, [philomena], pixiv:novel, plurk, poipiku, pornpics, soundgasm, urlgalleries, vk, zerochan
  - "1.0-2.0": flickr, weibo, [wikimedia]
  - "2.0-4.0": behance, imagefap, [Nijie]
  - "3.0-6.0": exhentai, idolcomplex, [reactor], readcomiconline
  - "6.0-6.1": twibooru
  - "6.0-12.0": instagram
  - 0: otherwise

Description

: Minimum time interval in seconds between each HTTP request during data extraction.

extractor.*.username & .password

Type

: string

Default

: null

Description

: The username and password to use when attempting to log in to another site.

Specifying username and password is required for

-   `nijie`
-   `horne`

and optional for

-   `aibooru` (\*)
-   `aryion`
-   `atfbooru` (\*)
-   `bluesky`
-   `booruvar` (\*)
-   `coomerparty`
-   `danbooru` (\*)
-   `deviantart`
-   `e621` (\*)
-   `e6ai` (\*)
-   `e926` (\*)
-   `exhentai`
-   `idolcomplex`
-   `imgbb`
-   `inkbunny`
-   `kemonoparty`
-   `mangadex`
-   `mangoxo`
-   `pillowfort`
-   `sankaku`
-   `subscribestar`
-   `tapas`
-   `tsumino`
-   `twitter`
-   `vipergirls`
-   `zerochan`

These values can also be specified via the `-u/--username` and
`-p/--password` command-line options or by using a .netrc file
(see [Authentication](https://github.com/mikf/gallery-dl#authentication)).

(\*) The password value for these sites should be the API key found
in your user profile, not the actual account password.

Note: Leave the `password` value empty or undefined to be prompted
for a password when performing a login (see
[getpass()](https://docs.python.org/3/library/getpass.html#getpass.getpass)).
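A hedged sketch of how credentials are usually placed in a config file; the account values are placeholders, and the omitted password for nijie would trigger the interactive prompt mentioned above:

``` json
{
    "extractor": {
        "nijie": {
            "username": "you@example.org"
        },
        "exhentai": {
            "username": "your-account",
            "password": "your-password"
        }
    }
}
```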

extractor.*.netrc

Type

: bool

Default

: false

Description

: Enable the use of .netrc_ authentication data.

extractor.*.cookies

Type

: - Path_
  - object (name -> value)
  - list

Description

: Source to read additional cookies from. This can be

-   The Path_ to a Mozilla/Netscape format cookies.txt file

    ``` json
    "~/.local/share/cookies-instagram-com.txt"
    ```

-   An `object` specifying cookies as name-value pairs

    ``` json
    {
        "cookie-name": "cookie-value",
        "sessionid"  : "14313336321%3AsabDFvuASDnlpb%3A31",
        "isAdult"    : "1"
    }
    ```

-   A `list` with up to 5 entries specifying a browser profile.

    -   The first entry is the browser name
    -   The optional second entry is a profile name or an absolute
        path to a profile directory
    -   The optional third entry is the keyring to retrieve
        passwords for decrypting cookies from
    -   The optional fourth entry is a (Firefox) container name
        (`"none"` for only cookies with no container)
    -   The optional fifth entry is the domain to extract cookies
        for. Prefix it with a dot `.` to include cookies for
        subdomains. Has no effect when also specifying a container.

    ``` json
    ["firefox"]
    ["firefox", null, null, "Personal"]
    ["chromium", "Private", "kwallet", null, ".twitter.com"]
    ```

extractor.*.cookies-update

Type

: - bool
  - Path_

Default

: true

Description

: Export session cookies in cookies.txt format.

-   If this is a Path_, write cookies to the given file path.
-   If this is `true` and extractor.*.cookies specifies the
    Path_ of a valid cookies.txt file, update its contents.

extractor.*.proxy

Type

: - string
  - object (scheme -> proxy)

Example

: "http://10.10.1.10:3128"

``` json
{
    "http" : "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
    "http://10.20.1.128": "http://10.10.1.10:5323"
}
```

Description

: Proxy (or proxies) to be used for remote connections.

-   If this is a `string`, it is the proxy URL for all outgoing
    requests.
-   If this is an `object`, it is a scheme-to-proxy mapping to
    specify different proxy URLs for each scheme. It is also
    possible to set a proxy for a specific host by using
    `scheme://host` as key. See [Requests' proxy
    documentation](https://requests.readthedocs.io/en/master/user/advanced/#proxies)
    for more details.

Note: If a proxy URL does not include a scheme, `http://` is
assumed.

extractor.*.source-address

Type

: - string
  - list with 1 string and 1 integer as elements

Example

: - "192.168.178.20"
  - ["192.168.178.20", 8080]

Description

: Client-side IP address to bind to.

Can be either a simple `string` with just the local IP address
or a `list` with IP and explicit port number as elements.

extractor.*.user-agent

Type

: string

Default

: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0"

Description

: User-Agent header value to be used for HTTP requests.

Setting this value to `"browser"` will try to automatically detect
and use the User-Agent used by the system's default browser.

Note: This option has no effect on `pixiv`, `e621`, and `mangadex`
extractors, as these
need specific values to function correctly.

extractor.*.browser

Type

: string

Default

: - "firefox": artstation, mangasee, patreon, pixiv:series, twitter
  - null: otherwise

Example

: "chrome:macos"

Description

: Try to emulate a real browser (firefox or chrome) by using their default HTTP headers and TLS ciphers for HTTP requests.

Optionally, the operating system used in the `User-Agent` header can
be specified after a `:` (`windows`, `linux`, or `macos`).

Note: `requests` and `urllib3` only support HTTP/1.1, while a real
browser would use HTTP/2.

extractor.*.referer

Type

: - bool
  - string

Default

: true

Description

: Send Referer headers with all outgoing HTTP requests.

If this is a `string`, send it as Referer instead of the
extractor's `root` domain.

extractor.*.headers

Type

: object (name -> value)

Default

: json { "User-Agent" : "<extractor.*.user-agent>", "Accept" : "*/*", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate", "Referer" : "<extractor.*.referer>" }

Description

: Additional HTTP headers to be sent with each HTTP request.

To disable sending a header, set its value to `null`.
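As an example of both behaviors, a sketch that adds one header and disables a default one (the header values are illustrative):

``` json
{
    "extractor": {
        "headers": {
            "Accept-Language": "de-DE,de;q=0.8",
            "Referer": null
        }
    }
}
```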

extractor.*.ciphers

Type

: list of strings

Example

: json ["ECDHE-ECDSA-AES128-GCM-SHA256", "ECDHE-RSA-AES128-GCM-SHA256", "ECDHE-ECDSA-CHACHA20-POLY1305", "ECDHE-RSA-CHACHA20-POLY1305"]

Description

: List of TLS/SSL cipher suites in OpenSSL cipher list format to be passed to ssl.SSLContext.set_ciphers()

extractor.*.tls12

Type

: bool

Default

: - false: patreon, pixiv:series
  - true: otherwise

Description

: Allow selecting TLS 1.2 cipher suites.

Can be disabled to alter TLS fingerprints and potentially bypass
Cloudflare blocks.

extractor.*.keywords

Type

: object (name -> value)

Example

: {"type": "Pixel Art", "type_id": 123}

Description

: Additional name-value pairs to be added to each metadata dictionary.

extractor.*.keywords-eval

Type

: bool

Default

: false

Description

: Evaluate each keywords `string` value as a format string.

extractor.*.keywords-default

Type

: any

Default

: "None"

Description

: Default value used for missing or undefined keyword names in format strings.

extractor.*.metadata-url & extractor.*.url-metadata

Type

: string

Description

: Insert a file's download URL into its metadata dictionary as the given name.

For example, setting this option to `"gdl_file_url"` will cause a
new metadata field with name `gdl_file_url` to appear, which
contains the current file's download URL. This can then be used in
filenames, with a `metadata` post processor, etc.
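For instance, a sketch that stores the URL under the name used above and writes it out with a `metadata` post processor; the post processor fields shown here are assumptions based on the postprocessor examples later in this document, not verified defaults:

``` json
{
    "extractor": {
        "metadata-url": "gdl_file_url",
        "postprocessors": [
            {
                "name": "metadata",
                "mode": "json"
            }
        ]
    }
}
```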

extractor.*.metadata-path & extractor.*.path-metadata

Type

: string

Description

: Insert a reference to the current PathFormat data structure into metadata dictionaries as the given name.

For example, setting this option to `"gdl_path"` would make it
possible to access the current file's filename as
`"{gdl_path.filename}"`.

extractor.*.metadata-extractor & extractor.*.extractor-metadata

Type

: string

Description

: Insert a reference to the current Extractor object into metadata dictionaries as the given name.

extractor.*.metadata-http & extractor.*.http-metadata

Type

: string

Description

: Insert an object containing a file's HTTP headers and filename, extension, and date parsed from them into metadata dictionaries as the given name.

For example, setting this option to `"gdl_http"` would make it
possible to access the current file's `Last-Modified` header as
`"{gdl_http[Last-Modified]}"` and its parsed form as
`"{gdl_http[date]}"`.

extractor.*.metadata-version & extractor.*.version-metadata

Type

: string

Description

: Insert an object containing gallery-dl's version info into metadata dictionaries as the given name.

The content of the object is as follows:

``` json
{
    "version"         : "string",
    "is_executable"   : "bool",
    "current_git_head": "string or null"
}
```

extractor.*.category-transfer

Type

: bool

Default

: Extractor-specific

Description

: Transfer an extractor's (sub)category values to all child extractors spawned by it, to let them inherit their parent's config options.

extractor.*.blacklist & .whitelist

Type

: list of strings

Default

: ["oauth", "recursive", "test"] + current extractor category

Example

: ["imgur", "redgifs:user", ":image"]

Description

: A list of extractor identifiers to ignore (or allow) when spawning child extractors for unknown URLs, e.g. from reddit or plurk.

Each identifier can be

-   A category or basecategory name ("imgur", "mastodon")
-   A (base)category-subcategory pair, where both names are separated by
    a colon ("redgifs:user"). Both names can be a `*` or left empty,
    matching all possible names ("*:image", ":user").

Note: Any blacklist setting will automatically include "oauth",
"recursive", and "test".

extractor.*.archive

Type

: Path_

Default

: null

Example

: "$HOME/.archives/{category}.sqlite3"

Description

: File to store IDs of downloaded files in. Downloads of files already recorded in this archive file will be skipped.

The resulting archive file is not a plain text file but an SQLite3
database, as either lookup operations are significantly faster or memory
requirements are significantly lower when the amount of stored IDs gets
reasonably large.

Note: Archive files that do not already exist get generated
automatically.

Note: Archive paths support regular format string replacements, but be
aware that using external inputs for building local paths may pose a
security risk.

extractor.*.archive-format

Type

: string

Example

: "{id}_{offset}"

Description

: An alternative format string to build archive IDs with.

extractor.*.archive-mode

Type

: string

Default

: "file"

Description

: Controls when to write archive IDs to the archive database.

-   "file": Write IDs immediately after completing or skipping a file
    download.
-   "memory": Keep IDs in memory and only write them after successful
    job completion.

extractor.*.archive-prefix

Type

: string

Default

: "{category}"

Description

: Prefix for archive IDs.

extractor.*.archive-pragma

Type

: list of strings

Example

: ["journal_mode=WAL", "synchronous=NORMAL"]

Description

: A list of SQLite PRAGMA statements to run during archive initialization.

See <https://www.sqlite.org/pragma.html> for available PRAGMA statements
and further details.

extractor.*.actions

Type

: - object (pattern -> action)
  - list of lists with 2 strings as elements

Example

``` json
{
    "error"                   : "status |= 1",
    "warning:(?i)unable to .+": "exit 127",
    "info:Logging in as .+"   : "level = debug"
}
```

``` json
[
    ["error"                   , "status |= 1"  ],
    ["warning:(?i)unable to .+", "exit 127"     ],
    ["info:Logging in as .+"   , "level = debug"]
]
```

Description

: Perform an action when logging a message matched by pattern.

pattern is parsed as severity level (debug, info, warning, error, or
integer value) followed by an optional Python Regular Expression
separated by a colon `:`. Using `*` as level or leaving it empty matches
logging messages of all levels (e.g. `*:<re>` or `:<re>`).

action is parsed as action type followed by (optional) arguments.

Supported Action Types:

-   status: Modify job exit status.
    Expected syntax is `<operator> <value>` (e.g. `= 100`).
    Supported operators are `=` (assignment), `&` (bitwise AND),
    `|` (bitwise OR), `^` (bitwise XOR).
-   level: Modify severity level of the current logging message.
    Can be one of debug, info, warning, error or an integer value.
-   print: Write argument to stdout.
-   restart: Restart the current extractor run.
-   wait: Stop execution until Enter is pressed.
-   exit: Exit the program with the given argument as exit status.

extractor.*.postprocessors

Type

: list of Postprocessor Configuration objects

Example

``` json
[
    {"name": "zip" , "compression": "store"},
    {"name": "exec", "command": ["/home/foobar/script", "{category}", "{image_id}"]}
]
```

Description

: A list of post processors to be applied to each downloaded file in the specified order.

Unlike other options, a postprocessors setting at a deeper level does not
override any postprocessors setting at a lower level. Instead, all post
processors from all applicable postprocessors settings get combined into
a single list.

For example

-   an mtime post processor at extractor.postprocessors,
-   a zip post processor at extractor.pixiv.postprocessors,
-   and using --exec

will run all three post processors - mtime, zip, exec - for each
downloaded pixiv file.

extractor.*.postprocessor-options

Type

: object (name -> value)

Example

: {"archive": null, "keep-files": true}

Description

: Additional Postprocessor Options that get added to each individual post processor object before initializing it and evaluating filters.

extractor.*.retries

Type

: integer

Default

: 4

Description

: Maximum number of times a failed HTTP request is retried before giving up, or -1 for infinite retries.

extractor.*.retry-codes

Type

: list of integers

Example

: [404, 429, 430]

Description

: Additional HTTP response status codes to retry an HTTP request on.

2xx codes (success responses) and 3xx codes (redirection messages) will
never be retried and always count as success, regardless of this option.

5xx codes (server error responses) will always be retried, regardless of
this option.

extractor.*.timeout

Type

: float

Default

: 30.0

Description

: Amount of time (in seconds) to wait for a successful connection and response from a remote server.

This value gets internally used as the timeout parameter for the
requests.request() method.

extractor.*.verify

Type

: - bool
  - string

Default

: true

Description

: Controls whether to verify SSL/TLS certificates for HTTPS requests.

If this is a string, it must be the path to a CA bundle to use instead of
the default certificates.

This value gets internally used as the verify parameter for the
requests.request() method.

extractor.*.download

Type

: bool

Default

: true

Description

: Controls whether to download media files.

Setting this to false won't download any files, but all other functions
(postprocessors, download archive, etc.) will be executed as normal.

extractor.*.fallback

Type

: bool

Default

: true

Description

: Use fallback download URLs when a download fails.

extractor.*.image-range

Type

: - string
  - list of strings

Examples

: - "10-20"
  - "-5, 10, 30-50, 100-"
  - "10:21, 30:51:2, :5, 100:"
  - ["-5", "10", "30-50", "100-"]

Description

: Index range(s) selecting which files to download.
These can be specified as

-   index: `3` (file number 3)
-   range: `2-4` (files 2, 3, and 4)
-   slice: `3:8:2` (files 3, 5, and 7)

Arguments for range and slice notation are optional and will default to
begin (1) or end (sys.maxsize) if omitted. For example `5-`, `5:`, and
`5::` all mean "Start at file number 5".

Note: The index of the first file is 1.

extractor.*.chapter-range

Type

: string

Description

: Like image-range, but applies to delegated URLs like manga chapters, etc.

extractor.*.image-filter

Type

: - string
  - list of strings

Examples

: - "re.search(r'foo(bar)+', description)"
  - ["width >= 1200", "width/height > 1.2"]

Description

: Python expression controlling which files to download.

A file only gets downloaded when *all* of the given expressions evaluate
to True.

Available values are the filename-specific ones listed by -K or -j.

extractor.*.chapter-filter

Type

: - string
  - list of strings

Examples

: - "lang == 'en'"
  - ["language == 'French'", "10 <= chapter < 20"]

Description

: Like image-filter, but applies to delegated URLs like manga chapters, etc.

extractor.*.image-unique

Type

: bool

Default

: false

Description

: Ignore image URLs that have been encountered before during the current extractor run.

extractor.*.chapter-unique

Type

: bool

Default

: false

Description

: Like image-unique, but applies to delegated URLs like manga chapters, etc.

extractor.*.date-format

Type

: string

Default

: "%Y-%m-%dT%H:%M:%S"

Description

: Format string used to parse string values of date-min and date-max.

See strptime for a list of formatting directives.

Note: Despite its name, this option does **not** control how {date}
metadata fields are formatted. To use a different formatting for those
values other than the default %Y-%m-%d %H:%M:%S, put strptime formatting
directives after a colon `:`, for example {date:%Y%m%d}.

extractor.*.write-pages

Type

: - bool
  - string

Default

: false

Description

: During data extraction, write received HTTP request data to enumerated files in the current working directory.

Special values:

-   "all": Include HTTP request and response headers. Hide
    Authorization, Cookie, and Set-Cookie values.
-   "ALL": Include all HTTP request and response headers.

Extractor-specific Options

extractor.artstation.external

Type

: bool

Default

: false

Description

: Try to follow external URLs of embedded players.

extractor.artstation.max-posts

Type

: integer

Default

: null

Description

: Limit the number of posts/projects to download.

extractor.artstation.previews

Type

: bool

Default

: false

Description

: Download video previews.

extractor.artstation.videos

Type

: bool

Default

: true

Description

: Download video clips.

extractor.artstation.search.pro-first

Type

: bool

Default

: true

Description

: Enable the "Show Studio and Pro member artwork first" checkbox when retrieving search results.

extractor.aryion.recursive

Type

: bool

Default

: true

Description

: Controls the post extraction strategy.
-   true: Start on users' main gallery pages and recursively descend
    into subfolders
-   false: Get posts from "Latest Updates" pages

extractor.bbc.width

Type

: integer

Default

: 1920

Description

: Specifies the requested image width.

This value must be divisible by 16 and gets rounded down otherwise. The
maximum possible value appears to be 1920.

extractor.behance.modules

Type

: list of strings

Default

: ["image", "video", "mediacollection", "embed"]

Description

: Selects which gallery modules to download from.

Supported module types are image, video, mediacollection, embed, text.

extractor.blogger.videos

Type

: bool

Default

: true

Description

: Download embedded videos hosted on https://www.blogger.com/

extractor.bluesky.include

Type

: - string
  - list of strings

Default

: "media"

Example

: - "avatar,background,posts"
  - ["avatar", "background", "posts"]

Description

: A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "avatar", "background", "posts", "replies", "media",
"likes".

It is possible to use "all" instead of listing all values separately.

extractor.bluesky.metadata

Type

: - bool
  - string
  - list of strings

Default

: false

Example

: - "facets,user"
  - ["facets", "user"]

Description

: Extract additional metadata.

-   facets: hashtags, mentions, and uris
-   user: detailed user metadata for the user referenced in the input URL
    (See [app.bsky.actor.getProfile](https://www.docs.bsky.app/docs/api/app-bsky-actor-get-profile)).

extractor.bluesky.post.depth

Type

: integer

Default

: 0

Description

: Sets the maximum depth of returned reply posts.

(See depth parameter of [app.bsky.feed.getPostThread](https://www.docs.bsky.app/docs/api/app-bsky-feed-get-post-thread))

extractor.bluesky.reposts

Type

: bool

Default

: false

Description

: Process reposts.

extractor.cyberdrop.domain

Type

: string

Default

: null

Example

: "cyberdrop.to"

Description

: Specifies the domain used by cyberdrop regardless of input URL.

Setting this option to "auto" uses the same domain as a given input URL.

extractor.danbooru.external

Type

: bool

Default

: false

Description

: For unavailable or restricted posts, follow the source and download from there if possible.

extractor.danbooru.ugoira

Type

: bool

Default

: false

Description

: Controls the download target for Ugoira posts.

-   true: Original ZIP archives
-   false: Converted video files

extractor.[Danbooru].metadata

Type

: - bool
  - string
  - list of strings

Default

: false

Example

: - "replacements,comments,ai_tags"
  - ["replacements", "comments", "ai_tags"]

Description

: Extract additional metadata (notes, artist commentary, parent, children, uploader)

It is possible to specify a custom list of metadata includes. See
[available_includes](https://github.com/danbooru/danbooru/blob/2cf7baaf6c5003c1a174a8f2d53db010cf05dca7/app/models/post.rb#L1842-L1849)
for possible field names. aibooru also supports ai_metadata.

Note: This requires 1 additional HTTP request per 200-post batch.

extractor.[Danbooru].threshold

Type

: - string
  - integer

Default

: "auto"

Description

: Stop paginating over API results if the length of a batch of returned posts is less than the specified number. Defaults to the per-page limit of the current instance, which is 200.

Note: Changing this setting is normally not necessary. When the value is
greater than the per-page limit, gallery-dl will stop after the first
batch.
The value cannot be less than 1.

extractor.derpibooru.api-key

Type

: string

Default

: null

Description

: Your [Derpibooru API Key](https://derpibooru.org/registrations/edit), to use your account's browsing settings and filters.

extractor.derpibooru.filter

Type

: integer

Default

: 56027 ([Everything](https://derpibooru.org/filters/56027) filter)

Description

: The content filter ID to use.

Setting an explicit filter ID overrides any default filters and can be
used to access 18+ content without an API Key.

See [Filters](https://derpibooru.org/filters) for details.

extractor.deviantart.auto-watch

Type

: bool

Default

: false

Description

: Automatically watch users when encountering "Watchers-Only Deviations" (requires a refresh-token).

extractor.deviantart.auto-unwatch

Type

: bool

Default

: false

Description

: After watching a user through auto-watch, unwatch that user at the end of the current extractor run.

extractor.deviantart.comments

Type

: bool

Default

: false

Description

: Extract comments metadata.

extractor.deviantart.comments-avatars

Type

: bool

Default

: false

Description

: Download the avatar of each commenting user.

Note: Enabling this option also enables deviantart.comments.

extractor.deviantart.extra

Type

: bool

Default

: false

Description

: Download extra Sta.sh resources from description texts and journals.

Note: Enabling this option also enables deviantart.metadata.

extractor.deviantart.flat

Type

: bool

Default

: true

Description

: Select the directory structure created by the Gallery- and Favorite-Extractors.

-   true: Use a flat directory structure.
-   false: Collect a list of all gallery-folders or favorites-collections
    and transfer any further work to other extractors (folder or
    collection), which will then create individual subdirectories for
    each of them.

Note: Going through all gallery folders will not be able to fetch
deviations which aren't in any folder.

extractor.deviantart.folders

Type

: bool

Default

: false

Description

: Provide a folders metadata field that contains the names of all folders a deviation is present in.

Note: Gathering this information requires a lot of API calls. Use with
caution.

extractor.deviantart.group

Type

: - bool
  - string

Default

: true

Description

: Check whether the profile name in a given URL belongs to a group or a regular user. When disabled, assume every given profile name belongs to a regular user.

Special values:

-   "skip": Skip groups

extractor.deviantart.include

Type

: - string
  - list of strings

Default

: "gallery"

Example

: - "favorite,journal,scraps"
  - ["favorite", "journal", "scraps"]

Description

: A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "avatar", "background", "gallery", "scraps",
"journal", "favorite", "status".

It is possible to use "all" instead of listing all values separately.

extractor.deviantart.intermediary

Type

: bool

Default

: true

Description

: For older non-downloadable images, download a higher-quality intermediary version.

extractor.deviantart.journals

Type

: string

Default

: "html"

Description

: Selects the output format for textual content. This includes journals, literature and status updates.
*"html": HTML with (roughly) the same layout as on DeviantArt. *"text": Plain text with image references and HTML tags removed. *"none": Don't download textual content. extractor.deviantart.jwt ------------------------ TypeboolDefaultfalseDescription Update `JSON Web Tokens <https://jwt.io/>`__ (thetokenURL parameter) of otherwise non-downloadable, low-resolution images to be able to download them in full resolution. Note: No longer functional as of 2023-10-11 extractor.deviantart.mature --------------------------- TypeboolDefaulttrueDescription Enable mature content. This option simply sets the |mature_content|_ parameter for API calls to either"true"or"false"and does not do any other form of content filtering. extractor.deviantart.metadata ----------------------------- Type *bool*string*listofstringsDefaultfalseExample *"stats,submission"*["camera", "stats", "submission"]Description Extract additional metadata for deviation objects. Providesdescription,tags,license, andis_watchingfields when enabled. It is possible to request extended metadata by specifying a list of *camera: EXIF information (if available) *stats: deviation statistics *submission: submission information *collection: favourited folder information (requires a `refresh token <extractor.deviantart.refresh-token_>`__) *gallery: gallery folder information (requires a `refresh token <extractor.deviantart.refresh-token_>`__) Set this option to"all"to request all extended metadata categories. See `/deviation/metadata <https://www.deviantart.com/developers/http/v1/20210526/deviation_metadata/7824fc14d6fba6acbacca1cf38c24158>`__ for official documentation. extractor.deviantart.original ----------------------------- Type *bool*stringDefaulttrueDescription Download original files if available. Setting this option to"images"only downloads original files if they are images and falls back to preview versions for everything else (archives, etc.). extractor.deviantart.pagination ------------------------------- TypestringDefault"api"Description Controls when to stop paginating over API results. *"api": Trust the API and stop whenhas_moreisfalse. *"manual": Disregardhas_moreand only stop when a batch of results is empty. extractor.deviantart.public --------------------------- TypeboolDefaulttrueDescription Use a public access token for API requests. Disable this option to *force* using a private token for all requests when a `refresh token <extractor.deviantart.refresh-token_>`__ is provided. extractor.deviantart.quality ---------------------------- Type *integer*stringDefault100Description JPEG quality level of images for which an original file download is not available. Set this to"png"to download a PNG version of these images instead. extractor.deviantart.refresh-token ---------------------------------- TypestringDefaultnullDescription Therefresh-tokenvalue you get from `linking your DeviantArt account to gallery-dl <OAuth_>`__. Using arefresh-tokenallows you to access private or otherwise not publicly available deviations. Note: Therefresh-tokenbecomes invalid `after 3 months <https://www.deviantart.com/developers/authentication#refresh>`__ or whenever your `cache file <cache.file_>`__ is deleted or cleared. extractor.deviantart.wait-min ----------------------------- TypeintegerDefault0Description Minimum wait time in seconds before API requests. extractor.deviantart.avatar.formats ----------------------------------- TypelistofstringsExample["original.jpg", "big.jpg", "big.gif", ".png"]Description Avatar URL formats to return. 
Each format is parsed as SIZE.EXT. Leave SIZE empty to download the
regular, small avatar format.

extractor.[E621].metadata

Type

: - bool
  - string
  - list of strings

Default

: false

Example

: - "notes,pools"
  - ["notes", "pools"]

Description

: Extract additional metadata (notes, pool metadata) if available.

Note: This requires 0-2 additional HTTP requests per post.

extractor.[E621].threshold

Type

: - string
  - integer

Default

: "auto"

Description

: Stop paginating over API results if the length of a batch of returned posts is less than the specified number. Defaults to the per-page limit of the current instance, which is 320.

Note: Changing this setting is normally not necessary. When the value is
greater than the per-page limit, gallery-dl will stop after the first
batch. The value cannot be less than 1.

extractor.exhentai.domain

Type

: string

Default

: "auto"

Description

: -   "auto": Use e-hentai.org or exhentai.org depending on the input URL
  -   "e-hentai.org": Use e-hentai.org for all URLs
  -   "exhentai.org": Use exhentai.org for all URLs

extractor.exhentai.fallback-retries

Type

: integer

Default

: 2

Description

: Number of times a failed image gets retried or -1 for infinite retries.

extractor.exhentai.fav

Type

: string

Example

: "4"

Description

: After downloading a gallery, add it to your account's favorites as the given category number.

Note: Set this to "favdel" to remove galleries from your favorites.

Note: This will remove any Favorite Notes when applied to already
favorited galleries.

extractor.exhentai.gp

Type

: string

Default

: "resized"

Description

: Selects how to handle "you do not have enough GP" errors.

-   "resized": Continue downloading non-original images.
-   "stop": Stop the current extractor run.
-   "wait": Wait for user input before retrying the current image.

extractor.exhentai.limits

Type

: integer

Default

: null

Description

: Sets a custom image download limit and stops extraction when it gets exceeded.

extractor.exhentai.metadata

Type

: bool

Default

: false

Description

: Load extended gallery metadata from the [API](https://ehwiki.org/wiki/API#Gallery_Metadata).

Adds archiver_key, posted, and torrents. Makes date and filesize more
precise.

extractor.exhentai.original

Type

: bool

Default

: true

Description

: Download full-sized original images if available.

extractor.exhentai.source

Type

: string

Default

: "gallery"

Description

: Selects an alternative source to download files from.

-   "hitomi": Download the corresponding gallery from hitomi.la

extractor.fanbox.embeds

Type

: - bool
  - string

Default

: true

Description

: Control behavior on embedded content from external sites.

-   true: Extract embed URLs and download them if supported (videos are
    not downloaded).
-   "ytdl": Like true, but let youtube-dl handle video extraction and
    download for YouTube, Vimeo and SoundCloud embeds.
-   false: Ignore embeds.

extractor.fanbox.metadata

Type

: - bool
  - string
  - list of strings

Default

: false

Example

: - "user,plan"
  - ["user", "plan"]

Description

: Extract plan and extended user metadata.

extractor.flickr.access-token & .access-token-secret

Type

: string

Default

: null

Description

: The access_token and access_token_secret values you get from linking your Flickr account to gallery-dl (see OAuth_).
extractor.flickr.contexts

Type

: bool

Default

: false

Description

: For each photo, return the albums and pools it belongs to as set and pool metadata.

Note: This requires 1 additional API call per photo. See
[flickr.photos.getAllContexts](https://www.flickr.com/services/api/flickr.photos.getAllContexts.html)
for details.

extractor.flickr.exif

Type

: bool

Default

: false

Description

: For each photo, return its EXIF/TIFF/GPS tags as exif and camera metadata.

Note: This requires 1 additional API call per photo. See
[flickr.photos.getExif](https://www.flickr.com/services/api/flickr.photos.getExif.html)
for details.

extractor.flickr.metadata

Type

: - bool
  - string
  - list of strings

Default

: false

Example

: - "license,last_update,machine_tags"
  - ["license", "last_update", "machine_tags"]

Description

: Extract additional metadata (license, date_taken, original_format, last_update, geo, machine_tags, o_dims)

It is possible to specify a custom list of metadata includes. See
[the extras parameter](https://www.flickr.com/services/api/flickr.people.getPhotos.html)
in [Flickr's API docs](https://www.flickr.com/services/api/) for possible
field names.

extractor.flickr.videos

Type

: bool

Default

: true

Description

: Extract and download videos.

extractor.flickr.size-max

Type

: - integer
  - string

Default

: null

Description

: Sets the maximum allowed size for downloaded images.

-   If this is an integer, it specifies the maximum image dimension
    (width and height) in pixels.
-   If this is a string, it should be one of Flickr's format specifiers
    ("Original", "Large", ... or "o", "k", "h", "l", ...) to use as an
    upper limit.

extractor.furaffinity.descriptions

Type

: string

Default

: "text"

Description

: Controls the format of description metadata fields.

-   "text": Plain text with HTML tags removed
-   "html": Raw HTML content

extractor.furaffinity.external

Type

: bool

Default

: false

Description

: Follow external URLs linked in descriptions.

extractor.furaffinity.include

Type

: - string
  - list of strings

Default

: "gallery"

Example

: - "scraps,favorite"
  - ["scraps", "favorite"]

Description

: A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "gallery", "scraps", "favorite".

It is possible to use "all" instead of listing all values separately.

extractor.furaffinity.layout

Type

: string

Default

: "auto"

Description

: Selects which site layout to expect when parsing posts.

-   "auto": Automatically differentiate between "old" and "new"
-   "old": Expect the *old* site layout
-   "new": Expect the *new* site layout

extractor.gelbooru.api-key & .user-id

Type

: string

Default

: null

Description

: Values from the API Access Credentials section found at the bottom of your [Account Options](https://gelbooru.com/index.php?page=account&s=options) page.

extractor.gelbooru.favorite.order-posts

Type

: string

Default

: "desc"

Description

: Controls the order in which favorited posts are returned.

-   "asc": Ascending favorite date order (oldest first)
-   "desc": Descending favorite date order (newest first)
-   "reverse": Same as "asc"

extractor.generic.enabled

Type

: bool

Default

: false

Description

: Match **all** URLs not otherwise supported by gallery-dl, even ones without a generic: prefix.
extractor.gofile.api-token

Type

: string

Default

: null

Description

: API token value found at the bottom of your [profile page](https://gofile.io/myProfile).

If not set, a temporary guest token will be used.

extractor.gofile.website-token

Type

: string

Description

: API token value used during API requests.

An invalid or not up-to-date value will result in 401 Unauthorized
errors.

Keeping this option unset will use an extra HTTP request to attempt to
fetch the current value used by gofile.

extractor.gofile.recursive

Type

: bool

Default

: false

Description

: Recursively download files from subfolders.

extractor.hentaifoundry.include

Type

: - string
  - list of strings

Default

: "pictures"

Example

: - "scraps,stories"
  - ["scraps", "stories"]

Description

: A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "pictures", "scraps", "stories", "favorite".

It is possible to use "all" instead of listing all values separately.

extractor.hitomi.format

Type

: string

Default

: "webp"

Description

: Selects which image format to download.

Available formats are "webp" and "avif". "original" will try to download
the original jpg or png versions, but is most likely going to fail with
403 Forbidden errors.

extractor.imagechest.access-token

Type

: string

Description

: Your personal Image Chest access token.

These tokens allow using the API instead of having to scrape HTML pages,
providing more detailed metadata (date, description, etc.).

See https://imgchest.com/docs/api/1.0/general/authorization for
instructions on how to generate such a token.

extractor.imgur.client-id

Type

: string

Description

: Custom Client ID value for API requests.

extractor.imgur.mp4

Type

: - bool
  - string

Default

: true

Description

: Controls whether to choose the GIF or MP4 version of an animation.

-   true: Follow Imgur's advice and choose MP4 if the prefer_video flag
    in an image's metadata is set.
-   false: Always choose GIF.
-   "always": Always choose MP4.

extractor.inkbunny.orderby

Type

: string

Default

: "create_datetime"

Description

: Value of the orderby parameter for submission searches.

(See [API#Search](https://wiki.inkbunny.net/wiki/API#Search) for details)

extractor.instagram.api

Type

: string

Default

: "rest"

Description

: Selects which API endpoints to use.

-   "rest": REST API - higher-resolution media
-   "graphql": GraphQL API - lower-resolution media

extractor.instagram.include

Type

: - string
  - list of strings

Default

: "posts"

Example

: - "stories,highlights,posts"
  - ["stories", "highlights", "posts"]

Description

: A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "posts", "reels", "tagged", "stories", "highlights",
"avatar".

It is possible to use "all" instead of listing all values separately.

extractor.instagram.metadata

Type

: bool

Default

: false

Description

: Provide extended user metadata even when referring to a user by ID, e.g. instagram.com/id:12345678.

Note: This metadata is always available when referring to a user by name,
e.g. instagram.com/USERNAME.

extractor.instagram.order-files

Type

: string

Default

: "asc"

Description

: Controls the order in which files of each post are returned.
*"asc": Same order as displayed in a post *"desc": Reverse order as displayed in a post *"reverse": Same as"desc"Note: This option does *not* affect{num}. To enumerate files in reverse order, usecount - num + 1. extractor.instagram.order-posts ------------------------------- TypestringDefault"asc"Description Controls the order in which posts are returned. *"asc": Same order as displayed *"desc": Reverse order as displayed *"id"or"id_asc": Ascending order by ID *"id_desc": Descending order by ID *"reverse": Same as"desc"Note: This option only affectshighlights. extractor.instagram.previews ---------------------------- TypeboolDefaultfalseDescription Download video previews. extractor.instagram.videos -------------------------- TypeboolDefaulttrueDescription Download video files. extractor.itaku.videos ---------------------- TypeboolDefaulttrueDescription Download video files. extractor.kemonoparty.comments ------------------------------ TypeboolDefaultfalseDescription Extractcommentsmetadata. Note: This requires 1 additional HTTP request per post. extractor.kemonoparty.duplicates -------------------------------- TypeboolDefaultfalseDescription Controls how to handle duplicate files in a post. *true: Download duplicates *false: Ignore duplicates extractor.kemonoparty.dms ------------------------- TypeboolDefaultfalseDescription Extract a user's direct messages asdmsmetadata. extractor.kemonoparty.announcements ----------------------------------- TypeboolDefaultfalseDescription Extract a user's announcements asannouncementsmetadata. extractor.kemonoparty.favorites ------------------------------- TypestringDefaultartistDescription Determines the type of favorites to be downloaded. Available types areartist, andpost. extractor.kemonoparty.files --------------------------- TypelistofstringsDefault["attachments", "file", "inline"]Description Determines the type and order of files to be downloaded. Available types arefile,attachments, andinline. extractor.kemonoparty.max-posts ------------------------------- TypeintegerDefaultnullDescription Limit the number of posts to download. extractor.kemonoparty.metadata ------------------------------ TypeboolDefaultfalseDescription Extractusernamemetadata. extractor.kemonoparty.revisions ------------------------------- Type *bool*stringDefaultfalseDescription Extract post revisions. Set this to"unique"to filter out duplicate revisions. Note: This requires 1 additional HTTP request per post. extractor.kemonoparty.order-revisions ------------------------------------- TypestringDefault"desc"Description Controls the order in which `revisions <extractor.kemonoparty.revisions_>`__ are returned. *"asc": Ascending order (oldest first) *"desc": Descending order (newest first) *"reverse": Same as"asc"extractor.khinsider.format -------------------------- TypestringDefault"mp3"Description The name of the preferred file format to download. Use"all"to download all available formats, or a (comma-separated) list to select multiple formats. If the selected format is not available, the first in the list gets chosen (usually `mp3`). extractor.lolisafe.domain ------------------------- TypestringDefaultnullDescription Specifies the domain used by alolisafeextractor regardless of input URL. Setting this option to"auto"uses the same domain as a given input URL. extractor.luscious.gif ---------------------- TypeboolDefaultfalseDescription Format in which to download animated images. Usetrueto download animated images as gifs andfalseto download as mp4 videos. 
extractor.mangadex.api-server

Type

: string

Default

: "https://api.mangadex.org"

Description

: The server to use for API requests.

extractor.mangadex.api-parameters

Type

: object (name -> value)

Example

: {"order[updatedAt]": "desc"}

Description

: Additional query parameters to send when fetching manga chapters.

(See [/manga/{id}/feed](https://api.mangadex.org/docs/swagger.html#/Manga/get-manga-id-feed)
and [/user/follows/manga/feed](https://api.mangadex.org/docs/swagger.html#/Feed/get-user-follows-manga-feed))

extractor.mangadex.lang

Type

: - string
  - list of strings

Example

: - "en"
  - "fr,it"
  - ["fr", "it"]

Description

: [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) language codes to filter chapters by.

extractor.mangadex.ratings

Type

: list of strings

Default

: ["safe", "suggestive", "erotica", "pornographic"]

Description

: List of acceptable content ratings for returned chapters.

extractor.mangapark.source

Type

: - string
  - integer

Example

: - "koala:en"
  - 15150116

Description

: Select chapter source and language for a manga.

The general syntax is "<source name>:<ISO 639-1 language code>". Both are
optional, meaning "koala", "koala:", ":en", or even just ":" are possible
as well.

Specifying the numeric ID of a source is also supported.

extractor.[mastodon].access-token

Type

: string

Default

: null

Description

: The access-token value you get from linking your account to gallery-dl (see OAuth_).

Note: gallery-dl comes with built-in tokens for mastodon.social, pawoo
and baraag. For other instances, you need to obtain an access-token in
order to use usernames in place of numerical user IDs.

extractor.[mastodon].cards

Type

: bool

Default

: false

Description

: Fetch media from cards.

extractor.[mastodon].reblogs

Type

: bool

Default

: false

Description

: Fetch media from reblogged posts.

extractor.[mastodon].replies

Type

: bool

Default

: true

Description

: Fetch media from replies to other posts.

extractor.[mastodon].text-posts

Type

: bool

Default

: false

Description

: Also emit metadata for text-only posts without media content.

extractor.[misskey].access-token

Type

: string

Description

: Your access token, necessary to fetch favorited notes.

extractor.[misskey].renotes

Type

: bool

Default

: false

Description

: Fetch media from renoted notes.

extractor.[misskey].replies

Type

: bool

Default

: true

Description

: Fetch media from replies to other notes.

extractor.[moebooru].pool.metadata

Type

: bool

Default

: false

Description

: Extract extended pool metadata.

Note: Not supported by all moebooru instances.

extractor.newgrounds.flash

Type

: bool

Default

: true

Description

: Download original Adobe Flash animations instead of pre-rendered videos.

extractor.newgrounds.format

Type

: string

Default

: "original"

Example

: "720p"

Description

: Selects the preferred format for video downloads.

If the selected format is not available, the next smaller one gets
chosen.

extractor.newgrounds.include

Type

: - string
  - list of strings

Default

: "art"

Example

: - "movies,audio"
  - ["movies", "audio"]

Description

: A (comma-separated) list of subcategories to include when processing a user profile.

Possible values are "art", "audio", "games", "movies".
extractor.nijie.include
-----------------------
Type: `string` or `list` of `strings` | Default: `"illustration,doujin"`
Description: A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are `"illustration"`, `"doujin"`, `"favorite"`, `"nuita"`.
It is possible to use `"all"` instead of listing all values separately.

extractor.nitter.quoted
-----------------------
Type: `bool` | Default: `false`
Description: Fetch media from quoted Tweets.

extractor.nitter.retweets
-------------------------
Type: `bool` | Default: `false`
Description: Fetch media from Retweets.

extractor.nitter.videos
-----------------------
Type: `bool` or `string` | Default: `true`
Description: Control video download behavior.
* `true`: Download videos
* `"ytdl"`: Download videos using `youtube-dl`_
* `false`: Skip video Tweets

extractor.oauth.browser
-----------------------
Type: `bool` | Default: `true`
Description: Controls how a user is directed to an OAuth authorization page.
* `true`: Use Python's `webbrowser.open()` method to automatically open the URL in the user's default browser.
* `false`: Ask the user to copy & paste an URL from the terminal.

extractor.oauth.cache
---------------------
Type: `bool` | Default: `true`
Description: Store tokens received during OAuth authorizations in `cache <cache.file_>`__.

extractor.oauth.host
--------------------
Type: `string` | Default: `"localhost"`
Description: Host name / IP address to bind to during OAuth authorization.

extractor.oauth.port
--------------------
Type: `integer` | Default: `6414`
Description: Port number to listen on during OAuth authorization.
Note: All redirects will go to port `6414`, regardless of the port specified here. You'll have to manually adjust the port number in your browser's address bar when using a different port than the default.

extractor.paheal.metadata
-------------------------
Type: `bool` | Default: `false`
Description: Extract additional metadata (`source`, `uploader`).
Note: This requires 1 additional HTTP request per post.

extractor.patreon.files
-----------------------
Type: `list` of `strings` | Default: `["images", "image_large", "attachments", "postfile", "content"]`
Description: Determines the type and order of files to be downloaded.
Available types are `postfile`, `images`, `image_large`, `attachments`, and `content`.

extractor.photobucket.subalbums
-------------------------------
Type: `bool` | Default: `true`
Description: Download subalbums.

extractor.pillowfort.external
-----------------------------
Type: `bool` | Default: `false`
Description: Follow links to external sites, e.g. Twitter.

extractor.pillowfort.inline
---------------------------
Type: `bool` | Default: `true`
Description: Extract inline images.

extractor.pillowfort.reblogs
----------------------------
Type: `bool` | Default: `false`
Description: Extract media from reblogged posts.

extractor.pinterest.domain
--------------------------
Type: `string` | Default: `"auto"`
Description: Specifies the domain used by `pinterest` extractors.
Setting this option to `"auto"` uses the same domain as a given input URL.

extractor.pinterest.sections
----------------------------
Type: `bool` | Default: `true`
Description: Include pins from board sections.

extractor.pinterest.videos
--------------------------
Type: `bool` | Default: `true`
Description: Download from video pins.
extractor.pixeldrain.api-key
----------------------------
Type: `string`
Description: Your account's `API key <https://pixeldrain.com/user/api_keys>`__.

extractor.pixiv.include
-----------------------
Type: `string` or `list` of `strings` | Default: `"artworks"`
Example: `"avatar,background,artworks"`, `["avatar", "background", "artworks"]`
Description: A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are `"artworks"`, `"avatar"`, `"background"`, `"favorite"`, `"novel-user"`, `"novel-bookmark"`.
It is possible to use `"all"` instead of listing all values separately.

extractor.pixiv.refresh-token
-----------------------------
Type: `string`
Description: The `refresh-token` value you get from running `gallery-dl oauth:pixiv` (see OAuth_) or by using a third-party tool like `gppt <https://github.com/eggplants/get-pixivpy-token>`__.

extractor.pixiv.novel.covers
----------------------------
Type: `bool` | Default: `false`
Description: Download cover images.

extractor.pixiv.novel.embeds
----------------------------
Type: `bool` | Default: `false`
Description: Download embedded images.

extractor.pixiv.novel.full-series
---------------------------------
Type: `bool` | Default: `false`
Description: When downloading a novel that is part of a series, download all novels of that series.

extractor.pixiv.metadata
------------------------
Type: `bool` | Default: `false`
Description: Fetch extended `user` metadata.

extractor.pixiv.metadata-bookmark
---------------------------------
Type: `bool` | Default: `false`
Description: For works bookmarked by `your own account <extractor.pixiv.refresh-token_>`__, fetch bookmark tags as `tags_bookmark` metadata.
Note: This requires 1 additional API call per bookmarked post.

extractor.pixiv.work.related
----------------------------
Type: `bool` | Default: `false`
Description: Also download related artworks.

extractor.pixiv.tags
--------------------
Type: `string` | Default: `"japanese"`
Description: Controls the `tags` metadata field.
* `"japanese"`: List of Japanese tags
* `"translated"`: List of translated tags
* `"original"`: Unmodified list with both Japanese and translated tags

extractor.pixiv.ugoira
----------------------
Type: `bool` | Default: `true`
Description: Download Pixiv's Ugoira animations or ignore them.
These animations come as a `.zip` file containing all animation frames in JPEG format.
Use an `ugoira` post processor to convert them to watchable videos. (Example__)

extractor.pixiv.max-posts
-------------------------
Type: `integer` | Default: `0`
Description: When downloading galleries, this sets the maximum number of posts to get. A value of `0` means no limit.

extractor.plurk.comments
------------------------
Type: `bool` | Default: `false`
Description: Also search Plurk comments for URLs.

extractor.[postmill].save-link-post-body
----------------------------------------
Type: `bool` | Default: `false`
Description: Whether or not to save the body for link/image posts.

extractor.reactor.gif
---------------------
Type: `bool` | Default: `false`
Description: Format in which to download animated images.
Use `true` to download animated images as gifs and `false` to download as mp4 videos.

extractor.readcomiconline.captcha
---------------------------------
Type: `string` | Default: `"stop"`
Description: Controls how to handle redirects to CAPTCHA pages.
* `"stop"`: Stop the current extractor run.
* `"wait"`: Ask the user to solve the CAPTCHA and wait.

extractor.readcomiconline.quality
---------------------------------
Type: `string` | Default: `"auto"`
Description: Sets the `quality` query parameter of issue pages. (`"lq"` or `"hq"`)
`"auto"` uses the quality parameter of the input URL or `"hq"` if not present.
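To show how a few of the pixiv options above fit together, here is a minimal configuration sketch; the `refresh-token` value is a placeholder and the other values are arbitrary examples:

.. code:: json

    {
        "extractor": {
            "pixiv": {
                "refresh-token": "...",
                "include": "avatar,artworks",
                "tags": "translated",
                "ugoira": true
            }
        }
    }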
extractor.reddit.comments
-------------------------
Type: `integer` | Default: `0`
Description: The value of the `limit` parameter when loading a submission and its comments.
This number (roughly) specifies the total amount of comments being retrieved with the first API call.
Reddit's internal default and maximum values for this parameter appear to be 200 and 500 respectively.
The value `0` ignores all comments and significantly reduces the time required when scanning a subreddit.

extractor.reddit.morecomments
-----------------------------
Type: `bool` | Default: `false`
Description: Retrieve additional comments by resolving the `more` comment stubs in the base comment tree.
Note: This requires 1 additional API call for every 100 extra comments.

extractor.reddit.date-min & .date-max
-------------------------------------
Type: `Date` | Default: `0` and `253402210800` (timestamp of `datetime.max`)
Description: Ignore all submissions posted before/after this date.

extractor.reddit.id-min & .id-max
---------------------------------
Type: `string`
Example: `"6kmzv2"`
Description: Ignore all submissions posted before/after the submission with this ID.

extractor.reddit.previews
-------------------------
Type: `bool` | Default: `true`
Description: For failed downloads from external URLs / child extractors, download Reddit's preview image/video if available.

extractor.reddit.recursion
--------------------------
Type: `integer` | Default: `0`
Description: Reddit extractors can recursively visit other submissions linked to in the initial set of submissions. This value sets the maximum recursion depth.
Special values:
* `0`: Recursion is disabled
* `-1`: Infinite recursion (don't do this)

extractor.reddit.refresh-token
------------------------------
Type: `string` | Default: `null`
Description: The `refresh-token` value you get from `linking your Reddit account to gallery-dl <OAuth_>`__.
Using a `refresh-token` allows you to access private or otherwise not publicly available subreddits, given that your account is authorized to do so, but requests to the reddit API are going to be rate limited at 600 requests every 10 minutes/600 seconds.

extractor.reddit.videos
-----------------------
Type: `bool` or `string` | Default: `true`
Description: Control video download behavior.
* `true`: Download videos and use `youtube-dl`_ to handle HLS and DASH manifests
* `"ytdl"`: Download videos and let `youtube-dl`_ handle all of video extraction and download
* `"dash"`: Extract DASH manifest URLs and use `youtube-dl`_ to download and merge them. (*)
* `false`: Ignore videos
(*) This saves 1 HTTP request per video and might potentially be able to download otherwise deleted videos, but it will not always get the best video quality available.

extractor.redgifs.format
------------------------
Type: `string` or `list` of `strings` | Default: `["hd", "sd", "gif"]`
Description: List of names of the preferred animation format, which can be `"hd"`, `"sd"`, `"gif"`, `"thumbnail"`, `"vthumbnail"`, or `"poster"`.
If a selected format is not available, the next one in the list will be tried until an available format is found.
If the format is given as `string`, it will be extended with `["hd", "sd", "gif"]`. Use a list with one element to restrict it to only one possible format.

extractor.sankaku.id-format
---------------------------
Type: `string` | Default: `"numeric"`
Description: Format of `id` metadata fields.
* `"alphanumeric"` or `"alnum"`: 11-character alphanumeric IDs (`y0abGlDOr2o`)
* `"numeric"` or `"legacy"`: numeric IDs (`360451`)

extractor.sankaku.refresh
-------------------------
Type: `bool` | Default: `false`
Description: Refresh download URLs before they expire.
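A minimal sketch combining several of the reddit options above in one configuration block (the numbers and date are arbitrary placeholders):

.. code:: json

    {
        "extractor": {
            "reddit": {
                "comments": 500,
                "morecomments": true,
                "date-min": "2023-01-01T00:00:00",
                "videos": "ytdl"
            }
        }
    }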
extractor.sankakucomplex.embeds ------------------------------- TypeboolDefaultfalseDescription Download video embeds from external sites. extractor.sankakucomplex.videos ------------------------------- TypeboolDefaulttrueDescription Download videos. extractor.skeb.article ---------------------- TypeboolDefaultfalseDescription Download article images. extractor.skeb.sent-requests ---------------------------- TypeboolDefaultfalseDescription Download sent requests. extractor.skeb.thumbnails ------------------------- TypeboolDefaultfalseDescription Download thumbnails. extractor.skeb.search.filters ----------------------------- Type *string*listofstringsDefault["genre:art", "genre:voice", "genre:novel", "genre:video", "genre:music", "genre:correction"]Example"genre:music OR genre:voice"Description Filters used during searches. extractor.smugmug.videos ------------------------ TypeboolDefaulttrueDescription Download video files. extractor.steamgriddb.animated ------------------------------ TypeboolDefaulttrueDescription Include animated assets when downloading from a list of assets. extractor.steamgriddb.epilepsy ------------------------------ TypeboolDefaulttrueDescription Include assets tagged with epilepsy when downloading from a list of assets. extractor.steamgriddb.dimensions -------------------------------- Type *string*listofstringsDefault"all"Examples *"1024x512,512x512"*["460x215", "920x430"]Description Only include assets that are in the specified dimensions.allcan be used to specify all dimensions. Valid values are: * Grids:460x215,920x430,600x900,342x482,660x930,512x512,1024x1024* Heroes:1920x620,3840x1240,1600x650* Logos: N/A (will be ignored) * Icons:8x8,10x10,14x14,16x16,20x20,24x24,28x28,32x32,35x35,40x40,48x48,54x54,56x56,57x57,60x60,64x64,72x72,76x76,80x80,90x90,96x96,100x100,114x114,120x120,128x128,144x144,150x150,152x152,160x160,180x180,192x192,194x194,256x256,310x310,512x512,768x768,1024x1024extractor.steamgriddb.file-types -------------------------------- Type *string*listofstringsDefault"all"Examples *"png,jpeg"*["jpeg", "webp"]Description Only include assets that are in the specified file types.allcan be used to specify all file types. Valid values are: * Grids:png,jpeg,jpg,webp* Heroes:png,jpeg,jpg,webp* Logos:png,webp* Icons:png,icoextractor.steamgriddb.download-fake-png --------------------------------------- TypeboolDefaulttrueDescription Download fake PNGs alongside the real file. extractor.steamgriddb.humor --------------------------- TypeboolDefaulttrueDescription Include assets tagged with humor when downloading from a list of assets. extractor.steamgriddb.languages ------------------------------- Type *string*listofstringsDefault"all"Examples *"en,km"*["fr", "it"]Description Only include assets that are in the specified languages.allcan be used to specify all languages. Valid values are `ISO 639-1 <https://en.wikipedia.org/wiki/ISO_639-1>`__ language codes. extractor.steamgriddb.nsfw -------------------------- TypeboolDefaulttrueDescription Include assets tagged with adult content when downloading from a list of assets. extractor.steamgriddb.sort -------------------------- TypestringDefaultscore_descDescription Set the chosen sorting method when downloading from a list of assets. 
Can be one of: *score_desc(Highest Score (Beta)) *score_asc(Lowest Score (Beta)) *score_old_desc(Highest Score (Old)) *score_old_asc(Lowest Score (Old)) *age_desc(Newest First) *age_asc(Oldest First) extractor.steamgriddb.static ---------------------------- TypeboolDefaulttrueDescription Include static assets when downloading from a list of assets. extractor.steamgriddb.styles ---------------------------- Type *string*listofstringsDefaultallExamples *white,black*["no_logo", "white_logo"]Description Only include assets that are in the specified styles.allcan be used to specify all styles. Valid values are: * Grids:alternate,blurred,no_logo,material,white_logo* Heroes:alternate,blurred,material* Logos:official,white,black,custom* Icons:official,customextractor.steamgriddb.untagged ------------------------------ TypeboolDefaulttrueDescription Include untagged assets when downloading from a list of assets. extractor.[szurubooru].username & .token ---------------------------------------- TypestringDescription Username and login token of your account to access private resources. To generate a token, visit/user/USERNAME/list-tokensand clickCreate Token. extractor.tumblr.avatar ----------------------- TypeboolDefaultfalseDescription Download blog avatars. extractor.tumblr.date-min & .date-max ------------------------------------- Type |Date|_ Default0andnullDescription Ignore all posts published before/after this date. extractor.tumblr.external ------------------------- TypeboolDefaultfalseDescription Follow external URLs (e.g. from "Link" posts) and try to extract images from them. extractor.tumblr.inline ----------------------- TypeboolDefaulttrueDescription Search posts for inline images and videos. extractor.tumblr.offset ----------------------- TypeintegerDefault0Description Customoffsetstarting value when paginating over blog posts. Allows skipping over posts without having to waste API calls. extractor.tumblr.original ------------------------- TypeboolDefaulttrueDescription Download full-resolutionphotoandinlineimages. For each photo with "maximum" resolution (width equal to 2048 or height equal to 3072) or each inline image, use an extra HTTP request to find the URL to its full-resolution version. extractor.tumblr.ratelimit -------------------------- TypestringDefault"abort"Description Selects how to handle exceeding the daily API rate limit. *"abort": Raise an error and stop extraction *"wait": Wait until rate limit reset extractor.tumblr.reblogs ------------------------ Type *bool*stringDefaulttrueDescription *true: Extract media from reblogged posts *false: Skip reblogged posts *"same-blog": Skip reblogged posts unless the original post is from the same blog extractor.tumblr.posts ---------------------- Type *string*listofstringsDefault"all"Example *"video,audio,link"*["video", "audio", "link"]Description A (comma-separated) list of post types to extract images, etc. from. Possible types aretext,quote,link,answer,video,audio,photo,chat. It is possible to use"all"instead of listing all types separately. extractor.tumblr.fallback-delay ------------------------------- TypefloatDefault120.0Description Number of seconds to wait between retries for fetching full-resolution images. extractor.tumblr.fallback-retries --------------------------------- TypeintegerDefault2Description Number of retries for fetching full-resolution images or-1for infinite retries. 
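To illustrate how several of the tumblr options above can be combined, here is a minimal configuration sketch (the values are arbitrary examples, not recommendations):

.. code:: json

    {
        "extractor": {
            "tumblr": {
                "posts": ["photo", "video"],
                "reblogs": "same-blog",
                "external": false,
                "ratelimit": "wait"
            }
        }
    }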
extractor.twibooru.api-key -------------------------- TypestringDefaultnullDescription Your `Twibooru API Key <https://twibooru.org/users/edit>`__, to use your account's browsing settings and filters. extractor.twibooru.filter ------------------------- TypeintegerDefault2(`Everything <https://twibooru.org/filters/2>`__ filter) Description The content filter ID to use. Setting an explicit filter ID overrides any default filters and can be used to access 18+ content without `API Key <extractor.twibooru.api-key_>`__. See `Filters <https://twibooru.org/filters>`__ for details. extractor.twitter.ads --------------------- TypeboolDefaultfalseDescription Fetch media from promoted Tweets. extractor.twitter.cards ----------------------- Type *bool*stringDefaultfalseDescription Controls how to handle `Twitter Cards <https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/abouts-cards>`__. *false: Ignore cards *true: Download image content from supported cards *"ytdl": Additionally download video content from unsupported cards using `youtube-dl`_ extractor.twitter.cards-blacklist --------------------------------- TypelistofstringsExample["summary", "youtube.com", "player:twitch.tv"]Description List of card types to ignore. Possible values are * card names * card domains *<card name>:<card domain>extractor.twitter.conversations ------------------------------- Type *bool*stringDefaultfalseDescription For input URLs pointing to a single Tweet, e.g. `https://twitter.com/i/web/status/<TweetID>`, fetch media from all Tweets and replies in this `conversation <https://help.twitter.com/en/using-twitter/twitter-conversations>`__. If this option is equal to"accessible", only download from conversation Tweets if the given initial Tweet is accessible. extractor.twitter.csrf ---------------------- TypestringDefault"cookies"Description Controls how to handle Cross Site Request Forgery (CSRF) tokens. *"auto": Always auto-generate a token. *"cookies": Use token given by thect0cookie if present. extractor.twitter.expand ------------------------ TypeboolDefaultfalseDescription For each Tweet, return *all* Tweets from that initial Tweet's conversation or thread, i.e. *expand* all Twitter threads. Going through a timeline with this option enabled is essentially the same as runninggallery-dl https://twitter.com/i/web/status/<TweetID>with enabled `conversations <extractor.twitter.conversations_>`__ option for each Tweet in said timeline. Note: This requires at least 1 additional API call per initial Tweet. extractor.twitter.include ------------------------- Type *string*listofstringsDefault"timeline"Example *"avatar,background,media"*["avatar", "background", "media"]Description A (comma-separated) list of subcategories to include when processing a user profile. Possible values are"avatar","background","timeline","tweets","media","replies","likes". It is possible to use"all"instead of listing all values separately. extractor.twitter.transform --------------------------- TypeboolDefaulttrueDescription Transform Tweet and User metadata into a simpler, uniform format. extractor.twitter.tweet-endpoint -------------------------------- TypestringDefault"auto"Description Selects the API endpoint used to retrieve single Tweets. 
*"restid":/TweetResultByRestId- accessible to guest users *"detail":/TweetDetail- more stable *"auto":"detail"when logged in,"restid"otherwise extractor.twitter.size ---------------------- TypelistofstringsDefault["orig", "4096x4096", "large", "medium", "small"]Description The image version to download. Any entries after the first one will be used for potential `fallback <extractor.*.fallback_>`_ URLs. Known available sizes are4096x4096,orig,large,medium, andsmall. extractor.twitter.logout ------------------------ TypeboolDefaultfalseDescription Logout and retry as guest when access to another user's Tweets is blocked. extractor.twitter.pinned ------------------------ TypeboolDefaultfalseDescription Fetch media from pinned Tweets. extractor.twitter.quoted ------------------------ TypeboolDefaultfalseDescription Fetch media from quoted Tweets. If this option is enabled, gallery-dl will try to fetch a quoted (original) Tweet when it sees the Tweet which quotes it. extractor.twitter.ratelimit --------------------------- TypestringDefault"wait"Description Selects how to handle exceeding the API rate limit. *"abort": Raise an error and stop extraction *"wait": Wait until rate limit reset extractor.twitter.relogin ------------------------- TypeboolDefaulttrueDescription | When receiving a "Could not authenticate you" error while logged in with `username & passeword <extractor.*.username & .password_>`__, | refresh the current login session and try to continue from where it left off. extractor.twitter.locked ------------------------ TypestringDefault"abort"Description Selects how to handle "account is temporarily locked" errors. *"abort": Raise an error and stop extraction *"wait": Wait until the account is unlocked and retry extractor.twitter.replies ------------------------- TypeboolDefaulttrueDescription Fetch media from replies to other Tweets. If this value is"self", only consider replies where reply and original Tweet are from the same user. Note: Twitter will automatically expand conversations if you use the/with_repliestimeline while logged in. For example, media from Tweets which the user replied to will also be downloaded. It is possible to exclude unwanted Tweets using `image-filter <extractor.*.image-filter_>`__. extractor.twitter.retweets -------------------------- TypeboolDefaultfalseDescription Fetch media from Retweets. If this value is"original", metadata for these files will be taken from the original Tweets, not the Retweets. extractor.twitter.timeline.strategy ----------------------------------- TypestringDefault"auto"Description Controls the strategy / tweet source used for timeline URLs (https://twitter.com/USER/timeline). *"tweets": `/tweets <https://twitter.com/USER/tweets>`__ timeline + search *"media": `/media <https://twitter.com/USER/media>`__ timeline + search *"with_replies": `/with_replies <https://twitter.com/USER/with_replies>`__ timeline + search *"auto":"tweets"or"media", depending on `retweets <extractor.twitter.retweets_>`__ and `text-tweets <extractor.twitter.text-tweets_>`__ settings extractor.twitter.text-tweets ----------------------------- TypeboolDefaultfalseDescription Also emit metadata for text-only Tweets without media content. This only has an effect with ametadata(orexec) post processor with `"event": "post" <metadata.event_>`_ and appropriate `filename <metadata.filename_>`_. extractor.twitter.twitpic ------------------------- TypeboolDefaultfalseDescription Extract `TwitPic <https://twitpic.com/>`__ embeds. 
extractor.twitter.unique
------------------------
Type: `bool` | Default: `true`
Description: Ignore previously seen Tweets.

extractor.twitter.users
-----------------------
Type: `string` | Default: `"user"`
Example: `"https://twitter.com/search?q=from:{legacy[screen_name]}"`
Description: Format string for user URLs generated from `following` and `list-members` queries, whose replacement field values come from Twitter `user` objects (`Example <https://gist.githubusercontent.com/mikf/99d2719b3845023326c7a4b6fb88dd04/raw/275b4f0541a2c7dc0a86d3998f7d253e8f10a588/github.json>`_).
Special values:
* `"user"`: `https://twitter.com/i/user/{rest_id}`
* `"timeline"`: `https://twitter.com/id:{rest_id}/timeline`
* `"tweets"`: `https://twitter.com/id:{rest_id}/tweets`
* `"media"`: `https://twitter.com/id:{rest_id}/media`
Note: To allow gallery-dl to follow custom URL formats, set the blacklist__ for `twitter` to a non-default value, e.g. an empty string `""`.

extractor.twitter.videos
------------------------
Type: `bool` or `string` | Default: `true`
Description: Control video download behavior.
* `true`: Download videos
* `"ytdl"`: Download videos using `youtube-dl`_
* `false`: Skip video Tweets

extractor.unsplash.format
-------------------------
Type: `string` | Default: `"raw"`
Description: Name of the image format to download.
Available formats are `"raw"`, `"full"`, `"regular"`, `"small"`, and `"thumb"`.

extractor.vipergirls.domain
---------------------------
Type: `string` | Default: `"vipergirls.to"`
Description: Specifies the domain used by `vipergirls` extractors.
For example `"viper.click"` if the main domain is blocked or to bypass Cloudflare.

extractor.vipergirls.like
-------------------------
Type: `bool` | Default: `false`
Description: Automatically `like` posts after downloading their images.
Note: Requires `login <extractor.*.username & .password_>`__ or `cookies <extractor.*.cookies_>`__.

extractor.vsco.videos
---------------------
Type: `bool` | Default: `true`
Description: Download video files.

extractor.wallhaven.api-key
---------------------------
Type: `string` | Default: `null`
Description: Your `Wallhaven API Key <https://wallhaven.cc/settings/account>`__, to use your account's browsing settings and default filters when searching.
See https://wallhaven.cc/help/api for more information.

extractor.wallhaven.include
---------------------------
Type: `string` or `list` of `strings` | Default: `"uploads"`
Example: `"uploads,collections"`, `["uploads", "collections"]`
Description: A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are `"uploads"`, `"collections"`.
It is possible to use `"all"` instead of listing all values separately.

extractor.wallhaven.metadata
----------------------------
Type: `bool` | Default: `false`
Description: Extract additional metadata (tags, uploader).
Note: This requires 1 additional HTTP request per post.

extractor.weasyl.api-key
------------------------
Type: `string` | Default: `null`
Description: Your `Weasyl API Key <https://www.weasyl.com/control/apikeys>`__, to use your account's browsing settings and filters.

extractor.weasyl.metadata
-------------------------
Type: `bool` | Default: `false`
Description: Fetch extra submission metadata during gallery downloads.
(`comments`, `description`, `favorites`, `folder_name`, `tags`, `views`)
Note: This requires 1 additional HTTP request per submission.

extractor.weibo.gifs
--------------------
Type: `bool` or `string` | Default: `true`
Description: Download `gif` files.
Set this to `"video"` to download GIFs as video files.

extractor.weibo.include
-----------------------
Type: `string` or `list` of `strings` | Default: `"feed"`
Description: A (comma-separated) list of subcategories to include when processing a user profile.
Possible values are"home","feed","videos","newvideo","article","album". It is possible to use"all"instead of listing all values separately. extractor.weibo.livephoto ------------------------- TypeboolDefaulttrueDescription Downloadlivephotofiles. extractor.weibo.retweets ------------------------ TypeboolDefaultfalseDescription Fetch media from retweeted posts. If this value is"original", metadata for these files will be taken from the original posts, not the retweeted posts. extractor.weibo.videos ---------------------- TypeboolDefaulttrueDescription Download video files. extractor.ytdl.enabled ---------------------- TypeboolDefaultfalseDescription Match **all** URLs, even ones without aytdl:prefix. extractor.ytdl.format --------------------- TypestringDefault youtube-dl's default, currently"bestvideo+bestaudio/best"Description Video `format selection <https://github.com/ytdl-org/youtube-dl#format-selection>`__ directly passed to youtube-dl. extractor.ytdl.generic ---------------------- TypeboolDefaulttrueDescription Controls the use of youtube-dl's generic extractor. Set this option to"force"for the same effect as youtube-dl's--force-generic-extractor. extractor.ytdl.logging ---------------------- TypeboolDefaulttrueDescription Route youtube-dl's output through gallery-dl's logging system. Otherwise youtube-dl will write its output directly to stdout/stderr. Note: Setquietandno_warningsin `extractor.ytdl.raw-options`_ totrueto suppress all output. extractor.ytdl.module --------------------- TypestringDefaultnullDescription Name of the youtube-dl Python module to import. Setting this tonullwill try to import"yt_dlp"followed by"youtube_dl"as fallback. extractor.ytdl.raw-options -------------------------- Typeobject(`name` -> `value`) Example .. code:: json { "quiet": true, "writesubtitles": true, "merge_output_format": "mkv" } Description Additional options passed directly to theYoutubeDLconstructor. All available options can be found in `youtube-dl's docstrings <https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L138-L318>`__. extractor.ytdl.cmdline-args --------------------------- Type *string*listofstringsExample *"--quiet --write-sub --merge-output-format mkv"*["--quiet", "--write-sub", "--merge-output-format", "mkv"]Description Additional options specified as youtube-dl command-line arguments. extractor.ytdl.config-file -------------------------- Type |Path|_ Example"~/.config/youtube-dl/config"Description Location of a youtube-dl configuration file to load options from. extractor.zerochan.metadata --------------------------- TypeboolDefaultfalseDescription Extract additional metadata (date, md5, tags, ...) Note: This requires 1-2 additional HTTP requests per post. extractor.zerochan.pagination ----------------------------- TypestringDefault"api"Description Controls how to paginate over tag search results. *"api": Use the `JSON API <https://www.zerochan.net/api>`__ (noextensionmetadata) *"html": Parse HTML pages (limited to 100 pages * 24 posts) extractor.[booru].tags ---------------------- TypeboolDefaultfalseDescription Categorize tags by their respective types and provide them astags<type>metadata fields. Note: This requires 1 additional HTTP request per post. extractor.[booru].notes ----------------------- TypeboolDefaultfalseDescription Extract overlay notes (position and text). Note: This requires 1 additional HTTP request per post. 
extractor.[booru].url
---------------------
Type: `string` | Default: `"file_url"`
Example: `"preview_url"`
Description: Alternate field name to retrieve download URLs from.

extractor.[manga-extractor].chapter-reverse
-------------------------------------------
Type: `bool` | Default: `false`
Description: Reverse the order of chapter URLs extracted from manga pages.
* `true`: Start with the latest chapter
* `false`: Start with the first chapter

extractor.[manga-extractor].page-reverse
----------------------------------------
Type: `bool` | Default: `false`
Description: Download manga chapter pages in reverse order.

Downloader Options
==================

downloader.*.enabled
--------------------
Type: `bool` | Default: `true`
Description: Enable/Disable this downloader module.

downloader.*.filesize-min & .filesize-max
-----------------------------------------
Type: `string` | Default: `null`
Example: `"32000"`, `"500k"`, `"2.5M"`
Description: Minimum/Maximum allowed file size in bytes.
Any file smaller/larger than this limit will not be downloaded.
Possible values are valid integer or floating-point numbers optionally followed by one of `k`, `m`, `g`, `t`, or `p`. These suffixes are case-insensitive.

downloader.*.mtime
------------------
Type: `bool` | Default: `true`
Description: Use `Last-Modified` HTTP response headers to set file modification times.

downloader.*.part
-----------------
Type: `bool` | Default: `true`
Description: Controls the use of `.part` files during file downloads.
* `true`: Write downloaded data into `.part` files and rename them upon download completion. This mode additionally supports resuming incomplete downloads.
* `false`: Do not use `.part` files and write data directly into the actual output files.

downloader.*.part-directory
---------------------------
Type: `Path` | Default: `null`
Description: Alternate location for `.part` files.
Missing directories will be created as needed.
If this value is `null`, `.part` files are going to be stored alongside the actual output files.

downloader.*.progress
---------------------
Type: `float` | Default: `3.0`
Description: Number of seconds until a download progress indicator for the current download is displayed.
Set this option to `null` to disable this indicator.

downloader.*.rate
-----------------
Type: `string` | Default: `null`
Example: `"32000"`, `"500k"`, `"2.5M"`
Description: Maximum download rate in bytes per second.
Possible values are valid integer or floating-point numbers optionally followed by one of `k`, `m`, `g`, `t`, or `p`. These suffixes are case-insensitive.

downloader.*.retries
--------------------
Type: `integer` | Default: `extractor.*.retries`_
Description: Maximum number of retries during file downloads, or `-1` for infinite retries.

downloader.*.timeout
--------------------
Type: `float` | Default: `extractor.*.timeout`_
Description: Connection timeout during file downloads.

downloader.*.verify
-------------------
Type: `bool` or `string` | Default: `extractor.*.verify`_
Description: Certificate validation during file downloads.

downloader.*.proxy
------------------
Type: `string` or `object` (`scheme` -> `proxy`) | Default: `extractor.*.proxy`_
Description: Proxy server used for file downloads.
Disable the use of a proxy for file downloads by explicitly setting this option to `null`.

downloader.http.adjust-extensions
---------------------------------
Type: `bool` | Default: `true`
Description: Check file headers of downloaded files and adjust their filename extensions if they do not match.
For example, this will change the filename extension (`{extension}`) of a file called `example.png` from `png` to `jpg` when said file contains JPEG/JFIF data.
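As a minimal sketch of how the general downloader options above are placed in a configuration file (general options at the `downloader` level, HTTP-specific ones under `downloader.http`; the limits shown are arbitrary examples):

.. code:: json

    {
        "downloader": {
            "filesize-min": "100k",
            "rate": "2.5M",
            "retries": 5,
            "http": {
                "adjust-extensions": true
            }
        }
    }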
downloader.http.consume-content
-------------------------------
Type: `bool` | Default: `false`
Description: Controls the behavior when an HTTP response is considered unsuccessful.
If the value is `true`, consume the response body. This avoids closing the connection and therefore improves connection reuse.
If the value is `false`, immediately close the connection without reading the response. This can be useful if the server is known to send large bodies for error responses.

downloader.http.chunk-size
--------------------------
Type: `integer` or `string` | Default: `32768`
Example: `"50k"`, `"0.8M"`
Description: Number of bytes per downloaded chunk.
Possible values are integer numbers optionally followed by one of `k`, `m`, `g`, `t`, or `p`. These suffixes are case-insensitive.

downloader.http.headers
-----------------------
Type: `object` (`name` -> `value`)
Example: `{"Accept": "image/webp,*/*", "Referer": "https://example.org/"}`
Description: Additional HTTP headers to send when downloading files.

downloader.http.retry-codes
---------------------------
Type: `list` of `integers` | Default: `extractor.*.retry-codes`_
Description: Additional `HTTP response status codes <https://developer.mozilla.org/en-US/docs/Web/HTTP/Status>`__ to retry a download on.
Codes `200`, `206`, and `416` (when resuming a `partial <downloader.*.part_>`__ download) will never be retried and always count as success, regardless of this option.
`5xx` codes (server error responses) will always be retried, regardless of this option.

downloader.http.validate
------------------------
Type: `bool` | Default: `true`
Description: Check for invalid responses.
Fail a download when a file does not pass instead of downloading a potentially broken file.

downloader.ytdl.format
----------------------
Type: `string` | Default: youtube-dl's default, currently `"bestvideo+bestaudio/best"`
Description: Video `format selection <https://github.com/ytdl-org/youtube-dl#format-selection>`__ directly passed to youtube-dl.

downloader.ytdl.forward-cookies
-------------------------------
Type: `bool` | Default: `false`
Description: Forward cookies to youtube-dl.

downloader.ytdl.logging
-----------------------
Type: `bool` | Default: `true`
Description: Route youtube-dl's output through gallery-dl's logging system.
Otherwise youtube-dl will write its output directly to stdout/stderr.
Note: Set `quiet` and `no_warnings` in `downloader.ytdl.raw-options`_ to `true` to suppress all output.

downloader.ytdl.module
----------------------
Type: `string` | Default: `null`
Description: Name of the youtube-dl Python module to import.
Setting this to `null` will first try to import `"yt_dlp"` and use `"youtube_dl"` as fallback.

downloader.ytdl.outtmpl
-----------------------
Type: `string` | Default: `null`
Description: The `Output Template <https://github.com/ytdl-org/youtube-dl#output-template>`__ used to generate filenames for files downloaded with youtube-dl.
Special values:
* `null`: generate filenames with `extractor.*.filename`_
* `"default"`: use youtube-dl's default, currently `"%(title)s-%(id)s.%(ext)s"`
Note: An output template other than `null` might cause unexpected results in combination with other options (e.g. `"skip": "enumerate"`).

downloader.ytdl.raw-options
---------------------------
Type: `object` (`name` -> `value`)
Example:

.. code:: json

    {
        "quiet": true,
        "writesubtitles": true,
        "merge_output_format": "mkv"
    }

Description: Additional options passed directly to the `YoutubeDL` constructor.
All available options can be found in `youtube-dl's docstrings <https://github.com/ytdl-org/youtube-dl/blob/master/youtube_dl/YoutubeDL.py#L138-L318>`__.
downloader.ytdl.cmdline-args ---------------------------- Type *string*listofstringsExample *"--quiet --write-sub --merge-output-format mkv"*["--quiet", "--write-sub", "--merge-output-format", "mkv"]Description Additional options specified as youtube-dl command-line arguments. downloader.ytdl.config-file --------------------------- Type |Path|_ Example"~/.config/youtube-dl/config"Description Location of a youtube-dl configuration file to load options from. Output Options ============== output.mode ----------- Type *string*object(`key` -> `format string`) Default"auto"Description Controls the output string format and status indicators. *"null": No output *"pipe": Suitable for piping to other processes or files *"terminal": Suitable for the standard Windows console *"color": Suitable for terminals that understand ANSI escape codes and colors *"auto":"terminal"on Windows with `output.ansi`_ disabled,"color"otherwise. | It is possible to use custom output format strings by setting this option to anobjectand specifying |start,success,skip,progress, andprogress-total. For example, the following will replicate the same output as |mode: color|: .. code:: json { "start" : "{}", "success": "\r\u001b[1;32m{}\u001b[0m\n", "skip" : "\u001b[2m{}\u001b[0m\n", "progress" : "\r{0:>7}B {1:>7}B/s ", "progress-total": "\r{3:>3}% {0:>7}B {1:>7}B/s " }start,success, andskipare used to output the current filename, where{}or{0}is replaced with said filename. If a given format string contains printable characters other than that, their number needs to be specified as[<number>, <format string>]to get the correct results for `output.shorten`_. For example .. code:: json "start" : [12, "Downloading {}"] |progressandprogress-totalare used when displaying the `download progress indicator <downloader.*.progress_>`__, |progresswhen the total number of bytes to download is unknown,progress-totalotherwise. For these format strings *{0}is number of bytes downloaded *{1}is number of downloaded bytes per second *{2}is total number of bytes *{3}is percent of bytes downloaded to total bytes output.stdout & .stdin & .stderr -------------------------------- Type *string*objectExample .. code:: json "utf-8" .. code:: json { "encoding": "utf-8", "errors": "replace", "line_buffering": true } Description `Reconfigure <https://docs.python.org/3/library/io.html#io.TextIOWrapper.reconfigure>`__ a `standard stream <https://docs.python.org/3/library/sys.html#sys.stdin>`__. Possible options are *encoding*errors*newline*line_buffering*write_throughWhen this option is specified as a simplestring, it is interpreted as{"encoding": "<string-value>", "errors": "replace"}Note:errorsalways defaults to"replace"output.shorten -------------- TypeboolDefaulttrueDescription Controls whether the output strings should be shortened to fit on one console line. Set this option to"eaw"to also work with east-asian characters with a display width greater than 1. output.colors ------------- Typeobject(`key` -> `ANSI color`) Default .. code:: json { "success": "1;32", "skip" : "2", "debug" : "0;37", "info" : "1;37", "warning": "1;33", "error" : "1;31" } Description Controls the `ANSI colors <https://gist.github.com/fnky/458719343aabd01cfb17a3a4f7296797#colors--graphics-mode>`__ used for various outputs. 
Output for |mode: color|__ *success: successfully downloaded files *skip: skipped files Logging Messages: *debug: debug logging messages *info: info logging messages *warning: warning logging messages *error: error logging messages output.ansi ----------- TypeboolDefaulttrueDescription | On Windows, enable ANSI escape sequences and colored output | by setting theENABLE_VIRTUAL_TERMINAL_PROCESSINGflag for stdout and stderr. output.skip ----------- TypeboolDefaulttrueDescription Show skipped file downloads. output.fallback --------------- TypeboolDefaulttrueDescription Include fallback URLs in the output of-g/--get-urls. output.private -------------- TypeboolDefaultfalseDescription Include private fields, i.e. fields whose name starts with an underscore, in the output of-K/--list-keywordsand-j/--dump-json. output.progress --------------- Type *bool*stringDefaulttrueDescription Controls the progress indicator when *gallery-dl* is run with multiple URLs as arguments. *true: Show the default progress indicator ("[{current}/{total}] {url}") *false: Do not show any progress indicator * Anystring: Show the progress indicator using this as a custom `format string`_. Possible replacement keys arecurrent,totalandurl. output.log ---------- Type *string* |Logging Configuration|_ Default"[{name}][{levelname}] {message}"Description Configuration for logging output to stderr. If this is a simplestring, it specifies the format string for logging messages. output.logfile -------------- Type * |Path|_ * |Logging Configuration|_ Description File to write logging output to. output.unsupportedfile ---------------------- Type * |Path|_ * |Logging Configuration|_ Description File to write external URLs unsupported by *gallery-dl* to. The default format string here is"{message}". output.errorfile ---------------- Type * |Path|_ * |Logging Configuration|_ Description File to write input URLs which returned an error to. The default format string here is also"{message}". When combined with-I/--input-file-commentor-x/--input-file-delete, this option will cause *all* input URLs from these files to be commented/deleted after processing them and not just successful ones. output.num-to-str ----------------- TypeboolDefaultfalseDescription Convert numeric values (integerorfloat) tostringbefore outputting them as JSON. Postprocessor Options ===================== This section lists all options available inside `Postprocessor Configuration`_ objects. Each option is titled as<name>.<option>, meaning a post processor of type<name>will look for an<option>field inside its "body". For example anexecpost processor will recognize an `async <exec.async_>`__, `command <exec.command_>`__, and `event <exec.event_>`__ field: .. code:: json { "name" : "exec", "async" : false, "command": "...", "event" : "after" } classify.mapping ---------------- Typeobject(`directory` -> `extensions`) Default .. code:: json { "Pictures": ["jpg", "jpeg", "png", "gif", "bmp", "svg", "webp"], "Video" : ["flv", "ogv", "avi", "mp4", "mpg", "mpeg", "3gp", "mkv", "webm", "vob", "wmv"], "Music" : ["mp3", "aac", "flac", "ogg", "wma", "m4a", "wav"], "Archives": ["zip", "rar", "7z", "tar", "gz", "bz2"] } Description A mapping from directory names to filename extensions that should be stored in them. Files with an extension not listed will be ignored and stored in their default location. compare.action -------------- TypestringDefault"replace"Description The action to take when files do **not** compare as equal. 
*"replace": Replace/Overwrite the old version with the new one *"enumerate": Add an enumeration index to the filename of the new version like `skip = "enumerate" <extractor.*.skip_>`__ compare.equal ------------- TypestringDefault"null"Description The action to take when files do compare as equal. *"abort:N": Stop the current extractor run afterNconsecutive files compared as equal. *"terminate:N": Stop the current extractor run, including parent extractors, afterNconsecutive files compared as equal. *"exit:N": Exit the program afterNconsecutive files compared as equal. compare.shallow --------------- TypeboolDefaultfalseDescription Only compare file sizes. Do not read and compare their content. exec.archive ------------ Type |Path|_ Description File to store IDs of executed commands in, similar to `extractor.*.archive`_.archive-format,archive-prefix, andarchive-pragmaoptions, akin to `extractor.*.archive-format`_, `extractor.*.archive-prefix`_, and `extractor.*.archive-pragma`_, are supported as well. exec.async ---------- TypeboolDefaultfalseDescription Controls whether to wait for a subprocess to finish or to let it run asynchronously. exec.command ------------ Type *string*listofstringsExample *"convert {} {}.png && rm {}"*["echo", "{user[account]}", "{id}"]Description The command to run. * If this is astring, it will be executed using the system's shell, e.g./bin/sh. Any{}will be replaced with the full path of a file or target directory, depending on `exec.event`_ * If this is alist, the first element specifies the program name and any further elements its arguments. Each element of this list is treated as a `format string`_ using the files' metadata as well as{_path},{_directory}, and{_filename}. exec.event ---------- Type *string*listofstringsDefault"after"Description The event(s) for which `exec.command`_ is run. See `metadata.event`_ for a list of available events. metadata.mode ------------- TypestringDefault"json"Description Selects how to process metadata. *"json": write metadata using |json.dump()|_ *"jsonl": write metadata in `JSON Lines <https://jsonlines.org/>`__ format *"tags": writetagsseparated by newlines *"custom": write the result of applying `metadata.content-format`_ to a file's metadata dictionary *"modify": add or modify metadata entries *"delete": remove metadata entries metadata.filename ----------------- TypestringDefaultnullExample"{id}.data.json"Description A `format string`_ to build the filenames for metadata files with. (see `extractor.filename <extractor.*.filename_>`__) Using"-"as filename will write all output tostdout. If this option is set, `metadata.extension`_ and `metadata.extension-format`_ will be ignored. metadata.directory ------------------ TypestringDefault"."Example"metadata"Description Directory where metadata files are stored in relative to the current target location for file downloads. metadata.extension ------------------ TypestringDefault"json"or"txt"Description Filename extension for metadata files that will be appended to the original file names. metadata.extension-format ------------------------- TypestringExample *"{extension}.json"*"json"Description Custom format string to build filename extensions for metadata files with, which will replace the original filename extensions. Note: `metadata.extension`_ is ignored if this option is set. metadata.event -------------- Type *string*listofstringsDefault"file"Example *"prepare,file,after"*["prepare-after", "skip"]Description The event(s) for which metadata gets written to a file. 
Available events are:
* `init`: After post processor initialization and before the first file download
* `finalize`: On extractor shutdown, e.g. after all files were downloaded
* `finalize-success`: On extractor shutdown when no error occurred
* `finalize-error`: On extractor shutdown when at least one error occurred
* `prepare`: Before a file download
* `prepare-after`: Before a file download, but after building and checking file paths
* `file`: When completing a file download, but before it gets moved to its target location
* `after`: After a file got moved to its target location
* `skip`: When skipping a file download
* `post`: When starting to download all files of a `post`, e.g. a Tweet on Twitter or a post on Patreon
* `post-after`: After downloading all files of a `post`

metadata.fields
---------------
Type: `list` of `strings` or `object` (`field name` -> `format string`_)
Example:

.. code:: json

    ["blocked", "watching", "status[creator][name]"]

.. code:: json

    {
        "blocked"         : "***",
        "watching"        : "\fE 'yes' if watching else 'no'",
        "status[username]": "{status[creator][name]!l}"
    }

Description:
* `"mode": "delete"`: A list of metadata field names to remove.
* `"mode": "modify"`: An object with metadata field names mapping to a `format string`_ whose result is assigned to said field name.

metadata.content-format
-----------------------
Type: `string` or `list` of `strings`
Example: `"tags:\n\n{tags:J\n}\n"`, `["tags:", "", "{tags:J\n}"]`
Description: Custom format string to build the content of metadata files with.
Note: Only applies for `"mode": "custom"`.

metadata.ascii
--------------
Type: `bool` | Default: `false`
Description: Escape all non-ASCII characters.
See the `ensure_ascii` argument of `json.dump()` for further details.
Note: Only applies for `"mode": "json"` and `"jsonl"`.

metadata.indent
---------------
Type: `integer` or `string` | Default: `4`
Description: Indentation level of JSON output.
See the `indent` argument of `json.dump()` for further details.
Note: Only applies for `"mode": "json"`.

metadata.separators
-------------------
Type: `list` with two `string` elements | Default: `[", ", ": "]`
Description: `<item separator>` - `<key separator>` pair to separate JSON keys and values with.
See the `separators` argument of `json.dump()` for further details.
Note: Only applies for `"mode": "json"` and `"jsonl"`.

metadata.sort
-------------
Type: `bool` | Default: `false`
Description: Sort output by key.
See the `sort_keys` argument of `json.dump()` for further details.
Note: Only applies for `"mode": "json"` and `"jsonl"`.

metadata.open
-------------
Type: `string` | Default: `"w"`
Description: The `mode` in which metadata files get opened.
For example, use `"a"` to append to a file's content or `"w"` to truncate it.
See the `mode` argument of `open()` for further details.

metadata.encoding
-----------------
Type: `string` | Default: `"utf-8"`
Description: Name of the encoding used to encode a file's content.
See the `encoding` argument of `open()` for further details.

metadata.private
----------------
Type: `bool` | Default: `false`
Description: Include private fields, i.e. fields whose name starts with an underscore.

metadata.skip
-------------
Type: `bool` | Default: `false`
Description: Do not overwrite already existing files.

metadata.archive
----------------
Type: `Path`
Description: File to store IDs of generated metadata files in, similar to `extractor.*.archive`_.
`archive-format`, `archive-prefix`, and `archive-pragma` options, akin to `extractor.*.archive-format`_, `extractor.*.archive-prefix`_, and `extractor.*.archive-pragma`_, are supported as well.

metadata.mtime
--------------
Type: `bool` | Default: `false`
Description: Set modification times of generated metadata files according to the accompanying downloaded file.
Enabling this option will only have an effect *if* there is actual `mtime` metadata available, that is
* after a file download (`"event": "file"` (default), `"event": "after"`)
* when running *after* an `mtime` post processor for the same `event <metadata.event_>`__
For example, a `metadata` post processor for `"event": "post"` will *not* be able to set its file's modification time unless an `mtime` post processor with `"event": "post"` runs *before* it.
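As a sketch of how the metadata options above fit into a single post processor definition (placed inside an extractor's `postprocessors` list; the filename pattern and event are illustrative choices, not defaults):

.. code:: json

    {
        "name"    : "metadata",
        "mode"    : "json",
        "filename": "{id}.json",
        "event"   : "post",
        "private" : false
    }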
mtime.event
-----------
Type: `string` or `list` of `strings` | Default: `"file"`
Description: The event(s) for which `mtime.key`_ or `mtime.value`_ get evaluated.
See `metadata.event`_ for a list of available events.

mtime.key
---------
Type: `string` | Default: `"date"`
Description: Name of the metadata field whose value should be used.
This value must be either a UNIX timestamp or a `datetime` object.
Note: This option gets ignored if `mtime.value`_ is set.

mtime.value
-----------
Type: `string` | Default: `null`
Example: `"{status[date]}"`, `"{content[0:6]:R22/2022/D%Y%m%d/}"`
Description: A `format string`_ whose value should be used.
The resulting value must be either a UNIX timestamp or a `datetime` object.

python.archive
--------------
Type: `Path`
Description: File to store IDs of called Python functions in, similar to `extractor.*.archive`_.
`archive-format`, `archive-prefix`, and `archive-pragma` options, akin to `extractor.*.archive-format`_, `extractor.*.archive-prefix`_, and `extractor.*.archive-pragma`_, are supported as well.

python.event
------------
Type: `string` or `list` of `strings` | Default: `"file"`
Description: The event(s) for which `python.function`_ gets called.
See `metadata.event`_ for a list of available events.

python.function
---------------
Type: `string`
Example: `"my_module:generate_text"`, `"~/.local/share/gdl-utils.py:resize"`
Description: The Python function to call.
This function is specified as `<module>:<function name>` and gets called with the current metadata dict as argument.
`module` is either an importable Python module name or the `Path` to a `.py` file.

ugoira.extension
----------------
Type: `string` | Default: `"webm"`
Description: Filename extension for the resulting video files.

ugoira.ffmpeg-args
------------------
Type: `list` of `strings` | Default: `null`
Example: `["-c:v", "libvpx-vp9", "-an", "-b:v", "2M"]`
Description: Additional FFmpeg command-line arguments.

ugoira.ffmpeg-demuxer
---------------------
Type: `string` | Default: `auto`
Description: FFmpeg demuxer to read and process input files with. Possible values are
* "`concat <https://ffmpeg.org/ffmpeg-formats.html#concat-1>`_" (inaccurate frame timecodes for non-uniform frame delays)
* "`image2 <https://ffmpeg.org/ffmpeg-formats.html#image2-1>`_" (accurate timecodes, requires nanosecond file timestamps, i.e. no Windows or macOS)
* "mkvmerge" (accurate timecodes, only WebM or MKV, requires `mkvmerge <ugoira.mkvmerge-location_>`__)
`"auto"` will select `mkvmerge` if available and fall back to `concat` otherwise.

ugoira.ffmpeg-location
----------------------
Type: `Path` | Default: `"ffmpeg"`
Description: Location of the `ffmpeg` (or `avconv`) executable to use.

ugoira.mkvmerge-location
------------------------
Type: `Path` | Default: `"mkvmerge"`
Description: Location of the `mkvmerge` executable for use with the `mkvmerge demuxer <ugoira.ffmpeg-demuxer_>`__.

ugoira.ffmpeg-output
--------------------
Type: `bool` or `string` | Default: `"error"`
Description: Controls FFmpeg output.
* `true`: Enable FFmpeg output
* `false`: Disable all FFmpeg output
* any `string`: Pass `-hide_banner` and `-loglevel` with this value as argument to FFmpeg

ugoira.ffmpeg-twopass
---------------------
Type: `bool` | Default: `false`
Description: Enable Two-Pass encoding.
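For orientation, here is a minimal sketch of an `ugoira` post processor entry combining the options above (the codec arguments are only an example, not a recommended encoding setup):

.. code:: json

    {
        "name"          : "ugoira",
        "extension"     : "webm",
        "ffmpeg-args"   : ["-c:v", "libvpx-vp9", "-an", "-b:v", "2M"],
        "ffmpeg-demuxer": "auto",
        "ffmpeg-twopass": true
    }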
ugoira.framerate
----------------
Type: `string` | Default: `"auto"`
Description: Controls the frame rate argument (`-r`) for FFmpeg.
* `"auto"`: Automatically assign a fitting frame rate based on delays between frames.
* `"uniform"`: Like `auto`, but assign an explicit frame rate only to Ugoira with uniform frame delays.
* any other `string`: Use this value as argument for `-r`.
* `null` or an empty `string`: Don't set an explicit frame rate.

ugoira.keep-files
-----------------
Type: `bool` | Default: `false`
Description: Keep ZIP archives after conversion.

ugoira.libx264-prevent-odd
--------------------------
Type: `bool` | Default: `true`
Description: Prevent "width/height not divisible by 2" errors when using `libx264` or `libx265` encoders by applying a simple cropping filter. See this `Stack Overflow thread <https://stackoverflow.com/questions/20847674>`__ for more information.
This option, when `libx264/5` is used, automatically adds `["-vf", "crop=iw-mod(iw\,2):ih-mod(ih\,2)"]` to the list of FFmpeg command-line arguments to reduce an odd width/height by 1 pixel and make them even.

ugoira.mtime
------------
Type: `bool` | Default: `true`
Description: Set modification times of generated ugoira animations.

ugoira.repeat-last-frame
------------------------
Type: `bool` | Default: `true`
Description: Allow repeating the last frame when necessary to prevent it from only being displayed for a very short amount of time.

zip.compression
---------------
Type: `string` | Default: `"store"`
Description: Compression method to use when writing the archive.
Possible values are `"store"`, `"zip"`, `"bzip2"`, `"lzma"`.

zip.extension
-------------
Type: `string` | Default: `"zip"`
Description: Filename extension for the created ZIP archive.

zip.files
---------
Type: `list` of `Path`
Example: `["info.json"]`
Description: List of extra files to be added to a ZIP archive.
Note: Relative paths are relative to the current `download directory <extractor.*.directory_>`__.

zip.keep-files
--------------
Type: `bool` | Default: `false`
Description: Keep the actual files after writing them to a ZIP archive.

zip.mode
--------
Type: `string` | Default: `"default"`
Description:
* `"default"`: Write the central directory file header once after everything is done or an exception is raised.
* `"safe"`: Update the central directory file header each time a file is stored in a ZIP archive. This greatly reduces the chance a ZIP archive gets corrupted in case the Python interpreter gets shut down unexpectedly (power outage, SIGKILL), but is also a lot slower.

Miscellaneous Options
=====================

extractor.modules
-----------------
Type: `list` of `strings` | Default: The `modules` list in `extractor/__init__.py <../gallery_dl/extractor/__init__.py#L12>`__
Example: `["reddit", "danbooru", "mangadex"]`
Description: List of internal modules to load when searching for a suitable extractor class. Useful to reduce startup time and memory usage.

extractor.module-sources
------------------------
Type: `list` of `Path` instances
Example: `["~/.config/gallery-dl/modules", null]`
Description: List of directories to load external extractor modules from.
Any file in a specified directory with a `.py` filename extension gets `imported <https://docs.python.org/3/reference/import.html>`__ and searched for potential extractors, i.e. classes with a `pattern` attribute.
Note: `null` references internal extractors defined in `extractor/__init__.py <../gallery_dl/extractor/__init__.py#L12>`__ or by `extractor.modules`_.
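A minimal sketch showing where the two module options above live in a configuration file (the module names are arbitrary examples):

.. code:: json

    {
        "extractor": {
            "modules": ["twitter", "reddit", "danbooru"],
            "module-sources": ["~/.config/gallery-dl/modules", null]
        }
    }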
globals
-------
Type
    * |Path|_
    * `string`
Example
    * `"~/.local/share/gdl-globals.py"`
    * `"gdl-globals"`
Description
    Path to or name of an `importable <https://docs.python.org/3/reference/import.html>`__ Python module,
    whose namespace, in addition to the `GLOBALS` dict in `util.py <../gallery_dl/util.py>`__,
    gets used as |globals parameter|__ for compiled Python expressions.

cache.file
----------
Type
    |Path|_
Default
    * (`%APPDATA%` or `"~"`) + `"/gallery-dl/cache.sqlite3"` on Windows
    * (`$XDG_CACHE_HOME` or `"~/.cache"`) + `"/gallery-dl/cache.sqlite3"` on all other platforms
Description
    Path of the SQLite3 database used to cache login sessions,
    cookies and API tokens across `gallery-dl` invocations.

    Set this option to `null` or an invalid path to disable this cache.

format-separator
----------------
Type
    `string`
Default
    `"/"`
Description
    Character(s) used as argument separator in format string
    `format specifiers <formatting.md#format-specifiers>`__.

    For example, setting this option to `"#"` would allow a replacement operation
    to be `Rold#new#` instead of the default `Rold/new/`.

signals-ignore
--------------
Type
    `list` of `strings`
Example
    `["SIGTTOU", "SIGTTIN", "SIGTERM"]`
Description
    The list of signal names to ignore, i.e. set
    `SIG_IGN <https://docs.python.org/3/library/signal.html#signal.SIG_IGN>`_
    as signal handler for.

subconfigs
----------
Type
    `list` of |Path|_
Example
    `["~/cfg-twitter.json", "~/cfg-reddit.json"]`
Description
    Additional configuration files to load.

warnings
--------
Type
    `string`
Default
    `"default"`
Description
    The `Warnings Filter action <https://docs.python.org/3/library/warnings.html#the-warnings-filter>`__
    used for (urllib3) warnings.

API Tokens & IDs
================

All configuration keys listed in this section have fully functional default values
embedded into *gallery-dl* itself, but if things unexpectedly break or you want to use
your own personal client credentials, you can follow these instructions to get an
alternative set of API tokens and IDs.

extractor.deviantart.client-id & .client-secret
-----------------------------------------------
Type
    `string`
How To
    * login and visit DeviantArt's `Applications & Keys <https://www.deviantart.com/developers/apps>`__ section
    * click "Register Application"
    * scroll to "OAuth2 Redirect URI Whitelist (Required)" and enter
      "https://mikf.github.io/gallery-dl/oauth-redirect.html"
    * scroll to the bottom and agree to the API License Agreement, Submission Policy, and Terms of Service
    * click "Save"
    * copy `client_id` and `client_secret` of your new application and put them
      in your configuration file as `"client-id"` and `"client-secret"`
    * clear your `cache <cache.file_>`__ to delete any remaining `access-token` entries
      (`gallery-dl --clear-cache deviantart`)
    * get a new `refresh-token <extractor.deviantart.refresh-token_>`__ for the new `client-id`
      (`gallery-dl oauth:deviantart`)

extractor.flickr.api-key & .api-secret
--------------------------------------
Type
    `string`
How To
    * login and `Create an App <https://www.flickr.com/services/apps/create/apply/>`__
      in Flickr's `App Garden <https://www.flickr.com/services/>`__
    * click "APPLY FOR A NON-COMMERCIAL KEY"
    * fill out the form with a random name and description and click "SUBMIT"
    * copy `Key` and `Secret` and put them in your configuration file
      as `"api-key"` and `"api-secret"`

extractor.reddit.client-id & .user-agent
----------------------------------------
Type
    `string`
How To
    * login and visit the `apps <https://www.reddit.com/prefs/apps/>`__ section
      of your account's preferences
    * click the "are you a developer? create an app..." button
    * fill out the form:

      * choose a name
      * select "installed app"
      * set `http://localhost:6414/` as "redirect uri"
      * solve the "I'm not a robot" reCAPTCHA if needed
      * click "create app"

    * copy the client id (third line, under your application's name and "installed app")
      and put it in your configuration file as `"client-id"`
    * use "`Python:<application name>:v1.0 (by /u/<username>)`" as `user-agent`
      and replace `<application name>` and `<username>` accordingly
      (see Reddit's `API access rules <https://github.com/reddit/reddit/wiki/API>`__)
    * clear your `cache <cache.file_>`__ to delete any remaining `access-token` entries
      (`gallery-dl --clear-cache reddit`)
    * get a `refresh-token <extractor.reddit.refresh-token_>`__ for the new `client-id`
      (`gallery-dl oauth:reddit`)

extractor.smugmug.api-key & .api-secret
---------------------------------------
Type
    `string`
How To
    * login and `Apply for an API Key <https://api.smugmug.com/api/developer/apply>`__
    * use a random name and description,
      set "Type" to "Application", "Platform" to "All", and "Use" to "Non-Commercial"
    * fill out the two checkboxes at the bottom and click "Apply"
    * copy `API Key` and `API Secret` and put them in your configuration file
      as `"api-key"` and `"api-secret"`

extractor.tumblr.api-key & .api-secret
--------------------------------------
Type
    `string`
How To
    * login and visit Tumblr's `Applications <https://www.tumblr.com/oauth/apps>`__ section
    * click "Register application"
    * fill out the form: use a random name and description,
      set `https://example.org/` as "Application Website" and "Default callback URL"
    * solve Google's "I'm not a robot" challenge and click "Register"
    * click "Show secret key" (below "OAuth Consumer Key")
    * copy your `OAuth Consumer Key` and `Secret Key` and put them in your
      configuration file as `"api-key"` and `"api-secret"`

Custom Types
============

Date
----
Type
    * `string`
    * `integer`
Example
    * `"2019-01-01T00:00:00"`
    * `"2019"` with `"%Y"` as `date-format`_
    * `1546297200`
Description
    A |Date|_ value represents a specific point in time.

    * If given as `string`, it is parsed according to `date-format`_.
    * If given as `integer`, it is interpreted as UTC timestamp.

Duration
--------
Type
    * `float`
    * `list` with 2 `floats`
    * `string`
Example
    * `2.85`
    * `[1.5, 3.0]`
    * `"2.85"`, `"1.5-3.0"`
Description
    A |Duration|_ represents a span of time in seconds.

    * If given as a single `float`, it will be used as that exact value.
    * If given as a `list` with 2 floating-point numbers `a` & `b`,
      it will be randomly chosen with uniform distribution such that `a <= N <= b`
      (see `random.uniform() <https://docs.python.org/3/library/random.html#random.uniform>`_).
    * If given as a `string`, it can either represent a single `float` value (`"2.85"`)
      or a range (`"1.5-3.0"`).

Path
----
Type
    * `string`
    * `list` of `strings`
Example
    * `"file.ext"`
    * `"~/path/to/file.ext"`
    * `"$HOME/path/to/file.ext"`
    * `["$HOME", "path", "to", "file.ext"]`
Description
    A |Path|_ is a `string` representing the location of a file or directory.

    Simple `tilde expansion <https://docs.python.org/3/library/os.path.html#os.path.expanduser>`__
    and `environment variable expansion <https://docs.python.org/3/library/os.path.html#os.path.expandvars>`__
    is supported.

    In Windows environments, backslashes (`"\"`) can, in addition to forward slashes (`"/"`),
    be used as path separators. Because backslashes are JSON's escape character,
    they themselves have to be escaped. The path `C:\path\to\file.ext` has therefore
    to be written as `"C:\\path\\to\\file.ext"` if you want to use backslashes.

Logging Configuration
---------------------
Type
    `object`
Example
``` json
{
    "format"     : "{asctime} {name}: {message}",
    "format-date": "%H:%M:%S",
    "path"       : "~/log.txt",
    "encoding"   : "ascii"
}
```

``` json
{
    "level" : "debug",
    "format": {
        "debug"  : "debug: {message}",
        "info"   : "[{name}] {message}",
        "warning": "Warning: {message}",
        "error"  : "ERROR: {message}"
    }
}
```

Description
    Extended logging output configuration.

    * format

      * General format string for logging messages
        or an `object` with format strings for each loglevel.

        In addition to the default
        `LogRecord attributes <https://docs.python.org/3/library/logging.html#logrecord-attributes>`__,
        it is also possible to access the current
        `extractor <https://github.com/mikf/gallery-dl/blob/v1.24.2/gallery_dl/extractor/common.py#L26>`__,
        `job <https://github.com/mikf/gallery-dl/blob/v1.24.2/gallery_dl/job.py#L21>`__,
        `path <https://github.com/mikf/gallery-dl/blob/v1.24.2/gallery_dl/path.py#L27>`__,
        and `keywords` objects and their attributes, for example
        `"{extractor.url}"`, `"{path.filename}"`, `"{keywords.title}"`
      * Default: `"[{name}][{levelname}] {message}"`

    * format-date

      * Format string for `{asctime}` fields in logging messages
        (see `strftime() directives <https://docs.python.org/3/library/time.html#time.strftime>`__)
      * Default: `"%Y-%m-%d %H:%M:%S"`

    * level

      * Minimum logging message level
        (one of `"debug"`, `"info"`, `"warning"`, `"error"`, `"exception"`)
      * Default: `"info"`

    * path

      * |Path|_ to the output file

    * mode

      * Mode in which the file is opened;
        use `"w"` to truncate or `"a"` to append (see |open()|_)
      * Default: `"w"`

    * encoding

      * File encoding
      * Default: `"utf-8"`

    Note: path, mode, and encoding are only applied when configuring
    logging output to a file.

Postprocessor Configuration
---------------------------
Type
    `object`
Example

``` json
{
    "name": "mtime"
}
```

``` json
{
    "name"       : "zip",
    "compression": "store",
    "extension"  : "cbz",
    "filter"     : "extension not in ('zip', 'rar')",
    "whitelist"  : ["mangadex", "exhentai", "nhentai"]
}
```

Description
    An `object` containing a `"name"` attribute specifying the post-processor type,
    as well as any of its `options <Postprocessor Options_>`__.

    It is possible to set a `"filter"` expression similar to
    `image-filter <extractor.*.image-filter_>`_ to only run a post-processor conditionally.

    It is also possible to set a `"whitelist"` or `"blacklist"` to only enable or disable
    a post-processor for the specified extractor categories.

    The available post-processor types are

    `classify`
        Categorize files by filename extension
    `compare`
        Compare versions of the same file and replace/enumerate them on mismatch
        (requires `downloader.*.part`_ = `true` and `extractor.*.skip`_ = `false`)
    `exec`
        Execute external commands
    `metadata`
        Write metadata to separate files
    `mtime`
        Set file modification time according to its metadata
    `python`
        Call Python functions
    `ugoira`
        Convert Pixiv Ugoira to WebM using `FFmpeg <https://www.ffmpeg.org/>`__
    `zip`
        Store files in a ZIP archive
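To show where such post-processor objects live, here is a hedged sketch placing two of them in a category-level `postprocessors` list; it assumes the `extractor.*.postprocessors` option described earlier in this document, and the `pixiv` category and option values are only examples:

``` json
{
    "extractor": {
        "pixiv": {
            "postprocessors": [
                {
                    "name"     : "ugoira",
                    "extension": "webm",
                    "whitelist": ["pixiv"]
                },
                {
                    "name"  : "zip",
                    "filter": "extension not in ('zip', 'rar')"
                }
            ]
        }
    }
}
```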

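Finally, a brief sketch of how the custom `Duration` and `Path` types above look in practice; it assumes `sleep` is a Duration-typed extractor option and uses `extractor.*.archive`_ as a |Path|_-typed one, with the doubled backslashes that JSON requires for a Windows path:

``` json
{
    "extractor": {
        "sleep"  : [1.5, 3.0],
        "archive": "C:\\path\\to\\archive.sqlite3"
    }
}
```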