Nix cache

Notes about caching Nix builds

You can use Cachix (I haven't tried it yet, but people are happy with it), or spend time, as I do, crawling through notes like these.

I have to put my notes back in order because of this tweet: https://twitter.com/noteed/status/1285875859468029958

It turns out I did use a cache on Digital Ocean Spaces in the past, but I didn't keep many notes.

Links

TODO

  • Clarify wording: "cache" and "substituter". The documentation says "Deprecated: binary-caches is now an alias to substituters." Also, they seem quite similar to "store".
  • A NAR, or Nix archive, is a set of store paths exported out of the Nix store as a standalone file. A NAR can then be imported into the store. (This reminds me how a Docker image can be docker saved and docker loaded.)
  • Can I configure an SSH-accessible machine as a cache (instead of specifying it with --substituters)? Yes, see e.g. http://softwaresimply.blogspot.com/2018/07/setting-up-private-nix-cache.html, and the sketch after this list.
  • How can I list store paths that are not yet uploaded to the cache? How can I make the example upload-to-cache.sh script better (e.g. when there is no network)?
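
For the SSH case, a rough sketch (user@cache.example.com is made up; this assumes the remote machine has Nix installed and SSH access is already set up):

nix copy --to ssh://user@cache.example.com $(nix-build ... --no-out-link)
nix-build ... --option substituters ssh://user@cache.example.com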

Quick notes

A cache is a location containing optionally signed store paths that can be used to download (the documentation says "fetch") those store paths instead of actually building them when using e.g. nix-build. Caches can be local directories, directories served through HTTP(S), or S3 buckets.

An HTTP cache URL can look like cache.nixos.org, my-cache.cachix.org, cache.ams3.digitaloceanspaces.com, ... (i.e. an S3 bucket can naturally be accessed through HTTP too).

When configuring a cache on a client, in addition to the URL, a matching public key can be set to verify the downloaded store paths.

A cache public key can look like gravity.cs.illinois.edu-1:yymmNS/WMf0iTj2NnD0nrVV8cBOXM9ivAkEdO1Lro3U= (I forgot what this specific key is about). Here gravity.cs.illinois.edu-1 is the key name and probably matches a hostname (that is the recommended practice) but it can actually be anything.
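
For reference, this is roughly what the client-side configuration looks like in /etc/nix/nix.conf (the my-cache.example.com entries are made up; the cache.nixos.org line is the default one, double-check the key against the official docs):

substituters = https://cache.nixos.org https://my-cache.example.com
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= my-cache.example.com-1:AAAA...=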

Generating a signing key: https://nixos.org/nix/manual/#operation-generate-binary-cache-key
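
The command looks like this (my-cache.example.com-1 is just an example key name; the second and third arguments are the files where the secret and public keys are written):

nix-store --generate-binary-cache-key my-cache.example.com-1 cache-priv-key.pem cache-pub-key.pem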

Uploading to a local cache:

nix copy --to "file://$(pwd)/cache" $(nix-build ... --no-out-link)

Uploading to a S3 cache can be done with:

nix copy --to 's3://example-nix-cache?profile=cache-upload&region=eu-west-2' nixpkgs.hello
nix copy --to 's3://example-nix-cache?profile=cache-upload&scheme=https&endpoint=minio.example.com' nixpkgs.hello

See https://nixos.org/nix/manual/#ssec-s3-substituter-authenticated-writes
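
The profile=cache-upload part refers to a profile in the standard AWS credentials file, i.e. something like this in ~/.aws/credentials (keys elided):

[cache-upload]
aws_access_key_id = AKIA...
aws_secret_access_key = ...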

In my old notes, I have

find /path/to/cache/ -maxdepth 1 -not -name cache | xargs -I {} s3cmd put {} s3://cache --acl-public --recursive

(I guess I didn't know about the s3cmd sync command yet.)
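
With sync, the same thing would presumably look like this (untested):

s3cmd sync --acl-public /path/to/cache/ s3://cache/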

When using Backblaze B2, this worked (specifying a profile as above didn't work; I don't know whether Backblaze has such profiles, and the scheme is not needed):

$ nix copy --to 's3://noteed-actions?endpoint=s3.eu-central-003.backblazeb2.com' nixpkgs.hello

noteed-actions was the bucket name, and the endpoint was given when I created the application key and is repeated on the bucket in the Backblaze web interface.

If you don't want all the files at the root of the bucket, a directory name can be specified, e.g. s3://noteed-actions/cache.
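
To check that the cache can be read back, something like this should work (assuming the bucket is publicly readable, or that credentials are configured as in the next section):

nix copy --from 's3://noteed-actions?endpoint=s3.eu-central-003.backblazeb2.com' nixpkgs.hello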

Making the cache private

  • It seems the simplest way to make a cache private is to use an SSH-accessible cache (thus controlling access with SSH keys).
  • There is a netrc-file option for Nix.
  • For S3: https://nixos.org/nix/manual/#ssec-s3-substituter-authenticated-reads but I haven't seen how to configure the credentials. I guess environment variables can be set (see the sketch at the end of this section). As often with Digital Ocean, it seems that Spaces access keys have access to all buckets instead of just one. It seems Backblaze supports limiting a key to a specific bucket:

If an Application Key is restricted to a bucket, the listAllBucketNames permission is required for compatibility with SDKs and integrations. The listAllBucketNames permission can be enabled upon creation in the web UI or using the b2_create_key API call. More: https://www.backblaze.com/b2/docs/s3_compatible_api.html

(Backblaze, Digital Ocean and Packet are members of the Bandwidth Alliance.)
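
For the environment-variable route, these are the usual AWS SDK ones (untested with Nix on my side, and the values are obviously placeholders):

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
nix-build ... --option substituters 's3://example-nix-cache?region=eu-west-2'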
