Last active Apr 27, 2022
# Hydra with Git setup links
Lots of goodies like `<githubstatus>` pinging.
Declarative jobset bootstrapping:
Declarative jobset with Git PR evals:
Other notes:
- Use valued git inputs for private repos (see below).
- Create a github deploy key for your repo, put it in /var/lib/hydra/.ssh
- Also create a `/var/lib/hydra/.ssh/config` file:
  ```
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  IdentityFile /var/lib/hydra/.ssh/id_rsa
  ```
- Alternatively set up the known hosts in advance with some ops script (can't find it now).
- `journalctl | grep -C1 hydra | tail` to see output.
- For the githubpulls / githubstatus plugins, create a `<github_authorization>` section in the Hydra config with a personal access token.
  - The `repo` scope is needed for githubpulls; otherwise `repo:status` would be enough :/
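As a sketch, the `<github_authorization>` section maps a GitHub repo owner to a personal access token (the owner name and token below are placeholders, and the exact layout is my reading of the Hydra plugin docs, not verified here):

```
# /var/lib/hydra/hydra.conf (excerpt) -- owner and token are placeholders
<github_authorization>
  some-github-owner = ghp_xxxxxxxxxxxxxxxxxxxx
</github_authorization>
```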
Other resources:
### Binary cache
- Can use `nix-serve` to expose a nix store over HTTP.
- It will compress store paths into NAR files on the fly.
- Example:
  - `curl http://localhost:3000/gpl4id13hjbxm909srpxximik3b5lg3p.narinfo`
  - `curl http://localhost:3000/nar/gpl4id13hjbxm909srpxximik3b5lg3p.nar`
- The actual NAR URL is found in the `.narinfo` (Hydra will pack into `.nar.xz`, for example).
- Without extra config, Hydra also serves the binary cache from the nix store.
- A drawback of serving the store is that it might expose more than the build artifacts.
  - Source code etc.
  - An attacker would need to guess the store hash, though (or snipe it from a screenshot).
- Another drawback: on-the-fly NAR creation might be costly (didn't measure).
- When Hydra is configured with `store_uri` (either local or remote, like S3) ..
- .. it writes the NAR files there directly.
- This slows down the build process upfront, but serving is faster.
- Note: the first builds are especially slow, since it seems all deps from `/nix/store` are NAR-ified into the cache.
  - Hm, source derivations as well?
  - Yes. Ouch. Well.
- It also takes space (if stored locally... well, it takes space remotely as well ;).
- The `binary_cache_secret_key_file` and `binary_cache_dir` options seem to be needed.
  - The logs warn they are not, but NAR signing doesn't work otherwise.
  - See
- It seems Hydra won't serve the binary cache in this mode.
  - You can serve it yourself anyway.
- So it's better to let Hydra write the cache, but actually serve it separately.
  - It is just a set of static files, so nginx can serve them statically.
  - Make sure to disable directory listing though (e.g. `autoindex off;` in nginx).
- Or upload to S3. See
- Configuring clients to use the binary caches
- How to serve a private binary cache?
  - Option: no auth, serve via a private network (using nix-serve or direct file serving).
    - No auth..
  - Option: serve via ssh (
    - Gives clients too much access via ssh, but can work if needed fast.
    - Can be fixed, see link above.
    - Quickstart: `nix-build --substituters ssh://root@ --option require-sigs false pyurlex.nix`
    - Don't know how to specify the signing key..
      - We don't have any NAR files in this pathway, so signing is likely not an option.
  - Option: upload to an S3 bucket (or compatible).
    - Clients should have the right access keys in their env. Pretty flexible.
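Whichever serving option is used, clients end up pointing at the cache from their `nix.conf`; a minimal sketch, assuming an HTTPS-served cache and the public key matching `binary_cache_secret_key_file` (URL and key below are placeholders):

```
# /etc/nix/nix.conf on the client -- URL and public key are placeholders
substituters = https://cache.example.org
trusted-public-keys = cache.example.org-1:BASE64PUBLICKEYxxxxxxxxxxxxxxxxxxxxxxx=
```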
### What can trip up the binary cache
- When the local workdir doesn't match the Hydra one.
- Things hidden locally (from `git status` at least):
  - Files ignored by `.gitignore`
    - use `git status --ignored`
  - Empty dirs (not stored in git, so not present in the Hydra checkout)
    - `find . -type d -empty | grep -v \.git`
  - Dotfiles (often sneak under the radar).
    - Or dotdirs.
- Using `filterSource ./.` or other stuff involving `./.`
  - Since the derivation of `./.` will leak the dir name into the hash.
  - Bad if this is the top-level dir, which differs across checkouts.
  - TLDR: use `builtins.path` until is done.
- Debugging the diff
  - Look into the `.drv` file and find the `src` input to the derivation.
  - It is the likely culprit. Compare the files with those present on Hydra.
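The `builtins.path` workaround above can be sketched like this (a minimal example; the `name` attribute and the filter are illustrative — the point is that a fixed `name` keeps the checkout directory's name out of the store hash):

```nix
# sketch -- `name` pins the store path name, independent of the checkout dir
let
  src = builtins.path {
    path = ./.;
    name = "source";
    # optional: drop .git, like filterSource would
    filter = path: type: baseNameOf path != ".git";
  };
in src
```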