Today I set up a RabbitMQ server to listen for context resolves in real-time.
An (as yet) undocumented feature of Rez is its ability to emit messages to an external service, like RabbitMQ, whenever a context is resolved.
```bash
$ rez env python-3
# Emitting!
> $
```
This is useful for us, as it means we can monitor these events to determine how frequently each package is resolved, and prioritise localisation accordingly.
*3 hours later*
It took some effort, but with help from the Rez chat and some fiddling in the RabbitMQ Management Console, I managed to capture just the right kind of messages!
What took the most time wasn't the program itself, but getting Rez to talk to RabbitMQ, so I put together a tutorial for future readers.
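For reference, here's roughly what the relevant section of a `rezconfig.py` looks like. The field names below are from the (undocumented) context tracking settings in recent Rez releases and may differ between versions, so treat this as a sketch and double-check against the `rezconfig.py` shipped with your copy of Rez; the host, credentials and exchange name are placeholders for your own setup.

```python
# rezconfig.py - context tracking sketch (field names may vary by Rez version)
context_tracking_host = "localhost:5672"   # address of your RabbitMQ server

context_tracking_amqp = {
    "userid": "rez",                       # placeholder credentials
    "password": "rez",
    "connect_timeout": 10,
    "exchange_name": "rez",                # an exchange that exists on the broker
    "exchange_routing_key": "REZ.CONTEXT",
    "message_delivery_mode": 1,            # 1 = non-persistent
    "message_attributes": {},              # extra fields merged into each message
}
```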
And that's great. Now we've got a "consumer" of data sent to RabbitMQ to capture and store resolves made from any computer, which means we can either:
- Run it locally, and monitor our own resolves to later use the collected data to make decisions about pruning.
- Run it centrally, and monitor all resolves and make decisions with knowledge of all machines involved for distributed localisation on machines that require it.
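As a sketch of what a consumer could do with each message, here's a minimal frequency counter. The payload shape is an assumption on my part (I'm guessing a JSON body with the resolved package names under a `packages` key; adjust to whatever your Rez version actually emits):

```python
import json
from collections import Counter

# Running tally of how often each package appears in a resolve.
resolve_counts = Counter()

def on_resolve(body: bytes) -> None:
    """Handle one RabbitMQ message body.

    Assumes the payload is JSON with the resolved package names under a
    "packages" key - adjust to whatever your Rez version actually emits.
    """
    payload = json.loads(body)
    resolve_counts.update(payload.get("packages", []))

def most_localisable(n=5):
    """Packages most frequently resolved - prime localisation candidates."""
    return resolve_counts.most_common(n)

# Example: two resolves seen on the wire
on_resolve(b'{"packages": ["python-3.7", "pip-19"]}')
on_resolve(b'{"packages": ["python-3.7"]}')
print(most_localisable())  # [('python-3.7', 2), ('pip-19', 1)]
```

Wiring this into RabbitMQ is then a matter of binding a queue to the exchange Rez publishes to (e.g. with `pika`'s `basic_consume`) and passing each message body to `on_resolve`.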
Running locally can be helpful for debugging, testing and advanced use. The server can be set up to run automatically in the background on start-up and integrate with housekeeping to localise/delocalise automatically when necessary.
A central server avoids that per-machine setup, and can handle localisation alongside other software provisioning services such as Ansible and Puppet.
One of the shortcomings of installing PyPI packages with Rez was the lack of `console_scripts`; the executables some projects ship to enable use of a project as a command-line application, like `pip.exe`. Because Rez encapsulates a given project, the regular method of generating these console scripts won't work: the generated scripts contain an absolute path to the Python interpreter used to create them, which may not be the version of Python used later on.
Solving this on Linux and macOS is straightforward; they support executing shell scripts as though they were executables, meaning we can simply write:

```bash
#!/usr/bin/env bash
python -m some_module
```

This refers to `python`, whichever one is found on the current `PATH`.
On Windows things get more complicated, because it provides this functionality through `.bat` scripts, which suffer from the dreaded `Terminate Batch Job? (Y/N)` prompt. To work around this, I borrowed from the Scoop project and generate "shims".
```
python.exe    # precompiled binary, identical file for every executable

python.shim   # plain-text file alongside it, containing:
path = python
args = -m my_module
```
The executable in this case looks for a file with the same basename and the extension `.shim`, reads it, and wraps the `path` and `args` into a real process. The result is proper handling of input/output, signals and process hierarchy.
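To make the mechanics concrete, here's a rough Python equivalent of what a shim does; the precompiled binary does the same thing natively. The `key = value` format follows the example above, and `subprocess.call` means stdin/stdout and the exit code pass straight through to the child process:

```python
import shlex
import subprocess
import sys
from pathlib import Path

def parse_shim(text):
    """Parse the simple `key = value` format of a .shim file."""
    conf = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            conf[key.strip()] = value.strip()
    return conf

def run(shim_file, extra_args):
    """Launch the target described by `shim_file`, forwarding arguments."""
    conf = parse_shim(Path(shim_file).read_text())
    cmd = [conf["path"]] + shlex.split(conf.get("args", "")) + extra_args
    return subprocess.call(cmd)  # inherits stdio; returns the child's exit code

if __name__ == "__main__":
    # The real shim derives the .shim path from its own filename,
    # e.g. python.exe -> python.shim
    sys.exit(run(sys.argv[1], sys.argv[2:]))
```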
With localisation implemented, what's left is to test it out. Before I do, I need to distinguish between development and localised packages, such that developers can explicitly exclude packages whilst still benefiting from localisation.
From there, a few cosmetic issues with Allspark need attention, primarily the Project button, followed by environment editing. I'll file proper GitHub issues for the remaining tasks as well.