@olofk
Created March 8, 2024 22:33
# Wrapper that runs a command with extra environment variables:
# leading KEY=VALUE arguments are added to the environment, and the
# first argument without '=' is treated as the command to execute.
import os
import subprocess
import sys

env = os.environ.copy()
args = sys.argv[1:]
print(args)

while args:
    arg = args.pop(0)
    if '=' in arg:
        # KEY=VALUE: add it to the environment of the child process.
        k, v = arg.split('=', 1)
        env[k] = v
    else:
        # First non-assignment argument: this is the command to run.
        break
else:
    # Loop exhausted without finding a command to execute.
    sys.exit("error: no command given")

# Run the command (and its remaining arguments) with the augmented environment.
subprocess.run([arg] + args, env=env)
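For illustration (the gist has no stated filename, so run.py and the variable names below are hypothetical), an invocation would look like:

    python run.py SYNTH=yosys BOARD=icestick make build

which runs make build with SYNTH and BOARD added to its environment.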
cavearr commented Mar 9, 2024

If you are working in a distributed synthesis or compilation environment, I assume you will have a job manager or something similar.

In distributed architectures of this type I usually manage the tasks with a queue manager. There are many options: RabbitMQ is very good, for example, but for simpler setups you could use a Redis database, or even something very basic and custom in Python itself that opens a socket and stores the tasks in files or in a sqlite database.

In the definition of each task you can add those configuration/compilation/synthesis options. On one side you have a node that holds the queue of pending jobs; on the other side you can have as many machines as you want, which can even be added dynamically. Those processes/machines do not know what they have to do initially: as they start up they ask the task manager for "something to do", the task manager hands out the first available task in the queue together with all its variables, and the process completes it and returns or generates the corresponding output.

This way you don't have to worry about configuring files per machine or anything similar; everything you need to set up is done in the queue manager.
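To make the "very basic custom" option concrete, here is a minimal sketch (not from the original comment; table, file, and function names like tasks.db, claim_task and run_task are illustrative) that uses a sqlite file as the shared queue. The job node inserts command lines as rows, and each worker claims the oldest pending row and runs it with the same environment-variable trick as the gist above. A production setup would more likely use RabbitMQ or Redis as suggested.

import os
import shlex
import sqlite3
import subprocess

DB = "tasks.db"  # hypothetical shared queue file

def init(db=DB):
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS tasks ("
                "id INTEGER PRIMARY KEY, "
                "cmdline TEXT NOT NULL, "      # e.g. "TOOL=verilator make sim" (illustrative)
                "state TEXT DEFAULT 'pending')")
    con.commit()
    return con

def claim_task(con):
    # Take the oldest pending task and mark it running; return None if the queue is empty.
    with con:
        row = con.execute("SELECT id, cmdline FROM tasks WHERE state='pending' "
                          "ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        con.execute("UPDATE tasks SET state='running' WHERE id=?", (row[0],))
    return row

def run_task(cmdline):
    # Same trick as the gist: leading KEY=VALUE words become environment variables.
    env = os.environ.copy()
    args = shlex.split(cmdline)
    while args and '=' in args[0]:
        k, v = args.pop(0).split('=', 1)
        env[k] = v
    return subprocess.run(args, env=env).returncode

if __name__ == "__main__":
    con = init()
    task = claim_task(con)
    if task is not None:
        rc = run_task(task[1])
        with con:
            con.execute("UPDATE tasks SET state=? WHERE id=?",
                        ('done' if rc == 0 else 'failed', task[0]))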

I hope it helps.
