@olofk
Created March 8, 2024 22:33
import os
import subprocess
import sys

# Start from a copy of the current environment
env = os.environ.copy()
args = sys.argv[1:]
print(args)

# Consume leading KEY=VALUE arguments and add them to the environment;
# the first argument without '=' is the command to run
while args:
    arg = args.pop(0)
    if '=' in arg:
        [k, v] = arg.split('=', 1)
        env[k] = v
    else:
        break

# Run the remaining command line with the augmented environment
subprocess.run([arg] + args, env=env)
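
For illustration, a hypothetical invocation, assuming the script above is saved as envrun.py (the name used in the comments below). The leading KEY=VALUE arguments end up in the child's environment and everything after them is the command to run:

python envrun.py MY_TOOL_OPT=fast VERBOSE=1 make build

This runs make build with MY_TOOL_OPT and VERBOSE set for that process, independently of which shell launched it.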
@cavearr

cavearr commented Mar 9, 2024

I don't know if I understood your question correctly, but I normally use the python-dotenv package for things like this (pip install python-dotenv).

You have to place a .env file in your project folder (remember to add it to .gitignore to avoid possible security problems), so it is easy to switch between environments by simply replacing the .env file, without having to touch any code.

In this .env file the environment variables are configured like this (you have probably used it already, but I point it out just in case):

MY_TEST_ENV = "Hi fpga folks!"
MY_SECOND_ENV_VAR = 1234567890

We add the new module to your code:

from dotenv import load_dotenv
import os
import subprocess
import sys

# load_dotenv() reads the .env file and merges its values into os.environ
load_dotenv()
env = os.environ.copy()
args = sys.argv[1:]
print(args)
while args:
    arg = args.pop(0)
    if '=' in arg:
        [k,v] = arg.split('=', 1)
        env[k] = v
    else:
        break
subprocess.run([arg]+args, env=env)

Now you only need to call envrun.py original_command_to_run

This way all environment variables from the .env file are passed to the subprocess, and it is compatible between shells. (I don't remove the args handling, so both ways of setting environment variables keep working: from the command line and from the file.)
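
As a rough illustration of the combined behaviour (hypothetical names, assuming the .env file above and that the script is called envrun.py):

python envrun.py EXTRA_VAR=1 printenv MY_TEST_ENV EXTRA_VAR

The child process should see both MY_TEST_ENV from the .env file and EXTRA_VAR from the command line.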

@olofk
Author

olofk commented Mar 9, 2024

Relying on values of external env vars is always a bit problematic when it comes to reproducible builds, so I don't think loading things from the home dir works for this. However, putting all the needed environment variables in a separate file that gets autogenerated inside the build tree would technically solve the issue, but this also means I need to make sure that file is present where the command is launched. This might add some complications if the commands are actually distributed to execute on different machines.
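
A rough sketch of that idea, assuming python-dotenv and a hypothetical build-tree layout (none of the names here come from the gist itself): the build system writes the required variables to a file next to the run script inside the build tree, and the script loads that file by explicit path, so nothing depends on where the command is launched from.

import os
import subprocess
import sys
from dotenv import load_dotenv

# Hypothetical: the build system generates tool_env.env next to this script
# when it sets up the build tree
env_file = os.path.join(os.path.dirname(os.path.abspath(__file__)), "tool_env.env")

# Load the generated file explicitly; no reliance on the cwd or the home dir
load_dotenv(dotenv_path=env_file)

env = os.environ.copy()
subprocess.run(sys.argv[1:], env=env)

This still inherits the caller's environment via os.environ.copy(), like the original gist, but all build-specific variables travel with the build tree.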

@cavearr

cavearr commented Mar 9, 2024

If you are working in a distributed synthesis or compilation environment, I understand that you will have a job manager or similar.

In distributed architectures of this type I usually manage the tasks with a queue manager. There are a lot of them; RabbitMQ, for example, is very good, but for simpler things you could use a Redis database, or even something very basic and custom in Python itself that opens a socket and stores the tasks in files or in an SQLite database.

In the definition of each task you can add those configuration/compilation/synthesis options. This way, on one side you have a node that holds the queue of pending jobs, and on the other side you can have as many machines as you want, which can even be added dynamically. Those processes/machines do not know initially what they have to do; as they start up they ask the task manager for "something to do", the task manager hands out the first available task in the queue with all its variables, and the process completes it and returns or generates the corresponding output.

This way you don't have to worry about configuring files per machine or anything similar; everything you need to set up is done in the queue manager.
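
A very rough sketch of such a worker, assuming redis-py and a hypothetical convention where each task is a JSON object with "env" and "cmd" fields pushed onto a Redis list called "tasks" (none of this comes from the thread, it is only to illustrate the idea):

import json
import os
import subprocess

import redis

r = redis.Redis(host="localhost", port=6379)

while True:
    # Block until the queue manager has "something to do" for us
    _, raw = r.blpop("tasks")
    task = json.loads(raw)

    # The task carries its own environment, so no per-machine config files
    env = os.environ.copy()
    env.update(task.get("env", {}))

    result = subprocess.run(task["cmd"], env=env)
    r.rpush("results", json.dumps({"cmd": task["cmd"], "rc": result.returncode}))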

I hope it helps.
