# Pass the env-vars to MYCOMMAND
eval $(egrep -v '^#' .env | xargs) MYCOMMAND
# ... or ...
# Export the vars in .env into your shell:
export $(egrep -v '^#' .env | xargs)
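For example, given a hypothetical .env like this, the export form makes the variables available in the current shell (values containing spaces or other special characters need the more careful approaches discussed below):
# .env (hypothetical)
DB_HOST=localhost
DB_PORT=5432

export $(egrep -v '^#' .env | xargs)
echo "$DB_HOST:$DB_PORT"    # prints: localhost:5432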
One-liner that allows unquoted variables that contain spaces:
OLD_IFS=$IFS; IFS=$'\n'; for x in `grep -v '^#.*' .env`; do export $x; done; IFS=$OLD_IFS
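For instance, given a hypothetical .env like the one below, running the one-liner above exports the value intact, spaces included:
# .env (hypothetical)
GREETING=hello world

echo "$GREETING"    # prints: hello world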
# Read a single variable's value out of a .env file
read_var() {
    VAR=$(grep "$1" "$2" | xargs)
    IFS="=" read -ra VAR <<< "$VAR"
    echo "${VAR[1]}"
}

MY_VAR=$(read_var MY_VAR .env)
Perfect, thanks
What about this?
# save the existing environment variables
prevEnv=$(env)
# if the .env file exists, source it
[ -f .env ] && . .env
# re-export all vars from the env so they override whatever was set in .env
for e in $prevEnv
do
export $e
done
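As a quick sanity check of the snippet above (FOO is a hypothetical variable), a value already exported in the shell wins over the one defined in .env. Note that the unquoted $prevEnv in the loop splits on whitespace, so this only behaves cleanly when the existing environment values contain no spaces or newlines:
export FOO=from_shell            # already set before sourcing
echo 'FOO=from_env' > .env       # .env tries to override it
prevEnv=$(env)
[ -f .env ] && . .env
for e in $prevEnv; do export $e; done
echo "$FOO"                      # prints: from_shell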
I wrote my own because I was using forbidden symbols in my env values.
It basically wraps every value in single quotes so that all variables are treated as strings. This way you can keep using your Docker env files and source the converted output with source envs.sh
import sys

def main(input_path, postfix='.sh'):
    with open(input_path, 'r') as file_handle:
        lines = file_handle.readlines()
    envs = {}
    for line in lines:
        try:
            # split only on the first '=' so values may themselves contain '='
            name, value = line.split('=', 1)
        except ValueError:
            # no '=' on this line (e.g. a blank line), skip it
            pass
        else:
            envs[name] = value.rstrip('\n')
    output_path = f'{input_path}{postfix}'
    with open(output_path, 'w') as file_handle:
        lines = []
        for name, value in envs.items():
            line = f'{name}=\'{value}\'\n'
            lines.append(line)
        file_handle.writelines(lines)

if __name__ == '__main__':
    # Passes first argument as input path.
    main(sys.argv[1])
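Hypothetical usage, assuming the script is saved as convert_env.py and the Docker env file is simply named envs:
python convert_env.py envs    # writes envs.sh with every value wrapped in single quotes
source envs.sh                # the variables are now set in the current shell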
The solution proposed by @arizonaherbaltea will not work correctly when the file is not terminated with a newline. For example, using this .env file,
# test.env
MY_VAR=a
Will not work properly, whereas the following will,
# test.env
MY_VAR=a
The only change is adding a newline at the end of the file. This is because read returns a non-zero status when it hits end-of-file, so a while read loop silently drops a final line that lacks a trailing newline. Thus, to make sure everything works properly, we can check whether the file ends with a newline and, if it does not, append one before parsing. Doing this ensures all lines are parsed correctly.
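A minimal illustration of the problem (the file path and variable names here are just for demonstration):
printf 'A=1\nB=2' > /tmp/no-trailing-newline.env
while IFS= read -r line; do echo "read: ${line}"; done < /tmp/no-trailing-newline.env
# prints only "read: A=1"; the final "B=2" is skipped because read returns non-zero at EOF
The script below avoids this by appending the missing newline before parsing.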
#!/bin/bash
function export_envs() {
    local env_file=${1:-.env}
    local is_comment='^[[:space:]]*#'
    local is_blank='^[[:space:]]*$'
    echo "trying env file: ${env_file}"
    # ensure it has a newline at the end, if it does not already
    tail_line=`tail -n 1 "${env_file}"`
    if [[ "${tail_line}" != "" ]]; then
        echo "No Newline at end of ${env_file}, appending!"
        echo "" >> "${env_file}"
    fi
    while IFS= read -r line; do
        echo "${line}"
        [[ $line =~ $is_comment ]] && continue
        [[ $line =~ $is_blank ]] && continue
        key=$(echo "$line" | cut -d '=' -f 1)
        # shellcheck disable=SC2034
        value=$(echo "$line" | cut -d '=' -f 2-)
        # shellcheck disable=SC2116,SC1083
        echo "The key: ${key} and value: ${value}"
        eval "export ${key}=\"$(echo \${value})\""
    done < <(cat "${env_file}")
}
export_envs "${1}"
Then running it gives,
✗ ./test.sh test.env
trying env file: test.env
The key: MY_VAR and value: a
The key: MY_VAR and value: b
Hope this helps someone out there :)
What's wrong with:
if [ -f .env ]; then
    set -o allexport
    source .env
fi
It works on macOS with my .env, which also works with docker compose
and does not have quotes around every string.
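If you would rather not leave allexport switched on for the rest of the script, a small variation (my addition, not from the comment above) turns it back off afterwards:
if [ -f .env ]; then
    set -o allexport
    source .env
    set +o allexport
fi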
Test it later in the same script with:
envsubst < "$secret_file" | cat
Because not every system has nodejs on it. And it's good that way.