@aacater
Last active May 16, 2023 16:22
Dockerfile for BorgWarehouse
FROM node:18-slim
ARG USERNAME=borgwarehouse
ARG USER_UID=1001
ARG USER_GID=$USER_UID
ARG SUDO_LINE="$USERNAME ALL=(ALL) NOPASSWD: /usr/sbin/useradd,/bin/mkdir,/usr/bin/touch,/bin/chmod,/bin/chown,/bin/bash,/usr/bin/jc,/usr/bin/jq,/bin/sed,/bin/grep,/usr/bin/stat,/usr/bin/borg,/bin/echo,/usr/sbin/userdel,/usr/sbin/service"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y --no-install-recommends \
    jc jq sudo borgbackup openssh-server openssl \
    && rm -rf /var/lib/apt/lists/* /var/cache/apt
RUN addgroup --gid $USER_GID $USERNAME \
    && adduser --disabled-login --disabled-password --uid $USER_UID --ingroup $USERNAME --gecos BorgWarehouse $USERNAME \
    && echo $SUDO_LINE > /etc/sudoers.d/10-$USERNAME \
    && chmod 0440 /etc/sudoers.d/10-$USERNAME
RUN echo -e "* * * * * root curl --request POST --url '$NEXTAUTH_URL/api/cronjob/checkStatus' --header 'Authorization: Bearer $CRONJOB_KEY' \n\
* * * * * root curl --request POST --url '$NEXTAUTH_URL/api/cronjob/getStorageUsed' --header 'Authorization: Bearer $CRONJOB_KEY' \
" > /etc/cron.d/borgwarehouse
USER $USERNAME
WORKDIR /app
COPY --chown=$USER_UID:$USER_GID package*.json .
RUN npm ci --only=production
COPY --chown=$USER_UID:$USER_GID . .
RUN chmod 700 /app/helpers/shells/*
RUN npm run build
EXPOSE 22 3000
VOLUME /app/config
VOLUME /var/borgwarehouse
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["init"]
entrypoint.sh:

#!/bin/bash
CONFIG_DIR="/app/config"
sudo service ssh start &> /dev/null
if [ ! -f "$CONFIG_DIR/users.json" ];then
    echo '[{"id":0,"email":"admin@demo.fr","username":"admin","password":"$2a$12$20yqRnuaDBH6AE0EvIUcEOzqkuBtn1wDzJdw2Beg8w9S.vEqdso0a","roles":["admin"]}]' > "$CONFIG_DIR/users.json"
fi
if [ ! -f "$CONFIG_DIR/repo.json" ];then
    echo '[]' > "$CONFIG_DIR/repo.json"
fi
if [ "$1" == "init" ] ; then
    npm run start
    exit
fi
exec "$@"
aacater commented Dec 13, 2022

Current bugs:

  • When copying commands from the UI, the hostname and port are not filled in, they are just left blank. For example I get ssh://bdd5588d@:/./repo1, when based on my environment variables it should be ssh://bdd5588d@localhost:2222/./repo1. I am unsure if this is related to the Dockerfile or the app.

  • Users are not persistent and are reset when the container is recreated.

  • No handling of the SSH host keys inside the container, so setting NEXT_PUBLIC_SSH_SERVER_FINGERPRINT_* is useless (a rough idea for a workaround is sketched below).
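
A rough sketch of what that handling could look like in the entrypoint (untested; the /app/config/ssh location is an assumption, and ssh-keygen/cp would also have to be added to the sudoers whitelist):

# sketch: keep the SSH host keys on the mounted config volume so their
# fingerprints survive image rebuilds (paths are assumptions, not from the gist)
HOST_KEY_DIR="/app/config/ssh"
if [ ! -f "$HOST_KEY_DIR/ssh_host_ed25519_key" ]; then
    sudo mkdir -p "$HOST_KEY_DIR"
    sudo ssh-keygen -t ed25519 -N "" -f "$HOST_KEY_DIR/ssh_host_ed25519_key"
fi
sudo cp "$HOST_KEY_DIR"/ssh_host_ed25519_key* /etc/ssh/
sudo chmod 600 /etc/ssh/ssh_host_ed25519_key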

Fixed:

I initially limited the borgwarehouse user's sudo access to the commands listed in the docs, but then the scripts were getting stuck with sudo asking for a password. Too lazy to debug this at the moment.
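
For reference, sudo itself can list which commands a user may run without a password, which makes this kind of hang easier to debug (the container name borg is just an example):

# list the sudoers rules that apply to the borgwarehouse user
docker exec -u root borg sudo -l -U borgwarehouse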

@Ravinou
Copy link

Ravinou commented Dec 15, 2022

When copying commands from the UI, the hostname and port are not filled in, they are just left blank. For example I get ssh://bdd5588d@:/./repo1, when based on my environment variables it should be ssh://bdd5588d@localhost:2222/./repo1. I am unsure if this is related to the Dockerfile or the app.

According to what you describe, the environment variables in that file are not being read: https://borgwarehouse.com/docs/admin-manual/debian-installation/#configure-application-environment-variables
It's the .env.local file that should provide the hostname and SSH port.

I initially limited the borgwarehouse user's sudo access to the commands listed in the docs, but then the scripts were getting stuck with sudo asking for a password. Too lazy to debug this at the moment.

If sudo is set up as indicated in the documentation, you should check the permissions on the files. You apply chmod 700 to the scripts, but have you checked that the scripts are owned by the borgwarehouse user?
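
For what it's worth, a quick way to check that from the host (the container name borg is only an example):

# show owner, group and mode of the helper scripts inside the container
docker exec borg ls -l /app/helpers/shells/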

aacater commented Dec 16, 2022

@Ravinou

re: Environment Variables

My current .env.local file is:

NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=chQfWZwo1gpMRMbtMSDCjjYR9Ht/0f6+8Vmz9AWZqfo=
CRONJOB_KEY=nScf90mOOb/8kAEhF12CB2nchg9pSHO8PH0AZ86jjVY=
NEXT_PUBLIC_HOSTNAME=localhost
NEXT_PUBLIC_SSH_SERVER_PORT=2222
NEXT_PUBLIC_SSH_SERVER_FINGERPRINT_RSA=SHA256:36mfYNRrm1aconVt6cBpi8LhAoPP4kB8QsVW4n8eGHQ
NEXT_PUBLIC_SSH_SERVER_FINGERPRINT_ED25519=SHA256:tYQuzrZZMqaw0Bzvn/sMoDs1CVEitZ9IrRyUg02yTPA
NEXT_PUBLIC_SSH_SERVER_FINGERPRINT_ECDSA=SHA256:nTpxui1oEmH9konPau17qBVIzBQVOsD1BIbBFU5IL04

I haven't bothered to change NEXT_PUBLIC_SSH_SERVER_FINGERPRINT_* because the SSH host keys in the container are generated when the image is built. So this is something that still needs to be implemented.
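
If it helps, the fingerprints of whatever host keys ended up in the image can be read back from a running container (container name is an example):

# print the SHA256 fingerprints of the generated host keys
docker exec borg ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
docker exec borg ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
docker exec borg ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub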

I run the container with:

docker run -it --rm -p 3000:3000 -p 2222:22 --env-file ./.env.local --name borg -v ./config/:/app/config/ -v ./repos:/var/borgwarehouse/ borg:latest

If I enter the container, the environment variables appear in the output of env.

output of env:

CRONJOB_KEY=nScf90mOOb/8kAEhF12CB2nchg9pSHO8PH0AZ86jjVY=
HOSTNAME=e4d300352c96
NEXT_PUBLIC_SSH_SERVER_FINGERPRINT_RSA=SHA256:36mfYNRrm1aconVt6cBpi8LhAoPP4kB8QsVW4n8eGHQ
NEXTAUTH_URL=http://localhost:3000
NEXT_PUBLIC_SSH_SERVER_FINGERPRINT_ECDSA=SHA256:nTpxui1oEmH9konPau17qBVIzBQVOsD1BIbBFU5IL04
NEXT_PUBLIC_HOSTNAME=localhost
YARN_VERSION=1.22.19
PWD=/app
NEXT_PUBLIC_SSH_SERVER_PORT=2222
container=podman
HOME=/home/borgwarehouse
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
NEXT_PUBLIC_SSH_SERVER_FINGERPRINT_ED25519=SHA256:tYQuzrZZMqaw0Bzvn/sMoDs1CVEitZ9IrRyUg02yTPA
NEXTAUTH_SECRET=chQfWZwo1gpMRMbtMSDCjjYR9Ht/0f6+8Vmz9AWZqfo=
TERM=xterm
SHLVL=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NODE_VERSION=18.12.1
DEBIAN_FRONTEND=noninteractive
_=/usr/bin/env

But then, the hostname and port are still not filled in the UI; the copied command comes out as ssh://bdd5588d@:/./repo1 instead of ssh://bdd5588d@localhost:2222/./repo1.

re: Sudo

have you checked that the scripts are owned by the borgwarehouse user?

Yes. I made sure of that with COPY --chown=$USER_UID:$USER_GID . ..

I found the problem: the paths aren't the same. Some of the commands are in a different location compared to how they're listed in the docs, which is weird because the container is based on Debian Bullseye.
Commands with different paths (others are the same):

/bin/mkdir
/bin/chmod
/bin/chown
/bin/bash
/bin/sed
/bin/grep
/bin/echo

So once I added those paths to the sudoers file, it works.
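
For anyone hitting the same thing, the actual paths on a given base image can be listed instead of copying them from the docs (a small helper, not part of the gist; run it inside the container):

# print where each whitelisted command actually lives on this image
for cmd in useradd mkdir touch chmod chown bash jc jq sed grep stat borg echo userdel service; do
    echo "$cmd -> $(which "$cmd")"
done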

Ravinou commented Dec 16, 2022

@aacater

  • For sudo:
    Okay, it was just the command paths. You are using node:18-slim in the Dockerfile; are you sure it is not based on Alpine rather than Debian?

  • For .env.local, I will test and answer here later.

aacater commented Dec 16, 2022

cat /etc/os-release

PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Ravinou commented Dec 16, 2022

@aacater I confirm that some commands are not located in the same place on the Docker version of Debian, so the sudoers file must be adapted 😕

Ravinou commented Dec 16, 2022

When I build the Docker image with the Dockerfile, I'm stuck at the "npm ci" step because the user can't create anything in the /app folder, which belongs to root.

I can build with this version:

FROM node:18-slim

ARG USERNAME=borgwarehouse
ARG USER_UID=1001
ARG USER_GID=$USER_UID

ARG SUDO_LINE="$USERNAME ALL=(ALL) NOPASSWD: /usr/sbin/useradd,/bin/mkdir,/usr/bin/touch,/bin/chmod,/bin/chown,/bin/bash,/usr/bin/jc,/usr/bin/jq,/bin/sed,/bin/grep,/usr/bin/stat,/usr/bin/borg,/bin/echo,/usr/sbin/userdel,/usr/sbin/service"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && apt install -y --no-install-recommends \
    jc jq sudo borgbackup openssh-server openssl \
    && rm -rf /var/lib/apt/lists/* /var/cache/apt

RUN addgroup --gid $USER_GID $USERNAME \
    && adduser --disabled-login --disabled-password --uid $USER_UID --ingroup $USERNAME --gecos BorgWarehouse $USERNAME \
    && echo $SUDO_LINE > /etc/sudoers.d/10-$USERNAME \
    && chmod 0440 /etc/sudoers.d/10-$USERNAME

RUN echo -e "* * * * * root curl --request POST --url '$NEXTAUTH_URL/api/cronjob/checkStatus' --header 'Authorization: Bearer $CRONJOB_KEY' \n\
* * * * * root curl --request POST --url '$NEXTAUTH_URL/api/cronjob/getStorageUsed' --header 'Authorization: Bearer $CRONJOB_KEY' \
" > /etc/cron.d/borgwarehouse
    

WORKDIR /app

RUN chown $USER_UID:$USER_GID /app

USER $USERNAME

COPY --chown=$USER_UID:$USER_GID package*.json ./

RUN npm ci --only=production

COPY --chown=$USER_UID:$USER_GID . .

RUN chmod 700 /app/helpers/shells/*

RUN npm run build

EXPOSE 22 3000

VOLUME /app/config

VOLUME /var/borgwarehouse

COPY entrypoint.sh /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

CMD ["init"]

WORKDIR always creates the folder as root if it doesn't already exist. Don't you have any problem with your Dockerfile version? In my case, I need to chown /app.
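
A one-liner to check who ends up owning /app in a freshly built image (image name taken from the earlier docker run example):

# print the owner and group of /app as created during the build
docker run --rm --entrypoint stat borg:latest -c '%U:%G %n' /app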

Ravinou commented Dec 16, 2022

Everything works for me now. If your data is not persistent, you have a problem with the permissions on the locally mounted volumes. The environment variables are read correctly and it's working for me.

On the other hand I have another problem... the created UNIX users are not persistent... so when the container is restarted the data is there, but you can't edit it anymore, because the shell returns "the user doesn't exist" errors.

I continue in this way...

I think we need to make /etc/passwd and /etc/shadow persistent. What do you think?

tagdara commented Apr 2, 2023

I've been looking at this after trying to get borgwarehouse working in docker, and I think trying to persist the passwd and shadow will be fraught with problems.

Instead, I've been testing a "recreateRepo" script which can be called from the entrypoint to create the users based off of the data in repo.json. Now when the container is recreated, we effectively recreate the users and update their authorized keys from the JSON to make sure it's all intact.

First I have the overall recreateRepos.sh (note that jq is already added in the Dockerfile to the node:18-slim image being used):

#!/bin/bash
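# read every repository entry from repo.json, one compact JSON object per array element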
mapfile -t newValue < <(
  jq -c -r '.[]' /app/config/repo.json
)

for item in "${newValue[@]}"; do
    unixUser=$( echo ${item} | jq -c -r '.unixUser')
    sshPublicKey=$( echo ${item} | jq -c -r '.sshPublicKey')
    echo "user $unixUser / $sshPublicKey"
    /app/recreateRepo.sh $unixUser "$sshPublicKey"
done

which then calls recreateRepo.sh (a trimmed down version of createRepo):

#!/bin/bash
set -e
if [ "$1" == "" ] || [ "$2" == "" ];then
    echo "This shell takes 2 arguments : Reponame, SSH Public Key"
    exit 1
fi

pattern='(ssh-ed25519 AAAAC3NzaC1lZDI1NTE5|sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1lZDI1NTE5QG9wZW5zc2guY29t|ssh-rsa AAAAB3NzaC1yc2)[0-9A-Za-z+/]+[=]{0,3}(\s.*)?'
if [[ ! "$2" =~ $pattern ]]
then
    echo "Invalid public SSH KEY format. Provide a key in OpenSSH format (rsa, ed25519, ed25519-sk)"
    exit 2
fi

user=$1
group="${user}"
home="/var/borgwarehouse/${user}"
pool="${home}/repos"
authorized_keys="${home}/.ssh/authorized_keys"
sudo useradd -d ${home} -s "/bin/bash" -m --badname ${user}
sudo mkdir -p ${home}/.ssh
sudo touch ${authorized_keys}
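# the output redirection below runs as the calling (non-root) user, hence the
# temporary chmod 777, which is tightened back down to 600 further below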
sudo chmod 777 ${authorized_keys}
sudo echo $2 > ${authorized_keys}
sudo mkdir -p "${pool}/$1"
sudo chmod -R 750 ${home}
sudo chmod 600 ${authorized_keys}
sudo chown -R ${user}:borgwarehouse ${home}

if [ ! -f "${authorized_keys}" ];then
    echo "${authorized_keys} must be present"
    exit 4
fi

echo ${user}

Then the modified entrypoint.sh (this is already a mess anyway, and the hardcoded user defaults should be moved to another helper script):

#!/bin/bash

CONFIG_DIR="/app/config"

sudo service ssh start &> /dev/null

if [ ! -f "$CONFIG_DIR/users.json" ];then
    echo '[{"id":0,"email":"admin@demo.fr","username":"admin","password":"$2a$12$20yqRnuaDBH6AE0EvIUcEOzqkuBtn1wDzJdw2Beg8w9S.vEqdso0a","roles":["admin"]}]' > "$CONFIG_DIR/users.json"
fi

if [ ! -f "$CONFIG_DIR/repo.json" ];then
    echo '[]' > "$CONFIG_DIR/repo.json"
fi

/app/recreateRepos.sh

if [ "$1" == "init" ] ; then
    npm run start
    exit
fi

exec "$@"

and finally the Dockerfile (added adduser as well as one or two others that were causing sudo hangs during testing):

FROM node:18-slim

ARG USERNAME=borgwarehouse
ARG USER_UID=1900
ARG USER_GID=$USER_UID

ARG SUDO_LINE="$USERNAME ALL=(ALL) NOPASSWD: /usr/sbin/adduser,/usr/sbin/useradd,/bin/mkdir,/usr/bin/touch,/bin/chmod,/bin/chown,/bin/bash,/usr/bin/jc,/usr/bin/jq,/bin/sed,/bin/grep,/usr/bin/stat,/usr/bin/tee,/usr/bin/borg,/bin/echo,/usr/sbin/userdel,/usr/sbin/service"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt update && apt install -y --no-install-recommends \
    jc jq sudo borgbackup openssh-server openssl \
    && rm -rf /var/lib/apt/lists/* /var/cache/apt

RUN addgroup --gid $USER_GID $USERNAME \
    && adduser --disabled-login --disabled-password --uid $USER_UID --ingroup $USERNAME --gecos BorgWarehouse $USERNAME \
    && echo $SUDO_LINE > /etc/sudoers.d/10-$USERNAME \
    && chmod 0440 /etc/sudoers.d/10-$USERNAME

RUN echo -e "* * * * * root curl --request POST --url '$NEXTAUTH_URL/api/cronjob/checkStatus' --header 'Authorization: Bearer $CRONJOB_KEY' \n\
* * * * * root curl --request POST --url '$NEXTAUTH_URL/api/cronjob/getStorageUsed' --header 'Authorization: Bearer $CRONJOB_KEY' \
" > /etc/cron.d/borgwarehouse
    

WORKDIR /app

RUN chown $USER_UID:$USER_GID /app

USER $USERNAME

COPY --chown=$USER_UID:$USER_GID package*.json ./

RUN npm ci --only=production

COPY --chown=$USER_UID:$USER_GID . .

RUN chmod 700 /app/helpers/shells/*

RUN npm run build

EXPOSE 22 3000

VOLUME /app/config

VOLUME /var/borgwarehouse

COPY --chown=$USER_UID:$USER_GID docker/entrypoint.sh /app/entrypoint.sh
COPY --chown=$USER_UID:$USER_GID docker/recreateRepo.sh /app/recreateRepo.sh
COPY --chown=$USER_UID:$USER_GID docker/recreateRepos.sh /app/recreateRepos.sh

RUN chmod +x /app/entrypoint.sh
RUN chmod +x /app/recreateRepo.sh
RUN chmod +x /app/recreateRepos.sh

ENTRYPOINT ["/app/entrypoint.sh"]

CMD ["init"]

Ravinou commented Apr 10, 2023

I've been digging into the question, and indeed the main problem that prevents me from proposing a Dockerfile for the moment is the persistence of UNIX users...

It is impossible to use a persistent mount on the /etc/passwd or /etc/shadow files; UNIX does not support that, certainly for obvious security reasons.

I have not yet taken the time to think about how to overcome this problem. I have one or two ideas but it's not easy to do it without breaking changes.
