
@samsieber
Forked from atheiman/export-files-job.yaml
Last active June 9, 2021 03:09
Generate job artifacts in an initContainer and export the files to workstation afterwards.
# Allows copying of job files to local after the job runs. See the grab-files script
# below for the copy command.
apiVersion: batch/v1
kind: Job
metadata:
  name: export-files-job
spec:
  template:
    metadata:
      labels:
        job-name: export-files-job
    spec:
      restartPolicy: Never
      volumes:
        - name: job-files
          emptyDir: {}
      initContainers:
        - name: job
          image: alpine
          volumeMounts:
            - name: job-files
              mountPath: /job
          workingDir: /job
          command: [/bin/sh, -c]
          args:
            - |
              echo "This is where your job would happen. The log from this job"
              echo "is visible with 'kubectl logs POD job'."
              echo "Imagine the files below are generated from the job."
              echo "contents of a file I want from a job container" > ./file-1.txt
              date > ./file-2.txt
              hostname > ./file-3.txt
              echo "Now this initContainer will exit and the container below will start"
              echo "and sit idle so that these job files can be streamed out of the pod."
      containers:
        - name: export-files
          image: alpine
          volumeMounts:
            - name: job-files
              mountPath: /job
          workingDir: /job
          command: [/bin/sh, -c]
          args:
            - |
              echo "Using 'set -e' to fail on errors"
              set -e
              echo "Creating the fifo pipe used to send the data"
              mkfifo /export-pipe
              echo "Tarring the job files and writing the archive to the fifo"
              echo "You'll need to stream them out with kubectl for this pod to shut down"
              tar -cvf - $(find . -type f) > /export-pipe
              echo "The files have been read, so the previous command stopped blocking"
#!/bin/bash
echo "Using 'set -e' to be strict about errors"
set -e
JOB_NAME=$1
DEST_DIR=$2
echo "Looking up the pod name"
POD_NAME=$(kubectl get pods --selector="job-name=$JOB_NAME" -o=jsonpath="{.items[0].metadata.name}")
echo "Creating the destination folder"
mkdir -p "$DEST_DIR"
echo "Waiting for results to be available from $POD_NAME"
kubectl wait --timeout=600s --for=condition=ContainersReady "pod/$POD_NAME"
echo "Fetching results"
echo "Reading the stdout of the fifo pipe via 'kubectl exec'"
echo " and then piping the kubectl output into tar to expand the files again"
kubectl exec "$POD_NAME" -c export-files -- cat /export-pipe | tar -C "$DEST_DIR" -xvf -
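For context, the ContainersReady wait only passes once the initContainer (the actual job) has exited successfully and the export-files container is running, so reaching the exec line means the job output is ready to stream. If you want to poke at that state by hand, a hedged pair of one-liners (assuming the same $POD_NAME lookup as above):

kubectl get pod "$POD_NAME" -o jsonpath='{.status.conditions[?(@.type=="ContainersReady")].status}'
kubectl logs "$POD_NAME" -c job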
#!/bin/bash
# Start the job
kubectl apply -f export-files-job.yaml
# Use the script to wait for the job (named "export-files-job") and then unpack the copied files into the "output" folder
./grab-files export-files-job output
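If everything worked, the output folder holds the files the example job generated, and the export-files container exits once the stream has been read, so the finished job can be deleted. A quick check and cleanup, assuming the example job above:

ls output    # expect file-1.txt file-2.txt file-3.txt from the example job
kubectl delete job export-files-job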
@atheiman

Neat! Thanks for sharing

@wood-push-melon

Awesome! Thanks.
