@salanki
Last active October 25, 2019 18:45
For Brian
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: render-123-classroom-
spec:
  parallelism: 50 # Run at most 50 tasks in parallel
  retryStrategy:
    limit: 2 # Retry each task at most twice
  entrypoint: render
  templates:
  - name: render
    steps:
    - - name: bake-analyze # Imaginary step where we bake the animation if it isn't already baked and extract info such as the total frame count
        template: bake-analyze
        arguments:
          parameters:
          - name: user
            value: "13213f12312312"
          - name: file
            value: "123-classroom/classroom.blend"
    - - name: render
        template: render-gpu
        arguments:
          parameters:
          - name: user
            value: "13213f12312312"
          - name: file
            value: "123-classroom/classroom.blend"
          - name: frame
            value: "{{item}}"
        withParam: "{{steps.bake-analyze.outputs.result}}" # Loop over all frames output by the bake-analyze step and start a render task for each
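        # Sketch of the fan-out: the withParam value must be a JSON array. If
        # bake-analyze prints ["1","2","3"] on stdout, Argo expands this step into
        # three parallel render-gpu tasks with {{item}} bound to "1", "2" and "3",
        # capped by the workflow-level parallelism setting above.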
  - name: bake-analyze
    inputs:
      parameters:
      - name: user
      - name: file
    volumes:
    - name: workdir
      hostPath:
        path: /home/acc/mnt/worker-shared/concierge/{{inputs.parameters.user}} # Assumes the shared work directory is keyed by the user parameter
    script:
      image: blender:2.81
      command: [bash]
      source: |
        blender -b some-stuff-here {{inputs.parameters.file}} # The output of this script should be all frames to render, formatted as a JSON array
      volumeMounts:
      - name: workdir
        mountPath: /user
      resources:
        requests:
          memory: 32Gi
          cpu: 5 # Cores
    nodeSelector:
      cpu.atlantlic.cloud/family: xeon
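  # Note (sketch): for a script template, Argo captures whatever the script prints
  # on stdout as outputs.result, so the bake-analyze script should end by printing
  # a JSON array of frame numbers, e.g. ["1","2","3"].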
  - name: render-gpu
    inputs:
      parameters:
      - name: user
      - name: file
      - name: frame
    volumes:
    - name: workdir
      hostPath:
        path: /home/acc/mnt/worker-shared/concierge/{{inputs.parameters.user}} # Assumes the shared work directory is keyed by the user parameter
    script:
      image: blender:2.81
      command: [bash]
      source: |
        blender -b {{inputs.parameters.file}} --frame-start {{inputs.parameters.frame}} --frame-end {{inputs.parameters.frame}} --render-anim # Render only this frame
      volumeMounts:
      - name: workdir
        mountPath: /user
      resources:
        requests:
          memory: 12Gi
          cpu: 1 # Cores
        limits:
          nvidia.com/gpu: 8 # Request all 8 GPUs in a host
    nodeSelector:
      gpu.nvidia.com/vram: "8" # Run on any node whose GPUs have 8GB VRAM
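# To run this (a minimal sketch, assuming the Argo CLI is installed and the manifest
# is saved as render-workflow.yaml, an illustrative filename):
#   argo submit --watch render-workflow.yaml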