
@everpeace everpeace/pod.yaml
Created May 11, 2017

Test workload for GPU scheduling with nodeAffinity
kind: Pod
apiVersion: v1
metadata:
  name: gpu-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: alpha.kubernetes.io/nvidia-gpu-name
            operator: In
            values:
            - Tesla_K80
  containers:
  - name: gpu-container
    image: gcr.io/tensorflow/tensorflow:latest-gpu
    imagePullPolicy: Always
    command: ["python"]
    args:
    - -u
    - -c
    - |
      import tensorflow as tf
      a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
      b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
      c = tf.matmul(a, b)
      sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
      print(sess.run(c))
    resources:
      requests:
        alpha.kubernetes.io/nvidia-gpu: 1
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1
    volumeMounts:
    - name: nvidia-driver-375-66
      mountPath: /usr/local/nvidia
      readOnly: true
    - name: libcuda-so
      mountPath: /usr/lib/x86_64-linux-gnu/libcuda.so
    - name: libcuda-so-1
      mountPath: /usr/lib/x86_64-linux-gnu/libcuda.so.1
    - name: libcuda-so-375-66
      mountPath: /usr/lib/x86_64-linux-gnu/libcuda.so.375.66
  restartPolicy: Never
  volumes:
  - name: nvidia-driver-375-66
    hostPath:
      path: /opt/nvidia/current
  - name: libcuda-so
    hostPath:
      path: /opt/nvidia/current/lib/libcuda.so
  - name: libcuda-so-1
    hostPath:
      path: /opt/nvidia/current/lib/libcuda.so.1
  - name: libcuda-so-375-66
    hostPath:
      path: /opt/nvidia/current/lib/libcuda.so.375.66
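The embedded TensorFlow script multiplies a 2x3 matrix by a 3x2 matrix and, because of `log_device_placement=True`, the pod log also reports which device (ideally the GPU) each op ran on. As a reference for what the log should print as the product, here is the same matrix multiplication sketched in plain Python (no TensorFlow or cluster required):

```python
# Plain-Python check of the matrix product computed by the pod's
# TensorFlow script: a (2x3) multiplied by b (3x2).
a = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
b = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]

def matmul(x, y):
    # Naive row-by-column matrix multiplication.
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))]
            for i in range(len(x))]

print(matmul(a, b))  # -> [[22.0, 28.0], [49.0, 64.0]]
```

Assuming the manifest is saved as pod.yaml, it can be submitted with `kubectl create -f pod.yaml`, and the matmul result read back with `kubectl logs gpu-pod` once the container completes.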