@lubars
Last active May 14, 2020 14:37
zone affinity
# kubectl describe pod iris-data-0-1
Name: iris-data-0-1
Namespace: default
Priority: 0
Node: <none>
Labels: controller-revision-hash=iris-data-0-9d88f8994
intersystems.com/component=data
intersystems.com/kind=IrisCluster
intersystems.com/name=iris
intersystems.com/role=iris
intersystems.com/shard=0
statefulset.kubernetes.io/pod-name=iris-data-0-1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/iris-data-0
Containers:
iriscluster:
Image: intersystems/iris:2020.3.0-dev
Ports: 51773/TCP, 52773/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--key
/irissys/key/iris.key
--before
/home/irisowner/irissys/startISCAgent.sh 2188
Liveness: exec [/usr/irissys/dev/Cloud/ICM/waitISC.sh] delay=10s timeout=10s period=10s #success=1 #failure=60
Readiness: exec [/usr/irissys/dev/Cloud/ICM/waitISC.sh] delay=10s timeout=10s period=10s #success=1 #failure=60
Environment:
ISC_CPF_MERGE_FILE: /irissys/cpf/data.cpf
ISC_DATA_DIRECTORY: /irissys/data/IRIS
Mounts:
/irissys/cpf/ from iris-cpf (rw)
/irissys/data/ from iris-data (rw)
/irissys/key/ from iris-key (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8wh96 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
iris-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: iris-data-iris-data-0-1
ReadOnly: false
iris-cpf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: iris-data
Optional: false
iris-key:
Type: Secret (a volume populated by a Secret)
SecretName: iris-key-secret
Optional: false
default-token-8wh96:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8wh96
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 8m2s (x3 over 8m4s) default-scheduler error while running "VolumeBinding" filter plugin for pod "iris-data-0-1": pod has unbound immediate PersistentVolumeClaims
Normal NotTriggerScaleUp 2m56s (x32 over 8m2s) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added):
Warning FailedScheduling 37s (x7 over 7m59s) default-scheduler 0/9 nodes are available: 3 node(s) didn't match node selector, 6 node(s) had volume node affinity conflict.
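Reading the scheduler message against the node list further down: 3 nodes fail the pod's required zone selector (they sit in us-east1-d), while the 6 nodes that do satisfy it (us-east1-b and us-east1-c) are rejected because the PV backing the claim carries node affinity for us-east1-d. A quick way to see each PV's zone next to its claim (a sketch; assumes the legacy `failure-domain.beta.kubernetes.io/zone` PV labels used on this v1.17 GKE cluster):

```
# List each PV with its bound claim and the zone label the GCE PD
# provisioner stamped on it (legacy label name on k8s <= 1.17).
kubectl get pv -o custom-columns=\
NAME:.metadata.name,\
CLAIM:.spec.claimRef.name,\
ZONE:".metadata.labels.failure-domain\.beta\.kubernetes\.io/zone"
```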
# kubectl get pod iris-data-0-1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-05-12T20:26:06Z"
  generateName: iris-data-0-
  labels:
    controller-revision-hash: iris-data-0-9d88f8994
    intersystems.com/component: data
    intersystems.com/kind: IrisCluster
    intersystems.com/name: iris
    intersystems.com/role: iris
    intersystems.com/shard: "0"
    statefulset.kubernetes.io/pod-name: iris-data-0-1
  name: iris-data-0-1
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: iris-data-0
    uid: f80cd257-9c2f-4a6a-bbfa-9a07125e628b
  resourceVersion: "5988"
  selfLink: /api/v1/namespaces/default/pods/iris-data-0-1
  uid: b55485a0-1cf7-4e63-90fb-e18d8f437448
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: failure-domain.beta.kubernetes.io/zone
            operator: In
            values:
            - us-east1-b
            - us-east1-c
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              intersystems.com/component: data
              intersystems.com/kind: IrisCluster
              intersystems.com/name: iris
              intersystems.com/role: iris
              intersystems.com/shard: "0"
          namespaces:
          - default
          topologyKey: kubernetes.io/hostname
        weight: 100
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              intersystems.com/component: data
              intersystems.com/kind: IrisCluster
              intersystems.com/name: iris
              intersystems.com/role: iris
              intersystems.com/shard: "0"
          namespaces:
          - default
          topologyKey: failure-domain.beta.kubernetes.io/zone
        weight: 50
  containers:
  - args:
    - --key
    - /irissys/key/iris.key
    - --before
    - /home/irisowner/irissys/startISCAgent.sh 2188
    env:
    - name: ISC_CPF_MERGE_FILE
      value: /irissys/cpf/data.cpf
    - name: ISC_DATA_DIRECTORY
      value: /irissys/data/IRIS
    image: intersystems/iris:2020.3.0-dev
    imagePullPolicy: Always
    livenessProbe:
      exec:
        command:
        - /usr/irissys/dev/Cloud/ICM/waitISC.sh
      failureThreshold: 60
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 10
    name: iriscluster
    ports:
    - containerPort: 51773
      name: superserver
      protocol: TCP
    - containerPort: 52773
      name: webserver
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - /usr/irissys/dev/Cloud/ICM/waitISC.sh
      failureThreshold: 60
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 10
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /irissys/cpf/
      name: iris-cpf
    - mountPath: /irissys/key/
      name: iris-key
    - mountPath: /irissys/data/
      name: iris-data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-8wh96
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: iris-data-0-1
  imagePullSecrets:
  - name: dockerhub-secret
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  subdomain: iris-svc
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: iris-data
    persistentVolumeClaim:
      claimName: iris-data-iris-data-0-1
  - configMap:
      defaultMode: 420
      name: iris-data
    name: iris-cpf
  - name: iris-key
    secret:
      defaultMode: 420
      secretName: iris-key-secret
  - name: default-token-8wh96
    secret:
      defaultMode: 420
      secretName: default-token-8wh96
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-05-12T20:26:06Z"
    message: '0/9 nodes are available: 3 node(s) didn''t match node selector, 6 node(s)
      had volume node affinity conflict.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort
# kubectl describe sts iris-data-0
Name: iris-data-0
Namespace: default
CreationTimestamp: Tue, 12 May 2020 16:24:47 -0400
Selector: intersystems.com/component=data,intersystems.com/kind=IrisCluster,intersystems.com/name=iris,intersystems.com/role=iris,intersystems.com/shard=0
Labels: app.kubernetes.io/component=data
app.kubernetes.io/instance=iris
app.kubernetes.io/managed-by=intersystems.com
app.kubernetes.io/name=iriscluster
intersystems.com/kind=IrisCluster
intersystems.com/name=iris
Annotations: <none>
Replicas: 2 desired | 2 total
Update Strategy: RollingUpdate
Pods Status: 1 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: intersystems.com/component=data
intersystems.com/kind=IrisCluster
intersystems.com/name=iris
intersystems.com/role=iris
intersystems.com/shard=0
Containers:
iriscluster:
Image: intersystems/iris:2020.3.0-dev
Ports: 51773/TCP, 52773/TCP
Host Ports: 0/TCP, 0/TCP
Args:
--key
/irissys/key/iris.key
--before
/home/irisowner/irissys/startISCAgent.sh 2188
Liveness: exec [/usr/irissys/dev/Cloud/ICM/waitISC.sh] delay=10s timeout=10s period=10s #success=1 #failure=60
Readiness: exec [/usr/irissys/dev/Cloud/ICM/waitISC.sh] delay=10s timeout=10s period=10s #success=1 #failure=60
Environment:
ISC_CPF_MERGE_FILE: /irissys/cpf/data.cpf
ISC_DATA_DIRECTORY: /irissys/data/IRIS
Mounts:
/irissys/cpf/ from iris-cpf (rw)
/irissys/data/ from iris-data (rw)
/irissys/key/ from iris-key (rw)
Volumes:
iris-cpf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: iris-data
Optional: false
iris-key:
Type: Secret (a volume populated by a Secret)
SecretName: iris-key-secret
Optional: false
Volume Claims:
Name: iris-data
StorageClass: iris-ssd-storageclass
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-class=iris-ssd-storageclass
Capacity: 2Gi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 9m8s statefulset-controller create Claim iris-data-iris-data-0-0 Pod iris-data-0-0 in StatefulSet iris-data-0 success
Normal SuccessfulCreate 9m8s statefulset-controller create Pod iris-data-0-0 in StatefulSet iris-data-0 successful
Normal SuccessfulCreate 7m49s statefulset-controller create Claim iris-data-iris-data-0-1 Pod iris-data-0-1 in StatefulSet iris-data-0 success
Normal SuccessfulCreate 7m49s statefulset-controller create Pod iris-data-0-1 in StatefulSet iris-data-0 successful
# kubectl get sts iris-data-0 -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2020-05-12T20:24:47Z"
  generation: 1
  labels:
    app.kubernetes.io/component: data
    app.kubernetes.io/instance: iris
    app.kubernetes.io/managed-by: intersystems.com
    app.kubernetes.io/name: iriscluster
    intersystems.com/kind: IrisCluster
    intersystems.com/name: iris
  name: iris-data-0
  namespace: default
  ownerReferences:
  - apiVersion: intersystems.com/v1alpha1
    blockOwnerDeletion: false
    kind: IrisCluster
    name: iris
    uid: 81cea06a-31f3-457e-9b5b-79299b73fba6
  resourceVersion: "5937"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/iris-data-0
  uid: f80cd257-9c2f-4a6a-bbfa-9a07125e628b
spec:
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      intersystems.com/component: data
      intersystems.com/kind: IrisCluster
      intersystems.com/name: iris
      intersystems.com/role: iris
      intersystems.com/shard: "0"
  serviceName: iris-svc
  template:
    metadata:
      creationTimestamp: null
      labels:
        intersystems.com/component: data
        intersystems.com/kind: IrisCluster
        intersystems.com/name: iris
        intersystems.com/role: iris
        intersystems.com/shard: "0"
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: failure-domain.beta.kubernetes.io/zone
                operator: In
                values:
                - us-east1-b
                - us-east1-c
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  intersystems.com/component: data
                  intersystems.com/kind: IrisCluster
                  intersystems.com/name: iris
                  intersystems.com/role: iris
                  intersystems.com/shard: "0"
              namespaces:
              - default
              topologyKey: kubernetes.io/hostname
            weight: 100
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  intersystems.com/component: data
                  intersystems.com/kind: IrisCluster
                  intersystems.com/name: iris
                  intersystems.com/role: iris
                  intersystems.com/shard: "0"
              namespaces:
              - default
              topologyKey: failure-domain.beta.kubernetes.io/zone
            weight: 50
      containers:
      - args:
        - --key
        - /irissys/key/iris.key
        - --before
        - /home/irisowner/irissys/startISCAgent.sh 2188
        env:
        - name: ISC_CPF_MERGE_FILE
          value: /irissys/cpf/data.cpf
        - name: ISC_DATA_DIRECTORY
          value: /irissys/data/IRIS
        image: intersystems/iris:2020.3.0-dev
        imagePullPolicy: Always
        livenessProbe:
          exec:
            command:
            - /usr/irissys/dev/Cloud/ICM/waitISC.sh
          failureThreshold: 60
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        name: iriscluster
        ports:
        - containerPort: 51773
          name: superserver
          protocol: TCP
        - containerPort: 52773
          name: webserver
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /usr/irissys/dev/Cloud/ICM/waitISC.sh
          failureThreshold: 60
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /irissys/cpf/
          name: iris-cpf
        - mountPath: /irissys/key/
          name: iris-key
        - mountPath: /irissys/data/
          name: iris-data
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: dockerhub-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: iris-data
        name: iris-cpf
      - name: iris-key
        secret:
          defaultMode: 420
          secretName: iris-key-secret
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
        volume.beta.kubernetes.io/storage-class: iris-ssd-storageclass
      creationTimestamp: null
      name: iris-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: iris-ssd-storageclass
      volumeMode: Filesystem
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 2
  currentRevision: iris-data-0-9d88f8994
  observedGeneration: 1
  readyReplicas: 1
  replicas: 2
  updateRevision: iris-data-0-9d88f8994
  updatedReplicas: 2
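The claim template requests `storageClassName: iris-ssd-storageclass`, but with an immediate-binding StorageClass the GCE PD provisioner picks a zone without consulting the pod's node affinity. A sketch of how the class could be defined to avoid that (the actual `iris-ssd-storageclass` definition is not in this gist, so the provisioner parameters here are assumptions): `volumeBindingMode: WaitForFirstConsumer` defers provisioning until the pod is scheduled, and `allowedTopologies` pins provisioning to the zones the pod is allowed in.

```yaml
# Hypothetical revision of iris-ssd-storageclass; original not shown above.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: iris-ssd-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd          # assumption: "ssd" in the name implies pd-ssd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-east1-b
    - us-east1-c
```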
# kubectl get pvc iris-data-iris-data-0-1 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: iris-ssd-storageclass
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd
  creationTimestamp: "2020-05-13T04:15:59Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    intersystems.com/component: data
    intersystems.com/kind: IrisCluster
    intersystems.com/name: iris
    intersystems.com/role: iris
    intersystems.com/shard: "0"
  name: iris-data-iris-data-0-1
  namespace: default
  resourceVersion: "193125"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/iris-data-iris-data-0-1
  uid: 5497dce3-0b5b-4224-8134-c3ffc842d495
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: iris-ssd-storageclass
  volumeMode: Filesystem
  volumeName: pvc-5497dce3-0b5b-4224-8134-c3ffc842d495
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  phase: Bound
# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
gke-lubars-cluster-default-pool-843ab83a-fwg1 Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-d,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-843ab83a-fwg1,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-d
gke-lubars-cluster-default-pool-843ab83a-gvbs Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-d,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-843ab83a-gvbs,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-d
gke-lubars-cluster-default-pool-843ab83a-kjtl Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-d,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-843ab83a-kjtl,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-d
gke-lubars-cluster-default-pool-e74ebe1e-11xk Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-e74ebe1e-11xk,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-b
gke-lubars-cluster-default-pool-e74ebe1e-74zf Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-e74ebe1e-74zf,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-b
gke-lubars-cluster-default-pool-e74ebe1e-htms Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-b,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-e74ebe1e-htms,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-b
gke-lubars-cluster-default-pool-f6ab1d26-p4p5 Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-c,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-f6ab1d26-p4p5,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-c
gke-lubars-cluster-default-pool-f6ab1d26-qkcz Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-c,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-f6ab1d26-qkcz,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-c
gke-lubars-cluster-default-pool-f6ab1d26-s9x9 Ready <none> 42h v1.17.5-gke.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=n1-standard-1,beta.kubernetes.io/os=linux,cloud.google.com/gke-nodepool=default-pool,cloud.google.com/gke-os-distribution=cos,failure-domain.beta.kubernetes.io/region=us-east1,failure-domain.beta.kubernetes.io/zone=us-east1-c,kubernetes.io/arch=amd64,kubernetes.io/hostname=gke-lubars-cluster-default-pool-f6ab1d26-s9x9,kubernetes.io/os=linux,node.kubernetes.io/instance-type=n1-standard-1,node.kubernetes.io/kube-proxy-ds-ready=true,topology.kubernetes.io/region=us-east1,topology.kubernetes.io/zone=us-east1-c
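The nine nodes split three per zone: us-east1-d (843ab83a-*), us-east1-b (e74ebe1e-*), us-east1-c (f6ab1d26-*), which matches the scheduler's "3 node(s) didn't match node selector, 6 node(s) had volume node affinity conflict" arithmetic. A more compact view than `--show-labels`:

```
# Show only the zone as an extra column (-L appends a label column).
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
```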
# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp: lookup hort on 169.254.169.254:53: no such host
# kubectl get pv pvc-5497dce3-0b5b-4224-8134-c3ffc842d495 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: "2020-05-13T04:16:02Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    failure-domain.beta.kubernetes.io/region: us-east1
    failure-domain.beta.kubernetes.io/zone: us-east1-d
  name: pvc-5497dce3-0b5b-4224-8134-c3ffc842d495
  resourceVersion: "193123"
  selfLink: /api/v1/persistentvolumes/pvc-5497dce3-0b5b-4224-8134-c3ffc842d495
  uid: 71f111fb-9ebe-4085-845b-2f77f9780252
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: iris-data-iris-data-0-1
    namespace: default
    resourceVersion: "193099"
    uid: 5497dce3-0b5b-4224-8134-c3ffc842d495
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-lubars-cluster-e8f-pvc-5497dce3-0b5b-4224-8134-c3ffc842d495
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - us-east1-d
        - key: failure-domain.beta.kubernetes.io/region
          operator: In
          values:
          - us-east1
  persistentVolumeReclaimPolicy: Delete
  storageClassName: iris-ssd-storageclass
  volumeMode: Filesystem
status:
  phase: Bound
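Root cause in one place: this PV's `spec.nodeAffinity` requires `failure-domain.beta.kubernetes.io/zone` in [us-east1-d], while the pod's required node affinity allows only us-east1-b and us-east1-c, so no node can ever satisfy both. Since the reclaim policy is Delete, one possible remediation (a sketch, and destructive for any data on that disk) is to drop the stuck claim and pod and let the StatefulSet re-provision, ideally after switching the StorageClass to WaitForFirstConsumer binding so the replacement disk lands in an allowed zone:

```
# Destroys the zone-d disk (reclaimPolicy: Delete) so a fresh PV can be
# provisioned in an allowed zone when the StatefulSet recreates the pod.
kubectl delete pvc iris-data-iris-data-0-1
kubectl delete pod iris-data-0-1
```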