@jakexks
Created November 10, 2017 17:37
Kubernetes scheduler issue
helm install --namespace "cassandra" --set persistence.storageClass=default -n "cassandra" --set resources.requests.memory=512Mi --set resources.requests.cpu=0.5 incubator/cassandra
NAME STATUS ROLES AGE VERSION LABELS
ip-10-39-0-113.ec2.internal Ready master 1d v1.8.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t2.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1a,kubernetes.io/hostname=master0,kubernetes.io/role=master,node-role.kubernetes.io/master=
ip-10-39-0-98.ec2.internal Ready worker 35m v1.8.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t2.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1a,kubernetes.io/hostname=ip-10-39-0-98,kubernetes.io/role=worker
ip-10-39-1-58.ec2.internal Ready worker 1d v1.8.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t2.medium,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1b,kubernetes.io/hostname=ip-10-39-1-58,kubernetes.io/role=worker

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736 10Gi RWO Delete Bound cassandra/data-cassandra-cassandra-1 default 4m failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1a
pvc-f74642a7-c639-11e7-9644-12d2abc93736 10Gi RWO Delete Bound cassandra/data-cassandra-cassandra-0 default 11m failure-domain.beta.kubernetes.io/region=us-east-1,failure-domain.beta.kubernetes.io/zone=us-east-1b

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE LABELS
data-cassandra-cassandra-0 Bound pvc-f74642a7-c639-11e7-9644-12d2abc93736 10Gi RWO default 11m app=cassandra-cassandra
data-cassandra-cassandra-1 Bound pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736 10Gi RWO default 4m app=cassandra-cassandra
I1110 17:11:58.274496 5 generic_scheduler.go:742] Node ip-10-39-1-58.ec2.internal is a potential node for preemption.
I1110 17:11:58.274511 5 generic_scheduler.go:742] Node ip-10-39-0-98.ec2.internal is a potential node for preemption.
I1110 17:11:58.274564 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:11:58.274577 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-1-58.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
I1110 17:11:58.274669 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:11:58.274681 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-0-98.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1a"}
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1b"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
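The jakexks: lines are custom debug output tracing the zone check: the constraints come from the bound PersistentVolume's failure-domain labels, and a node conflicts when any of its own topology labels differ. A minimal Python sketch of that comparison (an illustration only, not the scheduler's actual Go code; label keys and zone values are taken from the logs above):

```python
# Topology label keys as they appear in these 1.8-era logs ("beta" keys).
ZONE = "failure-domain.beta.kubernetes.io/zone"
REGION = "failure-domain.beta.kubernetes.io/region"

def volume_zone_conflict(node_labels, volume_constraints):
    """Return True when the node's topology labels contradict the
    constraints inherited from the pod's bound PersistentVolume."""
    for key, required in volume_constraints.items():
        if node_labels.get(key) != required:
            return True
    return False

# PV pvc-f74642a7... (claim data-cassandra-cassandra-0) is in us-east-1b,
# so only the us-east-1b worker should survive the check.
pv_constraints = {REGION: "us-east-1", ZONE: "us-east-1b"}
node_98 = {REGION: "us-east-1", ZONE: "us-east-1a"}  # ip-10-39-0-98
node_58 = {REGION: "us-east-1", ZONE: "us-east-1b"}  # ip-10-39-1-58

print(volume_zone_conflict(node_98, pv_constraints))  # True  -> NoVolumeZoneConflict
print(volume_zone_conflict(node_58, pv_constraints))  # False -> node is eligible
```

Under that logic, both failure cycles below are expected for the us-east-1a node; the puzzle in the logs is that the predicate output still lists two NoVolumeZoneConflict rejections.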
I1110 17:12:02.279044 5 factory.go:793] About to try and schedule pod cassandra-cassandra-0
I1110 17:12:02.279062 5 scheduler.go:301] Attempting to schedule pod: cassandra/cassandra-cassandra-0
I1110 17:12:02.279240 5 scheduler.go:176] Failed to schedule pod: cassandra/cassandra-cassandra-0
I1110 17:12:02.279271 5 factory.go:911] Unable to schedule cassandra cassandra-cassandra-0: no fit: No nodes are available that match all of the predicates: Insufficient cpu (3), Insufficient memory (3), NoVolumeZoneConflict (2), PodToleratesNodeTaints (1).; waiting
I1110 17:12:02.279327 5 factory.go:988] Updating pod condition for cassandra/cassandra-cassandra-0 to (PodScheduled==False)
I1110 17:12:02.279428 5 backoff_utils.go:79] Backing off 8s
I1110 17:12:02.279470 5 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cassandra", Name:"cassandra-cassandra-0", UID:"f7474a62-c639-11e7-9644-12d2abc93736", APIVersion:"v1", ResourceVersion:"133409", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' No nodes are available that match all of the predicates: Insufficient cpu (3), Insufficient memory (3), NoVolumeZoneConflict (2), PodToleratesNodeTaints (1).
I1110 17:12:02.284800 5 generic_scheduler.go:742] Node ip-10-39-1-58.ec2.internal is a potential node for preemption.
I1110 17:12:02.284816 5 generic_scheduler.go:742] Node ip-10-39-0-98.ec2.internal is a potential node for preemption.
I1110 17:12:02.285104 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:12:02.285140 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-0-98.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1a"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
I1110 17:12:02.285104 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:12:02.285309 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-1-58.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1b"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
I1110 17:12:10.283250 5 factory.go:793] About to try and schedule pod cassandra-cassandra-0
I1110 17:12:10.283268 5 scheduler.go:301] Attempting to schedule pod: cassandra/cassandra-cassandra-0
I1110 17:12:10.283476 5 scheduler.go:176] Failed to schedule pod: cassandra/cassandra-cassandra-0
I1110 17:12:10.283525 5 factory.go:911] Unable to schedule cassandra cassandra-cassandra-0: no fit: No nodes are available that match all of the predicates: Insufficient cpu (3), Insufficient memory (3), NoVolumeZoneConflict (2), PodToleratesNodeTaints (1).; waiting
I1110 17:12:10.283661 5 factory.go:988] Updating pod condition for cassandra/cassandra-cassandra-0 to (PodScheduled==False)
I1110 17:12:10.283672 5 backoff_utils.go:79] Backing off 16s
I1110 17:12:10.283952 5 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cassandra", Name:"cassandra-cassandra-0", UID:"f7474a62-c639-11e7-9644-12d2abc93736", APIVersion:"v1", ResourceVersion:"133409", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' No nodes are available that match all of the predicates: Insufficient cpu (3), Insufficient memory (3), NoVolumeZoneConflict (2), PodToleratesNodeTaints (1).
I1110 17:12:10.286934 5 generic_scheduler.go:742] Node ip-10-39-1-58.ec2.internal is a potential node for preemption.
I1110 17:12:10.286946 5 generic_scheduler.go:742] Node ip-10-39-0-98.ec2.internal is a potential node for preemption.
I1110 17:12:10.287019 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:12:10.287041 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:12:10.287055 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-1-58.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
I1110 17:12:10.287044 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-0-98.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1b"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1a"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
I1110 17:12:26.287876 5 factory.go:793] About to try and schedule pod cassandra-cassandra-0
I1110 17:12:26.287916 5 scheduler.go:301] Attempting to schedule pod: cassandra/cassandra-cassandra-0
I1110 17:12:26.288071 5 scheduler.go:176] Failed to schedule pod: cassandra/cassandra-cassandra-0
I1110 17:12:26.288095 5 factory.go:911] Unable to schedule cassandra cassandra-cassandra-0: no fit: No nodes are available that match all of the predicates: Insufficient cpu (3), Insufficient memory (3), NoVolumeZoneConflict (2), PodToleratesNodeTaints (1).; waiting
I1110 17:12:26.288151 5 factory.go:988] Updating pod condition for cassandra/cassandra-cassandra-0 to (PodScheduled==False)
I1110 17:12:26.288239 5 backoff_utils.go:79] Backing off 32s
I1110 17:12:26.288267 5 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cassandra", Name:"cassandra-cassandra-0", UID:"f7474a62-c639-11e7-9644-12d2abc93736", APIVersion:"v1", ResourceVersion:"133409", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' No nodes are available that match all of the predicates: Insufficient cpu (3), Insufficient memory (3), NoVolumeZoneConflict (2), PodToleratesNodeTaints (1).
I1110 17:12:26.291336 5 generic_scheduler.go:742] Node ip-10-39-1-58.ec2.internal is a potential node for preemption.
I1110 17:12:26.291353 5 generic_scheduler.go:742] Node ip-10-39-0-98.ec2.internal is a potential node for preemption.
I1110 17:12:26.291422 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:12:26.291435 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-1-58.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1b"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
I1110 17:12:26.291488 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:12:26.291495 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-0-98.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1a"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
I1110 17:12:35.864806 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1beta1.StatefulSet total 2 items received
I1110 17:12:41.859088 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1.Service total 1 items received
I1110 17:12:55.900069 5 factory.go:793] About to try and schedule pod cassandra-cassandra-0
I1110 17:12:55.900110 5 scheduler.go:297] Skip schedule deleting pod: cassandra/cassandra-cassandra-0
I1110 17:12:55.900284 5 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cassandra", Name:"cassandra-cassandra-0", UID:"f7474a62-c639-11e7-9644-12d2abc93736", APIVersion:"v1", ResourceVersion:"133687", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' skip schedule deleting pod: cassandra/cassandra-cassandra-0
W1110 17:12:58.292127 5 factory.go:942] A pod cassandra/cassandra-cassandra-0 no longer exists
I1110 17:13:52.860900 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1.PersistentVolumeClaim total 4 items received
I1110 17:14:09.864447 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1.ReplicationController total 0 items received
I1110 17:14:25.021900 5 factory.go:793] About to try and schedule pod cassandra-cassandra-0
I1110 17:14:25.021920 5 scheduler.go:301] Attempting to schedule pod: cassandra/cassandra-cassandra-0
I1110 17:14:25.022196 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:14:25.022215 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-1-58.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1b"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
I1110 17:14:25.022340 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:14:25.022358 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-0-98.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1a"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
I1110 17:14:25.022520 5 predicates.go:1395] Checking for prebound volumes with node affinity
I1110 17:14:25.022533 5 predicates.go:1433] VolumeNode predicate allows node "ip-10-39-0-113.ec2.internal" for pod "cassandra-cassandra-0" due to volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
jakexks: nodeConstraints: map[string]string{"failure-domain.beta.kubernetes.io/region":"us-east-1", "failure-domain.beta.kubernetes.io/zone":"us-east-1a"}
jakexks: Examining label failure-domain.beta.kubernetes.io/region = us-east-1
jakexks: Examining label failure-domain.beta.kubernetes.io/zone = us-east-1b
I1110 17:14:25.022896 5 factory.go:979] Attempting to bind cassandra-cassandra-0 to ip-10-39-1-58.ec2.internal
I1110 17:14:25.034146 5 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cassandra", Name:"cassandra-cassandra-0", UID:"99e0badc-c63a-11e7-9644-12d2abc93736", APIVersion:"v1", ResourceVersion:"133830", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cassandra-cassandra-0 to ip-10-39-1-58.ec2.internal
I1110 17:14:25.859816 5 reflector.go:421] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:103: Watch close - *v1.Pod total 14 items received
I1110 17:16:22.863530 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1beta1.ReplicaSet total 0 items received
I1110 17:16:24.862326 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1.PersistentVolume total 2 items received
I1110 17:16:44.763934 5 factory.go:793] About to try and schedule pod cassandra-cassandra-1
I1110 17:16:44.763955 5 scheduler.go:301] Attempting to schedule pod: cassandra/cassandra-cassandra-1
I1110 17:16:44.764641 5 factory.go:979] Attempting to bind cassandra-cassandra-1 to ip-10-39-1-58.ec2.internal
I1110 17:16:44.774896 5 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"cassandra", Name:"cassandra-cassandra-1", UID:"ed2bb77f-c63a-11e7-9644-12d2abc93736", APIVersion:"v1", ResourceVersion:"134050", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cassandra-cassandra-1 to ip-10-39-1-58.ec2.internal
I1110 17:17:13.842409 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1.Node total 176 items received
I1110 17:19:42.868505 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1.ReplicationController total 0 items received
I1110 17:19:45.863883 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1.PersistentVolumeClaim total 4 items received
I1110 17:20:08.867520 5 reflector.go:421] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:73: Watch close - *v1beta1.StatefulSet total 7 items received
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: default
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zones: us-east-1a, us-east-1b
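One way to sidestep the mismatch with a multi-zone cluster like this (a sketch of a workaround, not a fix confirmed in this thread): restrict dynamic provisioning to a single availability zone, so every PV lands where the pod can also run. The class name default-us-east-1b is hypothetical; the aws-ebs provisioner's singular zone parameter replaces the plural zones above:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: default-us-east-1b   # hypothetical name; set persistence.storageClass to it
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1b           # one zone only, instead of "zones: us-east-1a, us-east-1b"
```

This trades zone spread for predictable scheduling; with both pods' volumes in one zone, NoVolumeZoneConflict and the EBS attach check can no longer disagree.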
jakexks commented Nov 10, 2017

Additionally, the output of kubectl describe pods -n cassandra:

Name:           cassandra-cassandra-0
Namespace:      cassandra
Node:           ip-10-39-1-58.ec2.internal/10.39.1.58
Start Time:     Fri, 10 Nov 2017 17:14:25 +0000
Labels:         app=cassandra-cassandra
                controller-revision-hash=cassandra-cassandra-6558cfdfbf
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"cassandra","name":"cassandra-cassandra","uid":"99dd91c0-c63a-11e7-9644-12d2abc93...
Status:         Running
IP:             192.168.58.135
Created By:     StatefulSet/cassandra-cassandra
Controlled By:  StatefulSet/cassandra-cassandra
Containers:
  cassandra-cassandra:
    Container ID:   docker://db70250fdf14d4dd5aeaa0a66335fa4580411a7a8a3a0adf651d385d5ed7dbcf
    Image:          cassandra:3
    Image ID:       docker-pullable://cassandra@sha256:afe579efbad590ac59992b2984d9010184e2f5c1e24e5f1107dde7dd74fd7913
    Ports:          7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP, 9160/TCP
    State:          Running
      Started:      Fri, 10 Nov 2017 17:14:54 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:      500m
      memory:   512Mi
    Liveness:   exec [/bin/sh -c nodetool status] delay=90s timeout=1s period=30s #success=1 #failure=3
    Readiness:  exec [/bin/sh -c nodetool status | grep -E "^UN\s+${POD_IP}"] delay=90s timeout=1s period=30s #success=1 #failure=3
    Environment:
      CASSANDRA_SEEDS:            cassandra-cassandra-0.cassandra-cassandra.cassandra.svc.cluster.local,cassandra-cassandra-1.cassandra-cassandra.cassandra.svc.cluster.local,
      MAX_HEAP_SIZE:              2048M
      HEAP_NEWSIZE:               512M
      CASSANDRA_ENDPOINT_SNITCH:  SimpleSnitch
      CASSANDRA_CLUSTER_NAME:     cassandra
      CASSANDRA_DC:               DC1
      CASSANDRA_RACK:             RAC1
      POD_IP:                      (v1:status.podIP)
    Mounts:
      /var/lib/cassandra from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4lbkl (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-cassandra-cassandra-0
    ReadOnly:   false
  default-token-4lbkl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4lbkl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                 Message
  ----    ------                 ----  ----                                 -------
  Normal  Scheduled              45m   default-scheduler                    Successfully assigned cassandra-cassandra-0 to ip-10-39-1-58.ec2.internal
  Normal  SuccessfulMountVolume  45m   kubelet, ip-10-39-1-58.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-4lbkl"
  Normal  SuccessfulMountVolume  44m   kubelet, ip-10-39-1-58.ec2.internal  MountVolume.SetUp succeeded for volume "pvc-f74642a7-c639-11e7-9644-12d2abc93736"
  Normal  Pulling                44m   kubelet, ip-10-39-1-58.ec2.internal  pulling image "cassandra:3"
  Normal  Pulled                 44m   kubelet, ip-10-39-1-58.ec2.internal  Successfully pulled image "cassandra:3"
  Normal  Created                44m   kubelet, ip-10-39-1-58.ec2.internal  Created container
  Normal  Started                44m   kubelet, ip-10-39-1-58.ec2.internal  Started container


Name:           cassandra-cassandra-1
Namespace:      cassandra
Node:           ip-10-39-1-58.ec2.internal/10.39.1.58
Start Time:     Fri, 10 Nov 2017 17:16:44 +0000
Labels:         app=cassandra-cassandra
                controller-revision-hash=cassandra-cassandra-6558cfdfbf
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"StatefulSet","namespace":"cassandra","name":"cassandra-cassandra","uid":"99dd91c0-c63a-11e7-9644-12d2abc93...
Status:         Pending
IP:             
Created By:     StatefulSet/cassandra-cassandra
Controlled By:  StatefulSet/cassandra-cassandra
Containers:
  cassandra-cassandra:
    Container ID:   
    Image:          cassandra:3
    Image ID:       
    Ports:          7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP, 9160/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  4Gi
    Requests:
      cpu:      500m
      memory:   512Mi
    Liveness:   exec [/bin/sh -c nodetool status] delay=90s timeout=1s period=30s #success=1 #failure=3
    Readiness:  exec [/bin/sh -c nodetool status | grep -E "^UN\s+${POD_IP}"] delay=90s timeout=1s period=30s #success=1 #failure=3
    Environment:
      CASSANDRA_SEEDS:            cassandra-cassandra-0.cassandra-cassandra.cassandra.svc.cluster.local,cassandra-cassandra-1.cassandra-cassandra.cassandra.svc.cluster.local,
      MAX_HEAP_SIZE:              2048M
      HEAP_NEWSIZE:               512M
      CASSANDRA_ENDPOINT_SNITCH:  SimpleSnitch
      CASSANDRA_CLUSTER_NAME:     cassandra
      CASSANDRA_DC:               DC1
      CASSANDRA_RACK:             RAC1
      POD_IP:                      (v1:status.podIP)
    Mounts:
      /var/lib/cassandra from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4lbkl (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-cassandra-cassandra-1
    ReadOnly:   false
  default-token-4lbkl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4lbkl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From                                 Message
  ----     ------                 ----               ----                                 -------
  Normal   Scheduled              42m                default-scheduler                    Successfully assigned cassandra-cassandra-1 to ip-10-39-1-58.ec2.internal
  Normal   SuccessfulMountVolume  42m                kubelet, ip-10-39-1-58.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-4lbkl"
  Warning  FailedMount            39m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: c4d47878-f3ce-43e8-8078-3e2ce847cb05"
  Warning  FailedMount            39m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: 6d96707b-4c11-453d-bce8-85eab2db5418"
  Warning  FailedMount            39m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: 69967949-01fe-43a0-abbb-1690bd13ae53"
  Warning  FailedMount            39m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: 47e2726f-3e36-42eb-86ee-e06522f1bae8"
  Warning  FailedMount            39m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: 0fcd222d-a718-4eef-8f34-b5a80fca6df5"
  Warning  FailedMount            39m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: 6f6e7485-9f10-441d-bad4-8df47f4cf5e4"
  Warning  FailedMount            39m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: 15d63338-5a01-4cc4-9067-824b63c1e936"
  Warning  FailedMount            38m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: 2de57802-bd2f-4052-8f8a-b9c641712bc6"
  Warning  FailedMount            37m                attachdetach                         AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: c6051faf-2922-47aa-a3dd-1b72cde2355a"
  Warning  FailedMount            9m (x15 over 40m)  kubelet, ip-10-39-1-58.ec2.internal  Unable to mount volumes for pod "cassandra-cassandra-1_cassandra(ed2bb77f-c63a-11e7-9644-12d2abc93736)": timeout expired waiting for volumes to attach/mount for pod "cassandra"/"cassandra-cassandra-1". list of unattached/unmounted volumes=[data]
  Warning  FailedSync             2m (x18 over 40m)  kubelet, ip-10-39-1-58.ec2.internal  Error syncing pod
  Warning  FailedMount            1m (x18 over 35m)  attachdetach                         (combined from similar events): AttachVolume.Attach failed for volume "pvc-ed2a5ae6-c63a-11e7-9644-12d2abc93736" : Error attaching EBS volume "vol-08507ddc89ae593ab" to instance "i-07c30c1ebb6e99640": "InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'\n\tstatus code: 400, request id: f6f4f630-7865-4e41-a3fb-3ec7970c1f24"
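The FailedMount storm above is EC2 enforcing the same invariant at attach time: an EBS volume can only be attached to an instance in its own availability zone, so once the scheduler placed cassandra-cassandra-1 in the wrong zone, every attach retry was doomed. A small illustrative model of that check (not AWS's implementation; volume and instance IDs are copied from the events above):

```python
def attach(volume_id, volume_az, instance_id, instance_az):
    """Mimic EC2's attach-time invariant: return an error string modeled on
    the AWS API response when the zones differ, or None when attach could
    proceed."""
    if volume_az != instance_az:
        return ("InvalidVolume.ZoneMismatch: The volume '%s' is not in the "
                "same availability zone as instance '%s'"
                % (volume_id, instance_id))
    return None

# vol-08507ddc89ae593ab was provisioned in us-east-1a, but the pod was
# scheduled onto i-07c30c1ebb6e99640, which sits in us-east-1b.
err = attach("vol-08507ddc89ae593ab", "us-east-1a",
             "i-07c30c1ebb6e99640", "us-east-1b")
print(err)  # InvalidVolume.ZoneMismatch: The volume 'vol-08507ddc89ae593ab' is not in the same availability zone as instance 'i-07c30c1ebb6e99640'
```

Unlike the scheduler predicate, this check never times out or backs off into success: the kubelet keeps waiting for a mount that can never happen, which is exactly the 35-minute event history shown.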
