-- Logs begin at Fri 2017-12-01 15:48:52 UTC, end at Fri 2017-12-01 16:21:27 UTC. --
Dec 01 15:49:34 minikube systemd[1]: Starting Localkube...
Dec 01 15:49:35 minikube localkube[3201]: listening for peers on http://localhost:2380
Dec 01 15:49:35 minikube localkube[3201]: listening for client requests on localhost:2379
Dec 01 15:49:35 minikube localkube[3201]: name = default
Dec 01 15:49:35 minikube localkube[3201]: data dir = /var/lib/localkube/etcd
Dec 01 15:49:35 minikube localkube[3201]: member dir = /var/lib/localkube/etcd/member
Dec 01 15:49:35 minikube localkube[3201]: heartbeat = 100ms
Dec 01 15:49:35 minikube localkube[3201]: election = 1000ms
Dec 01 15:49:35 minikube localkube[3201]: snapshot count = 10000
Dec 01 15:49:35 minikube localkube[3201]: advertise client URLs = http://localhost:2379
Dec 01 15:49:35 minikube localkube[3201]: initial advertise peer URLs = http://localhost:2380
Dec 01 15:49:35 minikube localkube[3201]: initial cluster = default=http://localhost:2380
Dec 01 15:49:35 minikube localkube[3201]: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
Dec 01 15:49:35 minikube localkube[3201]: 8e9e05c52164694d became follower at term 0
Dec 01 15:49:35 minikube localkube[3201]: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Dec 01 15:49:35 minikube localkube[3201]: 8e9e05c52164694d became follower at term 1
Dec 01 15:49:35 minikube localkube[3201]: starting server... [version: 3.1.10, cluster version: to_be_decided]
Dec 01 15:49:35 minikube localkube[3201]: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Dec 01 15:49:36 minikube localkube[3201]: 8e9e05c52164694d is starting a new election at term 1
Dec 01 15:49:36 minikube localkube[3201]: 8e9e05c52164694d became candidate at term 2
Dec 01 15:49:36 minikube localkube[3201]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
Dec 01 15:49:36 minikube localkube[3201]: 8e9e05c52164694d became leader at term 2
Dec 01 15:49:36 minikube localkube[3201]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Dec 01 15:49:36 minikube localkube[3201]: setting up the initial cluster version to 3.1
Dec 01 15:49:36 minikube localkube[3201]: set the initial cluster version to 3.1
Dec 01 15:49:36 minikube localkube[3201]: enabled capabilities for version 3.1
Dec 01 15:49:36 minikube localkube[3201]: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
Dec 01 15:49:36 minikube localkube[3201]: I1201 15:49:36.223284    3201 etcd.go:58] Etcd server is ready
Dec 01 15:49:36 minikube localkube[3201]: localkube host ip address: 10.0.2.15
Dec 01 15:49:36 minikube localkube[3201]: Starting apiserver...
Dec 01 15:49:36 minikube localkube[3201]: Waiting for apiserver to be healthy...
Dec 01 15:49:36 minikube localkube[3201]: ready to serve client requests
Dec 01 15:49:36 minikube localkube[3201]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Dec 01 15:49:36 minikube localkube[3201]: I1201 15:49:36.224397    3201 server.go:114] Version: v1.8.0
Dec 01 15:49:36 minikube localkube[3201]: W1201 15:49:36.224627    3201 authentication.go:380] AnonymousAuth is not allowed with the AllowAll authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
Dec 01 15:49:36 minikube localkube[3201]: I1201 15:49:36.225070    3201 plugins.go:101] No cloud provider specified.
Dec 01 15:49:36 minikube localkube[3201]: [restful] 2017/12/01 15:49:36 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi
Dec 01 15:49:36 minikube localkube[3201]: [restful] 2017/12/01 15:49:36 log.go:33: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Dec 01 15:49:37 minikube localkube[3201]: [restful] 2017/12/01 15:49:37 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi
Dec 01 15:49:37 minikube localkube[3201]: [restful] 2017/12/01 15:49:37 log.go:33: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Dec 01 15:49:37 minikube localkube[3201]: I1201 15:49:37.224128    3201 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 01 15:49:37 minikube localkube[3201]: E1201 15:49:37.225491    3201 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 01 15:49:38 minikube localkube[3201]: I1201 15:49:38.224957    3201 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 01 15:49:38 minikube localkube[3201]: E1201 15:49:38.225642    3201 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.224584    3201 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 01 15:49:39 minikube localkube[3201]: E1201 15:49:39.225309    3201 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.318737    3201 aggregator.go:138] Skipping APIService creation for scheduling.k8s.io/v1alpha1
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.319448    3201 serve.go:85] Serving securely on 0.0.0.0:8443
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.319693    3201 available_controller.go:192] Starting AvailableConditionController
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.319711    3201 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Dec 01 15:49:39 minikube systemd[1]: Started Localkube.
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.321015    3201 controller.go:84] Starting OpenAPI AggregationController
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.322160    3201 crd_finalizer.go:242] Starting CRDFinalizer
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.322490    3201 apiservice_controller.go:112] Starting APIServiceRegistrationController
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.322589    3201 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.322684    3201 crdregistration_controller.go:112] Starting crd-autoregister controller
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.322768    3201 controller_utils.go:1041] Waiting for caches to sync for crd-autoregister controller
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.323444    3201 customresource_discovery_controller.go:152] Starting DiscoveryController
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.323589    3201 naming_controller.go:277] Starting NamingConditionController
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.420373    3201 cache.go:39] Caches are synced for AvailableConditionController controller
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.422862    3201 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.423235    3201 controller_utils.go:1048] Caches are synced for crd-autoregister controller
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.423446    3201 autoregister_controller.go:136] Starting autoregister controller
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.423574    3201 cache.go:32] Waiting for caches to sync for autoregister controller
Dec 01 15:49:39 minikube localkube[3201]: I1201 15:49:39.523780    3201 cache.go:39] Caches are synced for autoregister controller
Dec 01 15:49:40 minikube localkube[3201]: I1201 15:49:40.224177    3201 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 01 15:49:40 minikube localkube[3201]: I1201 15:49:40.231541    3201 ready.go:49] Got healthcheck response: [+]ping ok
Dec 01 15:49:40 minikube localkube[3201]: [+]etcd ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/start-apiextensions-informers ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/start-apiextensions-controllers ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/bootstrap-controller ok
Dec 01 15:49:40 minikube localkube[3201]: [-]poststarthook/ca-registration failed: reason withheld
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/start-kube-apiserver-informers ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/apiservice-registration-controller ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/apiservice-status-available-controller ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/apiservice-openapi-controller ok
Dec 01 15:49:40 minikube localkube[3201]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 01 15:49:40 minikube localkube[3201]: [+]autoregister-completion ok
Dec 01 15:49:40 minikube localkube[3201]: healthz check failed
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.224133    3201 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.239856    3201 ready.go:49] Got healthcheck response: ok
Dec 01 15:49:41 minikube localkube[3201]: apiserver is ready!
Dec 01 15:49:41 minikube localkube[3201]: Starting controller-manager...
Dec 01 15:49:41 minikube localkube[3201]: Waiting for controller-manager to be healthy...
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.240823    3201 controllermanager.go:109] Version: v1.8.0
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.245662    3201 leaderelection.go:174] attempting to acquire leader lease...
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.260893    3201 leaderelection.go:184] successfully acquired lease kube-system/kube-controller-manager
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.261693    3201 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"3e663e68-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"35", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.276575    3201 plugins.go:101] No cloud provider specified.
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.278764    3201 controller_utils.go:1041] Waiting for caches to sync for tokens controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.279590    3201 controllermanager.go:487] Started "statefulset"
Dec 01 15:49:41 minikube localkube[3201]: E1201 15:49:41.280499    3201 core.go:70] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Dec 01 15:49:41 minikube localkube[3201]: W1201 15:49:41.280613    3201 controllermanager.go:484] Skipping "service"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.280594    3201 stateful_set.go:146] Starting stateful set controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.281808    3201 controller_utils.go:1041] Waiting for caches to sync for stateful set controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.281733    3201 controllermanager.go:487] Started "replicaset"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.281746    3201 replica_set.go:156] Starting replica set controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.282063    3201 controller_utils.go:1041] Waiting for caches to sync for replica set controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.283528    3201 controllermanager.go:487] Started "disruption"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.283555    3201 disruption.go:288] Starting disruption controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.283790    3201 controller_utils.go:1041] Waiting for caches to sync for disruption controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.285032    3201 controllermanager.go:487] Started "daemonset"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.285120    3201 daemon_controller.go:230] Starting daemon sets controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.285446    3201 controller_utils.go:1041] Waiting for caches to sync for daemon sets controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.286634    3201 controllermanager.go:487] Started "job"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.286666    3201 job_controller.go:138] Starting job controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.286857    3201 controller_utils.go:1041] Waiting for caches to sync for job controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.288074    3201 controllermanager.go:487] Started "deployment"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.288124    3201 deployment_controller.go:151] Starting deployment controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.288257    3201 controller_utils.go:1041] Waiting for caches to sync for deployment controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.291107    3201 controllermanager.go:487] Started "horizontalpodautoscaling"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.291188    3201 horizontal.go:145] Starting HPA controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.291308    3201 controller_utils.go:1041] Waiting for caches to sync for HPA controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.292368    3201 controllermanager.go:487] Started "ttl"
Dec 01 15:49:41 minikube localkube[3201]: W1201 15:49:41.292500    3201 controllermanager.go:471] "bootstrapsigner" is disabled
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.292427    3201 ttl_controller.go:116] Starting TTL controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.292755    3201 controller_utils.go:1041] Waiting for caches to sync for TTL controller
Dec 01 15:49:41 minikube localkube[3201]: W1201 15:49:41.293861    3201 shared_informer.go:304] resyncPeriod 53852933650778 is smaller than resyncCheckPeriod 68540285395639 and the informer has already started. Changing it to 68540285395639
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.293988    3201 controllermanager.go:487] Started "resourcequota"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.294051    3201 resource_quota_controller.go:238] Starting resource quota controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.294170    3201 controller_utils.go:1041] Waiting for caches to sync for resource quota controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.295293    3201 controllermanager.go:487] Started "serviceaccount"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.295341    3201 serviceaccounts_controller.go:113] Starting service account controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.295521    3201 controller_utils.go:1041] Waiting for caches to sync for service account controller
Dec 01 15:49:41 minikube localkube[3201]: W1201 15:49:41.296504    3201 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.305464    3201 controllermanager.go:487] Started "attachdetach"
Dec 01 15:49:41 minikube localkube[3201]: W1201 15:49:41.305541    3201 controllermanager.go:484] Skipping "persistentvolume-expander"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.309547    3201 attach_detach_controller.go:255] Starting attach detach controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.309696    3201 controller_utils.go:1041] Waiting for caches to sync for attach detach controller
Dec 01 15:49:41 minikube localkube[3201]: E1201 15:49:41.316947    3201 certificates.go:48] Failed to start certificate controller: error reading CA cert file "/etc/kubernetes/ca/ca.pem": open /etc/kubernetes/ca/ca.pem: no such file or directory
Dec 01 15:49:41 minikube localkube[3201]: W1201 15:49:41.316976    3201 controllermanager.go:484] Skipping "csrsigning"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.324623    3201 controllermanager.go:487] Started "persistentvolume-binder"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.332736    3201 pv_controller_base.go:259] Starting persistent volume controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.332967    3201 controller_utils.go:1041] Waiting for caches to sync for persistent volume controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.335113    3201 controllermanager.go:487] Started "replicationcontroller"
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.345101    3201 replication_controller.go:151] Starting RC controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.345125    3201 controller_utils.go:1041] Waiting for caches to sync for RC controller
Dec 01 15:49:41 minikube localkube[3201]: I1201 15:49:41.379166    3201 controller_utils.go:1048] Caches are synced for tokens controller
Dec 01 15:49:42 minikube localkube[3201]: controller-manager is ready!
Dec 01 15:49:42 minikube localkube[3201]: Starting scheduler...
Dec 01 15:49:42 minikube localkube[3201]: Waiting for scheduler to be healthy...
Dec 01 15:49:42 minikube localkube[3201]: E1201 15:49:42.245671    3201 server.go:173] unable to register configz: register config "componentconfig" twice
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.540085    3201 controllermanager.go:487] Started "garbagecollector"
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.540093    3201 garbagecollector.go:136] Starting garbage collector controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.540557    3201 controller_utils.go:1041] Waiting for caches to sync for garbage collector controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.540688    3201 graph_builder.go:321] GraphBuilder running
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.559317    3201 controllermanager.go:487] Started "namespace"
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.559581    3201 namespace_controller.go:186] Starting namespace controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.559709    3201 controller_utils.go:1041] Waiting for caches to sync for namespace controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.560584    3201 controllermanager.go:487] Started "cronjob"
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.560798    3201 cronjob_controller.go:98] Starting CronJob Manager
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.561658    3201 controllermanager.go:487] Started "csrapproving"
Dec 01 15:49:42 minikube localkube[3201]: W1201 15:49:42.561699    3201 controllermanager.go:471] "tokencleaner" is disabled
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.562643    3201 node_controller.go:249] Sending events to api server.
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.562739    3201 taint_controller.go:158] Sending events to api server.
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.562778    3201 controllermanager.go:487] Started "node"
Dec 01 15:49:42 minikube localkube[3201]: W1201 15:49:42.562786    3201 core.go:128] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.562793    3201 core.go:131] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Dec 01 15:49:42 minikube localkube[3201]: W1201 15:49:42.562797    3201 controllermanager.go:484] Skipping "route"
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.563880    3201 controllermanager.go:487] Started "endpoint"
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.564252    3201 certificate_controller.go:109] Starting certificate controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.564398    3201 controller_utils.go:1041] Waiting for caches to sync for certificate controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.564591    3201 node_controller.go:516] Starting node controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.564731    3201 controller_utils.go:1041] Waiting for caches to sync for node controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.564856    3201 controllermanager.go:487] Started "podgc"
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.565052    3201 endpoints_controller.go:153] Starting endpoint controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.565184    3201 controller_utils.go:1041] Waiting for caches to sync for endpoint controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.565062    3201 gc_controller.go:76] Starting GC controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.565479    3201 controller_utils.go:1041] Waiting for caches to sync for GC controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.588203    3201 controller_utils.go:1048] Caches are synced for job controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.588408    3201 controller_utils.go:1048] Caches are synced for disruption controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.588872    3201 disruption.go:296] Sending events to api server.
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.589011    3201 controller_utils.go:1048] Caches are synced for deployment controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.592957    3201 controller_utils.go:1048] Caches are synced for TTL controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.594284    3201 controller_utils.go:1048] Caches are synced for resource quota controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.610162    3201 controller_utils.go:1048] Caches are synced for attach detach controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.635930    3201 controller_utils.go:1048] Caches are synced for persistent volume controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.645421    3201 controller_utils.go:1048] Caches are synced for RC controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.665532    3201 controller_utils.go:1048] Caches are synced for node controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.665621    3201 taint_controller.go:181] Starting NoExecuteTaintManager
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.665642    3201 controller_utils.go:1048] Caches are synced for endpoint controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.666317    3201 controller_utils.go:1048] Caches are synced for GC controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.682197    3201 controller_utils.go:1048] Caches are synced for replica set controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.682287    3201 controller_utils.go:1048] Caches are synced for stateful set controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.691652    3201 controller_utils.go:1048] Caches are synced for HPA controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.695774    3201 controller_utils.go:1048] Caches are synced for service account controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.760446    3201 controller_utils.go:1048] Caches are synced for namespace controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.764942    3201 controller_utils.go:1048] Caches are synced for certificate controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.885864    3201 controller_utils.go:1048] Caches are synced for daemon sets controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.941400    3201 controller_utils.go:1048] Caches are synced for garbage collector controller
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.941458    3201 garbagecollector.go:145] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Dec 01 15:49:42 minikube localkube[3201]: I1201 15:49:42.947617    3201 controller_utils.go:1041] Waiting for caches to sync for scheduler controller
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.047846    3201 controller_utils.go:1048] Caches are synced for scheduler controller
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.048257    3201 leaderelection.go:174] attempting to acquire leader lease...
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.055294    3201 leaderelection.go:184] successfully acquired lease kube-system/kube-scheduler
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.055871    3201 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"3f77b948-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"46", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Dec 01 15:49:43 minikube localkube[3201]: scheduler is ready!
Dec 01 15:49:43 minikube localkube[3201]: Starting kubelet...
Dec 01 15:49:43 minikube localkube[3201]: Waiting for kubelet to be healthy...
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.241881    3201 feature_gate.go:156] feature gates: map[]
Dec 01 15:49:43 minikube localkube[3201]: W1201 15:49:43.242109    3201 server.go:276] --require-kubeconfig is deprecated. Set --kubeconfig without using --require-kubeconfig.
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.561451    3201 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.561510    3201 client.go:95] Start docker client with request timeout=2m0s
Dec 01 15:49:43 minikube localkube[3201]: W1201 15:49:43.565129    3201 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.609220    3201 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/localkube.service"
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.633665    3201 fs.go:139] Filesystem UUIDs: map[00578285-6fcc-41b6-b5c1-d511458135b8:/dev/sda1 2017-10-19-17-24-41-00:/dev/sr0 a41b6062-582e-4abd-bc79-fb110aa14abb:/dev/sda2]
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.633683    3201 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.634513    3201 manager.go:216] Machine: {NumCores:2 CpuFrequency:3493294 MemoryCapacity:2097229824 HugePages:[{PageSize:2048 NumPages:0}] MachineID:0f115bc1efcb424a9baba6b5160e77e3 SystemUUID:F413E9FA-A9B7-46F1-AEC3-5E023752E5B1 BootID:4eee9fe4-17be-4daa-a4eb-3ea286ee35e6 Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:1048612864 Type:vfs Inodes:256009 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:17293533184 Type:vfs Inodes:9732096 HasInodes:true} {Device:rootfs DeviceMajor:0 DeviceMinor:1 Capacity:0 Type:vfs Inodes:0 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:08:00:27:aa:c0:58 Speed:-1 Mtu:1500} {Name:eth1 MacAddress:08:00:27:71:87:71 Speed:-1 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097229824 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:8388608 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:8388608 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.637786    3201 manager.go:222] Version: {KernelVersion:4.9.13 ContainerOsVersion:Buildroot 2017.02 DockerVersion:17.06.0-ce DockerAPIVersion:1.30 CadvisorVersion: CadvisorRevision:}
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.638469    3201 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.640064    3201 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.640270    3201 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.641107    3201 container_manager_linux.go:288] Creating device plugin handler: false
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.641229    3201 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.641253    3201 kubelet.go:283] Watching apiserver
Dec 01 15:49:43 minikube localkube[3201]: W1201 15:49:43.648092    3201 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.648335    3201 kubelet.go:517] Hairpin mode set to "hairpin-veth"
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.652843    3201 docker_service.go:207] Docker cri networking managed by kubernetes.io/no-op
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.658958    3201 docker_service.go:224] Setting cgroupDriver to cgroupfs
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.670372    3201 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.681914    3201 kuberuntime_manager.go:174] Container runtime docker initialized, version: 17.06.0-ce, apiVersion: 1.30.0
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.682292    3201 kuberuntime_manager.go:898] updating runtime config through cri with podcidr 10.180.1.0/24
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.682572    3201 docker_service.go:306] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.180.1.0/24,},}
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.682874    3201 kubelet_network.go:276] Setting Pod CIDR:  -> 10.180.1.0/24
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.683809    3201 server.go:718] Started kubelet v1.8.0
Dec 01 15:49:43 minikube localkube[3201]: E1201 15:49:43.684152    3201 kubelet.go:1234] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.684541    3201 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.684738    3201 server.go:128] Starting to listen on 0.0.0.0:10250
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.685269    3201 server.go:296] Adding debug handlers to kubelet server.
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.706705    3201 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.706764    3201 status_manager.go:140] Starting to sync pod status with apiserver
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.706778    3201 kubelet.go:1768] Starting kubelet main sync loop.
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.706792    3201 kubelet.go:1779] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Dec 01 15:49:43 minikube localkube[3201]: E1201 15:49:43.706991    3201 container_manager_linux.go:603] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.707011    3201 volume_manager.go:246] Starting Kubelet Volume Manager
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.717565    3201 factory.go:355] Registering Docker factory
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.718331    3201 factory.go:89] Registering Rkt factory
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.720717    3201 factory.go:157] Registering CRI-O factory
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.720846    3201 factory.go:54] Registering systemd factory
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.721056    3201 factory.go:86] Registering Raw factory
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.721483    3201 manager.go:1140] Started watching for new ooms in manager
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.721881    3201 manager.go:311] Starting recovery of all containers
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.762487    3201 manager.go:316] Recovery completed
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.763208    3201 rkt.go:56] starting detectRktContainers thread
Dec 01 15:49:43 minikube localkube[3201]: E1201 15:49:43.793154    3201 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'minikube' not found
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.807506    3201 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.809067    3201 kubelet_node_status.go:83] Attempting to register node minikube
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.811975    3201 kubelet_node_status.go:86] Successfully registered node minikube
Dec 01 15:49:43 minikube localkube[3201]: E1201 15:49:43.812966    3201 actual_state_of_world.go:483] Failed to set statusUpdateNeeded to needed true because nodeName="minikube" does not exist
Dec 01 15:49:43 minikube localkube[3201]: E1201 15:49:43.813006    3201 actual_state_of_world.go:497] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="minikube" does not exist
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.814367    3201 kuberuntime_manager.go:898] updating runtime config through cri with podcidr
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.814461    3201 docker_service.go:306] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}
Dec 01 15:49:43 minikube localkube[3201]: I1201 15:49:43.814546    3201 kubelet_network.go:276] Setting Pod CIDR: 10.180.1.0/24 ->
Dec 01 15:49:44 minikube localkube[3201]: kubelet is ready!
Dec 01 15:49:44 minikube localkube[3201]: Starting proxy...
Dec 01 15:49:44 minikube localkube[3201]: Waiting for proxy to be healthy...
Dec 01 15:49:44 minikube localkube[3201]: W1201 15:49:44.242421    3201 server_others.go:63] unable to register configz: register config "componentconfig" twice
Dec 01 15:49:44 minikube localkube[3201]: I1201 15:49:44.259308    3201 server_others.go:117] Using iptables Proxier.
Dec 01 15:49:44 minikube localkube[3201]: W1201 15:49:44.267166    3201 proxier.go:473] clusterCIDR not specified, unable to distinguish between internal and external traffic
Dec 01 15:49:44 minikube localkube[3201]: I1201 15:49:44.267258    3201 server_others.go:152] Tearing down inactive rules.
Dec 01 15:49:44 minikube localkube[3201]: E1201 15:49:44.281246    3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 01 15:49:44 minikube localkube[3201]: I1201 15:49:44.281276    3201 config.go:202] Starting service config controller
Dec 01 15:49:44 minikube localkube[3201]: I1201 15:49:44.281282    3201 controller_utils.go:1041] Waiting for caches to sync for service config controller
Dec 01 15:49:44 minikube localkube[3201]: I1201 15:49:44.281292    3201 config.go:102] Starting endpoints config controller
Dec 01 15:49:44 minikube localkube[3201]: I1201 15:49:44.281296    3201 controller_utils.go:1041] Waiting for caches to sync for endpoints config controller
Dec 01 15:49:44 minikube localkube[3201]: I1201 15:49:44.381533    3201 controller_utils.go:1048] Caches are synced for endpoints config controller
Dec 01 15:49:44 minikube localkube[3201]: I1201 15:49:44.381607    3201 controller_utils.go:1048] Caches are synced for service config controller
Dec 01 15:49:45 minikube localkube[3201]: proxy is ready!
Dec 01 15:49:47 minikube localkube[3201]: I1201 15:49:47.665747    3201 node_controller.go:563] Initializing eviction metric for zone:
Dec 01 15:49:47 minikube localkube[3201]: W1201 15:49:47.665848    3201 node_controller.go:916] Missing timestamp for Node minikube. Assuming now as a timestamp.
Dec 01 15:49:47 minikube localkube[3201]: I1201 15:49:47.665888    3201 node_controller.go:832] Controller detected that zone  is now in state Normal.
Dec 01 15:49:47 minikube localkube[3201]: I1201 15:49:47.666202    3201 event.go:218] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"3feb64b5-d6af-11e7-9596-080027aac058", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
Dec 01 15:49:48 minikube localkube[3201]: E1201 15:49:48.710515    3201 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 01 15:49:48 minikube localkube[3201]: I1201 15:49:48.807364    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/7b19c3ba446df5355649563d32723e4f-kubeconfig") pod "kube-addon-manager-minikube" (UID: "7b19c3ba446df5355649563d32723e4f")
Dec 01 15:49:48 minikube localkube[3201]: I1201 15:49:48.807750    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/7b19c3ba446df5355649563d32723e4f-addons") pod "kube-addon-manager-minikube" (UID: "7b19c3ba446df5355649563d32723e4f")
Dec 01 15:49:50 minikube localkube[3201]: I1201 15:49:50.565968    3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"43f186b0-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"77", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned storage-provisioner to minikube
Dec 01 15:49:50 minikube localkube[3201]: I1201 15:49:50.619121    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dntqp" (UniqueName: "kubernetes.io/secret/43f186b0-d6af-11e7-9596-080027aac058-default-token-dntqp") pod "storage-provisioner" (UID: "43f186b0-d6af-11e7-9596-080027aac058")
Dec 01 15:49:51 minikube localkube[3201]: I1201 15:49:51.146526    3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"4448c8c9-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"84", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-4vsmg
Dec 01 15:49:51 minikube localkube[3201]: I1201 15:49:51.151968    3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kubernetes-dashboard-4vsmg", UID:"44492d8e-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"85", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kubernetes-dashboard-4vsmg to minikube
Dec 01 15:49:51 minikube localkube[3201]: I1201 15:49:51.221976    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dntqp" (UniqueName: "kubernetes.io/secret/44492d8e-d6af-11e7-9596-080027aac058-default-token-dntqp") pod "kubernetes-dashboard-4vsmg" (UID: "44492d8e-d6af-11e7-9596-080027aac058")
Dec 01 15:49:51 minikube localkube[3201]: I1201 15:49:51.294803    3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns", UID:"445fba4c-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"96", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-86f6f55dd5 to 1
Dec 01 15:49:51 minikube localkube[3201]: I1201 15:49:51.303046    3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-86f6f55dd5", UID:"4460479d-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"98", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-86f6f55dd5-wrmkj
Dec 01 15:49:51 minikube localkube[3201]: I1201 15:49:51.311984    3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-86f6f55dd5-wrmkj", UID:"4461e2df-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"100", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-dns-86f6f55dd5-wrmkj to minikube
Dec 01 15:49:51 minikube localkube[3201]: I1201 15:49:51.323823    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/4461e2df-d6af-11e7-9596-080027aac058-kube-dns-config") pod "kube-dns-86f6f55dd5-wrmkj" (UID: "4461e2df-d6af-11e7-9596-080027aac058")
Dec 01 15:49:51 minikube localkube[3201]: I1201 15:49:51.323860    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dntqp" (UniqueName: "kubernetes.io/secret/4461e2df-d6af-11e7-9596-080027aac058-default-token-dntqp") pod "kube-dns-86f6f55dd5-wrmkj" (UID: "4461e2df-d6af-11e7-9596-080027aac058")
Dec 01 15:49:51 minikube localkube[3201]: W1201 15:49:51.341187    3201 container.go:367] Failed to get RecentStats("/system.slice/run-r6707bffd9630484e93f317bbb02ee153.scope") while determining the next housekeeping: unable to find data for container /system.slice/run-r6707bffd9630484e93f317bbb02ee153.scope
Dec 01 15:49:51 minikube localkube[3201]: W1201 15:49:51.436913    3201 container.go:354] Failed to create summary reader for "/system.slice/run-r26d85f7b247c4136aca83e0b54b5f0d0.scope": none of the resources are being tracked.
Dec 01 15:49:51 minikube localkube[3201]: E1201 15:49:51.436987    3201 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 01 15:49:52 minikube localkube[3201]: W1201 15:49:52.161619    3201 kuberuntime_container.go:191] Non-root verification doesn't support non-numeric user (nobody)
Dec 01 15:49:59 minikube localkube[3201]: E1201 15:49:59.322946    3201 proxier.go:1621] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking state for UDP service IP: 10.96.0.10, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Dec 01 15:50:03 minikube localkube[3201]: W1201 15:50:03.820968    3201 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 01 15:50:13 minikube localkube[3201]: W1201 15:50:13.841071    3201 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 01 15:50:32 minikube localkube[3201]: I1201 15:50:32.942364    3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"tiller-deploy", UID:"5d1a8773-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"188", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set tiller-deploy-7777bff5d to 1
Dec 01 15:50:32 minikube localkube[3201]: I1201 15:50:32.957226    3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"tiller-deploy-7777bff5d", UID:"5d1b0060-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"189", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: tiller-deploy-7777bff5d-s5znv
Dec 01 15:50:32 minikube localkube[3201]: I1201 15:50:32.966598    3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"tiller-deploy-7777bff5d-s5znv", UID:"5d360470-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"193", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned tiller-deploy-7777bff5d-s5znv to minikube
Dec 01 15:50:33 minikube localkube[3201]: I1201 15:50:33.072608    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dntqp" (UniqueName: "kubernetes.io/secret/5d360470-d6af-11e7-9596-080027aac058-default-token-dntqp") pod "tiller-deploy-7777bff5d-s5znv" (UID: "5d360470-d6af-11e7-9596-080027aac058")
Dec 01 15:50:44 minikube localkube[3201]: E1201 15:50:44.281565    3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Dec 01 15:51:10 minikube localkube[3201]: I1201 15:51:10.023375    3201 trace.go:76] Trace[665311406]: "Get /api/v1/namespaces/default" (started: 2017-12-01 15:51:09.515877621 +0000 UTC m=+94.555015205) (total time: 507.441024ms):
Dec 01 15:51:10 minikube localkube[3201]: Trace[665311406]: [507.373601ms] [507.370521ms] About to write a response
Dec 01 15:51:10 minikube localkube[3201]: I1201 15:51:10.024177    3201 trace.go:76] Trace[1135598398]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2017-12-01 15:51:09.503843072 +0000 UTC m=+94.542980695) (total time: 520.298464ms):
Dec 01 15:51:10 minikube localkube[3201]: Trace[1135598398]: [520.238328ms] [520.228565ms] About to write a response
Dec 01 15:51:21 minikube localkube[3201]: E1201 15:51:21.941047    3201 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.942044    3201 event.go:218] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"spotify-docker-gc", UID:"7a6613a2-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"298", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: spotify-docker-gc-bdfgr
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.942380    3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-system-chartmuseum", UID:"7a6677ed-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"301", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-system-chartmuseum-786f76fd7b to 1
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.954008    3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-system-kubernetes-dashboard", UID:"7a67528f-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"303", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-system-kubernetes-dashboard-5676fbb68c to 1
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.954557    3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-system-chartmuseum-786f76fd7b", UID:"7a677cd9-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"305", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-system-chartmuseum-786f76fd7b-zh6dh
Dec 01 15:51:21 minikube localkube[3201]: E1201 15:51:21.957123    3201 daemon_controller.go:263] kube-system/spotify-docker-gc failed with : error storing status for daemon set &v1beta1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"spotify-docker-gc", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/extensions/v1beta1/namespaces/kube-system/daemonsets/spotify-docker-gc", UID:"7a6613a2-d6af-11e7-9596-080027aac058", ResourceVersion:"298", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63647740281, loc:(*time.Location)(0x9a4b0a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"daemonset":"spotify-docker-gc"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1beta1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc4251d6bc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"daemonset":"spotify-docker-gc"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"docker-socket", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc4251d6be0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RB
Dec 01 15:51:21 minikube localkube[3201]: DVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"spotify-docker-gc", Image:"docker.io/spotify/docker-gc:latest", Command:[]string{"/bin/sh"}, Args:[]string{"-c", " touch /var/log/crond.log && echo \"0 0 * * * /docker-gc >> /var/log/crond.log 2>&1\" | crontab - && crond -L /var/log/crond.log && tail -f /var/log/crond.log"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"GRACE_PERIOD_SECONDS", Value:"0", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DOCKER_API_VERSION", Value:"", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"docker-socket", ReadOnly:false, MountPath:"/var/run/docker.sock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc429bb4ff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:
Dec 01 15:51:21 minikube localkube[3201]: "ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, SecurityContext:(*v1.PodSecurityContext)(0xc430be4ac0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil)}}, UpdateStrategy:v1beta1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1beta1.RollingUpdateDaemonSet)(0xc4312264b8)}, MinReadySeconds:0, TemplateGeneration:1, RevisionHistoryLimit:(*int32)(0xc429bb500c)}, Status:v1beta1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil)}}: Operation cannot be fulfilled on daemonsets.extensions "spotify-docker-gc": the object has been modified; please apply your changes to the latest version and try again
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.960367    3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-system-traefik", UID:"7a689d37-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"306", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-system-traefik-7c778994b8 to 1
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.978424    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dntqp" (UniqueName: "kubernetes.io/secret/7a677edc-d6af-11e7-9596-080027aac058-default-token-dntqp") pod "spotify-docker-gc-bdfgr" (UID: "7a677edc-d6af-11e7-9596-080027aac058")
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.978645    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "docker-socket" (UniqueName: "kubernetes.io/host-path/7a677edc-d6af-11e7-9596-080027aac058-docker-socket") pod "spotify-docker-gc-bdfgr" (UID: "7a677edc-d6af-11e7-9596-080027aac058")
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.982021    3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-system-chartmuseum-786f76fd7b-zh6dh", UID:"7a6964a6-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"309", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-system-chartmuseum-786f76fd7b-zh6dh to minikube
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.982301    3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-system-traefik-7c778994b8", UID:"7a6a5ee2-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-system-traefik-7c778994b8-8sxz9
Dec 01 15:51:21 minikube localkube[3201]: I1201 15:51:21.982342    3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-system-kubernetes-dashboard-5676fbb68c", UID:"7a6976e6-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-system-kubernetes-dashboard-5676fbb68c-sdj4c
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.030945    3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-system-kubernetes-dashboard-5676fbb68c-sdj4c", UID:"7a6b34a1-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"322", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-system-kubernetes-dashboard-5676fbb68c-sdj4c to minikube
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.052712    3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-system-traefik-7c778994b8-8sxz9", UID:"7a6c24f0-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-system-traefik-7c778994b8-8sxz9 to minikube
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.089831    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-volume" (UniqueName: "kubernetes.io/empty-dir/7a6964a6-d6af-11e7-9596-080027aac058-storage-volume") pod "kube-system-chartmuseum-786f76fd7b-zh6dh" (UID: "7a6964a6-d6af-11e7-9596-080027aac058")
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.089863    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dntqp" (UniqueName: "kubernetes.io/secret/7a6964a6-d6af-11e7-9596-080027aac058-default-token-dntqp") pod "kube-system-chartmuseum-786f76fd7b-zh6dh" (UID: "7a6964a6-d6af-11e7-9596-080027aac058")
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.089893    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/7a6c24f0-d6af-11e7-9596-080027aac058-config") pod "kube-system-traefik-7c778994b8-8sxz9" (UID: "7a6c24f0-d6af-11e7-9596-080027aac058")
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.089907    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "ssl" (UniqueName: "kubernetes.io/secret/7a6c24f0-d6af-11e7-9596-080027aac058-ssl") pod "kube-system-traefik-7c778994b8-8sxz9" (UID: "7a6c24f0-d6af-11e7-9596-080027aac058")
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.190100    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dntqp" (UniqueName: "kubernetes.io/secret/7a6b34a1-d6af-11e7-9596-080027aac058-default-token-dntqp") pod "kube-system-kubernetes-dashboard-5676fbb68c-sdj4c" (UID: "7a6b34a1-d6af-11e7-9596-080027aac058")
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.190320    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dntqp" (UniqueName: "kubernetes.io/secret/7a6c24f0-d6af-11e7-9596-080027aac058-default-token-dntqp") pod "kube-system-traefik-7c778994b8-8sxz9" (UID: "7a6c24f0-d6af-11e7-9596-080027aac058")
Dec 01 15:51:22 minikube localkube[3201]: I1201 15:51:22.190475    3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-7a5a0c8d-d6af-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/7a6c24f0-d6af-11e7-9596-080027aac058-pvc-7a5a0c8d-d6af-11e7-9596-080027aac058") pod "kube-system-traefik-7c778994b8-8sxz9" (UID: "7a6c24f0-d6af-11e7-9596-080027aac058")
Dec 01 15:51:22 minikube localkube[3201]: W1201 15:51:22.349540 3201 pod_container_deletor.go:77] Container "60a57be8b740343c811813281d66714af9831fa596599d469481746255b66463" not found in pod's containers | |
Dec 01 15:51:22 minikube localkube[3201]: E1201 15:51:22.689806 3201 upgradeaware.go:310] Error proxying data from client to backend: read tcp 192.168.99.100:8443->192.168.99.1:37246: read: connection reset by peer | |
Dec 01 15:51:43 minikube localkube[3201]: W1201 15:51:43.982653 3201 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Dec 01 15:51:44 minikube localkube[3201]: E1201 15:51:44.282164 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:51:54 minikube localkube[3201]: W1201 15:51:54.007443 3201 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Dec 01 15:52:44 minikube localkube[3201]: E1201 15:52:44.282455 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.825870 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-elixir", UID:"c86ae076-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"585", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set lazy-dachshund-elixir-d64574bd4 to 2 | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.835390 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-postgresql", UID:"c86ba9cc-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set lazy-dachshund-postgresql-cfbf6fbb7 to 1 | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.835735 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-elixir-d64574bd4", UID:"c86bbb16-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-elixir-d64574bd4-xqm8j | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.837654 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-postgresql-cfbf6fbb7", UID:"c86c2a32-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-postgresql-cfbf6fbb7-h5n6f | |
Dec 01 15:53:32 minikube localkube[3201]: E1201 15:53:32.838745 3201 factory.go:913] Error scheduling default lazy-dachshund-postgresql-cfbf6fbb7-h5n6f: PersistentVolumeClaim is not bound: "lazy-dachshund-postgresql"; retrying | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.839622 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-postgresql-cfbf6fbb7-h5n6f", UID:"c86d57b5-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"592", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "lazy-dachshund-postgresql" | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.843205 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-elixir-d64574bd4", UID:"c86bbb16-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-elixir-d64574bd4-srxhb | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.854697 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-redis", UID:"c86c3804-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set lazy-dachshund-redis-c444c8957 to 1 | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.854762 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-elixir-d64574bd4-xqm8j", UID:"c86c60a8-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-elixir-d64574bd4-xqm8j to minikube | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.854798 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-ruby-rails", UID:"c86dfce8-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set lazy-dachshund-ruby-rails-67968d7457 to 2 | |
Dec 01 15:53:32 minikube localkube[3201]: E1201 15:53:32.864139 3201 factory.go:913] Error scheduling default lazy-dachshund-postgresql-cfbf6fbb7-h5n6f: PersistentVolumeClaim is not bound: "lazy-dachshund-postgresql"; retrying | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.864194 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-postgresql-cfbf6fbb7-h5n6f", UID:"c86d57b5-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "lazy-dachshund-postgresql" | |
Dec 01 15:53:32 minikube localkube[3201]: W1201 15:53:32.864221 3201 factory.go:928] Request for pod default/lazy-dachshund-postgresql-cfbf6fbb7-h5n6f already in flight, abandoning | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.883911 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-elixir-d64574bd4-srxhb", UID:"c86e0efa-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-elixir-d64574bd4-srxhb to minikube | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.884992 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-rails-67968d7457", UID:"c86fb15a-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-ruby-rails-67968d7457-ppqtf | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.885032 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-redis-c444c8957", UID:"c86e5d51-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-redis-c444c8957-x2jpp | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.885046 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-ruby-rpush", UID:"c86f8dd4-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set lazy-dachshund-ruby-rpush-89968d64 to 1 | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.885055 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-ruby-sidekiq", UID:"c871f1f8-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set lazy-dachshund-ruby-sidekiq-645666b64d to 2 | |
Dec 01 15:53:32 minikube localkube[3201]: E1201 15:53:32.893992 3201 factory.go:913] Error scheduling default lazy-dachshund-redis-c444c8957-x2jpp: PersistentVolumeClaim is not bound: "lazy-dachshund-redis"; retrying | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.894162 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-redis-c444c8957-x2jpp", UID:"c872ac21-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "lazy-dachshund-redis" | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.897772 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-rpush-89968d64", UID:"c872cd75-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-ruby-rpush-89968d64-qkx9l | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.910143 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-rails-67968d7457", UID:"c86fb15a-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-ruby-rails-67968d7457-854tw | |
Dec 01 15:53:32 minikube localkube[3201]: E1201 15:53:32.916592 3201 factory.go:913] Error scheduling default lazy-dachshund-redis-c444c8957-x2jpp: PersistentVolumeClaim is not bound: "lazy-dachshund-redis"; retrying | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.916683 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-redis-c444c8957-x2jpp", UID:"c872ac21-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "lazy-dachshund-redis" | |
Dec 01 15:53:32 minikube localkube[3201]: W1201 15:53:32.916908 3201 factory.go:928] Request for pod default/lazy-dachshund-redis-c444c8957-x2jpp already in flight, abandoning | |
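Note: the FailedScheduling events for lazy-dachshund-postgresql-cfbf6fbb7-h5n6f and lazy-dachshund-redis-c444c8957-x2jpp above appear to be transient — the scheduler retries and assigns both pods about a second later once the hostpath provisioner has bound the claims (see the Scheduled events below). A minimal sketch for watching claim status from outside the cluster, assuming the official kubernetes Python client (an assumption, not part of this setup); the claim names are taken from the log:

# Sketch only: poll the two PersistentVolumeClaims until they report phase "Bound".
import time
from kubernetes import client, config

config.load_kube_config()   # assumes the current kubectl context is the minikube cluster
v1 = client.CoreV1Api()

for name in ("lazy-dachshund-postgresql", "lazy-dachshund-redis"):
    while True:
        pvc = v1.read_namespaced_persistent_volume_claim(name, "default")
        print(name, pvc.status.phase, pvc.spec.volume_name)
        if pvc.status.phase == "Bound":
            break
        time.sleep(2)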
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.935860 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-ruby-rails-67968d7457-854tw", UID:"c875e16b-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-ruby-rails-67968d7457-854tw to minikube | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.936374 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-ruby-rails-67968d7457-ppqtf", UID:"c872d7ba-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-ruby-rails-67968d7457-ppqtf to minikube | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.939543 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-ruby-rpush-89968d64-qkx9l", UID:"c874f93f-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-ruby-rpush-89968d64-qkx9l to minikube | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.939756 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-sidekiq-645666b64d", UID:"c873a66f-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-ruby-sidekiq-645666b64d-jrzfm | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.952970 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c872d7ba-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-ruby-rails-67968d7457-ppqtf" (UID: "c872d7ba-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.971102 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "lazy-dachshund-elixir-apns" (UniqueName: "kubernetes.io/configmap/c86e0efa-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-apns") pod "lazy-dachshund-elixir-d64574bd4-srxhb" (UID: "c86e0efa-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.971190 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "lazy-dachshund-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/c86e0efa-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-ejabberd") pod "lazy-dachshund-elixir-d64574bd4-srxhb" (UID: "c86e0efa-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.971221 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c875e16b-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-ruby-rails-67968d7457-854tw" (UID: "c875e16b-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.971248 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "lazy-dachshund-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/c86c60a8-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-ejabberd") pod "lazy-dachshund-elixir-d64574bd4-xqm8j" (UID: "c86c60a8-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.971274 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "lazy-dachshund-elixir-apns" (UniqueName: "kubernetes.io/configmap/c86c60a8-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-apns") pod "lazy-dachshund-elixir-d64574bd4-xqm8j" (UID: "c86c60a8-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.971298 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86c60a8-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-elixir-d64574bd4-xqm8j" (UID: "c86c60a8-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.971325 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86e0efa-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-elixir-d64574bd4-srxhb" (UID: "c86e0efa-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.958219 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-sidekiq-645666b64d", UID:"c873a66f-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: lazy-dachshund-ruby-sidekiq-645666b64d-wnvc9 | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.981327 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-ruby-sidekiq-645666b64d-wnvc9", UID:"c87d9881-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-ruby-sidekiq-645666b64d-wnvc9 to minikube | |
Dec 01 15:53:32 minikube localkube[3201]: I1201 15:53:32.999261 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-ruby-sidekiq-645666b64d-jrzfm", UID:"c87933ff-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-ruby-sidekiq-645666b64d-jrzfm to minikube | |
Dec 01 15:53:33 minikube localkube[3201]: I1201 15:53:33.073278 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "lazy-dachshund-ruby-apns" (UniqueName: "kubernetes.io/configmap/c874f93f-d6af-11e7-9596-080027aac058-lazy-dachshund-ruby-apns") pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l" (UID: "c874f93f-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:33 minikube localkube[3201]: I1201 15:53:33.073517 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c87d9881-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-ruby-sidekiq-645666b64d-wnvc9" (UID: "c87d9881-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:33 minikube localkube[3201]: I1201 15:53:33.073635 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c87933ff-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-ruby-sidekiq-645666b64d-jrzfm" (UID: "c87933ff-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:33 minikube localkube[3201]: I1201 15:53:33.073900 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c874f93f-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l" (UID: "c874f93f-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:33 minikube localkube[3201]: E1201 15:53:33.254576 3201 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs | |
Dec 01 15:53:33 minikube localkube[3201]: I1201 15:53:33.845827 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-postgresql-cfbf6fbb7-h5n6f", UID:"c86d57b5-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-postgresql-cfbf6fbb7-h5n6f to minikube | |
Dec 01 15:53:33 minikube localkube[3201]: I1201 15:53:33.899104 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"lazy-dachshund-redis-c444c8957-x2jpp", UID:"c872ac21-d6af-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned lazy-dachshund-redis-c444c8957-x2jpp to minikube | |
Dec 01 15:53:33 minikube localkube[3201]: E1201 15:53:33.998054 3201 upgradeaware.go:310] Error proxying data from client to backend: read tcp 192.168.99.100:8443->192.168.99.1:37950: read: connection reset by peer | |
Dec 01 15:53:36 minikube localkube[3201]: I1201 15:53:36.417027 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-c857ca9e-d6af-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/c86d57b5-d6af-11e7-9596-080027aac058-pvc-c857ca9e-d6af-11e7-9596-080027aac058") pod "lazy-dachshund-postgresql-cfbf6fbb7-h5n6f" (UID: "c86d57b5-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:36 minikube localkube[3201]: I1201 15:53:36.417106 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86d57b5-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-postgresql-cfbf6fbb7-h5n6f" (UID: "c86d57b5-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:38 minikube localkube[3201]: I1201 15:53:38.434468 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-c8582af6-d6af-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/c872ac21-d6af-11e7-9596-080027aac058-pvc-c8582af6-d6af-11e7-9596-080027aac058") pod "lazy-dachshund-redis-c444c8957-x2jpp" (UID: "c872ac21-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:38 minikube localkube[3201]: I1201 15:53:38.434503 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c872ac21-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "lazy-dachshund-redis-c444c8957-x2jpp" (UID: "c872ac21-d6af-11e7-9596-080027aac058") | |
Dec 01 15:53:44 minikube localkube[3201]: E1201 15:53:44.282830 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:53:45 minikube localkube[3201]: W1201 15:53:45.483164 3201 container.go:354] Failed to create summary reader for "/kubepods/besteffort/podc874f93f-d6af-11e7-9596-080027aac058/7fc9c0a09c3b626e91b32e87b67a89047507786deca42c0638bc176ae4e04702": none of the resources are being tracked. | |
Dec 01 15:53:48 minikube localkube[3201]: W1201 15:53:48.292393 3201 container.go:354] Failed to create summary reader for "/kubepods/besteffort/podc872d7ba-d6af-11e7-9596-080027aac058/1fdfd8521e5b0a4ca8abd9629b5c47259253a8757113b44fdaba82329c7ec61b": none of the resources are being tracked. | |
Dec 01 15:54:19 minikube localkube[3201]: E1201 15:54:19.620014 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:54:19 minikube localkube[3201]: E1201 15:54:19.621385 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
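Note: from here on the kubelet repeatedly fails to start the postgresql container with CreateContainerConfigError because the hostpath directory backing pvc-c857ca9e-d6af-11e7-9596-080027aac058 does not exist inside the minikube VM; the minikube hostpath provisioner keeps its volumes under /tmp/hostpath-provisioner, so the directory has to be present there for the mount to succeed. A minimal sketch to resolve which host path the bound PersistentVolume expects, assuming the official kubernetes Python client (an assumption, not part of this setup); the resulting path would then need to be checked inside the VM (for example over minikube ssh):

# Sketch only: resolve the PVC to its PersistentVolume and print the hostPath it expects.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pvc = v1.read_namespaced_persistent_volume_claim("lazy-dachshund-postgresql", "default")
pv = v1.read_persistent_volume(pvc.spec.volume_name)        # e.g. the pvc-c857ca9e-... volume from the log
print("hostPath expected inside the minikube VM:", pv.spec.host_path.path)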
Dec 01 15:54:20 minikube localkube[3201]: I1201 15:54:20.429697 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:54:20 minikube localkube[3201]: it. | |
Dec 01 15:54:20 minikube localkube[3201]: E1201 15:54:20.434719 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:54:20 minikube localkube[3201]: E1201 15:54:20.435103 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:54:24 minikube localkube[3201]: W1201 15:54:24.769259 3201 container.go:354] Failed to create summary reader for "/kubepods/besteffort/podc87933ff-d6af-11e7-9596-080027aac058/9824318ce62aa01108bb52f70303d209eca0425e1690ea09b295bbe1fd3c5004": none of the resources are being tracked. | |
Dec 01 15:54:35 minikube localkube[3201]: I1201 15:54:35.007870 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:54:35 minikube localkube[3201]: it. | |
Dec 01 15:54:35 minikube localkube[3201]: E1201 15:54:35.010353 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:54:35 minikube localkube[3201]: E1201 15:54:35.010378 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:54:44 minikube localkube[3201]: E1201 15:54:44.283562 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:54:49 minikube localkube[3201]: I1201 15:54:49.008364 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:54:49 minikube localkube[3201]: it. | |
Dec 01 15:54:49 minikube localkube[3201]: E1201 15:54:49.018311 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:54:49 minikube localkube[3201]: E1201 15:54:49.018341 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:55:00 minikube localkube[3201]: I1201 15:55:00.009317 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:55:00 minikube localkube[3201]: it. | |
Dec 01 15:55:00 minikube localkube[3201]: E1201 15:55:00.013544 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:55:00 minikube localkube[3201]: E1201 15:55:00.013599 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:55:12 minikube localkube[3201]: I1201 15:55:12.008170 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:55:12 minikube localkube[3201]: it. | |
Dec 01 15:55:12 minikube localkube[3201]: E1201 15:55:12.012040 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:55:12 minikube localkube[3201]: E1201 15:55:12.012090 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:55:13 minikube localkube[3201]: I1201 15:55:13.226743 3201 kuberuntime_manager.go:499] Container {Name:ruby Image:quay.io/findaplayer/ruby-api:latest Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:3000 Protocol:TCP HostIP:}] EnvFrom:[{Prefix: ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-ruby,},Optional:nil,} SecretRef:nil}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status.json,Port:3000,Host:,Scheme:HTTP,HTTPHeaders:[{Host ruby-ruby}],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:55:16 minikube localkube[3201]: I1201 15:55:16.280319 3201 kuberuntime_manager.go:499] Container {Name:ruby Image:quay.io/findaplayer/ruby-sidekiq:latest Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[{Prefix: ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-ruby,},Optional:nil,} SecretRef:nil}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:55:16 minikube localkube[3201]: I1201 15:55:16.282498 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:55:16 minikube localkube[3201]: I1201 15:55:16.282795 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:55:21 minikube localkube[3201]: I1201 15:55:21.367407 3201 kuberuntime_manager.go:499] Container {Name:ruby Image:quay.io/findaplayer/ruby-api:latest Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:3000 Protocol:TCP HostIP:}] EnvFrom:[{Prefix: ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-ruby,},Optional:nil,} SecretRef:nil}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status.json,Port:3000,Host:,Scheme:HTTP,HTTPHeaders:[{Host ruby-ruby}],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:55:24 minikube localkube[3201]: I1201 15:55:24.402597 3201 kuberuntime_manager.go:499] Container {Name:ruby Image:quay.io/findaplayer/ruby-sidekiq:latest Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[{Prefix: ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-ruby,},Optional:nil,} SecretRef:nil}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:55:28 minikube localkube[3201]: I1201 15:55:28.007580 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:55:28 minikube localkube[3201]: it. | |
Dec 01 15:55:28 minikube localkube[3201]: E1201 15:55:28.013232 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:55:28 minikube localkube[3201]: E1201 15:55:28.016479 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:55:39 minikube localkube[3201]: I1201 15:55:39.566821 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:55:39 minikube localkube[3201]: I1201 15:55:39.566972 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:55:39 minikube localkube[3201]: I1201 15:55:39.567117 3201 kuberuntime_manager.go:748] Back-off 10s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:55:39 minikube localkube[3201]: E1201 15:55:39.567151 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 10s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
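Note: unlike the postgresql pod, the redis container here appears to start and then fail, so the kubelet applies a 10s crash-loop back-off; the reason for the exit is not visible in this log. A minimal sketch for pulling the previous attempt's container output, assuming the official kubernetes Python client (the pod and container names come from the log):

# Sketch only: fetch the logs of the last terminated instance of the redis container.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

log = v1.read_namespaced_pod_log(
    name="lazy-dachshund-redis-c444c8957-x2jpp",
    namespace="default",
    container="lazy-dachshund-redis",
    previous=True,          # logs from the previously terminated instance, not the current attempt
)
print(log)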
Dec 01 15:55:40 minikube localkube[3201]: I1201 15:55:40.007679 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:55:40 minikube localkube[3201]: it. | |
Dec 01 15:55:40 minikube localkube[3201]: E1201 15:55:40.011572 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:55:40 minikube localkube[3201]: E1201 15:55:40.011618 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:55:44 minikube localkube[3201]: I1201 15:55:44.244998 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:55:44 minikube localkube[3201]: I1201 15:55:44.245223 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:55:44 minikube localkube[3201]: I1201 15:55:44.245374 3201 kuberuntime_manager.go:748] Back-off 10s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:55:44 minikube localkube[3201]: E1201 15:55:44.245413 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 10s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:55:44 minikube localkube[3201]: E1201 15:55:44.283887 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:55:54 minikube localkube[3201]: I1201 15:55:54.009960 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:55:54 minikube localkube[3201]: it. | |
Dec 01 15:55:54 minikube localkube[3201]: E1201 15:55:54.014122 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:55:54 minikube localkube[3201]: E1201 15:55:54.014177 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:55:58 minikube localkube[3201]: I1201 15:55:58.009172 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:55:58 minikube localkube[3201]: I1201 15:55:58.010231 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:56:05 minikube localkube[3201]: I1201 15:56:05.007863 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:56:05 minikube localkube[3201]: it. | |
Dec 01 15:56:05 minikube localkube[3201]: E1201 15:56:05.013539 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:56:05 minikube localkube[3201]: E1201 15:56:05.013658 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:56:17 minikube localkube[3201]: I1201 15:56:17.008398 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:56:17 minikube localkube[3201]: it. | |
Dec 01 15:56:17 minikube localkube[3201]: E1201 15:56:17.017226 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory | |
Dec 01 15:56:17 minikube localkube[3201]: E1201 15:56:17.017302 3201 pod_workers.go:182] Error syncing pod c86d57b5-d6af-11e7-9596-080027aac058 ("lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-c857ca9e-d6af-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 15:56:21 minikube localkube[3201]: I1201 15:56:21.039875 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:56:21 minikube localkube[3201]: I1201 15:56:21.040628 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:56:21 minikube localkube[3201]: I1201 15:56:21.040820 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:56:21 minikube localkube[3201]: E1201 15:56:21.040868 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:56:24 minikube localkube[3201]: I1201 15:56:24.244144 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:56:24 minikube localkube[3201]: I1201 15:56:24.244253 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:56:24 minikube localkube[3201]: I1201 15:56:24.244342 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:56:24 minikube localkube[3201]: E1201 15:56:24.244366 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:56:33 minikube localkube[3201]: I1201 15:56:33.008139 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:lazy-dachshund-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart | |
Dec 01 15:56:33 minikube localkube[3201]: it. | |
Dec 01 15:56:39 minikube localkube[3201]: I1201 15:56:39.007981 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:56:39 minikube localkube[3201]: I1201 15:56:39.008974 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:56:39 minikube localkube[3201]: I1201 15:56:39.009221 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:56:39 minikube localkube[3201]: E1201 15:56:39.009251 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:56:44 minikube localkube[3201]: E1201 15:56:44.284202 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:56:51 minikube localkube[3201]: I1201 15:56:51.008439 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:56:51 minikube localkube[3201]: I1201 15:56:51.010531 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:14 minikube localkube[3201]: I1201 15:57:14.609670 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:57:14 minikube localkube[3201]: I1201 15:57:14.609883 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:14 minikube localkube[3201]: I1201 15:57:14.610028 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:57:14 minikube localkube[3201]: E1201 15:57:14.610067 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:21 minikube localkube[3201]: I1201 15:57:21.141409 3201 trace.go:76] Trace[703688537]: "List /api/v1/pods" (started: 2017-12-01 15:57:20.577380874 +0000 UTC m=+465.616518463) (total time: 563.98502ms): | |
Dec 01 15:57:21 minikube localkube[3201]: Trace[703688537]: [559.859661ms] [559.852346ms] Listing from storage done | |
Dec 01 15:57:24 minikube localkube[3201]: I1201 15:57:24.244386 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:57:24 minikube localkube[3201]: I1201 15:57:24.244488 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:24 minikube localkube[3201]: I1201 15:57:24.244574 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:57:24 minikube localkube[3201]: E1201 15:57:24.244597 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:30 minikube localkube[3201]: I1201 15:57:30.921665 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:31 minikube localkube[3201]: E1201 15:57:31.940430 3201 remote_runtime.go:278] ContainerStatus "e722e4d56f93cf53692b4a6e7ddc79069e8a0160b037ad2235e115dac1f81644" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: e722e4d56f93cf53692b4a6e7ddc79069e8a0160b037ad2235e115dac1f81644 | |
Dec 01 15:57:31 minikube localkube[3201]: E1201 15:57:31.940648 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "e722e4d56f93cf53692b4a6e7ddc79069e8a0160b037ad2235e115dac1f81644": rpc error: code = Unknown desc = Error: No such container: e722e4d56f93cf53692b4a6e7ddc79069e8a0160b037ad2235e115dac1f81644; Skipping pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:31 minikube localkube[3201]: I1201 15:57:31.940809 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:31 minikube localkube[3201]: I1201 15:57:31.940997 3201 kuberuntime_manager.go:748] Back-off 10s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:57:31 minikube localkube[3201]: E1201 15:57:31.941102 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 10s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:33 minikube localkube[3201]: I1201 15:57:33.056916 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:33 minikube localkube[3201]: I1201 15:57:33.057209 3201 kuberuntime_manager.go:748] Back-off 10s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:57:33 minikube localkube[3201]: E1201 15:57:33.057350 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 10s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:38 minikube localkube[3201]: I1201 15:57:38.009803 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:57:38 minikube localkube[3201]: I1201 15:57:38.011984 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:38 minikube localkube[3201]: I1201 15:57:38.012433 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:57:38 minikube localkube[3201]: E1201 15:57:38.012760 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:44 minikube localkube[3201]: I1201 15:57:44.010561 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:44 minikube localkube[3201]: E1201 15:57:44.286367 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:57:45 minikube localkube[3201]: E1201 15:57:45.200413 3201 remote_runtime.go:278] ContainerStatus "9b5d87d8311eda5cdebba4d7552eee25f6d33085725651583dc9df5631ce1644" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 9b5d87d8311eda5cdebba4d7552eee25f6d33085725651583dc9df5631ce1644 | |
Dec 01 15:57:45 minikube localkube[3201]: E1201 15:57:45.200567 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "9b5d87d8311eda5cdebba4d7552eee25f6d33085725651583dc9df5631ce1644": rpc error: code = Unknown desc = Error: No such container: 9b5d87d8311eda5cdebba4d7552eee25f6d33085725651583dc9df5631ce1644; Skipping pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:45 minikube localkube[3201]: I1201 15:57:45.200701 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:45 minikube localkube[3201]: I1201 15:57:45.200826 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:57:45 minikube localkube[3201]: E1201 15:57:45.200867 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:49 minikube localkube[3201]: I1201 15:57:49.008005 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:57:49 minikube localkube[3201]: I1201 15:57:49.008117 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:49 minikube localkube[3201]: I1201 15:57:49.008209 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:57:49 minikube localkube[3201]: E1201 15:57:49.008232 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:59 minikube localkube[3201]: I1201 15:57:59.010200 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:57:59 minikube localkube[3201]: I1201 15:57:59.010365 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:57:59 minikube localkube[3201]: E1201 15:57:59.010406 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:01 minikube localkube[3201]: I1201 15:58:01.008298 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:58:01 minikube localkube[3201]: I1201 15:58:01.008409 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:12 minikube localkube[3201]: I1201 15:58:12.013798 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:14 minikube localkube[3201]: E1201 15:58:14.145759 3201 remote_runtime.go:278] ContainerStatus "b06de67f1ceb036586219dba1367795b3cfa9c302da406524683984992f6d47e" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: b06de67f1ceb036586219dba1367795b3cfa9c302da406524683984992f6d47e | |
Dec 01 15:58:14 minikube localkube[3201]: E1201 15:58:14.145807 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "b06de67f1ceb036586219dba1367795b3cfa9c302da406524683984992f6d47e": rpc error: code = Unknown desc = Error: No such container: b06de67f1ceb036586219dba1367795b3cfa9c302da406524683984992f6d47e; Skipping pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:14 minikube localkube[3201]: I1201 15:58:14.145903 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:14 minikube localkube[3201]: I1201 15:58:14.145966 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:58:14 minikube localkube[3201]: E1201 15:58:14.145987 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:24 minikube localkube[3201]: I1201 15:58:24.274994 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:58:24 minikube localkube[3201]: I1201 15:58:24.275950 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:24 minikube localkube[3201]: I1201 15:58:24.276140 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:58:24 minikube localkube[3201]: E1201 15:58:24.276252 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:28 minikube localkube[3201]: I1201 15:58:28.008897 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:28 minikube localkube[3201]: I1201 15:58:28.009291 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:58:28 minikube localkube[3201]: E1201 15:58:28.009465 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:34 minikube localkube[3201]: I1201 15:58:34.244547 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:58:34 minikube localkube[3201]: I1201 15:58:34.245660 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:34 minikube localkube[3201]: I1201 15:58:34.245845 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:58:34 minikube localkube[3201]: E1201 15:58:34.245889 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:42 minikube localkube[3201]: I1201 15:58:42.012687 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:42 minikube localkube[3201]: I1201 15:58:42.012873 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:58:42 minikube localkube[3201]: E1201 15:58:42.012916 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:44 minikube localkube[3201]: E1201 15:58:44.286871 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:58:46 minikube localkube[3201]: I1201 15:58:46.009994 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:58:46 minikube localkube[3201]: I1201 15:58:46.010197 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:46 minikube localkube[3201]: I1201 15:58:46.010406 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:58:46 minikube localkube[3201]: E1201 15:58:46.010453 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:55 minikube localkube[3201]: I1201 15:58:55.010217 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:56 minikube localkube[3201]: E1201 15:58:56.684423 3201 remote_runtime.go:278] ContainerStatus "093a78f970061d05ded40dbb4b758d33c07d92a505a4514c520b5313e882ff35" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 093a78f970061d05ded40dbb4b758d33c07d92a505a4514c520b5313e882ff35 | |
Dec 01 15:58:56 minikube localkube[3201]: E1201 15:58:56.685008 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "093a78f970061d05ded40dbb4b758d33c07d92a505a4514c520b5313e882ff35": rpc error: code = Unknown desc = Error: No such container: 093a78f970061d05ded40dbb4b758d33c07d92a505a4514c520b5313e882ff35; Skipping pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:56 minikube localkube[3201]: I1201 15:58:56.685410 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:56 minikube localkube[3201]: I1201 15:58:56.685796 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:58:56 minikube localkube[3201]: E1201 15:58:56.686117 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:59 minikube localkube[3201]: I1201 15:58:59.008006 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:58:59 minikube localkube[3201]: I1201 15:58:59.008207 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:58:59 minikube localkube[3201]: I1201 15:58:59.008437 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:58:59 minikube localkube[3201]: E1201 15:58:59.008491 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:11 minikube localkube[3201]: I1201 15:59:11.008148 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:59:11 minikube localkube[3201]: I1201 15:59:11.010226 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:11 minikube localkube[3201]: I1201 15:59:11.010421 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:59:11 minikube localkube[3201]: E1201 15:59:11.010467 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:12 minikube localkube[3201]: I1201 15:59:12.010865 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:12 minikube localkube[3201]: I1201 15:59:12.011022 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:59:12 minikube localkube[3201]: E1201 15:59:12.011064 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:22 minikube localkube[3201]: I1201 15:59:22.011916 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:59:22 minikube localkube[3201]: I1201 15:59:22.012927 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:22 minikube localkube[3201]: I1201 15:59:22.013176 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:59:22 minikube localkube[3201]: E1201 15:59:22.013386 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:24 minikube localkube[3201]: I1201 15:59:24.011964 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:24 minikube localkube[3201]: I1201 15:59:24.012580 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:59:24 minikube localkube[3201]: E1201 15:59:24.012906 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:36 minikube localkube[3201]: store.index: compact 842 | |
Dec 01 15:59:36 minikube localkube[3201]: finished scheduled compaction at 842 (took 9.234975ms) | |
Dec 01 15:59:38 minikube localkube[3201]: I1201 15:59:38.009266 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:59:38 minikube localkube[3201]: I1201 15:59:38.009471 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:38 minikube localkube[3201]: I1201 15:59:38.009634 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 15:59:38 minikube localkube[3201]: E1201 15:59:38.009680 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:38 minikube localkube[3201]: I1201 15:59:38.011792 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:38 minikube localkube[3201]: I1201 15:59:38.011899 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:59:38 minikube localkube[3201]: E1201 15:59:38.011935 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:44 minikube localkube[3201]: E1201 15:59:44.287619 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 15:59:49 minikube localkube[3201]: I1201 15:59:49.008231 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 15:59:49 minikube localkube[3201]: I1201 15:59:49.009112 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:52 minikube localkube[3201]: I1201 15:59:52.011831 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 15:59:52 minikube localkube[3201]: I1201 15:59:52.011916 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 15:59:52 minikube localkube[3201]: E1201 15:59:52.011937 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:05 minikube localkube[3201]: I1201 16:00:05.009693 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:05 minikube localkube[3201]: I1201 16:00:05.010723 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:05 minikube localkube[3201]: E1201 16:00:05.011011 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:12 minikube localkube[3201]: I1201 16:00:12.605442 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:00:12 minikube localkube[3201]: I1201 16:00:12.605524 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:12 minikube localkube[3201]: I1201 16:00:12.605612 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:12 minikube localkube[3201]: E1201 16:00:12.605635 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:14 minikube localkube[3201]: I1201 16:00:14.245278 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:00:14 minikube localkube[3201]: I1201 16:00:14.245518 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:14 minikube localkube[3201]: I1201 16:00:14.245683 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:14 minikube localkube[3201]: E1201 16:00:14.245730 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:16 minikube localkube[3201]: I1201 16:00:16.009329 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:18 minikube localkube[3201]: E1201 16:00:18.053384 3201 remote_runtime.go:278] ContainerStatus "f6ba826e37f45d2f0dbe8327a4f4103ee18aaac5c66e72016aeb39969322b426" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: f6ba826e37f45d2f0dbe8327a4f4103ee18aaac5c66e72016aeb39969322b426 | |
Dec 01 16:00:18 minikube localkube[3201]: E1201 16:00:18.053641 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "f6ba826e37f45d2f0dbe8327a4f4103ee18aaac5c66e72016aeb39969322b426": rpc error: code = Unknown desc = Error: No such container: f6ba826e37f45d2f0dbe8327a4f4103ee18aaac5c66e72016aeb39969322b426; Skipping pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:18 minikube localkube[3201]: I1201 16:00:18.053815 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:18 minikube localkube[3201]: I1201 16:00:18.053968 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:18 minikube localkube[3201]: E1201 16:00:18.054120 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:29 minikube localkube[3201]: I1201 16:00:29.008990 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:00:29 minikube localkube[3201]: I1201 16:00:29.010378 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:29 minikube localkube[3201]: I1201 16:00:29.010563 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:29 minikube localkube[3201]: E1201 16:00:29.010606 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:33 minikube localkube[3201]: I1201 16:00:33.010922 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:33 minikube localkube[3201]: I1201 16:00:33.011171 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:33 minikube localkube[3201]: E1201 16:00:33.011216 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:44 minikube localkube[3201]: I1201 16:00:44.008761 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:00:44 minikube localkube[3201]: I1201 16:00:44.009913 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:44 minikube localkube[3201]: I1201 16:00:44.010078 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:44 minikube localkube[3201]: E1201 16:00:44.010120 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:44 minikube localkube[3201]: E1201 16:00:44.287742 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:00:46 minikube localkube[3201]: I1201 16:00:46.009214 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:46 minikube localkube[3201]: I1201 16:00:46.009563 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:46 minikube localkube[3201]: E1201 16:00:46.009716 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:58 minikube localkube[3201]: I1201 16:00:58.014639 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:00:58 minikube localkube[3201]: I1201 16:00:58.014914 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:00:58 minikube localkube[3201]: E1201 16:00:58.014961 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:00 minikube localkube[3201]: I1201 16:01:00.008790 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:01:00 minikube localkube[3201]: I1201 16:01:00.008889 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:00 minikube localkube[3201]: I1201 16:01:00.008968 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:00 minikube localkube[3201]: E1201 16:01:00.008988 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:12 minikube localkube[3201]: I1201 16:01:12.011033 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:12 minikube localkube[3201]: I1201 16:01:12.011283 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:12 minikube localkube[3201]: E1201 16:01:12.011355 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:14 minikube localkube[3201]: I1201 16:01:14.010558 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:01:14 minikube localkube[3201]: I1201 16:01:14.012831 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:14 minikube localkube[3201]: I1201 16:01:14.013356 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:14 minikube localkube[3201]: E1201 16:01:14.013696 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:26 minikube localkube[3201]: I1201 16:01:26.010436 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:26 minikube localkube[3201]: I1201 16:01:26.010710 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:26 minikube localkube[3201]: E1201 16:01:26.010962 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:29 minikube localkube[3201]: I1201 16:01:29.007629 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:01:29 minikube localkube[3201]: I1201 16:01:29.009852 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:29 minikube localkube[3201]: I1201 16:01:29.010390 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:29 minikube localkube[3201]: E1201 16:01:29.010722 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:39 minikube localkube[3201]: I1201 16:01:39.010706 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:39 minikube localkube[3201]: I1201 16:01:39.011211 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:39 minikube localkube[3201]: E1201 16:01:39.011509 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:42 minikube localkube[3201]: I1201 16:01:42.008542 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:01:42 minikube localkube[3201]: I1201 16:01:42.010635 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:42 minikube localkube[3201]: I1201 16:01:42.011198 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:42 minikube localkube[3201]: E1201 16:01:42.011503 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:44 minikube localkube[3201]: E1201 16:01:44.288719 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:01:50 minikube localkube[3201]: I1201 16:01:50.009459 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:50 minikube localkube[3201]: I1201 16:01:50.009772 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:50 minikube localkube[3201]: E1201 16:01:50.010106 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:56 minikube localkube[3201]: I1201 16:01:56.009026 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:01:56 minikube localkube[3201]: I1201 16:01:56.010019 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:01:56 minikube localkube[3201]: I1201 16:01:56.010265 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:01:56 minikube localkube[3201]: E1201 16:01:56.010394 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:03 minikube localkube[3201]: I1201 16:02:03.009194 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:03 minikube localkube[3201]: I1201 16:02:03.009457 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:03 minikube localkube[3201]: E1201 16:02:03.009566 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:09 minikube localkube[3201]: I1201 16:02:09.008425 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:02:09 minikube localkube[3201]: I1201 16:02:09.010678 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:09 minikube localkube[3201]: I1201 16:02:09.011301 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:09 minikube localkube[3201]: E1201 16:02:09.011671 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:14 minikube localkube[3201]: I1201 16:02:14.012374 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:14 minikube localkube[3201]: I1201 16:02:14.012964 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:14 minikube localkube[3201]: E1201 16:02:14.013469 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:22 minikube localkube[3201]: I1201 16:02:22.008094 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:02:22 minikube localkube[3201]: I1201 16:02:22.008300 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:22 minikube localkube[3201]: I1201 16:02:22.008471 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:22 minikube localkube[3201]: E1201 16:02:22.008518 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:28 minikube localkube[3201]: I1201 16:02:28.010714 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:28 minikube localkube[3201]: I1201 16:02:28.011968 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:28 minikube localkube[3201]: E1201 16:02:28.012584 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:33 minikube localkube[3201]: I1201 16:02:33.008729 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:02:33 minikube localkube[3201]: I1201 16:02:33.008927 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:33 minikube localkube[3201]: I1201 16:02:33.009190 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:33 minikube localkube[3201]: E1201 16:02:33.009242 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:40 minikube localkube[3201]: I1201 16:02:40.009693 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:40 minikube localkube[3201]: I1201 16:02:40.010346 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:40 minikube localkube[3201]: E1201 16:02:40.011121 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:44 minikube localkube[3201]: E1201 16:02:44.289456 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:02:45 minikube localkube[3201]: I1201 16:02:45.008398 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:02:45 minikube localkube[3201]: I1201 16:02:45.008549 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:45 minikube localkube[3201]: I1201 16:02:45.008636 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:45 minikube localkube[3201]: E1201 16:02:45.008661 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:53 minikube localkube[3201]: I1201 16:02:53.009724 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:53 minikube localkube[3201]: I1201 16:02:53.009924 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:02:53 minikube localkube[3201]: E1201 16:02:53.009953 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:02:59 minikube localkube[3201]: I1201 16:02:59.007907 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:02:59 minikube localkube[3201]: I1201 16:02:59.008010 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:08 minikube localkube[3201]: I1201 16:03:08.009396 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:10 minikube localkube[3201]: E1201 16:03:10.494237 3201 remote_runtime.go:278] ContainerStatus "f31cb2d92da8063cf673ee3c298123279607e6a194ca3a5b7fe25e6db71dcc3e" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: f31cb2d92da8063cf673ee3c298123279607e6a194ca3a5b7fe25e6db71dcc3e | |
Dec 01 16:03:10 minikube localkube[3201]: E1201 16:03:10.494312 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "f31cb2d92da8063cf673ee3c298123279607e6a194ca3a5b7fe25e6db71dcc3e": rpc error: code = Unknown desc = Error: No such container: f31cb2d92da8063cf673ee3c298123279607e6a194ca3a5b7fe25e6db71dcc3e; Skipping pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:10 minikube localkube[3201]: I1201 16:03:10.494493 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:10 minikube localkube[3201]: I1201 16:03:10.494598 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:03:10 minikube localkube[3201]: E1201 16:03:10.494639 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:23 minikube localkube[3201]: I1201 16:03:23.009920 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:23 minikube localkube[3201]: I1201 16:03:23.010297 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:03:23 minikube localkube[3201]: E1201 16:03:23.010329 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:23 minikube localkube[3201]: E1201 16:03:23.199070 3201 remote_runtime.go:332] ExecSync 55324272a32520172dabd4543d7aed15224f983b94efb55ede437083ada4e957 'redis-cli ping' from runtime service failed: rpc error: code = Unknown desc = container not running (55324272a32520172dabd4543d7aed15224f983b94efb55ede437083ada4e957) | |
Dec 01 16:03:23 minikube localkube[3201]: E1201 16:03:23.201277 3201 remote_runtime.go:332] ExecSync 55324272a32520172dabd4543d7aed15224f983b94efb55ede437083ada4e957 'redis-cli ping' from runtime service failed: rpc error: code = Unknown desc = container not running (55324272a32520172dabd4543d7aed15224f983b94efb55ede437083ada4e957) | |
Dec 01 16:03:23 minikube localkube[3201]: E1201 16:03:23.203043 3201 remote_runtime.go:332] ExecSync 55324272a32520172dabd4543d7aed15224f983b94efb55ede437083ada4e957 'redis-cli ping' from runtime service failed: rpc error: code = Unknown desc = container not running (55324272a32520172dabd4543d7aed15224f983b94efb55ede437083ada4e957) | |
Dec 01 16:03:23 minikube localkube[3201]: I1201 16:03:23.639703 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:03:23 minikube localkube[3201]: I1201 16:03:23.639861 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:23 minikube localkube[3201]: I1201 16:03:23.639959 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:03:23 minikube localkube[3201]: E1201 16:03:23.639994 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:24 minikube localkube[3201]: I1201 16:03:24.653801 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:03:24 minikube localkube[3201]: I1201 16:03:24.654813 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:24 minikube localkube[3201]: I1201 16:03:24.655063 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:03:24 minikube localkube[3201]: E1201 16:03:24.655407 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:38 minikube localkube[3201]: I1201 16:03:38.008898 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:03:38 minikube localkube[3201]: I1201 16:03:38.009062 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:38 minikube localkube[3201]: I1201 16:03:38.009181 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:03:38 minikube localkube[3201]: E1201 16:03:38.009212 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:39 minikube localkube[3201]: I1201 16:03:39.010022 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:39 minikube localkube[3201]: I1201 16:03:39.010246 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:03:39 minikube localkube[3201]: E1201 16:03:39.010290 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:44 minikube localkube[3201]: E1201 16:03:44.290256 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:03:50 minikube localkube[3201]: I1201 16:03:50.012577 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:50 minikube localkube[3201]: I1201 16:03:50.012975 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:03:50 minikube localkube[3201]: E1201 16:03:50.013153 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:52 minikube localkube[3201]: I1201 16:03:52.013682 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:03:52 minikube localkube[3201]: I1201 16:03:52.013770 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:03:52 minikube localkube[3201]: I1201 16:03:52.013846 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:03:52 minikube localkube[3201]: E1201 16:03:52.013866 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:03 minikube localkube[3201]: I1201 16:04:03.008097 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:04:03 minikube localkube[3201]: I1201 16:04:03.009807 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:03 minikube localkube[3201]: I1201 16:04:03.010408 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:03 minikube localkube[3201]: E1201 16:04:03.010676 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:04 minikube localkube[3201]: I1201 16:04:04.011732 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:04 minikube localkube[3201]: I1201 16:04:04.012363 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:04 minikube localkube[3201]: E1201 16:04:04.012745 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:17 minikube localkube[3201]: I1201 16:04:17.009756 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:17 minikube localkube[3201]: I1201 16:04:17.010463 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:17 minikube localkube[3201]: E1201 16:04:17.010706 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:18 minikube localkube[3201]: I1201 16:04:18.008626 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:04:18 minikube localkube[3201]: I1201 16:04:18.008928 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:18 minikube localkube[3201]: I1201 16:04:18.009120 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:18 minikube localkube[3201]: E1201 16:04:18.009168 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:29 minikube localkube[3201]: I1201 16:04:29.009797 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:29 minikube localkube[3201]: I1201 16:04:29.010376 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:29 minikube localkube[3201]: E1201 16:04:29.010706 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:30 minikube localkube[3201]: I1201 16:04:30.008931 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:04:30 minikube localkube[3201]: I1201 16:04:30.009148 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:30 minikube localkube[3201]: I1201 16:04:30.009321 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:30 minikube localkube[3201]: E1201 16:04:30.009367 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:36 minikube localkube[3201]: store.index: compact 1348 | |
Dec 01 16:04:36 minikube localkube[3201]: finished scheduled compaction at 1348 (took 10.242299ms) | |
Dec 01 16:04:42 minikube localkube[3201]: I1201 16:04:42.008175 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:04:42 minikube localkube[3201]: I1201 16:04:42.008354 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:42 minikube localkube[3201]: I1201 16:04:42.008474 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:42 minikube localkube[3201]: E1201 16:04:42.008506 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:44 minikube localkube[3201]: E1201 16:04:44.290840 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:04:45 minikube localkube[3201]: I1201 16:04:45.009309 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:45 minikube localkube[3201]: I1201 16:04:45.009794 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:45 minikube localkube[3201]: E1201 16:04:45.009960 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:57 minikube localkube[3201]: I1201 16:04:57.008795 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:04:57 minikube localkube[3201]: I1201 16:04:57.008995 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:57 minikube localkube[3201]: I1201 16:04:57.009077 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:57 minikube localkube[3201]: E1201 16:04:57.009100 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:57 minikube localkube[3201]: I1201 16:04:57.011343 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:04:57 minikube localkube[3201]: I1201 16:04:57.011881 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:04:57 minikube localkube[3201]: E1201 16:04:57.012358 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:08 minikube localkube[3201]: I1201 16:05:08.008427 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:05:08 minikube localkube[3201]: I1201 16:05:08.008752 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:08 minikube localkube[3201]: I1201 16:05:08.008830 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:08 minikube localkube[3201]: E1201 16:05:08.008852 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:08 minikube localkube[3201]: I1201 16:05:08.009703 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:08 minikube localkube[3201]: I1201 16:05:08.009868 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:08 minikube localkube[3201]: E1201 16:05:08.009909 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:19 minikube localkube[3201]: I1201 16:05:19.007976 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:05:19 minikube localkube[3201]: I1201 16:05:19.008185 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:19 minikube localkube[3201]: I1201 16:05:19.008328 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:19 minikube localkube[3201]: E1201 16:05:19.008367 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:19 minikube localkube[3201]: I1201 16:05:19.010023 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:19 minikube localkube[3201]: I1201 16:05:19.010106 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:19 minikube localkube[3201]: E1201 16:05:19.010137 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:30 minikube localkube[3201]: I1201 16:05:30.010934 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:30 minikube localkube[3201]: I1201 16:05:30.011129 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:30 minikube localkube[3201]: E1201 16:05:30.011156 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:34 minikube localkube[3201]: I1201 16:05:34.009036 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:05:34 minikube localkube[3201]: I1201 16:05:34.009147 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:34 minikube localkube[3201]: I1201 16:05:34.009226 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:34 minikube localkube[3201]: E1201 16:05:34.009286 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:44 minikube localkube[3201]: E1201 16:05:44.291268 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:05:45 minikube localkube[3201]: I1201 16:05:45.009059 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:45 minikube localkube[3201]: I1201 16:05:45.009182 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:45 minikube localkube[3201]: E1201 16:05:45.009210 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:46 minikube localkube[3201]: I1201 16:05:46.008389 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:05:46 minikube localkube[3201]: I1201 16:05:46.010359 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:46 minikube localkube[3201]: I1201 16:05:46.010554 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:46 minikube localkube[3201]: E1201 16:05:46.010610 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:58 minikube localkube[3201]: I1201 16:05:58.009646 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:05:58 minikube localkube[3201]: I1201 16:05:58.009731 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:05:58 minikube localkube[3201]: E1201 16:05:58.009751 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:01 minikube localkube[3201]: I1201 16:06:01.007827 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:06:01 minikube localkube[3201]: I1201 16:06:01.009461 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:01 minikube localkube[3201]: I1201 16:06:01.009645 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:06:01 minikube localkube[3201]: E1201 16:06:01.009693 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:10 minikube localkube[3201]: I1201 16:06:10.009675 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:10 minikube localkube[3201]: I1201 16:06:10.009973 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:06:10 minikube localkube[3201]: E1201 16:06:10.010117 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:12 minikube localkube[3201]: I1201 16:06:12.008799 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:06:12 minikube localkube[3201]: I1201 16:06:12.008969 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:12 minikube localkube[3201]: I1201 16:06:12.009083 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:06:12 minikube localkube[3201]: E1201 16:06:12.009114 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:18 minikube localkube[3201]: E1201 16:06:18.048528 3201 upgradeaware.go:310] Error proxying data from client to backend: read tcp 192.168.99.100:8443->192.168.99.1:41728: read: connection reset by peer | |
Dec 01 16:06:23 minikube localkube[3201]: I1201 16:06:23.008704 3201 kuberuntime_manager.go:499] Container {Name:lazy-dachshund-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:06:23 minikube localkube[3201]: I1201 16:06:23.008839 3201 kuberuntime_manager.go:738] checking backoff for container "lazy-dachshund-redis" in pod "lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:23 minikube localkube[3201]: I1201 16:06:23.008924 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058) | |
Dec 01 16:06:23 minikube localkube[3201]: E1201 16:06:23.008948 3201 pod_workers.go:182] Error syncing pod c872ac21-d6af-11e7-9596-080027aac058 ("lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "lazy-dachshund-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=lazy-dachshund-redis pod=lazy-dachshund-redis-c444c8957-x2jpp_default(c872ac21-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:26 minikube localkube[3201]: I1201 16:06:26.010072 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:26 minikube localkube[3201]: I1201 16:06:26.010685 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:06:26 minikube localkube[3201]: E1201 16:06:26.011075 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:30 minikube localkube[3201]: I1201 16:06:30.216178 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-elixir", UID:"c86ae076-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1856", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set lazy-dachshund-elixir-d64574bd4 to 0 | |
Dec 01 16:06:30 minikube localkube[3201]: I1201 16:06:30.227720 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-elixir-d64574bd4", UID:"c86bbb16-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1857", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-elixir-d64574bd4-srxhb | |
Dec 01 16:06:30 minikube localkube[3201]: I1201 16:06:30.227959 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-elixir-d64574bd4", UID:"c86bbb16-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1857", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-elixir-d64574bd4-xqm8j | |
Dec 01 16:06:33 minikube localkube[3201]: I1201 16:06:33.306600 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-postgresql", UID:"c86ba9cc-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1875", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set lazy-dachshund-postgresql-cfbf6fbb7 to 0 | |
Dec 01 16:06:33 minikube localkube[3201]: I1201 16:06:33.314245 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-postgresql-cfbf6fbb7", UID:"c86c2a32-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1876", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-postgresql-cfbf6fbb7-h5n6f | |
Dec 01 16:06:33 minikube localkube[3201]: E1201 16:06:33.628746 3201 remote_runtime.go:332] ExecSync a3870d935a007d0dc26d15cc8cad125e9961c8a6ac623e52f89f80d7a37f1055 'sh -c exec pg_isready --host $POD_IP' from runtime service failed: rpc error: code = Unknown desc = container not running (a3870d935a007d0dc26d15cc8cad125e9961c8a6ac623e52f89f80d7a37f1055) | |
Dec 01 16:06:33 minikube localkube[3201]: E1201 16:06:33.630109 3201 remote_runtime.go:332] ExecSync a3870d935a007d0dc26d15cc8cad125e9961c8a6ac623e52f89f80d7a37f1055 'sh -c exec pg_isready --host $POD_IP' from runtime service failed: rpc error: code = Unknown desc = container not running (a3870d935a007d0dc26d15cc8cad125e9961c8a6ac623e52f89f80d7a37f1055) | |
Dec 01 16:06:33 minikube localkube[3201]: E1201 16:06:33.631145 3201 remote_runtime.go:332] ExecSync a3870d935a007d0dc26d15cc8cad125e9961c8a6ac623e52f89f80d7a37f1055 'sh -c exec pg_isready --host $POD_IP' from runtime service failed: rpc error: code = Unknown desc = container not running (a3870d935a007d0dc26d15cc8cad125e9961c8a6ac623e52f89f80d7a37f1055) | |
Dec 01 16:06:33 minikube localkube[3201]: W1201 16:06:33.631188 3201 prober.go:98] No ref for container "docker://a3870d935a007d0dc26d15cc8cad125e9961c8a6ac623e52f89f80d7a37f1055" (lazy-dachshund-postgresql-cfbf6fbb7-h5n6f_default(c86d57b5-d6af-11e7-9596-080027aac058):lazy-dachshund-postgresql) | |
Dec 01 16:06:34 minikube localkube[3201]: W1201 16:06:34.442425 3201 pod_container_deletor.go:77] Container "a96f1bd7c246de2c509449339bf2312199b7321cdc7b6e5835014bb4f0e83834" not found in pod's containers | |
Dec 01 16:06:36 minikube localkube[3201]: I1201 16:06:36.447857 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86d57b5-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c86d57b5-d6af-11e7-9596-080027aac058" (UID: "c86d57b5-d6af-11e7-9596-080027aac058") | |
Dec 01 16:06:36 minikube localkube[3201]: I1201 16:06:36.447908 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "data" (UniqueName: "kubernetes.io/host-path/c86d57b5-d6af-11e7-9596-080027aac058-pvc-c857ca9e-d6af-11e7-9596-080027aac058") pod "c86d57b5-d6af-11e7-9596-080027aac058" (UID: "c86d57b5-d6af-11e7-9596-080027aac058") | |
Dec 01 16:06:36 minikube localkube[3201]: I1201 16:06:36.447954 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c86d57b5-d6af-11e7-9596-080027aac058-pvc-c857ca9e-d6af-11e7-9596-080027aac058" (OuterVolumeSpecName: "data") pod "c86d57b5-d6af-11e7-9596-080027aac058" (UID: "c86d57b5-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "pvc-c857ca9e-d6af-11e7-9596-080027aac058". PluginName "kubernetes.io/host-path", VolumeGidValue "" | |
Dec 01 16:06:36 minikube localkube[3201]: I1201 16:06:36.462189 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86d57b5-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c86d57b5-d6af-11e7-9596-080027aac058" (UID: "c86d57b5-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:06:36 minikube localkube[3201]: I1201 16:06:36.482911 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-redis", UID:"c86c3804-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1892", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set lazy-dachshund-redis-c444c8957 to 0 | |
Dec 01 16:06:36 minikube localkube[3201]: I1201 16:06:36.506982 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-redis-c444c8957", UID:"c86e5d51-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1893", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-redis-c444c8957-x2jpp | |
Dec 01 16:06:36 minikube localkube[3201]: I1201 16:06:36.548140 3201 reconciler.go:290] Volume detached for volume "pvc-c857ca9e-d6af-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/c86d57b5-d6af-11e7-9596-080027aac058-pvc-c857ca9e-d6af-11e7-9596-080027aac058") on node "minikube" DevicePath "" | |
Dec 01 16:06:36 minikube localkube[3201]: I1201 16:06:36.548361 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86d57b5-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:06:38 minikube localkube[3201]: I1201 16:06:38.008891 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:38 minikube localkube[3201]: I1201 16:06:38.009240 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058) | |
Dec 01 16:06:38 minikube localkube[3201]: E1201 16:06:38.009414 3201 pod_workers.go:182] Error syncing pod c874f93f-d6af-11e7-9596-080027aac058 ("lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=lazy-dachshund-ruby-rpush-89968d64-qkx9l_default(c874f93f-d6af-11e7-9596-080027aac058)" | |
Dec 01 16:06:38 minikube localkube[3201]: I1201 16:06:38.455926 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c872ac21-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c872ac21-d6af-11e7-9596-080027aac058" (UID: "c872ac21-d6af-11e7-9596-080027aac058") | |
Dec 01 16:06:38 minikube localkube[3201]: I1201 16:06:38.455989 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "redis-data" (UniqueName: "kubernetes.io/host-path/c872ac21-d6af-11e7-9596-080027aac058-pvc-c8582af6-d6af-11e7-9596-080027aac058") pod "c872ac21-d6af-11e7-9596-080027aac058" (UID: "c872ac21-d6af-11e7-9596-080027aac058") | |
Dec 01 16:06:38 minikube localkube[3201]: I1201 16:06:38.456018 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c872ac21-d6af-11e7-9596-080027aac058-pvc-c8582af6-d6af-11e7-9596-080027aac058" (OuterVolumeSpecName: "redis-data") pod "c872ac21-d6af-11e7-9596-080027aac058" (UID: "c872ac21-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "pvc-c8582af6-d6af-11e7-9596-080027aac058". PluginName "kubernetes.io/host-path", VolumeGidValue "" | |
Dec 01 16:06:38 minikube localkube[3201]: I1201 16:06:38.464125 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c872ac21-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c872ac21-d6af-11e7-9596-080027aac058" (UID: "c872ac21-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:06:38 minikube localkube[3201]: W1201 16:06:38.486563 3201 pod_container_deletor.go:77] Container "0ca95ac190d895128f1faf0ad3262972b1fed63192c2fa72ba0399594973ff67" not found in pod's containers | |
Dec 01 16:06:38 minikube localkube[3201]: I1201 16:06:38.557047 3201 reconciler.go:290] Volume detached for volume "pvc-c8582af6-d6af-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/c872ac21-d6af-11e7-9596-080027aac058-pvc-c8582af6-d6af-11e7-9596-080027aac058") on node "minikube" DevicePath "" | |
Dec 01 16:06:38 minikube localkube[3201]: I1201 16:06:38.557075 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c872ac21-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:06:39 minikube localkube[3201]: I1201 16:06:39.550286 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-ruby-rails", UID:"c86dfce8-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1908", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set lazy-dachshund-ruby-rails-67968d7457 to 0 | |
Dec 01 16:06:39 minikube localkube[3201]: I1201 16:06:39.566892 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-rails-67968d7457", UID:"c86fb15a-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1909", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-ruby-rails-67968d7457-ppqtf | |
Dec 01 16:06:39 minikube localkube[3201]: I1201 16:06:39.567085 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-rails-67968d7457", UID:"c86fb15a-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1909", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-ruby-rails-67968d7457-854tw | |
Dec 01 16:06:42 minikube localkube[3201]: I1201 16:06:42.634352 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-ruby-rpush", UID:"c86f8dd4-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1925", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set lazy-dachshund-ruby-rpush-89968d64 to 0 | |
Dec 01 16:06:42 minikube localkube[3201]: I1201 16:06:42.640001 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-rpush-89968d64", UID:"c872cd75-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1926", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-ruby-rpush-89968d64-qkx9l | |
Dec 01 16:06:44 minikube localkube[3201]: E1201 16:06:44.291914 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:06:44 minikube localkube[3201]: I1201 16:06:44.479425 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "lazy-dachshund-ruby-apns" (UniqueName: "kubernetes.io/configmap/c874f93f-d6af-11e7-9596-080027aac058-lazy-dachshund-ruby-apns") pod "c874f93f-d6af-11e7-9596-080027aac058" (UID: "c874f93f-d6af-11e7-9596-080027aac058") | |
Dec 01 16:06:44 minikube localkube[3201]: I1201 16:06:44.479727 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c874f93f-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c874f93f-d6af-11e7-9596-080027aac058" (UID: "c874f93f-d6af-11e7-9596-080027aac058") | |
Dec 01 16:06:44 minikube localkube[3201]: I1201 16:06:44.480241 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c874f93f-d6af-11e7-9596-080027aac058-lazy-dachshund-ruby-apns" (OuterVolumeSpecName: "lazy-dachshund-ruby-apns") pod "c874f93f-d6af-11e7-9596-080027aac058" (UID: "c874f93f-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "lazy-dachshund-ruby-apns". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:06:44 minikube localkube[3201]: I1201 16:06:44.490115 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c874f93f-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c874f93f-d6af-11e7-9596-080027aac058" (UID: "c874f93f-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:06:44 minikube localkube[3201]: W1201 16:06:44.564398 3201 pod_container_deletor.go:77] Container "97d385a0c355ef2892036ac11a4b56510a3996888ca0257b788d9ab58bed4f02" not found in pod's containers | |
Dec 01 16:06:44 minikube localkube[3201]: I1201 16:06:44.580094 3201 reconciler.go:290] Volume detached for volume "lazy-dachshund-ruby-apns" (UniqueName: "kubernetes.io/configmap/c874f93f-d6af-11e7-9596-080027aac058-lazy-dachshund-ruby-apns") on node "minikube" DevicePath "" | |
Dec 01 16:06:44 minikube localkube[3201]: I1201 16:06:44.580351 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c874f93f-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:06:45 minikube localkube[3201]: W1201 16:06:45.711640 3201 docker_sandbox.go:197] Both sandbox container and checkpoint for id "97d385a0c355ef2892036ac11a4b56510a3996888ca0257b788d9ab58bed4f02" could not be found. Proceed without further sandbox information. | |
Dec 01 16:06:45 minikube localkube[3201]: E1201 16:06:45.713349 3201 docker_sandbox.go:240] Failed to stop sandbox "97d385a0c355ef2892036ac11a4b56510a3996888ca0257b788d9ab58bed4f02": Error response from daemon: No such container: 97d385a0c355ef2892036ac11a4b56510a3996888ca0257b788d9ab58bed4f02 | |
Dec 01 16:06:45 minikube localkube[3201]: I1201 16:06:45.725313 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"lazy-dachshund-ruby-sidekiq", UID:"c871f1f8-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1943", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set lazy-dachshund-ruby-sidekiq-645666b64d to 0 | |
Dec 01 16:06:45 minikube localkube[3201]: I1201 16:06:45.732527 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-sidekiq-645666b64d", UID:"c873a66f-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1944", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-ruby-sidekiq-645666b64d-jrzfm | |
Dec 01 16:06:45 minikube localkube[3201]: I1201 16:06:45.732578 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"lazy-dachshund-ruby-sidekiq-645666b64d", UID:"c873a66f-d6af-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"1944", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: lazy-dachshund-ruby-sidekiq-645666b64d-wnvc9 | |
Dec 01 16:07:01 minikube localkube[3201]: E1201 16:07:01.779655 3201 kuberuntime_container.go:66] Can't make a ref to pod "lazy-dachshund-elixir-d64574bd4-srxhb_default(c86e0efa-d6af-11e7-9596-080027aac058)", container init-postgres: selfLink was empty, can't make reference | |
Dec 01 16:07:01 minikube localkube[3201]: E1201 16:07:01.779655 3201 kuberuntime_container.go:66] Can't make a ref to pod "lazy-dachshund-elixir-d64574bd4-xqm8j_default(c86c60a8-d6af-11e7-9596-080027aac058)", container init-postgres: selfLink was empty, can't make reference | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.666909 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "lazy-dachshund-elixir-apns" (UniqueName: "kubernetes.io/configmap/c86e0efa-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-apns") pod "c86e0efa-d6af-11e7-9596-080027aac058" (UID: "c86e0efa-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.667413 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86c60a8-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c86c60a8-d6af-11e7-9596-080027aac058" (UID: "c86c60a8-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.667873 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86e0efa-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c86e0efa-d6af-11e7-9596-080027aac058" (UID: "c86e0efa-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.668272 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "lazy-dachshund-elixir-apns" (UniqueName: "kubernetes.io/configmap/c86c60a8-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-apns") pod "c86c60a8-d6af-11e7-9596-080027aac058" (UID: "c86c60a8-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.668557 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "lazy-dachshund-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/c86e0efa-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-ejabberd") pod "c86e0efa-d6af-11e7-9596-080027aac058" (UID: "c86e0efa-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.668881 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "lazy-dachshund-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/c86c60a8-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-ejabberd") pod "c86c60a8-d6af-11e7-9596-080027aac058" (UID: "c86c60a8-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.667476 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c86e0efa-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-apns" (OuterVolumeSpecName: "lazy-dachshund-elixir-apns") pod "c86e0efa-d6af-11e7-9596-080027aac058" (UID: "c86e0efa-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "lazy-dachshund-elixir-apns". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.679125 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c86c60a8-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-ejabberd" (OuterVolumeSpecName: "lazy-dachshund-elixir-ejabberd") pod "c86c60a8-d6af-11e7-9596-080027aac058" (UID: "c86c60a8-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "lazy-dachshund-elixir-ejabberd". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.679624 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c86c60a8-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-apns" (OuterVolumeSpecName: "lazy-dachshund-elixir-apns") pod "c86c60a8-d6af-11e7-9596-080027aac058" (UID: "c86c60a8-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "lazy-dachshund-elixir-apns". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.679739 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c86e0efa-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-ejabberd" (OuterVolumeSpecName: "lazy-dachshund-elixir-ejabberd") pod "c86e0efa-d6af-11e7-9596-080027aac058" (UID: "c86e0efa-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "lazy-dachshund-elixir-ejabberd". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.682820 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86c60a8-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c86c60a8-d6af-11e7-9596-080027aac058" (UID: "c86c60a8-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.684848 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86e0efa-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c86e0efa-d6af-11e7-9596-080027aac058" (UID: "c86e0efa-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:07:02 minikube localkube[3201]: W1201 16:07:02.760054 3201 pod_container_deletor.go:77] Container "055570fd4a264765841294119f00c754ec2aae0ee2cf4d66ad16722f33a3a854" not found in pod's containers | |
Dec 01 16:07:02 minikube localkube[3201]: W1201 16:07:02.763402 3201 pod_container_deletor.go:77] Container "12cbff2d879100793bde55a59cba96fb73bf6d5aab457ed7dc4f6ff29fb24149" not found in pod's containers | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.771986 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86e0efa-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.772215 3201 reconciler.go:290] Volume detached for volume "lazy-dachshund-elixir-apns" (UniqueName: "kubernetes.io/configmap/c86c60a8-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-apns") on node "minikube" DevicePath "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.772327 3201 reconciler.go:290] Volume detached for volume "lazy-dachshund-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/c86e0efa-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-ejabberd") on node "minikube" DevicePath "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.772449 3201 reconciler.go:290] Volume detached for volume "lazy-dachshund-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/c86c60a8-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-ejabberd") on node "minikube" DevicePath "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.772558 3201 reconciler.go:290] Volume detached for volume "lazy-dachshund-elixir-apns" (UniqueName: "kubernetes.io/configmap/c86e0efa-d6af-11e7-9596-080027aac058-lazy-dachshund-elixir-apns") on node "minikube" DevicePath "" | |
Dec 01 16:07:02 minikube localkube[3201]: I1201 16:07:02.772673 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c86c60a8-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:07:03 minikube localkube[3201]: E1201 16:07:03.717911 3201 kuberuntime_container.go:66] Can't make a ref to pod "lazy-dachshund-elixir-d64574bd4-xqm8j_default(c86c60a8-d6af-11e7-9596-080027aac058)", container init-postgres: selfLink was empty, can't make reference | |
Dec 01 16:07:03 minikube localkube[3201]: E1201 16:07:03.719029 3201 kuberuntime_container.go:66] Can't make a ref to pod "lazy-dachshund-elixir-d64574bd4-srxhb_default(c86e0efa-d6af-11e7-9596-080027aac058)", container init-postgres: selfLink was empty, can't make reference | |
Dec 01 16:07:09 minikube localkube[3201]: W1201 16:07:09.856065 3201 pod_container_deletor.go:77] Container "33e1aacdbc95060618d4a05c5597ae2d20c64c290f33cb08163fae75d827014d" not found in pod's containers | |
Dec 01 16:07:10 minikube localkube[3201]: I1201 16:07:10.006626 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c872d7ba-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c872d7ba-d6af-11e7-9596-080027aac058" (UID: "c872d7ba-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:10 minikube localkube[3201]: I1201 16:07:10.015861 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c872d7ba-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c872d7ba-d6af-11e7-9596-080027aac058" (UID: "c872d7ba-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:07:10 minikube localkube[3201]: I1201 16:07:10.107579 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c872d7ba-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:07:10 minikube localkube[3201]: W1201 16:07:10.903531 3201 pod_container_deletor.go:77] Container "15005d11c83cae349167be9385c17b49dc59e6aad5192515687b70261fd5e168" not found in pod's containers | |
Dec 01 16:07:12 minikube localkube[3201]: I1201 16:07:12.017934 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c875e16b-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c875e16b-d6af-11e7-9596-080027aac058" (UID: "c875e16b-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:12 minikube localkube[3201]: I1201 16:07:12.031872 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c875e16b-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c875e16b-d6af-11e7-9596-080027aac058" (UID: "c875e16b-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:07:12 minikube localkube[3201]: I1201 16:07:12.118890 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c875e16b-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:07:16 minikube localkube[3201]: W1201 16:07:16.076048 3201 pod_container_deletor.go:77] Container "98cb952ff9137e2900faa0e527b0224d4214e4362033d210f31259db6adb2f3f" not found in pod's containers | |
Dec 01 16:07:16 minikube localkube[3201]: I1201 16:07:16.240549 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c87d9881-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c87d9881-d6af-11e7-9596-080027aac058" (UID: "c87d9881-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:16 minikube localkube[3201]: I1201 16:07:16.254684 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87d9881-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c87d9881-d6af-11e7-9596-080027aac058" (UID: "c87d9881-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:07:16 minikube localkube[3201]: I1201 16:07:16.340857 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c87d9881-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:07:17 minikube localkube[3201]: W1201 16:07:17.097562 3201 pod_container_deletor.go:77] Container "bc4348d64df1067e6309600192416c273b5859684f34fc3dccbee9f5de01590d" not found in pod's containers | |
Dec 01 16:07:18 minikube localkube[3201]: I1201 16:07:18.249121 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c87933ff-d6af-11e7-9596-080027aac058-default-token-ctrw6") pod "c87933ff-d6af-11e7-9596-080027aac058" (UID: "c87933ff-d6af-11e7-9596-080027aac058") | |
Dec 01 16:07:18 minikube localkube[3201]: I1201 16:07:18.258404 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c87933ff-d6af-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "c87933ff-d6af-11e7-9596-080027aac058" (UID: "c87933ff-d6af-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:07:18 minikube localkube[3201]: I1201 16:07:18.349554 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/c87933ff-d6af-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:07:44 minikube localkube[3201]: E1201 16:07:44.292437 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:08:44 minikube localkube[3201]: E1201 16:08:44.293216 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.690879 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.693728 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.694589 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.694788 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.694924 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.699533 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.722842 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.723147 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.723163 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.818120 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-elixir", UID:"f0f1aadd-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2192", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set cranky-zebra-elixir-dcb898cb8 to 2 | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.818388 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-postgresql", UID:"f0f290c3-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2193", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set cranky-zebra-postgresql-558bd4b587 to 1 | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.837835 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-redis", UID:"f0f31f37-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2195", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set cranky-zebra-redis-7c787d57b9 to 1 | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.843165 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-postgresql-558bd4b587", UID:"f0f354da-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2196", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-postgresql-558bd4b587-9zxnt | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.843400 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-elixir-dcb898cb8", UID:"f0f2c167-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-elixir-dcb898cb8-qkh49 | |
Dec 01 16:08:59 minikube localkube[3201]: E1201 16:08:59.844819 3201 factory.go:913] Error scheduling default cranky-zebra-postgresql-558bd4b587-9zxnt: PersistentVolumeClaim is not bound: "cranky-zebra-postgresql"; retrying | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.846296 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-postgresql-558bd4b587-9zxnt", UID:"f0f4db68-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2201", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "cranky-zebra-postgresql" | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.877220 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-redis-7c787d57b9", UID:"f0f4c7bf-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2202", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-redis-7c787d57b9-m794n | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.883942 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-elixir-dcb898cb8-qkh49", UID:"f0f5f2cb-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2200", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-elixir-dcb898cb8-qkh49 to minikube | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.888317 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-ruby-rails", UID:"f0f48ba7-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2198", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set cranky-zebra-ruby-rails-6899b9c5c to 2 | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.888475 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-elixir-dcb898cb8", UID:"f0f2c167-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-elixir-dcb898cb8-lslwp | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.895370 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cranky-zebra-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/f0f5f2cb-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-ejabberd") pod "cranky-zebra-elixir-dcb898cb8-qkh49" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.897958 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cranky-zebra-elixir-apns" (UniqueName: "kubernetes.io/configmap/f0f5f2cb-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-apns") pod "cranky-zebra-elixir-dcb898cb8-qkh49" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.898083 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f5f2cb-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-elixir-dcb898cb8-qkh49" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:08:59 minikube localkube[3201]: E1201 16:08:59.902951 3201 factory.go:913] Error scheduling default cranky-zebra-postgresql-558bd4b587-9zxnt: PersistentVolumeClaim is not bound: "cranky-zebra-postgresql"; retrying | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.903126 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-postgresql-558bd4b587-9zxnt", UID:"f0f4db68-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2211", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "cranky-zebra-postgresql" | |
Dec 01 16:08:59 minikube localkube[3201]: W1201 16:08:59.903235 3201 factory.go:928] Request for pod default/cranky-zebra-postgresql-558bd4b587-9zxnt already in flight, abandoning | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.916892 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-elixir-dcb898cb8-lslwp", UID:"f0f869e2-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2210", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-elixir-dcb898cb8-lslwp to minikube | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.939563 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-ruby-rpush", UID:"f0f6e1bc-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2204", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set cranky-zebra-ruby-rpush-f6dd7985b to 1 | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.960022 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-redis-7c787d57b9-m794n", UID:"f0f77e98-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2206", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-redis-7c787d57b9-m794n to minikube | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.962674 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-ruby-sidekiq", UID:"f0fdcfc3-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2214", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set cranky-zebra-ruby-sidekiq-5cddd59496 to 2 | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.962950 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-rails-6899b9c5c", UID:"f0f6c8cb-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2203", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-ruby-rails-6899b9c5c-956tr | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.963285 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-rpush-f6dd7985b", UID:"f100fbc9-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2220", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-ruby-rpush-f6dd7985b-wll6p | |
Dec 01 16:08:59 minikube localkube[3201]: E1201 16:08:59.974129 3201 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.976824 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-rails-6899b9c5c", UID:"f0f6c8cb-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2203", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-ruby-rails-6899b9c5c-d2l67 | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.996860 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-ruby-rpush-f6dd7985b-wll6p", UID:"f1040c87-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2230", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-ruby-rpush-f6dd7985b-wll6p to minikube | |
Dec 01 16:08:59 minikube localkube[3201]: I1201 16:08:59.997089 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-ruby-rails-6899b9c5c-956tr", UID:"f100e4ad-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2222", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-ruby-rails-6899b9c5c-956tr to minikube | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.010208 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-ruby-rails-6899b9c5c-d2l67", UID:"f10a96ad-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2235", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-ruby-rails-6899b9c5c-d2l67 to minikube | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.014561 3201 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-ra5d7e0ab65d547d393d7a8b53b9908b8.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-ra5d7e0ab65d547d393d7a8b53b9908b8.scope: no such file or directory | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.017233 3201 container.go:354] Failed to create summary reader for "/system.slice/run-ra5d7e0ab65d547d393d7a8b53b9908b8.scope": none of the resources are being tracked. | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.025488 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-sidekiq-5cddd59496", UID:"f1030c6a-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2228", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-ruby-sidekiq-5cddd59496-n9zqh | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.034120 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-ruby-sidekiq-5cddd59496-n9zqh", UID:"f10e9405-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2251", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-ruby-sidekiq-5cddd59496-n9zqh to minikube | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.043570 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-sidekiq-5cddd59496", UID:"f1030c6a-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"2228", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cranky-zebra-ruby-sidekiq-5cddd59496-hg7fn | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.051387 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-ruby-sidekiq-5cddd59496-hg7fn", UID:"f1141392-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2256", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-ruby-sidekiq-5cddd59496-hg7fn to minikube | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.101836 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f869e2-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-elixir-dcb898cb8-lslwp" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.102056 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cranky-zebra-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/f0f869e2-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-ejabberd") pod "cranky-zebra-elixir-dcb898cb8-lslwp" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.102161 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-f0db8cdd-d6b1-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/f0f77e98-d6b1-11e7-9596-080027aac058-pvc-f0db8cdd-d6b1-11e7-9596-080027aac058") pod "cranky-zebra-redis-7c787d57b9-m794n" (UID: "f0f77e98-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.102178 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f77e98-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-redis-7c787d57b9-m794n" (UID: "f0f77e98-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.102194 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cranky-zebra-elixir-apns" (UniqueName: "kubernetes.io/configmap/f0f869e2-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-apns") pod "cranky-zebra-elixir-dcb898cb8-lslwp" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.202356 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f100e4ad-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-ruby-rails-6899b9c5c-956tr" (UID: "f100e4ad-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.202396 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f10e9405-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-ruby-sidekiq-5cddd59496-n9zqh" (UID: "f10e9405-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.202439 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cranky-zebra-ruby-apns" (UniqueName: "kubernetes.io/configmap/f1040c87-d6b1-11e7-9596-080027aac058-cranky-zebra-ruby-apns") pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p" (UID: "f1040c87-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.202458 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f10a96ad-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-ruby-rails-6899b9c5c-d2l67" (UID: "f10a96ad-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.202487 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f1040c87-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p" (UID: "f1040c87-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.202544 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f1141392-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-ruby-sidekiq-5cddd59496-hg7fn" (UID: "f1141392-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.326976 3201 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-r5e29e5dc964544d3bb9f29afb260a5c6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-r5e29e5dc964544d3bb9f29afb260a5c6.scope: no such file or directory | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.327019 3201 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-r5e29e5dc964544d3bb9f29afb260a5c6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-r5e29e5dc964544d3bb9f29afb260a5c6.scope: no such file or directory | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.327044 3201 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r5e29e5dc964544d3bb9f29afb260a5c6.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r5e29e5dc964544d3bb9f29afb260a5c6.scope: no such file or directory | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.327056 3201 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-rf66e281d41aa45759108c9d624903b07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-rf66e281d41aa45759108c9d624903b07.scope: no such file or directory | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.327070 3201 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-rf66e281d41aa45759108c9d624903b07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-rf66e281d41aa45759108c9d624903b07.scope: no such file or directory | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.327080 3201 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-rf66e281d41aa45759108c9d624903b07.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-rf66e281d41aa45759108c9d624903b07.scope: no such file or directory | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.428482 3201 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-r3c7f60e662004a379a729d140bd3c7d9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-r3c7f60e662004a379a729d140bd3c7d9.scope: no such file or directory | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.428927 3201 container.go:354] Failed to create summary reader for "/system.slice/run-r170bf1f25f244896b0d4eba9e8ef56de.scope": none of the resources are being tracked. | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.429004 3201 container.go:354] Failed to create summary reader for "/system.slice/run-r7fc3f3f19b1d4b968bab75b0cf743488.scope": none of the resources are being tracked. | |
Dec 01 16:09:00 minikube localkube[3201]: W1201 16:09:00.429069 3201 container.go:354] Failed to create summary reader for "/system.slice/run-r3c7f60e662004a379a729d140bd3c7d9.scope": none of the resources are being tracked. | |
Dec 01 16:09:00 minikube localkube[3201]: I1201 16:09:00.851634 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cranky-zebra-postgresql-558bd4b587-9zxnt", UID:"f0f4db68-d6b1-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"2211", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned cranky-zebra-postgresql-558bd4b587-9zxnt to minikube | |
Dec 01 16:09:00 minikube localkube[3201]: E1201 16:09:00.858710 3201 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs | |
Dec 01 16:09:00 minikube localkube[3201]: E1201 16:09:00.952334 3201 upgradeaware.go:310] Error proxying data from client to backend: read tcp 192.168.99.100:8443->192.168.99.1:42260: read: connection reset by peer | |
Dec 01 16:09:03 minikube localkube[3201]: I1201 16:09:03.024942 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f4db68-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "cranky-zebra-postgresql-558bd4b587-9zxnt" (UID: "f0f4db68-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:03 minikube localkube[3201]: I1201 16:09:03.025238 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-f0db5074-d6b1-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/f0f4db68-d6b1-11e7-9596-080027aac058-pvc-f0db5074-d6b1-11e7-9596-080027aac058") pod "cranky-zebra-postgresql-558bd4b587-9zxnt" (UID: "f0f4db68-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:09:05 minikube localkube[3201]: E1201 16:09:05.886112 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:09:05 minikube localkube[3201]: E1201 16:09:05.888571 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:09:06 minikube localkube[3201]: I1201 16:09:06.512168 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:06 minikube localkube[3201]: E1201 16:09:06.520666 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:09:06 minikube localkube[3201]: E1201 16:09:06.520706 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:09:16 minikube localkube[3201]: W1201 16:09:16.904684 3201 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Dec 01 16:09:16 minikube localkube[3201]: W1201 16:09:16.910106 3201 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Dec 01 16:09:17 minikube localkube[3201]: W1201 16:09:17.385845 3201 container.go:354] Failed to create summary reader for "/kubepods/besteffort/podf1040c87-d6b1-11e7-9596-080027aac058/19ce2048472dbb07c1514616ea2e204dcd141452228545fecd633f32687019f6": none of the resources are being tracked. | |
Dec 01 16:09:19 minikube localkube[3201]: W1201 16:09:19.815168 3201 container.go:354] Failed to create summary reader for "/kubepods/besteffort/podf10a96ad-d6b1-11e7-9596-080027aac058/cf3eac22610e927708101e6b3dc4b51b33d823879f58fd127782b7569b38b46d": none of the resources are being tracked. | |
Dec 01 16:09:20 minikube localkube[3201]: I1201 16:09:20.009373 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:20 minikube localkube[3201]: E1201 16:09:20.012835 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:09:20 minikube localkube[3201]: E1201 16:09:20.013033 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:09:25 minikube localkube[3201]: I1201 16:09:25.796182 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:25 minikube localkube[3201]: I1201 16:09:25.797194 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:26 minikube localkube[3201]: W1201 16:09:26.931568 3201 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Dec 01 16:09:33 minikube localkube[3201]: I1201 16:09:33.007502 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:33 minikube localkube[3201]: E1201 16:09:33.010095 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:09:33 minikube localkube[3201]: E1201 16:09:33.010243 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:09:34 minikube localkube[3201]: W1201 16:09:34.868546 3201 container.go:354] Failed to create summary reader for "/kubepods/besteffort/podf10e9405-d6b1-11e7-9596-080027aac058/5b86f95a3ea32858f25e29e5b17dce2fb2c38515f024ef4f4911e45d2a5db27c": none of the resources are being tracked. | |
Dec 01 16:09:35 minikube localkube[3201]: I1201 16:09:35.952547 3201 kuberuntime_manager.go:499] Container {Name:ruby Image:quay.io/findaplayer/ruby-sidekiq:latest Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[{Prefix: ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-ruby,},Optional:nil,} SecretRef:nil}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:36 minikube localkube[3201]: store.index: compact 1703 | |
Dec 01 16:09:36 minikube localkube[3201]: finished scheduled compaction at 1703 (took 59.902567ms) | |
Dec 01 16:09:38 minikube localkube[3201]: I1201 16:09:38.031610 3201 kuberuntime_manager.go:499] Container {Name:ruby Image:quay.io/findaplayer/ruby-api:latest Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:3000 Protocol:TCP HostIP:}] EnvFrom:[{Prefix: ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-ruby,},Optional:nil,} SecretRef:nil}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status.json,Port:3000,Host:,Scheme:HTTP,HTTPHeaders:[{Host ruby-ruby}],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:39 minikube localkube[3201]: W1201 16:09:39.653107 3201 container.go:354] Failed to create summary reader for "/kubepods/besteffort/podf1141392-d6b1-11e7-9596-080027aac058/0665b57741a9f51abe58c9667eacd75eb01eaaabde924a3dfc595fd06b6ccf41": none of the resources are being tracked. | |
Dec 01 16:09:40 minikube localkube[3201]: I1201 16:09:40.089316 3201 kuberuntime_manager.go:499] Container {Name:ruby Image:quay.io/findaplayer/ruby-sidekiq:latest Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[{Prefix: ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-ruby,},Optional:nil,} SecretRef:nil}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:44 minikube localkube[3201]: E1201 16:09:44.294127 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:09:45 minikube localkube[3201]: I1201 16:09:45.165565 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:46 minikube localkube[3201]: I1201 16:09:46.568016 3201 kuberuntime_manager.go:499] Container {Name:ruby Image:quay.io/findaplayer/ruby-api:latest Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:3000 Protocol:TCP HostIP:}] EnvFrom:[{Prefix: ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-ruby,},Optional:nil,} SecretRef:nil}] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status.json,Port:3000,Host:,Scheme:HTTP,HTTPHeaders:[{Host ruby-ruby}],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:46 minikube localkube[3201]: E1201 16:09:46.768600 3201 remote_runtime.go:278] ContainerStatus "92f408eec787da0aa4f17093b4965e4ed7b5b62ec35d60334bee421da327715b" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 92f408eec787da0aa4f17093b4965e4ed7b5b62ec35d60334bee421da327715b | |
Dec 01 16:09:46 minikube localkube[3201]: E1201 16:09:46.769145 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "92f408eec787da0aa4f17093b4965e4ed7b5b62ec35d60334bee421da327715b": rpc error: code = Unknown desc = Error: No such container: 92f408eec787da0aa4f17093b4965e4ed7b5b62ec35d60334bee421da327715b; Skipping pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:46 minikube localkube[3201]: I1201 16:09:46.769542 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:46 minikube localkube[3201]: I1201 16:09:46.770054 3201 kuberuntime_manager.go:748] Back-off 10s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:09:46 minikube localkube[3201]: E1201 16:09:46.770549 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 10s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:47 minikube localkube[3201]: I1201 16:09:47.967411 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:47 minikube localkube[3201]: I1201 16:09:47.967789 3201 kuberuntime_manager.go:748] Back-off 10s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:09:47 minikube localkube[3201]: E1201 16:09:47.967903 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 10s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:48 minikube localkube[3201]: I1201 16:09:48.007685 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:48 minikube localkube[3201]: E1201 16:09:48.011459 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:09:48 minikube localkube[3201]: E1201 16:09:48.011672 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:09:49 minikube localkube[3201]: I1201 16:09:49.294832 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:49 minikube localkube[3201]: I1201 16:09:49.294937 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:49 minikube localkube[3201]: I1201 16:09:49.295026 3201 kuberuntime_manager.go:748] Back-off 10s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:09:49 minikube localkube[3201]: E1201 16:09:49.295047 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 10s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:58 minikube localkube[3201]: I1201 16:09:58.169170 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:09:58 minikube localkube[3201]: I1201 16:09:58.169389 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:09:58 minikube localkube[3201]: I1201 16:09:58.169490 3201 kuberuntime_manager.go:748] Back-off 10s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:09:58 minikube localkube[3201]: E1201 16:09:58.169515 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 10s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:00 minikube localkube[3201]: I1201 16:10:00.013033 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:01 minikube localkube[3201]: E1201 16:10:01.631562 3201 remote_runtime.go:278] ContainerStatus "85a80f269dc9cd0cec083d63d744b6777d0781f0f255686f90d3f0616afd5b58" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 85a80f269dc9cd0cec083d63d744b6777d0781f0f255686f90d3f0616afd5b58 | |
Dec 01 16:10:01 minikube localkube[3201]: E1201 16:10:01.632083 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "85a80f269dc9cd0cec083d63d744b6777d0781f0f255686f90d3f0616afd5b58": rpc error: code = Unknown desc = Error: No such container: 85a80f269dc9cd0cec083d63d744b6777d0781f0f255686f90d3f0616afd5b58; Skipping pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:01 minikube localkube[3201]: I1201 16:10:01.632462 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:01 minikube localkube[3201]: I1201 16:10:01.632956 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:10:01 minikube localkube[3201]: E1201 16:10:01.633525 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:02 minikube localkube[3201]: I1201 16:10:02.011200 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:02 minikube localkube[3201]: E1201 16:10:02.015727 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:10:02 minikube localkube[3201]: E1201 16:10:02.015888 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:10:11 minikube localkube[3201]: I1201 16:10:11.008898 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:11 minikube localkube[3201]: I1201 16:10:11.009109 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:14 minikube localkube[3201]: I1201 16:10:14.010435 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:14 minikube localkube[3201]: E1201 16:10:14.014648 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:10:14 minikube localkube[3201]: E1201 16:10:14.014785 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:10:15 minikube localkube[3201]: I1201 16:10:15.008803 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:15 minikube localkube[3201]: I1201 16:10:15.009064 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:10:15 minikube localkube[3201]: E1201 16:10:15.009173 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:26 minikube localkube[3201]: I1201 16:10:26.007826 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:26 minikube localkube[3201]: E1201 16:10:26.013093 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:10:26 minikube localkube[3201]: E1201 16:10:26.013174 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:10:30 minikube localkube[3201]: I1201 16:10:30.015400 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:31 minikube localkube[3201]: E1201 16:10:31.005557 3201 remote_runtime.go:278] ContainerStatus "98cfbd4706f8c3207701fd22721e4853527b1ba925e2822ebaeb6ce5afc16a21" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 98cfbd4706f8c3207701fd22721e4853527b1ba925e2822ebaeb6ce5afc16a21 | |
Dec 01 16:10:31 minikube localkube[3201]: E1201 16:10:31.005625 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "98cfbd4706f8c3207701fd22721e4853527b1ba925e2822ebaeb6ce5afc16a21": rpc error: code = Unknown desc = Error: No such container: 98cfbd4706f8c3207701fd22721e4853527b1ba925e2822ebaeb6ce5afc16a21; Skipping pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:31 minikube localkube[3201]: I1201 16:10:31.006319 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:31 minikube localkube[3201]: I1201 16:10:31.006903 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:10:31 minikube localkube[3201]: E1201 16:10:31.006950 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:35 minikube localkube[3201]: I1201 16:10:35.076402 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:35 minikube localkube[3201]: I1201 16:10:35.077275 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:35 minikube localkube[3201]: I1201 16:10:35.077584 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:10:35 minikube localkube[3201]: E1201 16:10:35.077781 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:38 minikube localkube[3201]: I1201 16:10:38.008620 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:38 minikube localkube[3201]: E1201 16:10:38.011119 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:10:38 minikube localkube[3201]: E1201 16:10:38.011272 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:10:38 minikube localkube[3201]: I1201 16:10:38.169423 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:38 minikube localkube[3201]: I1201 16:10:38.169535 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:38 minikube localkube[3201]: I1201 16:10:38.169626 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:10:38 minikube localkube[3201]: E1201 16:10:38.169648 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:44 minikube localkube[3201]: E1201 16:10:44.294364 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:10:46 minikube localkube[3201]: I1201 16:10:46.008855 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:46 minikube localkube[3201]: I1201 16:10:46.009287 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:10:46 minikube localkube[3201]: E1201 16:10:46.009441 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:52 minikube localkube[3201]: I1201 16:10:52.009282 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:52 minikube localkube[3201]: I1201 16:10:52.011548 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:52 minikube localkube[3201]: I1201 16:10:52.011761 3201 kuberuntime_manager.go:748] Back-off 20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:10:52 minikube localkube[3201]: E1201 16:10:52.011878 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:53 minikube localkube[3201]: I1201 16:10:53.008131 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:10:53 minikube localkube[3201]: E1201 16:10:53.012150 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:10:53 minikube localkube[3201]: E1201 16:10:53.012191 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:10:59 minikube localkube[3201]: I1201 16:10:59.009958 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:10:59 minikube localkube[3201]: I1201 16:10:59.010252 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:10:59 minikube localkube[3201]: E1201 16:10:59.010296 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:05 minikube localkube[3201]: I1201 16:11:05.008148 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:11:05 minikube localkube[3201]: E1201 16:11:05.013269 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:11:05 minikube localkube[3201]: E1201 16:11:05.013360 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:11:07 minikube localkube[3201]: I1201 16:11:07.008220 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:11:07 minikube localkube[3201]: I1201 16:11:07.008362 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:13 minikube localkube[3201]: I1201 16:11:13.010096 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:15 minikube localkube[3201]: E1201 16:11:15.734213 3201 remote_runtime.go:278] ContainerStatus "a58fee7ae45bfe6f37c7072864f41159d2be3ae3ae35ff4ce363a0fd872ce00b" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: a58fee7ae45bfe6f37c7072864f41159d2be3ae3ae35ff4ce363a0fd872ce00b | |
Dec 01 16:11:15 minikube localkube[3201]: E1201 16:11:15.734264 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "a58fee7ae45bfe6f37c7072864f41159d2be3ae3ae35ff4ce363a0fd872ce00b": rpc error: code = Unknown desc = Error: No such container: a58fee7ae45bfe6f37c7072864f41159d2be3ae3ae35ff4ce363a0fd872ce00b; Skipping pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:15 minikube localkube[3201]: I1201 16:11:15.734376 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:15 minikube localkube[3201]: I1201 16:11:15.734433 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:11:15 minikube localkube[3201]: E1201 16:11:15.734450 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:17 minikube localkube[3201]: I1201 16:11:17.008885 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:11:17 minikube localkube[3201]: E1201 16:11:17.013596 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:11:17 minikube localkube[3201]: E1201 16:11:17.014337 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:11:29 minikube localkube[3201]: I1201 16:11:29.930533 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:11:29 minikube localkube[3201]: I1201 16:11:29.932839 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:29 minikube localkube[3201]: I1201 16:11:29.933465 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:11:29 minikube localkube[3201]: E1201 16:11:29.933859 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:31 minikube localkube[3201]: I1201 16:11:31.011142 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:31 minikube localkube[3201]: I1201 16:11:31.011309 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:11:31 minikube localkube[3201]: E1201 16:11:31.011355 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:32 minikube localkube[3201]: I1201 16:11:32.009101 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:11:32 minikube localkube[3201]: E1201 16:11:32.013498 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:11:32 minikube localkube[3201]: E1201 16:11:32.013606 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:11:38 minikube localkube[3201]: I1201 16:11:38.170148 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:11:38 minikube localkube[3201]: I1201 16:11:38.170579 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:38 minikube localkube[3201]: I1201 16:11:38.170753 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:11:38 minikube localkube[3201]: E1201 16:11:38.170813 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:44 minikube localkube[3201]: E1201 16:11:44.294933 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:11:46 minikube localkube[3201]: I1201 16:11:46.007771 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:11:46 minikube localkube[3201]: I1201 16:11:46.009515 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:46 minikube localkube[3201]: I1201 16:11:46.010140 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:11:46 minikube localkube[3201]: E1201 16:11:46.010184 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:46 minikube localkube[3201]: E1201 16:11:46.013521 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:11:46 minikube localkube[3201]: E1201 16:11:46.013558 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:11:54 minikube localkube[3201]: I1201 16:11:54.008382 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:11:54 minikube localkube[3201]: I1201 16:11:54.009527 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:54 minikube localkube[3201]: I1201 16:11:54.009743 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:11:54 minikube localkube[3201]: E1201 16:11:54.009797 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:57 minikube localkube[3201]: I1201 16:11:57.010331 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:11:57 minikube localkube[3201]: I1201 16:11:57.010539 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:11:57 minikube localkube[3201]: E1201 16:11:57.010636 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:01 minikube localkube[3201]: I1201 16:12:01.008166 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:01 minikube localkube[3201]: E1201 16:12:01.012606 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:12:01 minikube localkube[3201]: E1201 16:12:01.012637 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:12:09 minikube localkube[3201]: I1201 16:12:09.008818 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:09 minikube localkube[3201]: I1201 16:12:09.011084 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:09 minikube localkube[3201]: I1201 16:12:09.011587 3201 kuberuntime_manager.go:748] Back-off 40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:12:09 minikube localkube[3201]: E1201 16:12:09.012017 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:10 minikube localkube[3201]: I1201 16:12:10.010480 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:10 minikube localkube[3201]: I1201 16:12:10.011140 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:12:10 minikube localkube[3201]: E1201 16:12:10.011458 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:13 minikube localkube[3201]: I1201 16:12:13.009075 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:13 minikube localkube[3201]: E1201 16:12:13.014719 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:12:13 minikube localkube[3201]: E1201 16:12:13.014896 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:12:21 minikube localkube[3201]: I1201 16:12:21.008057 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:21 minikube localkube[3201]: I1201 16:12:21.008269 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:25 minikube localkube[3201]: I1201 16:12:25.014674 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:25 minikube localkube[3201]: I1201 16:12:25.015660 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:12:25 minikube localkube[3201]: E1201 16:12:25.016641 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:26 minikube localkube[3201]: I1201 16:12:26.009224 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:26 minikube localkube[3201]: E1201 16:12:26.012863 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:12:26 minikube localkube[3201]: E1201 16:12:26.013193 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:12:38 minikube localkube[3201]: I1201 16:12:38.007386 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:38 minikube localkube[3201]: E1201 16:12:38.011211 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:12:38 minikube localkube[3201]: E1201 16:12:38.011290 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:12:39 minikube localkube[3201]: I1201 16:12:39.008193 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:40 minikube localkube[3201]: E1201 16:12:40.861437 3201 remote_runtime.go:278] ContainerStatus "79ce8f27e066b741c5cd4e5466fc98137bc9d1e735b02a144906483106126706" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 79ce8f27e066b741c5cd4e5466fc98137bc9d1e735b02a144906483106126706 | |
Dec 01 16:12:40 minikube localkube[3201]: E1201 16:12:40.861663 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "79ce8f27e066b741c5cd4e5466fc98137bc9d1e735b02a144906483106126706": rpc error: code = Unknown desc = Error: No such container: 79ce8f27e066b741c5cd4e5466fc98137bc9d1e735b02a144906483106126706; Skipping pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:40 minikube localkube[3201]: I1201 16:12:40.861817 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:40 minikube localkube[3201]: I1201 16:12:40.861984 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:12:40 minikube localkube[3201]: E1201 16:12:40.862087 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:43 minikube localkube[3201]: I1201 16:12:43.914330 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:43 minikube localkube[3201]: I1201 16:12:43.915879 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:43 minikube localkube[3201]: I1201 16:12:43.916265 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:12:43 minikube localkube[3201]: E1201 16:12:43.916747 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:44 minikube localkube[3201]: E1201 16:12:44.295515 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:12:48 minikube localkube[3201]: I1201 16:12:48.169398 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:48 minikube localkube[3201]: I1201 16:12:48.169547 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:48 minikube localkube[3201]: I1201 16:12:48.169686 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:12:48 minikube localkube[3201]: E1201 16:12:48.169725 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:49 minikube localkube[3201]: I1201 16:12:49.008137 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:12:49 minikube localkube[3201]: E1201 16:12:49.011231 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:12:49 minikube localkube[3201]: E1201 16:12:49.011463 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:12:56 minikube localkube[3201]: I1201 16:12:56.011432 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:12:56 minikube localkube[3201]: I1201 16:12:56.011832 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:12:56 minikube localkube[3201]: E1201 16:12:56.011880 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:00 minikube localkube[3201]: I1201 16:13:00.010282 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:00 minikube localkube[3201]: I1201 16:13:00.012619 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:00 minikube localkube[3201]: I1201 16:13:00.013137 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:00 minikube localkube[3201]: E1201 16:13:00.013436 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:01 minikube localkube[3201]: I1201 16:13:01.008619 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:01 minikube localkube[3201]: E1201 16:13:01.015084 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:13:01 minikube localkube[3201]: E1201 16:13:01.015165 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:13:12 minikube localkube[3201]: I1201 16:13:12.009757 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:12 minikube localkube[3201]: I1201 16:13:12.011403 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:12 minikube localkube[3201]: I1201 16:13:12.011569 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:12 minikube localkube[3201]: E1201 16:13:12.011611 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:12 minikube localkube[3201]: I1201 16:13:12.011269 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:12 minikube localkube[3201]: I1201 16:13:12.011925 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:12 minikube localkube[3201]: E1201 16:13:12.011958 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:16 minikube localkube[3201]: I1201 16:13:16.008607 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:16 minikube localkube[3201]: E1201 16:13:16.014895 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:13:16 minikube localkube[3201]: E1201 16:13:16.015383 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:13:24 minikube localkube[3201]: I1201 16:13:24.009306 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:24 minikube localkube[3201]: I1201 16:13:24.009463 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:24 minikube localkube[3201]: I1201 16:13:24.009604 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:24 minikube localkube[3201]: E1201 16:13:24.009643 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:25 minikube localkube[3201]: I1201 16:13:25.009739 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:25 minikube localkube[3201]: I1201 16:13:25.009903 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:25 minikube localkube[3201]: E1201 16:13:25.009974 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:30 minikube localkube[3201]: I1201 16:13:30.009378 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:30 minikube localkube[3201]: E1201 16:13:30.015320 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:13:30 minikube localkube[3201]: E1201 16:13:30.015402 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:13:35 minikube localkube[3201]: I1201 16:13:35.009516 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:35 minikube localkube[3201]: I1201 16:13:35.010952 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:35 minikube localkube[3201]: I1201 16:13:35.011103 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:35 minikube localkube[3201]: E1201 16:13:35.011142 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:40 minikube localkube[3201]: I1201 16:13:40.011857 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:40 minikube localkube[3201]: I1201 16:13:40.012205 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:40 minikube localkube[3201]: E1201 16:13:40.012373 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:44 minikube localkube[3201]: E1201 16:13:44.296717 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:13:45 minikube localkube[3201]: I1201 16:13:45.008395 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:45 minikube localkube[3201]: E1201 16:13:45.013368 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:13:45 minikube localkube[3201]: E1201 16:13:45.013699 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:13:51 minikube localkube[3201]: I1201 16:13:51.007651 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:51 minikube localkube[3201]: I1201 16:13:51.007930 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:51 minikube localkube[3201]: I1201 16:13:51.008069 3201 kuberuntime_manager.go:748] Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:51 minikube localkube[3201]: E1201 16:13:51.008105 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:53 minikube localkube[3201]: I1201 16:13:53.011314 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:53 minikube localkube[3201]: I1201 16:13:53.011450 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:13:53 minikube localkube[3201]: E1201 16:13:53.011494 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:13:56 minikube localkube[3201]: I1201 16:13:56.013056 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:13:56 minikube localkube[3201]: E1201 16:13:56.020307 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:13:56 minikube localkube[3201]: E1201 16:13:56.020470 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:14:06 minikube localkube[3201]: I1201 16:14:06.011904 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:14:06 minikube localkube[3201]: I1201 16:14:06.012118 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:08 minikube localkube[3201]: I1201 16:14:08.009514 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:08 minikube localkube[3201]: I1201 16:14:08.009589 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:14:08 minikube localkube[3201]: E1201 16:14:08.009610 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:09 minikube localkube[3201]: I1201 16:14:09.009444 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:14:09 minikube localkube[3201]: E1201 16:14:09.013298 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:14:09 minikube localkube[3201]: E1201 16:14:09.013372 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:14:22 minikube localkube[3201]: I1201 16:14:22.009902 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:14:22 minikube localkube[3201]: I1201 16:14:22.013984 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:22 minikube localkube[3201]: I1201 16:14:22.017179 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:14:22 minikube localkube[3201]: E1201 16:14:22.017293 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:22 minikube localkube[3201]: E1201 16:14:22.019139 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:14:22 minikube localkube[3201]: E1201 16:14:22.019460 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:14:29 minikube localkube[3201]: I1201 16:14:29.227037 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:14:29 minikube localkube[3201]: I1201 16:14:29.229195 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:29 minikube localkube[3201]: I1201 16:14:29.229666 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:14:29 minikube localkube[3201]: E1201 16:14:29.230369 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:36 minikube localkube[3201]: I1201 16:14:36.010283 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:36 minikube localkube[3201]: I1201 16:14:36.010868 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:14:36 minikube localkube[3201]: E1201 16:14:36.011189 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:36 minikube localkube[3201]: store.index: compact 2464 | |
Dec 01 16:14:36 minikube localkube[3201]: finished scheduled compaction at 2464 (took 30.237687ms) | |
Dec 01 16:14:37 minikube localkube[3201]: I1201 16:14:37.008247 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:14:37 minikube localkube[3201]: E1201 16:14:37.013448 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:14:37 minikube localkube[3201]: E1201 16:14:37.013505 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:14:38 minikube localkube[3201]: I1201 16:14:38.169501 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:14:38 minikube localkube[3201]: I1201 16:14:38.169693 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:38 minikube localkube[3201]: I1201 16:14:38.169859 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:14:38 minikube localkube[3201]: E1201 16:14:38.169933 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:44 minikube localkube[3201]: E1201 16:14:44.297447 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:14:47 minikube localkube[3201]: I1201 16:14:47.010494 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:47 minikube localkube[3201]: I1201 16:14:47.011291 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:14:47 minikube localkube[3201]: E1201 16:14:47.011712 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:49 minikube localkube[3201]: I1201 16:14:49.008083 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:14:49 minikube localkube[3201]: E1201 16:14:49.011366 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:14:49 minikube localkube[3201]: E1201 16:14:49.012008 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:14:50 minikube localkube[3201]: I1201 16:14:50.008068 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:14:50 minikube localkube[3201]: I1201 16:14:50.009328 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:50 minikube localkube[3201]: I1201 16:14:50.009496 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:14:50 minikube localkube[3201]: E1201 16:14:50.009528 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:58 minikube localkube[3201]: I1201 16:14:58.010937 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:14:58 minikube localkube[3201]: I1201 16:14:58.011722 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:14:58 minikube localkube[3201]: E1201 16:14:58.012468 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:02 minikube localkube[3201]: I1201 16:15:02.008473 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:02 minikube localkube[3201]: I1201 16:15:02.008629 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:02 minikube localkube[3201]: I1201 16:15:02.008727 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:02 minikube localkube[3201]: E1201 16:15:02.008753 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:02 minikube localkube[3201]: I1201 16:15:02.009168 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:02 minikube localkube[3201]: E1201 16:15:02.011176 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:15:02 minikube localkube[3201]: E1201 16:15:02.011341 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:15:14 minikube localkube[3201]: I1201 16:15:14.009981 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:14 minikube localkube[3201]: I1201 16:15:14.010438 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:14 minikube localkube[3201]: E1201 16:15:14.010949 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:15 minikube localkube[3201]: I1201 16:15:15.008199 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:15 minikube localkube[3201]: I1201 16:15:15.008816 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:15 minikube localkube[3201]: I1201 16:15:15.008934 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:15 minikube localkube[3201]: E1201 16:15:15.008966 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:17 minikube localkube[3201]: I1201 16:15:17.007951 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:17 minikube localkube[3201]: E1201 16:15:17.013946 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:15:17 minikube localkube[3201]: E1201 16:15:17.014999 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:15:29 minikube localkube[3201]: I1201 16:15:29.009568 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:29 minikube localkube[3201]: I1201 16:15:29.010794 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:29 minikube localkube[3201]: I1201 16:15:29.010567 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:29 minikube localkube[3201]: I1201 16:15:29.011229 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:29 minikube localkube[3201]: E1201 16:15:29.012744 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:29 minikube localkube[3201]: E1201 16:15:29.981881 3201 remote_runtime.go:278] ContainerStatus "c006e03a17ca8d349f55fdfb3687af747bcfebbd0b8af8595f4e1f8ab3b42179" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: c006e03a17ca8d349f55fdfb3687af747bcfebbd0b8af8595f4e1f8ab3b42179 | |
Dec 01 16:15:29 minikube localkube[3201]: E1201 16:15:29.981930 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "c006e03a17ca8d349f55fdfb3687af747bcfebbd0b8af8595f4e1f8ab3b42179": rpc error: code = Unknown desc = Error: No such container: c006e03a17ca8d349f55fdfb3687af747bcfebbd0b8af8595f4e1f8ab3b42179; Skipping pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:29 minikube localkube[3201]: I1201 16:15:29.982056 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:29 minikube localkube[3201]: I1201 16:15:29.982123 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:29 minikube localkube[3201]: E1201 16:15:29.982144 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:31 minikube localkube[3201]: I1201 16:15:31.010275 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:31 minikube localkube[3201]: E1201 16:15:31.016265 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:15:31 minikube localkube[3201]: E1201 16:15:31.016646 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:15:42 minikube localkube[3201]: I1201 16:15:42.010828 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:42 minikube localkube[3201]: I1201 16:15:42.010960 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:42 minikube localkube[3201]: E1201 16:15:42.010990 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:42 minikube localkube[3201]: I1201 16:15:42.013664 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:42 minikube localkube[3201]: I1201 16:15:42.016023 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:42 minikube localkube[3201]: I1201 16:15:42.016260 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:42 minikube localkube[3201]: I1201 16:15:42.016347 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:42 minikube localkube[3201]: E1201 16:15:42.016372 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:42 minikube localkube[3201]: E1201 16:15:42.018916 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:15:42 minikube localkube[3201]: E1201 16:15:42.018935 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:15:44 minikube localkube[3201]: E1201 16:15:44.297839 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:15:55 minikube localkube[3201]: I1201 16:15:55.007724 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:55 minikube localkube[3201]: I1201 16:15:55.008019 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:55 minikube localkube[3201]: I1201 16:15:55.008120 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:55 minikube localkube[3201]: E1201 16:15:55.008147 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:56 minikube localkube[3201]: I1201 16:15:56.010368 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:15:56 minikube localkube[3201]: E1201 16:15:56.012530 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:15:56 minikube localkube[3201]: E1201 16:15:56.012599 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:15:57 minikube localkube[3201]: I1201 16:15:57.011748 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:15:57 minikube localkube[3201]: I1201 16:15:57.011967 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:15:57 minikube localkube[3201]: E1201 16:15:57.012013 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:08 minikube localkube[3201]: I1201 16:16:08.007892 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:08 minikube localkube[3201]: I1201 16:16:08.009387 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:08 minikube localkube[3201]: I1201 16:16:08.009906 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:08 minikube localkube[3201]: E1201 16:16:08.010229 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:09 minikube localkube[3201]: I1201 16:16:09.007751 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:09 minikube localkube[3201]: E1201 16:16:09.012242 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:16:09 minikube localkube[3201]: E1201 16:16:09.012505 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:16:10 minikube localkube[3201]: I1201 16:16:10.010913 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:10 minikube localkube[3201]: I1201 16:16:10.011366 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:10 minikube localkube[3201]: E1201 16:16:10.012071 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:20 minikube localkube[3201]: I1201 16:16:20.009137 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:20 minikube localkube[3201]: E1201 16:16:20.012815 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:16:20 minikube localkube[3201]: E1201 16:16:20.013044 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:16:21 minikube localkube[3201]: I1201 16:16:21.007659 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:21 minikube localkube[3201]: I1201 16:16:21.008901 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:21 minikube localkube[3201]: I1201 16:16:21.009152 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:21 minikube localkube[3201]: E1201 16:16:21.009311 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:23 minikube localkube[3201]: I1201 16:16:23.011194 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:23 minikube localkube[3201]: I1201 16:16:23.011748 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:23 minikube localkube[3201]: E1201 16:16:23.011793 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:33 minikube localkube[3201]: I1201 16:16:33.007974 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:33 minikube localkube[3201]: I1201 16:16:33.008195 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:33 minikube localkube[3201]: I1201 16:16:33.008362 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:33 minikube localkube[3201]: E1201 16:16:33.008406 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:34 minikube localkube[3201]: I1201 16:16:34.008812 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:34 minikube localkube[3201]: E1201 16:16:34.011207 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:16:34 minikube localkube[3201]: E1201 16:16:34.011491 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:16:35 minikube localkube[3201]: I1201 16:16:35.009720 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:35 minikube localkube[3201]: I1201 16:16:35.010307 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:35 minikube localkube[3201]: E1201 16:16:35.010604 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:44 minikube localkube[3201]: I1201 16:16:44.014622 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:44 minikube localkube[3201]: I1201 16:16:44.014795 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:44 minikube localkube[3201]: I1201 16:16:44.014904 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:44 minikube localkube[3201]: E1201 16:16:44.014933 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:44 minikube localkube[3201]: E1201 16:16:44.298781 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:16:47 minikube localkube[3201]: I1201 16:16:47.007889 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:47 minikube localkube[3201]: E1201 16:16:47.012252 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:16:47 minikube localkube[3201]: E1201 16:16:47.012295 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:16:49 minikube localkube[3201]: I1201 16:16:49.009561 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:49 minikube localkube[3201]: I1201 16:16:49.009740 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:49 minikube localkube[3201]: E1201 16:16:49.009793 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:56 minikube localkube[3201]: I1201 16:16:56.009259 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:56 minikube localkube[3201]: I1201 16:16:56.009389 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:56 minikube localkube[3201]: I1201 16:16:56.009470 3201 kuberuntime_manager.go:748] Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:16:56 minikube localkube[3201]: E1201 16:16:56.009491 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:16:59 minikube localkube[3201]: I1201 16:16:59.007961 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:16:59 minikube localkube[3201]: E1201 16:16:59.011127 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:16:59 minikube localkube[3201]: E1201 16:16:59.011636 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:17:00 minikube localkube[3201]: I1201 16:17:00.009486 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:00 minikube localkube[3201]: I1201 16:17:00.009755 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:17:00 minikube localkube[3201]: E1201 16:17:00.009910 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:09 minikube localkube[3201]: I1201 16:17:09.009726 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:17:09 minikube localkube[3201]: I1201 16:17:09.009991 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:14 minikube localkube[3201]: I1201 16:17:14.010773 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:17:14 minikube localkube[3201]: I1201 16:17:14.013646 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:14 minikube localkube[3201]: I1201 16:17:14.014284 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:17:14 minikube localkube[3201]: E1201 16:17:14.014694 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:14 minikube localkube[3201]: E1201 16:17:14.016716 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:17:14 minikube localkube[3201]: E1201 16:17:14.017039 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:17:26 minikube localkube[3201]: I1201 16:17:26.010753 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:26 minikube localkube[3201]: I1201 16:17:26.011165 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:17:26 minikube localkube[3201]: E1201 16:17:26.011287 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:28 minikube localkube[3201]: I1201 16:17:28.008684 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:17:28 minikube localkube[3201]: E1201 16:17:28.012162 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:17:28 minikube localkube[3201]: E1201 16:17:28.012214 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:17:32 minikube localkube[3201]: I1201 16:17:32.458274 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:17:32 minikube localkube[3201]: I1201 16:17:32.458493 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:32 minikube localkube[3201]: I1201 16:17:32.458694 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:17:32 minikube localkube[3201]: E1201 16:17:32.458745 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:38 minikube localkube[3201]: I1201 16:17:38.168749 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:17:38 minikube localkube[3201]: I1201 16:17:38.168934 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:38 minikube localkube[3201]: I1201 16:17:38.169086 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:17:38 minikube localkube[3201]: E1201 16:17:38.169125 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:40 minikube localkube[3201]: I1201 16:17:40.010899 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:40 minikube localkube[3201]: I1201 16:17:40.011102 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:17:40 minikube localkube[3201]: E1201 16:17:40.011147 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
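[editor's note] The CrashLoopBackOff lines for cranky-zebra-redis and seed-rpush all report the 5m0s ceiling of the kubelet's per-container restart back-off, so by this point both containers have already failed enough times to be pinned at the cap. As a rough illustration only, assuming the common defaults of a 10s initial delay that doubles per failure and is capped at 5 minutes (these values are an assumption, not read from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed defaults: 10s initial back-off, doubling, capped at 5m.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 8; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Under those assumptions the wait reaches 5m0s after roughly six failed restarts and stays there, which is why every later sync of these pods prints the same "Back-off 5m0s" message.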
Dec 01 16:17:43 minikube localkube[3201]: I1201 16:17:43.008167 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:17:43 minikube localkube[3201]: E1201 16:17:43.014239 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:17:43 minikube localkube[3201]: E1201 16:17:43.014602 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:17:44 minikube localkube[3201]: E1201 16:17:44.299409 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
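[editor's note] The healthcheck.go error above is unrelated to the failing pods: the node healthz server has been handed the bind address "0", which Go's net package rejects because it contains no port. A tiny sketch of where that exact message comes from (the addresses here are illustrative; 0.0.0.0:10256 is just an example of a valid host:port):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, addr := range []string{"0", "0.0.0.0:10256"} {
            host, port, err := net.SplitHostPort(addr)
            if err != nil {
                // Prints: "0": address 0: missing port in address
                fmt.Printf("%q: %v\n", addr, err)
                continue
            }
            fmt.Printf("%q -> host=%q port=%q\n", addr, host, port)
        }
    }

So the message recurs every time the health check retries, until the healthz bind address is given in host:port form.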
Dec 01 16:17:52 minikube localkube[3201]: I1201 16:17:52.009718 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:17:52 minikube localkube[3201]: I1201 16:17:52.009831 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:52 minikube localkube[3201]: I1201 16:17:52.009903 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:17:52 minikube localkube[3201]: E1201 16:17:52.009924 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:53 minikube localkube[3201]: I1201 16:17:53.009180 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:53 minikube localkube[3201]: I1201 16:17:53.009272 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:17:53 minikube localkube[3201]: E1201 16:17:53.009297 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:17:58 minikube localkube[3201]: I1201 16:17:58.009290 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:17:58 minikube localkube[3201]: E1201 16:17:58.014623 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:17:58 minikube localkube[3201]: E1201 16:17:58.014700 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:18:05 minikube localkube[3201]: I1201 16:18:05.008494 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:18:05 minikube localkube[3201]: I1201 16:18:05.009819 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:05 minikube localkube[3201]: I1201 16:18:05.009992 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:18:05 minikube localkube[3201]: E1201 16:18:05.010036 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:07 minikube localkube[3201]: I1201 16:18:07.010225 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:07 minikube localkube[3201]: I1201 16:18:07.010814 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:18:07 minikube localkube[3201]: E1201 16:18:07.011117 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:13 minikube localkube[3201]: I1201 16:18:13.007916 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:18:13 minikube localkube[3201]: E1201 16:18:13.012431 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:18:13 minikube localkube[3201]: E1201 16:18:13.012514 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:18:20 minikube localkube[3201]: I1201 16:18:20.008252 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:18:20 minikube localkube[3201]: I1201 16:18:20.010379 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:20 minikube localkube[3201]: I1201 16:18:20.010848 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:18:20 minikube localkube[3201]: I1201 16:18:20.010178 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:20 minikube localkube[3201]: E1201 16:18:20.011282 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:20 minikube localkube[3201]: I1201 16:18:20.011672 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:18:20 minikube localkube[3201]: E1201 16:18:20.012951 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:26 minikube localkube[3201]: I1201 16:18:26.010191 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:18:26 minikube localkube[3201]: E1201 16:18:26.017802 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:18:26 minikube localkube[3201]: E1201 16:18:26.018222 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:18:31 minikube localkube[3201]: I1201 16:18:31.010502 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:31 minikube localkube[3201]: I1201 16:18:31.010951 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:18:31 minikube localkube[3201]: E1201 16:18:31.011166 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:33 minikube localkube[3201]: I1201 16:18:33.008177 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:18:33 minikube localkube[3201]: I1201 16:18:33.009610 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:33 minikube localkube[3201]: I1201 16:18:33.009935 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:18:33 minikube localkube[3201]: E1201 16:18:33.010149 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:38 minikube localkube[3201]: I1201 16:18:38.012962 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:18:38 minikube localkube[3201]: E1201 16:18:38.015312 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:18:38 minikube localkube[3201]: E1201 16:18:38.015344 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:18:44 minikube localkube[3201]: E1201 16:18:44.300252 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:18:46 minikube localkube[3201]: I1201 16:18:46.008622 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:46 minikube localkube[3201]: I1201 16:18:46.008887 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:18:46 minikube localkube[3201]: E1201 16:18:46.009135 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:48 minikube localkube[3201]: I1201 16:18:48.008521 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:18:48 minikube localkube[3201]: I1201 16:18:48.010695 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:48 minikube localkube[3201]: I1201 16:18:48.011263 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:18:48 minikube localkube[3201]: E1201 16:18:48.011563 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:18:50 minikube localkube[3201]: I1201 16:18:50.008068 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:18:50 minikube localkube[3201]: E1201 16:18:50.011209 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:18:50 minikube localkube[3201]: E1201 16:18:50.011243 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:19:00 minikube localkube[3201]: I1201 16:19:00.010364 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:00 minikube localkube[3201]: I1201 16:19:00.010969 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:00 minikube localkube[3201]: E1201 16:19:00.011375 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:01 minikube localkube[3201]: I1201 16:19:01.007951 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:01 minikube localkube[3201]: I1201 16:19:01.009036 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:01 minikube localkube[3201]: I1201 16:19:01.009226 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:01 minikube localkube[3201]: E1201 16:19:01.009273 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:02 minikube localkube[3201]: I1201 16:19:02.008355 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:02 minikube localkube[3201]: E1201 16:19:02.015329 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:19:02 minikube localkube[3201]: E1201 16:19:02.015405 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:19:13 minikube localkube[3201]: I1201 16:19:13.007841 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:13 minikube localkube[3201]: I1201 16:19:13.008658 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:13 minikube localkube[3201]: I1201 16:19:13.008756 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:13 minikube localkube[3201]: E1201 16:19:13.008776 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:13 minikube localkube[3201]: E1201 16:19:13.010270 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:19:13 minikube localkube[3201]: E1201 16:19:13.010461 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:19:14 minikube localkube[3201]: I1201 16:19:14.009847 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:14 minikube localkube[3201]: I1201 16:19:14.012076 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:14 minikube localkube[3201]: I1201 16:19:14.012555 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:14 minikube localkube[3201]: E1201 16:19:14.012857 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:24 minikube localkube[3201]: I1201 16:19:24.014028 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:24 minikube localkube[3201]: I1201 16:19:24.014273 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:24 minikube localkube[3201]: E1201 16:19:24.014312 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:25 minikube localkube[3201]: I1201 16:19:25.008228 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:25 minikube localkube[3201]: I1201 16:19:25.009977 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:25 minikube localkube[3201]: I1201 16:19:25.010377 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:25 minikube localkube[3201]: E1201 16:19:25.010680 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:28 minikube localkube[3201]: I1201 16:19:28.009097 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:28 minikube localkube[3201]: E1201 16:19:28.011802 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:19:28 minikube localkube[3201]: E1201 16:19:28.011836 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:19:32 minikube localkube[3201]: I1201 16:19:32.315385 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hipster-clam-postgresql", UID:"69f0e9a4-d6b3-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3268", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hipster-clam-postgresql-669bb67d95 to 1 | |
Dec 01 16:19:32 minikube localkube[3201]: E1201 16:19:32.317657 3201 factory.go:913] Error scheduling default hipster-clam-postgresql-669bb67d95-drr6g: PersistentVolumeClaim is not bound: "hipster-clam-postgresql"; retrying | |
Dec 01 16:19:32 minikube localkube[3201]: I1201 16:19:32.318479 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"hipster-clam-postgresql-669bb67d95-drr6g", UID:"69f39b37-d6b3-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"3271", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "hipster-clam-postgresql" | |
Dec 01 16:19:32 minikube localkube[3201]: I1201 16:19:32.318588 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hipster-clam-postgresql-669bb67d95", UID:"69f317ea-d6b3-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3270", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hipster-clam-postgresql-669bb67d95-drr6g | |
Dec 01 16:19:32 minikube localkube[3201]: E1201 16:19:32.338747 3201 factory.go:913] Error scheduling default hipster-clam-postgresql-669bb67d95-drr6g: PersistentVolumeClaim is not bound: "hipster-clam-postgresql"; retrying | |
Dec 01 16:19:32 minikube localkube[3201]: W1201 16:19:32.339042 3201 factory.go:928] Request for pod default/hipster-clam-postgresql-669bb67d95-drr6g already in flight, abandoning | |
Dec 01 16:19:32 minikube localkube[3201]: I1201 16:19:32.339063 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"hipster-clam-postgresql-669bb67d95-drr6g", UID:"69f39b37-d6b3-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"3274", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "hipster-clam-postgresql" | |
Dec 01 16:19:32 minikube localkube[3201]: W1201 16:19:32.372748 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:19:32 minikube localkube[3201]: W1201 16:19:32.373907 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:19:32 minikube localkube[3201]: W1201 16:19:32.374673 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:19:32 minikube localkube[3201]: W1201 16:19:32.385559 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:19:32 minikube localkube[3201]: W1201 16:19:32.387747 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:19:32 minikube localkube[3201]: W1201 16:19:32.388658 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:19:32 minikube localkube[3201]: E1201 16:19:32.559820 3201 upgradeaware.go:310] Error proxying data from client to backend: read tcp 192.168.99.100:8443->192.168.99.1:44580: read: connection reset by peer | |
Dec 01 16:19:33 minikube localkube[3201]: I1201 16:19:33.325202 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"hipster-clam-postgresql-669bb67d95-drr6g", UID:"69f39b37-d6b3-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"3274", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned hipster-clam-postgresql-669bb67d95-drr6g to minikube | |
Dec 01 16:19:33 minikube localkube[3201]: E1201 16:19:33.331940 3201 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs | |
Dec 01 16:19:33 minikube localkube[3201]: I1201 16:19:33.383955 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-69ede0d7-d6b3-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/69f39b37-d6b3-11e7-9596-080027aac058-pvc-69ede0d7-d6b3-11e7-9596-080027aac058") pod "hipster-clam-postgresql-669bb67d95-drr6g" (UID: "69f39b37-d6b3-11e7-9596-080027aac058") | |
Dec 01 16:19:33 minikube localkube[3201]: I1201 16:19:33.384235 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/69f39b37-d6b3-11e7-9596-080027aac058-default-token-ctrw6") pod "hipster-clam-postgresql-669bb67d95-drr6g" (UID: "69f39b37-d6b3-11e7-9596-080027aac058") | |
Dec 01 16:19:35 minikube localkube[3201]: I1201 16:19:35.009586 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:35 minikube localkube[3201]: I1201 16:19:35.009662 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:35 minikube localkube[3201]: E1201 16:19:35.009681 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:36 minikube localkube[3201]: store.index: compact 2921 | |
Dec 01 16:19:36 minikube localkube[3201]: finished scheduled compaction at 2921 (took 7.760999ms) | |
Dec 01 16:19:39 minikube localkube[3201]: I1201 16:19:39.008462 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:39 minikube localkube[3201]: I1201 16:19:39.009878 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:39 minikube localkube[3201]: I1201 16:19:39.011463 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:39 minikube localkube[3201]: I1201 16:19:39.011693 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:39 minikube localkube[3201]: E1201 16:19:39.011822 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:39 minikube localkube[3201]: E1201 16:19:39.012560 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:19:39 minikube localkube[3201]: E1201 16:19:39.012686 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:19:44 minikube localkube[3201]: E1201 16:19:44.300908 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:19:46 minikube localkube[3201]: I1201 16:19:46.009695 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:46 minikube localkube[3201]: I1201 16:19:46.010128 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:46 minikube localkube[3201]: E1201 16:19:46.010290 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:54 minikube localkube[3201]: I1201 16:19:54.012377 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:54 minikube localkube[3201]: I1201 16:19:54.013881 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:19:54 minikube localkube[3201]: I1201 16:19:54.013983 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:54 minikube localkube[3201]: I1201 16:19:54.014062 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:54 minikube localkube[3201]: E1201 16:19:54.014088 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:54 minikube localkube[3201]: E1201 16:19:54.016080 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:19:54 minikube localkube[3201]: E1201 16:19:54.016329 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:19:57 minikube localkube[3201]: I1201 16:19:57.010194 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:57 minikube localkube[3201]: I1201 16:19:57.010380 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:19:57 minikube localkube[3201]: E1201 16:19:57.010408 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:19:58 minikube localkube[3201]: W1201 16:19:58.688685 3201 conversion.go:110] Could not get instant cpu stats: different number of cpus | |
Dec 01 16:20:05 minikube localkube[3201]: I1201 16:20:05.008065 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:20:05 minikube localkube[3201]: E1201 16:20:05.010752 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:20:05 minikube localkube[3201]: E1201 16:20:05.010945 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:20:08 minikube localkube[3201]: I1201 16:20:08.009128 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI} cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:20:08 minikube localkube[3201]: I1201 16:20:08.009990 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:08 minikube localkube[3201]: I1201 16:20:08.010174 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:20:08 minikube localkube[3201]: E1201 16:20:08.010284 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:09 minikube localkube[3201]: I1201 16:20:09.010172 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:09 minikube localkube[3201]: I1201 16:20:09.010250 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:20:09 minikube localkube[3201]: E1201 16:20:09.010271 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:12 minikube localkube[3201]: E1201 16:20:12.286095 3201 upgradeaware.go:310] Error proxying data from client to backend: read tcp 192.168.99.100:8443->192.168.99.1:44788: read: connection reset by peer | |
Dec 01 16:20:18 minikube localkube[3201]: I1201 16:20:18.012229 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-postgresql Image:mdillon/postgis:9.6-alpine Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:api ValueFrom:nil} {Name:PGUSER Value:api ValueFrom:nil} {Name:POSTGRES_DB Value:api ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cranky-zebra-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:20:18 minikube localkube[3201]: E1201 16:20:18.015706 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:20:18 minikube localkube[3201]: E1201 16:20:18.016085 3201 pod_workers.go:182] Error syncing pod f0f4db68-d6b1-11e7-9596-080027aac058 ("cranky-zebra-postgresql-558bd4b587-9zxnt_default(f0f4db68-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:20:20 minikube localkube[3201]: I1201 16:20:20.453179 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hipster-clam-postgresql", UID:"69f0e9a4-d6b3-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3357", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set hipster-clam-postgresql-669bb67d95 to 0 | |
Dec 01 16:20:20 minikube localkube[3201]: I1201 16:20:20.462202 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hipster-clam-postgresql-669bb67d95", UID:"69f317ea-d6b3-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: hipster-clam-postgresql-669bb67d95-drr6g | |
Dec 01 16:20:20 minikube localkube[3201]: I1201 16:20:20.619350 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/69f39b37-d6b3-11e7-9596-080027aac058-default-token-ctrw6") pod "69f39b37-d6b3-11e7-9596-080027aac058" (UID: "69f39b37-d6b3-11e7-9596-080027aac058") | |
Dec 01 16:20:20 minikube localkube[3201]: I1201 16:20:20.619657 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "data" (UniqueName: "kubernetes.io/host-path/69f39b37-d6b3-11e7-9596-080027aac058-pvc-69ede0d7-d6b3-11e7-9596-080027aac058") pod "69f39b37-d6b3-11e7-9596-080027aac058" (UID: "69f39b37-d6b3-11e7-9596-080027aac058") | |
Dec 01 16:20:20 minikube localkube[3201]: I1201 16:20:20.619881 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f39b37-d6b3-11e7-9596-080027aac058-pvc-69ede0d7-d6b3-11e7-9596-080027aac058" (OuterVolumeSpecName: "data") pod "69f39b37-d6b3-11e7-9596-080027aac058" (UID: "69f39b37-d6b3-11e7-9596-080027aac058"). InnerVolumeSpecName "pvc-69ede0d7-d6b3-11e7-9596-080027aac058". PluginName "kubernetes.io/host-path", VolumeGidValue "" | |
Dec 01 16:20:20 minikube localkube[3201]: I1201 16:20:20.634575 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f39b37-d6b3-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "69f39b37-d6b3-11e7-9596-080027aac058" (UID: "69f39b37-d6b3-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:20:20 minikube localkube[3201]: I1201 16:20:20.720124 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/69f39b37-d6b3-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:20:20 minikube localkube[3201]: I1201 16:20:20.722087 3201 reconciler.go:290] Volume detached for volume "pvc-69ede0d7-d6b3-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/69f39b37-d6b3-11e7-9596-080027aac058-pvc-69ede0d7-d6b3-11e7-9596-080027aac058") on node "minikube" DevicePath "" | |
Dec 01 16:20:21 minikube localkube[3201]: I1201 16:20:21.007792 3201 kuberuntime_manager.go:499] Container {Name:cranky-zebra-redis Image:bitnami/redis:4.0.2-r1 Command:[] Args:[] WorkingDir: Ports:[{Name:redis HostPort:0 ContainerPort:6379 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:ALLOW_EMPTY_PASSWORD Value:yes ValueFrom:nil}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:redis-data ReadOnly:false MountPath:/bitnami SubPath: MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[redis-cli ping],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:20:21 minikube localkube[3201]: I1201 16:20:21.009180 3201 kuberuntime_manager.go:738] checking backoff for container "cranky-zebra-redis" in pod "cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:21 minikube localkube[3201]: I1201 16:20:21.009369 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:20:21 minikube localkube[3201]: E1201 16:20:21.009488 3201 pod_workers.go:182] Error syncing pod f0f77e98-d6b1-11e7-9596-080027aac058 ("cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "cranky-zebra-redis" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=cranky-zebra-redis pod=cranky-zebra-redis-7c787d57b9-m794n_default(f0f77e98-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:21 minikube localkube[3201]: I1201 16:20:21.009987 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:21 minikube localkube[3201]: I1201 16:20:21.010042 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:20:21 minikube localkube[3201]: E1201 16:20:21.010057 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:22 minikube localkube[3201]: W1201 16:20:22.014981 3201 pod_container_deletor.go:77] Container "56ce5b407551cb6ac371c5a912535e1d92cbfcdf5e2bd77280718d1c1c9ef507" not found in pod's containers | |
Dec 01 16:20:24 minikube localkube[3201]: I1201 16:20:24.402451 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-elixir", UID:"f0f1aadd-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3406", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set cranky-zebra-elixir-dcb898cb8 to 0 | |
Dec 01 16:20:24 minikube localkube[3201]: I1201 16:20:24.417108 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-elixir-dcb898cb8", UID:"f0f2c167-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-elixir-dcb898cb8-lslwp | |
Dec 01 16:20:24 minikube localkube[3201]: I1201 16:20:24.417323 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-elixir-dcb898cb8", UID:"f0f2c167-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-elixir-dcb898cb8-qkh49 | |
Dec 01 16:20:27 minikube localkube[3201]: I1201 16:20:27.466260 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-postgresql", UID:"f0f290c3-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3426", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set cranky-zebra-postgresql-558bd4b587 to 0 | |
Dec 01 16:20:27 minikube localkube[3201]: I1201 16:20:27.473084 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-postgresql-558bd4b587", UID:"f0f354da-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3427", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-postgresql-558bd4b587-9zxnt | |
Dec 01 16:20:28 minikube localkube[3201]: W1201 16:20:28.089587 3201 pod_container_deletor.go:77] Container "16c6911f437ac8639af2143c2b471c58a314b8cc06cfee019834c882af75275f" not found in pod's containers | |
Dec 01 16:20:28 minikube localkube[3201]: I1201 16:20:28.573796 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f4db68-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f0f4db68-d6b1-11e7-9596-080027aac058" (UID: "f0f4db68-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:28 minikube localkube[3201]: I1201 16:20:28.573835 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "data" (UniqueName: "kubernetes.io/host-path/f0f4db68-d6b1-11e7-9596-080027aac058-pvc-f0db5074-d6b1-11e7-9596-080027aac058") pod "f0f4db68-d6b1-11e7-9596-080027aac058" (UID: "f0f4db68-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:28 minikube localkube[3201]: I1201 16:20:28.573887 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f4db68-d6b1-11e7-9596-080027aac058-pvc-f0db5074-d6b1-11e7-9596-080027aac058" (OuterVolumeSpecName: "data") pod "f0f4db68-d6b1-11e7-9596-080027aac058" (UID: "f0f4db68-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "pvc-f0db5074-d6b1-11e7-9596-080027aac058". PluginName "kubernetes.io/host-path", VolumeGidValue "" | |
Dec 01 16:20:28 minikube localkube[3201]: I1201 16:20:28.635033 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f4db68-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f0f4db68-d6b1-11e7-9596-080027aac058" (UID: "f0f4db68-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:20:28 minikube localkube[3201]: I1201 16:20:28.674308 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f4db68-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:20:28 minikube localkube[3201]: I1201 16:20:28.674331 3201 reconciler.go:290] Volume detached for volume "pvc-f0db5074-d6b1-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/f0f4db68-d6b1-11e7-9596-080027aac058-pvc-f0db5074-d6b1-11e7-9596-080027aac058") on node "minikube" DevicePath "" | |
Dec 01 16:20:30 minikube localkube[3201]: I1201 16:20:30.637269 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-redis", UID:"f0f31f37-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3442", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set cranky-zebra-redis-7c787d57b9 to 0 | |
Dec 01 16:20:30 minikube localkube[3201]: I1201 16:20:30.788180 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-redis-7c787d57b9", UID:"f0f4c7bf-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3443", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-redis-7c787d57b9-m794n | |
Dec 01 16:20:32 minikube localkube[3201]: W1201 16:20:32.144132 3201 pod_container_deletor.go:77] Container "2ede9b2783389a5773ced78253da040ee3c14f80df67452c6df0e65689090f13" not found in pod's containers | |
Dec 01 16:20:32 minikube localkube[3201]: I1201 16:20:32.592434 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f77e98-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f0f77e98-d6b1-11e7-9596-080027aac058" (UID: "f0f77e98-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:32 minikube localkube[3201]: I1201 16:20:32.592518 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "redis-data" (UniqueName: "kubernetes.io/host-path/f0f77e98-d6b1-11e7-9596-080027aac058-pvc-f0db8cdd-d6b1-11e7-9596-080027aac058") pod "f0f77e98-d6b1-11e7-9596-080027aac058" (UID: "f0f77e98-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:32 minikube localkube[3201]: I1201 16:20:32.592575 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0f77e98-d6b1-11e7-9596-080027aac058-pvc-f0db8cdd-d6b1-11e7-9596-080027aac058" (OuterVolumeSpecName: "redis-data") pod "f0f77e98-d6b1-11e7-9596-080027aac058" (UID: "f0f77e98-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "pvc-f0db8cdd-d6b1-11e7-9596-080027aac058". PluginName "kubernetes.io/host-path", VolumeGidValue "" | |
Dec 01 16:20:32 minikube localkube[3201]: I1201 16:20:32.608198 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f77e98-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f0f77e98-d6b1-11e7-9596-080027aac058" (UID: "f0f77e98-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:20:32 minikube localkube[3201]: I1201 16:20:32.692961 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f77e98-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:20:32 minikube localkube[3201]: I1201 16:20:32.692990 3201 reconciler.go:290] Volume detached for volume "pvc-f0db8cdd-d6b1-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/f0f77e98-d6b1-11e7-9596-080027aac058-pvc-f0db8cdd-d6b1-11e7-9596-080027aac058") on node "minikube" DevicePath "" | |
Dec 01 16:20:33 minikube localkube[3201]: I1201 16:20:33.009054 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:33 minikube localkube[3201]: I1201 16:20:33.858670 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-ruby-rails", UID:"f0f48ba7-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3461", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set cranky-zebra-ruby-rails-6899b9c5c to 0 | |
Dec 01 16:20:34 minikube localkube[3201]: I1201 16:20:34.098181 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-rails-6899b9c5c", UID:"f0f6c8cb-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-ruby-rails-6899b9c5c-956tr | |
Dec 01 16:20:34 minikube localkube[3201]: I1201 16:20:34.098455 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-rails-6899b9c5c", UID:"f0f6c8cb-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-ruby-rails-6899b9c5c-d2l67 | |
Dec 01 16:20:34 minikube localkube[3201]: E1201 16:20:34.761088 3201 remote_runtime.go:278] ContainerStatus "62b587c26d0d6d18827fa22240c65230f83bd0b0c5a519083d1bfb41917cf339" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 62b587c26d0d6d18827fa22240c65230f83bd0b0c5a519083d1bfb41917cf339 | |
Dec 01 16:20:34 minikube localkube[3201]: E1201 16:20:34.761188 3201 kuberuntime_container.go:659] failed to remove pod init container "seed-rpush": failed to get container status "62b587c26d0d6d18827fa22240c65230f83bd0b0c5a519083d1bfb41917cf339": rpc error: code = Unknown desc = Error: No such container: 62b587c26d0d6d18827fa22240c65230f83bd0b0c5a519083d1bfb41917cf339; Skipping pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:34 minikube localkube[3201]: I1201 16:20:34.761291 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:34 minikube localkube[3201]: I1201 16:20:34.761412 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:20:34 minikube localkube[3201]: E1201 16:20:34.761446 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:35 minikube localkube[3201]: I1201 16:20:35.902943 3201 kuberuntime_manager.go:738] checking backoff for container "seed-rpush" in pod "cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:35 minikube localkube[3201]: I1201 16:20:35.903028 3201 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058) | |
Dec 01 16:20:35 minikube localkube[3201]: E1201 16:20:35.903048 3201 pod_workers.go:182] Error syncing pod f1040c87-d6b1-11e7-9596-080027aac058 ("cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "seed-rpush" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=seed-rpush pod=cranky-zebra-ruby-rpush-f6dd7985b-wll6p_default(f1040c87-d6b1-11e7-9596-080027aac058)" | |
Dec 01 16:20:36 minikube localkube[3201]: W1201 16:20:36.752961 3201 kubelet_pods.go:130] Mount cannot be satisfied for container "hipster-clam-postgresql", because the volume is missing or the volume mounter is nil: {"data" %!q(bool=false) "/var/lib/postgresql/data/pgdata" "postgresql-db" %!q(*v1.MountPropagationMode=<nil>)} | |
Dec 01 16:20:36 minikube localkube[3201]: W1201 16:20:36.752992 3201 kubelet_pods.go:130] Mount cannot be satisfied for container "hipster-clam-postgresql", because the volume is missing or the volume mounter is nil: {"default-token-ctrw6" %!q(bool=true) "/var/run/secrets/kubernetes.io/serviceaccount" "" %!q(*v1.MountPropagationMode=<nil>)} | |
Dec 01 16:20:36 minikube localkube[3201]: E1201 16:20:36.753028 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: open /var/lib/kubelet/pods/69f39b37-d6b3-11e7-9596-080027aac058/etc-hosts: no such file or directory | |
Dec 01 16:20:36 minikube localkube[3201]: E1201 16:20:36.753049 3201 pod_workers.go:182] Error syncing pod 69f39b37-d6b3-11e7-9596-080027aac058 ("hipster-clam-postgresql-669bb67d95-drr6g_default(69f39b37-d6b3-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "hipster-clam-postgresql" with CreateContainerConfigError: "open /var/lib/kubelet/pods/69f39b37-d6b3-11e7-9596-080027aac058/etc-hosts: no such file or directory" | |
Dec 01 16:20:36 minikube localkube[3201]: I1201 16:20:36.912936 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-ruby-rpush", UID:"f0f6e1bc-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3484", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set cranky-zebra-ruby-rpush-f6dd7985b to 0 | |
Dec 01 16:20:36 minikube localkube[3201]: I1201 16:20:36.916450 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-rpush-f6dd7985b", UID:"f100fbc9-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-ruby-rpush-f6dd7985b-wll6p | |
Dec 01 16:20:38 minikube localkube[3201]: W1201 16:20:38.660664 3201 pod_container_deletor.go:77] Container "c8ff6a38601d1564eb0158d514f75a52684e2c63e6cee702d7a89dd1fd21ec08" not found in pod's containers | |
Dec 01 16:20:38 minikube localkube[3201]: I1201 16:20:38.727922 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f1040c87-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f1040c87-d6b1-11e7-9596-080027aac058" (UID: "f1040c87-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:38 minikube localkube[3201]: I1201 16:20:38.728414 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "cranky-zebra-ruby-apns" (UniqueName: "kubernetes.io/configmap/f1040c87-d6b1-11e7-9596-080027aac058-cranky-zebra-ruby-apns") pod "f1040c87-d6b1-11e7-9596-080027aac058" (UID: "f1040c87-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:38 minikube localkube[3201]: I1201 16:20:38.729092 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1040c87-d6b1-11e7-9596-080027aac058-cranky-zebra-ruby-apns" (OuterVolumeSpecName: "cranky-zebra-ruby-apns") pod "f1040c87-d6b1-11e7-9596-080027aac058" (UID: "f1040c87-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "cranky-zebra-ruby-apns". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:20:38 minikube localkube[3201]: I1201 16:20:38.739858 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1040c87-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f1040c87-d6b1-11e7-9596-080027aac058" (UID: "f1040c87-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:20:38 minikube localkube[3201]: I1201 16:20:38.828913 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f1040c87-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:20:38 minikube localkube[3201]: I1201 16:20:38.829284 3201 reconciler.go:290] Volume detached for volume "cranky-zebra-ruby-apns" (UniqueName: "kubernetes.io/configmap/f1040c87-d6b1-11e7-9596-080027aac058-cranky-zebra-ruby-apns") on node "minikube" DevicePath "" | |
Dec 01 16:20:39 minikube localkube[3201]: I1201 16:20:39.997728 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"cranky-zebra-ruby-sidekiq", UID:"f0fdcfc3-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3500", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set cranky-zebra-ruby-sidekiq-5cddd59496 to 0 | |
Dec 01 16:20:40 minikube localkube[3201]: I1201 16:20:40.008950 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-sidekiq-5cddd59496", UID:"f1030c6a-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3501", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-ruby-sidekiq-5cddd59496-hg7fn | |
Dec 01 16:20:40 minikube localkube[3201]: I1201 16:20:40.013153 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"cranky-zebra-ruby-sidekiq-5cddd59496", UID:"f1030c6a-d6b1-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3501", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: cranky-zebra-ruby-sidekiq-5cddd59496-n9zqh | |
Dec 01 16:20:44 minikube localkube[3201]: E1201 16:20:44.301761 3201 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address | |
Dec 01 16:20:49 minikube localkube[3201]: W1201 16:20:49.067659 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:20:49 minikube localkube[3201]: W1201 16:20:49.068577 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:20:49 minikube localkube[3201]: W1201 16:20:49.069578 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:20:49 minikube localkube[3201]: I1201 16:20:49.074525 3201 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"mollified-mastiff-postgresql", UID:"97b0a1f2-d6b3-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3545", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set mollified-mastiff-postgresql-7845df87fc to 1 | |
Dec 01 16:20:49 minikube localkube[3201]: E1201 16:20:49.080766 3201 factory.go:913] Error scheduling default mollified-mastiff-postgresql-7845df87fc-s9qm8: PersistentVolumeClaim is not bound: "mollified-mastiff-postgresql"; retrying | |
Dec 01 16:20:49 minikube localkube[3201]: I1201 16:20:49.081258 3201 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"mollified-mastiff-postgresql-7845df87fc", UID:"97b26587-d6b3-11e7-9596-080027aac058", APIVersion:"extensions", ResourceVersion:"3547", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mollified-mastiff-postgresql-7845df87fc-s9qm8 | |
Dec 01 16:20:49 minikube localkube[3201]: I1201 16:20:49.081611 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mollified-mastiff-postgresql-7845df87fc-s9qm8", UID:"97b40d5c-d6b3-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"3550", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "mollified-mastiff-postgresql" | |
Dec 01 16:20:49 minikube localkube[3201]: W1201 16:20:49.091223 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:20:49 minikube localkube[3201]: W1201 16:20:49.092191 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:20:49 minikube localkube[3201]: W1201 16:20:49.095568 3201 watcher.go:341] Fast watcher, slow processing. Number of buffered events: 100.Probably caused by slow decoding, user not receiving fast, or other processing logic | |
Dec 01 16:20:49 minikube localkube[3201]: E1201 16:20:49.135833 3201 factory.go:913] Error scheduling default mollified-mastiff-postgresql-7845df87fc-s9qm8: PersistentVolumeClaim is not bound: "mollified-mastiff-postgresql"; retrying | |
Dec 01 16:20:49 minikube localkube[3201]: I1201 16:20:49.136046 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mollified-mastiff-postgresql-7845df87fc-s9qm8", UID:"97b40d5c-d6b3-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"3553", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' PersistentVolumeClaim is not bound: "mollified-mastiff-postgresql" | |
Dec 01 16:20:49 minikube localkube[3201]: W1201 16:20:49.136947 3201 factory.go:928] Request for pod default/mollified-mastiff-postgresql-7845df87fc-s9qm8 already in flight, abandoning | |
Dec 01 16:20:49 minikube localkube[3201]: E1201 16:20:49.271226 3201 upgradeaware.go:310] Error proxying data from client to backend: read tcp 192.168.99.100:8443->192.168.99.1:44926: read: connection reset by peer | |
Dec 01 16:20:50 minikube localkube[3201]: I1201 16:20:50.088171 3201 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mollified-mastiff-postgresql-7845df87fc-s9qm8", UID:"97b40d5c-d6b3-11e7-9596-080027aac058", APIVersion:"v1", ResourceVersion:"3553", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned mollified-mastiff-postgresql-7845df87fc-s9qm8 to minikube | |
Dec 01 16:20:50 minikube localkube[3201]: E1201 16:20:50.091687 3201 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs | |
Dec 01 16:20:50 minikube localkube[3201]: I1201 16:20:50.176514 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/97b40d5c-d6b3-11e7-9596-080027aac058-default-token-ctrw6") pod "mollified-mastiff-postgresql-7845df87fc-s9qm8" (UID: "97b40d5c-d6b3-11e7-9596-080027aac058") | |
Dec 01 16:20:50 minikube localkube[3201]: I1201 16:20:50.176567 3201 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-97abd9de-d6b3-11e7-9596-080027aac058" (UniqueName: "kubernetes.io/host-path/97b40d5c-d6b3-11e7-9596-080027aac058-pvc-97abd9de-d6b3-11e7-9596-080027aac058") pod "mollified-mastiff-postgresql-7845df87fc-s9qm8" (UID: "97b40d5c-d6b3-11e7-9596-080027aac058") | |
Dec 01 16:20:50 minikube localkube[3201]: E1201 16:20:50.631491 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:20:50 minikube localkube[3201]: E1201 16:20:50.631542 3201 pod_workers.go:182] Error syncing pod 97b40d5c-d6b3-11e7-9596-080027aac058 ("mollified-mastiff-postgresql-7845df87fc-s9qm8_default(97b40d5c-d6b3-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "mollified-mastiff-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:20:51 minikube localkube[3201]: I1201 16:20:51.080890 3201 kuberuntime_manager.go:499] Container {Name:mollified-mastiff-postgresql Image:postgres:9.6.2 Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:postgres ValueFrom:nil} {Name:PGUSER Value:postgres ValueFrom:nil} {Name:POSTGRES_DB Value: ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:mollified-mastiff-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
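
[Editor's note] The struct dumped in the kuberuntime_manager.go:499 entry above is the kubelet's in-memory view of the PostgreSQL container spec. As a readability aid only, here is a hedged sketch of roughly the same spec rebuilt with the k8s.io/api core/v1 Go types; the names and values are taken from the log line, the probes are omitted, and this is an illustrative reconstruction rather than the chart's actual source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Illustrative reconstruction of the container spec logged above.
	container := corev1.Container{
		Name:  "mollified-mastiff-postgresql",
		Image: "postgres:9.6.2",
		Ports: []corev1.ContainerPort{
			{Name: "postgresql", ContainerPort: 5432, Protocol: corev1.ProtocolTCP},
		},
		Env: []corev1.EnvVar{
			{Name: "POSTGRES_USER", Value: "postgres"},
			{Name: "PGUSER", Value: "postgres"},
			{Name: "PGDATA", Value: "/var/lib/postgresql/data/pgdata"},
			{Name: "POSTGRES_PASSWORD", ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "mollified-mastiff-postgresql"},
					Key:                  "postgres-password",
				},
			}},
			{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "status.podIP"},
			}},
		},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("100m"),
				corev1.ResourceMemory: resource.MustParse("256Mi"), // 268435456 bytes in the log
			},
		},
		VolumeMounts: []corev1.VolumeMount{
			// "data" is the volume backed by the pvc-97abd9de-... claim referenced above.
			{Name: "data", MountPath: "/var/lib/postgresql/data/pgdata", SubPath: "postgresql-db"},
			{Name: "default-token-ctrw6", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
	}
	fmt.Printf("%+v\n", container)
}
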
Dec 01 16:20:51 minikube localkube[3201]: E1201 16:20:51.087949 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:20:51 minikube localkube[3201]: E1201 16:20:51.088301 3201 pod_workers.go:182] Error syncing pod 97b40d5c-d6b3-11e7-9596-080027aac058 ("mollified-mastiff-postgresql-7845df87fc-s9qm8_default(97b40d5c-d6b3-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "mollified-mastiff-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:20:55 minikube localkube[3201]: E1201 16:20:55.765558 3201 kuberuntime_container.go:66] Can't make a ref to pod "cranky-zebra-elixir-dcb898cb8-lslwp_default(f0f869e2-d6b1-11e7-9596-080027aac058)", container init-postgres: selfLink was empty, can't make reference | |
Dec 01 16:20:55 minikube localkube[3201]: E1201 16:20:55.779914 3201 kuberuntime_container.go:66] Can't make a ref to pod "cranky-zebra-elixir-dcb898cb8-qkh49_default(f0f5f2cb-d6b1-11e7-9596-080027aac058)", container init-postgres: selfLink was empty, can't make reference | |
Dec 01 16:20:56 minikube localkube[3201]: W1201 16:20:56.867400 3201 pod_container_deletor.go:77] Container "c945b8b00a183a67aaa29ce1bcd3e55e80126c5a52e2bf239613b8e44a8ac444" not found in pod's containers | |
Dec 01 16:20:56 minikube localkube[3201]: W1201 16:20:56.871163 3201 pod_container_deletor.go:77] Container "753331efd516713d6783d64be09c57240a411e401943f128d6c510015742b770" not found in pod's containers | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.907708 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f869e2-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f0f869e2-d6b1-11e7-9596-080027aac058" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.908187 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "cranky-zebra-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/f0f5f2cb-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-ejabberd") pod "f0f5f2cb-d6b1-11e7-9596-080027aac058" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.908613 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "cranky-zebra-elixir-apns" (UniqueName: "kubernetes.io/configmap/f0f869e2-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-apns") pod "f0f869e2-d6b1-11e7-9596-080027aac058" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.908900 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "cranky-zebra-elixir-apns" (UniqueName: "kubernetes.io/configmap/f0f5f2cb-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-apns") pod "f0f5f2cb-d6b1-11e7-9596-080027aac058" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.909138 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "cranky-zebra-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/f0f869e2-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-ejabberd") pod "f0f869e2-d6b1-11e7-9596-080027aac058" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.910960 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f5f2cb-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f0f5f2cb-d6b1-11e7-9596-080027aac058" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.909881 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f869e2-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-apns" (OuterVolumeSpecName: "cranky-zebra-elixir-apns") pod "f0f869e2-d6b1-11e7-9596-080027aac058" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "cranky-zebra-elixir-apns". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.910231 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f5f2cb-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-ejabberd" (OuterVolumeSpecName: "cranky-zebra-elixir-ejabberd") pod "f0f5f2cb-d6b1-11e7-9596-080027aac058" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "cranky-zebra-elixir-ejabberd". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.910576 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f5f2cb-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-apns" (OuterVolumeSpecName: "cranky-zebra-elixir-apns") pod "f0f5f2cb-d6b1-11e7-9596-080027aac058" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "cranky-zebra-elixir-apns". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.910882 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f869e2-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-ejabberd" (OuterVolumeSpecName: "cranky-zebra-elixir-ejabberd") pod "f0f869e2-d6b1-11e7-9596-080027aac058" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "cranky-zebra-elixir-ejabberd". PluginName "kubernetes.io/configmap", VolumeGidValue "" | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.920999 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f5f2cb-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f0f5f2cb-d6b1-11e7-9596-080027aac058" (UID: "f0f5f2cb-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:20:56 minikube localkube[3201]: I1201 16:20:56.922777 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f869e2-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f0f869e2-d6b1-11e7-9596-080027aac058" (UID: "f0f869e2-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:20:57 minikube localkube[3201]: I1201 16:20:57.011640 3201 reconciler.go:290] Volume detached for volume "cranky-zebra-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/f0f5f2cb-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-ejabberd") on node "minikube" DevicePath "" | |
Dec 01 16:20:57 minikube localkube[3201]: I1201 16:20:57.012171 3201 reconciler.go:290] Volume detached for volume "cranky-zebra-elixir-apns" (UniqueName: "kubernetes.io/configmap/f0f869e2-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-apns") on node "minikube" DevicePath "" | |
Dec 01 16:20:57 minikube localkube[3201]: I1201 16:20:57.012472 3201 reconciler.go:290] Volume detached for volume "cranky-zebra-elixir-ejabberd" (UniqueName: "kubernetes.io/configmap/f0f869e2-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-ejabberd") on node "minikube" DevicePath "" | |
Dec 01 16:20:57 minikube localkube[3201]: I1201 16:20:57.012665 3201 reconciler.go:290] Volume detached for volume "cranky-zebra-elixir-apns" (UniqueName: "kubernetes.io/configmap/f0f5f2cb-d6b1-11e7-9596-080027aac058-cranky-zebra-elixir-apns") on node "minikube" DevicePath "" | |
Dec 01 16:20:57 minikube localkube[3201]: I1201 16:20:57.012852 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f5f2cb-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:20:57 minikube localkube[3201]: I1201 16:20:57.013149 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f0f869e2-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:21:04 minikube localkube[3201]: I1201 16:21:04.009419 3201 kuberuntime_manager.go:499] Container {Name:mollified-mastiff-postgresql Image:postgres:9.6.2 Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:postgres ValueFrom:nil} {Name:PGUSER Value:postgres ValueFrom:nil} {Name:POSTGRES_DB Value: ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:mollified-mastiff-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:21:04 minikube localkube[3201]: E1201 16:21:04.013963 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:21:04 minikube localkube[3201]: E1201 16:21:04.014048 3201 pod_workers.go:182] Error syncing pod 97b40d5c-d6b3-11e7-9596-080027aac058 ("mollified-mastiff-postgresql-7845df87fc-s9qm8_default(97b40d5c-d6b3-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "mollified-mastiff-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:21:04 minikube localkube[3201]: W1201 16:21:04.980562 3201 pod_container_deletor.go:77] Container "a027d0630c84f32ea9fa33ed0c7667139a8fc35db7d48d3e267551408f61fb45" not found in pod's containers | |
Dec 01 16:21:04 minikube localkube[3201]: W1201 16:21:04.995543 3201 pod_container_deletor.go:77] Container "363e2fabeb7c8541c83fc73320038428a3bb3182e461422dedd46fce82c9aa13" not found in pod's containers | |
Dec 01 16:21:05 minikube localkube[3201]: I1201 16:21:05.154107 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f10a96ad-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f10a96ad-d6b1-11e7-9596-080027aac058" (UID: "f10a96ad-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:21:05 minikube localkube[3201]: I1201 16:21:05.154567 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f100e4ad-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f100e4ad-d6b1-11e7-9596-080027aac058" (UID: "f100e4ad-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:21:05 minikube localkube[3201]: I1201 16:21:05.164799 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f100e4ad-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f100e4ad-d6b1-11e7-9596-080027aac058" (UID: "f100e4ad-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:21:05 minikube localkube[3201]: I1201 16:21:05.165046 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10a96ad-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f10a96ad-d6b1-11e7-9596-080027aac058" (UID: "f10a96ad-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:21:05 minikube localkube[3201]: I1201 16:21:05.255988 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f100e4ad-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:21:05 minikube localkube[3201]: I1201 16:21:05.256085 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f10a96ad-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:21:11 minikube localkube[3201]: W1201 16:21:11.068586 3201 pod_container_deletor.go:77] Container "e4f1736aec2b3d7ed5fd4479109bda665308a20569085b2b9ccc58fbab84827f" not found in pod's containers | |
Dec 01 16:21:11 minikube localkube[3201]: W1201 16:21:11.083248 3201 pod_container_deletor.go:77] Container "86ea4f6d5400a53f913a898fe07a743f00c3775588228e4966b26004f85d9341" not found in pod's containers | |
Dec 01 16:21:11 minikube localkube[3201]: I1201 16:21:11.178758 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f1141392-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f1141392-d6b1-11e7-9596-080027aac058" (UID: "f1141392-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:21:11 minikube localkube[3201]: I1201 16:21:11.178820 3201 reconciler.go:186] operationExecutor.UnmountVolume started for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f10e9405-d6b1-11e7-9596-080027aac058-default-token-ctrw6") pod "f10e9405-d6b1-11e7-9596-080027aac058" (UID: "f10e9405-d6b1-11e7-9596-080027aac058") | |
Dec 01 16:21:11 minikube localkube[3201]: I1201 16:21:11.188908 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10e9405-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f10e9405-d6b1-11e7-9596-080027aac058" (UID: "f10e9405-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:21:11 minikube localkube[3201]: I1201 16:21:11.189012 3201 operation_generator.go:527] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1141392-d6b1-11e7-9596-080027aac058-default-token-ctrw6" (OuterVolumeSpecName: "default-token-ctrw6") pod "f1141392-d6b1-11e7-9596-080027aac058" (UID: "f1141392-d6b1-11e7-9596-080027aac058"). InnerVolumeSpecName "default-token-ctrw6". PluginName "kubernetes.io/secret", VolumeGidValue "" | |
Dec 01 16:21:11 minikube localkube[3201]: I1201 16:21:11.279004 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f10e9405-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:21:11 minikube localkube[3201]: I1201 16:21:11.279409 3201 reconciler.go:290] Volume detached for volume "default-token-ctrw6" (UniqueName: "kubernetes.io/secret/f1141392-d6b1-11e7-9596-080027aac058-default-token-ctrw6") on node "minikube" DevicePath "" | |
Dec 01 16:21:11 minikube localkube[3201]: E1201 16:21:11.722876 3201 kuberuntime_container.go:66] Can't make a ref to pod "cranky-zebra-ruby-sidekiq-5cddd59496-n9zqh_default(f10e9405-d6b1-11e7-9596-080027aac058)", container ruby: selfLink was empty, can't make reference | |
Dec 01 16:21:11 minikube localkube[3201]: E1201 16:21:11.723266 3201 kuberuntime_container.go:66] Can't make a ref to pod "cranky-zebra-ruby-sidekiq-5cddd59496-hg7fn_default(f1141392-d6b1-11e7-9596-080027aac058)", container ruby: selfLink was empty, can't make reference | |
Dec 01 16:21:17 minikube localkube[3201]: I1201 16:21:17.007561 3201 kuberuntime_manager.go:499] Container {Name:mollified-mastiff-postgresql Image:postgres:9.6.2 Command:[] Args:[] WorkingDir: Ports:[{Name:postgresql HostPort:0 ContainerPort:5432 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:POSTGRES_USER Value:postgres ValueFrom:nil} {Name:PGUSER Value:postgres ValueFrom:nil} {Name:POSTGRES_DB Value: ValueFrom:nil} {Name:POSTGRES_INITDB_ARGS Value: ValueFrom:nil} {Name:PGDATA Value:/var/lib/postgresql/data/pgdata ValueFrom:nil} {Name:POSTGRES_PASSWORD Value: ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:mollified-mastiff-postgresql,},Key:postgres-password,Optional:nil,},}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:268435456 scale:0} d:{Dec:<nil>} s: Format:BinarySI}]} VolumeMounts:[{Name:data ReadOnly:false MountPath:/var/lib/postgresql/data/pgdata SubPath:postgresql-db MountPropagation:<nil>} {Name:default-token-ctrw6 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,} ReadinessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c exec pg_isready --host $POD_IP],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it. | |
Dec 01 16:21:17 minikube localkube[3201]: E1201 16:21:17.011397 3201 kuberuntime_manager.go:714] container start failed: CreateContainerConfigError: lstat /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058: no such file or directory | |
Dec 01 16:21:17 minikube localkube[3201]: E1201 16:21:17.011752 3201 pod_workers.go:182] Error syncing pod 97b40d5c-d6b3-11e7-9596-080027aac058 ("mollified-mastiff-postgresql-7845df87fc-s9qm8_default(97b40d5c-d6b3-11e7-9596-080027aac058)"), skipping: failed to "StartContainer" for "mollified-mastiff-postgresql" with CreateContainerConfigError: "lstat /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058: no such file or directory" | |
Dec 01 16:21:18 minikube localkube[3201]: W1201 16:21:18.879397 3201 conversion.go:110] Could not get instant cpu stats: different number of cpus |
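
[Editor's note] Throughout this section the mollified-mastiff-postgresql pod fails repeatedly with the same CreateContainerConfigError: the kubelet's lstat of /tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058 reports "no such file or directory", i.e. the hostPath directory that is supposed to back the dynamically provisioned claim apparently does not exist on the minikube node. A minimal sketch, assuming it is compiled and run on the node itself (for example after `minikube ssh`), that reproduces the same check the kubelet makes:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Same path the failing lstat call in the log refers to.
	path := "/tmp/hostpath-provisioner/pvc-97abd9de-d6b3-11e7-9596-080027aac058"
	if _, err := os.Lstat(path); err != nil {
		// For a missing directory this prints
		// "lstat /tmp/hostpath-provisioner/...: no such file or directory",
		// matching the CreateContainerConfigError above.
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("hostPath backing the PVC exists:", path)
}
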