Kubelet logs for deis/builder#225
# journalctl -fu kubelet -n 20000
-- Logs begin at Wed 2016-03-02 20:23:19 UTC. --
Mar 02 20:23:30 ip-10-0-0-166.us-west-2.compute.internal systemd[1]: Starting kubelet.service...
Mar 02 20:23:30 ip-10-0-0-166.us-west-2.compute.internal systemd[1]: Started kubelet.service.
Mar 02 20:23:34 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:34.493522 924 aws.go:466] Zone not specified in configuration file; querying AWS metadata service
Mar 02 20:23:34 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:34.614522 924 aws.go:575] AWS cloud filtering on tags: map[KubernetesCluster:k8s]
Mar 02 20:23:34 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:34.615837 924 manager.go:128] cAdvisor running in container: "/system.slice"
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.173532 924 fs.go:108] Filesystem partitions: map[/dev/xvda9:{mountpoint:/ major:202 minor:9 fsType: blockSize:0} /dev/xvda3:{mountpoint:/usr major:202 minor:3 fsType: blockSize:0} /dev/xvda6:{mountpoint:/usr/share/oem major:202 minor:6 fsType: blockSize:0}]
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.180732 924 manager.go:163] Machine: {NumCores:2 CpuFrequency:2500106 MemoryCapacity:7846588416 MachineID:a47d66fc2ade46dba3040d0378b05e8c SystemUUID:EC259342-D3C5-115E-77CD-09DB0B66F289 BootID:378d2694-403b-4a5d-80e5-48408c4114c0 Filesystems:[{Device:/dev/xvda9 Capacity:28730269696} {Device:/dev/xvda3 Capacity:1031946240} {Device:/dev/xvda6 Capacity:113229824}] DiskMap:map[202:0:{Name:xvda Major:202 Minor:0 Size:32212254720 Scheduler:none} 202:16:{Name:xvdb Major:202 Minor:16 Size:32204390400 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:02:5c:32:09:4d:91 Speed:0 Mtu:9001} {Name:flannel0 MacAddress: Speed:10 Mtu:8973}] Topology:[{Id:0 Memory:7846588416 Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:26214400 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown}
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.183310 924 manager.go:169] Version: {KernelVersion:4.4.0-coreos-r2 ContainerOsVersion:CoreOS 942.0.0 DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:}
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.187290 924 server.go:798] Adding manifest file: /etc/kubernetes/manifests
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.187337 924 server.go:808] Watching apiserver
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.312080 924 plugins.go:56] Registering credential provider: .dockercfg
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.317577 924 server.go:770] Started kubelet
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:23:46.317788 924 kubelet.go:756] Image garbage collection failed: unable to find data for container /
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.321875 924 server.go:72] Starting to listen on 0.0.0.0:10250
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:23:46.325987 924 event.go:197] Unable to write event: 'Post https://10.0.0.50/api/v1/namespaces/default/events: dial tcp 10.0.0.50:443: connection refused' (may retry after sleeping)
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.337237 924 kubelet.go:777] Running in container "/kubelet"
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.346582 924 factory.go:194] System is using systemd
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.553297 924 factory.go:236] Registering Docker factory
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.553945 924 factory.go:93] Registering Raw factory
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.681056 924 manager.go:1006] Started watching for new ooms in manager
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.682051 924 oomparser.go:183] oomparser using systemd
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.684506 924 manager.go:250] Starting recovery of all containers
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.759450 924 manager.go:255] Recovery completed
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.824668 924 manager.go:104] Starting to sync pod status with apiserver
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.825365 924 kubelet.go:1953] Starting kubelet main sync loop.
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:23:46.825614 924 kubelet.go:1908] error getting node: node 'ip-10-0-0-166.us-west-2.compute.internal' is not in cache
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:46.829904 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 02 20:23:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:23:46.831729 924 manager.go:108] Failed to updated pod status: error updating status for pod "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal_kube-system": Get https://10.0.0.50/api/v1/namespaces/kube-system/pods/kube-proxy-ip-10-0-0-166.us-west-2.compute.internal: dial tcp 10.0.0.50:443: connection refused
Mar 02 20:23:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:23:48.974056 924 hairpin.go:49] Unable to find pair interface, setting up all interfaces: No peer_ifindex in interface statistics for eth0 of container 1177
Mar 02 20:23:51 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:23:51.833283 924 event.go:197] Unable to write event: 'Post https://10.0.0.50/api/v1/namespaces/default/events: dial tcp 10.0.0.50:443: connection refused' (may retry after sleeping)
Mar 02 20:24:01 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:01.837218 924 event.go:197] Unable to write event: 'Post https://10.0.0.50/api/v1/namespaces/default/events: dial tcp 10.0.0.50:443: connection refused' (may retry after sleeping)
Mar 02 20:24:05 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:05.013117 924 kubelet.go:1455] Failed creating a mirror pod "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal_kube-system": Post https://10.0.0.50/api/v1/namespaces/kube-system/pods: dial tcp 10.0.0.50:443: connection refused
Mar 02 20:24:05 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:05.029238 924 kubelet.go:1455] Failed creating a mirror pod "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal_kube-system": Post https://10.0.0.50/api/v1/namespaces/kube-system/pods: dial tcp 10.0.0.50:443: connection refused
Mar 02 20:24:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:06.836215 924 kubelet.go:1455] Failed creating a mirror pod "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal_kube-system": Post https://10.0.0.50/api/v1/namespaces/kube-system/pods: dial tcp 10.0.0.50:443: connection refused
Mar 02 20:24:11 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:11.838166 924 event.go:197] Unable to write event: 'Post https://10.0.0.50/api/v1/namespaces/default/events: dial tcp 10.0.0.50:443: connection refused' (may retry after sleeping)
Mar 02 20:24:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:16.833054 924 kubelet.go:1455] Failed creating a mirror pod "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal_kube-system": Post https://10.0.0.50/api/v1/namespaces/kube-system/pods: dial tcp 10.0.0.50:443: connection refused
Mar 02 20:24:21 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:21.839101 924 event.go:197] Unable to write event: 'Post https://10.0.0.50/api/v1/namespaces/default/events: dial tcp 10.0.0.50:443: connection refused' (may retry after sleeping)
Mar 02 20:24:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:26.833960 924 kubelet.go:1455] Failed creating a mirror pod "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal_kube-system": Post https://10.0.0.50/api/v1/namespaces/kube-system/pods: dial tcp 10.0.0.50:443: connection refused
Mar 02 20:24:30 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:24:30.245215 924 kubelet.go:900] Successfully registered node ip-10-0-0-166.us-west-2.compute.internal
Mar 02 20:24:31 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:31.851963 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal.14382083159ea179", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal", UID:"7079b87c23d04e34d89908f136c73b99", APIVersion:"v1", ResourceVersion:"", FieldPath:"implicitly required container POD"}, Reason:"Pulling", Message:"Pulling image \"gcr.io/google_containers/pause:0.8.0\"", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592547026, nsec:829877625, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592547026, nsec:829877625, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "kube-system" not found' (will not retry!)
Mar 02 20:24:31 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:31.868482 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal.14382083816f0eb3", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal", UID:"7079b87c23d04e34d89908f136c73b99", APIVersion:"v1", ResourceVersion:"", FieldPath:"implicitly required container POD"}, Reason:"Pulled", Message:"Successfully pulled image \"gcr.io/google_containers/pause:0.8.0\"", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592547028, nsec:638699187, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592547028, nsec:638699187, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "kube-system" not found' (will not retry!)
Mar 02 20:24:31 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:31.870770 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal.1438208384718801", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal", UID:"7079b87c23d04e34d89908f136c73b99", APIVersion:"v1", ResourceVersion:"", FieldPath:"implicitly required container POD"}, Reason:"Created", Message:"Created with docker id 8816a8d7ffe5", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592547028, nsec:689192961, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592547028, nsec:689192961, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "kube-system" not found' (will not retry!)
Mar 02 20:24:32 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:32.341104 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal.143820838dde4766", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal", UID:"7079b87c23d04e34d89908f136c73b99", APIVersion:"v1", ResourceVersion:"", FieldPath:"implicitly required container POD"}, Reason:"Started", Message:"Started with docker id 8816a8d7ffe5", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592547028, nsec:847314790, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592547028, nsec:847314790, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "kube-system" not found' (will not retry!)
Mar 02 20:24:32 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:24:32.542197 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal.14382083957de449", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-proxy-ip-10-0-0-166.us-west-2.compute.internal", UID:"7079b87c23d04e34d89908f136c73b99", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-proxy}"}, Reason:"Pulling", Message:"Pulling image \"gcr.io/google_containers/hyperkube:v1.1.2\"", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592547028, nsec:975215689, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592547028, nsec:975215689, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "kube-system" not found' (will not retry!)
Mar 02 20:26:12 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:26:12.929490 924 manager.go:1769] pod "deis-builder-ewoxm_deis" container "deis-builder" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:28:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:28:26.859924 924 manager.go:2022] Back-off 10s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:28:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:28:56.874833 924 manager.go:2022] Back-off 20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:29:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:29:06.892207 924 manager.go:2022] Back-off 20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:29:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:29:16.895954 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 02 20:29:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:29:36.869427 924 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:29:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:29:46.909296 924 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:29:55 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:29:55.826635 924 logs.go:40] http: WriteHeader called with both Transfer-Encoding of "chunked" and a Content-Length of 93
Mar 02 20:29:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:29:56.871577 924 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:29:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:29:58.100918 924 logs.go:40] http: WriteHeader called with both Transfer-Encoding of "chunked" and a Content-Length of 93
Mar 02 20:30:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:30:06.873682 924 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:30:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:30:36.872983 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:30:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:30:46.868320 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:30:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:30:56.871668 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:31:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:31:06.867822 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:31:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:31:16.872654 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:31:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:31:26.867304 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:31:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:31:36.874192 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:31:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:31:46.916680 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:32:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:32:16.859924 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:32:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:32:26.867163 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:32:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:32:36.874169 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:32:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:32:46.867632 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:32:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:32:56.867400 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:33:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:33:06.866560 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:33:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:33:16.868609 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:33:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:33:26.877742 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:33:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:33:36.869760 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:33:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:33:46.881301 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:33:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:33:56.882709 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:34:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:34:06.872076 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:34:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:34:16.836525 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:34:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:34:26.864330 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:34:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:34:36.876262 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:34:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:34:46.892351 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:34:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:34:56.868930 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 02 20:35:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:35:16.845900 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:35:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:35:26.851359 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:35:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:35:36.873651 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:35:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:35:46.895366 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:35:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:35:56.865345 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:36:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:36:06.873193 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:36:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:36:16.883443 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:36:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:36:26.863248 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:36:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:36:36.876110 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:36:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:36:46.868054 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:36:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:36:56.870915 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:37:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:37:06.862128 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:37:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:37:16.853427 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:37:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:37:26.911091 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:37:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:37:36.859006 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-sb3lp_deis
Mar 02 20:37:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:37:56.841152 924 kubelet.go:1588] Orphaned volume "e314f23d-e0b4-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume
Mar 02 20:37:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:37:56.874909 924 kubelet.go:1588] Orphaned volume "e314f23d-e0b4-11e5-b0bc-029bbe1bd231/objectstore-creds" found, tearing down volume
Mar 02 20:37:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:37:56.894255 924 kubelet.go:1588] Orphaned volume "e314f23d-e0b4-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
Mar 02 20:37:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:37:56.941221 924 kubelet.go:1588] Orphaned volume "e2b34e18-e0b4-11e5-b0bc-029bbe1bd231/builder-key-auth" found, tearing down volume
Mar 02 20:37:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:37:56.951131 924 kubelet.go:1588] Orphaned volume "e2b34e18-e0b4-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
Mar 02 20:37:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:37:56.960133 924 kubelet.go:1588] Orphaned volume "e2b34e18-e0b4-11e5-b0bc-029bbe1bd231/deis-builder-token-hvae8" found, tearing down volume
Mar 02 20:37:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:37:56.968186 924 kubelet.go:1588] Orphaned volume "e314f23d-e0b4-11e5-b0bc-029bbe1bd231/deis-database-token-ldns9" found, tearing down volume
Mar 02 20:38:41 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:38:41.936176 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-50nvk_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:39:15 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:15.526066 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-50nvk_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:39:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:17.511174 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-50nvk_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:39:20 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:20.020399 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-50nvk_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:39:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:28.883900 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-50nvk_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:39:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:36.836272 924 kubelet.go:1588] Orphaned volume "bec0f25b-e0b6-11e5-b0bc-029bbe1bd231/objectstore-creds" found, tearing down volume
Mar 02 20:39:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:36.850182 924 kubelet.go:1588] Orphaned volume "bec0f25b-e0b6-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
Mar 02 20:39:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:36.883245 924 kubelet.go:1588] Orphaned volume "bec0f25b-e0b6-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume
Mar 02 20:39:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:36.892246 924 kubelet.go:1588] Orphaned volume "bec0f25b-e0b6-11e5-b0bc-029bbe1bd231/deis-database-token-9w5rw" found, tearing down volume
Mar 02 20:39:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:48.238272 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-v8wxt_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:39:50 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:50.286791 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-v8wxt_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:39:59 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:39:59.037655 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-v8wxt_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:40:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:40:06.866534 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 02 20:40:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:40:08.881011 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-v8wxt_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:40:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:40:18.799799 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-v8wxt_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:40:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:40:28.825249 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-v8wxt_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:40:39 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:40:39.068709 924 manager.go:1897] Failed to pull image "quay.io/gabrtv/postgres:git-9d28031" from pod "deis-database-v8wxt_deis" and container "deis-database": image pull failed for quay.io/gabrtv/postgres:git-9d28031, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository gabrtv/postgres: "{\"error\": \"Permission Denied\"}")
Mar 02 20:41:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:36.839765 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/database-creds", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:36.839798 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/deis-database-token-9w5rw", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:36.839818 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/objectstore-creds", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:36.839835 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/minio-user", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:46.831528 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/database-creds", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:46.831555 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/deis-database-token-9w5rw", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:46.831569 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/objectstore-creds", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:46.831583 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/minio-user", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:56.838980 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/minio-user", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:56.839005 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/database-creds", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:56.839019 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/deis-database-token-9w5rw", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:41:56.839032 924 kubelet.go:1583] volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/objectstore-creds", still has a container running "e679384a-e0b6-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 20:41:59 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:41:59.227641 924 manager.go:1444] No ref for pod '"735c79d5062a140eb46fb5964f9d473325f2e9e3edfab6727f8f5a9c5ff6d823 deis-database deis/deis-database-v8wxt"'
Mar 02 20:41:59 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:41:59.398594 924 manager.go:1444] No ref for pod '"e99f672bc0492a5382e44cb042e99834372c1b6e6426721e48a52af68a841751 deis/deis-database-v8wxt"'
Mar 02 20:42:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:42:06.839090 924 kubelet.go:1588] Orphaned volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume
Mar 02 20:42:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:42:06.880492 924 kubelet.go:1588] Orphaned volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/deis-database-token-9w5rw" found, tearing down volume
Mar 02 20:42:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:42:06.915220 924 kubelet.go:1588] Orphaned volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/objectstore-creds" found, tearing down volume
Mar 02 20:42:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:42:06.920232 924 kubelet.go:1588] Orphaned volume "e679384a-e0b6-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
Mar 02 20:42:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:42:56.871775 924 manager.go:1769] pod "deis-workflow-yxb1d_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:47:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:47:06.981425 924 logs.go:40] http: multiple response.WriteHeader calls
Mar 02 20:47:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:47:06.986029 924 logs.go:40] http: multiple response.WriteHeader calls
Mar 02 20:47:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:47:16.846937 924 kubelet.go:1588] Orphaned volume "3b35e0c8-e0b7-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
Mar 02 20:47:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:47:16.861230 924 kubelet.go:1588] Orphaned volume "3b35e0c8-e0b7-11e5-b0bc-029bbe1bd231/builder-key-auth" found, tearing down volume
Mar 02 20:47:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:47:16.885882 924 kubelet.go:1588] Orphaned volume "3b35e0c8-e0b7-11e5-b0bc-029bbe1bd231/django-secret-key" found, tearing down volume
Mar 02 20:47:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:47:16.897268 924 kubelet.go:1588] Orphaned volume "3b35e0c8-e0b7-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume
Mar 02 20:47:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:47:16.903398 924 kubelet.go:1588] Orphaned volume "3b35e0c8-e0b7-11e5-b0bc-029bbe1bd231/deis-workflow-token-bki0s" found, tearing down volume
Mar 02 20:48:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:48:28.846853 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 02 20:49:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:49:07.095483 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:49:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:49:47.225927 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:50:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:50:27.085160 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:51:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:51:07.096813 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:51:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:51:47.224169 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:52:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:52:27.110462 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:53:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:53:07.115753 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:53:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:53:47.196233 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:53:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:53:47.327289 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 02 20:54:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:54:27.109508 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:55:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:55:07.094144 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:55:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:55:07.273383 924 server.go:532] Error executing command in container: API error (409): Container 578b64091b88fc7286eacb8bab3fb54dfe3f4b5eaa430b60251ec51d99e1af4a is not running: Exited (0) Less than a second ago
Mar 02 20:55:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:55:47.154706 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:56:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:56:27.137566 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:57:01 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 20:57:01.922985 924 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [1479/1396]) [2478]
Mar 02 20:57:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:57:07.127835 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:57:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:57:07.282248 924 server.go:532] Error executing command in container: API error (409): Container a859c58d7657b951e2e775fa3b20f44f52d69856347cdc6335c9cd442f1cc42e is not running: Exited (0) Less than a second ago
Mar 02 20:57:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:57:18.924500 924 server.go:532] Error executing command in container: Error executing in Docker Container: -1
Mar 02 20:57:30 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:57:30.037605 924 server.go:532] Error executing command in container: Error executing in Docker Container: -1
Mar 02 20:57:45 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:57:45.077882 924 server.go:532] Error executing command in container: Error executing in Docker Container: -1
Mar 02 20:57:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:57:47.168765 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:58:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:58:27.132680 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:58:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:58:37.102786 924 manager.go:1769] pod "deis-workflow-awp3p_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 20:58:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:58:57.173210 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 02 20:58:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:58:57.188840 924 server.go:532] Error executing command in container: API error (409): Container 5c759513a67483171cdb4ccb4595d20eb9b7edf8a017ad18c3005176b500b362 is not running: Exited (137) Less than a second ago
Mar 02 20:59:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:59:07.124274 924 logs.go:40] http: multiple response.WriteHeader calls
Mar 02 20:59:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 20:59:07.385659 924 server.go:532] Error executing command in container: API error (409): Container c10ea9339156fe26ed11748548f3681cdc698acc693cc11cbac9c0141af0fe48 is not running: Exited (137) Less than a second ago
Mar 02 20:59:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 20:59:37.108217 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 21:00:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:00:17.123918 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 21:00:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:00:57.163152 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 21:01:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:01:37.146784 924 manager.go:1769] pod "deis-database-m99mf_deis" container "deis-database" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 21:02:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:02:18.032802 924 kubelet.go:1588] Orphaned volume "1dd61f51-e0b8-11e5-b0bc-029bbe1bd231/objectstore-creds" found, tearing down volume
Mar 02 21:02:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:02:18.044223 924 kubelet.go:1588] Orphaned volume "1dd61f51-e0b8-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
Mar 02 21:02:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:02:18.057376 924 kubelet.go:1588] Orphaned volume "1dd61f51-e0b8-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume
Mar 02 21:02:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:02:18.068163 924 kubelet.go:1588] Orphaned volume "1dd61f51-e0b8-11e5-b0bc-029bbe1bd231/deis-database-token-9w5rw" found, tearing down volume
Mar 02 21:02:34 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:02:34.707231 924 kubelet.go:1588] Orphaned volume "bf1c2438-e0b6-11e5-b0bc-029bbe1bd231/minio-admin" found, tearing down volume
Mar 02 21:02:34 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:02:34.716736 924 kubelet.go:1588] Orphaned volume "bf1c2438-e0b6-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
Mar 02 21:02:34 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:02:34.724127 924 kubelet.go:1588] Orphaned volume "bf1c2438-e0b6-11e5-b0bc-029bbe1bd231/deis-minio-token-tcsxt" found, tearing down volume
Mar 02 21:02:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:36.830163 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/django-secret-key", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:36.830190 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/database-creds", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:36.830207 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/deis-workflow-token-bki0s", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:36.830222 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/builder-key-auth", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:36.830237 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/minio-user", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:46.845985 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/django-secret-key", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:46.846016 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/database-creds", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:46.846033 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/deis-workflow-token-bki0s", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:46.846049 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/builder-key-auth", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:46.846064 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/minio-user", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:02:46.897938 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deis-workflow-awp3p.143822a3ec6ab3b2", GenerateName:"", Namespace:"deis", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"deis", Name:"deis-workflow-awp3p", UID:"2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"2118", FieldPath:"spec.containers{deis-workflow}"}, Reason:"Unhealthy", Message:"Readiness probe failed: Get http://10.2.2.5:8000/healthz: read tcp 10.2.2.5:8000: use of closed network connection", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592549366, nsec:895784882, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592549366, nsec:895784882, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'Event "deis-workflow-awp3p.143822a3ec6ab3b2" is forbidden: Unable to create new content in namespace deis because it is being terminated.' (will not retry!)
Mar 02 21:02:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:56.837999 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/django-secret-key", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:56.838027 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/database-creds", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:56.838041 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/deis-workflow-token-bki0s", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:56.838055 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/builder-key-auth", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:02:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:02:56.838065 924 kubelet.go:1583] volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/minio-user", still has a container running "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", skipping teardown
Mar 02 21:03:04 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:03:04.756554 924 manager.go:1769] pod "deis-workflow-awp3p_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created.
Mar 02 21:03:04 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:03:04.760182 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deis-workflow-awp3p.143822a815002b2c", GenerateName:"", Namespace:"deis", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"deis", Name:"deis-workflow-awp3p", UID:"2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"2118", FieldPath:"spec.containers{deis-workflow}"}, Reason:"Unhealthy", Message:"Liveness probe failed: Get http://10.2.2.5:8000/healthz: dial tcp 10.2.2.5:8000: connection refused", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592549384, nsec:756538156, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592549384, nsec:756538156, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "deis" not found' (will not retry!)
Mar 02 21:03:04 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:03:04.794269 924 manager.go:1444] No ref for pod '"4b8fd0473da08d1138f6aa601e1e821ba557b262b7d0e42706b8053ced00f23f deis-workflow deis/deis-workflow-awp3p"'
Mar 02 21:03:04 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:03:04.798469 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deis-workflow-awp3p.143822a8168762b3", GenerateName:"", Namespace:"deis", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"deis", Name:"deis-workflow-awp3p", UID:"2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"2118", FieldPath:"spec.containers{deis-workflow}"}, Reason:"Killing", Message:"Killing with docker id 4b8fd0473da0", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592549384, nsec:782176947, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592549384, nsec:782176947, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "deis" not found' (will not retry!)
Mar 02 21:03:04 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:03:04.803104 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deis-workflow-awp3p.143821e1ffecb132", GenerateName:"", Namespace:"deis", SelfLink:"", UID:"", ResourceVersion:"2569", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"deis", Name:"deis-workflow-awp3p", UID:"2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"2118", FieldPath:"spec.containers{deis-workflow}"}, Reason:"Pulling", Message:"Pulling image \"quay.io/deisci/workflow:v2-beta\"", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592548533, nsec:0, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592549384, nsec:797463613, loc:(*time.Location)(0x1fbc560)}}, Count:3}': 'namespaces "deis" not found' (will not retry!)
Mar 02 21:03:05 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:03:05.011735 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deis-workflow-awp3p.143822a8240b9c8c", GenerateName:"", Namespace:"deis", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"deis", Name:"deis-workflow-awp3p", UID:"2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"2118", FieldPath:"implicitly required container POD"}, Reason:"Killing", Message:"Killing with docker id 17d80a420b71", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592549385, nsec:8946316, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592549385, nsec:8946316, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "deis" not found' (will not retry!)
Mar 02 21:03:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:03:06.838460 924 kubelet.go:1588] Orphaned volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 21:03:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:03:06.847396 924 kubelet.go:1588] Orphaned volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/django-secret-key" found, tearing down volume | |
Mar 02 21:03:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:03:06.859110 924 kubelet.go:1588] Orphaned volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume | |
Mar 02 21:03:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:03:06.868268 924 kubelet.go:1588] Orphaned volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/deis-workflow-token-bki0s" found, tearing down volume | |
Mar 02 21:03:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:03:06.892141 924 kubelet.go:1588] Orphaned volume "2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231/builder-key-auth" found, tearing down volume | |
Mar 02 21:03:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:03:06.978996 924 manager.go:1920] Error running pod "deis-workflow-awp3p_deis" container "deis-workflow": impossible: cannot find the mounted volumes for pod "deis-workflow-awp3p_deis" | |
Mar 02 21:03:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:03:06.981778 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deis-workflow-awp3p.143821e26007e2c7", GenerateName:"", Namespace:"deis", SelfLink:"", UID:"", ResourceVersion:"2571", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"deis", Name:"deis-workflow-awp3p", UID:"2cc8f34c-e0b8-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"2118", FieldPath:"spec.containers{deis-workflow}"}, Reason:"Pulled", Message:"Successfully pulled image \"quay.io/deisci/workflow:v2-beta\"", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592548535, nsec:0, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592549386, nsec:978901298, loc:(*time.Location)(0x1fbc560)}}, Count:3}': 'namespaces "deis" not found' (will not retry!) | |
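
The burst of rejected events above is expected while the deis namespace is being deleted: the API server forbids creating new objects in a terminating namespace, and once the namespace is fully gone the same posts fail with 'namespaces "deis" not found'; the kubelet logs both as (will not retry!). A minimal Go sketch of that admission decision, using a simplified stand-in type rather than the real apiserver code:

package main

import (
	"fmt"
	"time"
)

// Namespace is a simplified stand-in for the API object; only the
// field that drives the admission decision is modeled here.
type Namespace struct {
	Name              string
	DeletionTimestamp *time.Time // non-nil once deletion has begun
}

// admitCreate mirrors the two failure modes in the log: creates are
// forbidden while the namespace is terminating, and fail with
// "not found" once it has been fully deleted.
func admitCreate(name string, ns *Namespace) error {
	if ns == nil {
		return fmt.Errorf("namespaces %q not found", name)
	}
	if ns.DeletionTimestamp != nil {
		return fmt.Errorf("unable to create new content in namespace %s because it is being terminated", ns.Name)
	}
	return nil // creation allowed
}

func main() {
	now := time.Now()
	terminating := &Namespace{Name: "deis", DeletionTimestamp: &now}
	fmt.Println(admitCreate("deis", terminating)) // forbidden while terminating
	fmt.Println(admitCreate("deis", nil))         // not found after deletion completes
}
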
Mar 02 21:04:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:04:57.972162 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 21:05:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:05:26.888170 924 manager.go:2022] Back-off 10s restarting failed container=deis-workflow pod=deis-workflow-2mjio_deis | |
Mar 02 21:05:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:05:46.928140 924 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-2mjio_deis | |
Mar 02 21:05:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:05:56.952113 924 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-2mjio_deis | |
Mar 02 21:06:13 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:06:13.441025 924 logs.go:40] http: multiple response.WriteHeader calls | |
Mar 02 21:06:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:06:16.914355 924 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-2mjio_deis | |
Mar 02 21:06:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:06:26.915850 924 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-2mjio_deis | |
Mar 02 21:06:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:06:36.946967 924 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-2mjio_deis | |
Mar 02 21:06:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:06:36.948436 924 manager.go:2022] Back-off 10s restarting failed container=deis-database pod=deis-database-jzqxf_deis | |
Mar 02 21:06:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:06:46.951960 924 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-2mjio_deis | |
Mar 02 21:07:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:07:06.865917 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-2mjio_deis | |
Mar 02 21:07:10 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:10.439121 924 kubelet.go:1588] Orphaned volume "6c5a7767-e0ba-11e5-b0bc-029bbe1bd231/registry-storage" found, tearing down volume | |
Mar 02 21:07:10 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:10.439369 924 kubelet.go:1588] Orphaned volume "6c5a7767-e0ba-11e5-b0bc-029bbe1bd231/registry-creds" found, tearing down volume | |
Mar 02 21:07:10 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:10.470145 924 kubelet.go:1588] Orphaned volume "6c5a7767-e0ba-11e5-b0bc-029bbe1bd231/deis-registry-token-3ntm3" found, tearing down volume | |
Mar 02 21:07:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:16.842262 924 kubelet.go:1588] Orphaned volume "6d5216aa-e0ba-11e5-b0bc-029bbe1bd231/django-secret-key" found, tearing down volume | |
Mar 02 21:07:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:16.852849 924 kubelet.go:1588] Orphaned volume "6d5216aa-e0ba-11e5-b0bc-029bbe1bd231/deis-workflow-token-4u0ei" found, tearing down volume | |
Mar 02 21:07:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:16.893164 924 kubelet.go:1588] Orphaned volume "6d5216aa-e0ba-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume | |
Mar 02 21:07:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:16.939629 924 kubelet.go:1588] Orphaned volume "6d5216aa-e0ba-11e5-b0bc-029bbe1bd231/builder-key-auth" found, tearing down volume | |
Mar 02 21:07:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:16.967263 924 kubelet.go:1588] Orphaned volume "6d5216aa-e0ba-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 21:07:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:07:26.844077 924 kubelet.go:1583] volume "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231/database-creds", still has a container running "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231", skipping teardown | |
Mar 02 21:07:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:07:26.844102 924 kubelet.go:1583] volume "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231/deis-database-token-umbsj", still has a container running "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231", skipping teardown | |
Mar 02 21:07:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:07:26.844119 924 kubelet.go:1583] volume "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231/objectstore-creds", still has a container running "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231", skipping teardown | |
Mar 02 21:07:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:07:26.844156 924 kubelet.go:1583] volume "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231/minio-user", still has a container running "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231", skipping teardown | |
Mar 02 21:07:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:28.813437 924 manager.go:1444] No ref for pod '"2aa31ac56d964d69472cc8ec3e89da300fbeb2a6166ea85cd21087e12977d278 deis-database deis/deis-database-jzqxf"' | |
Mar 02 21:07:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:07:28.816152 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deis-database-jzqxf.143822e58ffb6b5e", GenerateName:"", Namespace:"deis", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"deis", Name:"deis-database-jzqxf", UID:"6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"3240", FieldPath:"spec.containers{deis-database}"}, Reason:"Killing", Message:"Killing with docker id 2aa31ac56d96", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592549648, nsec:812829534, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592549648, nsec:812829534, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "deis" not found' (will not retry!) | |
Mar 02 21:07:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:28.987608 924 manager.go:1444] No ref for pod '"6349bf7f8ad5ae70c49d9d62dd33115476752f54ae304d53c32fda6a3c99871a /"' | |
Mar 02 21:07:29 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: E0302 21:07:28.994662 924 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deis-database-jzqxf.143822e59a659ccf", GenerateName:"", Namespace:"deis", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"deis", Name:"deis-database-jzqxf", UID:"6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"3240", FieldPath:"implicitly required container POD"}, Reason:"Killing", Message:"Killing with docker id 6349bf7f8ad5", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592549648, nsec:987561167, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592549648, nsec:987561167, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "deis" not found' (will not retry!) | |
Mar 02 21:07:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:36.846483 924 kubelet.go:1588] Orphaned volume "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231/objectstore-creds" found, tearing down volume | |
Mar 02 21:07:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:36.863305 924 kubelet.go:1588] Orphaned volume "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 21:07:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:36.907120 924 kubelet.go:1588] Orphaned volume "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume | |
Mar 02 21:07:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: W0302 21:07:36.938319 924 kubelet.go:1588] Orphaned volume "6b6b38d2-e0ba-11e5-b0bc-029bbe1bd231/deis-database-token-umbsj" found, tearing down volume | |
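
The kubelet.go:1583/1588 pairs above trace the volume reclaim protocol: while any container of the pod is still running, teardown of its volumes is skipped; on a later sync, once the containers are gone, each volume is reported as orphaned and torn down. A minimal sketch of that per-sync decision, with hypothetical helper names rather than the kubelet's actual internals:

package main

import "fmt"

// volume identifies a pod-scoped volume as "<podUID>/<name>",
// matching the form used in the log lines above.
type volume struct {
	podUID string
	name   string
}

// syncVolumes makes the same per-volume decision the log shows:
// skip teardown while the pod still has a running container,
// otherwise treat the volume as orphaned and tear it down.
func syncVolumes(mounted []volume, running map[string]bool) {
	for _, v := range mounted {
		if running[v.podUID] {
			fmt.Printf("volume %q/%q still has a container running, skipping teardown\n", v.podUID, v.name)
			continue
		}
		fmt.Printf("Orphaned volume %q/%q found, tearing down volume\n", v.podUID, v.name)
	}
}

func main() {
	vols := []volume{
		{"6b6b38d2", "database-creds"},
		{"6b6b38d2", "minio-user"},
	}
	// First sync: the pod's container is still up.
	syncVolumes(vols, map[string]bool{"6b6b38d2": true})
	// A later sync: the container has been killed; the volumes are now orphaned.
	syncVolumes(vols, map[string]bool{})
}
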
Mar 02 21:08:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:08:56.946953 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:09:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:09:46.971384 924 manager.go:2022] Back-off 10s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:10:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:10:06.940853 924 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:10:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:10:16.927583 924 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:10:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:10:26.921480 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 21:10:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:10:36.885491 924 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:10:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:10:46.984423 924 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:10:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:10:56.981079 924 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:11:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:11:06.940918 924 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:11:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:11:57.021284 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:12:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:12:16.885828 924 manager.go:2022] Back-off 10s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:12:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:12:46.986556 924 manager.go:2022] Back-off 20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:12:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:12:56.999709 924 manager.go:2022] Back-off 20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:13:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:13:06.998911 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:13:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:13:26.936555 924 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:13:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:13:36.948706 924 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:13:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:13:46.988574 924 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:13:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:13:56.954981 924 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:14:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:14:16.972955 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:14:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:14:26.968894 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:14:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:14:36.934223 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:14:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:14:46.972790 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:14:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:14:56.906763 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:15:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:15:06.984453 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:15:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:15:16.923511 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:15:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:15:26.932833 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:15:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:15:26.940481 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:15:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:15:36.908952 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:15:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:15:46.955955 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 21:16:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:07.002164 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:16:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:07.004406 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:16:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:16.956992 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:16:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:16.997127 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:16:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:26.975514 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:16:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:26.977264 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:16:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:36.957039 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:16:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:36.959437 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:16:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:46.949280 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:16:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:46.995399 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:16:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:56.926442 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:16:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:16:56.930171 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:17:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:17:06.991932 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:17:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:17:06.998218 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:17:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:17:16.940070 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:17:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:17:16.964789 924 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:17:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:17:26.978552 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:17:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:17:36.958410 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:17:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:17:47.000470 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:17:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:17:56.954963 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:18:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:18:06.935749 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:18:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:18:06.990565 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:18:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:18:16.932973 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:18:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:18:26.937207 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:18:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:18:36.928620 924 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:19:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:19:06.939230 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:19:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:19:16.911631 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:19:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:19:16.937566 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:19:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:19:26.912625 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:19:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:19:36.926221 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:19:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:19:46.994967 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:19:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:19:56.917911 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:20:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:20:06.948150 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:20:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:20:16.945414 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:20:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:20:26.954195 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:20:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:20:26.956063 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:20:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:20:36.959631 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:20:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:20:46.996781 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:20:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:20:56.940339 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:20:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:20:57.059353 924 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 21:21:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:21:06.996506 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:21:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:21:17.014619 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:21:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:21:26.948448 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:21:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:21:36.971996 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:21:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:21:36.974841 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:21:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:21:46.954447 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:21:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:21:56.921111 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:22:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:22:06.918073 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:22:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:22:16.907812 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:22:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:22:26.958503 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:22:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:22:36.962478 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:22:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:22:46.950797 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:22:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:22:46.980807 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:22:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:22:56.941969 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:23:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:23:06.977108 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:23:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:23:16.939387 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:23:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:23:26.981949 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:23:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:23:36.936628 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:23:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:23:46.947330 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:23:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:23:56.916826 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:23:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:23:56.918803 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:24:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:24:26.962917 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:24:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:24:36.973504 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:24:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:24:47.030208 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:24:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:24:56.956680 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:25:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:25:06.963656 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:25:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:25:06.974187 924 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:25:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:25:16.934533 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:25:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:25:26.929547 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:25:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[924]: I0302 21:25:36.911567 924 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
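
The restart delays above double from 10s through 20s, 40s, 1m20s and 2m40s, then hold at 5m0s, the kubelet's crash-loop back-off cap. A minimal sketch reproducing that schedule; the 10s base and 5m cap are read off the log rather than taken from the kubelet source:

package main

import (
	"fmt"
	"time"
)

// backoffSchedule doubles the restart delay from base until it reaches
// limit, matching the 10s -> 20s -> 40s -> 1m20s -> 2m40s -> 5m0s
// progression visible in the log.
func backoffSchedule(base, limit time.Duration, restarts int) []time.Duration {
	out := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		out = append(out, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
	return out
}

func main() {
	for _, d := range backoffSchedule(10*time.Second, 5*time.Minute, 7) {
		fmt.Printf("Back-off %s restarting failed container\n", d)
	}
}
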
-- Reboot -- | |
Mar 02 21:27:19 ip-10-0-0-166.us-west-2.compute.internal systemd[1]: Starting kubelet.service... | |
Mar 02 21:27:19 ip-10-0-0-166.us-west-2.compute.internal systemd[1]: Started kubelet.service. | |
Mar 02 21:27:20 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:20.783809 658 aws.go:466] Zone not specified in configuration file; querying AWS metadata service | |
Mar 02 21:27:20 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:20.971732 658 aws.go:575] AWS cloud filtering on tags: map[KubernetesCluster:k8s] | |
Mar 02 21:27:21 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:20.972776 658 manager.go:128] cAdvisor running in container: "/system.slice/kubelet.service" | |
Mar 02 21:27:23 ip-10-0-0-166.us-west-2.compute.internal systemd[1]: Started kubelet.service. | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.124176 658 fs.go:108] Filesystem partitions: map[/dev/xvda9:{mountpoint:/ major:202 minor:9 fsType: blockSize:0} /dev/xvda3:{mountpoint:/usr major:202 minor:3 fsType: blockSize:0} /dev/xvda6:{mountpoint:/usr/share/oem major:202 minor:6 fsType: blockSize:0}] | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.142712 658 manager.go:163] Machine: {NumCores:2 CpuFrequency:2500106 MemoryCapacity:7846588416 MachineID:a47d66fc2ade46dba3040d0378b05e8c SystemUUID:EC259342-D3C5-115E-77CD-09DB0B66F289 BootID:777c7034-98da-40b3-818f-db69cc296377 Filesystems:[{Device:/dev/xvda9 Capacity:28730269696} {Device:/dev/xvda3 Capacity:1031946240} {Device:/dev/xvda6 Capacity:113229824}] DiskMap:map[202:0:{Name:xvda Major:202 Minor:0 Size:32212254720 Scheduler:none} 202:16:{Name:xvdb Major:202 Minor:16 Size:32204390400 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:02:5c:32:09:4d:91 Speed:0 Mtu:9001} {Name:flannel0 MacAddress: Speed:10 Mtu:8973}] Topology:[{Id:0 Memory:7846588416 Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:26214400 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown} | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.143487 658 manager.go:169] Version: {KernelVersion:4.4.0-coreos-r2 ContainerOsVersion:CoreOS 942.0.0 DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:} | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.160605 658 server.go:798] Adding manifest file: /etc/kubernetes/manifests | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.160641 658 server.go:808] Watching apiserver | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.333421 658 plugins.go:56] Registering credential provider: .dockercfg | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.341774 658 server.go:770] Started kubelet | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: E0302 21:27:27.347414 658 kubelet.go:756] Image garbage collection failed: unable to find data for container / | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.352267 658 server.go:72] Starting to listen on 0.0.0.0:10250 | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.362530 658 kubelet.go:777] Running in container "/kubelet" | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.477158 658 factory.go:194] System is using systemd | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.737606 658 factory.go:236] Registering Docker factory | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.738329 658 factory.go:93] Registering Raw factory | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.868228 658 kubelet.go:885] Node ip-10-0-0-166.us-west-2.compute.internal was previously registered | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.943056 658 manager.go:1006] Started watching for new ooms in manager | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.944275 658 oomparser.go:183] oomparser using systemd | |
Mar 02 21:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:27.944956 658 manager.go:250] Starting recovery of all containers | |
Mar 02 21:27:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:28.035587 658 manager.go:255] Recovery completed | |
Mar 02 21:27:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:28.105482 658 manager.go:104] Starting to sync pod status with apiserver | |
Mar 02 21:27:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:28.107054 658 kubelet.go:1953] Starting kubelet main sync loop. | |
Mar 02 21:27:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:28.539351 658 hairpin.go:49] Unable to find pair interface, setting up all interfaces: No peer_ifindex in interface statistics for eth0 of container 1125 | |
Mar 02 21:27:30 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: E0302 21:27:30.222179 658 kubelet.go:1455] Failed creating a mirror pod "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal_kube-system": pods "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal" already exists | |
Mar 02 21:27:30 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:30.316070 658 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 21:27:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:27:58.259608 658 manager.go:2022] Back-off 10s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:28:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:28:18.293415 658 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:28:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:28:28.369101 658 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:28:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:28:48.317739 658 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:28:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:28:58.272373 658 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:29:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:29:08.282724 658 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:29:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:29:18.255593 658 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:29:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:29:18.261448 658 manager.go:2022] Back-off 10s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:29:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:29:48.330606 658 manager.go:2022] Back-off 20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:29:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:29:58.256795 658 manager.go:2022] Back-off 20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:30:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:30:08.229297 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:30:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:30:28.250450 658 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:30:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:30:38.262864 658 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:30:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:30:48.243045 658 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:30:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:30:58.216540 658 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:31:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:31:18.244476 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:31:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:31:28.202053 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:31:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:31:38.202213 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:31:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:31:48.216323 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:31:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:31:58.276830 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:32:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:32:08.248545 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:32:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:32:18.295585 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:32:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:32:28.281397 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:32:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:32:28.291928 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:32:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:32:38.331521 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:32:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:32:48.282610 658 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 21:32:49 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: E0302 21:32:49.890325 658 manager.go:1920] Error running pod "deis-database-lxj5i_deis" container "deis-database": exceeded maxTries, some processes might not have desired OOM score | |
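
The "exceeded maxTries" error above comes from applying OOM score adjustments: the kubelet writes the desired value to /proc/<pid>/oom_score_adj for each process in the container, retrying a bounded number of times because PIDs can appear and exit while it works. A minimal sketch of the per-process write; the -999 value is illustrative only:

package main

import (
	"fmt"
	"os"
)

// applyOOMScoreAdj performs the per-PID step behind the error above:
// writing the desired OOM score adjustment for one process.
func applyOOMScoreAdj(pid, value int) error {
	path := fmt.Sprintf("/proc/%d/oom_score_adj", pid)
	return os.WriteFile(path, []byte(fmt.Sprintf("%d", value)), 0644)
}

func main() {
	// Illustrative only: try to adjust this process's own score.
	// Lowering the score typically requires elevated privileges,
	// so an unprivileged run will print the resulting error.
	if err := applyOOMScoreAdj(os.Getpid(), -999); err != nil {
		fmt.Println("could not set oom_score_adj:", err)
	}
}
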
Mar 02 21:33:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:08.235650 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:33:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:08.237456 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:33:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:18.279264 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:33:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:18.286364 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:33:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:28.293589 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:33:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:28.303974 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:33:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:38.314732 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:33:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:38.322166 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:33:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:48.249357 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:33:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:48.269451 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:33:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:58.286355 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:33:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:33:58.305268 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:34:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:34:08.221357 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:34:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:34:08.229439 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:34:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:34:18.258839 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:34:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:34:18.275347 658 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 21:34:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:34:28.298103 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:34:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:34:38.264935 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:34:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:34:48.277230 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:34:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:34:58.247736 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:35:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:35:08.260789 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:35:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:35:08.264726 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:35:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:35:18.220273 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:35:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:35:28.288394 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:35:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:35:38.250625 658 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:36:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:36:08.253845 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:36:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:36:18.257169 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:36:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:36:18.260330 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:36:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:36:28.218332 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:36:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:36:38.288667 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:36:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:36:48.230743 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:36:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:36:58.263888 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:37:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:37:08.249596 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:37:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:37:18.215988 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:37:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:37:28.325271 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:37:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:37:28.340336 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:37:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:37:38.204177 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:37:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:37:48.258303 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:37:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:37:58.190833 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:37:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:37:58.370743 658 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 21:38:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:38:08.224567 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:38:18 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:38:18.276246 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:38:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:38:28.256077 658 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-lxj5i_deis | |
Mar 02 21:38:38 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:38:38.198322 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:38:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: W0302 21:38:48.124172 658 kubelet.go:1588] Orphaned volume "58ccb063-e0bb-11e5-b0bc-029bbe1bd231/objectstore-creds" found, tearing down volume | |
Mar 02 21:38:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: W0302 21:38:48.163510 658 kubelet.go:1588] Orphaned volume "58ccb063-e0bb-11e5-b0bc-029bbe1bd231/deis-database-token-mhxz6" found, tearing down volume | |
Mar 02 21:38:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: W0302 21:38:48.171070 658 kubelet.go:1588] Orphaned volume "58ccb063-e0bb-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume | |
Mar 02 21:38:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: W0302 21:38:48.180304 658 kubelet.go:1588] Orphaned volume "58ccb063-e0bb-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
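#
# NOTE: "Orphaned volume ... found, tearing down volume" means the kubelet found
# volume directories on disk for pod UID 58ccb063-e0bb-11e5-b0bc-029bbe1bd231 with
# no matching live pod (the pod object is gone), so it unmounts and deletes them.
# A sketch for checking what, if anything, remains for that UID (assumes the
# default kubelet root directory):
# ls /var/lib/kubelet/pods/58ccb063-e0bb-11e5-b0bc-029bbe1bd231/volumes
#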
Mar 02 21:39:48 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:39:48.263880 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:40:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:40:58.210080 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 21:41:28 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:41:28.292773 658 container.go:430] Failed to update stats for container "/system.slice/docker-8524e44f2a0ea245914fcc16755e7b16f2d9379fa41d75508ef8d3a87197b74c.scope": open /sys/fs/cgroup/cpu,cpuacct/system.slice/docker-8524e44f2a0ea245914fcc16755e7b16f2d9379fa41d75508ef8d3a87197b74c.scope/cpuacct.stat: no such file or directory, continuing to push stats | |
Mar 02 21:42:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[658]: I0302 21:42:08.224880 658 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
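#
# NOTE: the cAdvisor "Failed to update stats ... cpuacct.stat: no such file or
# directory" above looks like a race with container teardown: the docker scope's
# cgroup was removed between discovery and collection, and cAdvisor carries on
# ("continuing to push stats"). A sketch for checking whether that scope's cgroup
# still exists (scope name is from the log):
# test -d /sys/fs/cgroup/cpu,cpuacct/system.slice/docker-8524e44f2a0ea245914fcc16755e7b16f2d9379fa41d75508ef8d3a87197b74c.scope && echo present || echo gone
#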
-- Reboot -- | |
Mar 02 22:03:08 ip-10-0-0-166.us-west-2.compute.internal systemd[1]: Starting kubelet.service... | |
Mar 02 22:03:08 ip-10-0-0-166.us-west-2.compute.internal systemd[1]: Started kubelet.service. | |
Mar 02 22:03:10 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:09.925864 682 aws.go:466] Zone not specified in configuration file; querying AWS metadata service | |
Mar 02 22:03:10 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:10.336903 682 aws.go:575] AWS cloud filtering on tags: map[KubernetesCluster:k8s] | |
Mar 02 22:03:10 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:10.340398 682 manager.go:128] cAdvisor running in container: "/system.slice/kubelet.service" | |
Mar 02 22:03:15 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:15.865748 682 fs.go:108] Filesystem partitions: map[/dev/xvda9:{mountpoint:/ major:202 minor:9 fsType: blockSize:0} /dev/xvda3:{mountpoint:/usr major:202 minor:3 fsType: blockSize:0} /dev/xvda6:{mountpoint:/usr/share/oem major:202 minor:6 fsType: blockSize:0}] | |
Mar 02 22:03:15 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:15.872650 682 manager.go:163] Machine: {NumCores:2 CpuFrequency:2500106 MemoryCapacity:7846588416 MachineID:a47d66fc2ade46dba3040d0378b05e8c SystemUUID:EC259342-D3C5-115E-77CD-09DB0B66F289 BootID:a9ae1af3-bf04-4d41-b7af-7e33cadd9ecc Filesystems:[{Device:/dev/xvda9 Capacity:28730269696} {Device:/dev/xvda3 Capacity:1031946240} {Device:/dev/xvda6 Capacity:113229824}] DiskMap:map[202:0:{Name:xvda Major:202 Minor:0 Size:32212254720 Scheduler:none} 202:16:{Name:xvdb Major:202 Minor:16 Size:32204390400 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:02:5c:32:09:4d:91 Speed:0 Mtu:9001} {Name:flannel0 MacAddress: Speed:10 Mtu:8973}] Topology:[{Id:0 Memory:7846588416 Cores:[{Id:0 Threads:[0 1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:26214400 Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown} | |
Mar 02 22:03:15 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:15.873312 682 manager.go:169] Version: {KernelVersion:4.4.0-coreos-r2 ContainerOsVersion:CoreOS 942.0.0 DockerVersion:1.9.1 CadvisorVersion: CadvisorRevision:} | |
Mar 02 22:03:15 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:15.876994 682 server.go:798] Adding manifest file: /etc/kubernetes/manifests | |
Mar 02 22:03:15 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:15.877023 682 server.go:808] Watching apiserver | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.046479 682 plugins.go:56] Registering credential provider: .dockercfg | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.052681 682 server.go:770] Started kubelet | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: E0302 22:03:16.056658 682 kubelet.go:756] Image garbage collection failed: unable to find data for container / | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.059260 682 server.go:72] Starting to listen on 0.0.0.0:10250 | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.073789 682 kubelet.go:777] Running in container "/kubelet" | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.192285 682 factory.go:194] System is using systemd | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.466429 682 factory.go:236] Registering Docker factory | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.467127 682 factory.go:93] Registering Raw factory | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.541942 682 kubelet.go:885] Node ip-10-0-0-166.us-west-2.compute.internal was previously registered | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.776723 682 manager.go:1006] Started watching for new ooms in manager | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.777978 682 oomparser.go:183] oomparser using systemd | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.778594 682 manager.go:250] Starting recovery of all containers | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.852836 682 manager.go:255] Recovery completed | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.922090 682 manager.go:104] Starting to sync pod status with apiserver | |
Mar 02 22:03:16 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:16.922115 682 kubelet.go:1953] Starting kubelet main sync loop. | |
Mar 02 22:03:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:17.456130 682 hairpin.go:49] Unable to find pair interface, setting up all interfaces: No peer_ifindex in interface statistics for eth0 of container 1155 | |
Mar 02 22:03:19 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: E0302 22:03:19.128634 682 kubelet.go:1455] Failed creating a mirror pod "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal_kube-system": pods "kube-proxy-ip-10-0-0-166.us-west-2.compute.internal" already exists | |
Mar 02 22:03:19 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:19.316878 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
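#
# NOTE: after the reboot the node "was previously registered", and the "Failed
# creating a mirror pod ... already exists" error for the static kube-proxy pod is
# expected on a kubelet restart: the mirror pod from the previous boot is still in
# the apiserver and gets reused. A sketch to confirm (assumes kubectl):
# kubectl get pods --namespace=kube-system | grep kube-proxy
#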
Mar 02 22:03:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:03:47.151075 682 manager.go:2022] Back-off 10s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:04:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:04:07.129005 682 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:04:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:04:17.102333 682 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:04:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:04:37.151188 682 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:04:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:04:47.067337 682 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:04:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:04:57.076479 682 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:05:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:05:07.053253 682 manager.go:2022] Back-off 10s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:05:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:05:07.058693 682 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:05:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:05:37.079866 682 manager.go:2022] Back-off 20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:05:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:05:47.128428 682 manager.go:2022] Back-off 20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:05:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:05:57.075037 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:06:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:06:17.130280 682 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:06:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:06:27.035614 682 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:06:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:06:37.075899 682 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:06:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:06:47.064790 682 manager.go:2022] Back-off 40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:07:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:07:07.083898 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:07:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:07:17.138496 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:07:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:07:27.024875 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:07:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:07:37.079434 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:07:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:07:47.043377 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:07:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:07:57.137471 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:08:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:08:07.094117 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:08:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:08:17.078837 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:08:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:08:17.130187 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:08:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:08:27.088929 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:08:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:08:37.102398 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 22:08:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:08:57.129783 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:08:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:08:57.135739 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:09:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:07.094669 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:09:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:07.099762 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:09:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:17.149673 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:09:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:17.158184 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:09:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:27.083278 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:09:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:27.103424 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:09:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:37.075740 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:09:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:37.097591 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:09:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:47.095117 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:09:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:47.105090 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:09:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:57.106468 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:09:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:09:57.113552 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:10:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:10:07.123128 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:10:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:10:07.163664 682 manager.go:2022] Back-off 1m20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:10:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:10:17.140024 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:10:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:10:27.102519 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:10:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:10:37.046503 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:10:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:10:47.096000 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:10:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:10:57.085278 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:10:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:10:57.092006 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:11:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:11:07.075266 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:11:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:11:17.127452 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:11:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:11:27.096904 682 manager.go:2022] Back-off 2m40s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:11:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:11:57.089478 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:12:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:12:07.073162 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:12:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:12:07.108751 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:12:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:12:17.099757 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:12:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:12:27.081819 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:12:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:12:37.030601 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:12:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:12:47.077205 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:12:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:12:57.077906 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:13:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:13:07.036334 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:13:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:13:17.153189 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:13:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:13:17.223731 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:13:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:13:27.076421 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:13:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:13:37.033621 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:13:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:13:47.031114 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:13:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:13:47.449022 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 22:13:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:13:57.092365 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:14:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:14:07.078247 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:14:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:14:17.089748 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:14:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:14:27.093525 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:14:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:14:27.119715 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:14:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:14:37.096637 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:14:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:14:47.055886 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:14:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:14:57.075338 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:15:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:15:07.066256 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:15:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:15:17.135436 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:15:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:15:27.089807 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:15:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:15:37.083357 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:15:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:15:37.109408 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:15:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:15:47.081090 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:15:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:15:57.079475 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:16:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:16:07.033721 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:16:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:16:17.104032 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:16:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:16:27.148779 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:16:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:16:37.085849 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:16:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:16:47.060443 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:16:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:16:47.069840 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:17:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:17:17.110105 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:17:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:17:27.095625 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:17:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:17:37.064228 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:17:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:17:47.061251 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:17:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:17:57.082624 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:17:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:17:57.099106 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:18:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:18:07.034335 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:18:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:18:17.102606 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:18:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:18:27.059468 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:18:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:18:37.061295 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:18:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:18:47.059234 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:18:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:18:57.110065 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:19:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:19:07.068448 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:19:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:19:07.107902 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:19:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:19:17.120051 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:19:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:19:27.041243 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:19:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:19:37.048935 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:19:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:19:37.245652 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 22:19:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:19:47.085210 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:19:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:19:57.106680 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:20:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:20:07.094542 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:20:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:20:17.147384 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:20:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:20:17.228469 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:20:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:20:27.081198 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:20:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:20:37.032608 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:20:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:20:47.032283 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:20:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:20:57.062654 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:21:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:21:07.147359 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:21:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:21:17.121222 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:21:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:21:27.061283 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:21:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:21:27.134913 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:21:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:21:37.055174 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:21:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:21:47.087506 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:21:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:21:57.084668 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:22:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:22:07.076821 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:22:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:22:37.086222 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:22:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:22:37.111831 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:22:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:22:47.087664 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:22:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:22:57.089175 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:23:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:23:07.083600 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:23:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:23:17.157487 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:23:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:23:27.051598 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:23:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:23:37.083383 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:23:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:23:47.086408 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:23:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:23:47.099965 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:23:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:23:57.090055 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:24:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:24:07.083229 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:24:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:24:17.124271 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:24:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:24:27.106473 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:24:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:24:37.040216 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:24:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:24:47.120697 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:24:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:24:57.074625 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:24:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:24:57.095086 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:25:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:25:07.072097 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:25:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:25:17.077737 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:25:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:25:27.064769 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:25:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:25:27.238254 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 22:25:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:25:37.097983 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:25:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:25:47.065428 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:25:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:25:57.111499 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:26:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:26:07.101283 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:26:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:26:07.126389 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:26:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:26:17.095239 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:26:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:26:27.086260 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:26:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:26:37.093291 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:26:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:26:47.107213 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:26:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:26:57.061734 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:27:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:27:07.050050 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:27:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:27:17.123669 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
Mar 02 22:27:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:27:17.168489 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:27:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:27:27.066236 682 manager.go:2022] Back-off 5m0s restarting failed container=deis-database pod=deis-database-m175p_deis | |
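#
# NOTE: the back-off durations for deis-database-m175p above trace the kubelet's
# crash-loop schedule exactly: 10s, 20s, 40s, 1m20s, 2m40s, then the 5m0s cap it
# repeats from 22:11:57 onward. A throwaway shell sketch of that doubling-with-cap
# schedule:
# d=10; while :; do [ $d -gt 300 ] && d=300; echo "${d}s"; [ $d -eq 300 ] && break; d=$((d*2)); done
#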
Mar 02 22:29:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:29:07.064246 682 manager.go:2022] Back-off 10s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:32:06 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 22:32:06.943549 682 kubelet.go:1588] Orphaned volume "747b8789-e0c1-11e5-b0bc-029bbe1bd231/database-creds" found, tearing down volume | |
Mar 02 22:32:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 22:32:06.977499 682 kubelet.go:1588] Orphaned volume "747b8789-e0c1-11e5-b0bc-029bbe1bd231/deis-database-token-mhxz6" found, tearing down volume | |
Mar 02 22:32:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 22:32:07.001658 682 kubelet.go:1588] Orphaned volume "747b8789-e0c1-11e5-b0bc-029bbe1bd231/objectstore-creds" found, tearing down volume | |
Mar 02 22:32:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 22:32:07.062215 682 kubelet.go:1588] Orphaned volume "747b8789-e0c1-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 22:32:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:32:57.051257 682 manager.go:1769] pod "deis-workflow-176jb_deis" container "deis-workflow" is unhealthy (probe result: failure), it will be killed and re-created. | |
Mar 02 22:33:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:33:27.195053 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 22:33:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:33:37.052835 682 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:33:47 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:33:47.135666 682 manager.go:2022] Back-off 20s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:34:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:34:07.053335 682 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:34:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:34:17.180220 682 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:34:27 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:34:27.092892 682 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
Mar 02 22:34:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:34:37.038717 682 manager.go:2022] Back-off 40s restarting failed container=deis-workflow pod=deis-workflow-176jb_deis | |
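#
# NOTE: deis-workflow dropping back to a 10s back-off at 22:29 and climbing through
# 20s/40s again appears to be the kubelet resetting the back-off after the container
# stayed up for a while before failing once more. The container's own output says
# more about why than these kubelet lines do; a sketch (assumes a kubectl new
# enough to have --previous):
# kubectl logs deis-workflow-176jb --namespace=deis --previous
#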
Mar 02 22:38:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 22:38:17.387835 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [7779/6943]) [8778] | |
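#
# NOTE: "watch of *api.Service ended with: 401: The event in requested index is
# outdated and cleared" (this line and the recurring ones through Mar 03) is the
# reflector's etcd watch window expiring; the kubelet just re-lists and re-watches,
# so on their own these are noisy but harmless. A sketch for counting them:
# journalctl -u kubelet --no-pager | grep -c 'requested history has been cleared'
#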
Mar 02 22:58:42 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 22:58:42.123535 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 23:04:24 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 23:04:24.100522 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 23:12:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:12:26.941814 682 kubelet.go:1588] Orphaned volume "1ab7768e-e0cb-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:12:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:12:26.956603 682 kubelet.go:1588] Orphaned volume "1ab7768e-e0cb-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:14:52 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 23:14:52.524120 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 23:20:45 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:20:45.739612 682 kubelet.go:1588] Orphaned volume "91381914-e0cc-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:20:45 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:20:45.750817 682 kubelet.go:1588] Orphaned volume "91459282-e0cc-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:20:45 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:20:45.762814 682 kubelet.go:1588] Orphaned volume "91381914-e0cc-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:20:45 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:20:45.785770 682 kubelet.go:1588] Orphaned volume "91459282-e0cc-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:20:45 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:20:45.799802 682 kubelet.go:1588] Orphaned volume "91569650-e0cc-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:20:45 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:20:45.810603 682 kubelet.go:1588] Orphaned volume "91569650-e0cc-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:20:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 23:20:46.161423 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 23:23:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:23:37.017498 682 kubelet.go:1588] Orphaned volume "63f83a26-e0cd-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:23:37 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:23:37.025933 682 kubelet.go:1588] Orphaned volume "63f83a26-e0cd-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:23:56 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:23:56.977621 682 kubelet.go:1588] Orphaned volume "640ba213-e0cd-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:23:57 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:23:56.991917 682 kubelet.go:1588] Orphaned volume "640ba213-e0cd-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:24:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:24:26.968742 682 kubelet.go:1588] Orphaned volume "63f65b89-e0cd-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:24:26 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:24:26.977816 682 kubelet.go:1588] Orphaned volume "63f65b89-e0cd-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:28:08 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: I0302 23:28:08.618731 682 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider | |
Mar 02 23:28:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:28:17.161439 682 kubelet.go:1588] Orphaned volume "e257592c-e0cd-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:28:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:28:17.188543 682 kubelet.go:1588] Orphaned volume "e257592c-e0cd-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:28:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:28:46.946220 682 kubelet.go:1588] Orphaned volume "b597cc4c-e0cd-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:28:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:28:46.962637 682 kubelet.go:1588] Orphaned volume "b597cc4c-e0cd-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:28:52 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:28:52.493124 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [10446/9818]) [11445] | |
Mar 02 23:29:33 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:29:33.517675 682 kubelet.go:1588] Orphaned volume "c47fb3ea-e0cd-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 02 23:29:33 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:29:33.532800 682 kubelet.go:1588] Orphaned volume "c47fb3ea-e0cd-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:29:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:29:46.947202 682 kubelet.go:1588] Orphaned volume "d080096f-e0cd-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume | |
Mar 02 23:29:46 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0302 23:29:46.964833 682 kubelet.go:1588] Orphaned volume "d080096f-e0cd-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume | |
Mar 03 00:15:04 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 00:15:04.685279 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [11950/11461]) [12949] | |
Mar 03 00:58:45 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 00:58:45.544689 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [13448/12950]) [14447] | |
Mar 03 01:56:10 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 01:56:10.102225 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [14965/14448]) [15964] | |
Mar 03 02:50:55 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 02:50:55.394916 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [16413/15965]) [17412] | |
Mar 03 03:33:24 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 03:33:24.270986 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [17537/17413]) [18536] | |
Mar 03 04:28:40 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 04:28:40.365395 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [19000/18537]) [19999] | |
Mar 03 05:08:53 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 05:08:53.331915 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [20064/20000]) [21063] | |
Mar 03 05:54:09 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 05:54:09.029597 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [21261/21065]) [22260] | |
Mar 03 06:44:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 06:44:58.647677 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [22605/22261]) [23604] | |
Mar 03 07:36:50 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 07:36:50.793888 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [23980/23605]) [24979] | |
Mar 03 08:31:43 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 08:31:43.868774 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [25431/24980]) [26430] | |
Mar 03 09:28:53 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 09:28:53.896529 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [26944/26431]) [27943] | |
Mar 03 10:13:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 10:13:07.933037 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [28115/27945]) [29114] | |
Mar 03 11:18:58 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 11:18:58.153154 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [29857/29115]) [30856] | |
Mar 03 12:43:14 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 12:43:14.392998 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [32085/30857]) [33084] | |
Mar 03 13:32:17 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 13:32:17.152026 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [33383/33085]) [34382] | |
Mar 03 14:27:10 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 14:27:10.552554 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [34835/34383]) [35834] | |
Mar 03 15:14:59 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 15:14:59.355591 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [36101/35835]) [37100] | |
Mar 03 16:22:39 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 16:22:39.812331 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [37895/37101]) [38894] | |
Mar 03 17:13:07 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 17:13:07.039370 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [39230/38895]) [40229] | |
Mar 03 17:51:35 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 17:51:35.226790 682 reflector.go:245] pkg/kubelet/kubelet.go:210: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [40248/40230]) [41247] | |
Mar 03 18:24:35 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: E0303 18:24:34.959632 682 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"zircon-addendum-v5-web-9ua9w.14386896783e2661", GenerateName:"", Namespace:"zircon-addendum", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"zircon-addendum", Name:"zircon-addendum-v5-web-9ua9w", UID:"6bc8949e-e0ce-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"11332", FieldPath:"spec.containers{zircon-addendum-web}"}, Reason:"Killing", Message:"Killing with docker id 911faa83725e", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592626274, nsec:926077537, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592626274, nsec:926077537, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'Event "zircon-addendum-v5-web-9ua9w.14386896783e2661" is forbidden: Unable to create new content in namespace zircon-addendum because it is being terminated.' (will not retry!)
Mar 03 18:24:35 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 18:24:35.108415 682 manager.go:1444] No ref for pod '"da9347f2273cc3679e4fc9e6a2474c509bcf83012f6a147f4a7b130192a3da7a zircon-addendum/zircon-addendum-v5-web-9ua9w"'
Mar 03 18:24:35 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: E0303 18:24:35.126748 682 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"zircon-addendum-v5-web-9ua9w.14386896831b996e", GenerateName:"", Namespace:"zircon-addendum", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"zircon-addendum", Name:"zircon-addendum-v5-web-9ua9w", UID:"6bc8949e-e0ce-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"11332", FieldPath:"implicitly required container POD"}, Reason:"Killing", Message:"Killing with docker id da9347f2273c", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592626275, nsec:108362606, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592626275, nsec:108362606, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'Event "zircon-addendum-v5-web-9ua9w.14386896831b996e" is forbidden: Unable to create new content in namespace zircon-addendum because it is being terminated.' (will not retry!)
Mar 03 18:24:35 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 18:24:35.213249 682 manager.go:1444] No ref for pod '"623a012ff8cbfa1b4be0a09c42b953182449a8067e71822a917d406f51ef371e zircon-addendum-web zircon-addendum/zircon-addendum-v5-web-uxl6z"'
Mar 03 18:24:35 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: E0303 18:24:35.216335 682 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"zircon-addendum-v5-web-uxl6z.14386896895a64fc", GenerateName:"", Namespace:"zircon-addendum", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"zircon-addendum", Name:"zircon-addendum-v5-web-uxl6z", UID:"864747dd-e0ce-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"11450", FieldPath:"spec.containers{zircon-addendum-web}"}, Reason:"Killing", Message:"Killing with docker id 623a012ff8cb", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592626275, nsec:213141244, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592626275, nsec:213141244, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "zircon-addendum" not found' (will not retry!)
Mar 03 18:24:35 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 18:24:35.463990 682 manager.go:1444] No ref for pod '"0b7f6703fc356a6473eca6218534f617530996f30aa490662a73a1ddb034b14d zircon-addendum/zircon-addendum-v5-web-uxl6z"'
Mar 03 18:24:35 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: E0303 18:24:35.593790 682 event.go:188] Server rejected event '&api.Event{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"zircon-addendum-v5-web-uxl6z.14386896984d07c4", GenerateName:"", Namespace:"zircon-addendum", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil)}, InvolvedObject:api.ObjectReference{Kind:"Pod", Namespace:"zircon-addendum", Name:"zircon-addendum-v5-web-uxl6z", UID:"864747dd-e0ce-11e5-b0bc-029bbe1bd231", APIVersion:"v1", ResourceVersion:"11450", FieldPath:"implicitly required container POD"}, Reason:"Killing", Message:"Killing with docker id 0b7f6703fc35", Source:api.EventSource{Component:"kubelet", Host:"ip-10-0-0-166.us-west-2.compute.internal"}, FirstTimestamp:unversioned.Time{Time:time.Time{sec:63592626275, nsec:463923652, loc:(*time.Location)(0x1fbc560)}}, LastTimestamp:unversioned.Time{Time:time.Time{sec:63592626275, nsec:463923652, loc:(*time.Location)(0x1fbc560)}}, Count:1}': 'namespaces "zircon-addendum" not found' (will not retry!)
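The four "Server rejected event" errors above are a side effect of deleting the zircon-addendum namespace: while the kubelet is still killing the pods' containers and posting their Killing events, the apiserver first refuses new objects in the terminating namespace and then, once deletion completes, reports the namespace as not found; in both cases event.go drops the event without retrying. The namespace's state at the time could have been inspected with a standard kubectl query (shown as a sketch; it assumes kubectl is configured against this cluster):
# kubectl get namespace zircon-addendum -o yaml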
Mar 03 18:24:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 18:24:36.937704 682 kubelet.go:1588] Orphaned volume "6bc8949e-e0ce-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume
Mar 03 18:24:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 18:24:36.944777 682 kubelet.go:1588] Orphaned volume "6bc8949e-e0ce-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
Mar 03 18:24:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 18:24:36.951860 682 kubelet.go:1588] Orphaned volume "864747dd-e0ce-11e5-b0bc-029bbe1bd231/default-token-smwo7" found, tearing down volume
Mar 03 18:24:36 ip-10-0-0-166.us-west-2.compute.internal kubelet[682]: W0303 18:24:36.958814 682 kubelet.go:1588] Orphaned volume "864747dd-e0ce-11e5-b0bc-029bbe1bd231/minio-user" found, tearing down volume
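The orphaned-volume warnings above are the kubelet's periodic cleanup tearing down the default-token-smwo7 and minio-user secret volumes left behind by the two pods killed a second earlier; the UIDs 6bc8949e-... and 864747dd-... match the InvolvedObject UIDs in the rejected events. After teardown, nothing should remain under the per-pod volume directory (a sketch, assuming the default kubelet --root-dir of /var/lib/kubelet):
# ls /var/lib/kubelet/pods/6bc8949e-e0ce-11e5-b0bc-029bbe1bd231/volumes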