@pjos
Created December 12, 2018 06:40
CDK 3.7.0 logs 2018-12-12
Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --allow-privileged has been deprecated, will be removed in a future version
Flag --anonymous-auth has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authentication-token-webhook has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authentication-token-webhook-cache-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authorization-webhook-cache-authorized-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --authorization-webhook-cache-unauthorized-ttl has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.11, and the cadvisor port will be removed entirely in 1.12
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --healthz-bind-address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --healthz-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --host-ipc-sources has been deprecated, will be removed in a future version
Flag --host-ipc-sources has been deprecated, will be removed in a future version
Flag --host-network-sources has been deprecated, will be removed in a future version
Flag --host-network-sources has been deprecated, will be removed in a future version
Flag --host-pid-sources has been deprecated, will be removed in a future version
Flag --host-pid-sources has been deprecated, will be removed in a future version
Flag --http-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --iptables-masquerade-bit has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --max-pods has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --read-only-port has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cert-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-min-version has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
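
Note: each deprecated flag above has a corresponding field in the KubeletConfiguration file that the warnings link to. A minimal sketch of such a file, assuming the v1beta1 API; the values below are illustrative placeholders, not this cluster's actual settings:

```yaml
# Hypothetical KubeletConfiguration covering several of the deprecated flags
# warned about above. All values are illustrative placeholders.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0                 # replaces --address
authentication:
  anonymous:
    enabled: false               # replaces --anonymous-auth
  webhook:
    enabled: true                # replaces --authentication-token-webhook
    cacheTTL: 2m0s               # replaces --authentication-token-webhook-cache-ttl
authorization:
  mode: Webhook                  # replaces --authorization-mode
cgroupDriver: systemd            # replaces --cgroup-driver
clusterDomain: cluster.local     # replaces --cluster-domain
failSwapOn: false                # replaces --fail-swap-on
maxPods: 250                     # replaces --max-pods
readOnlyPort: 0                  # replaces --read-only-port
staticPodPath: /var/lib/origin/pod-manifests  # replaces --pod-manifest-path
tlsMinVersion: VersionTLS12      # replaces --tls-min-version
```

The kubelet would then be started with `--config` pointing at this file in place of the individual flags.
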
I1212 05:59:33.907735 8806 server.go:418] Version: v1.11.0+d4cacc0
I1212 05:59:33.910377 8806 plugins.go:97] No cloud provider specified.
I1212 05:59:34.053267 8806 server.go:658] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I1212 05:59:34.054036 8806 container_manager_linux.go:243] container manager verified user specified cgroup-root exists: []
I1212 05:59:34.054072 8806 container_manager_linux.go:248] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/minishift/base/openshift.local.volumes ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
I1212 05:59:34.054228 8806 container_manager_linux.go:267] Creating device plugin manager: true
I1212 05:59:34.054282 8806 state_mem.go:36] [cpumanager] initializing new in-memory state store
I1212 05:59:34.054606 8806 state_file.go:82] [cpumanager] state file: created new state file "/var/lib/minishift/base/openshift.local.volumes/cpu_manager_state"
I1212 05:59:34.054687 8806 kubelet.go:274] Adding pod path: /var/lib/origin/pod-manifests
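
The pod path above is the kubelet's static-pod directory: manifests placed there are run directly by the kubelet, without the API server, which is how the master-api, master-etcd, and kube-scheduler pods seen later in this log come up while localhost:8443 is still refusing connections. A minimal sketch of such a manifest, with a hypothetical name and image:

```yaml
# Hypothetical static pod manifest that could live in
# /var/lib/origin/pod-manifests; name and image are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: example-static-pod
  namespace: kube-system
spec:
  containers:
  - name: example
    image: busybox:1.29
    command: ["sleep", "3600"]
```
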
I1212 05:59:34.054732 8806 kubelet.go:299] Watching apiserver
E1212 05:59:34.056690 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:34.057041 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:34.066285 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:34.102169 8806 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I1212 05:59:34.102202 8806 client.go:104] Start docker client with request timeout=2m0s
W1212 05:59:34.108654 8806 docker_service.go:545] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I1212 05:59:34.108694 8806 docker_service.go:238] Hairpin mode set to "hairpin-veth"
W1212 05:59:34.108943 8806 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
I1212 05:59:34.132810 8806 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
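
The cni.go warning above fires because /etc/cni/net.d is empty, so CRI networking falls back to the no-op plugin on the line that follows it. For reference, a minimal sketch of a CNI config that would satisfy the watcher; CNI configs are JSON (a valid YAML subset), and every value here is a hypothetical placeholder:

```yaml
# Hypothetical /etc/cni/net.d/80-example.conf; all values illustrative.
{
  "cniVersion": "0.3.1",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16"
  }
}
```
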
I1212 05:59:34.144360 8806 docker_service.go:258] Docker Info: &{ID:KFOA:RKRX:Q5I4:ZGJF:7L2D:AHAT:QEBV:OYNS:ZSFK:V3N6:EWCS:VT62 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[rhel-push-plugin] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:31 SystemTime:2018-12-12T05:59:34.1364995Z LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-957.el7.x86_64 OperatingSystem:Red Hat Enterprise Linux Server 7.6 (Maipo) OSType:linux Architecture:x86_64 IndexServerAddress:https://registry.access.redhat.com/v1/ RegistryConfig:0xc420908bd0 NCPU:2 MemTotal:8351002624 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy:http://klientproxy-skv.rsv.se:8080 HTTPSProxy:http://klientproxy-skv.rsv.se:8080 NoProxy:localhost,127.0.0.1,172.30.1.1,.rsv.se,.rsvc.se,.svc,192.168.10.10,192.168.10.10 Name:minishift Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc420b27540} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:N/A Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=default name=selinux]}
I1212 05:59:34.144439 8806 docker_service.go:271] Setting cgroupDriver to systemd
I1212 05:59:34.208028 8806 kuberuntime_manager.go:186] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
W1212 05:59:34.209381 8806 probe.go:270] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I1212 05:59:34.233383 8806 csi_plugin.go:111] kubernetes.io/csi: plugin initializing...
I1212 05:59:34.239099 8806 server.go:129] Starting to listen on 0.0.0.0:10250
E1212 05:59:34.245399 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 05:59:34.245714 8806 kubelet.go:1244] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
I1212 05:59:34.246734 8806 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I1212 05:59:34.246835 8806 status_manager.go:152] Starting to sync pod status with apiserver
I1212 05:59:34.246860 8806 kubelet.go:1741] Starting kubelet main sync loop.
I1212 05:59:34.246917 8806 kubelet.go:1758] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
I1212 05:59:34.247163 8806 server.go:996] Started kubelet
I1212 05:59:34.247475 8806 volume_manager.go:247] Starting Kubelet Volume Manager
I1212 05:59:34.256495 8806 server.go:307] Adding debug handlers to kubelet server.
I1212 05:59:34.289740 8806 desired_state_of_world_populator.go:130] Desired state populator starts to run
I1212 05:59:34.348088 8806 kubelet.go:1758] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
I1212 05:59:34.348158 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:34.542926 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 05:59:34.543663 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:34.561087 8806 kubelet.go:1758] skipping pod synchronization - [container runtime is down]
I1212 05:59:34.744382 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:34.771064 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 05:59:34.783115 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:34.961249 8806 kubelet.go:1758] skipping pod synchronization - [container runtime is down]
I1212 05:59:35.042051 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:35.047182 8806 cpu_manager.go:155] [cpumanager] starting with none policy
I1212 05:59:35.047220 8806 cpu_manager.go:156] [cpumanager] reconciling every 10s
I1212 05:59:35.047279 8806 policy_none.go:42] [cpumanager] none policy: Start
E1212 05:59:35.057495 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:35.068427 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:35.068593 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:35.184184 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:35.202577 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 05:59:35.203035 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
Starting Device Plugin manager
W1212 05:59:35.240925 8806 manager.go:496] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
E1212 05:59:35.241409 8806 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "localhost" not found
I1212 05:59:35.762198 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:35.787460 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:35.787534 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:35.794901 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
W1212 05:59:35.795640 8806 status_manager.go:482] Failed to get status for pod "kube-scheduler-localhost_kube-system(2d73ab1cb2447f75e6fb80d0c9daf4b4)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:35.795789 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:35.807209 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:35.808693 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:35.810708 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-config" (UniqueName: "kubernetes.io/host-path/5ffde54bfa5a6ede6742ffca26053a0d-master-config") pod "master-api-localhost" (UID: "5ffde54bfa5a6ede6742ffca26053a0d")
I1212 05:59:35.810745 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-cloud-provider" (UniqueName: "kubernetes.io/host-path/5ffde54bfa5a6ede6742ffca26053a0d-master-cloud-provider") pod "master-api-localhost" (UID: "5ffde54bfa5a6ede6742ffca26053a0d")
I1212 05:59:35.810787 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-data" (UniqueName: "kubernetes.io/host-path/5ffde54bfa5a6ede6742ffca26053a0d-master-data") pod "master-api-localhost" (UID: "5ffde54bfa5a6ede6742ffca26053a0d")
I1212 05:59:35.810810 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-config" (UniqueName: "kubernetes.io/host-path/2d73ab1cb2447f75e6fb80d0c9daf4b4-master-config") pod "kube-scheduler-localhost" (UID: "2d73ab1cb2447f75e6fb80d0c9daf4b4")
I1212 05:59:35.810836 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-cloud-provider" (UniqueName: "kubernetes.io/host-path/2d73ab1cb2447f75e6fb80d0c9daf4b4-master-cloud-provider") pod "kube-scheduler-localhost" (UID: "2d73ab1cb2447f75e6fb80d0c9daf4b4")
W1212 05:59:35.811803 8806 status_manager.go:482] Failed to get status for pod "master-api-localhost_kube-system(5ffde54bfa5a6ede6742ffca26053a0d)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/master-api-localhost: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:35.824519 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
W1212 05:59:35.825182 8806 status_manager.go:482] Failed to get status for pod "master-etcd-localhost_kube-system(054a5563f4b5f4b05e278cec8bff9aef)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/master-etcd-localhost: dial tcp 127.0.0.1:8443: connect: connection refused
W1212 05:59:35.830666 8806 status_manager.go:482] Failed to get status for pod "kube-controller-manager-localhost_kube-system(eaa40c65683ee6d981374af16b8476c0)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:35.911824 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-data" (UniqueName: "kubernetes.io/host-path/054a5563f4b5f4b05e278cec8bff9aef-master-data") pod "master-etcd-localhost" (UID: "054a5563f4b5f4b05e278cec8bff9aef")
I1212 05:59:35.912060 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-config" (UniqueName: "kubernetes.io/host-path/eaa40c65683ee6d981374af16b8476c0-master-config") pod "kube-controller-manager-localhost" (UID: "eaa40c65683ee6d981374af16b8476c0")
I1212 05:59:35.912159 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-cloud-provider" (UniqueName: "kubernetes.io/host-path/eaa40c65683ee6d981374af16b8476c0-master-cloud-provider") pod "kube-controller-manager-localhost" (UID: "eaa40c65683ee6d981374af16b8476c0")
I1212 05:59:35.912522 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-config" (UniqueName: "kubernetes.io/host-path/054a5563f4b5f4b05e278cec8bff9aef-master-config") pod "master-etcd-localhost" (UID: "054a5563f4b5f4b05e278cec8bff9aef")
I1212 05:59:36.003480 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:36.015356 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 05:59:36.016076 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:36.058264 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:36.069362 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:36.072434 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:37.061107 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:37.072949 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:37.078722 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:37.616468 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:37.631483 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 05:59:37.632033 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:38.064003 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:38.074002 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:38.079784 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:39.065237 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:39.076867 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:39.080934 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:39.184599 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 05:59:40.067334 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:40.081611 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:40.083618 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:40.832543 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:40.842965 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 05:59:40.844714 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:41.068632 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:41.082785 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:41.084740 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:42.069540 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:42.083696 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:42.086024 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:43.070497 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:43.085082 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:43.086999 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:44.074171 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:44.089061 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:44.091083 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:45.074979 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:45.089962 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:45.091834 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:45.241650 8806 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "localhost" not found
E1212 05:59:46.075805 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:46.090807 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:46.094303 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:47.080783 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:47.094521 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:47.095313 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:47.245087 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:47.249947 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 05:59:47.250465 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:48.083459 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:48.096409 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:48.097489 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:49.084424 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:49.097283 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:49.098126 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:49.186194 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 05:59:50.086817 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:50.102449 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:50.102703 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:51.088377 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:51.104092 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:51.104612 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:52.090240 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:52.106644 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:52.106699 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:53.098273 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:53.107623 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:53.108398 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:54.099486 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:54.109129 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:54.109977 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 05:59:54.250736 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 05:59:54.255052 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 05:59:54.255471 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:55.100258 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:55.110672 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:55.112493 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:55.241992 8806 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "localhost" not found
E1212 05:59:56.101116 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:56.111436 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:56.113329 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:57.113901 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:57.118122 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:57.118567 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:58.115501 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:58.119261 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:58.120402 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:59.117870 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:59.125370 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:59.125428 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 05:59:59.188433 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 06:00:00.119746 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:00.126240 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:00.128304 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:01.121115 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:01.127238 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:01.132222 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:01.255997 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:01.263602 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:01.264242 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:02.121904 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:02.128345 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:02.133686 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:03.123807 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:03.130327 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:03.135386 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:04.127224 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:04.136171 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:04.136961 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:05.128224 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:05.138007 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:05.141674 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:05.242290 8806 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "localhost" not found
E1212 06:00:06.129152 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:06.141206 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:06.147633 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:07.132329 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:07.142239 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:07.149528 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:08.141064 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:08.146907 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:08.150219 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:08.264876 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:08.281436 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:08.285288 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:09.143319 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:09.147562 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:09.150956 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:09.189859 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 06:00:10.162551 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:10.162705 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:10.162796 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:11.165823 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:11.167558 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:11.169525 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:12.167424 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:12.169853 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:12.170862 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:13.169222 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:13.171036 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:13.172451 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:14.172152 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:14.179359 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:14.183235 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:15.173049 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:15.180187 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:15.184703 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:15.242601 8806 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "localhost" not found
I1212 06:00:15.285536 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:15.291504 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:15.293000 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:16.174153 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:16.181136 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:16.186181 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:17.174963 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:17.181851 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:17.186987 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:18.175802 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:18.183323 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:18.187605 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:19.177016 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:19.188332 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:19.188447 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:19.193840 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 06:00:20.178692 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:20.193483 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:20.193558 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:21.199442 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:21.200807 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:21.201252 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:22.203124 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:22.214037 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:22.226298 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:22.295009 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:22.323415 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:22.324008 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:23.206083 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:23.219057 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:23.237792 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:24.207735 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:24.220275 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:24.239024 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:25.209336 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:25.222537 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:25.242271 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:25.242819 8806 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "localhost" not found
E1212 06:00:26.210324 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:26.223419 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:26.243180 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:27.211652 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:27.224325 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:27.244141 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:28.212808 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:28.233545 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:28.239953 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
E1212 06:00:28.248351 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
W1212 06:00:28.254782 8806 status_manager.go:482] Failed to get status for pod "kube-scheduler-localhost_kube-system(2d73ab1cb2447f75e6fb80d0c9daf4b4)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:29.194476 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 06:00:29.213579 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:29.234446 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:29.249228 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
E1212 06:00:29.249389 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:29.324341 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:29.330360 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:29.330737 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:30.214586 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:30.235423 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:30.250714 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:31.215496 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:31.236226 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:31.251678 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:32.218597 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:32.237261 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:32.253111 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:33.224148 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:33.239520 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:33.253773 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:34.226009 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:34.248836 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:34.254491 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:34.361707 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
W1212 06:00:34.370176 8806 status_manager.go:482] Failed to get status for pod "master-etcd-localhost_kube-system(054a5563f4b5f4b05e278cec8bff9aef)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/master-etcd-localhost: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:35.226872 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:35.243008 8806 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "localhost" not found
E1212 06:00:35.249540 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:35.255150 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:35.369386 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
E1212 06:00:36.230692 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:36.250490 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:36.256995 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:36.331010 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:36.345584 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:36.346630 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:37.231839 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:37.252046 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:37.260525 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:38.239463 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:38.254233 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:38.262787 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:39.200429 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 06:00:39.240502 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:39.256738 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:39.264850 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:40.243934 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:40.259664 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:40.265682 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:41.245186 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:41.260532 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:41.266414 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:42.246386 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:42.261366 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:42.267256 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:43.247492 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:43.262109 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:43.267995 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:43.346846 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:43.352024 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:43.352470 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:44.248239 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:44.269806 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:44.269947 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:45.243517 8806 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "localhost" not found
E1212 06:00:45.249100 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:45.270544 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:45.274488 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:46.251448 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:46.272172 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:46.275622 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:47.254438 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:47.272853 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:47.276389 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:48.255421 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:48.273845 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:48.277453 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:49.201241 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 06:00:49.256288 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:49.274745 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:49.280199 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:50.257681 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:50.288498 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:50.288622 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:50.353290 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:50.361306 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:50.361751 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:51.258443 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:51.289678 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:51.291689 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:52.267237 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:52.290619 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:52.294044 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:53.272359 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:53.294792 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:53.294871 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:54.279681 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:54.299796 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:54.306703 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:55.246121 8806 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "localhost" not found
E1212 06:00:55.285126 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:55.312124 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:55.312852 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:56.290056 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:56.314446 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:56.329067 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:57.290872 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:57.315233 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:57.329681 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:00:57.365285 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:00:57.379897 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:00:57.380446 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:58.291537 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:58.325603 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:58.334581 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:59.202056 8806 event.go:212] Unable to write event: 'Post https://localhost:8443/api/v1/namespaces/default/events: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
E1212 06:00:59.292369 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:59.326491 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:00:59.335488 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:00.293417 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:00.327308 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:00.336240 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:01.294698 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:01.328077 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:01.337100 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:02.302081 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:02.329467 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:02.338607 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:03.303416 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:03.330548 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:03.339834 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:04.304769 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:04.332035 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:04.341418 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:01:04.380649 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:01:04.401159 8806 kubelet_node_status.go:79] Attempting to register node localhost
E1212 06:01:04.401631 8806 kubelet_node_status.go:103] Unable to register node "localhost" with API server: Post https://localhost:8443/api/v1/nodes: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:05.246398 8806 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "localhost" not found
E1212 06:01:05.306154 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:05.333171 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:05.342265 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:06.307082 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:06.339724 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E1212 06:01:06.350040 8806 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I1212 06:01:07.324415 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:01:08.330156 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:01:11.402021 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:01:11.405982 8806 kubelet_node_status.go:79] Attempting to register node localhost
I1212 06:01:13.376861 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
I1212 06:01:14.382429 8806 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
E1212 06:01:15.246646 8806 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "localhost" not found
I1212 06:01:15.856171 8806 kubelet_node_status.go:82] Successfully registered node localhost
I1212 06:01:15.920918 8806 reconciler.go:154] Reconciler: start to sync state
E1212 06:01:16.040394 8806 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.156f806656952178", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbefc46d18e3ee578, ext:749065301, loc:(*time.Location)(0x90a03c0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbefc46d18e3ee578, ext:749065301, loc:(*time.Location)(0x90a03c0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
I1212 06:01:46.376341 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "node-config" (UniqueName: "kubernetes.io/host-path/683a1016-fdd3-11e8-9503-00155d12d666-node-config") pod "kube-proxy-rqm4s" (UID: "683a1016-fdd3-11e8-9503-00155d12d666")
I1212 06:01:46.376395 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-mpcxr" (UniqueName: "kubernetes.io/secret/683a1016-fdd3-11e8-9503-00155d12d666-kube-proxy-token-mpcxr") pod "kube-proxy-rqm4s" (UID: "683a1016-fdd3-11e8-9503-00155d12d666")
I1212 06:01:46.376421 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-token-bm8zh" (UniqueName: "kubernetes.io/secret/683bd217-fdd3-11e8-9503-00155d12d666-kube-dns-token-bm8zh") pod "kube-dns-jhbbk" (UID: "683bd217-fdd3-11e8-9503-00155d12d666")
I1212 06:01:46.376445 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "node-config" (UniqueName: "kubernetes.io/host-path/683bd217-fdd3-11e8-9503-00155d12d666-node-config") pod "kube-dns-jhbbk" (UID: "683bd217-fdd3-11e8-9503-00155d12d666")
I1212 06:01:46.602218 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-config" (UniqueName: "kubernetes.io/host-path/68496475-fdd3-11e8-9503-00155d12d666-master-config") pod "openshift-apiserver-n8rhb" (UID: "68496475-fdd3-11e8-9503-00155d12d666")
I1212 06:01:46.602835 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert") pod "openshift-apiserver-n8rhb" (UID: "68496475-fdd3-11e8-9503-00155d12d666")
I1212 06:01:46.602895 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "master-cloud-provider" (UniqueName: "kubernetes.io/host-path/68496475-fdd3-11e8-9503-00155d12d666-master-cloud-provider") pod "openshift-apiserver-n8rhb" (UID: "68496475-fdd3-11e8-9503-00155d12d666")
I1212 06:01:46.602941 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "openshift-apiserver-token-2d2gg" (UniqueName: "kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-openshift-apiserver-token-2d2gg") pod "openshift-apiserver-n8rhb" (UID: "68496475-fdd3-11e8-9503-00155d12d666")
E1212 06:01:46.721937 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:01:46.722028 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:01:47.2219938 +0000 UTC m=+133.732056101 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
E1212 06:01:47.231237 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:01:47.231366 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:01:48.231319 +0000 UTC m=+134.741381301 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:01:47.552429 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "serving-cert" (UniqueName: "kubernetes.io/secret/68db0de0-fdd3-11e8-9503-00155d12d666-serving-cert") pod "openshift-service-cert-signer-operator-6d477f986b-4j88g" (UID: "68db0de0-fdd3-11e8-9503-00155d12d666")
I1212 06:01:47.552493 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/68db0de0-fdd3-11e8-9503-00155d12d666-config") pod "openshift-service-cert-signer-operator-6d477f986b-4j88g" (UID: "68db0de0-fdd3-11e8-9503-00155d12d666")
I1212 06:01:47.552572 8806 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "openshift-service-cert-signer-operator-token-dx9l7" (UniqueName: "kubernetes.io/secret/68db0de0-fdd3-11e8-9503-00155d12d666-openshift-service-cert-signer-operator-token-dx9l7") pod "openshift-service-cert-signer-operator-6d477f986b-4j88g" (UID: "68db0de0-fdd3-11e8-9503-00155d12d666")
E1212 06:01:48.309564 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:01:48.309634 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:01:50.3096127 +0000 UTC m=+136.819674901 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
E1212 06:01:50.339212 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:01:50.339287 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:01:54.3392597 +0000 UTC m=+140.849321901 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
E1212 06:01:54.401461 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:01:54.401540 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:02:02.4015143 +0000 UTC m=+148.911576601 (durationBeforeRetry 8s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
E1212 06:02:02.454761 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:02:02.454837 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:02:18.4548132 +0000 UTC m=+164.964875401 (durationBeforeRetry 16s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
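
Note: the retry delays above follow the kubelet's doubling backoff for failed mounts: 500ms, 1s, 2s, 4s, 8s, 16s so far, then 32s and 1m4s below, levelling off at the 2m2s seen repeatedly later in the log, which is why the serving-cert error then recurs roughly every two minutes. The secret itself is normally issued by the service-serving-cert-signer, so these mount failures are a symptom; the component to look at is the signer operator. A quick check, using the names straight from the log (assumes a logged-in oc client):

# is the secret there at all, and is the signer operator running?
oc get secret serving-cert -n openshift-apiserver
oc get pods -n openshift-core-operators
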
E1212 06:02:08.376631 8806 remote_image.go:108] PullImage "openshift/origin-service-serving-cert-signer:v3.11" from image service failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:02:08.376669 8806 kuberuntime_image.go:51] Pull image "openshift/origin-service-serving-cert-signer:v3.11" failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:02:08.376714 8806 kuberuntime_manager.go:733] container start failed: ErrImagePull: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:02:08.376766 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ErrImagePull: "rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
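
Note: this x509 failure is the root problem in this section. The Docker daemon inside the CDK VM cannot verify the certificate it is served for registry-1.docker.io, which usually means a TLS-intercepting proxy whose CA is not in the VM's trust store; every pull of openshift/origin-service-serving-cert-signer:v3.11 below fails the same way. A common fix, sketched under the assumption of such a proxy, with its CA exported to a file named proxy-ca.crt (hypothetical name) and already copied into the RHEL-based CDK VM:

minishift ssh
# inside the VM: trust the proxy CA, then restart the Docker daemon
sudo cp proxy-ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust extract
sudo systemctl restart docker
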
I1212 06:02:09.647989 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:02:09.657164 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:02:18.486977 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:02:18.487090 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:02:50.4870595 +0000 UTC m=+196.997121801 (durationBeforeRetry 32s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:02:25.548560 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
[the pull attempt at 06:02:34 fails with the same four x509 messages as the one at 06:02:08]
E1212 06:02:50.498813 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:02:50.498963 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:03:54.4989115 +0000 UTC m=+261.008973701 (durationBeforeRetry 1m4s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:02:50.555395 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:02:50.560387 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:03:01.549406 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
[another pull attempt at 06:03:12 fails with the same four x509 messages]
I1212 06:03:26.548383 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:03:26.558950 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:03:40.548514 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:03:40.562726 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:03:49.523792 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:03:49.524412 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
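
Note: the kubelet allows a pod roughly two minutes for its volumes to mount (the first attempt here started at 06:01:46) before logging this timeout and re-queuing the pod; the same pair of messages recurs at 06:06:04 and 06:08:22 below, and the pod stays in ContainerCreating the whole time. The pod's own event stream shows the identical condition:

# FailedMount events should list serving-cert as the unmounted volume
oc describe pod openshift-apiserver-n8rhb -n openshift-apiserver
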
I1212 06:03:52.550875 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:03:52.569235 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:03:54.569002 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:03:54.569269 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:05:56.5691506 +0000 UTC m=+383.079212901 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:04:08.550013 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
[another pull attempt at 06:04:13 fails with the same four x509 messages]
I1212 06:04:25.548262 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:04:25.561372 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
[the preceding two messages, the container-is-dead dump and the ImagePullBackOff sync error, repeat with only the timestamps changing: 06:04:38, 06:04:53, 06:05:07, 06:05:21]
I1212 06:05:37.547948 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
[another pull attempt at 06:05:42 fails with the same four x509 messages]
I1212 06:05:56.548772 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:05:56.554419 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:05:56.591113 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:05:56.591194 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:07:58.5911684 +0000 UTC m=+505.101230701 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
E1212 06:06:04.247980 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:06:04.248019 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
I1212 06:06:07.549874 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:06:07.554551 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
[the same two messages repeat at 06:06:21, 06:06:37, 06:06:48, 06:07:01, 06:07:12, 06:07:26, 06:07:41 and 06:07:53]
E1212 06:07:58.654938 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:07:58.655011 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:10:00.6549861 +0000 UTC m=+627.165048301 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:08:08.548928 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:08:08.554118 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:08:21.547718 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:08:21.550801 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:08:22.248077 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:08:22.248125 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
I1212 06:08:36.548571 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
[another pull attempt at 06:08:41 fails with the same four x509 messages]
I1212 06:08:52.548818 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:08:52.555394 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
[the same two messages repeat at 06:09:05, 06:09:20, 06:09:35, 06:09:47 and 06:10:00]
E1212 06:10:00.726837 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:10:00.726951 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:12:02.7269186 +0000 UTC m=+749.236980901 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
[the ImagePullBackOff pair repeats at 06:10:15 and 06:10:26]
E1212 06:10:38.252054 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:10:38.252103 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
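Note: the three failure signatures above (ImagePullBackOff on the cert-signer operator, the missing serving-cert secret, and the openshift-apiserver volume-mount timeout) are very likely one chain rather than three independent problems. In OpenShift 3.11 the serving-cert secret is populated by the service-serving-cert-signer, and the operator that would run that signer is the very pod stuck in ImagePullBackOff, so the secret never appears and the apiserver pod cannot mount it. A minimal sketch of how one might confirm the missing secret, assuming the official kubernetes Python client and a kubeconfig with read access (both assumptions, not anything shown in this log):

# Hypothetical sketch (not from the log): confirm that the secret the
# kubelet keeps failing on really does not exist yet.
from kubernetes import client, config          # assumes the official client is installed
from kubernetes.client.rest import ApiException

config.load_kube_config()                      # assumes a kubeconfig with read access
v1 = client.CoreV1Api()
try:
    v1.read_namespaced_secret("serving-cert", "openshift-apiserver")
    print("serving-cert exists; the mounts should succeed on the next retry")
except ApiException as exc:
    # A 404 here matches the kubelet's: secrets "serving-cert" not found
    print("serving-cert is missing (HTTP %s)" % exc.status)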
[the ImagePullBackOff pair repeats roughly every 11-15 s: 06:10:39, 06:10:53, 06:11:06, 06:11:21, 06:11:36, 06:11:48, 06:11:59]
[06:12:02: the secret.go:198 'secrets "serving-cert" not found' / nestedpendingoperations.go:267 MountVolume.SetUp failure above recurs; next retry scheduled for 06:14:04 (durationBeforeRetry 2m2s)]
[the ImagePullBackOff pair repeats: 06:12:13, 06:12:24, 06:12:37, 06:12:51]
[06:12:53: the kubelet.go:1599 volume-mount timeout for openshift-apiserver-n8rhb recurs; serving-cert is still the only unmounted volume]
[the ImagePullBackOff pair repeats: 06:13:05, 06:13:16, 06:13:29]
I1212 06:13:43.547949 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:13:55.749271 8806 remote_image.go:108] PullImage "openshift/origin-service-serving-cert-signer:v3.11" from image service failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:13:55.749365 8806 kuberuntime_image.go:51] Pull image "openshift/origin-service-serving-cert-signer:v3.11" failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:13:55.749443 8806 kuberuntime_manager.go:733] container start failed: ErrImagePull: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:13:55.749475 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ErrImagePull: "rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
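Note: this burst is the likely root cause of every Back-off line above it: the TLS certificate the node receives for https://registry-1.docker.io/v2/ is signed by a CA that Docker does not trust, which usually points at a proxy or TLS-intercepting appliance between the CDK VM and Docker Hub. A standard-library Python sketch for seeing who actually signed that certificate (the host comes from the log; the probe itself is illustrative, not something the kubelet runs):

# Hypothetical probe (not from the log): capture the certificate that
# registry-1.docker.io actually presents to this node, skipping verification.
import socket
import ssl

HOST, PORT = "registry-1.docker.io", 443       # registry named in the errors above
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False                     # verification is expected to fail,
ctx.verify_mode = ssl.CERT_NONE                # so disable it and just grab the cert
with socket.create_connection((HOST, PORT), timeout=10) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)
# Feed the PEM to e.g. "openssl x509 -noout -issuer -subject" to see the signer.
print(ssl.DER_cert_to_PEM_cert(der))

If the issuer turns out to be a local proxy or appliance CA, making Docker inside the VM trust it (for example via a per-registry CA file under /etc/docker/certs.d/registry-1.docker.io/ and a Docker restart) should let the operator image pull succeed.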
[06:14:04: the serving-cert secret / MountVolume.SetUp failure recurs; next retry 06:16:06]
[the ImagePullBackOff pair repeats: 06:14:11, 06:14:23, 06:14:36, 06:14:49, 06:15:03]
[06:15:11: the volume-mount timeout for openshift-apiserver-n8rhb recurs]
[the ImagePullBackOff pair repeats: 06:15:16, 06:15:31, 06:15:42, 06:15:53, 06:16:06]
[06:16:06: the serving-cert secret / MountVolume.SetUp failure recurs; next retry 06:18:08]
[the ImagePullBackOff pair repeats: 06:16:18, 06:16:30, 06:16:43, 06:16:57, 06:17:12, 06:17:26]
[06:17:27: the volume-mount timeout for openshift-apiserver-n8rhb recurs]
[the ImagePullBackOff pair repeats: 06:17:38, 06:17:52, 06:18:07]
[06:18:09: the serving-cert secret / MountVolume.SetUp failure recurs; next retry 06:20:11]
[the ImagePullBackOff pair repeats: 06:18:20, 06:18:36, 06:18:51]
I1212 06:19:06.548764 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:19:13.927836 8806 remote_image.go:108] PullImage "openshift/origin-service-serving-cert-signer:v3.11" from image service failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:19:13.927967 8806 kuberuntime_image.go:51] Pull image "openshift/origin-service-serving-cert-signer:v3.11" failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:19:13.928058 8806 kuberuntime_manager.go:733] container start failed: ErrImagePull: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:19:13.928116 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ErrImagePull: "rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
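
note: "x509: certificate signed by unknown authority" against registry-1.docker.io almost always means something between the CDK VM and Docker Hub (typically a TLS-intercepting corporate proxy) presents a certificate chain the VM does not trust; the image name and Kubernetes config are fine. A minimal sketch for inspecting the chain and trusting the intercepting CA inside the VM, assuming the CDK VM is managed by minishift and is RHEL-based; corp-ca.crt is a hypothetical file containing that CA, not something taken from this log:

    minishift ssh                                                              # enter the CDK VM
    openssl s_client -connect registry-1.docker.io:443 -showcerts </dev/null   # inspect the chain actually presented
    sudo cp corp-ca.crt /etc/pki/ca-trust/source/anchors/                      # trust the intercepting CA system-wide
    sudo update-ca-trust extract
    sudo systemctl restart docker                                              # restart the daemon so pulls use the new trust store

Alternatively, docker accepts a per-registry CA dropped at /etc/docker/certs.d/registry-1.docker.io/ca.crt, which avoids touching the system trust store.
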
[... the ImagePullBackOff pair repeats identically at 06:19:25 and 06:19:38 ...]
E1212 06:19:41.248122 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:19:41.248160 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
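
note: kubelet.go:1599 fires once the kubelet's roughly two-minute wait for all of a pod's volumes to attach/mount expires; the pod is then skipped and requeued, so this pair recurs (06:19:41 here, then 06:21:57, 06:24:12 and 06:26:29 below) until serving-cert exists. The same story is visible from the pod's events (a sketch, same oc assumption as above):

    oc describe pod openshift-apiserver-n8rhb -n openshift-apiserver   # Events should show FailedMount for secret "serving-cert"
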
[... the ImagePullBackOff pair repeats identically at 06:19:51 and 06:20:02 ...]
E1212 06:20:11.106677 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:20:11.106773 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:22:13.1067453 +0000 UTC m=+1359.616807501 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
[... the ImagePullBackOff pair repeats identically at 06:20:14, 06:20:29, 06:20:43, 06:20:55, 06:21:06, 06:21:22, 06:21:35 and 06:21:47 ...]
E1212 06:21:57.249378 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:21:57.249471 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
[... the ImagePullBackOff pair repeats identically at 06:21:59 and 06:22:10 ...]
E1212 06:22:13.197410 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:22:13.197514 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:24:15.1974818 +0000 UTC m=+1481.707544001 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
[... the ImagePullBackOff pair repeats identically at 06:22:23, 06:22:35, 06:22:48, 06:23:02, 06:23:16, 06:23:30, 06:23:41, 06:23:55 and 06:24:06 ...]
E1212 06:24:12.254102 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:24:12.254151 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
E1212 06:24:15.283956 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:24:15.284066 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:26:17.2840128 +0000 UTC m=+1603.794075001 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:24:21.549234 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:24:26.902932 8806 remote_image.go:108] PullImage "openshift/origin-service-serving-cert-signer:v3.11" from image service failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:24:26.902976 8806 kuberuntime_image.go:51] Pull image "openshift/origin-service-serving-cert-signer:v3.11" failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:24:26.903021 8806 kuberuntime_manager.go:733] container start failed: ErrImagePull: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:24:26.903046 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ErrImagePull: "rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
[... the ImagePullBackOff pair repeats identically at 06:24:39, 06:24:50, 06:25:05, 06:25:18, 06:25:31, 06:25:42, 06:25:57 and 06:26:10 ...]
E1212 06:26:17.300713 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:26:17.300802 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:28:19.3007806 +0000 UTC m=+1725.810842801 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:26:23.556988 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:26:23.635259 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:26:29.248267 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:26:29.248309 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
[... the ImagePullBackOff pair repeats identically at 06:26:38, 06:26:51, 06:27:06, 06:27:20, 06:27:32 and 06:27:47 ...]
I1212 06:28:02.553101 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:28:02.560013 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:28:13.548974 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:28:13.554165 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:28:19.376831 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:28:19.376925 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:30:21.3769007 +0000 UTC m=+1847.886962901 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:28:28.550572 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:28:28.561766 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:28:44.557532 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:28:44.561453 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:28:47.248534 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:28:47.248578 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
I1212 06:28:56.548393 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:28:56.553196 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:29:09.548806 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:29:09.551398 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:29:21.548654 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:29:21.557250 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:29:35.547951 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:29:40.592467 8806 remote_image.go:108] PullImage "openshift/origin-service-serving-cert-signer:v3.11" from image service failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:29:40.592529 8806 kuberuntime_image.go:51] Pull image "openshift/origin-service-serving-cert-signer:v3.11" failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:29:40.592596 8806 kuberuntime_manager.go:733] container start failed: ErrImagePull: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:29:40.592633 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ErrImagePull: "rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
I1212 06:29:55.547819 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:29:55.559072 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:30:08.550354 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:30:08.554371 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:30:21.440492 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:30:21.441390 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:32:23.4413499 +0000 UTC m=+1969.951412101 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:30:22.550021 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:30:22.556229 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:30:35.548132 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:30:35.552691 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:30:47.548215 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:30:47.553933 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:30:58.548240 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:30:58.558926 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:31:03.248231 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:31:03.251779 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
I1212 06:31:13.547824 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:31:13.563808 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:31:26.553077 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:31:26.557413 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:31:37.547728 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:31:37.551278 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:31:53.547609 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:31:53.555349 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:32:05.547704 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:32:05.552905 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:32:20.551161 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:32:20.564836 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:32:23.541025 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:32:23.541126 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:34:25.5410983 +0000 UTC m=+2092.051160501 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:32:34.548125 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:32:34.552697 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:32:46.548192 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:32:46.553595 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:33:01.548030 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:33:01.554271 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:33:14.553731 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:33:14.563221 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:33:20.252164 8806 kubelet.go:1599] Unable to mount volumes for pod "openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)": timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]; skipping pod
E1212 06:33:20.252201 8806 pod_workers.go:186] Error syncing pod 68496475-fdd3-11e8-9503-00155d12d666 ("openshift-apiserver-n8rhb_openshift-apiserver(68496475-fdd3-11e8-9503-00155d12d666)"), skipping: timeout expired waiting for volumes to attach or mount for pod "openshift-apiserver"/"openshift-apiserver-n8rhb". list of unmounted volumes=[serving-cert]. list of unattached volumes=[master-config master-cloud-provider serving-cert openshift-apiserver-token-2d2gg]
I1212 06:33:27.549310 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:33:27.555359 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:33:42.548159 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:33:42.553390 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:33:54.548508 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:33:54.553815 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:34:07.548008 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:34:07.557338 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:34:19.548065 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:34:19.555044 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
E1212 06:34:25.629993 8806 secret.go:198] Couldn't get secret openshift-apiserver/serving-cert: secrets "serving-cert" not found
E1212 06:34:25.630101 8806 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\" (\"68496475-fdd3-11e8-9503-00155d12d666\")" failed. No retries permitted until 2018-12-12 06:36:27.6300698 +0000 UTC m=+2214.140132101 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68496475-fdd3-11e8-9503-00155d12d666-serving-cert\") pod \"openshift-apiserver-n8rhb\" (UID: \"68496475-fdd3-11e8-9503-00155d12d666\") : secrets \"serving-cert\" not found"
I1212 06:34:31.549646 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:34:31.553892 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:34:43.551052 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:34:54.464877 8806 remote_image.go:108] PullImage "openshift/origin-service-serving-cert-signer:v3.11" from image service failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:34:54.465395 8806 kuberuntime_image.go:51] Pull image "openshift/origin-service-serving-cert-signer:v3.11" failed: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:34:54.465474 8806 kuberuntime_manager.go:733] container start failed: ErrImagePull: rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
E1212 06:34:54.465537 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ErrImagePull: "rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority"
I1212 06:35:05.547723 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:35:05.550251 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
I1212 06:35:19.548864 8806 kuberuntime_manager.go:513] Container {Name:operator Image:openshift/origin-service-serving-cert-signer:v3.11 Command:[service-serving-cert-signer operator] Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=4] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/var/run/configmaps/config SubPath: MountPropagation:<nil>} {Name:openshift-service-cert-signer-operator-token-dx9l7 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
E1212 06:35:19.552618 8806 pod_workers.go:186] Error syncing pod 68db0de0-fdd3-11e8-9503-00155d12d666 ("openshift-service-cert-signer-operator-6d477f986b-4j88g_openshift-core-operators(68db0de0-fdd3-11e8-9503-00155d12d666)"), skipping: failed to "StartContainer" for "operator" with ImagePullBackOff: "Back-off pulling image \"openshift/origin-service-serving-cert-signer:v3.11\""
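The loop above has one root cause and one downstream effect: dockerd inside the Minishift VM cannot validate the TLS certificate it is served for registry-1.docker.io (with the corporate proxy configured below, typically the proxy's own interception CA), so the service-serving-cert-signer operator image never pulls; and since that signer is what mints the openshift-apiserver/serving-cert secret, the MountVolume failures for openshift-apiserver-n8rhb follow from the same pull failure. A minimal fix sketch, run from the host shell, assuming the proxy's root CA has been exported to a file named corp-ca.crt (hypothetical name), that minishift ssh forwards stdin as docker-machine ssh does, and that curl is present on the RHEL7 ISO:

# reproduce the TLS failure from inside the VM
minishift ssh "curl -vI https://registry-1.docker.io/v2/"
# add the proxy root CA to the RHEL7 VM's system trust store
minishift ssh "sudo tee /etc/pki/ca-trust/source/anchors/corp-ca.crt" < corp-ca.crt
minishift ssh "sudo update-ca-trust extract"
# restart docker so it re-reads the trust store, then retry the pull that was backing off
minishift ssh "sudo systemctl restart docker"
minishift ssh "docker pull openshift/origin-service-serving-cert-signer:v3.11"

Two caveats: the live ISO may not persist changes under /etc across VM restarts, and since the profile below already mirrors through ocp-docker-minishift-virtual.repo.dev.corp.com (add-registry / registry-mirror), an alternative is to mirror the origin-* images into that registry and avoid docker.io entirely.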
- check-network-ping-host : ocp-docker-minishift-virtual.repo.dev.corp.com
- disk-size : 50G
- docker-opt : [add-registry=ocp-docker-minishift-virtual.repo.dev.corp.com]
- http-proxy : http://corp-proxy.com:8080
- https-proxy : http://corp-proxy.com:8080
- hyperv-virtual-switch : NATSwitch
- image-caching : true
- insecure-registry : [ocp-docker-minishift-virtual.repo.dev.corp.com]
- iso-url : file://C:/WS/minishift/.minishift/cache/iso/minishift-rhel7.iso
- memory : 8G
- network-gateway : 192.168.10.1
- network-ipaddress : 192.168.10.10
- network-nameserver : [***dns ip***]
- no-proxy : .corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10
- openshift-version : v3.11.43
- registry-mirror : [https://ocp-docker-minishift-virtual.repo.dev.corp.com]
- routing-suffix : apps.local.dev.corp.com
- show-libmachine-logs : true
- skip-check-network-host : true
- skip-check-network-http : true
- skip-check-openshift-release : true
- skip-registration : true
- vm-driver : hyperv
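These values persist in the profile's configuration and correspond to minishift config set keys with the same names. A sketch reproducing a few of the entries above, assuming the standard minishift CLI:

minishift config set vm-driver hyperv
minishift config set hyperv-virtual-switch NATSwitch
minishift config set memory 8G
minishift config set disk-size 50G
minishift config set http-proxy http://corp-proxy.com:8080
minishift config set insecure-registry ocp-docker-minishift-virtual.repo.dev.corp.com
minishift config view    # prints a listing like the one above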
-- minishift version: v1.27.0+5981f99
-- Starting profile 'minishift'
-- Using proxy for the setup
Using http proxy: http://corp-proxy.com:8080
Using https proxy: http://corp-proxy.com:8080
-- Check if deprecated options are used ... OK
-- Checking if https://mirror.openshift.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.11.43' is valid ... SKIP
-- Checking if requested OpenShift version 'v3.11.43' is supported ... OK
-- Checking if requested hypervisor 'hyperv' is supported on this platform ... OK
-- Checking if Hyper-V driver is installed ... OK
-- Checking if Hyper-V driver is configured to use a Virtual Switch ...
'NATSwitch' ... OK
-- Checking if user is a member of the Hyper-V Administrators group ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'hyperv' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 8 GB
vCPUs : 2
Disk size: 50 GB
-- Determining netmask ...
-- Set the following network settings to VM ...
Device: eth0
IP Address: 192.168.10.10/255.255.255.0
Gateway: 192.168.10.1
Nameservers: ***dns ip***
-- Starting Minishift VM ...
Found binary path at C:\WS\minishift\bin\minishift.exe
Launching plugin server for driver hyperv
Plugin server listening at address 127.0.0.1:59435
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minishift) Calling .GetMachineName
(minishift) Calling .DriverName
Creating CA: C:\WS\minishift\.minishift\certs\ca.pem
Creating client certificate: C:\WS\minishift\.minishift\certs\cert.pem
Running pre-create checks...
(minishift) Calling .PreCreateCheck
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
(minishift) DBG | [stdout =====>] : Hyper-V
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole("S-1-5-32-578")
(minishift) DBG | [stdout =====>] : True
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; (Hyper-V\Get-VMSwitch).Name
(minishift) DBG | [stdout =====>] : nat
(minishift) DBG | Standardväxel
(minishift) DBG | NATSwitch
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetConfigRaw
Creating machine...
(minishift) Calling .Create
(minishift) Downloading C:\WS\minishift\.minishift\cache\boot2docker.iso from file://C:/WS/minishift/.minishift/cache/iso/minishift-rhel7.iso...
(minishift) Creating SSH key...
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; (Hyper-V\Get-VMSwitch).Name
(minishift) Creating VM...
(minishift) DBG | [stdout =====>] : nat
(minishift) DBG | Standardväxel
(minishift) Using switch "NATSwitch"
(minishift) DBG | NATSwitch
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
(minishift) DBG | [stdout =====>] : False
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Creating VHD
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\WS\minishift\.minishift\machines\minishift\fixed.vhd' -SizeBytes 50000MB -Fixed
(minishift) DBG | [stdout =====>] :
(minishift) DBG |
(minishift) DBG | ComputerName : Q13104
(minishift) DBG | Path : C:\WS\minishift\.minishift\machines\minishift\fixed.vhd
(minishift) DBG | VhdFormat : VHD
(minishift) DBG | VhdType : Fixed
(minishift) DBG | FileSize : 52428800512
(minishift) DBG | Size : 52428800000
(minishift) DBG | MinimumSize :
(minishift) DBG | LogicalSectorSize : 512
(minishift) DBG | PhysicalSectorSize : 512
(minishift) DBG | BlockSize : 0
(minishift) DBG | ParentPath :
(minishift) DBG | DiskIdentifier : D17F037B-9E57-4914-B508-D496B88CBFEB
(minishift) DBG | FragmentationPercentage : 0
(minishift) DBG | Alignment : 1
(minishift) DBG | Attached : False
(minishift) DBG | DiskNumber :
(minishift) DBG | IsPMEMCompatible : False
(minishift) DBG | AddressAbstractionType : None
(minishift) DBG | Number :
(minishift) DBG |
(minishift) DBG |
(minishift) DBG |
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | Writing magic tar header
(minishift) DBG | Writing SSH key tar header
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\WS\minishift\.minishift\machines\minishift\fixed.vhd' -DestinationPath 'C:\WS\minishift\.minishift\machines\minishift\disk.vhd' -VHDType Dynamic -DeleteSource
(minishift) DBG | [stdout =====>] :
(minishift) DBG | [stderr =====>] :
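Condensed, the disk preparation above is a two-step dance: allocate a fixed 50000 MB VHD, let the driver stream the SSH key into it as raw tar headers, then convert it to a dynamic VHD to reclaim unused space. The PowerShell half, exactly as executed:

    Hyper-V\New-VHD -Path 'C:\WS\minishift\.minishift\machines\minishift\fixed.vhd' -SizeBytes 50000MB -Fixed
    # (the driver writes the magic and SSH-key tar headers into fixed.vhd at this point)
    Hyper-V\Convert-VHD -Path 'C:\WS\minishift\.minishift\machines\minishift\fixed.vhd' -DestinationPath 'C:\WS\minishift\.minishift\machines\minishift\disk.vhd' -VHDType Dynamic -DeleteSource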
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM minishift -Path 'C:\WS\minishift\.minishift\machines\minishift' -SwitchName 'NATSwitch' -MemoryStartupBytes 8192MB
(minishift) DBG | [stdout =====>] :
(minishift) DBG | Name State CPUUsage(%) MemoryAssigned(M) Uptime Status Version
(minishift) DBG | ---- ----- ----------- ----------------- ------ ------ -------
(minishift) DBG | minishift Off 0 0 00:00:00 Operating normally 8.2
(minishift) DBG |
(minishift) DBG |
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor minishift -Count 2
(minishift) DBG | [stdout =====>] :
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName minishift -Path 'C:\WS\minishift\.minishift\machines\minishift\boot2docker.iso'
(minishift) DBG | [stdout =====>] :
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName minishift -Path 'C:\WS\minishift\.minishift\machines\minishift\disk.vhd'
(minishift) DBG | [stdout =====>] :
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM minishift
(minishift) Starting VM...
(minishift) DBG | [stdout =====>] :
(minishift) DBG | [stderr =====>] :
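Stripped of the debug framing, the whole VM assembly is five PowerShell calls; run with the Hyper-V module available, they would recreate the machine as configured above (8192 MB, 2 vCPUs, ISO in the DVD drive, the converted VHD attached):

    Hyper-V\New-VM minishift -Path 'C:\WS\minishift\.minishift\machines\minishift' -SwitchName 'NATSwitch' -MemoryStartupBytes 8192MB
    Hyper-V\Set-VMProcessor minishift -Count 2
    Hyper-V\Set-VMDvdDrive -VMName minishift -Path 'C:\WS\minishift\.minishift\machines\minishift\boot2docker.iso'
    Hyper-V\Add-VMHardDiskDrive -VMName minishift -Path 'C:\WS\minishift\.minishift\machines\minishift\disk.vhd'
    Hyper-V\Start-VM minishift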
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) Waiting for host to start...
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] :
(minishift) DBG | [stderr =====>] :
[... identical state/IP polling cycles elided: each repeats the two commands above, reporting state 'Running' and an empty IP address while the guest boots ...]
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
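What the driver was doing in that loop, as a standalone sketch: keep asking Hyper-V for the VM state and for the first address on its first NIC until the guest's integration services report an IPv4 (the real loop lives in the hyperv machine driver; the one-second sleep here is an assumption):

    do {
        Start-Sleep -Seconds 1
        $state = (Hyper-V\Get-VM minishift).State
        $ip = ((Hyper-V\Get-VM minishift).NetworkAdapters[0]).IPAddresses | Select-Object -First 1
    } while ($state -eq 'Running' -and -not $ip)
    $ip   # 192.168.10.10 once the guest has configured eth0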
(minishift) Calling .GetConfigRaw
(minishift) Calling .DriverName
(minishift) Calling .DriverName
Waiting for machine to be running, this may take a few minutes...
(minishift) Calling .GetState
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
Detecting operating system of created instance...
Waiting for SSH to be available...
Getting to WaitForSSH function...
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: <nil>:
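The &{[...]} lines are docker-machine's argv dump for its external SSH client; spelled out (trimmed to the flags that matter), the reachability probe above is just:

    & 'C:\Program Files\Git\usr\bin\ssh.exe' -F /dev/null `
        -o PasswordAuthentication=no -o StrictHostKeyChecking=no `
        -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes `
        -i C:\WS\minishift\.minishift\machines\minishift\id_rsa `
        -p 22 docker@192.168.10.10 'exit 0'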
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
cat /etc/os-release
SSH cmd err, output: <nil>: NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.6 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
VARIANT="minishift"
VARIANT_VERSION="1.13.0"
BUILD_ID="50124a9-23112018142811-655"
Detecting the provisioner...
Couldn't set key CPE_NAME, no corresponding struct field found
Couldn't set key , no corresponding struct field found
Couldn't set key REDHAT_BUGZILLA_PRODUCT, no corresponding struct field found
Couldn't set key REDHAT_BUGZILLA_PRODUCT_VERSION, no corresponding struct field found
Couldn't set key REDHAT_SUPPORT_PRODUCT, no corresponding struct field found
Couldn't set key REDHAT_SUPPORT_PRODUCT_VERSION, no corresponding struct field found
Couldn't set key VARIANT_VERSION, no corresponding struct field found
Couldn't set key BUILD_ID, no corresponding struct field found
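The "Couldn't set key" warnings are benign: the provisioner maps /etc/os-release into a fixed struct, and the extra RHEL- and minishift-specific keys simply have no field to land in. The file can be inspected at any time (assuming minishift ssh forwards a one-off command, as docker-machine ssh does):

    minishift ssh 'cat /etc/os-release'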
Provisioning with minishift...
No storage driver specified, instead using 'overlay2'
Setting hostname ...
(minishift) Calling .GetMachineName
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo hostname minishift && echo "minishift" | sudo tee /etc/hostname
SSH cmd err, output: <nil>: minishift
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
if grep -xq 127.0.1.1.* /etc/hosts; then sudo sed -i 's/^127.0.1.1.*/127.0.1.1 minishift/g' /etc/hosts; else echo '127.0.1.1 minishift' | sudo tee -a /etc/hosts; fi
SSH cmd err, output: <nil>: 127.0.1.1 minishift
OK
checking docker daemon
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo systemctl -f start docker
SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo docker version
SSH cmd err, output: <nil>: Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-75.git8633870.el7_5.x86_64
Go version: go1.9.2
Git commit: 8633870/1.13.1
Built: Wed Sep 12 10:56:54 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-75.git8633870.el7_5.x86_64
Go version: go1.9.2
Git commit: 8633870/1.13.1
Built: Wed Sep 12 10:56:54 2018
OS/Arch: linux/amd64
Experimental: false
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo mkdir -p /etc/docker
SSH cmd err, output: <nil>:
(minishift) Calling .GetMachineName
(minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
Copying certs to the local machine directory...
generating server cert: C:\WS\minishift\.minishift\machines\server.pem ca-key=C:\WS\minishift\.minishift\certs\ca.pem private-key=C:\WS\minishift\.minishift\certs\ca-key.pem org=USER.minishift san=[192.168.10.10 localhost]
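The server certificate is minted from the local CA with SANs for the VM address and localhost only. Git for Windows (already providing ssh.exe here) ships an openssl binary that can confirm what was generated:

    & 'C:\Program Files\Git\usr\bin\openssl.exe' x509 -in 'C:\WS\minishift\.minishift\machines\server.pem' -noout -text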
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo systemctl -f stop docker
SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
if [ ! -z "$(ip link show docker0)" ]; then sudo ip link delete docker0; fi
SSH cmd err, output: <nil>:
Copying certs to the remote machine...
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
printf '%s' '-----BEGIN CERTIFICATE-----
MIICzDCCAbSgAwIBAgIRALIo88zGqwePQTFqy2VHJg8wDQYJKoZIhvcNAQELBQAw
DzENMAsGA1UEChMERUpUVzAeFw0xODEyMTIwNTQ1MDBaFw0yMTExMjYwNTQ1MDBa
MA8xDTALBgNVBAoTBEVKVFcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQCyB9LFgDby5di4x8wujWGQqhAe3BGEPBHqClRz3SkEMaAgU9eACPIIGCEZgTVw
ZVkgb/p+6PPBGmIzUD2D0dVY0AlFBztB/t89HQBLBpoaqCfmEEX2KHwvbCK9/6Bb
Dof11AS56qOpahfxsEf/4BNVSQfmb2058/K3O4PP79Igg3wSIWM5W9duy8nd6cy+
Lpsgz6DBEqPfj7SGjuhBVAQJiyJDqyl3iFjERdMTx6daSIK1oHmwYifbcSq4qufP
pcHhH3GuY9DQ/Ard5h+Ci9hbAqRNxTsF9UweNsDJ+GebhrjYfxVlb2uaE5LYYgKQ
A123KIl15HZhqhfjhWcQEHXZAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICrDAPBgNV
HRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQA8u7Wa/3ors2GNdRCTeNeT
ILpJvzX6iFSxHftECDTkOGjKnmnrEQpAt02ShNhv8FouCfc6ZSYZD1HuAiGKI6L0
XhX+048c+cWhRM/4RbhIgeuTKM5Jf9iLPfhz215QVJhi1L4XzWDWQAfpZYTlNPYc
yGyEaZe7Pfvfo3m7edc0OJ4+NVkm8i7286hg2SsbzPzQW9fGajLRGkhyr4uJirE9
4E7dF/yV7ULiUa8QR/3tiIWPV4sxqHeXJyC4YQpTZl0ja6p89T9i+bGqjJWM1fW9
f11yQVD8nxy5Z1MJxKLfDkwPJ7UubA0Yw+UIqvHB2qgf/sQoCYUDJeoMe2tBlyZd
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
SSH cmd err, output: <nil>: -----BEGIN CERTIFICATE-----
MIICzDCCAbSgAwIBAgIRALIo88zGqwePQTFqy2VHJg8wDQYJKoZIhvcNAQELBQAw
DzENMAsGA1UEChMERUpUVzAeFw0xODEyMTIwNTQ1MDBaFw0yMTExMjYwNTQ1MDBa
MA8xDTALBgNVBAoTBEVKVFcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQCyB9LFgDby5di4x8wujWGQqhAe3BGEPBHqClRz3SkEMaAgU9eACPIIGCEZgTVw
ZVkgb/p+6PPBGmIzUD2D0dVY0AlFBztB/t89HQBLBpoaqCfmEEX2KHwvbCK9/6Bb
Dof11AS56qOpahfxsEf/4BNVSQfmb2058/K3O4PP79Igg3wSIWM5W9duy8nd6cy+
Lpsgz6DBEqPfj7SGjuhBVAQJiyJDqyl3iFjERdMTx6daSIK1oHmwYifbcSq4qufP
pcHhH3GuY9DQ/Ard5h+Ci9hbAqRNxTsF9UweNsDJ+GebhrjYfxVlb2uaE5LYYgKQ
A123KIl15HZhqhfjhWcQEHXZAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICrDAPBgNV
HRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQA8u7Wa/3ors2GNdRCTeNeT
ILpJvzX6iFSxHftECDTkOGjKnmnrEQpAt02ShNhv8FouCfc6ZSYZD1HuAiGKI6L0
XhX+048c+cWhRM/4RbhIgeuTKM5Jf9iLPfhz215QVJhi1L4XzWDWQAfpZYTlNPYc
yGyEaZe7Pfvfo3m7edc0OJ4+NVkm8i7286hg2SsbzPzQW9fGajLRGkhyr4uJirE9
4E7dF/yV7ULiUa8QR/3tiIWPV4sxqHeXJyC4YQpTZl0ja6p89T9i+bGqjJWM1fW9
f11yQVD8nxy5Z1MJxKLfDkwPJ7UubA0Yw+UIqvHB2qgf/sQoCYUDJeoMe2tBlyZd
-----END CERTIFICATE-----
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
printf '%s' '-----BEGIN CERTIFICATE-----
MIIDBDCCAeygAwIBAgIRAL8ZqdPqiAmaCYcI6hXO3FkwDQYJKoZIhvcNAQELBQAw
DzENMAsGA1UEChMERUpUVzAeFw0xODEyMTIwNTQ4MDBaFw0yMTExMjYwNTQ4MDBa
MBkxFzAVBgNVBAoTDkVKVFcubWluaXNoaWZ0MIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAqpvOxlaLaoOdXx2fOuckS6U8U7PF8bDJW1VKpHoE7dZSrcVE
tsVrTblfUa4qT5Ex20BXh87vMjj99S+E/MPc265eAS18RL4MikXWEM7bzZ1NcSkH
VrOXtdLqjf5ptlqMRXkgzeQiiDdLGNdocD4O/RJRZAXdxnJ+d9Lo7r6lBQoY9Q08
mkpKgkb7fIzik9aOdvHfUz7uJp29ECia4NCSGDINA2JpGF0mF1UjwXwqzhjvqu0N
64sxXUQE3QCe5nIrUW8IbnzjJYSpt/sx5VZ7kXkgLTZ6HY4auD40Fg/CELCMBQBy
3wxIhIX99qtoNE9g8JFE3u8Bo9KIodh3DK2ShwIDAQABo1EwTzAOBgNVHQ8BAf8E
BAMCA6gwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAaBgNVHREE
EzARgglsb2NhbGhvc3SHBMCoCgowDQYJKoZIhvcNAQELBQADggEBAEFAER9YdcdC
dxWD/iqLCqtN7Ui6O9dED4eQn3e4v48if/XWJ+hhCcHDEndMWfb1xNURSijPOt0v
9WYL37DY6/FgaoIfXyJVvPtnsRiV4VvnKqXEntZylvNcbXLwEkhkQoteAmBduNG/
G20/0y40ibbfbNz/88T4MFcO5/qJLB8TJ2XC0MWQX8HiLuLnQUwDIMU+HxPXWBJa
jh/L+D7uaPu3s7YWIurEf47VhzoLW79idtg49VcRvuEseGsNqej+g13rbD7aekhq
16OXWfltRMgddVMPCsQgaUU1BPQZwEFEdD3hpiEn03gVAth/qwqLtQWxlEMqBH7S
Oy1baKXHia8=
-----END CERTIFICATE-----
' | sudo tee /etc/docker/server.pem
SSH cmd err, output: <nil>: -----BEGIN CERTIFICATE-----
MIIDBDCCAeygAwIBAgIRAL8ZqdPqiAmaCYcI6hXO3FkwDQYJKoZIhvcNAQELBQAw
DzENMAsGA1UEChMERUpUVzAeFw0xODEyMTIwNTQ4MDBaFw0yMTExMjYwNTQ4MDBa
MBkxFzAVBgNVBAoTDkVKVFcubWluaXNoaWZ0MIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAqpvOxlaLaoOdXx2fOuckS6U8U7PF8bDJW1VKpHoE7dZSrcVE
tsVrTblfUa4qT5Ex20BXh87vMjj99S+E/MPc265eAS18RL4MikXWEM7bzZ1NcSkH
VrOXtdLqjf5ptlqMRXkgzeQiiDdLGNdocD4O/RJRZAXdxnJ+d9Lo7r6lBQoY9Q08
mkpKgkb7fIzik9aOdvHfUz7uJp29ECia4NCSGDINA2JpGF0mF1UjwXwqzhjvqu0N
64sxXUQE3QCe5nIrUW8IbnzjJYSpt/sx5VZ7kXkgLTZ6HY4auD40Fg/CELCMBQBy
3wxIhIX99qtoNE9g8JFE3u8Bo9KIodh3DK2ShwIDAQABo1EwTzAOBgNVHQ8BAf8E
BAMCA6gwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAaBgNVHREE
EzARgglsb2NhbGhvc3SHBMCoCgowDQYJKoZIhvcNAQELBQADggEBAEFAER9YdcdC
dxWD/iqLCqtN7Ui6O9dED4eQn3e4v48if/XWJ+hhCcHDEndMWfb1xNURSijPOt0v
9WYL37DY6/FgaoIfXyJVvPtnsRiV4VvnKqXEntZylvNcbXLwEkhkQoteAmBduNG/
G20/0y40ibbfbNz/88T4MFcO5/qJLB8TJ2XC0MWQX8HiLuLnQUwDIMU+HxPXWBJa
jh/L+D7uaPu3s7YWIurEf47VhzoLW79idtg49VcRvuEseGsNqej+g13rbD7aekhq
16OXWfltRMgddVMPCsQgaUU1BPQZwEFEdD3hpiEn03gVAth/qwqLtQWxlEMqBH7S
Oy1baKXHia8=
-----END CERTIFICATE-----
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
printf '%s' '-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAqpvOxlaLaoOdXx2fOuckS6U8U7PF8bDJW1VKpHoE7dZSrcVE
tsVrTblfUa4qT5Ex20BXh87vMjj99S+E/MPc265eAS18RL4MikXWEM7bzZ1NcSkH
VrOXtdLqjf5ptlqMRXkgzeQiiDdLGNdocD4O/RJRZAXdxnJ+d9Lo7r6lBQoY9Q08
mkpKgkb7fIzik9aOdvHfUz7uJp29ECia4NCSGDINA2JpGF0mF1UjwXwqzhjvqu0N
64sxXUQE3QCe5nIrUW8IbnzjJYSpt/sx5VZ7kXkgLTZ6HY4auD40Fg/CELCMBQBy
3wxIhIX99qtoNE9g8JFE3u8Bo9KIodh3DK2ShwIDAQABAoIBAGmrDPvdOIZlNEBo
KWojJWDQ27d//ha/B0fRYOTUSl9Awn6sUu3dAqPmL3p4o+4aIMYKaOxNp+r0T57f
qr+fVBigO8sA8Bnnl/7AWGCarpsAVanD3q69lzZfhzUhITp3hK+24TGEnjq9/H5L
VH1IgqIOCWkpFP5HhbsTX4AXhj/CyQqNtdEHtNN6bvWEBZK977YH6bIj2QuzrDSS
/DmHxAAFkI6sxi0gWNFFDyFQkO33oSuPnot1uD/P0AuawlJ1OXVppzppu5EjUMed
TGCiXFvE6sJmin8qdA94PzCyhNBP7Tn2LMwqF/WoIUi1rbg21AXt4VNkTlL+fGde
BdKAkBECgYEA3E/OAomJiMDYHnbc8lZpX27us5uG6PXjU9558KGs8OLioxLjEw6D
6sjjlo8GWsOn4CaTDV8unEIc1EObxl1VwhXA6XNZvUWPSpEgrWxJ1T5tbQW/lprj
DUKclQviSbf2gWtn0sXYc8qt/BcDcd63q4AzECuKyebBWNlJ0Mq02c8CgYEAxj7Y
QmeZ0Cv8LwrtqAjhlfVw8VDyIU8Y5nVg4mL8uDkg6pP8p0oxlyyGujGizDJSioLm
/yXOCaChUn7mdSuaSHV5obO035YP77+kXGMcEeYhrP2KhcFQaNog5rkv86b4gcyt
yuaScFETChyCxKs5HhvAAqeJd7NMkBVIhDXPQckCgYEAo3aLEhLeexsqv5/N2/kF
ggubDKSO+vbGTwo5S7OJz8loAzsWRKN2eZPIWYORYXLeck/st/UxbjsXjN8FC69g
2/qsAgrWQLsF0HvR1RsNxSzmoAet2Z7ebI5KA8Snh675NZltlVO6gF+Xq/2fTrPD
b3pVaOAFwDx4pOXEASkF+r0CgYEAr+Ww8nKD6k4sqvzSU4bVyb2F4cfFnsJUwJ6j
QRs9SqP9zcVSpohRKeYrAGfsH6wCyr1NAlRj1Oz+VnkcOBhhAyugqVYPBVdbeoka
55JUpJkBhkFMOFOG6hGooa7smg6rblfSWDZu9lMpRo53hNK7kjhjDLfkZB3lr+4C
crRf2ekCgYEAzATE6JpOTZw28z8wl03iZhJcJO6vnsyTeXilSBaRiLwt/wt5nPnG
rRCyXrHIN2TcOtCKuO5KakyjgfagVHcKsTo+LyDexO0fJK1d8Ze+jeTZp9rZx/ge
+O64k4xFSkwYkzFcx4K6opW3GCcQX4PUvG2fd3yTdbXnFy1ahWnnvI8=
-----END RSA PRIVATE KEY-----
' | sudo tee /etc/docker/server-key.pem
SSH cmd err, output: <nil>: -----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAqpvOxlaLaoOdXx2fOuckS6U8U7PF8bDJW1VKpHoE7dZSrcVE
tsVrTblfUa4qT5Ex20BXh87vMjj99S+E/MPc265eAS18RL4MikXWEM7bzZ1NcSkH
VrOXtdLqjf5ptlqMRXkgzeQiiDdLGNdocD4O/RJRZAXdxnJ+d9Lo7r6lBQoY9Q08
mkpKgkb7fIzik9aOdvHfUz7uJp29ECia4NCSGDINA2JpGF0mF1UjwXwqzhjvqu0N
64sxXUQE3QCe5nIrUW8IbnzjJYSpt/sx5VZ7kXkgLTZ6HY4auD40Fg/CELCMBQBy
3wxIhIX99qtoNE9g8JFE3u8Bo9KIodh3DK2ShwIDAQABAoIBAGmrDPvdOIZlNEBo
KWojJWDQ27d//ha/B0fRYOTUSl9Awn6sUu3dAqPmL3p4o+4aIMYKaOxNp+r0T57f
qr+fVBigO8sA8Bnnl/7AWGCarpsAVanD3q69lzZfhzUhITp3hK+24TGEnjq9/H5L
VH1IgqIOCWkpFP5HhbsTX4AXhj/CyQqNtdEHtNN6bvWEBZK977YH6bIj2QuzrDSS
/DmHxAAFkI6sxi0gWNFFDyFQkO33oSuPnot1uD/P0AuawlJ1OXVppzppu5EjUMed
TGCiXFvE6sJmin8qdA94PzCyhNBP7Tn2LMwqF/WoIUi1rbg21AXt4VNkTlL+fGde
BdKAkBECgYEA3E/OAomJiMDYHnbc8lZpX27us5uG6PXjU9558KGs8OLioxLjEw6D
6sjjlo8GWsOn4CaTDV8unEIc1EObxl1VwhXA6XNZvUWPSpEgrWxJ1T5tbQW/lprj
DUKclQviSbf2gWtn0sXYc8qt/BcDcd63q4AzECuKyebBWNlJ0Mq02c8CgYEAxj7Y
QmeZ0Cv8LwrtqAjhlfVw8VDyIU8Y5nVg4mL8uDkg6pP8p0oxlyyGujGizDJSioLm
/yXOCaChUn7mdSuaSHV5obO035YP77+kXGMcEeYhrP2KhcFQaNog5rkv86b4gcyt
yuaScFETChyCxKs5HhvAAqeJd7NMkBVIhDXPQckCgYEAo3aLEhLeexsqv5/N2/kF
ggubDKSO+vbGTwo5S7OJz8loAzsWRKN2eZPIWYORYXLeck/st/UxbjsXjN8FC69g
2/qsAgrWQLsF0HvR1RsNxSzmoAet2Z7ebI5KA8Snh675NZltlVO6gF+Xq/2fTrPD
b3pVaOAFwDx4pOXEASkF+r0CgYEAr+Ww8nKD6k4sqvzSU4bVyb2F4cfFnsJUwJ6j
QRs9SqP9zcVSpohRKeYrAGfsH6wCyr1NAlRj1Oz+VnkcOBhhAyugqVYPBVdbeoka
55JUpJkBhkFMOFOG6hGooa7smg6rblfSWDZu9lMpRo53hNK7kjhjDLfkZB3lr+4C
crRf2ekCgYEAzATE6JpOTZw28z8wl03iZhJcJO6vnsyTeXilSBaRiLwt/wt5nPnG
rRCyXrHIN2TcOtCKuO5KakyjgfagVHcKsTo+LyDexO0fJK1d8Ze+jeTZp9rZx/ge
+O64k4xFSkwYkzFcx4K6opW3GCcQX4PUvG2fd3yTdbXnFy1ahWnnvI8=
-----END RSA PRIVATE KEY-----
(minishift) Calling .GetURL
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .DriverName
Setting Docker configuration on the remote daemon...
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo mkdir -p /etc/systemd/system/docker.service.d && printf %s "[Service]
ExecStart=
ExecStart=/usr/bin/dockerd-current -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock \
--authorization-plugin rhel-push-plugin \
--selinux-enabled \
--log-driver=journald \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
--add-registry registry.access.redhat.com \
--storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem \
--label provider=hyperv --insecure-registry ocp-docker-minishift-virtual.repo.dev.corp.com --insecure-registry 172.30.0.0/16 --registry-mirror https://ocp-docker-minishift-virtual.repo.dev.corp.com --add-registry=ocp-docker-minishift-virtual.repo.dev.corp.com
Environment="HTTP_PROXY=http://corp-proxy.com:8080" "http_proxy=http://corp-proxy.com:8080" "HTTPS_PROXY=http://corp-proxy.com:8080" "https_proxy=http://corp-proxy.com:8080" "NO_PROXY=localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10" "no_proxy=localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10"
" | sudo tee /etc/systemd/system/docker.service.d/10-machine.conf
SSH cmd err, output: <nil>: [Service]
ExecStart=
ExecStart=/usr/bin/dockerd-current -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --authorization-plugin rhel-push-plugin --selinux-enabled --log-driver=journald --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --add-registry registry.access.redhat.com --storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry ocp-docker-minishift-virtual.repo.dev.corp.com --insecure-registry 172.30.0.0/16 --registry-mirror https://ocp-docker-minishift-virtual.repo.dev.corp.com --add-registry=ocp-docker-minishift-virtual.repo.dev.corp.com
Environment=HTTP_PROXY=http://corp-proxy.com:8080 http_proxy=http://corp-proxy.com:8080 HTTPS_PROXY=http://corp-proxy.com:8080 https_proxy=http://corp-proxy.com:8080 NO_PROXY=localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10 no_proxy=localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10
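Note the difference between what was sent (quoted Environment= entries, backslash line continuations) and what tee echoed back: the shell's double-quote handling collapses the drop-in to the single-line form above before it reaches disk. The rendered unit can be checked from the host (same external ssh.exe as before; systemctl cat prints the unit plus every drop-in, 10-machine.conf included):

    & 'C:\Program Files\Git\usr\bin\ssh.exe' -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null `
        -i C:\WS\minishift\.minishift\machines\minishift\id_rsa docker@192.168.10.10 'sudo systemctl cat docker'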
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo systemctl daemon-reload
SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo systemctl -f start docker
SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
if ! type netstat 1>/dev/null; then ss -tln; else netstat -tln; fi
SSH cmd err, output: <nil>: Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::2376 :::* LISTEN
tcp6 0 0 :::111 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
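Port 2376 in the listener table is the TLS-secured Docker API endpoint configured above; reachability from the Windows host can be verified directly:

    Test-NetConnection 192.168.10.10 -Port 2376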
Feature detection ...
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
test -f /usr/local/bin/minishift-set-ipaddress && echo '1' || echo '0'
SSH cmd err, output: <nil>: 1
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
test -f /usr/sbin/dnsmasq && echo '1' || echo '0'
SSH cmd err, output: <nil>: 1
OK
Checking connection to Docker...
(minishift) Calling .GetURL
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
Reading CA certificate from C:\WS\minishift\.minishift\certs\ca.pem
Reading client certificate from C:\WS\minishift\.minishift\certs\cert.pem
Reading client key from C:\WS\minishift\.minishift\certs\key.pem
Docker is up and running!
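At this point a local docker client (assuming one on PATH) can talk to the daemon using the certs just read; this is what `minishift docker-env` packages into environment variables:

    docker --tlsverify `
        --tlscacert C:\WS\minishift\.minishift\certs\ca.pem `
        --tlscert C:\WS\minishift\.minishift\certs\cert.pem `
        --tlskey C:\WS\minishift\.minishift\certs\key.pem `
        -H tcp://192.168.10.10:2376 version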
Reticulating splines...
(minishift) Calling .GetConfigRaw
-- Setting proxy information ... (minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .DriverName
(minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
export HTTP_PROXY=http://corp-proxy.com:8080 http_proxy=http://corp-proxy.com:8080 HTTPS_PROXY=http://corp-proxy.com:8080 https_proxy=http://corp-proxy.com:8080 NO_PROXY=localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10,192.168.10.10,192.168.10.1 no_proxy=localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10,192.168.10.10,192.168.10.1
SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo su -c 'echo "export HTTP_PROXY=http://corp-proxy.com:8080 http_proxy=http://corp-proxy.com:8080 HTTPS_PROXY=http://corp-proxy.com:8080 https_proxy=http://corp-proxy.com:8080 NO_PROXY=localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10,192.168.10.10,192.168.10.1 no_proxy=localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10,192.168.10.10,192.168.10.1" > /etc/profile.d/proxy.sh'
SSH cmd err, output: <nil>:
OK
OK
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo timedatectl set-timezone 'UTC'
SSH cmd err, output: <nil>:
Skipping registration due to enabled '--skip-registration' flag
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
NS=***dns ip***; cat /etc/resolv.conf |grep -i "^nameserver $NS" || echo "nameserver $NS" | sudo tee -a /etc/resolv.conf
SSH cmd err, output: <nil>: nameserver ***dns ip***
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
echo LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUY3akNDQTlhZ0F3SUJBZ0lKQUo2endJZWk0WjEvTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdMTVFzd0NRWUQKVlFRR0V3SlZVekVTTUJBR0ExVUVDQXdKVFdsdWFYTm9hV1owTVJJd0VBWURWUVFLREFsTmFXNXBjMmhwWm5ReApHekFaQmdOVkJBc01Fa2x1ZEdWeWJXVmthV0YwWlNCd2NtOTRlVEVWTUJNR0ExVUVBd3dNYldsdWFYTm9hV1owCkxtbHZNU0F3SGdZSktvWklodmNOQVFrQkZoRnBibVp2UUcxcGJtbHphR2xtZEM1cGJ6QWVGdzB4T0RBNE16QXgKT1RNNE1qaGFGdzB6T0RBNE1qVXhPVE00TWpoYU1JR0xNUXN3Q1FZRFZRUUdFd0pWVXpFU01CQUdBMVVFQ0F3SgpUV2x1YVhOb2FXWjBNUkl3RUFZRFZRUUtEQWxOYVc1cGMyaHBablF4R3pBWkJnTlZCQXNNRWtsdWRHVnliV1ZrCmFXRjBaU0J3Y205NGVURVZNQk1HQTFVRUF3d01iV2x1YVhOb2FXWjBMbWx2TVNBd0hnWUpLb1pJaHZjTkFRa0IKRmhGcGJtWnZRRzFwYm1semFHbG1kQzVwYnpDQ0FpSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnSVBBRENDQWdvQwpnZ0lCQU15VERvNHRybWV4djVQeTlhVGlDcGlJc0NIVUgwUFZxS2J6ZkNZNW81YjhpTWNCVWRocFVJVXZoMWR6CmxRamRZMWZKQUU5SXF5amloUTlPaFhCNlNpSkZhaStzRVFiMUY3RjB2RVozY05DUWYzU1c3NFN6VWpRVHl1aVAKd2FFclA5RU5PR3dYOVhnUmlrWGxwdEJmd3lHQVRNZjFqb1pSQ1RCcVRaVm1vd05OZGR1MnhKaWs0NENjTGZISwpQS2NCalRFdVZHT0JGb0tXci9IWDZyTHBxQWdreG9YWm9zZFhDSVZxRmczeTlYSEkyTDViT2x4anVHTVR6OFFSCm1nN2Nab3F4T2s4cTliQ3dCVk01MkN3cmdSRnJFdi85VWhBem9zSzhNUlNMU2JGOTVweC9JaGZ0MGcwWHRHWHoKcEZNSm92REd3YlRmajN4eUJLbWNzZHFHUDVaOVJvNnNibGIvblgrSEhBOHZDSDArdjVQcTFreGN4Mm1TV3loOQp4SlE5RWxUdS9STEJzanRDbW92bTRwaXFtNjNCaHlTR1NLYVpZRmN4NTRPbHI0ZDIrZHBGTUhmOXBXREtGODNuClMyemc1ZWdTaFdrci9XYXBrcmxGNUtNT1FDMkdpRmQzamNFRDZ3SjFIZXdHVlNuNWpYeks3eWhKMGtjSTkwaVcKdEw2dmM1TjNmTHFZTVFyVElPV1VKZXdzaHVOaDlTRHQ5RHJWT2dsM3hKUG0rZGxyV3VIVzFnM3J1L2p2NnN5RQpEY292cWI2Y3hJT1pDWG9hSHF4QUJ5YTRibk5iM200c0tROGlONTFPQXRQN1lKejBZdG1iTDhiMEdaNDdiOExpCmpQOWpTYzU1TnF1c1dScFplY2tMRk1aZVF2TkxOMHA4ZG9OMzdPTlpOZUtrZGhMVkFnTUJBQUdqVXpCUk1CMEcKQTFVZERnUVdCQlNUcEhadlhlYVNwUUIzV0kvYWxlc25MY0JySmpBZkJnTlZIU01FR0RBV2dCU1RwSFp2WGVhUwpwUUIzV0kvYWxlc25MY0JySmpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElDCkFRQmZGN0pDWkMvcTRqTkYyK2dySVd2aHBsa0EvbFdneTRkQ3Q0K1VvR3ZSdE1ibk5KSUNUVnYybkZhb0RlOUUKNkRWUXVzQm1LaHcyUm1HQmVpaWdJQzMwZjN0RnFpamJMb1kxN0owL2h0SU9ETm5qWTZNQUpXSEk4UHloVGtiTwoxZWZHc0tuY29IQXZMVjVxbkdtYjdwT1NHQlRVbGhGNU1vQk1NYUI1ZDg4OFMwSU45ejNRa29BVjRMUjFUZU1UCmZFWUlMbHIyeE9FMjArVi9DV1o4RGxRMUlSellYaTFaU2Q1UDBoOTBJVnk0TFg1RXpYMjAxVkdKV2FvUzZwN1UKM1RUd2dtOFp0am5xSTRmbFNCNmpWMk93UVhzN2doYm16RzlCY3hOYlc3czRNQWVWem53bEdnZkxMc0NGSjhnSQo4bzN0SVkrc1JtSHNQRi90aDF2bUtkTE1VTFo4bXVYOXNWRDZOOXVOaTROQUtKK1dZclZNdUYvWWNxd0w2S1BXCmJyS0hkdXZXZ01DZGNHY20wZU1ETUFWUzVOQXNLaFpCVFdINlIxVjY2T0NGRE05NW15VXJnYy9BQ2Q5MDRnNWQKK0JsRERBT01LYWRKeEk1cFdJTzl6eno0ek55dzBrSW56VllaNlNDS0w4ZEplaGlvUldIQUZCUkkxUk5SSVVHTQo5d21ZWTlqcThCckFqRm1KY1VkVFcvN0FPR3ZhTnIzUVhsbWFVK1dqTVQ1b2Y4MDJHM2VZQUZlQ29TcDRBT3NsCldtQWxpS0MybktXWWhwYUVTdU5QcEppL3dnMmZEOXVSTjhjcHN1YzB5TVpGaFlaU2xZNGEwYU9iWGF3Um5LVnkKbjNjL2V6RFA5dmh5NUR0MFBaTFc4UlJqNW9BWGJ3UlJkaVZWZkdOUmNGT1Fldz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0= | base64 --decode | sudo tee -a /etc/pki/tls/certs/ca-bundle.crt > /dev/null
SSH cmd err, output: <nil>:
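The command above decodes a base64 payload into a PEM certificate and appends it to the VM's CA trust bundle at /etc/pki/tls/certs/ca-bundle.crt. A minimal sketch for inspecting such a payload on the host before trusting it (assuming the base64 string from the log is saved to cert.b64 and openssl is installed):

  # decode the payload, then print the certificate's subject, issuer and validity window
  base64 --decode cert.b64 > proxy-ca.pem
  openssl x509 -in proxy-ca.pem -noout -subject -issuer -dates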
(minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .DriverName
(minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
-- Checking for IP address ... (minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
OK
-- Checking for nameservers ... (minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
cat /etc/resolv.conf | grep -i '^nameserver' | wc -l | tr -d '
'
SSH cmd err, output: <nil>: 1
OK
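The nameserver probe above pipes cat into grep, wc and tr; grep can do the counting on its own (the tr -d at the end of the original command only strips the trailing newline):

  grep -ci '^nameserver' /etc/resolv.conf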
-- Checking if external host is reachable from the Minishift VM ...
Pinging ocp-docker-minishift-virtual.repo.dev.corp.com ... (minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo ping -c1 -w1 ocp-docker-minishift-virtual.repo.dev.corp.com
SSH cmd err, output: <nil>: PING ocp-docker-minishift-virtual.repo.dev.corp.com (10.164.36.44) 56(84) bytes of data.
64 bytes from 10.164.36.44 (10.164.36.44): icmp_seq=1 ttl=249 time=2.98 ms
--- ocp-docker-minishift-virtual.repo.dev.corp.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.983/2.983/2.983/0.000 ms
OK
-- Checking HTTP connectivity from the VM ... SKIP
-- Checking if persistent storage volume is mounted ... (minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
if grep -qs /mnt/?da1 /proc/mounts; then echo '1'; else echo '0'; fi
SSH cmd err, output: <nil>: 1
OK
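A note on the mount check above: /mnt/?da1 is a shell glob, not part of the grep pattern. Because the mount point /mnt/sda1 exists, the shell expands the glob to /mnt/sda1 before grep runs (on a virtio disk it would expand to /mnt/vda1 instead), and -qs keeps grep quiet about both matches and unreadable files. A sketch that matches either device name as a plain regex, with no reliance on glob expansion:

  grep -qs '/mnt/[sv]da1' /proc/mounts && echo 1 || echo 0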
-- Checking available disk space ... (minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
df -h /mnt/?da1 | awk 'FNR > 1 {print $2,$5,$6}'
SSH cmd err, output: <nil>: 48G 1% /mnt/sda1
1% used OK
-- Writing current configuration for static assignment of IP address ... (minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
ip a |grep -i '192.168.10.10' | awk '{print $NF}' | tr -d '
'
SSH cmd err, output: <nil>: eth0
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
ip -o -f inet addr show eth0 | head -n1 | awk '/scope global/ {print $4}'
SSH cmd err, output: <nil>: 192.168.10.10/24
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
cat /etc/resolv.conf |grep -i '^nameserver' | cut -d ' ' -f2 | tr '
' ' '
SSH cmd err, output: <nil>: ***dns ip***
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
route -n | grep 'UG[ ]' | awk '{print $2}' | tr -d '
'
SSH cmd err, output: <nil>: 192.168.10.1
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
echo REVWSUNFPWV0aDAKSVBBRERSPTE5Mi4xNjguMTAuMTAKTkVUTUFTSz0yNApHQVRFV0FZPTE5Mi4xNjguMTAuMQpETlMxPTEwLjE2Ny4wLjEwMApETlMyPQo= | base64 --decode | sudo tee /var/lib/minishift/networking-eth0 > /dev/null
SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
echo REVWSUNFPWV0aDEKRElTQUJMRUQ9dHJ1ZQo= | base64 --decode | sudo tee /var/lib/minishift/networking-eth1 > /dev/null
SSH cmd err, output: <nil>:
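Both payloads written above are small ifcfg-style files; decoding them shows the static network configuration minishift persists for the next boot. The eth1 payload, for example, decodes to a stanza that simply disables the interface:

  $ echo REVWSUNFPWV0aDEKRElTQUJMRUQ9dHJ1ZQo= | base64 --decode
  DEVICE=eth1
  DISABLED=true

The eth0 payload carries DEVICE, IPADDR, NETMASK, GATEWAY, DNS1 and DNS2 fields populated from the probes just above (192.168.10.10/24 via gateway 192.168.10.1, plus the nameserver read from /etc/resolv.conf).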
OK
Found binary path at C:\WS\minishift\bin\minishift.exe
Launching plugin server for driver hyperv
Plugin server listening at address 127.0.0.1:59638
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker version --format '{{.Server.APIVersion}}'
SSH cmd err, output: <nil>: 1.26
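The --format flag here is a Go template evaluated against the structured 'docker version' output, so any field can be extracted the same way. For instance, to print the client and server API versions side by side (assuming a Docker CLI recent enough to expose the .Client struct):

  docker version --format '{{.Client.APIVersion}} {{.Server.APIVersion}}'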
(minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHUsername
(minishift) Calling .GetSSHUsername
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
sudo install -d -o docker -g docker -m 755 /var/lib/minishift/base /var/lib/minishift/bin
SSH cmd err, output: <nil>:
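install -d creates both minishift directories with owner, group and mode set in a single step; the rough long-hand equivalent would be three commands:

  sudo mkdir -p /var/lib/minishift/base /var/lib/minishift/bin
  sudo chown docker:docker /var/lib/minishift/base /var/lib/minishift/bin
  sudo chmod 755 /var/lib/minishift/base /var/lib/minishift/bin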
Found binary path at C:\WS\minishift\bin\minishift.exe
Launching plugin server for driver hyperv
Plugin server listening at address 127.0.0.1:59648
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker version --format '{{.Server.APIVersion}}'
SSH cmd err, output: <nil>: 1.26
(minishift) Calling .GetIP
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Importing 'registry.access.redhat.com/openshift3/ose-cli:v3.11.43' . CACHE MISS
Importing 'registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43' . CACHE MISS
Importing 'registry.access.redhat.com/openshift3/ose-hyperkube:v3.11.43' . CACHE MISS
Importing 'registry.access.redhat.com/openshift3/ose-hypershift:v3.11.43' . CACHE MISS
Importing 'registry.access.redhat.com/openshift3/ose-node:v3.11.43' . CACHE MISS
Importing 'registry.access.redhat.com/openshift3/ose-pod:v3.11.43' CACHE MISS
Importing 'registry.access.redhat.com/openshift3/ose-docker-registry:v3.11.43' CACHE MISS
Importing 'registry.access.redhat.com/openshift3/ose-haproxy-router:v3.11.43' CACHE MISS
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge
SSH cmd err, output: <nil>: 172.17.0.0/16
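The template above walks .IPAM.Config to pull the default bridge network's subnet; the 172.17.0.0/16 it returns is the first entry in the --no-proxy list passed to 'oc cluster up' further down, so container-to-container traffic bypasses the corporate proxy. The same query can return the bridge gateway instead (assuming the usual single-entry IPAM config):

  docker network inspect -f "{{range .IPAM.Config}}{{.Gateway}}{{end}}" bridge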
(minishift) Calling .GetSSHUsername
-- OpenShift cluster will be configured with ...
Version: v3.11.43
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker images -q registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43
SSH cmd err, output: <nil>:
-- Pulling the OpenShift Container Image (minishift) Calling .GetSSHHostname
.(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
.(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker pull registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43
........................SSH cmd err, output: <nil>: Trying to pull repository registry.access.redhat.com/openshift3/ose-control-plane ...
sha256:adf53b055e13699154b6e084603b84e0ae7df8c33454e43e9530fd3eb5533977: Pulling from registry.access.redhat.com/openshift3/ose-control-plane
9a1bea865f79: Pulling fs layer
602125c154e3: Pulling fs layer
12f4e4c20da2: Pulling fs layer
b598aebf1511: Pulling fs layer
899256dd9531: Pulling fs layer
b598aebf1511: Waiting
899256dd9531: Waiting
602125c154e3: Download complete
12f4e4c20da2: Verifying Checksum
12f4e4c20da2: Download complete
b598aebf1511: Verifying Checksum
b598aebf1511: Download complete
899256dd9531: Verifying Checksum
899256dd9531: Download complete
9a1bea865f79: Verifying Checksum
9a1bea865f79: Download complete
9a1bea865f79: Pull complete
602125c154e3: Pull complete
12f4e4c20da2: Pull complete
b598aebf1511: Pull complete
899256dd9531: Pull complete
Digest: sha256:adf53b055e13699154b6e084603b84e0ae7df8c33454e43e9530fd3eb5533977
Status: Downloaded newer image for registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43
OK
-- Copying oc binary from the OpenShift container image to VM ...(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker create --name tmp registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43
SSH cmd err, output: <nil>: 835a7d3e198d7b350e3901042cbc7355fcb457406185fd230750f6de4c3eef4b
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker cp tmp:/usr/bin/oc /var/lib/minishift/bin
SSH cmd err, output: <nil>:
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) Calling .GetSSHPort
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker stop tmp
SSH cmd err, output: <nil>: tmp
(minishift) Calling .GetSSHHostname
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
docker rm tmp
SSH cmd err, output: <nil>: tmp
OK
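The four commands above are the standard pattern for extracting a single file from an image without ever running it: create a stopped container, cp the file out, then remove the container. Condensed, with the same names as in the log:

  docker create --name tmp registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43
  docker cp tmp:/usr/bin/oc /var/lib/minishift/bin
  docker rm tmp   # the intervening 'docker stop tmp' succeeds but is a no-op for a never-started container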
-- Starting OpenShift cluster -- Running 'oc' with: 'cluster up --https-proxy http://corp-proxy.com:8080 --image 'registry.access.redhat.com/openshift3/ose-${component}:v3.11.43' --no-proxy 172.17.0.0/16,localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10 --public-hostname 192.168.10.10 --routing-suffix apps.local.dev.corp.com --base-dir /var/lib/minishift/base --http-proxy http://corp-proxy.com:8080'
(minishift) Calling .GetSSHHostname
.(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minishift ).state
(minishift) DBG | [stdout =====>] : Running
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) DBG | [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minishift ).networkadapters[0]).ipaddresses[0]
(minishift) DBG | [stdout =====>] : 192.168.10.10
(minishift) DBG |
(minishift) DBG | [stderr =====>] :
(minishift) Calling .GetSSHPort
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHKeyPath
(minishift) Calling .GetSSHUsername
Using SSH client type: external
&{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.10.10 -o IdentitiesOnly=yes -i C:\WS\minishift\.minishift\machines\minishift\id_rsa -p 22] C:\Program Files\Git\usr\bin\ssh.exe <nil>}
About to run SSH command:
/var/lib/minishift/bin/oc cluster up --https-proxy http://corp-proxy.com:8080 --image 'registry.access.redhat.com/openshift3/ose-${component}:v3.11.43' --no-proxy 172.17.0.0/16,localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10 --public-hostname 192.168.10.10 --routing-suffix apps.local.dev.corp.com --base-dir /var/lib/minishift/base --http-proxy http://corp-proxy.com:8080
....................................................................................SSH cmd err, output: exit status 1: Getting a Docker client ...
Checking if image registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43 is available ...
Pulling image registry.access.redhat.com/openshift3/ose-cli:v3.11.43
E1212 05:58:19.964422 7885 helper.go:173] Reading docker config from /home/docker/.docker/config.json failed: open /home/docker/.docker/config.json: no such file or directory, will attempt to pull image registry.access.redhat.com/openshift3/ose-cli:v3.11.43 anonymously
Image pull complete
Pulling image registry.access.redhat.com/openshift3/ose-node:v3.11.43
E1212 05:58:32.016676 7885 helper.go:173] Reading docker config from /home/docker/.docker/config.json failed: open /home/docker/.docker/config.json: no such file or directory, will attempt to pull image registry.access.redhat.com/openshift3/ose-node:v3.11.43 anonymously
Pulled 5/6 layers, 84% complete
Pulled 6/6 layers, 100% complete
Extracting
Image pull complete
Checking type of volume mount ...
Determining server IP ...
Using public hostname IP 192.168.10.10 as the host IP
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43 is available ...
Starting OpenShift using registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43 ...
I1212 05:59:24.657444 7885 config.go:40] Running "create-master-config"
I1212 05:59:30.396209 7885 config.go:46] Running "create-node-config"
I1212 05:59:31.971754 7885 flags.go:30] Running "create-kubelet-flags"
I1212 05:59:32.887662 7885 run_kubelet.go:49] Running "start-kubelet"
I1212 05:59:33.405592 7885 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
I1212 06:01:19.462721 7885 interface.go:26] Installing "kube-proxy" ...
I1212 06:01:19.463851 7885 interface.go:26] Installing "kube-dns" ...
I1212 06:01:19.463873 7885 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I1212 06:01:19.463945 7885 interface.go:26] Installing "openshift-apiserver" ...
I1212 06:01:19.464023 7885 apply_template.go:81] Installing "openshift-apiserver"
I1212 06:01:19.464368 7885 apply_template.go:81] Installing "kube-proxy"
I1212 06:01:19.469969 7885 apply_template.go:81] Installing "kube-dns"
I1212 06:01:19.470244 7885 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I1212 06:01:26.986749 7885 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
Error: timed out waiting for the condition
Error during 'cluster up' execution: Error starting the cluster. ssh command error:
command : /var/lib/minishift/bin/oc cluster up --https-proxy http://corp-proxy.com:8080 --image 'registry.access.redhat.com/openshift3/ose-${component}:v3.11.43' --no-proxy 172.17.0.0/16,localhost,127.0.0.1,172.30.1.1,.corp.com,.corp2.com,.svc,192.168.10.10,192.168.10.10 --public-hostname 192.168.10.10 --routing-suffix apps.local.dev.corp.com --base-dir /var/lib/minishift/base --http-proxy http://corp-proxy.com:8080
err : exit status 1
output : Getting a Docker client ...
Checking if image registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43 is available ...
Pulling image registry.access.redhat.com/openshift3/ose-cli:v3.11.43
E1212 05:58:19.964422 7885 helper.go:173] Reading docker config from /home/docker/.docker/config.json failed: open /home/docker/.docker/config.json: no such file or directory, will attempt to pull image registry.access.redhat.com/openshift3/ose-cli:v3.11.43 anonymously
Image pull complete
Pulling image registry.access.redhat.com/openshift3/ose-node:v3.11.43
E1212 05:58:32.016676 7885 helper.go:173] Reading docker config from /home/docker/.docker/config.json failed: open /home/docker/.docker/config.json: no such file or directory, will attempt to pull image registry.access.redhat.com/openshift3/ose-node:v3.11.43 anonymously
Pulled 5/6 layers, 84% complete
Pulled 6/6 layers, 100% complete
Extracting
Image pull complete
Checking type of volume mount ...
Determining server IP ...
Using public hostname IP 192.168.10.10 as the host IP
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43 is available ...
Starting OpenShift using registry.access.redhat.com/openshift3/ose-control-plane:v3.11.43 ...
I1212 05:59:24.657444 7885 config.go:40] Running "create-master-config"
I1212 05:59:30.396209 7885 config.go:46] Running "create-node-config"
I1212 05:59:31.971754 7885 flags.go:30] Running "create-kubelet-flags"
I1212 05:59:32.887662 7885 run_kubelet.go:49] Running "start-kubelet"
I1212 05:59:33.405592 7885 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
I1212 06:01:19.462721 7885 interface.go:26] Installing "kube-proxy" ...
I1212 06:01:19.463851 7885 interface.go:26] Installing "kube-dns" ...
I1212 06:01:19.463873 7885 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I1212 06:01:19.463945 7885 interface.go:26] Installing "openshift-apiserver" ...
I1212 06:01:19.464023 7885 apply_template.go:81] Installing "openshift-apiserver"
I1212 06:01:19.464368 7885 apply_template.go:81] Installing "kube-proxy"
I1212 06:01:19.469969 7885 apply_template.go:81] Installing "kube-dns"
I1212 06:01:19.470244 7885 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I1212 06:01:26.986749 7885 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
Error: timed out waiting for the condition
Cannot get the OpenShift master configuration: ssh command error:
command : docker exec -t cat /etc/origin/master/master-config.yaml
err : exit status 1
output : Error response from daemon: No such container: cat
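That last error is a symptom rather than the root cause: the probe 'docker exec -t cat /etc/origin/master/master-config.yaml' has no container name in it (the control-plane container minishift expected to interrogate appears never to have come up after the 'timed out waiting for the condition' failure), so Docker parses 'cat' as the container name and reports 'No such container: cat'. Against a healthy cluster the same probe would look like, for some control-plane container NAME:

  docker exec -t NAME cat /etc/origin/master/master-config.yaml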