I1209 18:15:33.877080 20134 start_master.go:207] Generating master configuration
I1209 18:15:33.879274 20134 create_mastercerts.go:165] Creating all certs with: admin.CreateMasterCertsOptions{CertDir:"openshift.local.config/master", SignerName:"openshift-signer@1481307333", APIServerCAFiles:[]string(nil), CABundleFile:"openshift.local.config/master/ca-bundle.crt", Hostnames:[]string{"127.0.0.1", "172.17.0.1", "172.30.0.1", "192.168.121.18", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "localhost", "openshift", "openshift.default", "openshift.default.svc", "openshift.default.svc.cluster.local"}, APIServerURL:"https://192.168.121.18:8443", PublicAPIServerURL:"https://192.168.121.18:8443", Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)}
I1209 18:15:33.879750 20134 create_signercert.go:89] Creating a signer cert with: admin.CreateSignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", Name:"openshift-signer@1481307333", Output:(*util.gLogWriter)(0xc82049868c), Overwrite:false}
I1209 18:15:33.880302 20134 crypto.go:297] Generating new CA for openshift-signer@1481307333 cert, and key in openshift.local.config/master/ca.crt, openshift.local.config/master/ca.key
I1209 18:15:34.258475 20134 create_signercert.go:99] Generated new CA for openshift-signer@1481307333: cert in openshift.local.config/master/ca.crt and key in openshift.local.config/master/ca.key
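[editor's note] The lines above show the cluster CA being written to openshift.local.config/master/ca.crt and ca.key. A minimal, standalone Go sketch (not part of Origin) for inspecting that generated CA with the standard library, assuming only the file path taken from the log:

// inspect_ca.go -- print basic details of the generated signer CA.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path copied from the log above; adjust if your config dir differs.
	data, err := os.ReadFile("openshift.local.config/master/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Subject:  ", cert.Subject.CommonName) // e.g. openshift-signer@1481307333
	fmt.Println("IsCA:     ", cert.IsCA)
	fmt.Println("NotBefore:", cert.NotBefore)
	fmt.Println("NotAfter: ", cert.NotAfter)
}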
I1209 18:15:34.258644 20134 create_signercert.go:89] Creating a signer cert with: admin.CreateSignerCertOptions{CertFile:"openshift.local.config/master/service-signer.crt", KeyFile:"openshift.local.config/master/service-signer.key", SerialFile:"", Name:"openshift-service-serving-signer@1481307334", Output:(*util.gLogWriter)(0xc82049868c), Overwrite:false}
I1209 18:15:34.259070 20134 crypto.go:297] Generating new CA for openshift-service-serving-signer@1481307334 cert, and key in openshift.local.config/master/service-signer.crt, openshift.local.config/master/service-signer.key
I1209 18:15:34.259947 20134 create_clientcert.go:52] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/master.etcd-client.crt", KeyFile:"openshift.local.config/master/master.etcd-client.key", User:"system:master", Groups:[]string{}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:0, sema:0x0}, ca:(*crypto.CA)(nil)}
I1209 18:15:34.261330 20134 create_clientcert.go:52] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/master.kubelet-client.crt", KeyFile:"openshift.local.config/master/master.kubelet-client.key", User:"system:openshift-node-admin", Groups:[]string{"system:node-admins"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:1, sema:0x0}, ca:(*crypto.CA)(nil)}
I1209 18:15:34.261431 20134 create_clientcert.go:52] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/master.proxy-client.crt", KeyFile:"openshift.local.config/master/master.proxy-client.key", User:"system:master-proxy", Groups:[]string{}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:5, sema:0x0}, ca:(*crypto.CA)(nil)}
I1209 18:15:34.261535 20134 create_keypair.go:91] Creating a key pair with: admin.CreateKeyPairOptions{PublicKeyFile:"openshift.local.config/master/serviceaccounts.public.key", PrivateKeyFile:"openshift.local.config/master/serviceaccounts.private.key", Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)}
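[editor's note] create_keypair here produces the service-account signing key pair named above. The log does not show the key type or size Origin uses, so the RSA-2048 choice below is an assumption; this is only a conceptual sketch of "generate a key pair and write public/private PEM files", not Origin's implementation:

// keypair_sketch.go -- illustrative key pair generation analogous to
// serviceaccounts.private.key / serviceaccounts.public.key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
)

func main() {
	// Assumption: RSA 2048; the actual parameters are not visible in this log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Private key, PKCS#1 PEM.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})

	// Public key, PKIX PEM.
	pubDER, err := x509.MarshalPKIXPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	pubPEM := pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: pubDER})

	if err := os.WriteFile("serviceaccounts.private.key", privPEM, 0600); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("serviceaccounts.public.key", pubPEM, 0644); err != nil {
		log.Fatal(err)
	}
}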
I1209 18:15:34.263236 20134 crypto.go:404] Generating client cert in openshift.local.config/master/master.etcd-client.crt and key in openshift.local.config/master/master.etcd-client.key
I1209 18:15:34.271373 20134 crypto.go:404] Generating client cert in openshift.local.config/master/master.kubelet-client.crt and key in openshift.local.config/master/master.kubelet-client.key
I1209 18:15:34.281432 20134 crypto.go:404] Generating client cert in openshift.local.config/master/master.proxy-client.crt and key in openshift.local.config/master/master.proxy-client.key
I1209 18:15:34.291682 20134 create_servercert.go:107] Creating a server cert with: admin.CreateServerCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/master.server.crt", KeyFile:"openshift.local.config/master/master.server.key", Hostnames:[]string{"127.0.0.1", "172.17.0.1", "172.30.0.1", "192.168.121.18", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "localhost", "openshift", "openshift.default", "openshift.default.svc", "openshift.default.svc.cluster.local"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)}
I1209 18:15:34.292170 20134 crypto.go:367] Generating server certificate in openshift.local.config/master/master.server.crt, key in openshift.local.config/master/master.server.key
I1209 18:15:34.311621 20134 create_clientcert.go:52] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/openshift-master.crt", KeyFile:"openshift.local.config/master/openshift-master.key", User:"system:openshift-master", Groups:[]string{"system:masters"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:0, sema:0x0}, ca:(*crypto.CA)(0xc8205ab3a0)}
I1209 18:15:34.312039 20134 crypto.go:404] Generating client cert in openshift.local.config/master/openshift-master.crt and key in openshift.local.config/master/openshift-master.key
I1209 18:15:34.921406 20134 create_clientcert.go:68] Generated new client cert as openshift.local.config/master/master.kubelet-client.crt and key as openshift.local.config/master/master.kubelet-client.key
I1209 18:15:35.115714 20134 create_servercert.go:122] Generated new server certificate as openshift.local.config/master/master.server.crt, key as openshift.local.config/master/master.server.key
I1209 18:15:35.115740 20134 create_servercert.go:107] Creating a server cert with: admin.CreateServerCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/master.server.crt", KeyFile:"openshift.local.config/master/master.server.key", Hostnames:[]string{"127.0.0.1", "172.17.0.1", "172.30.0.1", "192.168.121.18", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "localhost", "openshift", "openshift.default", "openshift.default.svc", "openshift.default.svc.cluster.local"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)}
I1209 18:15:35.117374 20134 crypto.go:359] Found existing server certificate in openshift.local.config/master/master.server.crt
I1209 18:15:35.117388 20134 create_servercert.go:124] Keeping existing server certificate at openshift.local.config/master/master.server.crt, key at openshift.local.config/master/master.server.key
I1209 18:15:35.117834 20134 create_servercert.go:107] Creating a server cert with: admin.CreateServerCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/etcd.server.crt", KeyFile:"openshift.local.config/master/etcd.server.key", Hostnames:[]string{"127.0.0.1", "172.17.0.1", "172.30.0.1", "192.168.121.18", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster.local", "localhost", "openshift", "openshift.default", "openshift.default.svc", "openshift.default.svc.cluster.local"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)}
I1209 18:15:35.118094 20134 crypto.go:367] Generating server certificate in openshift.local.config/master/etcd.server.crt, key in openshift.local.config/master/etcd.server.key
I1209 18:15:35.271751 20134 create_keypair.go:117] Generated new key pair as openshift.local.config/master/serviceaccounts.public.key and openshift.local.config/master/serviceaccounts.private.key
I1209 18:15:35.288007 20134 create_clientcert.go:68] Generated new client cert as openshift.local.config/master/master.etcd-client.crt and key as openshift.local.config/master/master.etcd-client.key
I1209 18:15:35.312172 20134 create_clientcert.go:68] Generated new client cert as openshift.local.config/master/master.proxy-client.crt and key as openshift.local.config/master/master.proxy-client.key
I1209 18:15:35.455141 20134 create_clientcert.go:68] Generated new client cert as openshift.local.config/master/openshift-master.crt and key as openshift.local.config/master/openshift-master.key
I1209 18:15:35.455322 20134 create_kubeconfig.go:142] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://192.168.121.18:8443", PublicAPIServerURL:"https://192.168.121.18:8443", APIServerCAFiles:[]string{"openshift.local.config/master/ca.crt"}, CertFile:"openshift.local.config/master/openshift-master.crt", KeyFile:"openshift.local.config/master/openshift-master.key", ContextNamespace:"default", KubeConfigFile:"openshift.local.config/master/openshift-master.kubeconfig", Output:(*util.gLogWriter)(0xc82049868c)}
I1209 18:15:35.456358 20134 create_kubeconfig.go:210] Generating 'system:openshift-master/192-168-121-18:8443' API client config as openshift.local.config/master/openshift-master.kubeconfig
I1209 18:15:35.458744 20134 create_clientcert.go:52] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/admin.crt", KeyFile:"openshift.local.config/master/admin.key", User:"system:admin", Groups:[]string{"system:cluster-admins"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:0, sema:0x0}, ca:(*crypto.CA)(0xc8205ab3a0)}
I1209 18:15:35.458827 20134 crypto.go:404] Generating client cert in openshift.local.config/master/admin.crt and key in openshift.local.config/master/admin.key
I1209 18:15:35.502498 20134 create_signercert.go:99] Generated new CA for openshift-service-serving-signer@1481307334: cert in openshift.local.config/master/service-signer.crt and key in openshift.local.config/master/service-signer.key
I1209 18:15:35.641502 20134 create_servercert.go:122] Generated new server certificate as openshift.local.config/master/etcd.server.crt, key as openshift.local.config/master/etcd.server.key
I1209 18:15:36.086804 20134 create_clientcert.go:68] Generated new client cert as openshift.local.config/master/admin.crt and key as openshift.local.config/master/admin.key
I1209 18:15:36.086894 20134 create_kubeconfig.go:142] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://192.168.121.18:8443", PublicAPIServerURL:"https://192.168.121.18:8443", APIServerCAFiles:[]string{"openshift.local.config/master/ca.crt"}, CertFile:"openshift.local.config/master/admin.crt", KeyFile:"openshift.local.config/master/admin.key", ContextNamespace:"default", KubeConfigFile:"openshift.local.config/master/admin.kubeconfig", Output:(*util.gLogWriter)(0xc82049868c)}
I1209 18:15:36.088387 20134 create_kubeconfig.go:210] Generating 'system:admin/192-168-121-18:8443' API client config as openshift.local.config/master/admin.kubeconfig
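[editor's note] At this point the system:admin client cert, key, and kubeconfig exist. A minimal sketch (not Origin's code) of using those files to talk to the API server URL reported in the log; the cert/key/CA paths and https://192.168.121.18:8443 come from the log, while the /healthz path is just a generic health endpoint chosen for illustration:

// admin_client_sketch.go -- authenticate to the master with the admin client cert.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"openshift.local.config/master/admin.crt",
		"openshift.local.config/master/admin.key",
	)
	if err != nil {
		log.Fatal(err)
	}

	caPEM, err := os.ReadFile("openshift.local.config/master/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      pool,
			},
		},
	}

	resp, err := client.Get("https://192.168.121.18:8443/healthz")
	if err != nil {
		log.Fatal(err) // expected to fail until the API server is listening
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}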
I1209 18:15:36.089469 20134 create_clientcert.go:52] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/openshift-router.crt", KeyFile:"openshift.local.config/master/openshift-router.key", User:"system:openshift-router", Groups:[]string{"system:routers"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:0, sema:0x0}, ca:(*crypto.CA)(0xc8205ab3a0)}
I1209 18:15:36.089514 20134 crypto.go:404] Generating client cert in openshift.local.config/master/openshift-router.crt and key in openshift.local.config/master/openshift-router.key
I1209 18:15:36.260864 20134 create_clientcert.go:68] Generated new client cert as openshift.local.config/master/openshift-router.crt and key as openshift.local.config/master/openshift-router.key
I1209 18:15:36.261549 20134 create_kubeconfig.go:142] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://192.168.121.18:8443", PublicAPIServerURL:"https://192.168.121.18:8443", APIServerCAFiles:[]string{"openshift.local.config/master/ca.crt"}, CertFile:"openshift.local.config/master/openshift-router.crt", KeyFile:"openshift.local.config/master/openshift-router.key", ContextNamespace:"default", KubeConfigFile:"openshift.local.config/master/openshift-router.kubeconfig", Output:(*util.gLogWriter)(0xc82049868c)}
I1209 18:15:36.271500 20134 create_kubeconfig.go:210] Generating 'system:openshift-router/192-168-121-18:8443' API client config as openshift.local.config/master/openshift-router.kubeconfig
I1209 18:15:36.273408 20134 create_clientcert.go:52] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc8208adc00), CertFile:"openshift.local.config/master/openshift-registry.crt", KeyFile:"openshift.local.config/master/openshift-registry.key", User:"system:openshift-registry", Groups:[]string{"system:registries"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82049868c)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:0, sema:0x0}, ca:(*crypto.CA)(0xc8205ab3a0)}
I1209 18:15:36.273493 20134 crypto.go:404] Generating client cert in openshift.local.config/master/openshift-registry.crt and key in openshift.local.config/master/openshift-registry.key
I1209 18:15:36.486787 20134 create_clientcert.go:68] Generated new client cert as openshift.local.config/master/openshift-registry.crt and key as openshift.local.config/master/openshift-registry.key
I1209 18:15:36.487105 20134 create_kubeconfig.go:142] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://192.168.121.18:8443", PublicAPIServerURL:"https://192.168.121.18:8443", APIServerCAFiles:[]string{"openshift.local.config/master/ca.crt"}, CertFile:"openshift.local.config/master/openshift-registry.crt", KeyFile:"openshift.local.config/master/openshift-registry.key", ContextNamespace:"default", KubeConfigFile:"openshift.local.config/master/openshift-registry.kubeconfig", Output:(*util.gLogWriter)(0xc82049868c)}
I1209 18:15:36.488220 20134 create_kubeconfig.go:210] Generating 'system:openshift-registry/192-168-121-18:8443' API client config as openshift.local.config/master/openshift-registry.kubeconfig
W1209 18:15:36.501198 20134 start_master.go:277] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue.
W1209 18:15:36.501335 20134 start_master.go:277] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue.
W1209 18:15:36.501361 20134 start_master.go:277] Warning: auditConfig.auditFilePath: Required value: audit can now be logged to a separate file, master start will continue.
I1209 18:15:36.506400 20134 storage_factory.go:241] storing { clusterpolicies} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.506460 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterpolicies &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821194a80), Decoder:(*versioning.codec)(0xc821194b00)}}
I1209 18:15:36.507832 20134 storage_factory.go:241] storing { clusterpolicybindings} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.507905 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterpolicybindings &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821502d80), Decoder:(*versioning.codec)(0xc821502e00)}}
I1209 18:15:36.508674 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicy
I1209 18:15:36.508787 20134 reflector.go:249] Listing and watching *api.ClusterPolicy from pkg/storage/cacher.go:194
I1209 18:15:36.509154 20134 storage_factory.go:241] storing { policies} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.509207 20134 configgetter.go:155] using watch cache storage (capacity=1000) for policies &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82154ef80), Decoder:(*versioning.codec)(0xc82154f000)}}
I1209 18:15:36.510097 20134 storage_factory.go:241] storing { policybindings} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.510142 20134 configgetter.go:155] using watch cache storage (capacity=1000) for policybindings &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc8211f7180), Decoder:(*versioning.codec)(0xc8211f7200)}}
I1209 18:15:36.510816 20134 cacher.go:469] Terminating all watchers from cacher *api.Policy
I1209 18:15:36.510875 20134 reflector.go:249] Listing and watching *api.Policy from pkg/storage/cacher.go:194
I1209 18:15:36.511446 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicyBinding
I1209 18:15:36.511469 20134 reflector.go:249] Listing and watching *api.ClusterPolicyBinding from pkg/storage/cacher.go:194
I1209 18:15:36.511936 20134 storage_factory.go:241] storing { groups} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.511968 20134 configgetter.go:155] using watch cache storage (capacity=1000) for groups &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821241800), Decoder:(*versioning.codec)(0xc821241880)}}
E1209 18:15:36.513067 20134 cacher.go:254] unexpected ListAndWatch error: pkg/storage/cacher.go:194: Failed to list *api.ClusterPolicyBinding: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 192.168.121.18:4001: getsockopt: connection refused
E1209 18:15:36.513216 20134 cacher.go:254] unexpected ListAndWatch error: pkg/storage/cacher.go:194: Failed to list *api.Policy: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 192.168.121.18:4001: getsockopt: connection refused
I1209 18:15:36.514066 20134 cacher.go:469] Terminating all watchers from cacher *api.PolicyBinding
I1209 18:15:36.514866 20134 reflector.go:249] Listing and watching *api.PolicyBinding from pkg/storage/cacher.go:194
E1209 18:15:36.522191 20134 cacher.go:254] unexpected ListAndWatch error: pkg/storage/cacher.go:194: Failed to list *api.ClusterPolicy: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 192.168.121.18:4001: getsockopt: connection refused
I1209 18:15:36.522068 20134 cacher.go:469] Terminating all watchers from cacher *api.Group
I1209 18:15:36.523045 20134 reflector.go:249] Listing and watching *api.Group from pkg/storage/cacher.go:194
I1209 18:15:36.524495 20134 admission.go:99] Admission plugin ProjectRequestLimit is not enabled. It will not be started.
I1209 18:15:36.524558 20134 admission.go:99] Admission plugin PodNodeConstraints is not enabled. It will not be started.
I1209 18:15:36.524656 20134 reflector.go:200] Starting reflector *api.LimitRange (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:36.524679 20134 admission.go:99] Admission plugin RunOnceDuration is not enabled. It will not be started.
I1209 18:15:36.524694 20134 admission.go:99] Admission plugin PodNodeConstraints is not enabled. It will not be started.
I1209 18:15:36.524713 20134 admission.go:99] Admission plugin ClusterResourceOverride is not enabled. It will not be started.
I1209 18:15:36.527002 20134 imagepolicy.go:46] openshift.io/ImagePolicy admission controller loaded with config: &api.ImagePolicyConfig{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ResolveImages:"Attempt", ExecutionRules:[]api.ImageExecutionPolicyRule{api.ImageExecutionPolicyRule{ImageCondition:api.ImageCondition{Name:"execution-denied", IgnoreNamespaceOverride:false, OnResources:[]unversioned.GroupResource{unversioned.GroupResource{Group:"", Resource:"pods"}, unversioned.GroupResource{Group:"", Resource:"builds"}}, InvertMatch:false, MatchIntegratedRegistry:false, MatchRegistries:[]string(nil), SkipOnResolutionFailure:true, MatchDockerImageLabels:[]api.ValueCondition(nil), MatchImageLabels:[]unversioned.LabelSelector(nil), MatchImageLabelSelectors:[]labels.Selector(nil), MatchImageAnnotations:[]api.ValueCondition{api.ValueCondition{Key:"images.openshift.io/deny-execution", Set:false, Value:"true"}}}, Reject:true}}}
I1209 18:15:36.527455 20134 admission.go:99] Admission plugin ImagePolicyWebhook is not enabled. It will not be started.
I1209 18:15:36.527496 20134 reflector.go:200] Starting reflector *api.LimitRange (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:36.527542 20134 reflector.go:211] Starting reflector *api.ServiceAccount (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103
I1209 18:15:36.527556 20134 reflector.go:211] Starting reflector *api.Secret (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119
I1209 18:15:36.527592 20134 reflector.go:211] Starting reflector *storage.StorageClass (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62
I1209 18:15:36.527608 20134 admission.go:99] Admission plugin AlwaysPullImages is not enabled. It will not be started.
I1209 18:15:36.528426 20134 storage_factory.go:241] storing { serviceaccounts} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.530350 20134 storage_factory.go:241] storing { oauthaccesstokens} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.530420 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthaccesstokens &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:true, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82128e500), Decoder:(*versioning.codec)(0xc82128e580)}}
E1209 18:15:36.531171 20134 cacher.go:254] unexpected ListAndWatch error: pkg/storage/cacher.go:194: Failed to list *api.Group: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 192.168.121.18:4001: getsockopt: connection refused
I1209 18:15:36.531396 20134 storage_factory.go:241] storing { users} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.531432 20134 configgetter.go:155] using watch cache storage (capacity=1000) for users &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82101e700), Decoder:(*versioning.codec)(0xc82101e780)}}
E1209 18:15:36.532168 20134 cacher.go:254] unexpected ListAndWatch error: pkg/storage/cacher.go:194: Failed to list *api.PolicyBinding: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 192.168.121.18:4001: getsockopt: connection refused
I1209 18:15:36.533690 20134 plugins.go:71] No cloud provider specified.
I1209 18:15:36.534640 20134 start_master.go:394] Starting master on 0.0.0.0:8443 (v1.5.0-alpha.0+69afb3a-296)
I1209 18:15:36.534663 20134 start_master.go:395] Public master address is https://192.168.121.18:8443
I1209 18:15:36.534699 20134 start_master.go:399] Using images from "openshift/origin-<component>:v1.5.0-alpha.0"
2016-12-09 18:15:36.535511 I | embed: peerTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2016-12-09 18:15:36.537847 I | embed: listening for peers on https://0.0.0.0:7001
2016-12-09 18:15:36.538356 I | embed: listening for client requests on 0.0.0.0:4001
I1209 18:15:36.539990 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119
I1209 18:15:36.541348 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:36.542187 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:36.542987 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103
E1209 18:15:36.543678 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get https://192.168.121.18:8443/api/v1/serviceaccounts?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
E1209 18:15:36.544148 20134 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: Get https://192.168.121.18:8443/api/v1/limitranges?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
E1209 18:15:36.544600 20134 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: Get https://192.168.121.18:8443/api/v1/limitranges?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
E1209 18:15:36.545032 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get https://192.168.121.18:8443/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
I1209 18:15:36.545433 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthAccessToken
I1209 18:15:36.545791 20134 reflector.go:249] Listing and watching *api.OAuthAccessToken from pkg/storage/cacher.go:194
I1209 18:15:36.546407 20134 reflector.go:249] Listing and watching *storage.StorageClass from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62
2016-12-09 18:15:36.548508 I | etcdserver: name = openshift.local
2016-12-09 18:15:36.548867 I | etcdserver: data dir = openshift.local.etcd
2016-12-09 18:15:36.549155 I | etcdserver: member dir = openshift.local.etcd/member
2016-12-09 18:15:36.549467 I | etcdserver: heartbeat = 100ms
2016-12-09 18:15:36.549757 I | etcdserver: election = 1000ms
2016-12-09 18:15:36.550019 I | etcdserver: snapshot count = 10000
2016-12-09 18:15:36.550293 I | etcdserver: advertise client URLs = https://192.168.121.18:4001
2016-12-09 18:15:36.550617 I | etcdserver: initial advertise peer URLs = https://192.168.121.18:7001
2016-12-09 18:15:36.550898 I | etcdserver: initial cluster = openshift.local=https://192.168.121.18:7001
I1209 18:15:36.551579 20134 reflector.go:211] Starting reflector *api.ResourceQuota (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83
I1209 18:15:36.552087 20134 reflector.go:249] Listing and watching *api.ResourceQuota from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83
E1209 18:15:36.557486 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get https://192.168.121.18:8443/api/v1/resourcequotas?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
E1209 18:15:36.557938 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62: Failed to list *storage.StorageClass: Get https://192.168.121.18:8443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
I1209 18:15:36.558322 20134 cacher.go:469] Terminating all watchers from cacher *api.User
I1209 18:15:36.558791 20134 reflector.go:249] Listing and watching *api.User from pkg/storage/cacher.go:194
2016-12-09 18:15:36.561171 I | etcdserver: starting member 55dd07c22a1fd14a in cluster 9c9f77d2f039c07d
2016-12-09 18:15:36.561509 I | raft: 55dd07c22a1fd14a became follower at term 0
2016-12-09 18:15:36.561788 I | raft: newRaft 55dd07c22a1fd14a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2016-12-09 18:15:36.562032 I | raft: 55dd07c22a1fd14a became follower at term 1
2016-12-09 18:15:36.577852 I | etcdserver: starting server... [version: 3.1.0-rc.0, cluster version: to_be_decided]
2016-12-09 18:15:36.578185 I | embed: ClientTLS: cert = openshift.local.config/master/etcd.server.crt, key = openshift.local.config/master/etcd.server.key, ca = openshift.local.config/master/ca.crt, trusted-ca = , client-cert-auth = true
2016-12-09 18:15:36.580118 I | etcdserver/membership: added member 55dd07c22a1fd14a [https://192.168.121.18:7001] to cluster 9c9f77d2f039c07d
2016-12-09 18:15:36.662837 I | raft: 55dd07c22a1fd14a is starting a new election at term 1
2016-12-09 18:15:36.663271 I | raft: 55dd07c22a1fd14a became candidate at term 2
2016-12-09 18:15:36.663310 I | raft: 55dd07c22a1fd14a received vote from 55dd07c22a1fd14a at term 2
2016-12-09 18:15:36.663379 I | raft: 55dd07c22a1fd14a became leader at term 2
2016-12-09 18:15:36.663418 I | raft: raft.node: 55dd07c22a1fd14a elected leader 55dd07c22a1fd14a at term 2
2016-12-09 18:15:36.665694 I | etcdserver: published {Name:openshift.local ClientURLs:[https://192.168.121.18:4001]} to cluster 9c9f77d2f039c07d
I1209 18:15:36.666539 20134 run.go:77] Started etcd at 192.168.121.18:4001
2016-12-09 18:15:36.666615 I | etcdserver: setting up the initial cluster version to 3.1
2016-12-09 18:15:36.667505 I | embed: ready to serve client requests
2016-12-09 18:15:36.668213 N | etcdserver/membership: set the initial cluster version to 3.1
2016-12-09 18:15:36.668256 I | etcdserver/api: enabled capabilities for version 3.1
2016-12-09 18:15:36.669946 I | embed: serving client requests on [::]:4001
2016-12-09 18:15:36.745970 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: bad certificate"; please retry.
2016-12-09 18:15:36.779995 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: bad certificate"; please retry.
2016-12-09 18:15:36.781038 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: bad certificate"; please retry.
2016-12-09 18:15:36.789776 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: bad certificate"; please retry.
I1209 18:15:36.790620 20134 reflector.go:200] Starting reflector *api.Group (2m0s) from github.com/openshift/origin/pkg/user/cache/groups.go:38
I1209 18:15:36.790926 20134 run_components.go:229] Using default project node label selector:
I1209 18:15:36.791194 20134 reflector.go:200] Starting reflector *api.Namespace (0) from github.com/openshift/origin/pkg/project/cache/cache.go:95
I1209 18:15:36.791697 20134 configgetter.go:155] using watch cache storage (capacity=1000) for users &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82101e700), Decoder:(*versioning.codec)(0xc82101e780)}}
2016-12-09 18:15:36.792805 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: bad certificate"; please retry.
2016-12-09 18:15:36.792843 I | etcdserver/api/v3rpc: Failed to dial [::]:4001: connection error: desc = "transport: remote error: bad certificate"; please retry.
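[editor's note] The "bad certificate" retries above are consistent with the ClientTLS line earlier: client-cert-auth = true, so any connection to port 4001 that does not present a certificate signed by ca.crt is rejected during the TLS handshake. A minimal sketch (not Origin's code) of a handshake that should succeed, using the master.etcd-client cert and the advertised client URL from the log:

// etcd_tls_sketch.go -- TLS handshake against the etcd client port with the
// etcd client certificate; omitting Certificates reproduces "bad certificate".
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"openshift.local.config/master/master.etcd-client.crt",
		"openshift.local.config/master/master.etcd-client.key",
	)
	if err != nil {
		log.Fatal(err)
	}

	caPEM, err := os.ReadFile("openshift.local.config/master/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	conn, err := tls.Dial("tcp", "192.168.121.18:4001", &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Printf("TLS handshake with etcd succeeded (version 0x%x)\n", conn.ConnectionState().Version)
}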
I1209 18:15:36.793687 20134 storage_factory.go:241] storing { identities} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.794001 20134 reflector.go:211] Starting reflector *api.ClusterPolicy (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.794030 20134 reflector.go:249] Listing and watching *api.ClusterPolicy from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.794245 20134 reflector.go:211] Starting reflector *api.ClusterPolicyBinding (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.794264 20134 reflector.go:249] Listing and watching *api.ClusterPolicyBinding from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.794474 20134 reflector.go:211] Starting reflector *api.ClusterResourceQuota (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.794493 20134 reflector.go:249] Listing and watching *api.ClusterResourceQuota from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
E1209 18:15:36.794825 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105: Failed to list *api.ClusterResourceQuota: Get https://192.168.121.18:8443/oapi/v1/clusterresourcequotas?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
I1209 18:15:36.794845 20134 reflector.go:249] Listing and watching *api.Group from github.com/openshift/origin/pkg/user/cache/groups.go:38
I1209 18:15:36.795056 20134 reflector.go:211] Starting reflector *api.PolicyBinding (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.795078 20134 reflector.go:249] Listing and watching *api.PolicyBinding from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.795118 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:36.795138 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/project/cache/cache.go:95
E1209 18:15:36.795371 20134 reflector.go:203] github.com/openshift/origin/pkg/project/cache/cache.go:95: Failed to list *api.Namespace: Get https://192.168.121.18:8443/api/v1/namespaces?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
I1209 18:15:36.795398 20134 cacher.go:469] Terminating all watchers from cacher *api.User
I1209 18:15:36.795405 20134 reflector.go:249] Listing and watching *api.User from pkg/storage/cacher.go:194
I1209 18:15:36.793986 20134 configgetter.go:155] using watch cache storage (capacity=1000) for identities &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82188a700), Decoder:(*versioning.codec)(0xc82188a780)}}
I1209 18:15:36.798131 20134 master.go:97] Using the lease endpoint reconciler
I1209 18:15:36.798467 20134 storage_factory.go:241] storing { apiServerIPInfo} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.799677 20134 storage_factory.go:241] storing { endpoints} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.800977 20134 genericapiserver.go:300] Setting GenericAPIServer service IP to "172.30.0.1" (read-write).
I1209 18:15:36.801517 20134 storage_factory.go:241] storing { podTemplates} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.804275 20134 storage_factory.go:241] storing { events} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.805595 20134 storage_factory.go:241] storing { limitRanges} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.793805 20134 reflector.go:211] Starting reflector *api.Policy (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.806813 20134 reflector.go:249] Listing and watching *api.Policy from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:36.808740 20134 cacher.go:469] Terminating all watchers from cacher *api.Identity
I1209 18:15:36.808768 20134 reflector.go:249] Listing and watching *api.Identity from pkg/storage/cacher.go:194
I1209 18:15:36.819832 20134 storage_factory.go:241] storing { resourceQuotas} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.821586 20134 storage_factory.go:241] storing { secrets} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.823148 20134 storage_factory.go:241] storing { serviceAccounts} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.824392 20134 cacher.go:469] Terminating all watchers from cacher *api.PodTemplate
I1209 18:15:36.824675 20134 reflector.go:249] Listing and watching *api.PodTemplate from pkg/storage/cacher.go:194
I1209 18:15:36.824715 20134 storage_factory.go:241] storing { persistentVolumes} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.826673 20134 storage_factory.go:241] storing { persistentVolumeClaims} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.827925 20134 storage_factory.go:241] storing { configMaps} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.829185 20134 storage_factory.go:241] storing { namespaces} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.830443 20134 storage_factory.go:241] storing { endpoints} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.831196 20134 cacher.go:469] Terminating all watchers from cacher *api.LimitRange
I1209 18:15:36.831216 20134 reflector.go:249] Listing and watching *api.LimitRange from pkg/storage/cacher.go:194
I1209 18:15:36.831497 20134 cacher.go:469] Terminating all watchers from cacher *api.ResourceQuota
I1209 18:15:36.831516 20134 reflector.go:249] Listing and watching *api.ResourceQuota from pkg/storage/cacher.go:194
I1209 18:15:36.831772 20134 cacher.go:469] Terminating all watchers from cacher *api.Secret
I1209 18:15:36.831790 20134 reflector.go:249] Listing and watching *api.Secret from pkg/storage/cacher.go:194
I1209 18:15:36.832995 20134 storage_factory.go:241] storing { nodes} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.834361 20134 storage_factory.go:241] storing { securityContextConstraints} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.835729 20134 storage_factory.go:241] storing { pods} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.837095 20134 storage_factory.go:241] storing { services} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.838468 20134 storage_factory.go:241] storing { services} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.840742 20134 storage_factory.go:241] storing { replicationControllers} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.842174 20134 storage_factory.go:241] storing {extensions thirdpartyresources} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.842476 20134 storage_factory.go:241] storing {apps petsets} in apps/v1alpha1, reading as apps/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.843789 20134 storage_factory.go:241] storing {autoscaling horizontalpodautoscalers} in extensions/v1beta1, reading as autoscaling/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.845198 20134 storage_factory.go:241] storing {batch jobs} in extensions/v1beta1, reading as batch/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.846639 20134 storage_factory.go:241] storing {batch scheduledjobs} in batch/v2alpha1, reading as batch/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.847999 20134 storage_factory.go:241] storing {batch jobs} in extensions/v1beta1, reading as batch/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.849289 20134 storage_factory.go:241] storing {certificates.k8s.io certificatesigningrequests} in certificates.k8s.io/v1alpha1, reading as certificates.k8s.io/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.850650 20134 storage_factory.go:241] storing {extensions horizontalpodautoscalers} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.851986 20134 storage_factory.go:241] storing { replicationControllers} in v1, reading as __internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.852600 20134 cacher.go:469] Terminating all watchers from cacher *autoscaling.HorizontalPodAutoscaler
I1209 18:15:36.852893 20134 reflector.go:249] Listing and watching *autoscaling.HorizontalPodAutoscaler from pkg/storage/cacher.go:194
I1209 18:15:36.824770 20134 cacher.go:469] Terminating all watchers from cacher *api.ServiceAccount
I1209 18:15:36.853806 20134 reflector.go:249] Listing and watching *api.ServiceAccount from pkg/storage/cacher.go:194
I1209 18:15:36.854349 20134 cacher.go:469] Terminating all watchers from cacher *api.PersistentVolume
I1209 18:15:36.854593 20134 reflector.go:249] Listing and watching *api.PersistentVolume from pkg/storage/cacher.go:194
I1209 18:15:36.855008 20134 cacher.go:469] Terminating all watchers from cacher *api.PersistentVolumeClaim
I1209 18:15:36.855227 20134 reflector.go:249] Listing and watching *api.PersistentVolumeClaim from pkg/storage/cacher.go:194
I1209 18:15:36.855649 20134 cacher.go:469] Terminating all watchers from cacher *api.ConfigMap
I1209 18:15:36.855878 20134 reflector.go:249] Listing and watching *api.ConfigMap from pkg/storage/cacher.go:194
I1209 18:15:36.856289 20134 cacher.go:469] Terminating all watchers from cacher *api.Namespace
I1209 18:15:36.856540 20134 reflector.go:249] Listing and watching *api.Namespace from pkg/storage/cacher.go:194
I1209 18:15:36.858454 20134 storage_factory.go:241] storing {extensions thirdpartyresources} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.859771 20134 storage_factory.go:241] storing {extensions daemonsets} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.861119 20134 storage_factory.go:241] storing {extensions deployments} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.862445 20134 storage_factory.go:241] storing {extensions jobs} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.863157 20134 cacher.go:469] Terminating all watchers from cacher *api.Endpoints
I1209 18:15:36.863405 20134 reflector.go:249] Listing and watching *api.Endpoints from pkg/storage/cacher.go:194
I1209 18:15:36.863676 20134 storage_factory.go:241] storing {extensions ingresses} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.864047 20134 cacher.go:469] Terminating all watchers from cacher *api.Node
I1209 18:15:36.864269 20134 reflector.go:249] Listing and watching *api.Node from pkg/storage/cacher.go:194
I1209 18:15:36.864686 20134 cacher.go:469] Terminating all watchers from cacher *api.SecurityContextConstraints
I1209 18:15:36.864916 20134 reflector.go:249] Listing and watching *api.SecurityContextConstraints from pkg/storage/cacher.go:194
I1209 18:15:36.865297 20134 cacher.go:469] Terminating all watchers from cacher *api.Pod
I1209 18:15:36.865530 20134 reflector.go:249] Listing and watching *api.Pod from pkg/storage/cacher.go:194
I1209 18:15:36.865934 20134 cacher.go:469] Terminating all watchers from cacher *api.Service
I1209 18:15:36.866160 20134 reflector.go:249] Listing and watching *api.Service from pkg/storage/cacher.go:194
I1209 18:15:36.866588 20134 cacher.go:469] Terminating all watchers from cacher *api.ReplicationController
I1209 18:15:36.866822 20134 reflector.go:249] Listing and watching *api.ReplicationController from pkg/storage/cacher.go:194
I1209 18:15:36.867219 20134 cacher.go:469] Terminating all watchers from cacher *apps.PetSet
I1209 18:15:36.867457 20134 reflector.go:249] Listing and watching *apps.PetSet from pkg/storage/cacher.go:194
I1209 18:15:36.867871 20134 cacher.go:469] Terminating all watchers from cacher *autoscaling.HorizontalPodAutoscaler
I1209 18:15:36.868092 20134 reflector.go:249] Listing and watching *autoscaling.HorizontalPodAutoscaler from pkg/storage/cacher.go:194
I1209 18:15:36.868539 20134 cacher.go:469] Terminating all watchers from cacher *batch.Job
I1209 18:15:36.868779 20134 reflector.go:249] Listing and watching *batch.Job from pkg/storage/cacher.go:194
I1209 18:15:36.869169 20134 cacher.go:469] Terminating all watchers from cacher *batch.ScheduledJob
I1209 18:15:36.869411 20134 reflector.go:249] Listing and watching *batch.ScheduledJob from pkg/storage/cacher.go:194
I1209 18:15:36.869814 20134 cacher.go:469] Terminating all watchers from cacher *batch.Job
I1209 18:15:36.870034 20134 reflector.go:249] Listing and watching *batch.Job from pkg/storage/cacher.go:194
I1209 18:15:36.870466 20134 cacher.go:469] Terminating all watchers from cacher *certificates.CertificateSigningRequest
I1209 18:15:36.870701 20134 reflector.go:249] Listing and watching *certificates.CertificateSigningRequest from pkg/storage/cacher.go:194
I1209 18:15:36.864990 20134 storage_factory.go:241] storing {extensions podsecuritypolicy} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.872492 20134 storage_factory.go:241] storing {extensions replicasets} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.873761 20134 storage_factory.go:241] storing {extensions networkpolicies} in extensions/v1beta1, reading as extensions/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.875031 20134 storage_factory.go:241] storing {policy poddisruptionbudgets} in policy/v1alpha1, reading as policy/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.865065 20134 cacher.go:469] Terminating all watchers from cacher *extensions.Ingress
I1209 18:15:36.865361 20134 cacher.go:469] Terminating all watchers from cacher *extensions.DaemonSet
I1209 18:15:36.865420 20134 cacher.go:469] Terminating all watchers from cacher *extensions.Deployment
I1209 18:15:36.865444 20134 cacher.go:469] Terminating all watchers from cacher *batch.Job
I1209 18:15:36.865108 20134 cacher.go:469] Terminating all watchers from cacher *api.ReplicationController
I1209 18:15:36.890777 20134 reflector.go:249] Listing and watching *extensions.Ingress from pkg/storage/cacher.go:194
I1209 18:15:36.891247 20134 reflector.go:249] Listing and watching *extensions.DaemonSet from pkg/storage/cacher.go:194
I1209 18:15:36.891667 20134 reflector.go:249] Listing and watching *extensions.Deployment from pkg/storage/cacher.go:194
I1209 18:15:36.892040 20134 reflector.go:249] Listing and watching *batch.Job from pkg/storage/cacher.go:194
I1209 18:15:36.892440 20134 reflector.go:249] Listing and watching *api.ReplicationController from pkg/storage/cacher.go:194
I1209 18:15:36.892834 20134 cacher.go:469] Terminating all watchers from cacher *extensions.PodSecurityPolicy
I1209 18:15:36.893072 20134 reflector.go:249] Listing and watching *extensions.PodSecurityPolicy from pkg/storage/cacher.go:194
I1209 18:15:36.893482 20134 cacher.go:469] Terminating all watchers from cacher *extensions.ReplicaSet
I1209 18:15:36.893715 20134 reflector.go:249] Listing and watching *extensions.ReplicaSet from pkg/storage/cacher.go:194
I1209 18:15:36.894088 20134 cacher.go:469] Terminating all watchers from cacher *extensions.NetworkPolicy
I1209 18:15:36.894309 20134 reflector.go:249] Listing and watching *extensions.NetworkPolicy from pkg/storage/cacher.go:194
I1209 18:15:36.895683 20134 storage_factory.go:241] storing {storage.k8s.io storageclasses} in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from { kubernetes.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:36.913014 20134 cacher.go:469] Terminating all watchers from cacher *storage.StorageClass
I1209 18:15:36.913471 20134 reflector.go:249] Listing and watching *storage.StorageClass from pkg/storage/cacher.go:194
I1209 18:15:36.919110 20134 cacher.go:469] Terminating all watchers from cacher *policy.PodDisruptionBudget
I1209 18:15:36.919384 20134 reflector.go:249] Listing and watching *policy.PodDisruptionBudget from pkg/storage/cacher.go:194
I1209 18:15:36.927135 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:37.115348 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
W1209 18:15:37.233302 20134 lease_endpoint_reconciler.go:174] Resetting endpoints for master service "kubernetes" to [192.168.121.18]
I1209 18:15:37.236547 20134 storage_factory.go:241] storing { builds} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.236861 20134 configgetter.go:155] using watch cache storage (capacity=1000) for builds &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc822cd6000), Decoder:(*versioning.codec)(0xc822cd6080)}}
I1209 18:15:37.258386 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:37.267198 20134 storage_factory.go:241] storing { buildconfigs} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.267252 20134 configgetter.go:155] using watch cache storage (capacity=1000) for buildconfigs &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc822937d00), Decoder:(*versioning.codec)(0xc822937d80)}}
I1209 18:15:37.268877 20134 storage_factory.go:241] storing { deploymentconfigs} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.268943 20134 configgetter.go:155] using watch cache storage (capacity=1000) for deploymentconfigs &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821cfa680), Decoder:(*versioning.codec)(0xc821cfa700)}}
I1209 18:15:37.271415 20134 plugin.go:27] Route plugin initialized with suffix=router.default.svc.cluster.local
I1209 18:15:37.271732 20134 storage_factory.go:241] storing { routes} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.271965 20134 configgetter.go:155] using watch cache storage (capacity=1000) for routes &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82252ca00), Decoder:(*versioning.codec)(0xc82252ca80)}}
I1209 18:15:37.273203 20134 storage_factory.go:241] storing { hostsubnets} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.273506 20134 configgetter.go:155] using watch cache storage (capacity=1000) for hostsubnets &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821c2ed00), Decoder:(*versioning.codec)(0xc821c2ed80)}}
I1209 18:15:37.274600 20134 storage_factory.go:241] storing { netnamespaces} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.274876 20134 configgetter.go:155] using watch cache storage (capacity=1000) for netnamespaces &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821a79780), Decoder:(*versioning.codec)(0xc821a79800)}}
I1209 18:15:37.276074 20134 storage_factory.go:241] storing { clusternetworks} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.276343 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusternetworks &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc8219f2980), Decoder:(*versioning.codec)(0xc8219f2a00)}}
I1209 18:15:37.277449 20134 storage_factory.go:241] storing { egressnetworkpolicies} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.277712 20134 configgetter.go:155] using watch cache storage (capacity=1000) for egressnetworkpolicies &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821997a00), Decoder:(*versioning.codec)(0xc821997a80)}}
I1209 18:15:37.278840 20134 configgetter.go:155] using watch cache storage (capacity=1000) for users &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82101e700), Decoder:(*versioning.codec)(0xc82101e780)}}
I1209 18:15:37.279955 20134 configgetter.go:155] using watch cache storage (capacity=1000) for identities &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82188a700), Decoder:(*versioning.codec)(0xc82188a780)}}
I1209 18:15:37.280505 20134 cacher.go:469] Terminating all watchers from cacher *api.User
I1209 18:15:37.280739 20134 reflector.go:249] Listing and watching *api.User from pkg/storage/cacher.go:194
I1209 18:15:37.281189 20134 cacher.go:469] Terminating all watchers from cacher *api.DeploymentConfig
I1209 18:15:37.281430 20134 reflector.go:249] Listing and watching *api.DeploymentConfig from pkg/storage/cacher.go:194
I1209 18:15:37.281837 20134 cacher.go:469] Terminating all watchers from cacher *api.Route
I1209 18:15:37.282068 20134 reflector.go:249] Listing and watching *api.Route from pkg/storage/cacher.go:194
I1209 18:15:37.282474 20134 cacher.go:469] Terminating all watchers from cacher *api.HostSubnet
I1209 18:15:37.282720 20134 reflector.go:249] Listing and watching *api.HostSubnet from pkg/storage/cacher.go:194
I1209 18:15:37.283094 20134 cacher.go:469] Terminating all watchers from cacher *api.NetNamespace
I1209 18:15:37.283315 20134 reflector.go:249] Listing and watching *api.NetNamespace from pkg/storage/cacher.go:194
I1209 18:15:37.283722 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterNetwork
I1209 18:15:37.283948 20134 reflector.go:249] Listing and watching *api.ClusterNetwork from pkg/storage/cacher.go:194
I1209 18:15:37.284352 20134 cacher.go:469] Terminating all watchers from cacher *api.EgressNetworkPolicy
I1209 18:15:37.284579 20134 reflector.go:249] Listing and watching *api.EgressNetworkPolicy from pkg/storage/cacher.go:194
I1209 18:15:37.283757 20134 cacher.go:469] Terminating all watchers from cacher *api.Build
I1209 18:15:37.285167 20134 reflector.go:249] Listing and watching *api.Build from pkg/storage/cacher.go:194
I1209 18:15:37.283783 20134 cacher.go:469] Terminating all watchers from cacher *api.BuildConfig
I1209 18:15:37.285798 20134 reflector.go:249] Listing and watching *api.BuildConfig from pkg/storage/cacher.go:194
I1209 18:15:37.285915 20134 configgetter.go:155] using watch cache storage (capacity=1000) for groups &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821241800), Decoder:(*versioning.codec)(0xc821241880)}}
I1209 18:15:37.287260 20134 configgetter.go:155] using watch cache storage (capacity=1000) for policies &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82154ef80), Decoder:(*versioning.codec)(0xc82154f000)}}
I1209 18:15:37.288420 20134 configgetter.go:155] using watch cache storage (capacity=1000) for policybindings &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc8211f7180), Decoder:(*versioning.codec)(0xc8211f7200)}}
I1209 18:15:37.289543 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterpolicies &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821194a80), Decoder:(*versioning.codec)(0xc821194b00)}}
I1209 18:15:37.290678 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterpolicybindings &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821502d80), Decoder:(*versioning.codec)(0xc821502e00)}}
I1209 18:15:37.291876 20134 storage_factory.go:241] storing { images} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.292123 20134 configgetter.go:155] using watch cache storage (capacity=1000) for images &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821240480), Decoder:(*versioning.codec)(0xc821240500)}}
I1209 18:15:37.293861 20134 storage_factory.go:241] storing { imagestreams} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.294136 20134 configgetter.go:155] using watch cache storage (capacity=1000) for imagestreams &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82148f400), Decoder:(*versioning.codec)(0xc82148f480)}}
I1209 18:15:37.295432 20134 storage_factory.go:241] storing { oauthclients} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.295687 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthclients &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc820737d80), Decoder:(*versioning.codec)(0xc820704080)}}
I1209 18:15:37.296917 20134 storage_factory.go:241] storing { oauthauthorizetokens} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.297194 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthauthorizetokens &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:true, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc8204aee00), Decoder:(*versioning.codec)(0xc8204aef00)}}
I1209 18:15:37.298384 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthaccesstokens &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:true, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82128e500), Decoder:(*versioning.codec)(0xc82128e580)}}
I1209 18:15:37.299526 20134 storage_factory.go:241] storing { oauthclientauthorizations} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.299810 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthclientauthorizations &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc8229b4e00), Decoder:(*versioning.codec)(0xc8229b4e80)}}
I1209 18:15:37.300637 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthAccessToken
I1209 18:15:37.300898 20134 reflector.go:249] Listing and watching *api.OAuthAccessToken from pkg/storage/cacher.go:194
I1209 18:15:37.285964 20134 cacher.go:469] Terminating all watchers from cacher *api.Identity
I1209 18:15:37.301627 20134 reflector.go:249] Listing and watching *api.Identity from pkg/storage/cacher.go:194
I1209 18:15:37.302041 20134 cacher.go:469] Terminating all watchers from cacher *api.Group
I1209 18:15:37.302282 20134 reflector.go:249] Listing and watching *api.Group from pkg/storage/cacher.go:194
I1209 18:15:37.302728 20134 cacher.go:469] Terminating all watchers from cacher *api.Policy
I1209 18:15:37.302963 20134 reflector.go:249] Listing and watching *api.Policy from pkg/storage/cacher.go:194
I1209 18:15:37.303384 20134 cacher.go:469] Terminating all watchers from cacher *api.PolicyBinding
I1209 18:15:37.303638 20134 reflector.go:249] Listing and watching *api.PolicyBinding from pkg/storage/cacher.go:194
I1209 18:15:37.304256 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicy
I1209 18:15:37.304653 20134 reflector.go:249] Listing and watching *api.ClusterPolicy from pkg/storage/cacher.go:194
I1209 18:15:37.305053 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicyBinding
I1209 18:15:37.305277 20134 reflector.go:249] Listing and watching *api.ClusterPolicyBinding from pkg/storage/cacher.go:194
I1209 18:15:37.305691 20134 cacher.go:469] Terminating all watchers from cacher *api.Image
I1209 18:15:37.305916 20134 reflector.go:249] Listing and watching *api.Image from pkg/storage/cacher.go:194
I1209 18:15:37.306303 20134 cacher.go:469] Terminating all watchers from cacher *api.ImageStream
I1209 18:15:37.306570 20134 reflector.go:249] Listing and watching *api.ImageStream from pkg/storage/cacher.go:194
I1209 18:15:37.306954 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthClient
I1209 18:15:37.307173 20134 reflector.go:249] Listing and watching *api.OAuthClient from pkg/storage/cacher.go:194
I1209 18:15:37.316483 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthAuthorizeToken
I1209 18:15:37.316775 20134 reflector.go:249] Listing and watching *api.OAuthAuthorizeToken from pkg/storage/cacher.go:194
I1209 18:15:37.336172 20134 storage_factory.go:241] storing { templates} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.336639 20134 configgetter.go:155] using watch cache storage (capacity=1000) for templates &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc822be3b80), Decoder:(*versioning.codec)(0xc822be3c00)}}
I1209 18:15:37.337835 20134 storage_factory.go:241] storing { clusterresourcequotas} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:37.338091 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterresourcequotas &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc822b5dd80), Decoder:(*versioning.codec)(0xc822b5de00)}}
I1209 18:15:37.339410 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterresourcequotas &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc822b5dd80), Decoder:(*versioning.codec)(0xc822b5de00)}}
I1209 18:15:37.351051 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterResourceQuota
I1209 18:15:37.351465 20134 reflector.go:249] Listing and watching *api.ClusterResourceQuota from pkg/storage/cacher.go:194
I1209 18:15:37.358672 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthClientAuthorization
I1209 18:15:37.358982 20134 reflector.go:249] Listing and watching *api.OAuthClientAuthorization from pkg/storage/cacher.go:194
I1209 18:15:37.359420 20134 cacher.go:469] Terminating all watchers from cacher *api.Template
I1209 18:15:37.359677 20134 reflector.go:249] Listing and watching *api.Template from pkg/storage/cacher.go:194
I1209 18:15:37.360074 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterResourceQuota
I1209 18:15:37.360300 20134 reflector.go:249] Listing and watching *api.ClusterResourceQuota from pkg/storage/cacher.go:194
I1209 18:15:37.386088 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:37.492770 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthclients &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc820737d80), Decoder:(*versioning.codec)(0xc820704080)}}
I1209 18:15:37.494207 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthaccesstokens &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:true, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82128e500), Decoder:(*versioning.codec)(0xc82128e580)}}
I1209 18:15:37.495398 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthauthorizetokens &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:true, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc8204aee00), Decoder:(*versioning.codec)(0xc8204aef00)}}
I1209 18:15:37.496547 20134 configgetter.go:155] using watch cache storage (capacity=1000) for oauthclientauthorizations &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc8229b4e00), Decoder:(*versioning.codec)(0xc8229b4e80)}}
I1209 18:15:37.497639 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthAuthorizeToken
I1209 18:15:37.527286 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthClient
I1209 18:15:37.527309 20134 reflector.go:249] Listing and watching *api.OAuthClient from pkg/storage/cacher.go:194
I1209 18:15:37.527658 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthAccessToken
I1209 18:15:37.527671 20134 reflector.go:249] Listing and watching *api.OAuthAccessToken from pkg/storage/cacher.go:194
I1209 18:15:37.521849 20134 reflector.go:249] Listing and watching *api.OAuthAuthorizeToken from pkg/storage/cacher.go:194
I1209 18:15:37.532420 20134 cacher.go:469] Terminating all watchers from cacher *api.PolicyBinding
I1209 18:15:37.532683 20134 reflector.go:249] Listing and watching *api.PolicyBinding from pkg/storage/cacher.go:194
I1209 18:15:37.534406 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicyBinding
I1209 18:15:37.534673 20134 reflector.go:249] Listing and watching *api.ClusterPolicyBinding from pkg/storage/cacher.go:194
I1209 18:15:37.535152 20134 cacher.go:469] Terminating all watchers from cacher *api.Policy
I1209 18:15:37.535412 20134 reflector.go:249] Listing and watching *api.Policy from pkg/storage/cacher.go:194
I1209 18:15:37.535784 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicy
I1209 18:15:37.536007 20134 reflector.go:249] Listing and watching *api.ClusterPolicy from pkg/storage/cacher.go:194
I1209 18:15:37.536455 20134 cacher.go:469] Terminating all watchers from cacher *api.OAuthClientAuthorization
I1209 18:15:37.536470 20134 reflector.go:249] Listing and watching *api.OAuthClientAuthorization from pkg/storage/cacher.go:194
I1209 18:15:37.536990 20134 cacher.go:469] Terminating all watchers from cacher *api.Group
I1209 18:15:37.537204 20134 reflector.go:249] Listing and watching *api.Group from pkg/storage/cacher.go:194
I1209 18:15:37.558119 20134 reflector.go:249] Listing and watching *api.ResourceQuota from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83
E1209 18:15:37.576350 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: Get https://192.168.121.18:8443/api/v1/resourcequotas?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
I1209 18:15:37.578290 20134 reflector.go:249] Listing and watching *storage.StorageClass from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62
I1209 18:15:37.580568 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103
I1209 18:15:37.580785 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:37.580961 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:37.581134 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119
I1209 18:15:37.581383 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
E1209 18:15:37.596829 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get https://192.168.121.18:8443/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
E1209 18:15:37.597293 20134 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: Get https://192.168.121.18:8443/api/v1/limitranges?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
E1209 18:15:37.597615 20134 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: Get https://192.168.121.18:8443/api/v1/limitranges?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
E1209 18:15:37.597920 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get https://192.168.121.18:8443/api/v1/serviceaccounts?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
E1209 18:15:37.598210 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62: Failed to list *storage.StorageClass: Get https://192.168.121.18:8443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
I1209 18:15:37.683278 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:37.795027 20134 reflector.go:249] Listing and watching *api.ClusterResourceQuota from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
E1209 18:15:37.795980 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105: Failed to list *api.ClusterResourceQuota: Get https://192.168.121.18:8443/oapi/v1/clusterresourcequotas?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
I1209 18:15:37.796147 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/project/cache/cache.go:95
E1209 18:15:37.796647 20134 reflector.go:203] github.com/openshift/origin/pkg/project/cache/cache.go:95: Failed to list *api.Namespace: Get https://192.168.121.18:8443/api/v1/namespaces?resourceVersion=0: dial tcp 192.168.121.18:8443: getsockopt: connection refused
I1209 18:15:37.808579 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:37.826716 20134 master.go:391] Started Kubernetes API at 0.0.0.0:8443/api
I1209 18:15:37.826743 20134 master.go:391] Started Kubernetes API extensions/v1beta1 at 0.0.0.0:8443/apis
I1209 18:15:37.826752 20134 master.go:391] Started Kubernetes API batch/v1 at 0.0.0.0:8443/apis
I1209 18:15:37.826756 20134 master.go:391] Started Kubernetes API batch/v2alpha1 at 0.0.0.0:8443/apis
I1209 18:15:37.826761 20134 master.go:391] Started Kubernetes API autoscaling/v1 at 0.0.0.0:8443/apis
I1209 18:15:37.826765 20134 master.go:391] Started Kubernetes API certificates.k8s.io/v1alpha1 at 0.0.0.0:8443/apis
I1209 18:15:37.826769 20134 master.go:391] Started Kubernetes API apps/v1alpha1 at 0.0.0.0:8443/apis
I1209 18:15:37.826774 20134 master.go:391] Started Kubernetes API policy/v1alpha1 at 0.0.0.0:8443/apis
I1209 18:15:37.826778 20134 master.go:391] Started Origin API at 0.0.0.0:8443/oapi/v1
I1209 18:15:37.826782 20134 master.go:391] Started OAuth2 API at 0.0.0.0:8443/oauth
I1209 18:15:37.827056 20134 master.go:391] Started Web Console 0.0.0.0:8443/console/
I1209 18:15:37.827072 20134 master.go:391] Started Swagger Schema API at 0.0.0.0:8443/swaggerapi/
I1209 18:15:37.827076 20134 master.go:391] Started OpenAPI Schema at 0.0.0.0:8443/swagger.json
I1209 18:15:37.828926 20134 net.go:106] Got error &net.OpError{Op:"dial", Net:"tcp4", Source:net.Addr(nil), Addr:(*net.TCPAddr)(0xc820ee0db0), Err:(*os.SyscallError)(0xc821f2dda0)}, trying again: "0.0.0.0:8443"
I1209 18:15:37.908750 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:37.938929 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterpolicies &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821194a80), Decoder:(*versioning.codec)(0xc821194b00)}}
I1209 18:15:37.940693 20134 reflector.go:211] Starting reflector *api.SecurityContextConstraints (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:37.940800 20134 reflector.go:249] Listing and watching *api.SecurityContextConstraints from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:37.941844 20134 reflector.go:211] Starting reflector *api.Namespace (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:37.941986 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:37.942176 20134 reflector.go:211] Starting reflector *api.LimitRange (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:37.942248 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:37.943022 20134 reflector.go:211] Starting reflector *api.ServiceAccount (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:37.943100 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:37.943610 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicy
I1209 18:15:37.943655 20134 reflector.go:249] Listing and watching *api.ClusterPolicy from pkg/storage/cacher.go:194
E1209 18:15:37.994670 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.SecurityContextConstraints: User "system:openshift-master" cannot list all securitycontextconstraints in the cluster
E1209 18:15:37.995427 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.LimitRange: User "system:openshift-master" cannot list all limitranges in the cluster
I1209 18:15:37.995609 20134 ensure.go:224] No cluster policy found. Creating bootstrap policy based on: openshift.local.config/master/policy.json
I1209 18:15:37.995916 20134 configgetter.go:155] using watch cache storage (capacity=1000) for policies &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc82154ef80), Decoder:(*versioning.codec)(0xc82154f000)}}
I1209 18:15:38.026060 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:38.036202 20134 configgetter.go:155] using watch cache storage (capacity=1000) for policybindings &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc8211f7180), Decoder:(*versioning.codec)(0xc8211f7200)}}
I1209 18:15:38.038906 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterpolicies &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821194a80), Decoder:(*versioning.codec)(0xc821194b00)}}
I1209 18:15:38.040246 20134 configgetter.go:155] using watch cache storage (capacity=1000) for clusterpolicybindings &storagebackend.Config{Type:"", Prefix:"openshift.io", ServerList:[]string{"https://192.168.121.18:4001"}, KeyFile:"openshift.local.config/master/master.etcd-client.key", CertFile:"openshift.local.config/master/master.etcd-client.crt", CAFile:"openshift.local.config/master/ca.crt", Quorum:false, DeserializationCacheSize:50000, Codec:runtime.codec{Encoder:(*versioning.codec)(0xc821502d80), Decoder:(*versioning.codec)(0xc821502e00)}}
I1209 18:15:38.041395 20134 decoder.go:203] decoding stream as JSON
I1209 18:15:38.046144 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicyBinding
I1209 18:15:38.046170 20134 reflector.go:249] Listing and watching *api.ClusterPolicyBinding from pkg/storage/cacher.go:194
E1209 18:15:38.047491 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.Namespace: User "system:openshift-master" cannot list all namespaces in the cluster
I1209 18:15:38.047648 20134 cacher.go:469] Terminating all watchers from cacher *api.Policy
I1209 18:15:38.047662 20134 reflector.go:249] Listing and watching *api.Policy from pkg/storage/cacher.go:194
I1209 18:15:38.047896 20134 cacher.go:469] Terminating all watchers from cacher *api.PolicyBinding
I1209 18:15:38.047911 20134 reflector.go:249] Listing and watching *api.PolicyBinding from pkg/storage/cacher.go:194
I1209 18:15:38.048076 20134 cacher.go:469] Terminating all watchers from cacher *api.ClusterPolicy
I1209 18:15:38.048104 20134 reflector.go:249] Listing and watching *api.ClusterPolicy from pkg/storage/cacher.go:194
E1209 18:15:38.078521 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.ServiceAccount: User "system:openshift-master" cannot list all serviceaccounts in the cluster
I1209 18:15:38.132971 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:38.233145 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:38.333275 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:38.438198 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:38.546778 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:38.578202 20134 reflector.go:249] Listing and watching *api.ResourceQuota from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83
E1209 18:15:38.579166 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: User "system:openshift-master" cannot list all resourcequotas in the cluster
I1209 18:15:38.598052 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:38.598538 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119
I1209 18:15:38.598999 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:38.602363 20134 reflector.go:249] Listing and watching *storage.StorageClass from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62
I1209 18:15:38.602617 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103
E1209 18:15:38.608210 20134 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: User "system:openshift-master" cannot list all limitranges in the cluster
E1209 18:15:38.608769 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: User "system:openshift-master" cannot list all secrets in the cluster
E1209 18:15:38.609297 20134 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: User "system:openshift-master" cannot list all limitranges in the cluster
E1209 18:15:38.620112 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: User "system:openshift-master" cannot list all serviceaccounts in the cluster
E1209 18:15:38.621122 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62: Failed to list *storage.StorageClass: User "system:openshift-master" cannot list all storage.k8s.io.storageclasses in the cluster
I1209 18:15:38.648286 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:38.792308 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:38.796855 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/project/cache/cache.go:95
I1209 18:15:38.797616 20134 reflector.go:249] Listing and watching *api.ClusterResourceQuota from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
E1209 18:15:38.803061 20134 reflector.go:203] github.com/openshift/origin/pkg/project/cache/cache.go:95: Failed to list *api.Namespace: User "system:openshift-master" cannot list all namespaces in the cluster
E1209 18:15:38.803700 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105: Failed to list *api.ClusterResourceQuota: User "system:openshift-master" cannot list all clusterresourcequotas in the cluster
I1209 18:15:38.893880 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:39.035836 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:39.048841 20134 reflector.go:249] Listing and watching *api.SecurityContextConstraints from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:39.048860 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
E1209 18:15:39.052183 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.LimitRange: User "system:openshift-master" cannot list all limitranges in the cluster
I1209 18:15:39.054363 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
E1209 18:15:39.067192 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.SecurityContextConstraints: User "system:openshift-master" cannot list all securitycontextconstraints in the cluster
E1209 18:15:39.075678 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.Namespace: User "system:openshift-master" cannot list all namespaces in the cluster
I1209 18:15:39.081436 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
E1209 18:15:39.085755 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.ServiceAccount: User "system:openshift-master" cannot list all serviceaccounts in the cluster
I1209 18:15:39.150969 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:39.251387 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:39.364194 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:39.474443 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:39.583240 20134 reflector.go:249] Listing and watching *api.ResourceQuota from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83
I1209 18:15:39.584237 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
E1209 18:15:39.590125 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: User "system:openshift-master" cannot list all resourcequotas in the cluster
I1209 18:15:39.616804 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:39.617129 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119
I1209 18:15:39.617377 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:39.623376 20134 reflector.go:249] Listing and watching *storage.StorageClass from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62
I1209 18:15:39.625297 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103
E1209 18:15:39.633262 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: User "system:openshift-master" cannot list all secrets in the cluster
E1209 18:15:39.633496 20134 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: User "system:openshift-master" cannot list all limitranges in the cluster
E1209 18:15:39.633906 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: User "system:openshift-master" cannot list all serviceaccounts in the cluster
E1209 18:15:39.634236 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62: Failed to list *storage.StorageClass: User "system:openshift-master" cannot list all storage.k8s.io.storageclasses in the cluster
E1209 18:15:39.634304 20134 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: User "system:openshift-master" cannot list all limitranges in the cluster
I1209 18:15:39.690992 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:39.808262 20134 reflector.go:249] Listing and watching *api.ClusterResourceQuota from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:39.808914 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/project/cache/cache.go:95
E1209 18:15:39.809973 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105: Failed to list *api.ClusterResourceQuota: User "system:openshift-master" cannot list all clusterresourcequotas in the cluster
E1209 18:15:39.810162 20134 reflector.go:203] github.com/openshift/origin/pkg/project/cache/cache.go:95: Failed to list *api.Namespace: User "system:openshift-master" cannot list all namespaces in the cluster
I1209 18:15:39.838470 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:39.949979 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.079506 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:40.085994 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.086047 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:40.089750 20134 reflector.go:249] Listing and watching *api.SecurityContextConstraints from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
E1209 18:15:40.091456 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.Namespace: User "system:openshift-master" cannot list all namespaces in the cluster
I1209 18:15:40.092086 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
E1209 18:15:40.097717 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.LimitRange: User "system:openshift-master" cannot list all limitranges in the cluster
E1209 18:15:40.098842 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.ServiceAccount: User "system:openshift-master" cannot list all serviceaccounts in the cluster
E1209 18:15:40.099817 20134 reflector.go:214] github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93: Failed to list *api.SecurityContextConstraints: User "system:openshift-master" cannot list all securitycontextconstraints in the cluster
I1209 18:15:40.189915 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.300123 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.403604 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.520377 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.593275 20134 reflector.go:249] Listing and watching *api.ResourceQuota from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83
E1209 18:15:40.607648 20134 reflector.go:214] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83: Failed to list *api.ResourceQuota: User "system:openshift-master" cannot list all resourcequotas in the cluster
I1209 18:15:40.631701 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.634101 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:40.634393 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119
I1209 18:15:40.634721 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154
I1209 18:15:40.635173 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103
I1209 18:15:40.635784 20134 reflector.go:249] Listing and watching *storage.StorageClass from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/storageclass/default/admission.go:62
I1209 18:15:40.732386 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.811389 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/project/cache/cache.go:95
I1209 18:15:40.812111 20134 reflector.go:249] Listing and watching *api.ClusterResourceQuota from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:105
I1209 18:15:40.838361 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:40.878381 20134 ensure.go:209] Created default security context constraint privileged
I1209 18:15:40.881240 20134 ensure.go:209] Created default security context constraint nonroot
I1209 18:15:40.883865 20134 ensure.go:209] Created default security context constraint hostmount-anyuid
I1209 18:15:40.886669 20134 ensure.go:209] Created default security context constraint hostaccess
I1209 18:15:40.889644 20134 ensure.go:209] Created default security context constraint restricted
I1209 18:15:40.892053 20134 ensure.go:209] Created default security context constraint anyuid
I1209 18:15:40.894630 20134 ensure.go:209] Created default security context constraint hostnetwork
I1209 18:15:40.938789 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:41.038996 20134 clusterquotamapping.go:306] Waiting for the caches to sync before starting the quota mapping controller workers
I1209 18:15:41.091654 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:41.097844 20134 reflector.go:249] Listing and watching *api.LimitRange from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:41.100771 20134 reflector.go:249] Listing and watching *api.SecurityContextConstraints from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:41.100774 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:41.140845 20134 clusterquotamapping.go:106] Starting workers for quota mapping controller workers
I1209 18:15:41.217634 20134 ensure.go:86] Added build-controller service accounts to the system:build-controller cluster role: <nil>
I1209 18:15:41.245117 20134 ensure.go:86] Added daemonset-controller service accounts to the system:daemonset-controller cluster role: <nil>
I1209 18:15:41.274456 20134 ensure.go:86] Added deployment-controller service accounts to the system:deployment-controller cluster role: <nil>
I1209 18:15:41.312728 20134 ensure.go:86] Added deploymentconfig-controller service accounts to the system:deploymentconfig-controller cluster role: <nil>
I1209 18:15:41.343146 20134 ensure.go:86] Added disruption-controller service accounts to the system:disruption-controller cluster role: <nil>
I1209 18:15:41.367856 20134 ensure.go:86] Added endpoint-controller service accounts to the system:endpoint-controller cluster role: <nil>
I1209 18:15:41.519112 20134 ensure.go:86] Added gc-controller service accounts to the system:gc-controller cluster role: <nil>
I1209 18:15:41.556520 20134 ensure.go:86] Added hpa-controller service accounts to the system:hpa-controller cluster role: <nil>
I1209 18:15:41.592990 20134 ensure.go:86] Added job-controller service accounts to the system:job-controller cluster role: <nil>
I1209 18:15:41.607948 20134 reflector.go:249] Listing and watching *api.ResourceQuota from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/resourcequota/resource_access.go:83
I1209 18:15:41.627946 20134 ensure.go:86] Added namespace-controller service accounts to the system:namespace-controller cluster role: <nil>
I1209 18:15:41.658574 20134 ensure.go:86] Added pet-set-controller service accounts to the system:pet-set-controller cluster role: <nil>
I1209 18:15:41.688764 20134 ensure.go:86] Added pv-attach-detach-controller service accounts to the system:pv-attach-detach-controller cluster role: <nil>
I1209 18:15:41.728640 20134 ensure.go:86] Added pv-binder-controller service accounts to the system:pv-binder-controller cluster role: <nil>
I1209 18:15:41.765532 20134 ensure.go:86] Added pv-provisioner-controller service accounts to the system:pv-provisioner-controller cluster role: <nil>
I1209 18:15:41.798280 20134 ensure.go:86] Added pv-recycler-controller service accounts to the system:pv-recycler-controller cluster role: <nil>
I1209 18:15:41.835050 20134 ensure.go:86] Added replicaset-controller service accounts to the system:replicaset-controller cluster role: <nil>
I1209 18:15:41.869025 20134 ensure.go:86] Added replication-controller service accounts to the system:replication-controller cluster role: <nil>
I1209 18:15:41.904214 20134 ensure.go:86] Added service-ingress-ip-controller service accounts to the system:service-ingress-ip-controller cluster role: <nil>
I1209 18:15:41.954741 20134 ensure.go:86] Added service-load-balancer-controller service accounts to the system:service-load-balancer-controller cluster role: <nil>
I1209 18:15:42.001450 20134 ensure.go:86] Added service-serving-cert-controller service accounts to the system:service-serving-cert-controller cluster role: <nil>
I1209 18:15:42.042894 20134 ensure.go:86] Added unidling-controller service accounts to the system:unidling-controller cluster role: <nil>
W1209 18:15:42.102078 20134 run_components.go:207] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
I1209 18:15:42.102706 20134 reflector.go:211] Starting reflector *api.Service (30m0s) from github.com/openshift/origin/pkg/dns/serviceaccessor.go:45
I1209 18:15:42.102997 20134 logs.go:41] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I1209 18:15:42.103076 20134 logs.go:41] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
I1209 18:15:42.103679 20134 reflector.go:249] Listing and watching *api.Service from github.com/openshift/origin/pkg/dns/serviceaccessor.go:45
I1209 18:15:42.104724 20134 net.go:106] Got error &net.OpError{Op:"dial", Net:"tcp", Source:net.Addr(nil), Addr:(*net.TCPAddr)(0xc82da962d0), Err:(*os.SyscallError)(0xc82da9cf40)}, trying again: "0.0.0.0:8053"
I1209 18:15:42.205406 20134 run_components.go:224] DNS listening at 0.0.0.0:8053
I1209 18:15:42.205695 20134 reflector.go:200] Starting reflector *api.Namespace (2m0s) from github.com/openshift/origin/pkg/project/auth/cache.go:189
I1209 18:15:42.205871 20134 start_node.go:183] Generating node configuration
I1209 18:15:42.207169 20134 create_nodeconfig.go:247] Generating node credentials ...
I1209 18:15:42.207356 20134 create_clientcert.go:52] Creating a client cert with: admin.CreateClientCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc823b47b00), CertFile:"openshift.local.config/node-localhost/master-client.crt", KeyFile:"openshift.local.config/node-localhost/master-client.key", User:"system:node:localhost", Groups:[]string{"system:nodes"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82dac65ec)} and &admin.SignerCertOptions{CertFile:"openshift.local.config/master/ca.crt", KeyFile:"openshift.local.config/master/ca.key", SerialFile:"openshift.local.config/master/ca.serial.txt", lock:sync.Mutex{state:0, sema:0x0}, ca:(*crypto.CA)(nil)}
I1209 18:15:42.208092 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/project/auth/cache.go:189
I1209 18:15:42.210079 20134 crypto.go:404] Generating client cert in openshift.local.config/node-localhost/master-client.crt and key in openshift.local.config/node-localhost/master-client.key
I1209 18:15:42.217515 20134 start_master.go:575] Controllers starting (*)
E1209 18:15:42.219991 20134 util.go:45] Metric for serviceaccount_controller already registered
I1209 18:15:42.220560 20134 storage_factory.go:241] storing { securityuidranges} in v1, reading as __internal from { openshift.io [https://192.168.121.18:4001] openshift.local.config/master/master.etcd-client.key openshift.local.config/master/master.etcd-client.crt openshift.local.config/master/ca.crt false 50000 <nil>}
I1209 18:15:42.223144 20134 reflector.go:211] Starting reflector *api.ServiceAccount (0) from pkg/controller/serviceaccount/serviceaccounts_controller.go:142
I1209 18:15:42.223191 20134 reflector.go:249] Listing and watching *api.ServiceAccount from pkg/controller/serviceaccount/serviceaccounts_controller.go:142
I1209 18:15:42.224366 20134 reflector.go:211] Starting reflector *api.Namespace (0) from pkg/controller/serviceaccount/serviceaccounts_controller.go:143
I1209 18:15:42.224407 20134 reflector.go:249] Listing and watching *api.Namespace from pkg/controller/serviceaccount/serviceaccounts_controller.go:143
I1209 18:15:42.225559 20134 reflector.go:211] Starting reflector *api.Secret (0) from pkg/controller/serviceaccount/tokens_controller.go:179
I1209 18:15:42.225599 20134 reflector.go:249] Listing and watching *api.Secret from pkg/controller/serviceaccount/tokens_controller.go:179
I1209 18:15:42.226754 20134 reflector.go:211] Starting reflector *api.Secret (0) from github.com/openshift/origin/pkg/serviceaccounts/controllers/deleted_dockercfg_secrets.go:74
I1209 18:15:42.226794 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/pkg/serviceaccounts/controllers/deleted_dockercfg_secrets.go:74
I1209 18:15:42.227990 20134 reflector.go:211] Starting reflector *api.Secret (0) from github.com/openshift/origin/pkg/serviceaccounts/controllers/deleted_token_secrets.go:68
I1209 18:15:42.228028 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/pkg/serviceaccounts/controllers/deleted_token_secrets.go:68
I1209 18:15:42.229287 20134 reflector.go:211] Starting reflector *api.ServiceAccount (0) from pkg/controller/serviceaccount/tokens_controller.go:178
I1209 18:15:42.229340 20134 reflector.go:249] Listing and watching *api.ServiceAccount from pkg/controller/serviceaccount/tokens_controller.go:178
I1209 18:15:42.230406 20134 reflector.go:211] Starting reflector *api.Service (0) from github.com/openshift/origin/pkg/serviceaccounts/controllers/docker_registry_service.go:133
I1209 18:15:42.230444 20134 reflector.go:249] Listing and watching *api.Service from github.com/openshift/origin/pkg/serviceaccounts/controllers/docker_registry_service.go:133
I1209 18:15:42.231636 20134 reflector.go:211] Starting reflector *api.Secret (0) from github.com/openshift/origin/pkg/serviceaccounts/controllers/docker_registry_service.go:134
I1209 18:15:42.231672 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/pkg/serviceaccounts/controllers/docker_registry_service.go:134
I1209 18:15:42.607136 20134 reflector.go:200] Starting reflector *api.Namespace (10m0s) from github.com/openshift/origin/pkg/security/controller/factory.go:40
I1209 18:15:42.614961 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/security/controller/factory.go:40
I1209 18:15:42.709054 20134 create_dockercfg_secrets.go:219] Dockercfg secret controller initialized, starting.
I1209 18:15:42.709434 20134 reflector.go:211] Starting reflector *api.Secret (0) from github.com/openshift/origin/pkg/serviceaccounts/controllers/create_dockercfg_secrets.go:222
I1209 18:15:42.709494 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/pkg/serviceaccounts/controllers/create_dockercfg_secrets.go:222
I1209 18:15:42.710219 20134 reflector.go:211] Starting reflector *api.ServiceAccount (0) from github.com/openshift/origin/pkg/serviceaccounts/controllers/create_dockercfg_secrets.go:221
I1209 18:15:42.710280 20134 reflector.go:249] Listing and watching *api.ServiceAccount from github.com/openshift/origin/pkg/serviceaccounts/controllers/create_dockercfg_secrets.go:221
I1209 18:15:42.770651 20134 create_dockercfg_secrets.go:85] Adding service account build-controller
I1209 18:15:42.770894 20134 create_dockercfg_secrets.go:85] Adding service account hpa-controller
I1209 18:15:42.770926 20134 create_dockercfg_secrets.go:85] Adding service account pv-attach-detach-controller
I1209 18:15:42.770950 20134 create_dockercfg_secrets.go:85] Adding service account pv-binder-controller
I1209 18:15:42.771042 20134 create_dockercfg_secrets.go:85] Adding service account service-ingress-ip-controller
I1209 18:15:42.771098 20134 create_dockercfg_secrets.go:85] Adding service account service-load-balancer-controller
I1209 18:15:42.771146 20134 create_dockercfg_secrets.go:85] Adding service account service-serving-cert-controller
I1209 18:15:42.771202 20134 create_dockercfg_secrets.go:85] Adding service account disruption-controller
I1209 18:15:42.771275 20134 create_dockercfg_secrets.go:85] Adding service account endpoint-controller
I1209 18:15:42.771343 20134 create_dockercfg_secrets.go:85] Adding service account job-controller
I1209 18:15:42.771396 20134 create_dockercfg_secrets.go:85] Adding service account pv-provisioner-controller
I1209 18:15:42.771452 20134 create_dockercfg_secrets.go:85] Adding service account default
I1209 18:15:42.771499 20134 create_dockercfg_secrets.go:85] Adding service account gc-controller
I1209 18:15:42.771570 20134 create_dockercfg_secrets.go:85] Adding service account replicaset-controller
I1209 18:15:42.771637 20134 create_dockercfg_secrets.go:85] Adding service account unidling-controller
I1209 18:15:42.771685 20134 create_dockercfg_secrets.go:85] Adding service account deployer
I1209 18:15:42.771744 20134 create_dockercfg_secrets.go:85] Adding service account daemonset-controller
I1209 18:15:42.771795 20134 create_dockercfg_secrets.go:85] Adding service account deployment-controller
I1209 18:15:42.771851 20134 create_dockercfg_secrets.go:85] Adding service account deploymentconfig-controller
I1209 18:15:42.771917 20134 create_dockercfg_secrets.go:85] Adding service account namespace-controller
I1209 18:15:42.771960 20134 create_dockercfg_secrets.go:85] Adding service account pet-set-controller
I1209 18:15:42.772030 20134 create_dockercfg_secrets.go:85] Adding service account pv-recycler-controller
I1209 18:15:42.772074 20134 create_dockercfg_secrets.go:85] Adding service account replication-controller
I1209 18:15:42.772132 20134 create_dockercfg_secrets.go:85] Adding service account builder
I1209 18:15:42.772191 20134 create_dockercfg_secrets.go:85] Adding service account default
I1209 18:15:42.793273 20134 create_dockercfg_secrets.go:85] Adding service account builder
I1209 18:15:42.832870 20134 create_dockercfg_secrets.go:85] Adding service account deployer
I1209 18:15:42.849959 20134 create_dockercfg_secrets.go:85] Adding service account default
I1209 18:15:42.927634 20134 create_dockercfg_secrets.go:460] Creating token secret "hpa-controller-token-4oxjj" for service account openshift-infra/hpa-controller
I1209 18:15:42.928125 20134 create_dockercfg_secrets.go:460] Creating token secret "build-controller-token-4ec32" for service account openshift-infra/build-controller
I1209 18:15:42.928606 20134 create_dockercfg_secrets.go:460] Creating token secret "service-ingress-ip-controller-token-bspll" for service account openshift-infra/service-ingress-ip-controller
I1209 18:15:42.954765 20134 create_dockercfg_secrets.go:90] Updating service account pv-provisioner-controller
I1209 18:15:42.964977 20134 create_dockercfg_secrets.go:460] Creating token secret "pv-binder-controller-token-dxien" for service account openshift-infra/pv-binder-controller
I1209 18:15:42.966469 20134 create_dockercfg_secrets.go:90] Updating service account service-ingress-ip-controller
I1209 18:15:42.968532 20134 create_dockercfg_secrets.go:460] Creating token secret "pv-attach-detach-controller-token-tki08" for service account openshift-infra/pv-attach-detach-controller
I1209 18:15:42.971557 20134 create_dockercfg_secrets.go:90] Updating service account build-controller
I1209 18:15:42.971613 20134 create_dockercfg_secrets.go:90] Updating service account hpa-controller
I1209 18:15:42.972759 20134 create_dockercfg_secrets.go:85] Adding service account builder
I1209 18:15:42.984555 20134 create_dockercfg_secrets.go:90] Updating service account pv-binder-controller
I1209 18:15:43.005671 20134 create_dockercfg_secrets.go:90] Updating service account endpoint-controller
I1209 18:15:43.006576 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/hpa-controller-token-4oxjj
I1209 18:15:43.015283 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/hpa-controller is not populated yet
I1209 18:15:43.015302 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/hpa-controller, will retry
I1209 18:15:43.016100 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/pv-attach-detach-controller is not populated yet
I1209 18:15:43.016368 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/pv-attach-detach-controller, will retry
I1209 18:15:43.017091 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/build-controller is not populated yet
I1209 18:15:43.017104 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/build-controller, will retry
I1209 18:15:43.017529 20134 create_clientcert.go:68] Generated new client cert as openshift.local.config/node-localhost/master-client.crt and key as openshift.local.config/node-localhost/master-client.key
I1209 18:15:43.017586 20134 create_servercert.go:107] Creating a server cert with: admin.CreateServerCertOptions{SignerCertOptions:(*admin.SignerCertOptions)(0xc823b47b00), CertFile:"openshift.local.config/node-localhost/server.crt", KeyFile:"openshift.local.config/node-localhost/server.key", Hostnames:[]string{"127.0.0.1", "172.17.0.1", "192.168.121.18", "localhost"}, Overwrite:false, Output:(*util.gLogWriter)(0xc82dac65ec)}
I1209 18:15:43.017661 20134 crypto.go:367] Generating server certificate in openshift.local.config/node-localhost/server.crt, key in openshift.local.config/node-localhost/server.key
I1209 18:15:43.018707 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/service-ingress-ip-controller is not populated yet
I1209 18:15:43.018721 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/service-ingress-ip-controller, will retry
I1209 18:15:43.020026 20134 create_dockercfg_secrets.go:90] Updating service account disruption-controller
I1209 18:15:43.020622 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/pv-binder-controller is not populated yet
I1209 18:15:43.020636 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/pv-binder-controller, will retry
I1209 18:15:43.028865 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/build-controller-token-4ec32
I1209 18:15:43.043033 20134 create_dockercfg_secrets.go:90] Updating service account gc-controller
I1209 18:15:43.050990 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/disruption-controller is not populated yet
I1209 18:15:43.051022 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/disruption-controller, will retry
I1209 18:15:43.058469 20134 create_dockercfg_secrets.go:90] Updating service account pv-attach-detach-controller
I1209 18:15:43.064522 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/job-controller is not populated yet
I1209 18:15:43.064537 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/job-controller, will retry
I1209 18:15:43.071673 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/service-ingress-ip-controller-token-bspll
I1209 18:15:43.072170 20134 create_dockercfg_secrets.go:90] Updating service account job-controller
I1209 18:15:43.081471 20134 create_dockercfg_secrets.go:460] Creating token secret "endpoint-controller-token-wugvg" for service account openshift-infra/endpoint-controller
I1209 18:15:43.092725 20134 create_dockercfg_secrets.go:460] Creating token secret "service-serving-cert-controller-token-cwm27" for service account openshift-infra/service-serving-cert-controller
I1209 18:15:43.093360 20134 create_dockercfg_secrets.go:85] Adding service account deployer
I1209 18:15:43.100891 20134 create_dockercfg_secrets.go:460] Creating token secret "service-load-balancer-controller-token-67s8c" for service account openshift-infra/service-load-balancer-controller
I1209 18:15:43.106726 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/pv-binder-controller-token-dxien
I1209 18:15:43.129651 20134 create_dockercfg_secrets.go:460] Creating token secret "pv-provisioner-controller-token-dejq4" for service account openshift-infra/pv-provisioner-controller
I1209 18:15:43.139470 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/pv-attach-detach-controller-token-tki08
I1209 18:15:43.139771 20134 create_dockercfg_secrets.go:85] Adding service account default
I1209 18:15:43.162174 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/service-load-balancer-controller is not populated yet
I1209 18:15:43.162196 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/service-load-balancer-controller, will retry
I1209 18:15:43.165233 20134 create_dockercfg_secrets.go:460] Creating token secret "default-token-t6bhq" for service account kube-system/default
I1209 18:15:43.165757 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/endpoint-controller is not populated yet
I1209 18:15:43.165771 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/endpoint-controller, will retry
I1209 18:15:43.173393 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/service-serving-cert-controller is not populated yet
I1209 18:15:43.173409 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/service-serving-cert-controller, will retry
I1209 18:15:43.182168 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/pv-provisioner-controller is not populated yet
I1209 18:15:43.182604 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/pv-provisioner-controller, will retry
I1209 18:15:43.224396 20134 create_dockercfg_secrets.go:90] Updating service account endpoint-controller
I1209 18:15:43.262999 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/replicaset-controller is not populated yet
I1209 18:15:43.263454 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/replicaset-controller, will retry
I1209 18:15:43.277403 20134 create_dockercfg_secrets.go:90] Updating service account service-load-balancer-controller
I1209 18:15:43.281626 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/unidling-controller is not populated yet
I1209 18:15:43.281669 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/unidling-controller, will retry
I1209 18:15:43.283275 20134 create_dockercfg_secrets.go:460] Creating token secret "gc-controller-token-xtp29" for service account openshift-infra/gc-controller
I1209 18:15:43.290028 20134 create_dockercfg_secrets.go:90] Updating service account service-serving-cert-controller
I1209 18:15:43.291833 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-nykkp" for service account kube-system/deployer
I1209 18:15:43.403505 20134 create_dockercfg_secrets.go:479] Token secret for service account kube-system/default is not populated yet
I1209 18:15:43.403524 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account kube-system/default, will retry
I1209 18:15:43.405824 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/daemonset-controller is not populated yet
I1209 18:15:43.406086 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/daemonset-controller, will retry
I1209 18:15:43.413952 20134 create_dockercfg_secrets.go:85] Adding service account builder
I1209 18:15:43.414834 20134 etcd_watcher.go:160] watch (*api.Secret): 1 objects queued in outgoing channel.
I1209 18:15:43.435402 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/deployment-controller is not populated yet
I1209 18:15:43.435422 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/deployment-controller, will retry
I1209 18:15:43.456810 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/hpa-controller-token-4oxjj
I1209 18:15:43.459696 20134 create_dockercfg_secrets.go:90] Updating service account unidling-controller
I1209 18:15:43.459717 20134 create_dockercfg_secrets.go:90] Updating service account pv-provisioner-controller
I1209 18:15:43.462680 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/service-serving-cert-controller-token-cwm27
I1209 18:15:43.472716 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/deploymentconfig-controller is not populated yet
I1209 18:15:43.473032 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/gc-controller is not populated yet
I1209 18:15:43.473043 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/gc-controller, will retry
I1209 18:15:43.473485 20134 create_dockercfg_secrets.go:479] Token secret for service account kube-system/deployer is not populated yet
I1209 18:15:43.473496 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account kube-system/deployer, will retry
I1209 18:15:43.473911 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/deploymentconfig-controller, will retry
I1209 18:15:43.477539 20134 create_dockercfg_secrets.go:460] Creating token secret "namespace-controller-token-6ef2d" for service account openshift-infra/namespace-controller
I1209 18:15:43.492734 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/endpoint-controller-token-wugvg
I1209 18:15:43.493164 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:43.508385 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/service-load-balancer-controller-token-67s8c
I1209 18:15:43.509219 20134 create_dockercfg_secrets.go:460] Creating token secret "pet-set-controller-token-sd2s6" for service account openshift-infra/pet-set-controller
I1209 18:15:43.513475 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/namespace-controller is not populated yet
I1209 18:15:43.513488 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/namespace-controller, will retry
I1209 18:15:43.515376 20134 create_dockercfg_secrets.go:90] Updating service account replicaset-controller
I1209 18:15:43.515823 20134 tokens_controller.go:449] deleting secret openshift-infra/namespace-controller-token-ew0p1 because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "namespace-controller": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:15:43.527733 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/pv-provisioner-controller-token-dejq4
I1209 18:15:43.528618 20134 create_dockercfg_secrets.go:460] Creating token secret "pv-recycler-controller-token-2360d" for service account openshift-infra/pv-recycler-controller
I1209 18:15:43.536289 20134 create_dockercfg_secrets.go:460] Creating token secret "builder-token-27klh" for service account kube-system/builder
I1209 18:15:43.547966 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/build-controller-token-4ec32
I1209 18:15:43.553407 20134 create_dockercfg_secrets.go:85] Adding service account deployer
I1209 18:15:43.553709 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/pet-set-controller is not populated yet
I1209 18:15:43.553734 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/pet-set-controller, will retry
I1209 18:15:43.554459 20134 tokens_controller.go:449] deleting secret openshift-infra/pet-set-controller-token-z4yyn because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "pet-set-controller": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:15:43.556951 20134 tokens_controller.go:449] deleting secret openshift-infra/pv-recycler-controller-token-0c5cy because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "pv-recycler-controller": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:15:43.557368 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/replication-controller is not populated yet
I1209 18:15:43.557382 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/replication-controller, will retry
I1209 18:15:43.566407 20134 create_dockercfg_secrets.go:90] Updating service account daemonset-controller
I1209 18:15:43.575291 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/pv-recycler-controller is not populated yet
I1209 18:15:43.575305 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/pv-recycler-controller, will retry
I1209 18:15:43.575534 20134 create_dockercfg_secrets.go:479] Token secret for service account kube-system/builder is not populated yet
I1209 18:15:43.575551 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account kube-system/builder, will retry
I1209 18:15:43.580449 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/service-ingress-ip-controller-token-bspll
I1209 18:15:43.589381 20134 create_dockercfg_secrets.go:90] Updating service account deploymentconfig-controller
I1209 18:15:43.601589 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/pv-attach-detach-controller-token-tki08
I1209 18:15:43.614088 20134 create_dockercfg_secrets.go:158] Adding token secret kube-system/default-token-t6bhq
I1209 18:15:43.614621 20134 create_dockercfg_secrets.go:90] Updating service account deployment-controller
I1209 18:15:43.615890 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-8lwt2" for service account openshift-infra/deployer
I1209 18:15:43.632255 20134 create_dockercfg_secrets.go:460] Creating token secret "default-token-98z7m" for service account openshift-infra/default
I1209 18:15:43.634012 20134 create_dockercfg_secrets.go:460] Creating token secret "builder-token-dq2n2" for service account openshift-infra/builder
I1209 18:15:43.634653 20134 create_dockercfg_secrets.go:90] Updating service account gc-controller
I1209 18:15:43.637089 20134 create_dockercfg_secrets.go:460] Creating token secret "default-token-a76x6" for service account openshift/default
I1209 18:15:43.637532 20134 create_dockercfg_secrets.go:460] Creating token secret "builder-token-ekyde" for service account openshift/builder
I1209 18:15:43.654808 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:43.655312 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/gc-controller-token-xtp29
I1209 18:15:43.656156 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/deployer is not populated yet
I1209 18:15:43.656185 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/deployer, will retry
I1209 18:15:43.656677 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "hpa-controller-dockercfg-50l3b" for service account openshift-infra/hpa-controller
I1209 18:15:43.677432 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/default is not populated yet
I1209 18:15:43.677475 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/default, will retry
I1209 18:15:43.678236 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "build-controller-dockercfg-1rtcx" for service account openshift-infra/build-controller
I1209 18:15:43.689216 20134 create_dockercfg_secrets.go:90] Updating service account namespace-controller
I1209 18:15:43.689859 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/pv-binder-controller-token-dxien
I1209 18:15:43.696204 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/builder is not populated yet
I1209 18:15:43.696540 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/builder, will retry
I1209 18:15:43.696624 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "service-ingress-ip-controller-dockercfg-ebpaz" for service account openshift-infra/service-ingress-ip-controller
I1209 18:15:43.717055 20134 create_dockercfg_secrets.go:90] Updating service account pet-set-controller
I1209 18:15:43.734381 20134 create_dockercfg_secrets.go:90] Updating service account pv-recycler-controller
I1209 18:15:43.734785 20134 create_dockercfg_secrets.go:158] Adding token secret kube-system/deployer-token-nykkp
I1209 18:15:43.736593 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift/builder is not populated yet
I1209 18:15:43.736622 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift/builder, will retry
I1209 18:15:43.736992 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "pv-binder-controller-dockercfg-fesc7" for service account openshift-infra/pv-binder-controller
I1209 18:15:43.737300 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift/default is not populated yet
I1209 18:15:43.737345 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift/default, will retry
I1209 18:15:43.749271 20134 create_dockercfg_secrets.go:90] Updating service account replication-controller
I1209 18:15:43.767439 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:43.795859 20134 create_dockercfg_secrets.go:90] Updating service account build-controller
I1209 18:15:43.797689 20134 create_dockercfg_secrets.go:460] Creating token secret "disruption-controller-token-dxmig" for service account openshift-infra/disruption-controller
I1209 18:15:43.812137 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/namespace-controller-token-6ef2d
I1209 18:15:43.812506 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:43.829813 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/pet-set-controller-token-sd2s6
I1209 18:15:43.829950 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:43.837913 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/disruption-controller is not populated yet
I1209 18:15:43.838222 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/disruption-controller, will retry
I1209 18:15:43.838457 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "pv-attach-detach-controller-dockercfg-rj5st" for service account openshift-infra/pv-attach-detach-controller
I1209 18:15:43.846707 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/service-serving-cert-controller-token-cwm27
I1209 18:15:43.849558 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:43.865502 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:43.865925 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/pv-recycler-controller-token-2360d
I1209 18:15:43.882879 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:43.883288 20134 create_dockercfg_secrets.go:158] Adding token secret kube-system/builder-token-27klh
I1209 18:15:43.905810 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/endpoint-controller-token-wugvg
I1209 18:15:43.906066 20134 create_dockercfg_secrets.go:90] Updating service account hpa-controller
I1209 18:15:43.931694 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/service-load-balancer-controller-token-67s8c
I1209 18:15:43.932374 20134 create_dockercfg_secrets.go:90] Updating service account pv-binder-controller
I1209 18:15:43.938166 20134 create_dockercfg_secrets.go:460] Creating token secret "job-controller-token-pf94k" for service account openshift-infra/job-controller
I1209 18:15:43.940619 20134 create_dockercfg_secrets.go:90] Updating service account pv-attach-detach-controller
I1209 18:15:43.953254 20134 create_dockercfg_secrets.go:90] Updating service account service-ingress-ip-controller
I1209 18:15:43.968880 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/job-controller is not populated yet
I1209 18:15:43.968900 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/job-controller, will retry
I1209 18:15:43.971724 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/pv-provisioner-controller-token-dejq4
I1209 18:15:43.971774 20134 create_dockercfg_secrets.go:90] Updating service account disruption-controller
I1209 18:15:43.971788 20134 create_dockercfg_secrets.go:90] Updating service account service-load-balancer-controller
I1209 18:15:43.982650 20134 create_dockercfg_secrets.go:90] Updating service account service-serving-cert-controller
I1209 18:15:43.982968 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/deployer-token-8lwt2
I1209 18:15:44.001613 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/default-token-98z7m
I1209 18:15:44.001803 20134 create_dockercfg_secrets.go:90] Updating service account build-controller
I1209 18:15:44.009111 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-lru8f" for service account openshift/deployer
I1209 18:15:44.015434 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:44.020640 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "endpoint-controller-dockercfg-hx89r" for service account openshift-infra/endpoint-controller
I1209 18:15:44.025981 20134 create_servercert.go:122] Generated new server certificate as openshift.local.config/node-localhost/server.crt, key as openshift.local.config/node-localhost/server.key
I1209 18:15:44.026280 20134 create_kubeconfig.go:142] creating a .kubeconfig with: admin.CreateKubeConfigOptions{APIServerURL:"https://192.168.121.18:8443", PublicAPIServerURL:"", APIServerCAFiles:[]string{"openshift.local.config/node-localhost/ca.crt"}, CertFile:"openshift.local.config/node-localhost/master-client.crt", KeyFile:"openshift.local.config/node-localhost/master-client.key", ContextNamespace:"default", KubeConfigFile:"openshift.local.config/node-localhost/node.kubeconfig", Output:(*util.gLogWriter)(0xc82dac65ec)}
I1209 18:15:44.027653 20134 create_kubeconfig.go:210] Generating 'system:node:localhost/192-168-121-18:8443' API client config as openshift.local.config/node-localhost/node.kubeconfig
I1209 18:15:44.030163 20134 create_nodeconfig.go:275] Created node config for localhost in openshift.local.config/node-localhost
W1209 18:15:44.035513 20134 node_config.go:103] Using "localhost" as node name will not resolve from all locations
I1209 18:15:44.035995 20134 server.go:122] Running kubelet in containerized mode (experimental)
I1209 18:15:44.036204 20134 docker.go:418] Connecting to docker on unix:///var/run/docker.sock
I1209 18:15:44.036221 20134 docker.go:438] Start docker client with request timeout=2m0s
E1209 18:15:44.037187 20134 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
I1209 18:15:44.043892 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/builder-token-dq2n2
I1209 18:15:44.045716 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift/deployer is not populated yet
I1209 18:15:44.045734 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift/deployer, will retry
I1209 18:15:44.045840 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "service-load-balancer-controller-dockercfg-zjzuk" for service account openshift-infra/service-load-balancer-controller
I1209 18:15:44.053726 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:44.067653 20134 create_dockercfg_secrets.go:460] Creating token secret "default-token-t4356" for service account default/default
I1209 18:15:44.071695 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:44.071804 20134 create_dockercfg_secrets.go:90] Updating service account job-controller
I1209 18:15:44.072278 20134 tokens_controller.go:449] deleting secret openshift/deployer-token-yn9xx because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "deployer": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:15:44.073069 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "service-serving-cert-controller-dockercfg-fxtr1" for service account openshift-infra/service-serving-cert-controller
I1209 18:15:44.083066 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:44.083251 20134 create_dockercfg_secrets.go:158] Adding token secret openshift/builder-token-ekyde
I1209 18:15:44.096500 20134 create_dockercfg_secrets.go:156] Updating token secret kube-system/default-token-t6bhq
I1209 18:15:44.098145 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:44.103586 20134 create_dockercfg_secrets.go:479] Token secret for service account default/default is not populated yet
I1209 18:15:44.103624 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/default, will retry
I1209 18:15:44.105590 20134 create_dockercfg_secrets.go:158] Adding token secret openshift/default-token-a76x6
I1209 18:15:44.105763 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:44.106652 20134 start_node.go:303] Starting node localhost (v1.5.0-alpha.0+69afb3a-296)
I1209 18:15:44.108710 20134 start_node.go:312] Connecting to API server https://192.168.121.18:8443
I1209 18:15:44.119992 20134 docker.go:418] Connecting to docker on unix:///var/run/docker.sock
I1209 18:15:44.120105 20134 docker.go:438] Start docker client with request timeout=0
I1209 18:15:44.125274 20134 node.go:142] Connecting to Docker at unix:///var/run/docker.sock
I1209 18:15:44.131147 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:44.134112 20134 tokens_controller.go:449] deleting secret default/default-token-30tce because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "default": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:15:44.140595 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:44.149933 20134 create_dockercfg_secrets.go:479] Token secret for service account default/builder is not populated yet
I1209 18:15:44.149949 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/builder, will retry
I1209 18:15:44.150013 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "pv-provisioner-controller-dockercfg-wg0x1" for service account openshift-infra/pv-provisioner-controller
I1209 18:15:44.150296 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "default-dockercfg-0uhp4" for service account kube-system/default
I1209 18:15:44.151439 20134 create_dockercfg_secrets.go:90] Updating service account pv-binder-controller
I1209 18:15:44.154127 20134 create_dockercfg_secrets.go:460] Creating token secret "unidling-controller-token-qbosi" for service account openshift-infra/unidling-controller
I1209 18:15:44.161704 20134 create_dockercfg_secrets.go:90] Updating service account service-ingress-ip-controller
I1209 18:15:44.162162 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/gc-controller-token-xtp29
I1209 18:15:44.175030 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:44.183431 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:44.184648 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/unidling-controller is not populated yet
I1209 18:15:44.184678 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/unidling-controller, will retry
I1209 18:15:44.186007 20134 create_dockercfg_secrets.go:460] Creating token secret "replicaset-controller-token-lz4hd" for service account openshift-infra/replicaset-controller
I1209 18:15:44.194168 20134 create_dockercfg_secrets.go:90] Updating service account pv-attach-detach-controller
I1209 18:15:44.198439 20134 create_dockercfg_secrets.go:156] Updating token secret kube-system/deployer-token-nykkp
I1209 18:15:44.199704 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/disruption-controller-token-dxmig
I1209 18:15:44.206484 20134 create_dockercfg_secrets.go:479] Token secret for service account default/deployer is not populated yet
I1209 18:15:44.206499 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/deployer, will retry
I1209 18:15:44.207877 20134 create_dockercfg_secrets.go:90] Updating service account endpoint-controller
I1209 18:15:44.209901 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/replicaset-controller is not populated yet
I1209 18:15:44.209916 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/replicaset-controller, will retry
I1209 18:15:44.218064 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "gc-controller-dockercfg-9vuce" for service account openshift-infra/gc-controller
I1209 18:15:44.222696 20134 manager.go:140] cAdvisor running in container: "/system.slice/docker-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769.scope"
I1209 18:15:44.233023 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:44.233467 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:44.234963 20134 node.go:390] Using iptables Proxier.
I1209 18:15:44.240421 20134 create_dockercfg_secrets.go:90] Updating service account service-load-balancer-controller
I1209 18:15:44.242600 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "deployer-dockercfg-ilbta" for service account kube-system/deployer
I1209 18:15:44.249048 20134 etcd_watcher.go:318] watch: 1 objects queued in incoming channel.
I1209 18:15:44.258139 20134 create_dockercfg_secrets.go:90] Updating service account unidling-controller
I1209 18:15:44.258162 20134 create_dockercfg_secrets.go:90] Updating service account service-serving-cert-controller
I1209 18:15:44.259739 20134 create_dockercfg_secrets.go:460] Creating token secret "daemonset-controller-token-ri53o" for service account openshift-infra/daemonset-controller
I1209 18:15:44.259966 20134 create_dockercfg_secrets.go:460] Creating token secret "deployment-controller-token-sp05w" for service account openshift-infra/deployment-controller
W1209 18:15:44.263594 20134 manager.go:148] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I1209 18:15:44.265892 20134 create_dockercfg_secrets.go:460] Creating token secret "deploymentconfig-controller-token-axnu8" for service account openshift-infra/deploymentconfig-controller
I1209 18:15:44.268764 20134 create_dockercfg_secrets.go:90] Updating service account namespace-controller
W1209 18:15:44.273560 20134 node.go:507] Failed to retrieve node info: nodes "localhost" not found
W1209 18:15:44.273700 20134 proxier.go:226] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
I1209 18:15:44.273723 20134 node.go:416] Tearing down userspace rules.
I1209 18:15:44.273746 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:15:44.283216 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/namespace-controller-token-6ef2d
I1209 18:15:44.283676 20134 create_dockercfg_secrets.go:90] Updating service account replicaset-controller
I1209 18:15:44.284781 20134 iptables.go:362] running iptables -D [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:15:44.292788 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/deploymentconfig-controller is not populated yet
I1209 18:15:44.292838 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/deploymentconfig-controller, will retry
I1209 18:15:44.292910 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "namespace-controller-dockercfg-09piq" for service account openshift-infra/namespace-controller
I1209 18:15:44.293790 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/daemonset-controller is not populated yet
I1209 18:15:44.293818 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/daemonset-controller, will retry
I1209 18:15:44.293850 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/pet-set-controller is not populated yet
I1209 18:15:44.294166 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/pet-set-controller, will retry
I1209 18:15:44.294516 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/pv-recycler-controller is not populated yet
I1209 18:15:44.294538 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/pv-recycler-controller, will retry
I1209 18:15:44.295240 20134 create_dockercfg_secrets.go:479] Token secret for service account kube-system/builder is not populated yet
I1209 18:15:44.295266 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account kube-system/builder, will retry
I1209 18:15:44.295611 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/default is not populated yet
I1209 18:15:44.295634 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/default, will retry
I1209 18:15:44.295958 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/deployer is not populated yet
I1209 18:15:44.295980 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/deployer, will retry
I1209 18:15:44.296005 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/builder is not populated yet
I1209 18:15:44.296291 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/builder, will retry
I1209 18:15:44.296621 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift/default is not populated yet
I1209 18:15:44.296643 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift/default, will retry
I1209 18:15:44.296952 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift/builder is not populated yet
I1209 18:15:44.296974 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift/builder, will retry
I1209 18:15:44.297268 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/disruption-controller is not populated yet
I1209 18:15:44.297289 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/disruption-controller, will retry
I1209 18:15:44.297697 20134 create_dockercfg_secrets.go:460] Creating token secret "job-controller-token-pf94k" for service account openshift-infra/job-controller
I1209 18:15:44.299051 20134 healthcheck.go:119] Initializing kube-proxy health checker
I1209 18:15:44.299210 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/deployment-controller is not populated yet
I1209 18:15:44.299233 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/deployment-controller, will retry
I1209 18:15:44.299313 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-lru8f" for service account openshift/deployer
I1209 18:15:44.300457 20134 create_dockercfg_secrets.go:90] Updating service account hpa-controller
I1209 18:15:44.306132 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/pet-set-controller-token-sd2s6
I1209 18:15:44.309216 20134 create_dockercfg_secrets.go:90] Updating service account pv-provisioner-controller
I1209 18:15:44.312057 20134 fs.go:116] Filesystem partitions: map[/dev/mapper/docker-252:1-262311-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f:{mountpoint:/ major:253 minor:1 fsType:xfs blockSize:0} /dev/vda1:{mountpoint:/var/lib/docker/devicemapper major:252 minor:1 fsType:ext4 blockSize:0}]
I1209 18:15:44.315502 20134 create_dockercfg_secrets.go:460] Creating token secret "replication-controller-token-f4env" for service account openshift-infra/replication-controller
I1209 18:15:44.315943 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/job-controller is not populated yet
I1209 18:15:44.315958 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/job-controller, will retry
I1209 18:15:44.316114 20134 create_dockercfg_secrets.go:460] Creating token secret "default-token-t4356" for service account default/default
I1209 18:15:44.312838 20134 create_dockercfg_secrets.go:90] Updating service account pv-recycler-controller
I1209 18:15:44.317862 20134 create_dockercfg_secrets.go:90] Updating service account pet-set-controller
I1209 18:15:44.322125 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:15:44.322514 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift/deployer is not populated yet
I1209 18:15:44.329827 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift/deployer, will retry
I1209 18:15:44.324424 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/pv-recycler-controller-token-2360d
I1209 18:15:44.323193 20134 manager.go:195] Machine: {NumCores:2 CpuFrequency:2399996 MemoryCapacity:8223051776 MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 Filesystems:[{Device:/dev/mapper/docker-252:1-262311-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f Capacity:10725883904 Type:vfs Inodes:5242368 HasInodes:true} {Device:/dev/vda1 Capacity:42140499968 Type:vfs Inodes:2621440 HasInodes:true}] DiskMap:map[253:1:{Name:dm-1 Major:253 Minor:1 Size:10737418240 Scheduler:none} 252:0:{Name:vda Major:252 Minor:0 Size:44023414784 Scheduler:none} 253:0:{Name:dm-0 Major:253 Minor:0 Size:107374182400 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:52:54:00:56:0f:2f Speed:-1 Mtu:1500}] Topology:[{Id:0 Memory:8223051776 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]} {Id:1 Memory:0 Cores:[{Id:0 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unified Level:2}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I1209 18:15:44.337522 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:44.339132 20134 create_dockercfg_secrets.go:460] Creating token secret "builder-token-cpjqi" for service account default/builder
I1209 18:15:44.344031 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/replication-controller is not populated yet
I1209 18:15:44.344045 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/replication-controller, will retry
I1209 18:15:44.344100 20134 create_dockercfg_secrets.go:460] Creating token secret "unidling-controller-token-qbosi" for service account openshift-infra/unidling-controller
I1209 18:15:44.351356 20134 create_dockercfg_secrets.go:156] Updating token secret kube-system/builder-token-27klh
I1209 18:15:44.351512 20134 create_dockercfg_secrets.go:479] Token secret for service account default/builder is not populated yet
I1209 18:15:44.351526 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/builder, will retry
I1209 18:15:44.351572 20134 create_dockercfg_secrets.go:460] Creating token secret "replicaset-controller-token-lz4hd" for service account openshift-infra/replicaset-controller
I1209 18:15:44.352019 20134 manager.go:201] Version: {KernelVersion:4.8.6-300.fc25.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:1.12.3 CadvisorVersion: CadvisorRevision:}
I1209 18:15:44.353153 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "pet-set-controller-dockercfg-epimm" for service account openshift-infra/pet-set-controller
I1209 18:15:44.353951 20134 oom_linux.go:69] attempting to set "/proc/self/oom_score_adj" to "-999"
I1209 18:15:44.354759 20134 server.go:610] Sending events to api server.
I1209 18:15:44.355067 20134 server.go:644] Using root directory: /var/lib/origin/openshift.local.volumes
I1209 18:15:44.355260 20134 create_dockercfg_secrets.go:90] Updating service account deployment-controller
I1209 18:15:44.355280 20134 create_dockercfg_secrets.go:90] Updating service account daemonset-controller
I1209 18:15:44.355516 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-gavl7" for service account default/deployer
I1209 18:15:44.355713 20134 create_dockercfg_secrets.go:479] Token secret for service account default/default is not populated yet
I1209 18:15:44.355744 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/default, will retry
I1209 18:15:44.355833 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "pv-recycler-controller-dockercfg-06o4l" for service account openshift-infra/pv-recycler-controller
I1209 18:15:44.356549 20134 kubelet.go:262] Watching apiserver
I1209 18:15:44.356785 20134 reflector.go:200] Starting reflector *api.Pod (0) from pkg/kubelet/config/apiserver.go:43
I1209 18:15:44.357035 20134 reflector.go:200] Starting reflector *api.Service (0) from pkg/kubelet/kubelet.go:384
I1209 18:15:44.357083 20134 reflector.go:200] Starting reflector *api.Node (0) from pkg/kubelet/kubelet.go:403
I1209 18:15:44.366245 20134 iptables.go:362] running iptables -D [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:15:44.370166 20134 reflector.go:249] Listing and watching *api.Pod from pkg/kubelet/config/apiserver.go:43
I1209 18:15:44.370820 20134 reflector.go:249] Listing and watching *api.Service from pkg/kubelet/kubelet.go:384
I1209 18:15:44.371304 20134 reflector.go:249] Listing and watching *api.Node from pkg/kubelet/kubelet.go:403
I1209 18:15:44.375136 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:15:44.384764 20134 iptables.go:362] running iptables -D [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:15:44.394180 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:15:44.402539 20134 create_dockercfg_secrets.go:90] Updating service account deploymentconfig-controller
I1209 18:15:44.402633 20134 create_dockercfg_secrets.go:90] Updating service account default
W1209 18:15:44.404662 20134 kubelet_network.go:71] Hairpin mode set to "promiscuous-bridge" but configureCBR0 is false, falling back to "hairpin-veth"
I1209 18:15:44.404702 20134 kubelet.go:513] Hairpin mode set to "hairpin-veth"
I1209 18:15:44.408465 20134 iptables.go:362] running iptables -D [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:15:44.416780 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:15:44.424653 20134 iptables.go:362] running iptables -D [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:15:44.439072 20134 iptables.go:362] running iptables -F [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:15:44.448421 20134 config.go:281] Setting pods for source api
I1209 18:15:44.450207 20134 iptables.go:362] running iptables -X [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:15:44.460441 20134 iptables.go:362] running iptables -F [KUBE-PORTALS-HOST -t nat]
I1209 18:15:44.465455 20134 docker_manager.go:242] Setting dockerRoot to /var/lib/docker
I1209 18:15:44.465639 20134 plugins.go:56] Registering credential provider: .dockercfg
I1209 18:15:44.466144 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/aws-ebs"
I1209 18:15:44.466163 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/empty-dir"
I1209 18:15:44.466235 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/gce-pd"
I1209 18:15:44.466260 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/git-repo"
I1209 18:15:44.466270 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/host-path"
I1209 18:15:44.466282 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/nfs"
I1209 18:15:44.466292 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/secret"
I1209 18:15:44.466304 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/iscsi"
I1209 18:15:44.466320 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/glusterfs"
I1209 18:15:44.466343 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/rbd"
I1209 18:15:44.466356 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/cinder"
I1209 18:15:44.466485 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/quobyte"
I1209 18:15:44.466503 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/cephfs"
I1209 18:15:44.466514 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/downward-api"
I1209 18:15:44.466521 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/fc"
I1209 18:15:44.466527 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/flocker"
I1209 18:15:44.466534 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/azure-file"
I1209 18:15:44.466562 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/configmap"
I1209 18:15:44.466572 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/vsphere-volume"
I1209 18:15:44.466580 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/azure-disk"
I1209 18:15:44.473676 20134 iptables.go:362] running iptables -X [KUBE-PORTALS-HOST -t nat]
I1209 18:15:44.481954 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/unidling-controller is not populated yet
I1209 18:15:44.481981 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/unidling-controller, will retry
I1209 18:15:44.482022 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "builder-dockercfg-shhbj" for service account kube-system/builder
I1209 18:15:44.484648 20134 iptables.go:362] running iptables -F [KUBE-NODEPORT-HOST -t nat]
I1209 18:15:44.493562 20134 iptables.go:362] running iptables -X [KUBE-NODEPORT-HOST -t nat]
I1209 18:15:44.497741 20134 create_dockercfg_secrets.go:479] Token secret for service account default/deployer is not populated yet
I1209 18:15:44.497766 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/deployer, will retry
I1209 18:15:44.497927 20134 create_dockercfg_secrets.go:460] Creating token secret "deployment-controller-token-sp05w" for service account openshift-infra/deployment-controller
I1209 18:15:44.499229 20134 server.go:714] Started kubelet v1.4.0+776c994
I1209 18:15:44.499350 20134 server.go:118] Starting to listen on 0.0.0.0:10250
I1209 18:15:44.502889 20134 iptables.go:362] running iptables -F [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:15:44.508943 20134 create_dockercfg_secrets.go:90] Updating service account deployer
E1209 18:15:44.510229 20134 kubelet.go:1091] Image garbage collection failed: unable to find data for container /
I1209 18:15:44.510994 20134 kubelet_node_status.go:229] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I1209 18:15:44.511590 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:15:44.511903 20134 interface.go:93] Interface eth0 is up
I1209 18:15:44.512163 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:15:44.512471 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:15:44.512496 20134 interface.go:114] IP found 192.168.121.18
I1209 18:15:44.512511 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:15:44.512523 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:15:44.513389 20134 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'Starting' Starting kubelet.
I1209 18:15:44.517843 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/replicaset-controller is not populated yet
I1209 18:15:44.518136 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/replicaset-controller, will retry
I1209 18:15:44.518394 20134 create_dockercfg_secrets.go:460] Creating token secret "daemonset-controller-token-ri53o" for service account openshift-infra/daemonset-controller
I1209 18:15:44.524722 20134 iptables.go:362] running iptables -X [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:15:44.534497 20134 iptables.go:362] running iptables -F [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:15:44.542801 20134 iptables.go:362] running iptables -X [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:15:44.553415 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:44.555610 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:15:44.555739 20134 interface.go:93] Interface eth0 is up
I1209 18:15:44.556170 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:15:44.556209 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:15:44.556223 20134 interface.go:114] IP found 192.168.121.18
I1209 18:15:44.556673 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:15:44.556700 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:15:44.556992 20134 proxier.go:177] Setting proxy IP to 192.168.121.18 and initializing iptables
I1209 18:15:44.557032 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:15:44.572407 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:15:44.579813 20134 create_dockercfg_secrets.go:90] Updating service account gc-controller
I1209 18:15:44.581053 20134 iptables.go:362] running iptables -I [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:15:44.588618 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:15:44.596216 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:15:44.601842 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:44.602209 20134 image_gc_manager.go:171] Pod test/deployment-example-1-deploy, container deployment uses image openshift/origin-deployer:v1.5.0-alpha.0(sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d)
I1209 18:15:44.602236 20134 image_gc_manager.go:171] Pod test/deployment-example-1-deploy, container POD uses image openshift/origin-pod:v1.5.0-alpha.0(sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de)
I1209 18:15:44.602253 20134 image_gc_manager.go:182] Adding image ID sha256:02998436bf31e2ccffbbbea70f8d713d6a785e0db7bb1c35ca831967b9a7a346 to currentImages
I1209 18:15:44.602267 20134 image_gc_manager.go:187] Image ID sha256:02998436bf31e2ccffbbbea70f8d713d6a785e0db7bb1c35ca831967b9a7a346 is new
I1209 18:15:44.602283 20134 image_gc_manager.go:199] Image ID sha256:02998436bf31e2ccffbbbea70f8d713d6a785e0db7bb1c35ca831967b9a7a346 has size 540431972
I1209 18:15:44.602296 20134 image_gc_manager.go:182] Adding image ID sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d to currentImages
I1209 18:15:44.602307 20134 image_gc_manager.go:187] Image ID sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d is new
I1209 18:15:44.602318 20134 image_gc_manager.go:195] Setting Image ID sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d lastUsed to 2016-12-09 18:15:44.602245765 +0000 UTC
I1209 18:15:44.603398 20134 image_gc_manager.go:199] Image ID sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d has size 488015134
I1209 18:15:44.603422 20134 image_gc_manager.go:182] Adding image ID sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de to currentImages
I1209 18:15:44.603434 20134 image_gc_manager.go:187] Image ID sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de is new
I1209 18:15:44.603447 20134 image_gc_manager.go:195] Setting Image ID sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de lastUsed to 2016-12-09 18:15:44.602245765 +0000 UTC
I1209 18:15:44.603864 20134 image_gc_manager.go:199] Image ID sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de has size 1138998
I1209 18:15:44.604058 20134 iptables.go:362] running iptables -I [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:15:44.617229 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:15:44.625205 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:15:44.634229 20134 iptables.go:362] running iptables -A [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:15:44.641548 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:15:44.649025 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
E1209 18:15:44.652665 20134 kubelet.go:2124] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E1209 18:15:44.652700 20134 kubelet.go:2132] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I1209 18:15:44.652707 20134 kubelet_node_status.go:377] Recording NodeHasSufficientDisk event message for node localhost
I1209 18:15:44.652731 20134 kubelet_node_status.go:377] Recording NodeHasSufficientMemory event message for node localhost
I1209 18:15:44.652834 20134 kubelet_node_status.go:377] Recording NodeHasNoDiskPressure event message for node localhost
I1209 18:15:44.656764 20134 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node localhost status is now: NodeHasSufficientDisk
I1209 18:15:44.656917 20134 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node localhost status is now: NodeHasSufficientMemory
I1209 18:15:44.656972 20134 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node localhost status is now: NodeHasNoDiskPressure
I1209 18:15:44.657406 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:44.660031 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/deployment-controller is not populated yet
I1209 18:15:44.660055 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/deployment-controller, will retry
I1209 18:15:44.660198 20134 create_dockercfg_secrets.go:460] Creating token secret "deploymentconfig-controller-token-axnu8" for service account openshift-infra/deploymentconfig-controller
I1209 18:15:44.662829 20134 create_dockercfg_secrets.go:90] Updating service account replication-controller
I1209 18:15:44.663287 20134 etcd_watcher.go:318] watch: 5 objects queued in incoming channel.
I1209 18:15:44.665814 20134 create_dockercfg_secrets.go:460] Creating token secret "default-token-t4356" for service account default/default
I1209 18:15:44.669985 20134 etcd_watcher.go:160] watch (*api.Secret): 2 objects queued in outgoing channel.
I1209 18:15:44.672855 20134 etcd_watcher.go:160] watch (*api.Secret): 3 objects queued in outgoing channel.
I1209 18:15:44.675151 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/daemonset-controller is not populated yet
I1209 18:15:44.676454 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/daemonset-controller, will retry
I1209 18:15:44.676499 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-lru8f" for service account openshift/deployer
I1209 18:15:44.678806 20134 iptables.go:362] running iptables -A [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:15:44.680988 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:44.681233 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/job-controller-token-pf94k
I1209 18:15:44.690279 20134 create_dockercfg_secrets.go:158] Adding token secret openshift/deployer-token-lru8f
I1209 18:15:44.690709 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/deployer-token-8lwt2
I1209 18:15:44.692484 20134 create_dockercfg_secrets.go:460] Creating token secret "replication-controller-token-f4env" for service account openshift-infra/replication-controller
I1209 18:15:44.692841 20134 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I1209 18:15:44.692993 20134 status_manager.go:129] Starting to sync pod status with apiserver
I1209 18:15:44.693051 20134 kubelet.go:2226] Starting kubelet main sync loop.
I1209 18:15:44.693064 20134 kubelet.go:2237] skipping pod synchronization - [network state unknown container runtime is down]
I1209 18:15:44.693098 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:44.696613 20134 volume_manager.go:232] The desired_state_of_world populator starts
I1209 18:15:44.696994 20134 volume_manager.go:234] Starting Kubelet Volume Manager
I1209 18:15:44.699800 20134 container_manager_linux.go:625] attempting to apply oom_score_adj of -999 to pid 15164
I1209 18:15:44.699814 20134 oom_linux.go:69] attempting to set "/proc/15164/oom_score_adj" to "-999"
E1209 18:15:44.699869 20134 container_manager_linux.go:567] error opening pid file /run/docker/libcontainerd/docker-containerd.pid: open /run/docker/libcontainerd/docker-containerd.pid: no such file or directory
I1209 18:15:44.710753 20134 iptables.go:362] running iptables -N [KUBE-MARK-DROP -t nat]
I1209 18:15:44.714920 20134 nodecontroller.go:194] Sending events to api server.
I1209 18:15:44.718687 20134 factory.go:257] Creating scheduler from algorithm provider 'DefaultProvider'
I1209 18:15:44.718702   20134 factory.go:303] creating scheduler with fit predicates 'map[NoDiskConflict:{} NoVolumeZoneConflict:{} MaxEBSVolumeCount:{} MatchInterPodAffinity:{} MaxGCEPDVolumeCount:{} GeneralPredicates:{} PodToleratesNodeTaints:{} CheckNodeMemoryPressure:{} CheckNodeDiskPressure:{}]' and priority functions 'map[InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} SelectorSpreadPriority:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]'
I1209 18:15:44.718893 20134 reflector.go:211] Starting reflector *api.Pod (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:389
I1209 18:15:44.718919 20134 reflector.go:211] Starting reflector *api.PersistentVolume (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:399
I1209 18:15:44.718932 20134 reflector.go:211] Starting reflector *api.PersistentVolumeClaim (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:400
I1209 18:15:44.719113 20134 reflector.go:211] Starting reflector *api.Service (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:405
I1209 18:15:44.719136 20134 reflector.go:211] Starting reflector *api.ReplicationController (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:410
I1209 18:15:44.719151 20134 reflector.go:211] Starting reflector *extensions.ReplicaSet (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:415
I1209 18:15:44.697144 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:44.835742 20134 reflector.go:249] Listing and watching *api.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:389
I1209 18:15:44.836564 20134 kubelet_node_status.go:229] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I1209 18:15:44.837291 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:15:44.838023 20134 interface.go:93] Interface eth0 is up
I1209 18:15:44.838397 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:15:44.838752 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:15:44.838776 20134 interface.go:114] IP found 192.168.121.18
I1209 18:15:44.838868 20134 reflector.go:211] Starting reflector *api.Pod (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:392
I1209 18:15:44.838914 20134 reflector.go:249] Listing and watching *api.Pod from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:392
I1209 18:15:44.838802 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:15:44.839190 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:15:44.839873 20134 reflector.go:211] Starting reflector *extensions.ReplicaSet (30s) from pkg/controller/replicaset/replica_set.go:205
I1209 18:15:44.839932 20134 reflector.go:249] Listing and watching *extensions.ReplicaSet from pkg/controller/replicaset/replica_set.go:205
I1209 18:15:44.847441 20134 reflector.go:211] Starting reflector *extensions.Deployment (30s) from pkg/controller/deployment/deployment_controller.go:181
I1209 18:15:44.847474 20134 reflector.go:249] Listing and watching *extensions.Deployment from pkg/controller/deployment/deployment_controller.go:181
I1209 18:15:44.849471 20134 reflector.go:211] Starting reflector *extensions.ReplicaSet (17h32m28.34680838s) from pkg/controller/deployment/deployment_controller.go:182
I1209 18:15:44.849493 20134 reflector.go:249] Listing and watching *extensions.ReplicaSet from pkg/controller/deployment/deployment_controller.go:182
I1209 18:15:44.849799 20134 reflector.go:211] Starting reflector *api.Pod (18h8m55.838581596s) from pkg/controller/deployment/deployment_controller.go:183
I1209 18:15:44.849828 20134 reflector.go:249] Listing and watching *api.Pod from pkg/controller/deployment/deployment_controller.go:183
I1209 18:15:44.856152 20134 reflector.go:211] Starting reflector *api.Node (0) from pkg/controller/node/nodecontroller.go:394
I1209 18:15:44.856232 20134 reflector.go:249] Listing and watching *api.Node from pkg/controller/node/nodecontroller.go:394
I1209 18:15:44.856569 20134 reflector.go:211] Starting reflector *extensions.DaemonSet (0) from pkg/controller/node/nodecontroller.go:396
I1209 18:15:44.856590 20134 reflector.go:249] Listing and watching *extensions.DaemonSet from pkg/controller/node/nodecontroller.go:396
I1209 18:15:44.857416 20134 controller.go:90] Starting ScheduledJob Manager
I1209 18:15:44.857642 20134 horizontal.go:134] Starting HPA Controller
I1209 18:15:44.857829 20134 reflector.go:211] Starting reflector *autoscaling.HorizontalPodAutoscaler (30s) from pkg/controller/podautoscaler/horizontal.go:135
I1209 18:15:44.857850 20134 reflector.go:249] Listing and watching *autoscaling.HorizontalPodAutoscaler from pkg/controller/podautoscaler/horizontal.go:135
I1209 18:15:44.858111 20134 daemoncontroller.go:236] Starting Daemon Sets controller manager
I1209 18:15:44.858165 20134 disruption.go:257] Starting disruption controller
I1209 18:15:44.858168 20134 disruption.go:259] Sending events to api server.
I1209 18:15:44.858587 20134 reflector.go:211] Starting reflector *batch.Job (10m0s) from pkg/controller/job/jobcontroller.go:148
I1209 18:15:44.858615 20134 reflector.go:249] Listing and watching *batch.Job from pkg/controller/job/jobcontroller.go:148
I1209 18:15:44.859006 20134 reflector.go:211] Starting reflector *extensions.DaemonSet (30s) from pkg/controller/daemon/daemoncontroller.go:237
I1209 18:15:44.859028 20134 reflector.go:249] Listing and watching *extensions.DaemonSet from pkg/controller/daemon/daemoncontroller.go:237
I1209 18:15:44.859427 20134 reflector.go:211] Starting reflector *api.Node (16h1m44.049317327s) from pkg/controller/daemon/daemoncontroller.go:239
I1209 18:15:44.859447 20134 reflector.go:249] Listing and watching *api.Node from pkg/controller/daemon/daemoncontroller.go:239
I1209 18:15:44.859830 20134 reflector.go:211] Starting reflector *policy.PodDisruptionBudget (30s) from pkg/controller/disruption/disruption.go:264
I1209 18:15:44.859908 20134 reflector.go:249] Listing and watching *policy.PodDisruptionBudget from pkg/controller/disruption/disruption.go:264
I1209 18:15:44.860229 20134 reflector.go:211] Starting reflector *api.ReplicationController (30s) from pkg/controller/disruption/disruption.go:266
I1209 18:15:44.860290 20134 reflector.go:249] Listing and watching *api.ReplicationController from pkg/controller/disruption/disruption.go:266
I1209 18:15:44.860715 20134 reflector.go:211] Starting reflector *extensions.ReplicaSet (30s) from pkg/controller/disruption/disruption.go:267
I1209 18:15:44.860788 20134 reflector.go:249] Listing and watching *extensions.ReplicaSet from pkg/controller/disruption/disruption.go:267
I1209 18:15:44.861161 20134 reflector.go:211] Starting reflector *extensions.Deployment (30s) from pkg/controller/disruption/disruption.go:268
I1209 18:15:44.861181 20134 reflector.go:249] Listing and watching *extensions.Deployment from pkg/controller/disruption/disruption.go:268
I1209 18:15:44.861596 20134 reflector.go:211] Starting reflector *api.Service (30s) from pkg/controller/endpoint/endpoints_controller.go:158
I1209 18:15:44.861664 20134 reflector.go:249] Listing and watching *api.Service from pkg/controller/endpoint/endpoints_controller.go:158
I1209 18:15:44.861936 20134 generic.go:141] GenericPLEG: 9194052f-be36-11e6-8171-525400560f2f/5122ba25e59b3e0aa0dae770964e9747c9e49fc15f1f6d260713914d3190a104: non-existent -> exited
I1209 18:15:44.861955 20134 generic.go:141] GenericPLEG: 9194052f-be36-11e6-8171-525400560f2f/955f3ff60dee1a6e78eca0486c2e4f580321c66e34d5075f518699918d5c0b41: non-existent -> exited
I1209 18:15:44.866031 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/deploymentconfig-controller is not populated yet
I1209 18:15:44.866521 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/deploymentconfig-controller, will retry
I1209 18:15:44.867038 20134 create_dockercfg_secrets.go:460] Creating token secret "builder-token-cpjqi" for service account default/builder
I1209 18:15:44.868879 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:15:44.882887 20134 create_dockercfg_secrets.go:479] Token secret for service account default/default is not populated yet
I1209 18:15:44.882921 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/default, will retry
I1209 18:15:44.883088 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "deployer-dockercfg-f3ey1" for service account openshift-infra/deployer
I1209 18:15:44.891041 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:15:44.839354 20134 reflector.go:211] Starting reflector *api.Node (0) from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:395
I1209 18:15:44.899144 20134 reflector.go:249] Listing and watching *api.Node from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:395
I1209 18:15:44.839373 20134 reflector.go:249] Listing and watching *api.PersistentVolume from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:399
I1209 18:15:44.839423 20134 reflector.go:249] Listing and watching *api.PersistentVolumeClaim from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:400
I1209 18:15:44.839445 20134 reflector.go:249] Listing and watching *api.Service from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:405
I1209 18:15:44.839475 20134 reflector.go:249] Listing and watching *api.ReplicationController from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:410
I1209 18:15:44.839493 20134 reflector.go:249] Listing and watching *extensions.ReplicaSet from github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:415
I1209 18:15:44.839607 20134 replication_controller.go:220] Starting RC Manager
I1209 18:15:44.839713 20134 iptables.go:362] running iptables -C [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
I1209 18:15:44.938026 20134 reflector.go:211] Starting reflector *api.ReplicationController (10m0s) from pkg/controller/replication/replication_controller.go:221
I1209 18:15:44.939007 20134 reflector.go:249] Listing and watching *api.ReplicationController from pkg/controller/replication/replication_controller.go:221
I1209 18:15:44.947049 20134 container_manager_linux.go:371] Discovered runtime cgroups name: /system.slice/docker.service
I1209 18:15:44.947090 20134 container_manager_linux.go:625] attempting to apply oom_score_adj of -999 to pid 20134
I1209 18:15:44.947100 20134 oom_linux.go:69] attempting to set "/proc/20134/oom_score_adj" to "-999"
I1209 18:15:44.961297 20134 trace.go:61] Trace "Create /api/v1/namespaces/openshift/secrets" (started 2016-12-09 18:15:44.680506063 +0000 UTC):
[14.428µs] [14.428µs] About to convert to expected version
[26.213µs] [11.785µs] Conversion done
[140.793µs] [114.58µs] About to store object in database
[280.761254ms] [280.620461ms] END
I1209 18:15:44.961446 20134 create_dockercfg_secrets.go:90] Updating service account namespace-controller
I1209 18:15:44.961486 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:44.973309 20134 factory.go:295] Registering Docker factory
W1209 18:15:44.973411 20134 manager.go:244] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
I1209 18:15:44.973430 20134 factory.go:54] Registering systemd factory
I1209 18:15:44.974360 20134 factory.go:86] Registering Raw factory
I1209 18:15:44.975177 20134 manager.go:1082] Started watching for new ooms in manager
I1209 18:15:44.983670 20134 oomparser.go:185] oomparser using systemd
I1209 18:15:44.983890 20134 factory.go:104] Error trying to work out if we can handle /: invalid container name
I1209 18:15:44.983906 20134 factory.go:115] Factory "docker" was unable to handle container "/"
I1209 18:15:44.983919 20134 factory.go:104] Error trying to work out if we can handle /: / not handled by systemd handler
I1209 18:15:44.983923 20134 factory.go:115] Factory "systemd" was unable to handle container "/"
I1209 18:15:44.983930 20134 factory.go:111] Using factory "raw" for container "/"
I1209 18:15:44.984663 20134 manager.go:874] Added container: "/" (aliases: [], namespace: "")
I1209 18:15:44.984917 20134 handler.go:325] Added event &{/ 2016-12-07 22:48:31.715 +0000 UTC containerCreation {<nil>}}
I1209 18:15:44.984955 20134 manager.go:285] Starting recovery of all containers
I1209 18:15:44.987399 20134 docker_manager.go:373] Container inspect result: {ContainerJSONBase:0xc82f499b80 Mounts:[{Name: Source:/var/lib/origin/openshift.local.volumes/pods/9194052f-be36-11e6-8171-525400560f2f/volumes/kubernetes.io~secret/deployer-token-bcaij Destination:/var/run/secrets/kubernetes.io/serviceaccount Driver: Mode:ro,Z RW:false Propagation:rprivate} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/9194052f-be36-11e6-8171-525400560f2f/etc-hosts Destination:/etc/hosts Driver: Mode: RW:true Propagation:rprivate} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/9194052f-be36-11e6-8171-525400560f2f/containers/deployment/3c17e55f Destination:/dev/termination-log Driver: Mode: RW:true Propagation:rprivate}] Config:0xc82df92c60 NetworkSettings:0xc821c61c00}
I1209 18:15:45.018279 20134 iptables.go:362] running iptables -I [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:15:45.024416 20134 container.go:407] Start housekeeping for container "/"
I1209 18:15:45.030915 20134 trace.go:61] Trace "etcdHelper::Create *api.Event" (started 2016-12-09 18:15:44.699059979 +0000 UTC):
[41.294µs] [41.294µs] Object encoded
[41.957µs] [663ns] Version checked
[331.73338ms] [331.691423ms] Object created
[331.825063ms] [91.683µs] END
I1209 18:15:45.031124 20134 trace.go:61] Trace "Create /api/v1/namespaces/default/events" (started 2016-12-09 18:15:44.698875981 +0000 UTC):
[16.548µs] [16.548µs] About to convert to expected version
[34.44µs] [17.892µs] Conversion done
[72.513µs] [38.073µs] About to store object in database
[332.061976ms] [331.989463ms] Object stored in database
[332.077413ms] [15.437µs] Self-link added
[332.111813ms] [34.4µs] END
I1209 18:15:45.031209 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:15:45.031217 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:15:45.036597 20134 iptables.go:362] running iptables -F [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:15:45.047723 20134 iptables.go:362] running iptables -F [KUBE-PORTALS-HOST -t nat]
I1209 18:15:45.057317 20134 trace.go:61] Trace "etcdHelper::Create *api.Secret" (started 2016-12-09 18:15:44.698219378 +0000 UTC):
[52.427µs] [52.427µs] Object encoded
[53.45µs] [1.023µs] Version checked
[359.033035ms] [358.979585ms] Object created
[359.044854ms] [11.819µs] END
I1209 18:15:45.058474 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:15:45.058513 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:15:45.077886 20134 iptables.go:362] running iptables -N [KUBE-FIREWALL -t filter]
I1209 18:15:45.086803 20134 iptables.go:362] running iptables -C [KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
I1209 18:15:45.094522 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -j KUBE-FIREWALL]
I1209 18:15:45.101156 20134 iptables.go:362] running iptables -C [INPUT -t filter -j KUBE-FIREWALL]
I1209 18:15:45.105491 20134 kubelet_node_status.go:377] Recording NodeHasSufficientDisk event message for node localhost
I1209 18:15:45.105799 20134 kubelet_node_status.go:377] Recording NodeHasSufficientMemory event message for node localhost
I1209 18:15:45.105836 20134 kubelet_node_status.go:377] Recording NodeHasNoDiskPressure event message for node localhost
I1209 18:15:45.106198 20134 kubelet_node_status.go:73] Attempting to register node localhost
I1209 18:15:45.108735 20134 iptables.go:362] running iptables -N [KUBE-MARK-MASQ -t nat]
I1209 18:15:45.115641 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:15:45.121622 20134 factory.go:104] Error trying to work out if we can handle /machine.slice: invalid container name
I1209 18:15:45.121665 20134 factory.go:115] Factory "docker" was unable to handle container "/machine.slice"
I1209 18:15:45.121683 20134 factory.go:104] Error trying to work out if we can handle /machine.slice: /machine.slice not handled by systemd handler
I1209 18:15:45.121695 20134 factory.go:115] Factory "systemd" was unable to handle container "/machine.slice"
I1209 18:15:45.121707 20134 factory.go:111] Using factory "raw" for container "/machine.slice"
I1209 18:15:45.122659 20134 iptables.go:362] running iptables -C [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
I1209 18:15:45.125699 20134 manager.go:874] Added container: "/machine.slice" (aliases: [], namespace: "")
I1209 18:15:45.126057 20134 handler.go:325] Added event &{/machine.slice 2016-12-09 17:39:39.567252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.128389 20134 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientDisk' Node localhost status is now: NodeHasSufficientDisk
I1209 18:15:45.128438 20134 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasSufficientMemory' Node localhost status is now: NodeHasSufficientMemory
I1209 18:15:45.128518 20134 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeHasNoDiskPressure' Node localhost status is now: NodeHasNoDiskPressure
I1209 18:15:45.128846 20134 container.go:407] Start housekeeping for container "/machine.slice"
I1209 18:15:45.129886 20134 iptables.go:362] running iptables -F [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:15:45.135956 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:15:45.140362 20134 iptables.go:362] running iptables -F [KUBE-NODEPORT-HOST -t nat]
I1209 18:15:45.146678 20134 iptables.go:362] running iptables -C [KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
I1209 18:15:45.151524 20134 iptables.go:362] running iptables -F [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:15:45.158888 20134 node.go:499] Started Kubernetes Proxy on 0.0.0.0
I1209 18:15:45.159008 20134 reflector.go:200] Starting reflector *api.Service (15m0s) from github.com/openshift/origin/pkg/cmd/server/kubernetes/node.go:267
I1209 18:15:45.159035 20134 reflector.go:200] Starting reflector *api.Endpoints (15m0s) from github.com/openshift/origin/pkg/cmd/server/kubernetes/node.go:272
I1209 18:15:45.159059 20134 reflector.go:249] Listing and watching *api.Endpoints from github.com/openshift/origin/pkg/cmd/server/kubernetes/node.go:272
I1209 18:15:45.168116 20134 create_dockercfg_secrets.go:90] Updating service account pet-set-controller
I1209 18:15:45.168731 20134 reflector.go:249] Listing and watching *api.Service from github.com/openshift/origin/pkg/cmd/server/kubernetes/node.go:267
I1209 18:15:45.169312 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift/deployer is not populated yet
I1209 18:15:45.169360 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift/deployer, will retry
I1209 18:15:45.169918 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-gavl7" for service account default/deployer
I1209 18:15:45.170369 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:15:45.170387 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:15:45.170474 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:15:45.181631 20134 trace.go:61] Trace "Update /api/v1/namespaces/kube-system/serviceaccounts/builder" (started 2016-12-09 18:15:44.69437228 +0000 UTC):
[29.817µs] [29.817µs] About to convert to expected version
[60.285µs] [30.468µs] Conversion done
[65.58µs] [5.295µs] About to store object in database
[487.126051ms] [487.060471ms] Object stored in database
[487.131981ms] [5.93µs] Self-link added
[487.211393ms] [79.412µs] END
I1209 18:15:45.182295 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:15:45.182313 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
I1209 18:15:45.191751 20134 docker_manager.go:373] Container inspect result: {ContainerJSONBase:0xc824133e40 Mounts:[] Config:0xc824f190e0 NetworkSettings:0xc8283b3e00}
E1209 18:15:45.196557 20134 thin_pool_watcher.go:61] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:15:45.198564 20134 trace.go:61] Trace "Create /api/v1/namespaces/openshift-infra/secrets" (started 2016-12-09 18:15:44.697689906 +0000 UTC):
[15.65µs] [15.65µs] About to convert to expected version
[29.108µs] [13.458µs] Conversion done
[399.271µs] [370.163µs] About to store object in database
[500.790888ms] [500.391617ms] END
I1209 18:15:45.199421 20134 factory.go:111] Using factory "docker" for container "/system.slice/docker-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769.scope"
I1209 18:15:45.211837 20134 generic.go:327] PLEG: Write status for deployment-example-1-deploy/test: &container.PodStatus{ID:"9194052f-be36-11e6-8171-525400560f2f", Name:"deployment-example-1-deploy", Namespace:"test", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc82ac14a80), (*container.ContainerStatus)(0xc82785c700)}} (err: <nil>)
I1209 18:15:45.217882 20134 config.go:147] Setting endpoints (config.EndpointsUpdate) {
Endpoints: ([]api.Endpoints) (len=1 cap=1) {
(api.Endpoints) &TypeMeta{Kind:,APIVersion:,}
},
Op: (config.Operation) 0
}
I1209 18:15:45.218012 20134 config.go:99] Calling handler.OnEndpointsUpdate()
I1209 18:15:45.218057 20134 roundrobin.go:273] LoadBalancerRR: Setting endpoints for default/kubernetes:https to [192.168.121.18:8443]
I1209 18:15:45.218149 20134 roundrobin.go:99] LoadBalancerRR service "default/kubernetes:https" did not exist, created
I1209 18:15:45.218171 20134 roundrobin.go:273] LoadBalancerRR: Setting endpoints for default/kubernetes:dns-tcp to [192.168.121.18:8053]
I1209 18:15:45.218182 20134 roundrobin.go:99] LoadBalancerRR service "default/kubernetes:dns-tcp" did not exist, created
I1209 18:15:45.218194 20134 roundrobin.go:273] LoadBalancerRR: Setting endpoints for default/kubernetes:dns to [192.168.121.18:8053]
I1209 18:15:45.218203 20134 roundrobin.go:99] LoadBalancerRR service "default/kubernetes:dns" did not exist, created
I1209 18:15:45.218235 20134 proxier.go:573] Setting endpoints for "default/kubernetes:https" to [192.168.121.18:8443]
I1209 18:15:45.218258 20134 proxier.go:573] Setting endpoints for "default/kubernetes:dns-tcp" to [192.168.121.18:8053]
I1209 18:15:45.218269 20134 proxier.go:573] Setting endpoints for "default/kubernetes:dns" to [192.168.121.18:8053]
I1209 18:15:45.218299 20134 proxier.go:755] Not syncing iptables until Services and Endpoints have been received from master
I1209 18:15:45.218303 20134 proxier.go:751] syncProxyRules took 20.157µs
I1209 18:15:45.218310 20134 proxier.go:523] OnEndpointsUpdate took 90.674µs for 1 endpoints
I1209 18:15:45.218317 20134 proxier.go:397] Received update notice: []
I1209 18:15:45.218350 20134 proxier.go:758] Syncing iptables rules
I1209 18:15:45.218359 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:15:45.227170 20134 create_dockercfg_secrets.go:479] Token secret for service account openshift-infra/replication-controller is not populated yet
I1209 18:15:45.227211 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account openshift-infra/replication-controller, will retry
I1209 18:15:45.227792 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:15:45.235268 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:15:45.241827 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:15:45.249220 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:15:45.256188 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:15:45.262364 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:15:45.269818 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:15:45.277821 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:15:45.291354 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-X KUBE-SVC-BA6I5HTZKAAAJT56
-X KUBE-SEP-FQLCC2RAIW2XP5BY
-X KUBE-SEP-LLAPZ6I53VN3DC7H
-X KUBE-SEP-U462OSCUJL4Y6JKA
-X KUBE-SVC-3VQ6B3MLH7E2SZT4
-X KUBE-SVC-7FAS7WLN46SI3LNQ
-X KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:15:45.291376 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:15:45.314465 20134 kubelet_node_status.go:76] Successfully registered node localhost
I1209 18:15:45.314985 20134 create_dockercfg_secrets.go:90] Updating service account pv-recycler-controller
I1209 18:15:45.315025 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:45.318675 20134 trace.go:61] Trace "Create /api/v1/namespaces/default/secrets" (started 2016-12-09 18:15:44.960371939 +0000 UTC):
[18.363µs] [18.363µs] About to convert to expected version
[29.702µs] [11.339µs] Conversion done
[188.78µs] [159.078µs] About to store object in database
[358.277028ms] [358.088248ms] END
I1209 18:15:45.319360 20134 create_dockercfg_secrets.go:158] Adding token secret default/default-token-t4356
I1209 18:15:45.319765 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:15:45.319816 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:15:45.319884 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:15:45.323316 20134 create_dockercfg_secrets.go:479] Token secret for service account default/builder is not populated yet
I1209 18:15:45.323358 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/builder, will retry
I1209 18:15:45.329199 20134 trace.go:61] Trace "Update /api/v1/namespaces/openshift-infra/secrets/job-controller-token-pf94k" (started 2016-12-09 18:15:44.959592271 +0000 UTC):
[36.379µs] [36.379µs] About to convert to expected version
[110.683µs] [74.304µs] Conversion done
[116.804µs] [6.121µs] About to store object in database
[369.495943ms] [369.379139ms] Object stored in database
[369.502591ms] [6.648µs] Self-link added
[369.577649ms] [75.058µs] END
I1209 18:15:45.356598 20134 manager.go:874] Added container: "/system.slice/docker-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769.scope" (aliases: [origin b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769], namespace: "docker")
I1209 18:15:45.357257 20134 handler.go:325] Added event &{/system.slice/docker-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769.scope 2016-12-09 18:15:44.988252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.357537 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-debug.mount: invalid container name
I1209 18:15:45.357803 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
I1209 18:15:45.358035 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
I1209 18:15:45.358068 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-debug.mount"
I1209 18:15:45.358477 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dbus.service: invalid container name
I1209 18:15:45.358725 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dbus.service"
I1209 18:15:45.358948 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dbus.service: /system.slice/dbus.service not handled by systemd handler
I1209 18:15:45.359163 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/dbus.service"
I1209 18:15:45.359385 20134 factory.go:111] Using factory "raw" for container "/system.slice/dbus.service"
I1209 18:15:45.359977 20134 manager.go:874] Added container: "/system.slice/dbus.service" (aliases: [], namespace: "")
I1209 18:15:45.360464 20134 handler.go:325] Added event &{/system.slice/dbus.service 2016-12-09 17:39:39.568252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.360526 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-config.mount: invalid container name
I1209 18:15:45.360875 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
I1209 18:15:45.360906 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
I1209 18:15:45.360928 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-config.mount"
I1209 18:15:45.361381 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-sysusers.service: invalid container name
I1209 18:15:45.361605 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-sysusers.service"
I1209 18:15:45.361838 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-sysusers.service: /system.slice/systemd-sysusers.service not handled by systemd handler
I1209 18:15:45.362079 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-sysusers.service"
I1209 18:15:45.362285 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-sysusers.service"
I1209 18:15:45.362886 20134 manager.go:874] Added container: "/system.slice/systemd-sysusers.service" (aliases: [], namespace: "")
I1209 18:15:45.363396 20134 handler.go:325] Added event &{/system.slice/systemd-sysusers.service 2016-12-09 17:39:39.577252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.363682 20134 factory.go:104] Error trying to work out if we can handle /system.slice/auditd.service: invalid container name
I1209 18:15:45.363924 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/auditd.service"
I1209 18:15:45.364141 20134 factory.go:104] Error trying to work out if we can handle /system.slice/auditd.service: /system.slice/auditd.service not handled by systemd handler
I1209 18:15:45.364366 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/auditd.service"
I1209 18:15:45.364585 20134 factory.go:111] Using factory "raw" for container "/system.slice/auditd.service"
I1209 18:15:45.365042 20134 manager.go:874] Added container: "/system.slice/auditd.service" (aliases: [], namespace: "")
I1209 18:15:45.365455 20134 handler.go:325] Added event &{/system.slice/auditd.service 2016-12-09 17:39:39.567252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.365711 20134 factory.go:104] Error trying to work out if we can handle /system.slice/crond.service: invalid container name
I1209 18:15:45.365922 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/crond.service"
I1209 18:15:45.365947 20134 factory.go:104] Error trying to work out if we can handle /system.slice/crond.service: /system.slice/crond.service not handled by systemd handler
I1209 18:15:45.365975 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/crond.service"
I1209 18:15:45.366503 20134 factory.go:111] Using factory "raw" for container "/system.slice/crond.service"
I1209 18:15:45.367002 20134 manager.go:874] Added container: "/system.slice/crond.service" (aliases: [], namespace: "")
I1209 18:15:45.367411 20134 handler.go:325] Added event &{/system.slice/crond.service 2016-12-09 17:39:39.568252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.367649 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sshd.service: invalid container name
I1209 18:15:45.367852 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sshd.service"
I1209 18:15:45.368049 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sshd.service: /system.slice/sshd.service not handled by systemd handler
I1209 18:15:45.368251 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/sshd.service"
I1209 18:15:45.368491 20134 factory.go:111] Using factory "raw" for container "/system.slice/sshd.service"
I1209 18:15:45.368946 20134 manager.go:874] Added container: "/system.slice/sshd.service" (aliases: [], namespace: "")
I1209 18:15:45.369336 20134 handler.go:325] Added event &{/system.slice/sshd.service 2016-12-09 17:39:39.570252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.369587 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-machine-id-commit.service: invalid container name
I1209 18:15:45.369796 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-machine-id-commit.service"
I1209 18:15:45.369999 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-machine-id-commit.service: /system.slice/systemd-machine-id-commit.service not handled by systemd handler
I1209 18:15:45.370193 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-machine-id-commit.service"
I1209 18:15:45.370417 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-machine-id-commit.service"
I1209 18:15:45.370878 20134 manager.go:874] Added container: "/system.slice/systemd-machine-id-commit.service" (aliases: [], namespace: "")
I1209 18:15:45.371307 20134 handler.go:325] Added event &{/system.slice/systemd-machine-id-commit.service 2016-12-09 17:39:39.576252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.371574 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-remount-fs.service: invalid container name
I1209 18:15:45.371777 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-remount-fs.service"
I1209 18:15:45.371802 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-remount-fs.service: /system.slice/systemd-remount-fs.service not handled by systemd handler
I1209 18:15:45.372104 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-remount-fs.service"
I1209 18:15:45.372139 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-remount-fs.service"
I1209 18:15:45.372782 20134 manager.go:874] Added container: "/system.slice/systemd-remount-fs.service" (aliases: [], namespace: "")
I1209 18:15:45.373176 20134 handler.go:325] Added event &{/system.slice/systemd-remount-fs.service 2016-12-09 17:39:39.576252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.373422 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-vconsole-setup.service: invalid container name
I1209 18:15:45.373638 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-vconsole-setup.service"
I1209 18:15:45.373845 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-vconsole-setup.service: /system.slice/systemd-vconsole-setup.service not handled by systemd handler
I1209 18:15:45.374044 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-vconsole-setup.service"
I1209 18:15:45.374238 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-vconsole-setup.service"
I1209 18:15:45.374755 20134 manager.go:874] Added container: "/system.slice/systemd-vconsole-setup.service" (aliases: [], namespace: "")
I1209 18:15:45.375169 20134 handler.go:325] Added event &{/system.slice/systemd-vconsole-setup.service 2016-12-09 17:39:39.578252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.385720 20134 container.go:407] Start housekeeping for container "/system.slice/docker-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769.scope"
I1209 18:15:45.386778 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:15:45.359991 20134 proxier.go:751] syncProxyRules took 141.639117ms
I1209 18:15:45.404115 20134 proxier.go:391] OnServiceUpdate took 185.75923ms for 0 services
I1209 18:15:45.404400 20134 container.go:407] Start housekeeping for container "/system.slice/dbus.service"
I1209 18:15:45.405304 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-sysusers.service"
I1209 18:15:45.406123 20134 container.go:407] Start housekeeping for container "/system.slice/auditd.service"
I1209 18:15:45.407032 20134 container.go:407] Start housekeeping for container "/system.slice/crond.service"
I1209 18:15:45.407795 20134 container.go:407] Start housekeeping for container "/system.slice/sshd.service"
I1209 18:15:45.408605 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-machine-id-commit.service"
I1209 18:15:45.409322 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-remount-fs.service"
I1209 18:15:45.410045 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-vconsole-setup.service"
I1209 18:15:45.414369 20134 config.go:256] Setting services (config.ServiceUpdate) {
Services: ([]api.Service) (len=1 cap=1) {
(api.Service) &TypeMeta{Kind:,APIVersion:,}
},
Op: (config.Operation) 0
}
I1209 18:15:45.414432 20134 config.go:208] Calling handler.OnServiceUpdate()
I1209 18:15:45.414467 20134 proxier.go:397] Received update notice: []
I1209 18:15:45.414527 20134 proxier.go:431] Adding new service "default/kubernetes:https" at 172.30.0.1:443/TCP
I1209 18:15:45.414676 20134 proxier.go:459] added serviceInfo(default/kubernetes:https): (*iptables.serviceInfo)(0xc82d7e6420)({
clusterIP: (net.IP) (len=16 cap=16) 172.30.0.1,
port: (int) 443,
protocol: (api.Protocol) (len=3) "TCP",
nodePort: (int) 0,
loadBalancerStatus: (api.LoadBalancerStatus) {
Ingress: ([]api.LoadBalancerIngress) {
}
},
sessionAffinityType: (api.ServiceAffinity) (len=8) "ClientIP",
stickyMaxAgeSeconds: (int) 180,
externalIPs: ([]string) <nil>,
loadBalancerSourceRanges: ([]string) <nil>,
onlyNodeLocalEndpoints: (bool) false,
healthCheckNodePort: (int) 0
})
I1209 18:15:45.414695 20134 proxier.go:431] Adding new service "default/kubernetes:dns" at 172.30.0.1:53/UDP
I1209 18:15:45.414758 20134 proxier.go:459] added serviceInfo(default/kubernetes:dns): (*iptables.serviceInfo)(0xc82d7e64d0)({
clusterIP: (net.IP) (len=16 cap=16) 172.30.0.1,
port: (int) 53,
protocol: (api.Protocol) (len=3) "UDP",
nodePort: (int) 0,
loadBalancerStatus: (api.LoadBalancerStatus) {
Ingress: ([]api.LoadBalancerIngress) {
}
},
sessionAffinityType: (api.ServiceAffinity) (len=8) "ClientIP",
stickyMaxAgeSeconds: (int) 180,
externalIPs: ([]string) <nil>,
loadBalancerSourceRanges: ([]string) <nil>,
onlyNodeLocalEndpoints: (bool) false,
healthCheckNodePort: (int) 0
})
I1209 18:15:45.414780 20134 proxier.go:431] Adding new service "default/kubernetes:dns-tcp" at 172.30.0.1:53/TCP
I1209 18:15:45.414841 20134 proxier.go:459] added serviceInfo(default/kubernetes:dns-tcp): (*iptables.serviceInfo)(0xc82d7e6580)({
clusterIP: (net.IP) (len=16 cap=16) 172.30.0.1,
port: (int) 53,
protocol: (api.Protocol) (len=3) "TCP",
nodePort: (int) 0,
loadBalancerStatus: (api.LoadBalancerStatus) {
Ingress: ([]api.LoadBalancerIngress) {
}
},
sessionAffinityType: (api.ServiceAffinity) (len=8) "ClientIP",
stickyMaxAgeSeconds: (int) 180,
externalIPs: ([]string) <nil>,
loadBalancerSourceRanges: ([]string) <nil>,
onlyNodeLocalEndpoints: (bool) false,
healthCheckNodePort: (int) 0
})
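
Each serviceInfo dump above records what the proxier needs in order to program iptables for one service port: the cluster IP and port, the protocol, and the ClientIP session affinity with its 180-second sticky window. A simplified stand-in struct, with field names copied from the dump rather than from kube-proxy's real type, could look like:

```go
// Simplified stand-in for the per-port record the proxier dumps above.
// Field names follow the spew output; this is not kube-proxy's actual type.
package main

import (
	"fmt"
	"net"
)

type serviceInfo struct {
	clusterIP                net.IP
	port                     int
	protocol                 string // "TCP" or "UDP"
	nodePort                 int
	sessionAffinityType      string // "ClientIP" or "None"
	stickyMaxAgeSeconds      int
	externalIPs              []string
	loadBalancerSourceRanges []string
	onlyNodeLocalEndpoints   bool
	healthCheckNodePort      int
}

func main() {
	https := serviceInfo{
		clusterIP:           net.ParseIP("172.30.0.1"),
		port:                443,
		protocol:            "TCP",
		sessionAffinityType: "ClientIP",
		stickyMaxAgeSeconds: 180,
	}
	fmt.Printf("default/kubernetes:https -> %s:%d/%s (affinity %s, %ds)\n",
		https.clusterIP, https.port, https.protocol,
		https.sessionAffinityType, https.stickyMaxAgeSeconds)
}
```
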
I1209 18:15:45.414860 20134 proxier.go:758] Syncing iptables rules
I1209 18:15:45.414878 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:15:45.426242 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:15:45.433989 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:15:45.440613 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:15:45.447073 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:15:45.452724 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:15:45.458668 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:15:45.464530 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:15:45.471566 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:15:45.475276 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/default-token-98z7m
I1209 18:15:45.475462 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "default-dockercfg-qrmem" for service account openshift-infra/default
I1209 18:15:45.481031 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:15:45.481052 20134 iptables.go:339] running iptables-restore [--noflush --counters]
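
The restored nat rules above follow kube-proxy's per-service chain layout: KUBE-SERVICES matches the cluster IP and port and jumps to a KUBE-SVC-* chain; that chain prefers a recently used KUBE-SEP-* endpoint chain (the "recent" match implements the 180-second ClientIP affinity) and otherwise just picks the endpoint; the endpoint chain marks traffic originating from the endpoint itself via KUBE-MARK-MASQ (masqueraded later in KUBE-POSTROUTING) and DNATs to 192.168.121.18. The sketch below assembles a comparable restore payload for a single TCP service with one endpoint; the chain names are placeholders rather than the proxier's generated identifiers:

```go
// Simplified reconstruction of an iptables-restore payload for one TCP
// service with a single endpoint and ClientIP affinity, mirroring the rules
// restored above. Chain names are placeholders; kube-proxy generates its own.
package main

import (
	"fmt"
	"strings"
)

func restorePayload(svcChain, sepChain, comment, clusterIP string, port int, endpoint string) string {
	var b strings.Builder
	b.WriteString("*nat\n")
	for _, c := range []string{"KUBE-SERVICES", "KUBE-NODEPORTS", "KUBE-POSTROUTING", "KUBE-MARK-MASQ", svcChain, sepChain} {
		fmt.Fprintf(&b, ":%s - [0:0]\n", c)
	}
	// Masquerade anything the mark chain tagged.
	b.WriteString(`-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE` + "\n")
	b.WriteString("-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000\n")
	// Cluster IP match -> per-service chain.
	fmt.Fprintf(&b, "-A KUBE-SERVICES -m comment --comment \"%s cluster IP\" -m tcp -p tcp -d %s/32 --dport %d -j %s\n",
		comment, clusterIP, port, svcChain)
	// ClientIP affinity: prefer a recently used endpoint chain, else fall through.
	fmt.Fprintf(&b, "-A %s -m comment --comment %s -m recent --name %s --rcheck --seconds 180 --reap -j %s\n",
		svcChain, comment, sepChain, sepChain)
	fmt.Fprintf(&b, "-A %s -m comment --comment %s -j %s\n", svcChain, comment, sepChain)
	// Endpoint chain: mark hairpin traffic, record the client, DNAT to the endpoint.
	host := strings.Split(endpoint, ":")[0]
	fmt.Fprintf(&b, "-A %s -m comment --comment %s -s %s/32 -j KUBE-MARK-MASQ\n", sepChain, comment, host)
	fmt.Fprintf(&b, "-A %s -m comment --comment %s -m recent --name %s --set -m tcp -p tcp -j DNAT --to-destination %s\n",
		sepChain, comment, sepChain, endpoint)
	b.WriteString("COMMIT\n")
	return b.String()
}

func main() {
	fmt.Print(restorePayload("KUBE-SVC-EXAMPLE0000000", "KUBE-SEP-EXAMPLE0000000",
		"default/kubernetes:https", "172.30.0.1", 443, "192.168.121.18:8443"))
}
```
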
I1209 18:15:45.495694 20134 create_dockercfg_secrets.go:479] Token secret for service account default/deployer is not populated yet
I1209 18:15:45.495730 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account default/deployer, will retry
I1209 18:15:45.497571 20134 factory.go:111] Using factory "docker" for container "/system.slice/var-lib-docker-containers-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769-shm.mount"
I1209 18:15:45.508142 20134 node.go:359] Starting DNS on 192.168.121.18:53
I1209 18:15:45.508377 20134 logs.go:41] skydns: ready for queries on cluster.local. for tcp://192.168.121.18:53 [rcache 0]
I1209 18:15:45.508389 20134 logs.go:41] skydns: ready for queries on cluster.local. for udp://192.168.121.18:53 [rcache 0]
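
SkyDNS is now answering for the cluster.local. zone on both TCP and UDP at 192.168.121.18:53. A small Go sketch that points a resolver at that listener and looks up a service name (the address and name are taken from this log; the lookup only succeeds against a running cluster):

```go
// Sketch: resolve a cluster-internal name against the node-local SkyDNS
// listener started above, bypassing the system resolver.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Always dial the cluster DNS listener instead of the system resolver.
			return d.DialContext(ctx, network, "192.168.121.18:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default.svc.cluster.local ->", addrs) // expect the cluster IP, 172.30.0.1
}
```
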
I1209 18:15:45.546218 20134 proxier.go:751] syncProxyRules took 131.355525ms
I1209 18:15:45.546242 20134 proxier.go:391] OnServiceUpdate took 131.72323ms for 1 services
I1209 18:15:45.558473 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:45.559652 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:15:45.560147 20134 interface.go:93] Interface eth0 is up
I1209 18:15:45.560470 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:15:45.560742 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:15:45.560976 20134 interface.go:114] IP found 192.168.121.18
I1209 18:15:45.561000 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:15:45.561344 20134 interface.go:254] Choosing IP 192.168.121.18
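
The interface.go lines above show how the node IP is chosen: find the interface the default route transits (eth0 here), confirm it is up, enumerate its addresses, and take the first valid IPv4 address (192.168.121.18). A minimal sketch of that last step using only the standard library, with the interface name hard-coded from the log rather than discovered from the route table:

```go
// Minimal sketch of node-IP selection as logged above: given the interface
// the default route transits, confirm it is up and return its first
// non-loopback IPv4 address. Route-table discovery is out of scope here.
package main

import (
	"fmt"
	"net"
)

func ipv4ForInterface(name string) (net.IP, error) {
	iface, err := net.InterfaceByName(name)
	if err != nil {
		return nil, err
	}
	if iface.Flags&net.FlagUp == 0 {
		return nil, fmt.Errorf("interface %q is down", name)
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return nil, err
	}
	for _, addr := range addrs {
		ipnet, ok := addr.(*net.IPNet)
		if !ok {
			continue
		}
		if ip4 := ipnet.IP.To4(); ip4 != nil && !ip4.IsLoopback() {
			return ip4, nil // e.g. 192.168.121.18 in the log above
		}
	}
	return nil, fmt.Errorf("no valid IPv4 address on %q", name)
}

func main() {
	ip, err := ipv4ForInterface("eth0")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("node IP:", ip)
}
```
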
I1209 18:15:45.564142 20134 manager.go:874] Added container: "/system.slice/var-lib-docker-containers-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769-shm.mount" (aliases: [origin b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769], namespace: "docker")
I1209 18:15:45.564392 20134 handler.go:325] Added event &{/system.slice/var-lib-docker-containers-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769-shm.mount 2016-12-09 18:15:44.996252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.564441 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-sysctl.service: invalid container name
I1209 18:15:45.564456 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-sysctl.service"
I1209 18:15:45.564472 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-sysctl.service: /system.slice/systemd-sysctl.service not handled by systemd handler
I1209 18:15:45.564483 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-sysctl.service"
I1209 18:15:45.564495 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-sysctl.service"
I1209 18:15:45.564764 20134 manager.go:874] Added container: "/system.slice/systemd-sysctl.service" (aliases: [], namespace: "")
I1209 18:15:45.564944 20134 handler.go:325] Added event &{/system.slice/systemd-sysctl.service 2016-12-09 17:39:39.577252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.564976 20134 factory.go:104] Error trying to work out if we can handle /system.slice/docker.service: invalid container name
I1209 18:15:45.566271 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/docker.service"
I1209 18:15:45.566299 20134 factory.go:104] Error trying to work out if we can handle /system.slice/docker.service: /system.slice/docker.service not handled by systemd handler
I1209 18:15:45.566311 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/docker.service"
I1209 18:15:45.566745 20134 factory.go:111] Using factory "raw" for container "/system.slice/docker.service"
I1209 18:15:45.567170 20134 manager.go:874] Added container: "/system.slice/docker.service" (aliases: [], namespace: "")
I1209 18:15:45.567555 20134 handler.go:325] Added event &{/system.slice/docker.service 2016-12-09 17:39:39.568252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.567599 20134 factory.go:104] Error trying to work out if we can handle /system.slice/fedora-readonly.service: invalid container name
I1209 18:15:45.567612 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/fedora-readonly.service"
I1209 18:15:45.567917 20134 factory.go:104] Error trying to work out if we can handle /system.slice/fedora-readonly.service: /system.slice/fedora-readonly.service not handled by systemd handler
I1209 18:15:45.567929 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/fedora-readonly.service"
I1209 18:15:45.568354 20134 factory.go:111] Using factory "raw" for container "/system.slice/fedora-readonly.service"
I1209 18:15:45.568600 20134 manager.go:874] Added container: "/system.slice/fedora-readonly.service" (aliases: [], namespace: "")
I1209 18:15:45.568805 20134 handler.go:325] Added event &{/system.slice/fedora-readonly.service 2016-12-09 17:39:39.569252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.568847 20134 factory.go:104] Error trying to work out if we can handle /system.slice/ldconfig.service: invalid container name
I1209 18:15:45.568859 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/ldconfig.service"
I1209 18:15:45.568872 20134 factory.go:104] Error trying to work out if we can handle /system.slice/ldconfig.service: /system.slice/ldconfig.service not handled by systemd handler
I1209 18:15:45.568882 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/ldconfig.service"
I1209 18:15:45.568894 20134 factory.go:111] Using factory "raw" for container "/system.slice/ldconfig.service"
I1209 18:15:45.569122 20134 manager.go:874] Added container: "/system.slice/ldconfig.service" (aliases: [], namespace: "")
I1209 18:15:45.569313 20134 handler.go:325] Added event &{/system.slice/ldconfig.service 2016-12-09 17:39:39.569252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.569358 20134 factory.go:104] Error trying to work out if we can handle /system.slice/lvm2-monitor.service: invalid container name
I1209 18:15:45.569371 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/lvm2-monitor.service"
I1209 18:15:45.569385 20134 factory.go:104] Error trying to work out if we can handle /system.slice/lvm2-monitor.service: /system.slice/lvm2-monitor.service not handled by systemd handler
I1209 18:15:45.569395 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/lvm2-monitor.service"
I1209 18:15:45.569407 20134 factory.go:111] Using factory "raw" for container "/system.slice/lvm2-monitor.service"
I1209 18:15:45.569619 20134 manager.go:874] Added container: "/system.slice/lvm2-monitor.service" (aliases: [], namespace: "")
I1209 18:15:45.569817 20134 handler.go:325] Added event &{/system.slice/lvm2-monitor.service 2016-12-09 17:39:39.570252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.569849 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-user-1001.mount: invalid container name
I1209 18:15:45.569860 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-user-1001.mount"
I1209 18:15:45.569873 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-user-1001.mount", but ignoring.
I1209 18:15:45.569885 20134 manager.go:843] ignoring container "/system.slice/run-user-1001.mount"
I1209 18:15:45.569902 20134 factory.go:104] Error trying to work out if we can handle /system.slice/system-serial\x2dgetty.slice: invalid container name
I1209 18:15:45.569913 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/system-serial\\x2dgetty.slice"
I1209 18:15:45.569926 20134 factory.go:104] Error trying to work out if we can handle /system.slice/system-serial\x2dgetty.slice: /system.slice/system-serial\x2dgetty.slice not handled by systemd handler
I1209 18:15:45.569935 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/system-serial\\x2dgetty.slice"
I1209 18:15:45.569947 20134 factory.go:111] Using factory "raw" for container "/system.slice/system-serial\\x2dgetty.slice"
I1209 18:15:45.570211 20134 manager.go:874] Added container: "/system.slice/system-serial\\x2dgetty.slice" (aliases: [], namespace: "")
I1209 18:15:45.570405 20134 handler.go:325] Added event &{/system.slice/system-serial\x2dgetty.slice 2016-12-09 17:39:39.571252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.570438 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-machined.service: invalid container name
I1209 18:15:45.570452 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-machined.service"
I1209 18:15:45.570465 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-machined.service: /system.slice/systemd-machined.service not handled by systemd handler
I1209 18:15:45.570475 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-machined.service"
I1209 18:15:45.570487 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-machined.service"
I1209 18:15:45.570714 20134 manager.go:874] Added container: "/system.slice/systemd-machined.service" (aliases: [], namespace: "")
I1209 18:15:45.570925 20134 handler.go:325] Added event &{/system.slice/systemd-machined.service 2016-12-09 17:39:39.576252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.575289 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-tmpfiles-setup-dev.service: invalid container name
I1209 18:15:45.575314 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-tmpfiles-setup-dev.service"
I1209 18:15:45.575350 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-tmpfiles-setup-dev.service: /system.slice/systemd-tmpfiles-setup-dev.service not handled by systemd handler
I1209 18:15:45.575362 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-tmpfiles-setup-dev.service"
I1209 18:15:45.575378 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-tmpfiles-setup-dev.service"
I1209 18:15:45.575682 20134 manager.go:874] Added container: "/system.slice/systemd-tmpfiles-setup-dev.service" (aliases: [], namespace: "")
I1209 18:15:45.576902 20134 handler.go:325] Added event &{/system.slice/systemd-tmpfiles-setup-dev.service 2016-12-09 17:39:39.577252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.577236 20134 factory.go:104] Error trying to work out if we can handle /user.slice: invalid container name
I1209 18:15:45.577262 20134 factory.go:115] Factory "docker" was unable to handle container "/user.slice"
I1209 18:15:45.577302 20134 factory.go:104] Error trying to work out if we can handle /user.slice: /user.slice not handled by systemd handler
I1209 18:15:45.578007 20134 factory.go:115] Factory "systemd" was unable to handle container "/user.slice"
I1209 18:15:45.578311 20134 factory.go:111] Using factory "raw" for container "/user.slice"
I1209 18:15:45.578752 20134 manager.go:874] Added container: "/user.slice" (aliases: [], namespace: "")
I1209 18:15:45.578975 20134 handler.go:325] Added event &{/user.slice 2016-12-09 17:39:39.579252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.579374 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-journald.service: invalid container name
I1209 18:15:45.579593 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-journald.service"
I1209 18:15:45.579796 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-journald.service: /system.slice/systemd-journald.service not handled by systemd handler
I1209 18:15:45.579997 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-journald.service"
I1209 18:15:45.580191 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-journald.service"
I1209 18:15:45.580652 20134 manager.go:874] Added container: "/system.slice/systemd-journald.service" (aliases: [], namespace: "")
I1209 18:15:45.581049 20134 handler.go:325] Added event &{/system.slice/systemd-journald.service 2016-12-09 17:39:39.572252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.581304 20134 factory.go:104] Error trying to work out if we can handle /system.slice/-.mount: invalid container name
I1209 18:15:45.581525 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/-.mount"
I1209 18:15:45.581731 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
I1209 18:15:45.581934 20134 manager.go:843] ignoring container "/system.slice/-.mount"
I1209 18:15:45.582148 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-hugepages.mount: invalid container name
I1209 18:15:45.582354 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
I1209 18:15:45.582565 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
I1209 18:15:45.582764 20134 manager.go:843] ignoring container "/system.slice/dev-hugepages.mount"
I1209 18:15:45.582967 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dracut-shutdown.service: invalid container name
I1209 18:15:45.583179 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dracut-shutdown.service"
I1209 18:15:45.583398 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dracut-shutdown.service: /system.slice/dracut-shutdown.service not handled by systemd handler
I1209 18:15:45.583623 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/dracut-shutdown.service"
I1209 18:15:45.583822 20134 factory.go:111] Using factory "raw" for container "/system.slice/dracut-shutdown.service"
I1209 18:15:45.584232 20134 manager.go:874] Added container: "/system.slice/dracut-shutdown.service" (aliases: [], namespace: "")
I1209 18:15:45.584662 20134 handler.go:325] Added event &{/system.slice/dracut-shutdown.service 2016-12-09 17:39:39.569252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.584719 20134 factory.go:104] Error trying to work out if we can handle /system.slice/kmod-static-nodes.service: invalid container name
I1209 18:15:45.585057 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/kmod-static-nodes.service"
I1209 18:15:45.585272 20134 factory.go:104] Error trying to work out if we can handle /system.slice/kmod-static-nodes.service: /system.slice/kmod-static-nodes.service not handled by systemd handler
I1209 18:15:45.585299 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/kmod-static-nodes.service"
I1209 18:15:45.585374 20134 container.go:407] Start housekeeping for container "/system.slice/dracut-shutdown.service"
I1209 18:15:45.586764 20134 container.go:407] Start housekeeping for container "/system.slice/var-lib-docker-containers-b585d69a82cf6e0cb5909a7f97b69a2983f3e9a8f5b29b298e036d0e33c55769-shm.mount"
I1209 18:15:45.587663 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:15:45.602616 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-sysctl.service"
I1209 18:15:45.603723 20134 container.go:407] Start housekeeping for container "/system.slice/docker.service"
I1209 18:15:45.604625 20134 container.go:407] Start housekeeping for container "/system.slice/fedora-readonly.service"
I1209 18:15:45.605417 20134 container.go:407] Start housekeeping for container "/system.slice/ldconfig.service"
I1209 18:15:45.606192 20134 container.go:407] Start housekeeping for container "/system.slice/lvm2-monitor.service"
I1209 18:15:45.607018 20134 container.go:407] Start housekeeping for container "/system.slice/system-serial\\x2dgetty.slice"
I1209 18:15:45.607805 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-machined.service"
I1209 18:15:45.608572 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-tmpfiles-setup-dev.service"
I1209 18:15:45.579094 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/builder-token-dq2n2
I1209 18:15:45.609613 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "builder-dockercfg-8862u" for service account openshift-infra/builder
I1209 18:15:45.610037 20134 container.go:407] Start housekeeping for container "/user.slice"
I1209 18:15:45.610877 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-journald.service"
I1209 18:15:45.585315 20134 factory.go:111] Using factory "raw" for container "/system.slice/kmod-static-nodes.service"
I1209 18:15:45.634044 20134 manager.go:874] Added container: "/system.slice/kmod-static-nodes.service" (aliases: [], namespace: "")
I1209 18:15:45.634545 20134 handler.go:325] Added event &{/system.slice/kmod-static-nodes.service 2016-12-09 17:39:39.569252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.634851 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-docker-netns-default.mount: invalid container name
I1209 18:15:45.635092 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
I1209 18:15:45.635315 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
I1209 18:15:45.635570 20134 manager.go:843] ignoring container "/system.slice/run-docker-netns-default.mount"
I1209 18:15:45.635797 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-hwdb-update.service: invalid container name
I1209 18:15:45.636043 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-hwdb-update.service"
I1209 18:15:45.636301 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-hwdb-update.service: /system.slice/systemd-hwdb-update.service not handled by systemd handler
I1209 18:15:45.636345 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-hwdb-update.service"
I1209 18:15:45.636723 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-hwdb-update.service"
I1209 18:15:45.637228 20134 manager.go:874] Added container: "/system.slice/systemd-hwdb-update.service" (aliases: [], namespace: "")
I1209 18:15:45.637760 20134 handler.go:325] Added event &{/system.slice/systemd-hwdb-update.service 2016-12-09 17:39:39.571252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.638037 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-journal-flush.service: invalid container name
I1209 18:15:45.638277 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-journal-flush.service"
I1209 18:15:45.638520 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-journal-flush.service: /system.slice/systemd-journal-flush.service not handled by systemd handler
I1209 18:15:45.638773 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-journal-flush.service"
I1209 18:15:45.638994 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-journal-flush.service"
I1209 18:15:45.639548 20134 manager.go:874] Added container: "/system.slice/systemd-journal-flush.service" (aliases: [], namespace: "")
I1209 18:15:45.639964 20134 handler.go:325] Added event &{/system.slice/systemd-journal-flush.service 2016-12-09 17:39:39.571252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.640204 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-logind.service: invalid container name
I1209 18:15:45.640431 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-logind.service"
I1209 18:15:45.640780 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-logind.service: /system.slice/systemd-logind.service not handled by systemd handler
I1209 18:15:45.641129 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-logind.service"
I1209 18:15:45.641157 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-logind.service"
I1209 18:15:45.642030 20134 manager.go:874] Added container: "/system.slice/systemd-logind.service" (aliases: [], namespace: "")
I1209 18:15:45.642571 20134 handler.go:325] Added event &{/system.slice/systemd-logind.service 2016-12-09 17:39:39.576252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.642624 20134 factory.go:104] Error trying to work out if we can handle /init.scope: invalid container name
I1209 18:15:45.643020 20134 factory.go:115] Factory "docker" was unable to handle container "/init.scope"
I1209 18:15:45.643054 20134 factory.go:104] Error trying to work out if we can handle /init.scope: /init.scope not handled by systemd handler
I1209 18:15:45.643376 20134 factory.go:115] Factory "systemd" was unable to handle container "/init.scope"
I1209 18:15:45.643402 20134 factory.go:111] Using factory "raw" for container "/init.scope"
I1209 18:15:45.643927 20134 manager.go:874] Added container: "/init.scope" (aliases: [], namespace: "")
I1209 18:15:45.644296 20134 handler.go:325] Added event &{/init.scope 2016-12-09 17:39:39.566252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.644560 20134 factory.go:104] Error trying to work out if we can handle /system.slice: invalid container name
I1209 18:15:45.644583 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice"
I1209 18:15:45.644902 20134 factory.go:104] Error trying to work out if we can handle /system.slice: /system.slice not handled by systemd handler
I1209 18:15:45.644922 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice"
I1209 18:15:45.645239 20134 factory.go:111] Using factory "raw" for container "/system.slice"
I1209 18:15:45.645720 20134 manager.go:874] Added container: "/system.slice" (aliases: [], namespace: "")
I1209 18:15:45.646148 20134 handler.go:325] Added event &{/system.slice 2016-12-09 17:39:39.567252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.646405 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-mqueue.mount: invalid container name
I1209 18:15:45.646621 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
I1209 18:15:45.646822 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
I1209 18:15:45.647026 20134 manager.go:843] ignoring container "/system.slice/dev-mqueue.mount"
I1209 18:15:45.647240 20134 factory.go:104] Error trying to work out if we can handle /system.slice/docker-containerd.service: invalid container name
I1209 18:15:45.647454 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/docker-containerd.service"
I1209 18:15:45.647672 20134 factory.go:104] Error trying to work out if we can handle /system.slice/docker-containerd.service: /system.slice/docker-containerd.service not handled by systemd handler
I1209 18:15:45.647864 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/docker-containerd.service"
I1209 18:15:45.648058 20134 factory.go:111] Using factory "raw" for container "/system.slice/docker-containerd.service"
I1209 18:15:45.648545 20134 manager.go:874] Added container: "/system.slice/docker-containerd.service" (aliases: [], namespace: "")
I1209 18:15:45.648934 20134 handler.go:325] Added event &{/system.slice/docker-containerd.service 2016-12-09 17:39:39.568252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.649163 20134 factory.go:104] Error trying to work out if we can handle /system.slice/lvm2-lvmetad.service: invalid container name
I1209 18:15:45.649378 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/lvm2-lvmetad.service"
I1209 18:15:45.649589 20134 factory.go:104] Error trying to work out if we can handle /system.slice/lvm2-lvmetad.service: /system.slice/lvm2-lvmetad.service not handled by systemd handler
I1209 18:15:45.649609 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/lvm2-lvmetad.service"
I1209 18:15:45.649923 20134 factory.go:111] Using factory "raw" for container "/system.slice/lvm2-lvmetad.service"
I1209 18:15:45.650388 20134 manager.go:874] Added container: "/system.slice/lvm2-lvmetad.service" (aliases: [], namespace: "")
I1209 18:15:45.650817 20134 handler.go:325] Added event &{/system.slice/lvm2-lvmetad.service 2016-12-09 17:39:39.569252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.651064 20134 factory.go:104] Error trying to work out if we can handle /system.slice/system-sshd\x2dkeygen.slice: invalid container name
I1209 18:15:45.651265 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/system-sshd\\x2dkeygen.slice"
I1209 18:15:45.651486 20134 factory.go:104] Error trying to work out if we can handle /system.slice/system-sshd\x2dkeygen.slice: /system.slice/system-sshd\x2dkeygen.slice not handled by systemd handler
I1209 18:15:45.651702 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/system-sshd\\x2dkeygen.slice"
I1209 18:15:45.651902 20134 factory.go:111] Using factory "raw" for container "/system.slice/system-sshd\\x2dkeygen.slice"
I1209 18:15:45.652374 20134 manager.go:874] Added container: "/system.slice/system-sshd\\x2dkeygen.slice" (aliases: [], namespace: "")
I1209 18:15:45.652778 20134 handler.go:325] Added event &{/system.slice/system-sshd\x2dkeygen.slice 2016-12-09 17:39:39.571252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.653016 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-journal-catalog-update.service: invalid container name
I1209 18:15:45.653223 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-journal-catalog-update.service"
I1209 18:15:45.653448 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-journal-catalog-update.service: /system.slice/systemd-journal-catalog-update.service not handled by systemd handler
I1209 18:15:45.653662 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-journal-catalog-update.service"
I1209 18:15:45.653863 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-journal-catalog-update.service"
I1209 18:15:45.654303 20134 manager.go:874] Added container: "/system.slice/systemd-journal-catalog-update.service" (aliases: [], namespace: "")
I1209 18:15:45.654759 20134 handler.go:325] Added event &{/system.slice/systemd-journal-catalog-update.service 2016-12-09 17:39:39.571252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.654990 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-udevd.service: invalid container name
I1209 18:15:45.655189 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-udevd.service"
I1209 18:15:45.655413 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-udevd.service: /system.slice/systemd-udevd.service not handled by systemd handler
I1209 18:15:45.655620 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-udevd.service"
I1209 18:15:45.655643 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-udevd.service"
I1209 18:15:45.657050 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-journal-catalog-update.service"
I1209 18:15:45.674185 20134 manager.go:874] Added container: "/system.slice/systemd-udevd.service" (aliases: [], namespace: "")
I1209 18:15:45.674789 20134 handler.go:325] Added event &{/system.slice/systemd-udevd.service 2016-12-09 17:39:39.577252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.675047 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-update-done.service: invalid container name
I1209 18:15:45.675277 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-update-done.service"
I1209 18:15:45.675314 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-update-done.service: /system.slice/systemd-update-done.service not handled by systemd handler
I1209 18:15:45.675723 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-update-done.service"
I1209 18:15:45.675757 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-update-done.service"
I1209 18:15:45.676455 20134 manager.go:874] Added container: "/system.slice/systemd-update-done.service" (aliases: [], namespace: "")
I1209 18:15:45.676939 20134 handler.go:325] Added event &{/system.slice/systemd-update-done.service 2016-12-09 17:39:39.578252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.677194 20134 factory.go:104] Error trying to work out if we can handle /system.slice/chronyd.service: invalid container name
I1209 18:15:45.677437 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/chronyd.service"
I1209 18:15:45.677690 20134 factory.go:104] Error trying to work out if we can handle /system.slice/chronyd.service: /system.slice/chronyd.service not handled by systemd handler
I1209 18:15:45.677719 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/chronyd.service"
I1209 18:15:45.678096 20134 factory.go:111] Using factory "raw" for container "/system.slice/chronyd.service"
I1209 18:15:45.678634 20134 manager.go:874] Added container: "/system.slice/chronyd.service" (aliases: [], namespace: "")
I1209 18:15:45.679115 20134 handler.go:325] Added event &{/system.slice/chronyd.service 2016-12-09 17:39:39.567252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.679377 20134 factory.go:104] Error trying to work out if we can handle /system.slice/system-getty.slice: invalid container name
I1209 18:15:45.679618 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/system-getty.slice"
I1209 18:15:45.679845 20134 factory.go:104] Error trying to work out if we can handle /system.slice/system-getty.slice: /system.slice/system-getty.slice not handled by systemd handler
I1209 18:15:45.679864 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/system-getty.slice"
I1209 18:15:45.680189 20134 factory.go:111] Using factory "raw" for container "/system.slice/system-getty.slice"
I1209 18:15:45.680751 20134 manager.go:874] Added container: "/system.slice/system-getty.slice" (aliases: [], namespace: "")
I1209 18:15:45.681256 20134 handler.go:325] Added event &{/system.slice/system-getty.slice 2016-12-09 17:39:39.571252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.681520 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-update-utmp.service: invalid container name
I1209 18:15:45.681731 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-update-utmp.service"
I1209 18:15:45.681967 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-update-utmp.service: /system.slice/systemd-update-utmp.service not handled by systemd handler
I1209 18:15:45.682169 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-update-utmp.service"
I1209 18:15:45.682423 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-update-utmp.service"
I1209 18:15:45.682836 20134 container.go:407] Start housekeeping for container "/system.slice/kmod-static-nodes.service"
I1209 18:15:45.683125 20134 manager.go:874] Added container: "/system.slice/systemd-update-utmp.service" (aliases: [], namespace: "")
I1209 18:15:45.683494 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-hwdb-update.service"
I1209 18:15:45.683755 20134 handler.go:325] Added event &{/system.slice/systemd-update-utmp.service 2016-12-09 17:39:39.578252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.683979 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-user-sessions.service: invalid container name
I1209 18:15:45.684264 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-user-sessions.service"
I1209 18:15:45.684617 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-user-sessions.service: /system.slice/systemd-user-sessions.service not handled by systemd handler
I1209 18:15:45.684643 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-user-sessions.service"
I1209 18:15:45.684677 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-user-sessions.service"
I1209 18:15:45.685725 20134 manager.go:874] Added container: "/system.slice/systemd-user-sessions.service" (aliases: [], namespace: "")
I1209 18:15:45.686268 20134 handler.go:325] Added event &{/system.slice/systemd-user-sessions.service 2016-12-09 17:39:39.578252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.693442 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-udevd.service"
I1209 18:15:45.694012 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-update-done.service"
I1209 18:15:45.694638 20134 container.go:407] Start housekeeping for container "/system.slice/chronyd.service"
I1209 18:15:45.695187 20134 container.go:407] Start housekeeping for container "/system.slice/system-getty.slice"
I1209 18:15:45.695690 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-update-utmp.service"
I1209 18:15:45.684010 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-journal-flush.service"
I1209 18:15:45.684026 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-logind.service"
I1209 18:15:45.684040 20134 container.go:407] Start housekeeping for container "/init.scope"
I1209 18:15:45.684071 20134 container.go:407] Start housekeeping for container "/system.slice"
I1209 18:15:45.684081 20134 container.go:407] Start housekeeping for container "/system.slice/docker-containerd.service"
I1209 18:15:45.684089 20134 container.go:407] Start housekeeping for container "/system.slice/lvm2-lvmetad.service"
I1209 18:15:45.684098 20134 container.go:407] Start housekeeping for container "/system.slice/system-sshd\\x2dkeygen.slice"
I1209 18:15:45.699914 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-user-sessions.service"
I1209 18:15:45.705039 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount: error inspecting container: Error: No such container: 31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f
I1209 18:15:45.705069 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:15:45.705101 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount", but ignoring.
I1209 18:15:45.705475 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:15:45.705851 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper.mount: invalid container name
I1209 18:15:45.706066 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:15:45.706263 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
I1209 18:15:45.706484 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:15:45.706524 20134 factory.go:104] Error trying to work out if we can handle /system.slice/network.service: invalid container name
I1209 18:15:45.706833 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/network.service"
I1209 18:15:45.706860 20134 factory.go:104] Error trying to work out if we can handle /system.slice/network.service: /system.slice/network.service not handled by systemd handler
I1209 18:15:45.707360 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/network.service"
I1209 18:15:45.707375 20134 factory.go:111] Using factory "raw" for container "/system.slice/network.service"
I1209 18:15:45.707671 20134 manager.go:874] Added container: "/system.slice/network.service" (aliases: [], namespace: "")
I1209 18:15:45.707864 20134 handler.go:325] Added event &{/system.slice/network.service 2016-12-09 17:39:39.570252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.707895 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-fsck-root.service: invalid container name
I1209 18:15:45.707900 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-fsck-root.service"
I1209 18:15:45.707906 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-fsck-root.service: /system.slice/systemd-fsck-root.service not handled by systemd handler
I1209 18:15:45.707911 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-fsck-root.service"
I1209 18:15:45.707930 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-fsck-root.service"
I1209 18:15:45.708174 20134 manager.go:874] Added container: "/system.slice/systemd-fsck-root.service" (aliases: [], namespace: "")
I1209 18:15:45.708370 20134 handler.go:325] Added event &{/system.slice/systemd-fsck-root.service 2016-12-09 17:39:39.571252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.708390 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-random-seed.service: invalid container name
I1209 18:15:45.708394 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-random-seed.service"
I1209 18:15:45.708399 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-random-seed.service: /system.slice/systemd-random-seed.service not handled by systemd handler
I1209 18:15:45.708403 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-random-seed.service"
I1209 18:15:45.708407 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-random-seed.service"
I1209 18:15:45.708604 20134 manager.go:874] Added container: "/system.slice/systemd-random-seed.service" (aliases: [], namespace: "")
I1209 18:15:45.708744 20134 handler.go:325] Added event &{/system.slice/systemd-random-seed.service 2016-12-09 17:39:39.576252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.708884 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-tmpfiles-setup.service: invalid container name
I1209 18:15:45.708893 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-tmpfiles-setup.service"
I1209 18:15:45.708899 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-tmpfiles-setup.service: /system.slice/systemd-tmpfiles-setup.service not handled by systemd handler
I1209 18:15:45.708924 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-tmpfiles-setup.service"
I1209 18:15:45.708931 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-tmpfiles-setup.service"
I1209 18:15:45.709168 20134 manager.go:874] Added container: "/system.slice/systemd-tmpfiles-setup.service" (aliases: [], namespace: "")
I1209 18:15:45.709409 20134 handler.go:325] Added event &{/system.slice/systemd-tmpfiles-setup.service 2016-12-09 17:39:39.577252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.709471 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-udev-trigger.service: invalid container name
I1209 18:15:45.709480 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-udev-trigger.service"
I1209 18:15:45.709525 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-udev-trigger.service: /system.slice/systemd-udev-trigger.service not handled by systemd handler
I1209 18:15:45.709531 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-udev-trigger.service"
I1209 18:15:45.709552 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-udev-trigger.service"
I1209 18:15:45.709832 20134 manager.go:874] Added container: "/system.slice/systemd-udev-trigger.service" (aliases: [], namespace: "")
I1209 18:15:45.710080 20134 handler.go:325] Added event &{/system.slice/systemd-udev-trigger.service 2016-12-09 17:39:39.577252844 +0000 UTC containerCreation {<nil>}}
I1209 18:15:45.710097 20134 manager.go:290] Recovery completed
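The factory.go and manager.go lines above (and the similar runs that follow) record cAdvisor's container-discovery pass: each cgroup under /system.slice is offered to its handler factories in a fixed order ("docker", then "systemd", then "raw"); the first factory that accepts it wins, mount units are accepted by the systemd factory but deliberately ignored, and ordinary service cgroups fall through to the "raw" factory. The sketch below reproduces only that first-match selection pattern; the names and signatures are illustrative stand-ins, not cAdvisor's actual interfaces.

package main

import (
	"errors"
	"fmt"
	"strings"
)

// handlerFactory is a hypothetical stand-in for a cAdvisor container handler
// factory; only the try-factories-in-order behaviour seen in the log above is
// modelled here.
type handlerFactory struct {
	name string
	// canHandle reports (accept, ignore, error) for a cgroup path.
	canHandle func(cgroupPath string) (bool, bool, error)
}

var factories = []handlerFactory{
	{"docker", func(p string) (bool, bool, error) {
		// The docker factory only recognises docker-managed cgroups; everything
		// else is rejected, as in "invalid container name" above.
		return false, false, errors.New("invalid container name")
	}},
	{"systemd", func(p string) (bool, bool, error) {
		// Mount units are recognised but deliberately ignored.
		if strings.HasSuffix(p, ".mount") {
			return true, true, nil
		}
		return false, false, fmt.Errorf("%s not handled by systemd handler", p)
	}},
	{"raw", func(p string) (bool, bool, error) {
		// The raw factory accepts anything that reaches it.
		return true, false, nil
	}},
}

func pickFactory(cgroupPath string) {
	for _, f := range factories {
		accept, ignore, err := f.canHandle(cgroupPath)
		if err != nil {
			fmt.Printf("Factory %q was unable to handle container %q: %v\n", f.name, cgroupPath, err)
			continue
		}
		if ignore {
			fmt.Printf("Factory %q can handle container %q, but ignoring.\n", f.name, cgroupPath)
			return
		}
		if accept {
			fmt.Printf("Using factory %q for container %q\n", f.name, cgroupPath)
			return
		}
	}
}

func main() {
	pickFactory("/system.slice/dev-hugepages.mount")          // accepted by systemd, then ignored
	pickFactory("/system.slice/systemd-udev-trigger.service") // falls through to raw
}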
I1209 18:15:45.748649 20134 create_dockercfg_secrets.go:156] Updating token secret openshift/builder-token-ekyde
I1209 18:15:45.748698 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "builder-dockercfg-ai4xx" for service account openshift/builder
I1209 18:15:45.777308 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-hugepages.mount: invalid container name
I1209 18:15:45.777339 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
I1209 18:15:45.777347 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
I1209 18:15:45.777353 20134 manager.go:843] ignoring container "/system.slice/dev-hugepages.mount"
I1209 18:15:45.777360 20134 factory.go:104] Error trying to work out if we can handle /system.slice/-.mount: invalid container name
I1209 18:15:45.777365 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/-.mount"
I1209 18:15:45.777369 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
I1209 18:15:45.777374 20134 manager.go:843] ignoring container "/system.slice/-.mount"
I1209 18:15:45.777383 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-config.mount: invalid container name
I1209 18:15:45.777387 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
I1209 18:15:45.777419 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
I1209 18:15:45.777424 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-config.mount"
I1209 18:15:45.777601 20134 container.go:407] Start housekeeping for container "/system.slice/network.service"
I1209 18:15:45.778211 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-fsck-root.service"
I1209 18:15:45.778815 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-random-seed.service"
I1209 18:15:45.779313 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-tmpfiles-setup.service"
I1209 18:15:45.779892 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-udev-trigger.service"
I1209 18:15:45.782651 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:45.792816 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount: error inspecting container: Error: No such container: 31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f
I1209 18:15:45.792840 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:15:45.792934 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount", but ignoring.
I1209 18:15:45.792948 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:15:45.792964 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-docker-netns-default.mount: invalid container name
I1209 18:15:45.792969 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
I1209 18:15:45.792973 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
I1209 18:15:45.792978 20134 manager.go:843] ignoring container "/system.slice/run-docker-netns-default.mount"
I1209 18:15:45.792986 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-user-1001.mount: invalid container name
I1209 18:15:45.792989 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-user-1001.mount"
I1209 18:15:45.792994 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-user-1001.mount", but ignoring.
I1209 18:15:45.793005 20134 manager.go:843] ignoring container "/system.slice/run-user-1001.mount"
I1209 18:15:45.793013 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-debug.mount: invalid container name
I1209 18:15:45.793016 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
I1209 18:15:45.793152 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
I1209 18:15:45.793158 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-debug.mount"
I1209 18:15:45.793170 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper.mount: invalid container name
I1209 18:15:45.793174 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:15:45.793178 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
I1209 18:15:45.793183 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:15:45.793189 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-mqueue.mount: invalid container name
I1209 18:15:45.793193 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
I1209 18:15:45.793197 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
I1209 18:15:45.793201 20134 manager.go:843] ignoring container "/system.slice/dev-mqueue.mount"
I1209 18:15:45.796689 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:15:45.834311 20134 create_dockercfg_secrets.go:156] Updating token secret openshift/default-token-a76x6
I1209 18:15:45.834378 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "default-dockercfg-z9uim" for service account openshift/default
I1209 18:15:45.847025 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:45.852380 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:15:45.852771 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:15:45.859908 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/unidling-controller-token-qbosi
I1209 18:15:45.867496 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:45.880450 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/replicaset-controller-token-lz4hd
I1209 18:15:45.881414 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:45.890709 20134 endpoints_controller.go:327] Waiting for pods controller to sync, requeuing service default/kubernetes
I1209 18:15:45.890729 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (100.499486ms)
I1209 18:15:45.891715 20134 controller.go:113] Found 0 jobs
I1209 18:15:45.891727 20134 controller.go:116] Found 0 groups
I1209 18:15:45.892357 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/daemonset-controller-token-ri53o
I1209 18:15:46.000267 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/disruption-controller-token-dxmig
I1209 18:15:46.000387 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/deployment-controller-token-sp05w
I1209 18:15:46.001204 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "disruption-controller-dockercfg-2wl86" for service account openshift-infra/disruption-controller
I1209 18:15:46.010894 20134 endpoints_controller.go:327] Waiting for pods controller to sync, requeuing service default/kubernetes
I1209 18:15:46.010925 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (120.097323ms)
I1209 18:15:46.021814 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/deploymentconfig-controller-token-axnu8
I1209 18:15:46.037846 20134 create_dockercfg_secrets.go:158] Adding token secret openshift-infra/replication-controller-token-f4env
I1209 18:15:46.044749 20134 create_dockercfg_secrets.go:158] Adding token secret default/builder-token-cpjqi
I1209 18:15:46.052870 20134 create_dockercfg_secrets.go:90] Updating service account disruption-controller
I1209 18:15:46.057812 20134 create_dockercfg_secrets.go:158] Adding token secret default/deployer-token-gavl7
I1209 18:15:46.078874 20134 create_dockercfg_secrets.go:156] Updating token secret openshift/deployer-token-lru8f
I1209 18:15:46.079022 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "deployer-dockercfg-op22t" for service account openshift/deployer
I1209 18:15:46.084706 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/job-controller-token-pf94k
I1209 18:15:46.084752 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "job-controller-dockercfg-vddan" for service account openshift-infra/job-controller
I1209 18:15:46.092433 20134 create_dockercfg_secrets.go:156] Updating token secret default/default-token-t4356
I1209 18:15:46.092734 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "default-dockercfg-9mzqx" for service account default/default
I1209 18:15:46.096802 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/unidling-controller-token-qbosi
I1209 18:15:46.096884 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "unidling-controller-dockercfg-5is9l" for service account openshift-infra/unidling-controller
I1209 18:15:46.099488 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/replicaset-controller-token-lz4hd
I1209 18:15:46.099572 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "replicaset-controller-dockercfg-ysy0t" for service account openshift-infra/replicaset-controller
I1209 18:15:46.106941 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:46.107266 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/daemonset-controller-token-ri53o
I1209 18:15:46.107788 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "daemonset-controller-dockercfg-d7364" for service account openshift-infra/daemonset-controller
I1209 18:15:46.111027 20134 endpoints_controller.go:327] Waiting for pods controller to sync, requeuing service default/kubernetes
I1209 18:15:46.111054 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (100.10955ms)
I1209 18:15:46.111492 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/deployment-controller-token-sp05w
I1209 18:15:46.113013 20134 create_dockercfg_secrets.go:90] Updating service account job-controller
I1209 18:15:46.113683 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/host-path"
I1209 18:15:46.114011 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/nfs"
I1209 18:15:46.114041 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/aws-ebs"
I1209 18:15:46.114366 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/gce-pd"
I1209 18:15:46.114384 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/cinder"
I1209 18:15:46.114391 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/vsphere-volume"
I1209 18:15:46.114398 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/glusterfs"
I1209 18:15:46.114405 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/rbd"
I1209 18:15:46.114432 20134 pv_controller_base.go:446] starting PersistentVolumeController
I1209 18:15:46.114776 20134 reflector.go:211] Starting reflector *api.Namespace (5m0s) from pkg/controller/namespace/namespace_controller.go:201
I1209 18:15:46.114810 20134 reflector.go:249] Listing and watching *api.Namespace from pkg/controller/namespace/namespace_controller.go:201
I1209 18:15:46.115236 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "deployment-controller-dockercfg-sle7e" for service account openshift-infra/deployment-controller
I1209 18:15:46.119845 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/deploymentconfig-controller-token-axnu8
I1209 18:15:46.125248 20134 create_dockercfg_secrets.go:156] Updating token secret openshift-infra/replication-controller-token-f4env
I1209 18:15:46.130386 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:15:46.130545 20134 create_dockercfg_secrets.go:156] Updating token secret default/builder-token-cpjqi
I1209 18:15:46.130851 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "deploymentconfig-controller-dockercfg-52kz6" for service account openshift-infra/deploymentconfig-controller
I1209 18:15:46.139916 20134 create_dockercfg_secrets.go:156] Updating token secret default/deployer-token-gavl7
I1209 18:15:46.142034 20134 create_dockercfg_secrets.go:90] Updating service account unidling-controller
I1209 18:15:46.142155 20134 namespace_controller.go:195] Finished syncing namespace "openshift" (144ns)
I1209 18:15:46.142194 20134 namespace_controller.go:195] Finished syncing namespace "default" (45ns)
I1209 18:15:46.142201 20134 namespace_controller.go:195] Finished syncing namespace "kube-system" (43ns)
I1209 18:15:46.142207 20134 namespace_controller.go:195] Finished syncing namespace "openshift-infra" (44ns)
I1209 18:15:46.143519 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "replication-controller-dockercfg-zh4wr" for service account openshift-infra/replication-controller
I1209 18:15:46.151874 20134 create_dockercfg_secrets.go:90] Updating service account replicaset-controller
I1209 18:15:46.155187 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "builder-dockercfg-ym647" for service account default/builder
I1209 18:15:46.165366 20134 create_dockercfg_secrets.go:90] Updating service account daemonset-controller
I1209 18:15:46.165669 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "deployer-dockercfg-25i7r" for service account default/deployer
I1209 18:15:46.174522 20134 create_dockercfg_secrets.go:90] Updating service account deployment-controller
I1209 18:15:46.185771 20134 create_dockercfg_secrets.go:90] Updating service account deploymentconfig-controller
I1209 18:15:46.189803 20134 create_dockercfg_secrets.go:90] Updating service account replication-controller
I1209 18:15:46.191809 20134 pv_controller_base.go:204] controller initialized
I1209 18:15:46.191972 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/aws-ebs"
I1209 18:15:46.191986 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/gce-pd"
I1209 18:15:46.191993 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/cinder"
I1209 18:15:46.192000 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/vsphere-volume"
I1209 18:15:46.192007 20134 plugins.go:350] Loaded volume plugin "kubernetes.io/azure-disk"
I1209 18:15:46.192096 20134 master.go:445] Service controller will not start - no cloud provider configured
I1209 18:15:46.192339 20134 start_master.go:701] Started Kubernetes Controllers
I1209 18:15:46.192383 20134 admission.go:29] Initialized build defaults plugin with config: api.BuildDefaultsConfig{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, GitHTTPProxy:"", GitHTTPSProxy:"", GitNoProxy:"", Env:[]api.EnvVar(nil), SourceStrategyDefaults:(*api.SourceStrategyDefaultsConfig)(nil), ImageLabels:[]api.ImageLabel(nil), NodeSelector:map[string]string(nil), Annotations:map[string]string(nil), Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}}
I1209 18:15:46.192419 20134 admission.go:29] Initialized build overrides plugin with config: api.BuildOverridesConfig{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ForcePull:false, ImageLabels:[]api.ImageLabel(nil), NodeSelector:map[string]string(nil), Annotations:map[string]string(nil)}
I1209 18:15:46.192663 20134 reflector.go:211] Starting reflector *api.PersistentVolume (15s) from pkg/controller/volume/persistentvolume/pv_controller_base.go:448
I1209 18:15:46.192684 20134 reflector.go:249] Listing and watching *api.PersistentVolume from pkg/controller/volume/persistentvolume/pv_controller_base.go:448
I1209 18:15:46.192994 20134 reflector.go:211] Starting reflector *api.PersistentVolumeClaim (15s) from pkg/controller/volume/persistentvolume/pv_controller_base.go:449
I1209 18:15:46.193020 20134 reflector.go:249] Listing and watching *api.PersistentVolumeClaim from pkg/controller/volume/persistentvolume/pv_controller_base.go:449
I1209 18:15:46.193370 20134 reflector.go:211] Starting reflector *storage.StorageClass (15s) from pkg/controller/volume/persistentvolume/pv_controller_base.go:153
I1209 18:15:46.193392 20134 reflector.go:249] Listing and watching *storage.StorageClass from pkg/controller/volume/persistentvolume/pv_controller_base.go:153
I1209 18:15:46.193678 20134 attach_detach_controller.go:197] Starting Attach Detach Controller
I1209 18:15:46.193772 20134 pet_set.go:145] Starting petset controller
I1209 18:15:46.194069 20134 reflector.go:211] Starting reflector *api.Pod (20h36m40.468845613s) from pkg/controller/podgc/gc_controller.go:89
I1209 18:15:46.194093 20134 reflector.go:249] Listing and watching *api.Pod from pkg/controller/podgc/gc_controller.go:89
I1209 18:15:46.194562 20134 reflector.go:211] Starting reflector *apps.PetSet (30s) from pkg/controller/petset/pet_set.go:147
I1209 18:15:46.194595 20134 reflector.go:249] Listing and watching *apps.PetSet from pkg/controller/petset/pet_set.go:147
I1209 18:15:46.197512 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:15:46.202576 20134 reflector.go:211] Starting reflector *api.Build (2m0s) from github.com/openshift/origin/pkg/build/controller/factory/factory.go:90
I1209 18:15:46.202858 20134 reflector.go:211] Starting reflector *api.Build (5m0s) from github.com/openshift/origin/pkg/build/controller/factory/factory.go:145
I1209 18:15:46.202911 20134 reflector.go:211] Starting reflector *api.Build (2m0s) from github.com/openshift/origin/pkg/build/controller/factory/factory.go:206
I1209 18:15:46.203195 20134 reflector.go:211] Starting reflector *api.Pod (2m0s) from github.com/openshift/origin/pkg/build/controller/factory/factory.go:209
I1209 18:15:46.203255 20134 reflector.go:211] Starting reflector *api.Pod (5m0s) from github.com/openshift/origin/pkg/build/controller/factory/factory.go:261
I1209 18:15:46.203616 20134 reflector.go:211] Starting reflector *api.BuildConfig (2m0s) from github.com/openshift/origin/pkg/build/controller/factory/factory.go:350
I1209 18:15:46.203960 20134 reflector.go:249] Listing and watching *api.Build from github.com/openshift/origin/pkg/build/controller/factory/factory.go:90
I1209 18:15:46.210484 20134 reflector.go:249] Listing and watching *api.Build from github.com/openshift/origin/pkg/build/controller/factory/factory.go:145
I1209 18:15:46.210745 20134 factory.go:483] Checking for deleted builds
I1209 18:15:46.211186 20134 reflector.go:249] Listing and watching *api.Build from github.com/openshift/origin/pkg/build/controller/factory/factory.go:206
I1209 18:15:46.211493 20134 reflector.go:249] Listing and watching *api.Pod from github.com/openshift/origin/pkg/build/controller/factory/factory.go:209
I1209 18:15:46.211955 20134 reflector.go:249] Listing and watching *api.Pod from github.com/openshift/origin/pkg/build/controller/factory/factory.go:261
I1209 18:15:46.212162 20134 factory.go:567] Checking for deleted build pods
I1209 18:15:46.212459 20134 reflector.go:249] Listing and watching *api.BuildConfig from github.com/openshift/origin/pkg/build/controller/factory/factory.go:350
I1209 18:15:46.212773 20134 reflector.go:211] Starting reflector *api.ImageStream (2m0s) from github.com/openshift/origin/pkg/build/controller/factory/factory.go:302
I1209 18:15:46.212965 20134 factory.go:329] Waiting for the bc caches to sync before starting the imagechange buildconfig controller worker
I1209 18:15:46.213143 20134 reflector.go:249] Listing and watching *api.ImageStream from github.com/openshift/origin/pkg/build/controller/factory/factory.go:302
I1209 18:15:46.213739 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:15:46.215454 20134 endpoints_controller.go:327] Waiting for pods controller to sync, requeuing service default/kubernetes
I1209 18:15:46.215470 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (104.061494ms)
I1209 18:15:46.215488 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:46.234763 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:46.246471 20134 reflector.go:200] Starting reflector *api.ImageStream (10m0s) from github.com/openshift/origin/pkg/image/controller/factory.go:40
I1209 18:15:46.246544 20134 reflector.go:200] Starting reflector *api.Namespace (1m0s) from github.com/openshift/origin/pkg/project/controller/factory.go:36
E1209 18:15:46.247202 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.247245 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.247554 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.247599 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.247888 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.247933 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.248376 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.248420 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.248706 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.248743 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.249021 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.249052 20134 util.go:45] Metric for replenishment_controller already registered
E1209 18:15:46.249333 20134 util.go:45] Metric for replenishment_controller already registered
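The repeated "Metric for replenishment_controller already registered" errors above look like duplicate metric registration: the replenishment controller registers one rate-limiter metric per watched resource, and every registration after the first collides with the existing one. Assuming the underlying registry is Prometheus's client_golang (an assumption about which metrics library sits behind util.go here), the collision surfaces as an AlreadyRegisteredError, as in this small sketch:

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// An illustrative counter; the name is made up for this sketch.
	c := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "replenishment_controller_dummy_total",
		Help: "illustrative counter for the duplicate-registration example",
	})

	for i := 0; i < 3; i++ {
		err := prometheus.Register(c)
		if err == nil {
			fmt.Println("registered")
			continue
		}
		if _, ok := err.(prometheus.AlreadyRegisteredError); ok {
			// Same situation as "Metric for replenishment_controller already registered".
			fmt.Println("Metric already registered:", err)
			continue
		}
		fmt.Println("registration failed:", err)
	}
}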
I1209 18:15:46.249752 20134 factory.go:91] Waiting for the rc and pod caches to sync before starting the deployment controller workers
I1209 18:15:46.249980 20134 factory.go:101] Waiting for the dc, rc, and pod caches to sync before starting the deployment config controller workers
I1209 18:15:46.250235 20134 factory.go:83] Waiting for the dc and rc caches to sync before starting the trigger controller workers
I1209 18:15:46.250460 20134 reflector.go:249] Listing and watching *api.ImageStream from github.com/openshift/origin/pkg/image/controller/factory.go:40
I1209 18:15:46.251022 20134 scheduler.go:74] DEBUG: scheduler: queue (0):
[]controller.bucket{controller.bucket{}, controller.bucket{}, controller.bucket{}, controller.bucket{}, controller.bucket{}}
I1209 18:15:46.251072 20134 scheduler.go:79] DEBUG: scheduler: position: 1 5
I1209 18:15:46.251094 20134 scheduler.go:56] DEBUG: scheduler: waiting for limit
I1209 18:15:46.251796 20134 reflector.go:249] Listing and watching *api.Namespace from github.com/openshift/origin/pkg/project/controller/factory.go:36
I1209 18:15:46.252389 20134 resource_quota_controller.go:154] Resource quota controller queued all resource quota for full calculation of usage
I1209 18:15:46.252453 20134 reconciliation_controller.go:116] Starting the cluster quota reconciliation controller workers
I1209 18:15:46.253050 20134 reflector.go:211] Starting reflector *api.ResourceQuota (5m0s) from pkg/controller/resourcequota/resource_quota_controller.go:230
I1209 18:15:46.253097 20134 reflector.go:249] Listing and watching *api.ResourceQuota from pkg/controller/resourcequota/resource_quota_controller.go:230
I1209 18:15:46.253880 20134 reflector.go:211] Starting reflector *api.Service (12h0m0s) from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.253941 20134 reflector.go:249] Listing and watching *api.Service from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.254770 20134 reflector.go:211] Starting reflector *api.ReplicationController (12h0m0s) from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.254818 20134 reflector.go:249] Listing and watching *api.ReplicationController from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.255734 20134 reflector.go:211] Starting reflector *api.PersistentVolumeClaim (12h0m0s) from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.255780 20134 reflector.go:249] Listing and watching *api.PersistentVolumeClaim from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.256397 20134 reflector.go:211] Starting reflector *api.Secret (12h0m0s) from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.256441 20134 reflector.go:249] Listing and watching *api.Secret from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.257082 20134 reflector.go:211] Starting reflector *api.ConfigMap (12h0m0s) from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.257126 20134 reflector.go:249] Listing and watching *api.ConfigMap from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.257735 20134 reflector.go:211] Starting reflector *api.ImageStream (12h0m0s) from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.257780 20134 reflector.go:249] Listing and watching *api.ImageStream from pkg/controller/resourcequota/resource_quota_controller.go:233
I1209 18:15:46.258453 20134 reflector.go:211] Starting reflector *api.Service (12h0m0s) from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.258496 20134 reflector.go:249] Listing and watching *api.Service from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.259137 20134 reflector.go:211] Starting reflector *api.ReplicationController (12h0m0s) from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.259183 20134 reflector.go:249] Listing and watching *api.ReplicationController from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.259913 20134 reflector.go:211] Starting reflector *api.PersistentVolumeClaim (12h0m0s) from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.259965 20134 reflector.go:249] Listing and watching *api.PersistentVolumeClaim from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.260640 20134 reflector.go:211] Starting reflector *api.Secret (12h0m0s) from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.260686 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.261393 20134 reflector.go:211] Starting reflector *api.ConfigMap (12h0m0s) from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.261612 20134 reflector.go:249] Listing and watching *api.ConfigMap from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.262143 20134 reflector.go:211] Starting reflector *api.ImageStream (12h0m0s) from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.262192 20134 reflector.go:249] Listing and watching *api.ImageStream from github.com/openshift/origin/pkg/quota/controller/clusterquotareconciliation/reconciliation_controller.go:120
I1209 18:15:46.256050 20134 scheduler.go:74] DEBUG: scheduler: queue (1):
[]controller.bucket{controller.bucket{}, controller.bucket{}, controller.bucket{}, controller.bucket{}, controller.bucket{}}
I1209 18:15:46.262788 20134 scheduler.go:79] DEBUG: scheduler: position: 2 5
I1209 18:15:46.262809 20134 scheduler.go:56] DEBUG: scheduler: waiting for limit
I1209 18:15:46.358435 20134 factory.go:329] Waiting for the bc caches to sync before starting the imagechange buildconfig controller worker
I1209 18:15:46.358660 20134 endpoints_controller.go:327] Waiting for pods controller to sync, requeuing service default/kubernetes
I1209 18:15:46.358680 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (143.192238ms)
I1209 18:15:46.376637 20134 factory.go:83] Waiting for the dc and rc caches to sync before starting the trigger controller workers
I1209 18:15:46.384425 20134 factory.go:91] Waiting for the rc and pod caches to sync before starting the deployment controller workers
I1209 18:15:46.384665 20134 factory.go:101] Waiting for the dc, rc, and pod caches to sync before starting the deployment config controller workers
I1209 18:15:46.396915 20134 secret_updating_controller.go:108] starting service signing cert update controller
I1209 18:15:46.397095 20134 reflector.go:211] Starting reflector *api.Service (20m0s) from github.com/openshift/origin/pkg/service/controller/servingcert/secret_updating_controller.go:109
I1209 18:15:46.397122 20134 reflector.go:249] Listing and watching *api.Service from github.com/openshift/origin/pkg/service/controller/servingcert/secret_updating_controller.go:109
I1209 18:15:46.397478 20134 reflector.go:211] Starting reflector *api.Secret (20m0s) from github.com/openshift/origin/pkg/service/controller/servingcert/secret_updating_controller.go:110
I1209 18:15:46.397503 20134 reflector.go:249] Listing and watching *api.Secret from github.com/openshift/origin/pkg/service/controller/servingcert/secret_updating_controller.go:110
I1209 18:15:46.400513 20134 reflector.go:211] Starting reflector *api.Service (2m0s) from github.com/openshift/origin/pkg/service/controller/servingcert/secret_creating_controller.go:118
I1209 18:15:46.400567 20134 reflector.go:249] Listing and watching *api.Service from github.com/openshift/origin/pkg/service/controller/servingcert/secret_creating_controller.go:118
I1209 18:15:46.410454 20134 reflector.go:211] Starting reflector *api.Event (2h0m0s) from github.com/openshift/origin/pkg/unidling/controller/controller.go:196
I1209 18:15:46.410791 20134 reflector.go:249] Listing and watching *api.Event from github.com/openshift/origin/pkg/unidling/controller/controller.go:196
I1209 18:15:46.420024 20134 secret_creating_controller.go:98] Adding service kubernetes
I1209 18:15:46.422939 20134 start_master.go:740] Started Origin Controllers
I1209 18:15:46.423346 20134 reflector.go:211] Starting reflector *api.PersistentVolumeClaim (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.423549 20134 reflector.go:249] Listing and watching *api.PersistentVolumeClaim from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.424618 20134 controller.go:169] Waiting for the initial sync to be completed
I1209 18:15:46.425002 20134 reflector.go:211] Starting reflector *api.Service (10m0s) from github.com/openshift/origin/pkg/service/controller/ingressip/controller.go:167
I1209 18:15:46.425213 20134 reflector.go:249] Listing and watching *api.Service from github.com/openshift/origin/pkg/service/controller/ingressip/controller.go:167
I1209 18:15:46.425755 20134 reflector.go:211] Starting reflector *api.Pod (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.426016 20134 reflector.go:249] Listing and watching *api.Pod from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.426508 20134 reflector.go:211] Starting reflector *api.PersistentVolume (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.426557 20134 reflector.go:249] Listing and watching *api.PersistentVolume from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.427167 20134 reflector.go:211] Starting reflector *api.DeploymentConfig (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.427414 20134 reflector.go:249] Listing and watching *api.DeploymentConfig from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.427860 20134 reflector.go:211] Starting reflector *api.ImageStream (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.428056 20134 reflector.go:249] Listing and watching *api.ImageStream from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.428568 20134 reflector.go:211] Starting reflector *api.BuildConfig (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.428788 20134 reflector.go:249] Listing and watching *api.BuildConfig from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.429272 20134 reflector.go:211] Starting reflector *api.ReplicationController (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.429508 20134 reflector.go:249] Listing and watching *api.ReplicationController from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.429982 20134 reflector.go:211] Starting reflector *api.Node (10m0s) from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.430028 20134 reflector.go:249] Listing and watching *api.Node from github.com/openshift/origin/pkg/controller/shared/shared_informer.go:93
I1209 18:15:46.440387 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:15:46.448981 20134 controller.go:107] Adding service default/kubernetes
I1209 18:15:46.462404 20134 endpoints_controller.go:327] Waiting for pods controller to sync, requeuing service default/kubernetes
I1209 18:15:46.462463 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (103.766031ms)
I1209 18:15:46.462670 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (3.57µs)
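The "Waiting for pods controller to sync, requeuing service default/kubernetes" lines scattered above show the endpoints controller declining to compute endpoints until its pod informer cache has finished its initial list; each attempt simply requeues the service, and once the cache is populated the final sync above completes in a few microseconds. Below is a toy version of that gate-and-requeue pattern; the real controller uses client-go informers and a rate-limited workqueue, and everything here is a simplified stand-in.

package main

import (
	"fmt"
	"time"
)

// toyQueue is a simplified stand-in for client-go's workqueue.
type toyQueue struct{ items chan string }

func (q *toyQueue) add(key string) { q.items <- key }
func (q *toyQueue) addAfter(key string, d time.Duration) {
	time.AfterFunc(d, func() { q.items <- key })
}

func main() {
	// Pretend the pod cache finishes its initial LIST after 300ms.
	cacheReadyAt := time.Now().Add(300 * time.Millisecond)
	podsSynced := func() bool { return time.Now().After(cacheReadyAt) }

	q := &toyQueue{items: make(chan string, 8)}
	q.add("default/kubernetes")

	for {
		key := <-q.items
		start := time.Now()
		if !podsSynced() {
			// Mirrors the requeue-until-synced behaviour in the log above.
			fmt.Printf("Waiting for pods controller to sync, requeuing service %s\n", key)
			q.addAfter(key, 100*time.Millisecond)
			fmt.Printf("Finished syncing service %q endpoints. (%v)\n", key, time.Since(start))
			continue
		}
		fmt.Printf("Finished syncing service %q endpoints. (%v)\n", key, time.Since(start))
		return
	}
}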
I1209 18:15:46.497096 20134 secret_updating_controller.go:147] caches populated
I1209 18:15:46.525063 20134 controller.go:212] Processing initial sync
I1209 18:15:46.525127 20134 controller.go:287] Completed processing initial sync
I1209 18:15:46.525150 20134 controller.go:182] Starting normal worker
I1209 18:15:46.535893 20134 shared_informer.go:107] caches populated
I1209 18:15:46.538002 20134 nodecontroller.go:513] NodeController observed a new Node: "localhost"
I1209 18:15:46.538030 20134 controller_utils.go:268] Recording Registered Node localhost in NodeController event message for node localhost
I1209 18:15:46.538055 20134 nodecontroller.go:523] Initilizing eviction metric for zone:
I1209 18:15:46.538792 20134 event.go:217] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"80a3351b-be3b-11e6-8665-525400560f2f", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node localhost event: Registered Node localhost in NodeController
W1209 18:15:46.540254 20134 nodecontroller.go:783] Missing timestamp for Node localhost. Assuming now as a timestamp.
I1209 18:15:46.540348 20134 nodecontroller.go:668] NodeController detected that all Nodes are not-Ready. Entering master disruption mode.
I1209 18:15:47.235107 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:47.237635 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:48.237978 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:48.241478 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:49.241861 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:49.244236 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:49.693466 20134 kubelet.go:2293] SyncLoop (ADD, "api"): ""
I1209 18:15:49.693694 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:15:49.696608 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:49.698569 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:49.699580 20134 kubelet_volumes.go:161] Orphaned pod "9194052f-be36-11e6-8171-525400560f2f" found, removing
I1209 18:15:49.701850 20134 kubelet.go:2332] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"9194052f-be36-11e6-8171-525400560f2f", Type:"ContainerDied", Data:"5122ba25e59b3e0aa0dae770964e9747c9e49fc15f1f6d260713914d3190a104"}
I1209 18:15:49.702072 20134 kubelet.go:2332] SyncLoop (PLEG): ignore irrelevant event: &pleg.PodLifecycleEvent{ID:"9194052f-be36-11e6-8171-525400560f2f", Type:"ContainerDied", Data:"955f3ff60dee1a6e78eca0486c2e4f580321c66e34d5075f518699918d5c0b41"}
I1209 18:15:50.244471 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:50.246695 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:50.693394 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:15:50.694869 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:51.246886 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:51.249637 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:52.249934 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:52.253197 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:52.693399 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:15:52.695456 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:52.696798 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:53.253756 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:53.256680 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:54.257034 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:54.259844 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:54.693375 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:15:54.696171 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:54.697695 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:55.260673 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:55.263249 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:55.822463 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:15:55.822686 20134 interface.go:93] Interface eth0 is up
I1209 18:15:55.823502 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:15:55.823578 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:15:55.823610 20134 interface.go:114] IP found 192.168.121.18
I1209 18:15:55.823634 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:15:55.824384 20134 interface.go:254] Choosing IP 192.168.121.18
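The interface.go lines above are the kubelet working out which host IP to advertise: it finds the interface the default route transits (eth0 here), confirms it is up, and takes its first valid IPv4 address. A rough standard-library equivalent is sketched below; it omits the routing-table lookup the real code performs and simply takes the first global-unicast IPv4 address on an interface that is up and not loopback.

package main

import (
	"fmt"
	"net"
)

// chooseIP picks the first global-unicast IPv4 address on an up, non-loopback
// interface. The kubelet additionally prefers the interface carrying the
// default route; that step is left out of this sketch.
func chooseIP() (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if iface.Flags&net.FlagUp == 0 || iface.Flags&net.FlagLoopback != 0 {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}
		for _, addr := range addrs {
			ipnet, ok := addr.(*net.IPNet)
			if !ok {
				continue
			}
			if ip := ipnet.IP.To4(); ip != nil && ip.IsGlobalUnicast() {
				fmt.Printf("valid IPv4 address for interface %q found as %s\n", iface.Name, ip)
				return ip, nil
			}
		}
	}
	return nil, fmt.Errorf("no suitable IPv4 address found")
}

func main() {
	ip, err := chooseIP()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Choosing IP", ip)
}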
I1209 18:15:55.872873 20134 kubelet_node_status.go:377] Recording NodeReady event message for node localhost
I1209 18:15:55.874112 20134 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'NodeReady' Node localhost status is now: NodeReady
I1209 18:15:55.883317 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:15:55.889005 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:15:55.899832 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:15:55.905575 20134 controller.go:113] Found 0 jobs
I1209 18:15:55.905601 20134 controller.go:116] Found 0 groups
I1209 18:15:55.948684 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:15:56.263617 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:56.265925 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:56.545471 20134 nodecontroller.go:809] ReadyCondition for Node localhost transitioned from False to &{Ready True 2016-12-09 18:15:55 +0000 UTC 2016-12-09 18:15:55 +0000 UTC KubeletReady kubelet is posting ready status}
I1209 18:15:56.545592 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:15:45 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:15:45 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:15:45 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:False LastHeartbeatTime:2016-12-09 18:15:45 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletNotReady Message:container runtime is down}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:15:55 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:15:55 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:15:55 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:15:55 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:15:56.547180 20134 nodecontroller.go:685] NodeController detected that some Nodes are Ready. Exiting master disruption mode.
I1209 18:15:56.693270 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:15:56.694925 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:57.266741 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:57.269230 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:58.269889 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:58.273397 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:58.693276 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:15:58.695589 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:58.697067 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:15:59.069088 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:15:59.273823 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:15:59.277137 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:00.196823 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:16:00.196907 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:16:00.222003 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:00.222049 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:16:00.244096 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:00.244148 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:16:00.244161 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:16:00.256395 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:00.256421 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:16:00.272177 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:16:00.272210 20134 thin_pool_watcher.go:77] thin_ls(1481307360) took 75.402814ms
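The thin_pool_watcher error just above is cAdvisor trying to measure per-device usage of the docker devicemapper thin pool by running thin_ls against a reserved metadata snapshot. Exit status 127 with empty output usually means the command could not be found or executed at all (a missing binary or a missing shared library), which would also explain the earlier "unable to get fs usage from thin pool for device 16: no cached value" messages. thin_ls ships with the thin-provisioning-tools (the device-mapper-persistent-data package on CentOS/Fedora; only recent versions include thin_ls). A minimal check, assuming the failure really is just a missing binary, that the tool is resolvable on the PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// cAdvisor runs: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES <metadata device>
	// Before expecting that to succeed, confirm the binary can be resolved at all.
	path, err := exec.LookPath("thin_ls")
	if err != nil {
		fmt.Println("thin_ls not found in PATH:", err)
		fmt.Println("on CentOS/Fedora it is provided by a recent device-mapper-persistent-data")
		return
	}
	fmt.Println("thin_ls found at", path)
}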
I1209 18:16:00.277707 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:00.279603 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:00.693398 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:00.695722 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:00.697084 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:01.192987 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:16:01.193303 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:16:01.193583 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:16:01.279870 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:01.282249 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:02.282961 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:02.285819 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:02.693355 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:02.695276 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:03.286848 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:03.289259 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:04.164571 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:16:04.289851 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:04.292797 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:04.695161 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:04.696934 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:04.698785 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:05.293633 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:05.296520 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:05.894190 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:16:05.895320 20134 interface.go:93] Interface eth0 is up
I1209 18:16:05.895931 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:16:05.896527 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:16:05.896610 20134 interface.go:114] IP found 192.168.121.18
I1209 18:16:05.897576 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:16:05.897660 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:16:05.912834 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:16:05.924524 20134 controller.go:113] Found 0 jobs
I1209 18:16:05.924599 20134 controller.go:116] Found 0 groups
I1209 18:16:05.952411 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:05.952467 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:05.972062 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:16:06.014616 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:16:06.296891 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:06.299720 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:06.555363 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:15:55 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:15:55 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:15:55 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:15:55 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:05 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:05 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:05 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:05 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
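The two NodeStatus dumps above differ only in the LastHeartbeatTime of each condition; capacity, allocatable, addresses and node info are identical, so the node controller simply refreshes its observed timestamp for localhost. As a minimal Go sketch of that "did anything besides the heartbeats change?" comparison, using simplified hypothetical types rather than the controller's real ones:

package main

import (
	"fmt"
	"time"
)

// Condition is a stripped-down stand-in for a node condition as printed in
// the log above; only the fields needed for the comparison are kept.
type Condition struct {
	Type               string
	Status             string
	LastHeartbeatTime  time.Time
	LastTransitionTime time.Time
	Reason             string
}

// sameExceptHeartbeats reports whether two condition lists are identical
// apart from their LastHeartbeatTime fields -- the case in which the node
// controller only "updates the timestamp".
func sameExceptHeartbeats(prev, cur []Condition) bool {
	if len(prev) != len(cur) {
		return false
	}
	for i := range prev {
		a, b := prev[i], cur[i]
		if a.Type != b.Type || a.Status != b.Status || a.Reason != b.Reason ||
			!a.LastTransitionTime.Equal(b.LastTransitionTime) {
			return false
		}
	}
	return true
}

func main() {
	t0 := time.Date(2016, 12, 9, 18, 15, 55, 0, time.UTC)
	t1 := time.Date(2016, 12, 9, 18, 16, 5, 0, time.UTC)
	prev := []Condition{{Type: "Ready", Status: "True", LastHeartbeatTime: t0, LastTransitionTime: t0, Reason: "KubeletReady"}}
	cur := []Condition{{Type: "Ready", Status: "True", LastHeartbeatTime: t1, LastTransitionTime: t0, Reason: "KubeletReady"}}
	fmt.Println("only heartbeats changed:", sameExceptHeartbeats(prev, cur)) // true
}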
I1209 18:16:06.693420 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:06.695789 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:07.300102 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:07.303054 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:08.303984 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:08.306465 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:08.693314 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:08.694884 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:08.696074 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:09.306748 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:09.308769 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:10.309127 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:10.311512 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:10.693298 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:10.695774 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:10.697045 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:11.311777 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:11.314006 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:12.314543 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:12.319661 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:12.693381 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:12.695561 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:12.697294 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:13.320084 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:13.322383 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:14.322698 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:14.324943 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:14.693272 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:14.695926 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:14.840149 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:16:14.847709 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:16:14.858181 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:16:14.859198 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:16:14.860209 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:16:14.860787 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:16:14.861245 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:16:14.861872 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:16:14.862378 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (7.763µs)
I1209 18:16:14.862450 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:16:15.169126 20134 proxier.go:758] Syncing iptables rules
I1209 18:16:15.169425 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:16:15.186441 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:16:15.202456 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:16:15.217822 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:16:15.231393 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:16:15.248186 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:16:15.263869 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:16:15.272525 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:16:15.272739 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:16:15.291094 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:16:15.307269 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:15.307545 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:16:15.313478 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:16:15.327963 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:15.328095 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:16:15.328178 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:16:15.334781 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:16:15.334818 20134 iptables.go:339] running iptables-restore [--noflush --counters]
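Everything from "*filter" down to the final COMMIT above is a single buffer that the proxier hands to iptables-restore in one call ("--noflush" keeps chains not mentioned in the dump, "--counters" preserves packet/byte counters), instead of issuing one iptables command per rule. A hedged Go sketch of that pattern, illustrative only and not the proxier's actual implementation:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// restoreRules feeds a pre-built rule dump to iptables-restore on stdin,
// mirroring the "Restoring iptables rules: ..." followed by
// "running iptables-restore [--noflush --counters]" pair in the log.
func restoreRules(rules string) error {
	cmd := exec.Command("iptables-restore", "--noflush", "--counters")
	cmd.Stdin = bytes.NewBufferString(rules)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("iptables-restore failed: %v, output: %s", err, out)
	}
	return nil
}

func main() {
	// A tiny self-contained dump in the same format as the log above.
	rules := "*filter\n:KUBE-SERVICES - [0:0]\nCOMMIT\n"
	if err := restoreRules(rules); err != nil {
		fmt.Println(err) // needs root and iptables installed to succeed
	}
}

Feeding the whole table in one restore lets the log time the pass as a single unit a few lines below ("syncProxyRules took 201.664432ms").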
I1209 18:16:15.350732 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:15.351573 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:15.351588 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
I1209 18:16:15.370448 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:15.370789 20134 proxier.go:751] syncProxyRules took 201.664432ms
I1209 18:16:15.370828 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
E1209 18:16:15.377646 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:16:15.377734 20134 thin_pool_watcher.go:77] thin_ls(1481307375) took 105.224402ms
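Exit status 127 conventionally means a command could not be found or executed, so the likely cause of the error above is that the thin_ls binary (shipped with the device-mapper-persistent-data tools on Fedora/CentOS) is missing or not executable for this process; with no thin_ls data cached, the recurring "no cached value for usage of device 16" messages are consistent with that. A small, hedged pre-flight check in Go (an illustration, not cAdvisor's code):

package main

import (
	"fmt"
	"os/exec"
)

// checkThinLs reports whether the thin_ls binary used by the thin-pool
// watcher is resolvable on PATH, giving a clearer hint than "exit status 127".
func checkThinLs() error {
	path, err := exec.LookPath("thin_ls")
	if err != nil {
		return fmt.Errorf("thin_ls not found on PATH (device-mapper-persistent-data tools missing?): %v", err)
	}
	fmt.Println("thin_ls found at", path)
	return nil
}

func main() {
	if err := checkThinLs(); err != nil {
		fmt.Println(err)
	}
}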
I1209 18:16:15.391486 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:16:15.400139 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:16:15.409620 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:16:15.418555 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:16:15.432253 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:16:15.444432 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:16:15.456352 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:16:15.467392 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:16:15.477803 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:16:15.932315 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:16:15.938632 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:16:15.939696 20134 controller.go:113] Found 0 jobs
I1209 18:16:15.939837 20134 controller.go:116] Found 0 groups
I1209 18:16:15.975351 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:16:15.975979 20134 interface.go:93] Interface eth0 is up
I1209 18:16:15.976617 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:16:15.976684 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:16:15.976700 20134 interface.go:114] IP found 192.168.121.18
I1209 18:16:15.977844 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:16:15.977879 20134 interface.go:254] Choosing IP 192.168.121.18
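The interface.go lines above walk the addresses of the default-route interface and settle on the first usable IPv4 address (192.168.121.18), skipping the link-local fe80:: entry. A rough Go equivalent of that selection (illustrative; the interface name "eth0" is taken from the log):

package main

import (
	"fmt"
	"net"
)

// firstGlobalIPv4 returns the first non-loopback, non-link-local IPv4
// address on the named interface, much like the
// "valid IPv4 address for interface \"eth0\" found as ..." step above.
func firstGlobalIPv4(name string) (net.IP, error) {
	iface, err := net.InterfaceByName(name)
	if err != nil {
		return nil, err
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return nil, err
	}
	for _, a := range addrs {
		ipnet, ok := a.(*net.IPNet)
		if !ok {
			continue
		}
		ip := ipnet.IP.To4()
		if ip == nil || ip.IsLoopback() || ip.IsLinkLocalUnicast() {
			continue
		}
		return ip, nil
	}
	return nil, fmt.Errorf("no global IPv4 address on %s", name)
}

func main() {
	ip, err := firstGlobalIPv4("eth0")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Choosing IP", ip)
}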
I1209 18:16:16.027051 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:16:16.036928 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:16.036957 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:16.072670 20134 eviction_manager.go:204] eviction manager: no resources are starved
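The eviction manager line above means none of the configured eviction thresholds were crossed on this pass. Loosely, the check amounts to comparing observed availability per signal against a minimum; a toy Go sketch under that assumption (hypothetical signal names and numbers, not the kubelet's real thresholds):

package main

import "fmt"

// threshold is a hypothetical eviction threshold: evict when the observed
// availability of a signal drops below MinAvail bytes.
type threshold struct {
	Signal   string
	MinAvail int64
}

// starvedResources returns the signals whose observed availability is below
// their threshold; an empty result corresponds to the
// "eviction manager: no resources are starved" line above.
func starvedResources(observed map[string]int64, thresholds []threshold) []string {
	var starved []string
	for _, t := range thresholds {
		if avail, ok := observed[t.Signal]; ok && avail < t.MinAvail {
			starved = append(starved, t.Signal)
		}
	}
	return starved
}

func main() {
	observed := map[string]int64{"memory.available": 6_500_000_000}
	thresholds := []threshold{{Signal: "memory.available", MinAvail: 100_000_000}}
	fmt.Println("starved:", starvedResources(observed, thresholds)) // starved: []
}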
I1209 18:16:16.193347 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:16:16.193812 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:16:16.194138 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:16:16.194816 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:16:16.371000 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:16.374708 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:16.561931 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:05 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:05 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:05 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:05 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:16 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:16:16.693475 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:16.696157 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:16.698368 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:17.375380 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:17.378487 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:18.378973 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:18.381214 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:18.693377 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:18.695972 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:19.381954 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:19.386484 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:20.386824 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:20.389956 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:20.437275 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:16:20.693281 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:20.695613 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:20.696902 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:21.390313 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:21.392614 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:22.393117 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:22.395789 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:22.693395 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:22.695105 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:22.696627 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:23.396145 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:23.399029 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:24.399914 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:24.403141 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:24.693268 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:24.695117 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:25.403703 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:25.406218 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:25.950571 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:16:25.957379 20134 controller.go:113] Found 0 jobs
I1209 18:16:25.957403 20134 controller.go:116] Found 0 groups
I1209 18:16:26.032296 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:16:26.032843 20134 interface.go:93] Interface eth0 is up
I1209 18:16:26.033095 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:16:26.033269 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:16:26.033409 20134 interface.go:114] IP found 192.168.121.18
I1209 18:16:26.033534 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:16:26.033649 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:16:26.087947 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:16:26.097351 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:26.097380 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:26.131421 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:16:26.406570 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:26.409054 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:26.568370 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:26 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:16:26.693405 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:26.694940 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:26.696487 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:27.409532 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:27.412650 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:27.809462 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:16:28.413379 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:28.415856 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:28.693352 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:28.694829 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:29.416598 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:29.418973 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:30.378101 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:16:30.378156 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:16:30.405032 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:30.405079 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:16:30.419413 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:30.422315 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:30.435398 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:30.435491 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:16:30.435524 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:16:30.456242 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:30.456965 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:16:30.479674 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:16:30.481133 20134 thin_pool_watcher.go:77] thin_ls(1481307390) took 103.044129ms
I1209 18:16:30.693439 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:30.695939 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:30.697678 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:31.193703 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:16:31.194268 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:16:31.194362 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:16:31.422803 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:31.425798 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:32.052254 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:16:32.426381 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:32.428802 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:32.693348 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:32.695451 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:33.429064 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:33.431400 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:34.431743 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:34.434476 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:34.693365 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:34.695805 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:34.697052 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:35.434904 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:35.438952 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:35.966224 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:16:35.972683 20134 controller.go:113] Found 0 jobs
I1209 18:16:35.972715 20134 controller.go:116] Found 0 groups
I1209 18:16:36.092485 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:16:36.092694 20134 interface.go:93] Interface eth0 is up
I1209 18:16:36.093670 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:16:36.093721 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:16:36.093735 20134 interface.go:114] IP found 192.168.121.18
I1209 18:16:36.093747 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:16:36.093757 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:16:36.145074 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:16:36.166544 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:36.166618 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:36.363401 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:16:36.439286 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:36.441581 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:36.573988 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:36 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:16:36.693396 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:36.695626 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:37.442475 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:37.445745 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:38.446765 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:38.449491 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:38.693261 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:38.695164 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:38.696236 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:39.450369 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:39.452980 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:40.453456 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:40.456220 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:40.693371 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:40.695447 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:41.456634 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:41.458945 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:42.388468 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:16:42.459305 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:42.462651 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:42.693364 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:42.695579 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:42.697060 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:43.462880 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:43.465006 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:44.299423 20134 worker.go:45] 0 Health Check Listeners
I1209 18:16:44.299472 20134 worker.go:46] 0 Services registered for health checking
I1209 18:16:44.465211 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:44.466920 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:44.602426 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:44.605241 20134 container_gc.go:249] Removing container "5122ba25e59b3e0aa0dae770964e9747c9e49fc15f1f6d260713914d3190a104" name "deployment"
I1209 18:16:44.693167 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:44.694866 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:44.840483 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:16:44.847992 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:16:44.861316 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:16:44.861379 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:16:44.861413 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:16:44.861437 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:16:44.861474 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:16:44.862320 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:16:44.862582 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (6.009µs)
I1209 18:16:44.862721 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:16:45.035313 20134 container_gc.go:249] Removing container "955f3ff60dee1a6e78eca0486c2e4f580321c66e34d5075f518699918d5c0b41" name "POD"
I1209 18:16:45.158406 20134 iptables.go:362] running iptables -N [KUBE-MARK-DROP -t nat]
I1209 18:16:45.168975 20134 proxier.go:758] Syncing iptables rules
I1209 18:16:45.169078 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:16:45.188404 20134 iptables.go:362] running iptables -C [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
I1209 18:16:45.196829 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:16:45.209687 20134 iptables.go:362] running iptables -N [KUBE-FIREWALL -t filter]
I1209 18:16:45.215628 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:16:45.224929 20134 iptables.go:362] running iptables -C [KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
I1209 18:16:45.230134 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:16:45.240614 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -j KUBE-FIREWALL]
I1209 18:16:45.246018 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:16:45.256961 20134 iptables.go:362] running iptables -C [INPUT -t filter -j KUBE-FIREWALL]
I1209 18:16:45.261153 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:16:45.270916 20134 iptables.go:362] running iptables -N [KUBE-MARK-MASQ -t nat]
I1209 18:16:45.275283 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:16:45.285487 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:16:45.290583 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:16:45.302581 20134 iptables.go:362] running iptables -C [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
I1209 18:16:45.310004 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:16:45.324888 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:16:45.332238 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:16:45.332301 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:16:45.352056 20134 iptables.go:362] running iptables -C [KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
I1209 18:16:45.366674 20134 proxier.go:751] syncProxyRules took 197.702762ms
I1209 18:16:45.366715 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:16:45.378541 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:16:45.386836 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:16:45.395311 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:16:45.413965 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:16:45.423905 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:16:45.431862 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:16:45.443893 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:16:45.452940 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:16:45.460770 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:16:45.467340 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:45.468367 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:45.468433 20134 generic.go:141] GenericPLEG: 9194052f-be36-11e6-8171-525400560f2f/5122ba25e59b3e0aa0dae770964e9747c9e49fc15f1f6d260713914d3190a104: exited -> non-existent
I1209 18:16:45.468455 20134 generic.go:141] GenericPLEG: 9194052f-be36-11e6-8171-525400560f2f/955f3ff60dee1a6e78eca0486c2e4f580321c66e34d5075f518699918d5c0b41: exited -> non-existent
I1209 18:16:45.468468 20134 generic.go:318] PLEG: Delete status for pod "9194052f-be36-11e6-8171-525400560f2f"
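Just above, the container GC removed the exited "deployment" and "POD" containers, and the next PLEG relist observes them go from exited to non-existent and deletes the cached status for pod 9194052f-be36-11e6-8171-525400560f2f. A toy sketch of that relist diff in Go (simplified state strings, not the kubelet's actual PLEG types):

package main

import "fmt"

// relistDiff compares two snapshots of container states (ID -> state) and
// reports transitions, including containers that vanished between relists,
// which show up as "exited -> non-existent" in the GenericPLEG lines above.
func relistDiff(prev, cur map[string]string) []string {
	var events []string
	for id, oldState := range prev {
		newState, ok := cur[id]
		if !ok {
			newState = "non-existent"
		}
		if newState != oldState {
			events = append(events, fmt.Sprintf("%s: %s -> %s", id, oldState, newState))
		}
	}
	for id, state := range cur {
		if _, ok := prev[id]; !ok {
			events = append(events, fmt.Sprintf("%s: non-existent -> %s", id, state))
		}
	}
	return events
}

func main() {
	prev := map[string]string{
		"5122ba25e59b3e0a": "exited", // truncated IDs, for illustration only
		"955f3ff60dee1a6e": "exited",
	}
	cur := map[string]string{} // both containers were garbage-collected
	for _, e := range relistDiff(prev, cur) {
		fmt.Println(e)
	}
}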
I1209 18:16:45.481863 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:16:45.481900 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:16:45.495946 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:45.495977 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:16:45.511134 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:45.511162 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:16:45.511171 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:16:45.522378 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:16:45.522409 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:16:45.534905 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:16:45.534940 20134 thin_pool_watcher.go:77] thin_ls(1481307405) took 53.088065ms
I1209 18:16:45.889725 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-debug.mount: invalid container name
I1209 18:16:45.890426 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
I1209 18:16:45.890464 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
I1209 18:16:45.890512 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-debug.mount"
I1209 18:16:45.891173 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-config.mount: invalid container name
I1209 18:16:45.891208 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
I1209 18:16:45.891730 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
I1209 18:16:45.891767 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-config.mount"
I1209 18:16:45.893364 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount: error inspecting container: Error: No such container: 31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f
I1209 18:16:45.893400 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:16:45.893419 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount", but ignoring.
I1209 18:16:45.893429 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:16:45.893450 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper.mount: invalid container name
I1209 18:16:45.893456 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:16:45.893465 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
I1209 18:16:45.893472 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:16:45.893483 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-hugepages.mount: invalid container name
I1209 18:16:45.893489 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
I1209 18:16:45.893495 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
I1209 18:16:45.893501 20134 manager.go:843] ignoring container "/system.slice/dev-hugepages.mount"
I1209 18:16:45.893513 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-docker-netns-default.mount: invalid container name
I1209 18:16:45.893518 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
I1209 18:16:45.893525 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
I1209 18:16:45.893531 20134 manager.go:843] ignoring container "/system.slice/run-docker-netns-default.mount"
I1209 18:16:45.893539 20134 factory.go:104] Error trying to work out if we can handle /system.slice/-.mount: invalid container name
I1209 18:16:45.893544 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/-.mount"
I1209 18:16:45.893549 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
I1209 18:16:45.893555 20134 manager.go:843] ignoring container "/system.slice/-.mount"
I1209 18:16:45.893566 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-user-1001.mount: invalid container name
I1209 18:16:45.893571 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-user-1001.mount"
I1209 18:16:45.893577 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-user-1001.mount", but ignoring.
I1209 18:16:45.893583 20134 manager.go:843] ignoring container "/system.slice/run-user-1001.mount"
I1209 18:16:45.893592 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-mqueue.mount: invalid container name
I1209 18:16:45.893597 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
I1209 18:16:45.893603 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
I1209 18:16:45.893610 20134 manager.go:843] ignoring container "/system.slice/dev-mqueue.mount"
I1209 18:16:45.979593 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:16:45.985998 20134 controller.go:113] Found 0 jobs
I1209 18:16:45.986017 20134 controller.go:116] Found 0 groups
I1209 18:16:46.150831 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:16:46.151059 20134 interface.go:93] Interface eth0 is up
I1209 18:16:46.151153 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:16:46.151185 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:16:46.151197 20134 interface.go:114] IP found 192.168.121.18
I1209 18:16:46.151210 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:16:46.151219 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:16:46.194026 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:16:46.196123 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:16:46.196166 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:16:46.196181 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:16:46.198545 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:16:46.252085 20134 reflector.go:284] github.com/openshift/origin/pkg/project/controller/factory.go:36: forcing resync
I1209 18:16:46.403884 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:46.403924 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:46.422631 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:16:46.468753 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:46.470369 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:46.580280 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:46 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:16:46.693402 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:46.695516 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:46.696952 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:47.470631 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:47.472463 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:48.472808 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:48.475206 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:48.693431 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:48.695640 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:48.696757 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:49.476057 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:49.477919 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:49.709081 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:16:50.478240 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:50.480229 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:50.693447 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:50.695004 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:50.696403 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:51.480489 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:51.481766 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:52.482004 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:52.483927 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:52.693351 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:52.694852 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:53.484192 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:53.485864 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:54.486417 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:54.488229 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:54.693270 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:54.695196 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:54.696503 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:55.488534 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:55.491669 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:55.992535 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:16:55.999507 20134 controller.go:113] Found 0 jobs
I1209 18:16:55.999684 20134 controller.go:116] Found 0 groups
I1209 18:16:56.089998 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:16:56.199702 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:16:56.199935 20134 interface.go:93] Interface eth0 is up
I1209 18:16:56.200020 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:16:56.200045 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:16:56.200056 20134 interface.go:114] IP found 192.168.121.18
I1209 18:16:56.200066 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:16:56.200075 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:16:56.241385 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:16:56.466760 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:56.467007 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:16:56.484233 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:16:56.491887 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:56.493225 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:56.584810 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:56 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:16:56.693269 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:56.694755 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:57.493585 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:57.496639 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:58.496943 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:58.498884 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:58.693381 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:16:58.694871 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:58.696505 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:16:59.499269 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:16:59.501017 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:00.501282 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:00.503527 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:00.536384 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:17:00.536506 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:17:00.560678 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:00.560946 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:17:00.584036 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:00.584268 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:17:00.584303 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:17:00.597453 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:00.597584 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:17:00.611537 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:17:00.611895 20134 thin_pool_watcher.go:77] thin_ls(1481307420) took 75.569231ms
I1209 18:17:00.693401 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:00.695050 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:00.696078 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:01.184547 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:17:01.194212 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:17:01.196366 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:17:01.196411 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:17:01.503898 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:01.505888 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:02.506132 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:02.507999 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:02.693361 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:02.696116 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:03.508305 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:03.510807 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:04.511091 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:04.513548 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:04.693316 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:04.694923 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:04.696559 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:05.513869 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:05.515729 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:06.007388 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:17:06.014641 20134 controller.go:113] Found 0 jobs
I1209 18:17:06.014884 20134 controller.go:116] Found 0 groups
I1209 18:17:06.246789 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:17:06.247054 20134 interface.go:93] Interface eth0 is up
I1209 18:17:06.248255 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:17:06.248403 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:17:06.248423 20134 interface.go:114] IP found 192.168.121.18
I1209 18:17:06.249134 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:17:06.249151 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:17:06.300884 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:17:06.515953 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:06.517696 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:06.519973 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:06.520029 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:06.549075 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:17:06.591132 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:16:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:16:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:16:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:16:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:06 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:17:06.693279 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:06.695085 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:06.932681 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:17:07.517964 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:07.519560 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:08.519873 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:08.521477 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:08.693369 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:08.694363 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:08.695724 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:09.521672 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:09.523996 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:10.524307 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:10.526992 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:10.693422 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:10.703481 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:10.704575 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:11.527373 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:11.529126 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:12.529433 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:12.531245 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:12.693299 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:12.695045 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:13.531722 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:13.533573 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:14.534017 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:14.535818 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:14.693237 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:14.695239 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:14.696727 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:14.840719 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:17:14.848224 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:17:14.861589 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:17:14.861650 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:17:14.861670 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:17:14.861687 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:17:14.861657 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:17:14.862691 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:17:14.862863 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (5.734µs)
I1209 18:17:14.862947 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:17:15.169007 20134 proxier.go:758] Syncing iptables rules
I1209 18:17:15.169369 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:17:15.190381 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:17:15.207948 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:15.223254 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:15.239985 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:15.256385 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:17:15.273009 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:17:15.285873 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:17:15.302260 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:17:15.317633 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:17:15.317664 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:17:15.339151 20134 proxier.go:751] syncProxyRules took 170.148263ms
I1209 18:17:15.339377 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:17:15.362459 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:17:15.376880 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:17:15.392424 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:17:15.408736 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:17:15.430490 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:17:15.448897 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:17:15.466952 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:17:15.483361 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:17:15.501122 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:17:15.535993 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:15.537285 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:15.612169 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:17:15.612214 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:17:15.636035 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:15.636080 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:17:15.658212 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:15.658252 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:17:15.658285 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:17:15.672566 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:15.672592 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:17:15.687775 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:17:15.687859 20134 thin_pool_watcher.go:77] thin_ls(1481307435) took 75.70056ms
I1209 18:17:16.022420 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:17:16.029665 20134 controller.go:113] Found 0 jobs
I1209 18:17:16.029699 20134 controller.go:116] Found 0 groups
I1209 18:17:16.194482 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:17:16.196267 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:17:16.196590 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:17:16.196661 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:17:16.304374 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:17:16.304792 20134 interface.go:93] Interface eth0 is up
I1209 18:17:16.304968 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:17:16.305280 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:17:16.305356 20134 interface.go:114] IP found 192.168.121.18
I1209 18:17:16.305374 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:17:16.305382 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:17:16.345129 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:17:16.537567 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:16.539874 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:16.561811 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:16.561975 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:16.596712 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:16 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:17:16.609598 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:17:16.693366 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:16.695805 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:16.697798 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:17.540198 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:17.543110 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:18.544107 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:18.546434 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:18.693393 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:18.695517 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:18.696744 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:19.022789 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:17:19.449672 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:17:19.546641 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:19.548619 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:20.548986 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:20.550903 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:20.693262 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:20.695028 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:21.551075 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:21.552973 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:22.553274 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:22.555022 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:22.693507 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:22.696044 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:22.697821 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:23.555385 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:23.557604 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:24.557938 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:24.560002 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:24.693491 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:24.696110 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:24.697549 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:25.560297 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:25.563102 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:26.035168 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:17:26.041046 20134 controller.go:113] Found 0 jobs
I1209 18:17:26.041085 20134 controller.go:116] Found 0 groups
I1209 18:17:26.348254 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:17:26.348459 20134 interface.go:93] Interface eth0 is up
I1209 18:17:26.348554 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:17:26.348593 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:17:26.348611 20134 interface.go:114] IP found 192.168.121.18
I1209 18:17:26.348624 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:17:26.348641 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:17:26.391071 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:17:26.563537 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:26.566770 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:26.601668 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:26 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:17:26.634942 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:26.634972 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:26.677898 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:17:26.693222 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:26.694071 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:27.567024 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:27.569033 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:28.570120 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:28.572875 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:28.693292 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:28.694720 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:28.695978 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:29.573169 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:29.575732 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:30.576476 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:30.578833 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:30.688186 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:17:30.688235 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:17:30.699791 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:30.701909 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:30.702986 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:30.717622 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:30.717658 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:17:30.742613 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:30.742648 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:17:30.742659 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:17:30.757403 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:30.757429 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:17:30.776098 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:17:30.776166 20134 thin_pool_watcher.go:77] thin_ls(1481307450) took 87.992447ms
I1209 18:17:30.886080 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
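The thin_ls failure above (exit status 127, empty output) lines up with the "no cached value for usage of device 16" message that follows; 127 is the conventional "command not found" code when a shell is involved, so the most likely explanation is simply that thin_ls is not usable on this host. A rough sketch of running the same command from Go (circa Go 1.7) and pulling the exit status out of the error, purely for illustration:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// Same invocation the watcher logs: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
	out, err := exec.Command("thin_ls", "--no-headers", "-m", "-o", "DEV,EXCLUSIVE_BYTES", "/dev/loop1").CombinedOutput()
	if err != nil {
		// A command that started but exited non-zero yields an *exec.ExitError;
		// the numeric code comes from the wait status (Go 1.7-era API).
		if ee, ok := err.(*exec.ExitError); ok {
			if ws, ok := ee.Sys().(syscall.WaitStatus); ok {
				fmt.Printf("thin_ls exited with status %d, output: %q\n", ws.ExitStatus(), out)
				return
			}
		}
		// PATH lookup failures (exec.ErrNotFound and friends) land here instead.
		fmt.Printf("could not run thin_ls: %v\n", err)
		return
	}
	fmt.Printf("thin_ls output:\n%s", out)
}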
I1209 18:17:31.194781 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:17:31.196756 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:17:31.196803 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:17:31.579231 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:31.583781 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:32.055511 20134 anyauthpassword.go:40] Got userIdentityMapping: &user.DefaultInfo{Name:"test", UID:"c054c90d-be3b-11e6-8665-525400560f2f", Groups:[]string(nil), Extra:map[string][]string(nil)}
I1209 18:17:32.055658 20134 basicauth.go:45] Login with provider "anypassword" succeeded for login "test": &user.DefaultInfo{Name:"test", UID:"c054c90d-be3b-11e6-8665-525400560f2f", Groups:[]string(nil), Extra:map[string][]string(nil)}
I1209 18:17:32.056639 20134 authenticator.go:38] OAuth authentication succeeded: &user.DefaultInfo{Name:"test", UID:"c054c90d-be3b-11e6-8665-525400560f2f", Groups:[]string(nil), Extra:map[string][]string(nil)}
I1209 18:17:32.146845 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:17:32.584609 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:32.585937 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:32.693414 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:32.694778 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:33.586175 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:33.588276 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:34.589010 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:34.590833 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:34.693380 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:34.695436 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:34.697148 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:35.591642 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:35.593387 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:36.047992 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:17:36.054577 20134 controller.go:113] Found 0 jobs
I1209 18:17:36.054636 20134 controller.go:116] Found 0 groups
I1209 18:17:36.395076 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:17:36.395304 20134 interface.go:93] Interface eth0 is up
I1209 18:17:36.395425 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:17:36.395459 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:17:36.395469 20134 interface.go:114] IP found 192.168.121.18
I1209 18:17:36.395479 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:17:36.395490 20134 interface.go:254] Choosing IP 192.168.121.18
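The interface.go lines above walk the default-route interface's addresses and settle on the first usable IPv4 address. A minimal sketch of that address-selection step with the standard net package, assuming the interface name ("eth0" here) has already been determined from the routing table:

package main

import (
	"errors"
	"fmt"
	"net"
)

// firstIPv4 returns the first global unicast IPv4 address configured on the
// named interface, mirroring the "valid IPv4 address for interface ... found"
// step in the log (the default-route lookup itself is out of scope here).
func firstIPv4(ifaceName string) (net.IP, error) {
	iface, err := net.InterfaceByName(ifaceName)
	if err != nil {
		return nil, err
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return nil, err
	}
	for _, a := range addrs {
		ipnet, ok := a.(*net.IPNet)
		if !ok {
			continue
		}
		if ip := ipnet.IP.To4(); ip != nil && ip.IsGlobalUnicast() {
			return ip, nil
		}
	}
	return nil, errors.New("no IPv4 address on " + ifaceName)
}

func main() {
	ip, err := firstIPv4("eth0")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("choosing IP", ip)
}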
I1209 18:17:36.451557 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:17:36.593663 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:36.595774 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:36.607001 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:36 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:17:36.693388 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:36.695634 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:36.729684 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:36.729709 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:36.738882 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:17:36.795044 20134 reflector.go:284] github.com/openshift/origin/pkg/user/cache/groups.go:38: forcing resync
I1209 18:17:37.595953 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:37.597393 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:37.758200 20134 namespace_controller.go:195] Finished syncing namespace "test" (280ns)
I1209 18:17:37.766221 20134 create_dockercfg_secrets.go:85] Adding service account default
I1209 18:17:37.772770 20134 namespace_controller.go:195] Finished syncing namespace "test" (310ns)
I1209 18:17:37.779590 20134 create_dockercfg_secrets.go:85] Adding service account builder
I1209 18:17:37.785647 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:17:37.786272 20134 create_dockercfg_secrets.go:460] Creating token secret "default-token-h272y" for service account test/default
I1209 18:17:37.792995 20134 create_dockercfg_secrets.go:85] Adding service account deployer
I1209 18:17:37.794572 20134 create_dockercfg_secrets.go:460] Creating token secret "builder-token-ylk44" for service account test/builder
I1209 18:17:37.800536 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:17:37.807933 20134 create_dockercfg_secrets.go:479] Token secret for service account test/default is not populated yet
I1209 18:17:37.807955 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account test/default, will retry
I1209 18:17:37.808115 20134 create_dockercfg_secrets.go:460] Creating token secret "default-token-h272y" for service account test/default
I1209 18:17:37.811359 20134 create_dockercfg_secrets.go:479] Token secret for service account test/builder is not populated yet
I1209 18:17:37.811376 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account test/builder, will retry
I1209 18:17:37.811513 20134 create_dockercfg_secrets.go:460] Creating token secret "builder-token-ylk44" for service account test/builder
I1209 18:17:37.815689 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-2mmr9" for service account test/deployer
I1209 18:17:37.815937 20134 create_dockercfg_secrets.go:158] Adding token secret test/default-token-h272y
I1209 18:17:37.816299 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:17:37.822039 20134 tokens_controller.go:449] deleting secret test/builder-token-k1lvr because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "builder": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:17:37.824140 20134 tokens_controller.go:449] deleting secret test/default-token-ntvzz because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "default": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:17:37.824489 20134 create_dockercfg_secrets.go:479] Token secret for service account test/deployer is not populated yet
I1209 18:17:37.824507 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account test/deployer, will retry
I1209 18:17:37.824601 20134 create_dockercfg_secrets.go:460] Creating token secret "deployer-token-2mmr9" for service account test/deployer
I1209 18:17:37.824843 20134 create_dockercfg_secrets.go:158] Adding token secret test/builder-token-ylk44
I1209 18:17:37.827523 20134 create_dockercfg_secrets.go:479] Token secret for service account test/default is not populated yet
I1209 18:17:37.827553 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account test/default, will retry
I1209 18:17:37.841772 20134 create_dockercfg_secrets.go:158] Adding token secret test/deployer-token-2mmr9
I1209 18:17:37.842076 20134 create_dockercfg_secrets.go:479] Token secret for service account test/builder is not populated yet
I1209 18:17:37.842092 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account test/builder, will retry
I1209 18:17:37.842168 20134 tokens_controller.go:449] deleting secret test/deployer-token-jmh4n because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "deployer": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:17:37.868035 20134 create_dockercfg_secrets.go:156] Updating token secret test/default-token-h272y
I1209 18:17:37.868140 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "default-dockercfg-6y9j2" for service account test/default
I1209 18:17:37.869751 20134 create_dockercfg_secrets.go:479] Token secret for service account test/deployer is not populated yet
I1209 18:17:37.869769 20134 create_dockercfg_secrets.go:366] The dockercfg secret was not created for service account test/deployer, will retry
I1209 18:17:37.899425 20134 create_dockercfg_secrets.go:156] Updating token secret test/deployer-token-2mmr9
I1209 18:17:37.899614 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "deployer-dockercfg-cotcn" for service account test/deployer
I1209 18:17:37.909509 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:17:37.917217 20134 create_dockercfg_secrets.go:156] Updating token secret test/builder-token-ylk44
I1209 18:17:37.917378 20134 create_dockercfg_secrets.go:497] Creating dockercfg secret "builder-dockercfg-0fxle" for service account test/builder
I1209 18:17:37.935083 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:17:37.935264 20134 create_dockercfg_secrets.go:90] Updating service account deployer
I1209 18:17:37.937788 20134 tokens_controller.go:449] deleting secret test/default-token-9opf2 because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "default": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:17:37.941166 20134 tokens_controller.go:449] deleting secret test/deployer-token-vu9ud because reference couldn't be added (Operation cannot be fulfilled on serviceaccounts "deployer": the object has been modified; please apply your changes to the latest version and try again)
I1209 18:17:37.982471 20134 create_dockercfg_secrets.go:90] Updating service account builder
I1209 18:17:37.990592 20134 create_dockercfg_secrets.go:90] Updating service account default
I1209 18:17:37.994550 20134 create_dockercfg_secrets.go:90] Updating service account deployer
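The "Operation cannot be fulfilled ... the object has been modified" messages above are ordinary optimistic-concurrency conflicts between the token and dockercfg controllers: each re-reads the service account and retries, which is why every "will retry" is eventually followed by a successful update. A generic sketch of that get-mutate-update retry loop, with self-contained stand-ins rather than the controllers' actual client code:

package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for the API server's 409 "object has been modified" error.
var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

type serviceAccount struct {
	ResourceVersion int
	Secrets         []string
}

// updateWithRetry re-reads the latest object and reapplies the mutation
// whenever the write fails with a conflict, up to maxRetries attempts.
func updateWithRetry(get func() (serviceAccount, error),
	mutate func(*serviceAccount),
	update func(serviceAccount) error,
	maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		sa, err := get()
		if err != nil {
			return err
		}
		mutate(&sa)
		if err = update(sa); err == nil {
			return nil
		}
		if err != errConflict {
			return err // only conflicts are worth retrying
		}
	}
	return fmt.Errorf("still conflicting after %d attempts", maxRetries)
}

func main() {
	store := serviceAccount{ResourceVersion: 1}
	stale := true // force one conflict, as in the log
	get := func() (serviceAccount, error) { return store, nil }
	update := func(sa serviceAccount) error {
		if stale {
			stale = false
			return errConflict
		}
		store = sa
		return nil
	}
	err := updateWithRetry(get, func(sa *serviceAccount) {
		sa.Secrets = append(sa.Secrets, "default-token-h272y")
	}, update, 3)
	fmt.Println("result:", err, "secrets:", store.Secrets)
}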
I1209 18:17:38.597561 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:38.599500 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:38.693374 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:38.694788 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:38.696206 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:39.599678 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:39.601789 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:40.602140 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:40.603819 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:40.693402 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:40.695560 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:40.696702 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:41.604138 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:41.605917 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:42.208547 20134 reflector.go:284] github.com/openshift/origin/pkg/project/auth/cache.go:189: forcing resync
I1209 18:17:42.266437 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:17:42.606225 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:42.609284 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:42.693380 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:42.694796 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:43.609533 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:43.611367 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:44.299399 20134 worker.go:45] 0 Health Check Listeners
I1209 18:17:44.299492 20134 worker.go:46] 0 Services registered for health checking
I1209 18:17:44.611577 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:44.613349 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:44.693189 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:44.694173 20134 importer.go:376] importing remote Docker repository registry=https://registry-1.docker.io repository=openshift/deployment-example insecure=false
I1209 18:17:44.694950 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:44.695839 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:44.817100 20134 credentials.go:169] Being asked for https://auth.docker.io/token, trying index.docker.io/v1 for legacy behavior
I1209 18:17:44.817306 20134 credentials.go:174] Being asked for //index.docker.io/v1, trying docker.io for legacy behavior
I1209 18:17:44.817387 20134 credentials.go:177] Unable to find a secret to match //docker.io (docker.io)
I1209 18:17:44.840906 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:17:44.848371 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:17:44.861774 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:17:44.861870 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:17:44.861940 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:17:44.862014 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:17:44.862090 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:17:44.862873 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:17:44.863048 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (3.551µs)
I1209 18:17:44.863426 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:17:45.041204 20134 trace.go:61] Trace "Create /oapi/v1/namespaces/test/imagestreamimports" (started 2016-12-09 18:17:44.69274964 +0000 UTC):
[34.973µs] [34.973µs] About to convert to expected version
[251.731µs] [216.758µs] Conversion done
[1.080016ms] [828.285µs] About to store object in database
[347.502817ms] [346.422801ms] Object stored in database
[347.5186ms] [15.783µs] Self-link added
[348.379118ms] [860.518µs] END
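The imagestreamimports trace above reports, for each step, the cumulative time since the request started and the delta since the previous step; nearly all of the ~348ms is the "Object stored in database" step. A small sketch of that bookkeeping (a simplified stand-in, not the upstream trace utility):

package main

import (
	"fmt"
	"time"
)

// trace records named steps and prints cumulative and per-step durations,
// in the same spirit as the "[cumulative] [delta] step" lines in the log.
type trace struct {
	start time.Time
	last  time.Time
}

func newTrace(name string) *trace {
	fmt.Printf("Trace %q started\n", name)
	now := time.Now()
	return &trace{start: now, last: now}
}

func (t *trace) step(msg string) {
	now := time.Now()
	fmt.Printf("[%v] [%v] %s\n", now.Sub(t.start), now.Sub(t.last), msg)
	t.last = now
}

func main() {
	t := newTrace("Create /oapi/v1/namespaces/test/imagestreamimports")
	t.step("About to convert to expected version")
	t.step("Conversion done")
	time.Sleep(50 * time.Millisecond) // simulate the slow storage write
	t.step("Object stored in database")
	t.step("END")
}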
I1209 18:17:45.062173 20134 strategy.go:179] Detected changed tag latest in test/deployment-example
I1209 18:17:45.083033 20134 factory.go:160] Image stream "deployment-example" added.
I1209 18:17:45.094249 20134 image_change_controller.go:37] Build image change controller detected ImageStream change
I1209 18:17:45.097805 20134 controller.go:129] Importing stream test/deployment-example partial=true...
I1209 18:17:45.097986 20134 factory.go:142] DEBUG: stream deployment-example was just imported
I1209 18:17:45.099417 20134 importer.go:376] importing remote Docker repository registry=https://registry-1.docker.io repository=openshift/deployment-example insecure=false
I1209 18:17:45.123454 20134 factory.go:113] Adding deployment config "deployment-example"
I1209 18:17:45.123549 20134 controller.go:80] Reconciling test/deployment-example
I1209 18:17:45.132679 20134 credentials.go:169] Being asked for https://auth.docker.io/token, trying index.docker.io/v1 for legacy behavior
I1209 18:17:45.132738 20134 credentials.go:174] Being asked for //index.docker.io/v1, trying docker.io for legacy behavior
I1209 18:17:45.132781 20134 credentials.go:177] Unable to find a secret to match //docker.io (docker.io)
I1209 18:17:45.134416 20134 rest.go:567] Service type: ClusterIP does not need health check node port
I1209 18:17:45.138837 20134 controller.go:297] Updated the status for "test/deployment-example" (observed generation: 1)
I1209 18:17:45.139182 20134 endpoints_controller.go:360] About to update endpoints for service "test/deployment-example"
I1209 18:17:45.139736 20134 controller.go:107] Adding service test/deployment-example
I1209 18:17:45.139957 20134 secret_creating_controller.go:98] Adding service deployment-example
I1209 18:17:45.140608 20134 config.go:256] Setting services (config.ServiceUpdate) {
Services: ([]api.Service) (len=2 cap=2) {
(api.Service) &TypeMeta{Kind:,APIVersion:,},
(api.Service) &TypeMeta{Kind:,APIVersion:,}
},
Op: (config.Operation) 0
}
I1209 18:17:45.140637 20134 config.go:208] Calling handler.OnServiceUpdate()
I1209 18:17:45.140826 20134 proxier.go:397] Received update notice: []
I1209 18:17:45.140934 20134 proxier.go:431] Adding new service "test/deployment-example:8080-tcp" at 172.30.122.15:8080/TCP
I1209 18:17:45.141039 20134 proxier.go:459] added serviceInfo(test/deployment-example:8080-tcp): (*iptables.serviceInfo)(0xc83267e4d0)({
clusterIP: (net.IP) (len=16 cap=16) 172.30.122.15,
port: (int) 8080,
protocol: (api.Protocol) (len=3) "TCP",
nodePort: (int) 0,
loadBalancerStatus: (api.LoadBalancerStatus) {
Ingress: ([]api.LoadBalancerIngress) {
}
},
sessionAffinityType: (api.ServiceAffinity) (len=4) "None",
stickyMaxAgeSeconds: (int) 180,
externalIPs: ([]string) <nil>,
loadBalancerSourceRanges: ([]string) <nil>,
onlyNodeLocalEndpoints: (bool) false,
healthCheckNodePort: (int) 0
})
I1209 18:17:45.141090 20134 proxier.go:758] Syncing iptables rules
I1209 18:17:45.141105 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:17:45.151279 20134 factory.go:125] Updating deployment config "deployment-example"
I1209 18:17:45.151393 20134 controller.go:80] Reconciling test/deployment-example
I1209 18:17:45.161843 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:17:45.174178 20134 endpoints_controller.go:497] Update endpoints for test/deployment-example, ready: 0 not ready: 0
I1209 18:17:45.175049 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:45.188692 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:45.200309 20134 endpoints_controller.go:321] Finished syncing service "test/deployment-example" endpoints. (61.148063ms)
I1209 18:17:45.201403 20134 config.go:147] Setting endpoints (config.EndpointsUpdate) {
Endpoints: ([]api.Endpoints) (len=2 cap=2) {
(api.Endpoints) &TypeMeta{Kind:,APIVersion:,},
(api.Endpoints) &TypeMeta{Kind:,APIVersion:,}
},
Op: (config.Operation) 0
}
I1209 18:17:45.201471 20134 config.go:99] Calling handler.OnEndpointsUpdate()
I1209 18:17:45.203796 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:45.210802 20134 controller.go:60] Error instantiating deployment config test/deployment-example: cannot trigger a deployment for "deployment-example" because it contains unresolved images
I1209 18:17:45.218493 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:17:45.224433 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:45.230306 20134 controller.go:60] Error instantiating deployment config test/deployment-example: cannot trigger a deployment for "deployment-example" because it contains unresolved images
I1209 18:17:45.235571 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:17:45.252614 20134 controller.go:60] Error instantiating deployment config test/deployment-example: cannot trigger a deployment for "deployment-example" because it contains unresolved images
I1209 18:17:45.257394 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:17:45.272021 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:17:45.285456 20134 controller.go:60] Error instantiating deployment config test/deployment-example: cannot trigger a deployment for "deployment-example" because it contains unresolved images
I1209 18:17:45.294562 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:17:45.295066 20134 iptables.go:339] running iptables-restore [--noflush --counters]
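The proxier assembles the whole *filter/*nat payload shown above and hands it to iptables-restore in one shot, with --noflush so unrelated chains are left alone and --counters to preserve packet counts (flags taken from the log line). A sketch of that invocation pattern, with the rule text abbreviated to one of the rules logged above:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Abbreviated payload in the same format as the block logged above.
	rules := `*filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
`
	cmd := exec.Command("iptables-restore", "--noflush", "--counters")
	cmd.Stdin = bytes.NewBufferString(rules)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Needs root and the iptables userspace tools; expect a failure otherwise.
		fmt.Printf("iptables-restore failed: %v\n%s", err, out)
		return
	}
	fmt.Println("rules restored")
}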
I1209 18:17:45.308536 20134 helper.go:382] Image metadata already filled for sha256:ea9135488f323060cb18ab3ec06286cd49e4b3a611fce1a6a442651ecf421f99
I1209 18:17:45.328597 20134 rest.go:251] updating stream {"name":"deployment-example","namespace":"test","uid":"c815a8da-be3b-11e6-8665-525400560f2f","resourceVersion":"507","generation":1,"creationTimestamp":"2016-12-09T18:17:45Z","labels":{"app":"deployment-example"},"annotations":{"openshift.io/generated-by":"OpenShiftNewApp"},"Spec":{"DockerImageRepository":"","Tags":{"latest":{"Name":"latest","Annotations":{"openshift.io/imported-from":"openshift/deployment-example"},"From":{"kind":"DockerImage","name":"openshift/deployment-example"},"Reference":false,"Generation":
A: 1,"ImportPolicy":{"Insecure":false,"Scheduled":false}}}},"Status":{"DockerImageRepository":"","Tags":{}}}
B: 0,"ImportPolicy":{"Insecure":false,"Scheduled":false}}}},"Status":{"DockerImageRepository":"","Tags":{"latest":{"Items":[{"Created":"2016-12-09T18:17:45Z","DockerImageReference":"openshift/deployment-example@sha256:ea9135488f323060cb18ab3ec06286cd49e4b3a611fce1a6a442651ecf421f99","Image":"sha256:ea9135488f323060cb18ab3ec06286cd49e4b3a611fce1a6a442651ecf421f99","Generation":2}],"Conditions":null}}}}
I1209 18:17:45.334655 20134 controller.go:60] Error instantiating deployment config test/deployment-example: cannot trigger a deployment for "deployment-example" because it contains unresolved images
I1209 18:17:45.444090 20134 proxier.go:751] syncProxyRules took 302.992332ms
I1209 18:17:45.444149 20134 proxier.go:391] OnServiceUpdate took 303.24067ms for 2 services
I1209 18:17:45.444173 20134 proxier.go:758] Syncing iptables rules
I1209 18:17:45.463501 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:17:45.497775 20134 iptables.go:362] running iptables -N [KUBE-MARK-DROP -t nat]
I1209 18:17:45.532697 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:17:45.554747 20134 iptables.go:362] running iptables -C [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
I1209 18:17:45.567817 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:45.572546 20134 iptables.go:362] running iptables -N [KUBE-FIREWALL -t filter]
I1209 18:17:45.583990 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:45.589848 20134 iptables.go:362] running iptables -C [KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
I1209 18:17:45.604375 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:45.611079 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -j KUBE-FIREWALL]
I1209 18:17:45.624643 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:17:45.631436 20134 iptables.go:362] running iptables -C [INPUT -t filter -j KUBE-FIREWALL]
I1209 18:17:45.644758 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:17:45.651645 20134 iptables.go:362] running iptables -N [KUBE-MARK-MASQ -t nat]
I1209 18:17:45.661219 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:17:45.662024 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:17:45.668211 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:45.686206 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:45.687142 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:17:45.691598 20134 iptables.go:362] running iptables -C [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
I1209 18:17:45.702143 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:17:45.702651 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:17:45.708878 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:17:45.718504 20134 iptables.go:362] running iptables -C [KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
I1209 18:17:45.731238 20134 helper.go:687] UpdateTrackingTags: stream=test/deployment-example, updatedTag=latest, updatedImage.dockerImageReference=openshift/deployment-example@sha256:ea9135488f323060cb18ab3ec06286cd49e4b3a611fce1a6a442651ecf421f99, updatedImage.image=sha256:ea9135488f323060cb18ab3ec06286cd49e4b3a611fce1a6a442651ecf421f99
I1209 18:17:45.731402 20134 helper.go:689] Examining spec tag "latest", tagRef=api.TagReference{Name:"latest", Annotations:map[string]string{"openshift.io/imported-from":"openshift/deployment-example"}, From:(*api.ObjectReference)(0xc821fd0e70), Reference:false, Generation:(*int64)(0xc82433abc0), ImportPolicy:api.TagImportPolicy{Insecure:false, Scheduled:false}}
I1209 18:17:45.731480 20134 helper.go:699] tagRef.Kind "DockerImage" isn't ImageStreamTag, skipping
I1209 18:17:45.736682 20134 proxier.go:751] syncProxyRules took 292.504351ms
I1209 18:17:45.736965 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:17:45.749021 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:17:45.766957 20134 trace.go:61] Trace "Create /oapi/v1/namespaces/test/imagestreamimports" (started 2016-12-09 18:17:45.09894083 +0000 UTC):
[25.305µs] [25.305µs] About to convert to expected version
[66.898µs] [41.593µs] Conversion done
[243.738µs] [176.84µs] About to store object in database
[667.475118ms] [667.23138ms] Object stored in database
[667.489321ms] [14.203µs] Self-link added
[667.969972ms] [480.651µs] END
I1209 18:17:45.769152 20134 proxier.go:758] Syncing iptables rules
I1209 18:17:45.769306 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:17:45.769460 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:17:45.769547 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:17:45.770968 20134 controller.go:168] Import stream test/deployment-example partial=true import: &api.ImageStream{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example", GenerateName:"", Namespace:"test", SelfLink:"", UID:"c815a8da-be3b-11e6-8665-525400560f2f", ResourceVersion:"514", Generation:2, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904265, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"deployment-example"}, Annotations:map[string]string{"openshift.io/generated-by":"OpenShiftNewApp", "openshift.io/image.dockerRepositoryCheck":"2016-12-09T18:17:45Z"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.ImageStreamSpec{DockerImageRepository:"", Tags:map[string]api.TagReference{"latest":api.TagReference{Name:"latest", Annotations:map[string]string{"openshift.io/imported-from":"openshift/deployment-example"}, From:(*api.ObjectReference)(0xc822f80690), Reference:false, Generation:(*int64)(0xc82501cf10), ImportPolicy:api.TagImportPolicy{Insecure:false, Scheduled:false}}}}, Status:api.ImageStreamStatus{DockerImageRepository:"", Tags:map[string]api.TagEventList{"latest":api.TagEventList{Items:[]api.TagEvent{api.TagEvent{Created:unversioned.Time{Time:time.Time{sec:63616904265, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DockerImageReference:"openshift/deployment-example@sha256:ea9135488f323060cb18ab3ec06286cd49e4b3a611fce1a6a442651ecf421f99", Image:"sha256:ea9135488f323060cb18ab3ec06286cd49e4b3a611fce1a6a442651ecf421f99", Generation:2}}, Conditions:[]api.TagEventCondition(nil)}}}}
I1209 18:17:45.773431 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:17:45.780966 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:17:45.781015 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
E1209 18:17:45.789986 20134 controller.go:65] cannot trigger a deployment for "deployment-example" because it contains unresolved images
I1209 18:17:45.797672 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:17:45.806899 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:45.806959 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:17:45.817609 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:17:45.826571 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:45.826673 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:17:45.827545 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:17:45.843262 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:17:45.843288 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:17:45.865433 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:17:45.865477 20134 thin_pool_watcher.go:77] thin_ls(1481307465) took 84.519648ms
I1209 18:17:45.866672 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:17:45.880829 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:17:45.894558 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:17:45.907237 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:17:45.917614 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:17:45.939597 20134 image_change_controller.go:37] Build image change controller detected ImageStream change
I1209 18:17:45.949214 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-debug.mount: invalid container name
I1209 18:17:45.949246 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
I1209 18:17:45.949256 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
I1209 18:17:45.949263 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-debug.mount"
I1209 18:17:45.950062 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-hugepages.mount: invalid container name
I1209 18:17:45.950073 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
I1209 18:17:45.950079 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
I1209 18:17:45.950085 20134 manager.go:843] ignoring container "/system.slice/dev-hugepages.mount"
I1209 18:17:45.950112 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper.mount: invalid container name
I1209 18:17:45.950120 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:17:45.950126 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
I1209 18:17:45.950133 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:17:45.950143 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-config.mount: invalid container name
I1209 18:17:45.958110 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
I1209 18:17:45.958133 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
I1209 18:17:45.958141 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-config.mount"
I1209 18:17:45.958154 20134 factory.go:104] Error trying to work out if we can handle /system.slice/-.mount: invalid container name
I1209 18:17:45.958162 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/-.mount"
I1209 18:17:45.958166 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
I1209 18:17:45.958171 20134 manager.go:843] ignoring container "/system.slice/-.mount"
I1209 18:17:45.958181 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-docker-netns-default.mount: invalid container name
I1209 18:17:45.958185 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
I1209 18:17:45.958214 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
I1209 18:17:45.958221 20134 manager.go:843] ignoring container "/system.slice/run-docker-netns-default.mount"
I1209 18:17:45.958241 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-user-1001.mount: invalid container name
I1209 18:17:45.958246 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-user-1001.mount"
I1209 18:17:45.960905 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-user-1001.mount", but ignoring.
I1209 18:17:45.960930 20134 manager.go:843] ignoring container "/system.slice/run-user-1001.mount"
I1209 18:17:45.960943 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-mqueue.mount: invalid container name
I1209 18:17:45.960971 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
I1209 18:17:45.960979 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
I1209 18:17:45.960985 20134 manager.go:843] ignoring container "/system.slice/dev-mqueue.mount"
I1209 18:17:45.951657 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:17:45.969426 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount: error inspecting container: Error: No such container: 31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f
I1209 18:17:45.969638 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:17:45.969720 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount", but ignoring.
I1209 18:17:45.969752 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:17:45.969790 20134 manager.go:346] Global Housekeeping(1481307465) took 131.379571ms
I1209 18:17:45.955117 20134 factory.go:181] Image stream "deployment-example" updated.
I1209 18:17:45.985678 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:17:46.008543 20134 rest.go:84] New deployment for "deployment-example" caused by []api.DeploymentCause{api.DeploymentCause{Type:"ImageChange", ImageTrigger:(*api.DeploymentCauseImageTrigger)(0xc82faa3960)}}
I1209 18:17:46.012200 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:46.029139 20134 factory.go:125] Updating deployment config "deployment-example"
I1209 18:17:46.029196 20134 controller.go:80] Reconciling test/deployment-example
I1209 18:17:46.031121 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:46.055756 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:46.067889 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:17:46.078213 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:17:46.090443 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:17:46.096442 20134 controller_utils.go:159] Controller test/deployment-example-1 either never recorded expectations, or the ttl expired.
I1209 18:17:46.096571 20134 replication_controller_utils.go:52] Updating replica count for rc: test/deployment-example-1, replicas 0->0 (need 0), fullyLabeledReplicas 0->0, readyReplicas 0->0, sequence No: 0->1
I1209 18:17:46.098059 20134 factory.go:153] Replication controller "deployment-example-1" added.
I1209 18:17:46.103071 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:17:46.115402 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:17:46.116142 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:17:46.115432 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:17:46.141015 20134 factory.go:125] Updating deployment config "deployment-example"
I1209 18:17:46.152007 20134 proxier.go:751] syncProxyRules took 382.855622ms
I1209 18:17:46.152032 20134 proxier.go:523] OnEndpointsUpdate took 950.489634ms for 2 endpoints
I1209 18:17:46.152303 20134 controller.go:297] Updated the status for "test/deployment-example" (observed generation: 2)
I1209 18:17:46.152431 20134 controller.go:80] Reconciling test/deployment-example
I1209 18:17:46.152834 20134 proxier.go:397] Received update notice: []
I1209 18:17:46.152889 20134 proxier.go:758] Syncing iptables rules
I1209 18:17:46.153286 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:17:46.178386 20134 controller.go:113] Found 0 jobs
I1209 18:17:46.178410 20134 controller.go:116] Found 0 groups
I1209 18:17:46.181342 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:17:46.194562 20134 accept.go:86] Made decision for &{DockerImage openshift/origin-deployer:v1.5.0-alpha.0 } (as: openshift/origin-deployer:v1.5.0-alpha.0, err: only images imported into the registry are allowed (openshift/origin-deployer:v1.5.0-alpha.0)): true
I1209 18:17:46.194621 20134 accept.go:114] allowed: &admission.attributesRecord{kind:unversioned.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}, namespace:"test", name:"", resource:unversioned.GroupVersionResource{Group:"", Version:"v1", Resource:"pods"}, subresource:"", operation:"CREATE", object:(*api.Pod)(0xc82ebb4280), oldObject:runtime.Object(nil), userInfo:(*user.DefaultInfo)(0xc8315fe6c0)}
I1209 18:17:46.195486 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:17:46.195878 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:46.203375 20134 factory.go:171] Replication controller "deployment-example-1" updated.
I1209 18:17:46.203593 20134 controller.go:80] Reconciling test/deployment-example
I1209 18:17:46.205003 20134 replication_controller.go:620] Finished syncing controller "test/deployment-example-1" (108.585478ms)
I1209 18:17:46.205337 20134 replication_controller.go:323] Observed updated replication controller deployment-example-1. Desired pod count change: 0->0
I1209 18:17:46.205793 20134 controller_utils.go:159] Controller test/deployment-example-1 either never recorded expectations, or the ttl expired.
I1209 18:17:46.206074 20134 replication_controller.go:620] Finished syncing controller "test/deployment-example-1" (295.009µs)
I1209 18:17:46.213972 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:46.227563 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:17:46.227629 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:17:46.227648 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:17:46.228187 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:17:46.234261 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:206: forcing resync
I1209 18:17:46.234651 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:209: forcing resync
I1209 18:17:46.234946 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:350: forcing resync
I1209 18:17:46.235210 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:302: forcing resync
I1209 18:17:46.235538 20134 image_change_controller.go:37] Build image change controller detected ImageStream change
I1209 18:17:46.238106 20134 admission.go:77] getting security context constraints for pod deployment-example-1-deploy (generate: ) in namespace test with user info &{system:serviceaccount:openshift-infra:deploymentconfig-controller 7e4deb62-be3b-11e6-8665-525400560f2f [system:serviceaccounts system:serviceaccounts:openshift-infra system:authenticated] map[]}
I1209 18:17:46.238188 20134 admission.go:88] getting security context constraints for pod deployment-example-1-deploy (generate: ) with service account info &{system:serviceaccount:test:deployer [system:serviceaccounts system:serviceaccounts:test] map[]}
I1209 18:17:46.240216 20134 matcher.go:297] got preallocated values for min: 1000040000, max: 1000049999 for uid range in namespace test
I1209 18:17:46.240309 20134 matcher.go:310] got preallocated value for level: s0:c6,c5 for selinux options in namespace test
I1209 18:17:46.240396 20134 matcher.go:340] got preallocated value for groups: 1000040000/10000 in namespace test
I1209 18:17:46.240491 20134 admission.go:149] validating pod deployment-example-1-deploy (generate: ) against providers restricted
I1209 18:17:46.240619 20134 admission.go:116] pod deployment-example-1-deploy (generate: ) validated against provider restricted
I1209 18:17:46.246618 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:17:46.255530 20134 reflector.go:284] github.com/openshift/origin/pkg/project/controller/factory.go:36: forcing resync
I1209 18:17:46.257162 20134 replication_controller.go:256] No controllers found for pod deployment-example-1-deploy, replication manager will avoid syncing
I1209 18:17:46.257197 20134 replica_set.go:330] Pod deployment-example-1-deploy created: &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"520", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/deployment.name":"deployment-example-1", "openshift.io/scc":"restricted"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc829893a40), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc829893a70), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc82cd939d0), ActiveDeadlineSeconds:(*int64)(0xc82cd939d8), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"", SecurityContext:(*api.PodSecurityContext)(0xc8277dd3c0), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Pending", Conditions:[]api.PodCondition(nil), Message:"", Reason:"", HostIP:"", PodIP:"", StartTime:(*unversioned.Time)(nil), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus(nil)}}.
I1209 18:17:46.257881 20134 replica_set.go:238] No ReplicaSets found for pod deployment-example-1-deploy, ReplicaSet controller will avoid syncing
I1209 18:17:46.258489 20134 jobcontroller.go:166] No jobs found for pod deployment-example-1-deploy, job controller will avoid syncing
I1209 18:17:46.258525 20134 daemoncontroller.go:341] Pod deployment-example-1-deploy added.
I1209 18:17:46.258995 20134 daemoncontroller.go:293] No daemon sets found for pod deployment-example-1-deploy, daemon set controller will avoid syncing
I1209 18:17:46.259042 20134 disruption.go:295] addPod called on pod "deployment-example-1-deploy"
I1209 18:17:46.259779 20134 disruption.go:361] No PodDisruptionBudgets found for pod deployment-example-1-deploy, PodDisruptionBudget controller will avoid syncing.
I1209 18:17:46.259802 20134 disruption.go:298] No matching pdb for pod "deployment-example-1-deploy"
I1209 18:17:46.260518 20134 pet_set.go:159] Pod deployment-example-1-deploy created, labels: map[openshift.io/deployer-pod-for.name:deployment-example-1]
I1209 18:17:46.260816 20134 pet_set.go:238] No PetSets found for pod deployment-example-1-deploy, PetSet controller will avoid syncing
I1209 18:17:46.261530 20134 deployment_controller.go:351] Pod deployment-example-1-deploy created: &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"520", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/scc":"restricted", "openshift.io/deployment.name":"deployment-example-1"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc8262168a0), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc8262168d0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc824e74d10), ActiveDeadlineSeconds:(*int64)(0xc824e74d18), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"", SecurityContext:(*api.PodSecurityContext)(0xc8277ddb40), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Pending", Conditions:[]api.PodCondition(nil), Message:"", Reason:"", HostIP:"", PodIP:"", StartTime:(*unversioned.Time)(nil), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus(nil)}}.
I1209 18:17:46.261879 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:46.262699 20134 factory.go:422] About to try and schedule pod deployment-example-1-deploy
I1209 18:17:46.262717 20134 scheduler.go:96] Attempting to schedule pod: test/deployment-example-1-deploy
I1209 18:17:46.264076 20134 factory.go:582] Attempting to bind deployment-example-1-deploy to localhost
I1209 18:17:46.228218 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:90: forcing resync
I1209 18:17:46.266480 20134 controller.go:128] Created deployer pod deployment-example-1-deploy for deployment test/deployment-example-1
I1209 18:17:46.290161 20134 config.go:281] Setting pods for source api
I1209 18:17:46.290833 20134 config.go:397] Receiving a new pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:46.291145 20134 kubelet.go:2293] SyncLoop (ADD, "api"): "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:46.291712 20134 kubelet.go:2761] Generating status for "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:46.292024 20134 kubelet.go:2726] pod waiting > 0, pending
I1209 18:17:46.292672 20134 volume_manager.go:324] Waiting for volumes to attach and mount for pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:46.293127 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:17:46.291905 20134 deployment_controller.go:368] Pod deployment-example-1-deploy updated &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"520", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/scc":"restricted", "openshift.io/deployment.name":"deployment-example-1"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc8262168a0), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc8262168d0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc824e74d10), ActiveDeadlineSeconds:(*int64)(0xc824e74d18), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"", SecurityContext:(*api.PodSecurityContext)(0xc8277ddb40), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Pending", Conditions:[]api.PodCondition(nil), Message:"", Reason:"", HostIP:"", PodIP:"", StartTime:(*unversioned.Time)(nil), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus(nil)}} -> &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"521", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/scc":"restricted", "openshift.io/deployment.name":"deployment-example-1"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), 
GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc826761c50), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc826761cb0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc82e4bf510), 
ActiveDeadlineSeconds:(*int64)(0xc82e4bf518), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"localhost", SecurityContext:(*api.PodSecurityContext)(0xc82c3d8800), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Pending", Conditions:[]api.PodCondition{api.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"", PodIP:"", StartTime:(*unversioned.Time)(nil), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus(nil)}}.
I1209 18:17:46.299900 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:46.300006 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:46.300209 20134 replication_controller.go:379] Pod deployment-example-1-deploy updated, objectMeta {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:520 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:} -> {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:521 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:}.
I1209 18:17:46.300592 20134 replication_controller.go:256] No controllers found for pod deployment-example-1-deploy, replication manager will avoid syncing
I1209 18:17:46.300644 20134 replica_set.go:362] Pod deployment-example-1-deploy updated, objectMeta {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:520 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:} -> {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:521 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:}.
I1209 18:17:46.300817 20134 replica_set.go:238] No ReplicaSets found for pod deployment-example-1-deploy, ReplicaSet controller will avoid syncing
I1209 18:17:46.301438 20134 jobcontroller.go:166] No jobs found for pod deployment-example-1-deploy, job controller will avoid syncing
I1209 18:17:46.301507 20134 daemoncontroller.go:364] Pod deployment-example-1-deploy updated.
I1209 18:17:46.301596 20134 daemoncontroller.go:293] No daemon sets found for pod deployment-example-1-deploy, daemon set controller will avoid syncing
I1209 18:17:46.305165 20134 disruption.go:307] updatePod called on pod "deployment-example-1-deploy"
I1209 18:17:46.305285 20134 disruption.go:361] No PodDisruptionBudgets found for pod deployment-example-1-deploy, PodDisruptionBudget controller will avoid syncing.
I1209 18:17:46.305313 20134 disruption.go:310] No matching pdb for pod "deployment-example-1-deploy"
I1209 18:17:46.305466 20134 pet_set.go:238] No PetSets found for pod deployment-example-1-deploy, PetSet controller will avoid syncing
I1209 18:17:46.308836 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:17:46.319663 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:17:46.331427 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:17:46.331459 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:17:46.344923 20134 controller.go:155] Detected existing deployer pod deployment-example-1-deploy for deployment test/deployment-example-1
I1209 18:17:46.357103 20134 deployment_controller.go:368] Pod deployment-example-1-deploy updated &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"521", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/scc":"restricted", "openshift.io/deployment.name":"deployment-example-1"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc826761c50), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc826761cb0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc82e4bf510), ActiveDeadlineSeconds:(*int64)(0xc82e4bf518), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"localhost", SecurityContext:(*api.PodSecurityContext)(0xc82c3d8800), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Pending", Conditions:[]api.PodCondition{api.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"", PodIP:"", StartTime:(*unversioned.Time)(nil), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus(nil)}} -> &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"523", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/deployment.name":"deployment-example-1", "openshift.io/scc":"restricted"}, 
OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc8278ca180), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc8278ca1b0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc82d3f83f0), ActiveDeadlineSeconds:(*int64)(0xc82d3f83f8), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"localhost", SecurityContext:(*api.PodSecurityContext)(0xc82ed3d4c0), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Pending", Conditions:[]api.PodCondition{api.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}, api.PodCondition{Type:"Ready", Status:"False", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [deployment]"}, api.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"192.168.121.18", PodIP:"", StartTime:(*unversioned.Time)(0xc82eb15380), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus{api.ContainerStatus{Name:"deployment", State:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(0xc82eb15400), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(nil)}, LastTerminationState:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(nil), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"openshift/origin-deployer:v1.5.0-alpha.0", ImageID:"", ContainerID:""}}}}.
I1209 18:17:46.357745 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:46.357765 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:46.358973 20134 config.go:281] Setting pods for source api
I1209 18:17:46.359626 20134 kubelet.go:2306] SyncLoop (RECONCILE, "api"): "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:46.360069 20134 replication_controller.go:379] Pod deployment-example-1-deploy updated, objectMeta {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:521 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:} -> {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:523 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/scc:restricted openshift.io/deployment.name:deployment-example-1] OwnerReferences:[] Finalizers:[] ClusterName:}.
I1209 18:17:46.360306 20134 replication_controller.go:256] No controllers found for pod deployment-example-1-deploy, replication manager will avoid syncing
I1209 18:17:46.360931 20134 jobcontroller.go:166] No jobs found for pod deployment-example-1-deploy, job controller will avoid syncing
I1209 18:17:46.360961 20134 daemoncontroller.go:364] Pod deployment-example-1-deploy updated.
I1209 18:17:46.361014 20134 daemoncontroller.go:293] No daemon sets found for pod deployment-example-1-deploy, daemon set controller will avoid syncing
I1209 18:17:46.361517 20134 disruption.go:307] updatePod called on pod "deployment-example-1-deploy"
I1209 18:17:46.361542 20134 disruption.go:361] No PodDisruptionBudgets found for pod deployment-example-1-deploy, PodDisruptionBudget controller will avoid syncing.
I1209 18:17:46.362001 20134 disruption.go:310] No matching pdb for pod "deployment-example-1-deploy"
I1209 18:17:46.362304 20134 pet_set.go:238] No PetSets found for pod deployment-example-1-deploy, PetSet controller will avoid syncing
I1209 18:17:46.362619 20134 replica_set.go:362] Pod deployment-example-1-deploy updated, objectMeta {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:521 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:} -> {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:523 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/scc:restricted openshift.io/deployment.name:deployment-example-1] OwnerReferences:[] Finalizers:[] ClusterName:}.
I1209 18:17:46.362946 20134 replica_set.go:238] No ReplicaSets found for pod deployment-example-1-deploy, ReplicaSet controller will avoid syncing
I1209 18:17:46.363254 20134 reconciler.go:225] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/secret/c8ca050c-be3b-11e6-8665-525400560f2f-deployer-token-gvtbo" (spec.Name: "deployer-token-gvtbo") pod "c8ca050c-be3b-11e6-8665-525400560f2f" (UID: "c8ca050c-be3b-11e6-8665-525400560f2f")
I1209 18:17:46.365039 20134 status_manager.go:425] Status for pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)" updated successfully: {status:{Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason: Message:} {Type:Ready Status:False LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason:ContainersNotReady Message:containers with unready status: [deployment]} {Type:PodScheduled Status:True LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason: Message:}] Message: Reason: HostIP:192.168.121.18 PodIP: StartTime:0xc82ccab180 InitContainerStatuses:[] ContainerStatuses:[{Name:deployment State:{Waiting:0xc82ccab160 Running:<nil> Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:openshift/origin-deployer:v1.5.0-alpha.0 ImageID: ContainerID:}]} version:1 podName:deployment-example-1-deploy podNamespace:test}
I1209 18:17:46.378169 20134 proxier.go:751] syncProxyRules took 225.277693ms
I1209 18:17:46.378197 20134 proxier.go:391] OnServiceUpdate took 225.340314ms for 2 services
I1209 18:17:46.381070 20134 replication_controller.go:323] Observed updated replication controller deployment-example-1. Desired pod count change: 0->0
I1209 18:17:46.381249 20134 controller.go:225] Updated deployment test/deployment-example-1 status from New to Pending (scale: 0)
I1209 18:17:46.381351 20134 controller.go:155] Detected existing deployer pod deployment-example-1-deploy for deployment test/deployment-example-1
I1209 18:17:46.381803 20134 controller_utils.go:159] Controller test/deployment-example-1 either never recorded expectations, or the ttl expired.
I1209 18:17:46.382095 20134 replication_controller.go:620] Finished syncing controller "test/deployment-example-1" (303.622µs)
I1209 18:17:46.383032 20134 factory.go:171] Replication controller "deployment-example-1" updated.
I1209 18:17:46.383116 20134 controller.go:80] Reconciling test/deployment-example
I1209 18:17:46.392722 20134 factory.go:125] Updating deployment config "deployment-example"
I1209 18:17:46.394482 20134 controller.go:297] Updated the status for "test/deployment-example" (observed generation: 2)
I1209 18:17:46.394519 20134 controller.go:80] Reconciling test/deployment-example
I1209 18:17:46.400921 20134 reflector.go:284] github.com/openshift/origin/pkg/service/controller/servingcert/secret_creating_controller.go:118: forcing resync
I1209 18:17:46.401024 20134 secret_creating_controller.go:103] Updating service deployment-example
I1209 18:17:46.401140 20134 secret_creating_controller.go:103] Updating service kubernetes
I1209 18:17:46.457841 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:17:46.458387 20134 interface.go:93] Interface eth0 is up
I1209 18:17:46.458555 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:17:46.458630 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:17:46.458657 20134 interface.go:114] IP found 192.168.121.18
I1209 18:17:46.458705 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:17:46.458750 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:17:46.464031 20134 reconciler.go:299] MountVolume operation started for volume "kubernetes.io/secret/c8ca050c-be3b-11e6-8665-525400560f2f-deployer-token-gvtbo" (spec.Name: "deployer-token-gvtbo") to pod "c8ca050c-be3b-11e6-8665-525400560f2f" (UID: "c8ca050c-be3b-11e6-8665-525400560f2f").
I1209 18:17:46.464674 20134 secret.go:164] Setting up volume deployer-token-gvtbo for pod c8ca050c-be3b-11e6-8665-525400560f2f at /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo
I1209 18:17:46.466443 20134 nsenter_mount.go:175] findmnt: directory /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo does not exist
I1209 18:17:46.467033 20134 empty_dir_linux.go:39] Determining mount medium of /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo
I1209 18:17:46.467067 20134 nsenter_mount.go:183] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target,fstype --noheadings --first-only --target /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo]
I1209 18:17:46.493236 20134 nsenter_mount.go:196] IsLikelyNotMountPoint findmnt output for path /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo: /:
I1209 18:17:46.493319 20134 nsenter_mount.go:202] IsLikelyNotMountPoint: /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo is not a mount point
I1209 18:17:46.493413 20134 empty_dir_linux.go:49] Statfs_t of /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo: {Type:61267 Bsize:4096 Blocks:10288208 Bfree:9826418 Bavail:9298047 Files:2621440 Ffree:2591515 Fsid:{X__val:[633347183 -1919301888]} Namelen:255 Frsize:4096 Flags:4128 Spare:[0 0 0 0]}
I1209 18:17:46.493922 20134 empty_dir.go:258] pod c8ca050c-be3b-11e6-8665-525400560f2f: mounting tmpfs for volume wrapped_deployer-token-gvtbo with opts [rootcontext="system_u:object_r:container_file_t:s0"]
I1209 18:17:46.493965 20134 nsenter_mount.go:114] nsenter Mounting tmpfs /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo tmpfs [rootcontext="system_u:object_r:container_file_t:s0"]
I1209 18:17:46.494017 20134 nsenter_mount.go:117] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t tmpfs -o rootcontext="system_u:object_r:container_file_t:s0" tmpfs /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo]
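The nsenter_mount lines above show the containerized kubelet shelling out into the host's mount namespace: first findmnt to decide whether the target directory is already a mount point, then mount -t tmpfs with an SELinux rootcontext for the secret volume. A rough, hedged reproduction of those two commands with os/exec; it assumes the same /rootfs/proc/1/ns/mnt layout as in the log, requires root, and uses a placeholder target path. The log's findmnt also requests the fstype column; it is dropped here to keep the comparison simple.

package main

import (
    "log"
    "os/exec"
    "strings"
)

// Placeholder for the per-pod secret volume directory seen in the log.
const target = "/var/lib/origin/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~secret/deployer-token"

// isLikelyNotMountPoint asks findmnt (in the host mount namespace) which mount owns
// path; if that mount's target is not the path itself, nothing is mounted there yet.
func isLikelyNotMountPoint(path string) (bool, error) {
    out, err := exec.Command("nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--",
        "/bin/findmnt", "-o", "target", "--noheadings", "--first-only", "--target", path).CombinedOutput()
    if err != nil {
        return true, err
    }
    return strings.TrimSpace(string(out)) != path, nil
}

func main() {
    notMnt, err := isLikelyNotMountPoint(target)
    if err != nil {
        log.Printf("findmnt probe failed (treating as not mounted): %v", err)
    }
    if !notMnt {
        log.Fatalf("%s is already a mount point", target)
    }
    // Mount a tmpfs at the target with the SELinux rootcontext option seen in the log.
    mount := exec.Command("nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--",
        "/bin/mount", "-t", "tmpfs", "-o", `rootcontext="system_u:object_r:container_file_t:s0"`, "tmpfs", target)
    if out, err := mount.CombinedOutput(); err != nil {
        log.Fatalf("mount failed: %v: %s", err, out)
    }
}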
I1209 18:17:46.519705 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:17:46.534086 20134 secret.go:191] Received secret test/deployer-token-gvtbo containing (4) pieces of data, 4102 total bytes
I1209 18:17:46.534767 20134 atomic_writer.go:321] /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo: current paths: []
I1209 18:17:46.534832 20134 atomic_writer.go:333] /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo: new paths: [ca.crt namespace service-ca.crt token]
I1209 18:17:46.535396 20134 atomic_writer.go:336] /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo: paths to remove: map[]
I1209 18:17:46.535884 20134 atomic_writer.go:144] pod test/deployment-example-1-deploy volume deployer-token-gvtbo: write required for target directory /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo
I1209 18:17:46.536693 20134 atomic_writer.go:159] pod test/deployment-example-1-deploy volume deployer-token-gvtbo: performed write of new data to ts data directory: /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo/..129812_09_12_18_17_46.626389800
I1209 18:17:46.537839 20134 operation_executor.go:802] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/c8ca050c-be3b-11e6-8665-525400560f2f-deployer-token-gvtbo" (spec.Name: "deployer-token-gvtbo") pod "c8ca050c-be3b-11e6-8665-525400560f2f" (UID: "c8ca050c-be3b-11e6-8665-525400560f2f").
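The secret and atomic_writer lines above fetch the token payload, compare current and new file paths, and write everything into a fresh timestamped data directory before it becomes visible, so a reader never observes a partially written secret. A hedged sketch of that write-then-swap idea; the "..data" symlink layout and the names below are illustrative rather than the kubelet's exact on-disk format:

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "time"
)

// atomicWrite writes payload into a new timestamped directory under targetDir, then
// atomically repoints a "..data" symlink at it via rename, so consumers observe either
// the complete old payload or the complete new one.
func atomicWrite(targetDir string, payload map[string][]byte) error {
    tsDir := filepath.Join(targetDir, ".."+time.Now().Format("2006_01_02_15_04_05.000000000"))
    if err := os.MkdirAll(tsDir, 0755); err != nil {
        return err
    }
    for name, data := range payload {
        if err := os.WriteFile(filepath.Join(tsDir, name), data, 0644); err != nil {
            return err
        }
    }
    tmpLink := filepath.Join(targetDir, "..data_tmp")
    _ = os.Remove(tmpLink) // ignore the error if the temporary link does not exist yet
    if err := os.Symlink(filepath.Base(tsDir), tmpLink); err != nil {
        return err
    }
    // rename(2) replaces an existing "..data" link atomically.
    return os.Rename(tmpLink, filepath.Join(targetDir, "..data"))
}

func main() {
    dir, _ := os.MkdirTemp("", "secret-vol")
    err := atomicWrite(dir, map[string][]byte{
        "token":          []byte("example-token"),
        "namespace":      []byte("test"),
        "ca.crt":         []byte("example"),
        "service-ca.crt": []byte("example"),
    })
    fmt.Println("atomic write:", err)
}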
I1209 18:17:46.596845 20134 volume_manager.go:353] All volumes are attached and mounted for pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:46.602471 20134 docker_manager.go:1897] Syncing Pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)": &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"521", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/deployment.name":"deployment-example-1", "openshift.io/scc":"restricted", "kubernetes.io/config.source":"api", "kubernetes.io/config.seen":"2016-12-09T18:17:46.290848307Z"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc82af9c270), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc82af9c2a0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc82e56d2e0), ActiveDeadlineSeconds:(*int64)(0xc82e56d2e8), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"localhost", SecurityContext:(*api.PodSecurityContext)(0xc82b74a040), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Pending", Conditions:[]api.PodCondition{api.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"", PodIP:"", StartTime:(*unversioned.Time)(nil), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus(nil)}}
I1209 18:17:46.603094 20134 docker_manager.go:1916] Need to restart pod infra container for "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)" because it is not found
I1209 18:17:46.603483 20134 docker_manager.go:1962] Container {Name:deployment Image:openshift/origin-deployer:v1.5.0-alpha.0 Command:[] Args:[] WorkingDir: Ports:[] Env:[{Name:KUBERNETES_MASTER Value:https://192.168.121.18:8443 ValueFrom:<nil>} {Name:OPENSHIFT_MASTER Value:https://192.168.121.18:8443 ValueFrom:<nil>} {Name:BEARER_TOKEN_FILE Value:/var/run/secrets/kubernetes.io/serviceaccount/token ValueFrom:<nil>} {Name:OPENSHIFT_CA_DATA Value:-----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4
MTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX
DcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM
gIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao
v27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd
7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI
ymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT
S3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t
q3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77
kh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM
Gv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn
VVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0
-----END CERTIFICATE-----
ValueFrom:<nil>} {Name:OPENSHIFT_DEPLOYMENT_NAME Value:deployment-example-1 ValueFrom:<nil>} {Name:OPENSHIFT_DEPLOYMENT_NAMESPACE Value:test ValueFrom:<nil>}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:deployer-token-gvtbo ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:0xc82af9c2a0 Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
I1209 18:17:46.603719 20134 docker_manager.go:2055] Got container changes for pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)": {StartInfraContainer:true InfraChanged:false InfraContainerId: InitFailed:false InitContainersToKeep:map[] ContainersToStart:map[0:Container {Name:deployment Image:openshift/origin-deployer:v1.5.0-alpha.0 Command:[] Args:[] WorkingDir: Ports:[] Env:[{Name:KUBERNETES_MASTER Value:https://192.168.121.18:8443 ValueFrom:<nil>} {Name:OPENSHIFT_MASTER Value:https://192.168.121.18:8443 ValueFrom:<nil>} {Name:BEARER_TOKEN_FILE Value:/var/run/secrets/kubernetes.io/serviceaccount/token ValueFrom:<nil>} {Name:OPENSHIFT_CA_DATA Value:-----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4
MTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX
DcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM
gIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao
v27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd
7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI
ymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT
S3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t
q3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77
kh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM
Gv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn
VVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0
-----END CERTIFICATE-----
ValueFrom:<nil>} {Name:OPENSHIFT_DEPLOYMENT_NAME Value:deployment-example-1 ValueFrom:<nil>} {Name:OPENSHIFT_DEPLOYMENT_NAMESPACE Value:test ValueFrom:<nil>}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:deployer-token-gvtbo ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:0xc82af9c2a0 Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.] ContainersToKeep:map[]}
I1209 18:17:46.603823 20134 docker_manager.go:2064] Killing Infra Container for "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)", will start new one
I1209 18:17:46.603848 20134 docker_manager.go:2122] Creating pod infra container for "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:46.612546 20134 docker_manager.go:1663] Generating ref for container POD: &api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", APIVersion:"v1", ResourceVersion:"521", FieldPath:"implicitly required container POD"}
I1209 18:17:46.612719 20134 kubelet.go:1211] container: test/deployment-example-1-deploy/POD podIP: "" creating hosts mount: false
I1209 18:17:46.615456 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:46 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:17:46.680754 20134 docker_manager.go:742] Container test/deployment-example-1-deploy/POD: setting entrypoint "[]" and command "[]"
I1209 18:17:46.686888 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:46.693244 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:46.699365 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:46.700171 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:46.702049 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:46.775548 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:46.775578 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:46.805022 20134 eviction_manager.go:204] eviction manager: no resources are starved
E1209 18:17:46.933692 20134 docker_manager.go:761] Logging security options: {key:seccomp value:unconfined msg:}
I1209 18:17:47.184030 20134 factory.go:111] Using factory "docker" for container "/system.slice/docker-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c.scope"
E1209 18:17:47.185040 20134 docker_manager.go:1711] Failed to create symbolic link to the log file of pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)" container "POD": symlink /var/log/containers/deployment-example-1-deploy_test_POD-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c.log: no such file or directory
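The symlink failure above is the kubelet trying to link the container's JSON log into /var/log/containers; a "no such file or directory" error from symlink typically means the parent directory does not exist yet. A small, hedged illustration of the missing step, with placeholder paths:

package main

import (
    "log"
    "os"
    "path/filepath"
)

func main() {
    // Placeholder source and destination; real names embed the pod, namespace and container ID.
    containerLog := "/var/lib/docker/containers/<container-id>/<container-id>-json.log"
    symlink := "/var/log/containers/deployment-example-1-deploy_test_POD-<container-id>.log"

    // Ensuring /var/log/containers exists first avoids the ENOENT seen in the log line above.
    if err := os.MkdirAll(filepath.Dir(symlink), 0755); err != nil {
        log.Fatal(err)
    }
    if err := os.Symlink(containerLog, symlink); err != nil {
        log.Fatal(err)
    }
}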
I1209 18:17:47.185096 20134 docker_manager.go:1802] DNS ResolvConfPath exists: /var/lib/docker/containers/8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c/resolv.conf. Will attempt to add ndots option: options ndots:5
I1209 18:17:47.185304 20134 docker_manager.go:2136] Calling network plugin kubernetes.io/no-op to setup pod for deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)
I1209 18:17:47.215516 20134 hairpin.go:110] Enabling hairpin on interface vetheed5498
I1209 18:17:47.216168 20134 docker_manager.go:2177] Determined pod ip after infra change: "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)": "172.17.0.2"
I1209 18:17:47.216293 20134 docker_manager.go:2262] Creating container &{Name:deployment Image:openshift/origin-deployer:v1.5.0-alpha.0 Command:[] Args:[] WorkingDir: Ports:[] Env:[{Name:KUBERNETES_MASTER Value:https://192.168.121.18:8443 ValueFrom:<nil>} {Name:OPENSHIFT_MASTER Value:https://192.168.121.18:8443 ValueFrom:<nil>} {Name:BEARER_TOKEN_FILE Value:/var/run/secrets/kubernetes.io/serviceaccount/token ValueFrom:<nil>} {Name:OPENSHIFT_CA_DATA Value:-----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
c2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4
MTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX
DcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM
gIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao
v27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd
7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI
ymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x
AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
SIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT
S3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t
q3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77
kh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM
Gv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn
VVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0
-----END CERTIFICATE-----
ValueFrom:<nil>} {Name:OPENSHIFT_DEPLOYMENT_NAME Value:deployment-example-1 ValueFrom:<nil>} {Name:OPENSHIFT_DEPLOYMENT_NAMESPACE Value:test ValueFrom:<nil>}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:deployer-token-gvtbo ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:<nil> ReadinessProbe:<nil> Lifecycle:<nil> TerminationMessagePath:/dev/termination-log ImagePullPolicy:IfNotPresent SecurityContext:0xc82af9c2a0 Stdin:false StdinOnce:false TTY:false} in pod deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)
I1209 18:17:47.218837 20134 docker_manager.go:1663] Generating ref for container deployment: &api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", APIVersion:"v1", ResourceVersion:"521", FieldPath:"spec.containers{deployment}"}
I1209 18:17:47.218932 20134 kubelet.go:1211] container: test/deployment-example-1-deploy/deployment podIP: "172.17.0.2" creating hosts mount: true
I1209 18:17:47.219905 20134 server.go:608] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", APIVersion:"v1", ResourceVersion:"521", FieldPath:"spec.containers{deployment}"}): type: 'Normal' reason: 'Pulled' Container image "openshift/origin-deployer:v1.5.0-alpha.0" already present on machine
I1209 18:17:47.242708 20134 manager.go:874] Added container: "/system.slice/docker-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c.scope" (aliases: [k8s_POD.8e33df32_deployment-example-1-deploy_test_c8ca050c-be3b-11e6-8665-525400560f2f_b66d9549 8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c], namespace: "docker")
I1209 18:17:47.243103 20134 handler.go:325] Added event &{/system.slice/docker-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c.scope 2016-12-09 18:17:47.029252844 +0000 UTC containerCreation {<nil>}}
I1209 18:17:47.243377 20134 container.go:407] Start housekeeping for container "/system.slice/docker-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c.scope"
I1209 18:17:47.253930 20134 handler.go:300] unable to get fs usage from thin pool for device 18: no cached value for usage of device 18
I1209 18:17:47.254892 20134 factory.go:111] Using factory "docker" for container "/system.slice/var-lib-docker-containers-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c-shm.mount"
I1209 18:17:47.288791 20134 docker_manager.go:742] Container test/deployment-example-1-deploy/deployment: setting entrypoint "[]" and command "[]"
I1209 18:17:47.300864 20134 manager.go:874] Added container: "/system.slice/var-lib-docker-containers-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c-shm.mount" (aliases: [k8s_POD.8e33df32_deployment-example-1-deploy_test_c8ca050c-be3b-11e6-8665-525400560f2f_b66d9549 8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c], namespace: "docker")
I1209 18:17:47.301085 20134 handler.go:325] Added event &{/system.slice/var-lib-docker-containers-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c-shm.mount 2016-12-09 18:17:47.040252844 +0000 UTC containerCreation {<nil>}}
I1209 18:17:47.301168 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-origin-openshift.local.volumes-pods-c8ca050c\x2dbe3b\x2d11e6\x2d8665\x2d525400560f2f-volumes-kubernetes.io\x7esecret-deployer\x2dtoken\x2dgvtbo.mount: invalid container name
I1209 18:17:47.301176 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-origin-openshift.local.volumes-pods-c8ca050c\\x2dbe3b\\x2d11e6\\x2d8665\\x2d525400560f2f-volumes-kubernetes.io\\x7esecret-deployer\\x2dtoken\\x2dgvtbo.mount"
I1209 18:17:47.301190 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-origin-openshift.local.volumes-pods-c8ca050c\\x2dbe3b\\x2d11e6\\x2d8665\\x2d525400560f2f-volumes-kubernetes.io\\x7esecret-deployer\\x2dtoken\\x2dgvtbo.mount", but ignoring.
I1209 18:17:47.301203 20134 manager.go:843] ignoring container "/system.slice/var-lib-origin-openshift.local.volumes-pods-c8ca050c\\x2dbe3b\\x2d11e6\\x2d8665\\x2d525400560f2f-volumes-kubernetes.io\\x7esecret-deployer\\x2dtoken\\x2dgvtbo.mount"
I1209 18:17:47.301361 20134 container.go:407] Start housekeeping for container "/system.slice/var-lib-docker-containers-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c-shm.mount"
I1209 18:17:47.302024 20134 handler.go:300] unable to get fs usage from thin pool for device 18: no cached value for usage of device 18
I1209 18:17:47.312375 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper-mnt-bdd1f18d5018c96e2fcdcb01c6ecbe5165a4ab9113a3cc512107d48900c0a892.mount: error inspecting container: Error: No such container: bdd1f18d5018c96e2fcdcb01c6ecbe5165a4ab9113a3cc512107d48900c0a892
I1209 18:17:47.312892 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-bdd1f18d5018c96e2fcdcb01c6ecbe5165a4ab9113a3cc512107d48900c0a892.mount"
I1209 18:17:47.313370 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-bdd1f18d5018c96e2fcdcb01c6ecbe5165a4ab9113a3cc512107d48900c0a892.mount", but ignoring.
I1209 18:17:47.313663 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-bdd1f18d5018c96e2fcdcb01c6ecbe5165a4ab9113a3cc512107d48900c0a892.mount"
E1209 18:17:47.567809 20134 docker_manager.go:761] Logging security options: {key:seccomp value:unconfined msg:}
I1209 18:17:47.570497 20134 server.go:608] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", APIVersion:"v1", ResourceVersion:"521", FieldPath:"spec.containers{deployment}"}): type: 'Normal' reason: 'Created' Created container with docker id 94f86506bd76; Security:[seccomp=unconfined]
I1209 18:17:47.702215 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:47.767133 20134 factory.go:111] Using factory "docker" for container "/system.slice/docker-94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04.scope"
I1209 18:17:47.767184 20134 server.go:608] Event(api.ObjectReference{Kind:"Pod", Namespace:"test", Name:"deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", APIVersion:"v1", ResourceVersion:"521", FieldPath:"spec.containers{deployment}"}): type: 'Normal' reason: 'Started' Started container with docker id 94f86506bd76
I1209 18:17:47.767929 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:47.768025 20134 generic.go:141] GenericPLEG: c8ca050c-be3b-11e6-8665-525400560f2f/94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04: non-existent -> running
I1209 18:17:47.768089 20134 generic.go:141] GenericPLEG: c8ca050c-be3b-11e6-8665-525400560f2f/8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c: non-existent -> running
E1209 18:17:47.776456 20134 docker_manager.go:1711] Failed to create symbolic link to the log file of pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)" container "deployment": symlink /var/log/containers/deployment-example-1-deploy_test_deployment-94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04.log: no such file or directory
I1209 18:17:47.777160 20134 docker_manager.go:373] Container inspect result: {ContainerJSONBase:0xc82fc5e160 Mounts:[{Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo Destination:/var/run/secrets/kubernetes.io/serviceaccount Driver: Mode:ro,Z RW:false Propagation:rprivate} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/etc-hosts Destination:/etc/hosts Driver: Mode: RW:true Propagation:rprivate} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/containers/deployment/e821506a Destination:/dev/termination-log Driver: Mode: RW:true Propagation:rprivate}] Config:0xc830eacc60 NetworkSettings:0xc831139800}
I1209 18:17:47.780494 20134 docker_manager.go:373] Container inspect result: {ContainerJSONBase:0xc83113eb00 Mounts:[] Config:0xc8268bd320 NetworkSettings:0xc82de1ab00}
I1209 18:17:47.781714 20134 generic.go:327] PLEG: Write status for deployment-example-1-deploy/test: &container.PodStatus{ID:"c8ca050c-be3b-11e6-8665-525400560f2f", Name:"deployment-example-1-deploy", Namespace:"test", IP:"172.17.0.2", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc832471260), (*container.ContainerStatus)(0xc8323e1420)}} (err: <nil>)
I1209 18:17:47.781778 20134 kubelet.go:2328] SyncLoop (PLEG): "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)", event: &pleg.PodLifecycleEvent{ID:"c8ca050c-be3b-11e6-8665-525400560f2f", Type:"ContainerStarted", Data:"94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04"}
I1209 18:17:47.782199 20134 kubelet.go:2328] SyncLoop (PLEG): "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)", event: &pleg.PodLifecycleEvent{ID:"c8ca050c-be3b-11e6-8665-525400560f2f", Type:"ContainerStarted", Data:"8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c"}
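The GenericPLEG lines above come from periodic relisting: the pod lifecycle event generator lists all containers, compares each one's state with the previous relist, and emits one event per transition ("non-existent -> running" here, "running -> exited" a second later). A minimal, self-contained Go sketch of that diffing idea; the types and state names are illustrative:

package main

import "fmt"

type state string

const (
    nonExistent state = "non-existent"
    running     state = "running"
    exited      state = "exited"
)

type event struct {
    id       string
    from, to state
}

// relist diffs the previous snapshot against the current one and returns one event per
// container whose state changed; containers that vanished are treated as exited.
func relist(prev, cur map[string]state) []event {
    var events []event
    for id, curState := range cur {
        prevState, ok := prev[id]
        if !ok {
            prevState = nonExistent
        }
        if prevState != curState {
            events = append(events, event{id, prevState, curState})
        }
    }
    for id, prevState := range prev {
        if _, ok := cur[id]; !ok && prevState != exited {
            events = append(events, event{id, prevState, exited})
        }
    }
    return events
}

func main() {
    prev := map[string]state{}
    cur := map[string]state{"94f86506bd76": running, "8b611012de8e": running}
    for _, e := range relist(prev, cur) {
        fmt.Printf("GenericPLEG: %s: %s -> %s\n", e.id, e.from, e.to)
    }
}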
I1209 18:17:47.821170 20134 manager.go:874] Added container: "/system.slice/docker-94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04.scope" (aliases: [k8s_deployment.62f3c0aa_deployment-example-1-deploy_test_c8ca050c-be3b-11e6-8665-525400560f2f_e821506a 94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04], namespace: "docker")
I1209 18:17:47.821636 20134 handler.go:325] Added event &{/system.slice/docker-94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04.scope 2016-12-09 18:17:47.641252844 +0000 UTC containerCreation {<nil>}}
I1209 18:17:47.821760 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-docker-netns-611ff718c197.mount: invalid container name
I1209 18:17:47.821790 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-611ff718c197.mount"
I1209 18:17:47.821837 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-611ff718c197.mount", but ignoring.
I1209 18:17:47.821864 20134 manager.go:843] ignoring container "/system.slice/run-docker-netns-611ff718c197.mount"
I1209 18:17:47.821919 20134 container.go:407] Start housekeeping for container "/system.slice/docker-94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04.scope"
I1209 18:17:47.822965 20134 handler.go:300] unable to get fs usage from thin pool for device 20: no cached value for usage of device 20
I1209 18:17:47.830997 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper-mnt-a590b98ad03dc7da45d71032441e3cf43ad898783e3572be4fa774f180056c74.mount: error inspecting container: Error: No such container: a590b98ad03dc7da45d71032441e3cf43ad898783e3572be4fa774f180056c74
I1209 18:17:47.831170 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-a590b98ad03dc7da45d71032441e3cf43ad898783e3572be4fa774f180056c74.mount"
I1209 18:17:47.831245 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-a590b98ad03dc7da45d71032441e3cf43ad898783e3572be4fa774f180056c74.mount", but ignoring.
I1209 18:17:47.831270 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-a590b98ad03dc7da45d71032441e3cf43ad898783e3572be4fa774f180056c74.mount"
I1209 18:17:48.470503 20134 manager.go:931] Destroyed container: "/system.slice/docker-94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04.scope" (aliases: [k8s_deployment.62f3c0aa_deployment-example-1-deploy_test_c8ca050c-be3b-11e6-8665-525400560f2f_e821506a 94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04], namespace: "docker")
I1209 18:17:48.470559 20134 handler.go:325] Added event &{/system.slice/docker-94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04.scope 2016-12-09 18:17:48.470549843 +0000 UTC containerDeletion {<nil>}}
I1209 18:17:48.693395 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:48.695027 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:48.697468 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:48.781954 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:48.784240 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:48.784397 20134 generic.go:141] GenericPLEG: c8ca050c-be3b-11e6-8665-525400560f2f/94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04: running -> exited
I1209 18:17:48.788769 20134 docker_manager.go:373] Container inspect result: {ContainerJSONBase:0xc826e2b8c0 Mounts:[{Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo Destination:/var/run/secrets/kubernetes.io/serviceaccount Driver: Mode:ro,Z RW:false Propagation:rprivate} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/etc-hosts Destination:/etc/hosts Driver: Mode: RW:true Propagation:rprivate} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/containers/deployment/e821506a Destination:/dev/termination-log Driver: Mode: RW:true Propagation:rprivate}] Config:0xc82d3e5e60 NetworkSettings:0xc82d0af500}
I1209 18:17:48.793399 20134 docker_manager.go:373] Container inspect result: {ContainerJSONBase:0xc82b5fe160 Mounts:[] Config:0xc8316697a0 NetworkSettings:0xc8282a0e00}
I1209 18:17:48.795213 20134 generic.go:327] PLEG: Write status for deployment-example-1-deploy/test: &container.PodStatus{ID:"c8ca050c-be3b-11e6-8665-525400560f2f", Name:"deployment-example-1-deploy", Namespace:"test", IP:"172.17.0.2", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc82ffe47e0), (*container.ContainerStatus)(0xc82ffe4c40)}} (err: <nil>)
I1209 18:17:48.795361 20134 kubelet.go:2761] Generating status for "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:48.795490 20134 helpers.go:73] Already ran container "deployment" of pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)", do nothing
I1209 18:17:48.795354 20134 kubelet.go:2328] SyncLoop (PLEG): "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)", event: &pleg.PodLifecycleEvent{ID:"c8ca050c-be3b-11e6-8665-525400560f2f", Type:"ContainerDied", Data:"94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04"}
I1209 18:17:48.801113 20134 docker_manager.go:1430] Calling network plugin kubernetes.io/no-op to tear down pod for deployment-example-1-deploy_test
I1209 18:17:48.802453 20134 docker_manager.go:1507] Killing container "8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c test/deployment-example-1-deploy" with 10 second grace period
I1209 18:17:48.814960 20134 replication_controller.go:379] Pod deployment-example-1-deploy updated, objectMeta {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:523 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/scc:restricted openshift.io/deployment.name:deployment-example-1] OwnerReferences:[] Finalizers:[] ClusterName:} -> {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:531 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/scc:restricted openshift.io/deployment.name:deployment-example-1] OwnerReferences:[] Finalizers:[] ClusterName:}.
I1209 18:17:48.815406 20134 replication_controller.go:256] No controllers found for pod deployment-example-1-deploy, replication manager will avoid syncing
I1209 18:17:48.815445 20134 replica_set.go:362] Pod deployment-example-1-deploy updated, objectMeta {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:523 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/scc:restricted openshift.io/deployment.name:deployment-example-1] OwnerReferences:[] Finalizers:[] ClusterName:} -> {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:531 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:}.
I1209 18:17:48.815568 20134 replica_set.go:238] No ReplicaSets found for pod deployment-example-1-deploy, ReplicaSet controller will avoid syncing
I1209 18:17:48.815602 20134 jobcontroller.go:166] No jobs found for pod deployment-example-1-deploy, job controller will avoid syncing
I1209 18:17:48.815622 20134 daemoncontroller.go:364] Pod deployment-example-1-deploy updated.
I1209 18:17:48.815652 20134 daemoncontroller.go:293] No daemon sets found for pod deployment-example-1-deploy, daemon set controller will avoid syncing
I1209 18:17:48.815672 20134 disruption.go:307] updatePod called on pod "deployment-example-1-deploy"
I1209 18:17:48.815690 20134 disruption.go:361] No PodDisruptionBudgets found for pod deployment-example-1-deploy, PodDisruptionBudget controller will avoid syncing.
I1209 18:17:48.815695 20134 disruption.go:310] No matching pdb for pod "deployment-example-1-deploy"
I1209 18:17:48.816096 20134 pet_set.go:238] No PetSets found for pod deployment-example-1-deploy, PetSet controller will avoid syncing
I1209 18:17:48.817478 20134 deployment_controller.go:368] Pod deployment-example-1-deploy updated &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"523", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/deployment.name":"deployment-example-1", "openshift.io/scc":"restricted"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc8278ca180), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc8278ca1b0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc82d3f83f0), ActiveDeadlineSeconds:(*int64)(0xc82d3f83f8), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"localhost", SecurityContext:(*api.PodSecurityContext)(0xc82ed3d4c0), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Pending", Conditions:[]api.PodCondition{api.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}, api.PodCondition{Type:"Ready", Status:"False", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [deployment]"}, api.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"192.168.121.18", PodIP:"", StartTime:(*unversioned.Time)(0xc82eb15380), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus{api.ContainerStatus{Name:"deployment", 
State:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(0xc82eb15400), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(nil)}, LastTerminationState:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(nil), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"openshift/origin-deployer:v1.5.0-alpha.0", ImageID:"", ContainerID:""}}}} -> &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"531", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/deployment.name":"deployment-example-1", "openshift.io/scc":"restricted"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc827b3ce40), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc827b3ce70), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc8313a69c0), ActiveDeadlineSeconds:(*int64)(0xc8313a69c8), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"localhost", SecurityContext:(*api.PodSecurityContext)(0xc828c31300), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Failed", Conditions:[]api.PodCondition{api.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}, api.PodCondition{Type:"Ready", Status:"False", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [deployment]"}, api.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"192.168.121.18", PodIP:"172.17.0.2", StartTime:(*unversioned.Time)(0xc82a166580), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus{api.ContainerStatus{Name:"deployment", 
State:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(nil), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(0xc8235429a0)}, LastTerminationState:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(nil), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"openshift/origin-deployer:v1.5.0-alpha.0", ImageID:"docker-pullable://docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5", ContainerID:"docker://94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04"}}}}.
I1209 18:17:48.818176 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:48.818526 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:48.819011 20134 config.go:281] Setting pods for source api
I1209 18:17:48.819436 20134 kubelet.go:2306] SyncLoop (RECONCILE, "api"): "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:48.820892 20134 status_manager.go:425] Status for pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)" updated successfully: {status:{Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason: Message:} {Type:Ready Status:False LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason:ContainersNotReady Message:containers with unready status: [deployment]} {Type:PodScheduled Status:True LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason: Message:}] Message: Reason: HostIP:192.168.121.18 PodIP:172.17.0.2 StartTime:0xc82ccab180 InitContainerStatuses:[] ContainerStatuses:[{Name:deployment State:{Waiting:<nil> Running:<nil> Terminated:0xc8236195e0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:openshift/origin-deployer:v1.5.0-alpha.0 ImageID:docker-pullable://docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 ContainerID:docker://94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04}]} version:2 podName:deployment-example-1-deploy podNamespace:test}
I1209 18:17:48.832207 20134 controller.go:225] Updated deployment test/deployment-example-1 status from Pending to Failed (scale: 0)
I1209 18:17:48.833961 20134 factory.go:171] Replication controller "deployment-example-1" updated.
I1209 18:17:48.834789 20134 replication_controller.go:323] Observed updated replication controller deployment-example-1. Desired pod count change: 0->0
I1209 18:17:48.834966 20134 controller.go:80] Reconciling test/deployment-example
I1209 18:17:48.836527 20134 controller_utils.go:159] Controller test/deployment-example-1 either never recorded expectations, or the ttl expired.
I1209 18:17:48.836656 20134 replication_controller.go:620] Finished syncing controller "test/deployment-example-1" (130.189µs)
I1209 18:17:48.842887 20134 factory.go:125] Updating deployment config "deployment-example"
I1209 18:17:48.843188 20134 controller.go:297] Updated the status for "test/deployment-example" (observed generation: 2)
I1209 18:17:48.843229 20134 controller.go:80] Reconciling test/deployment-example
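
The controller lines above ("Updated deployment test/deployment-example-1 status from Pending to Failed (scale: 0)", then "Updated the status for \"test/deployment-example\" (observed generation: 2)") show the deployments controller deriving the deployment's status from the phase of its deployer pod. The sketch below is only a hedged illustration of that mapping; the PodPhase/DeploymentStatus types and the statusFor helper are names invented for this example and assume a one-to-one phase-to-status rule, not the controller's actual code.

    // deployer_status.go - illustrative sketch of mapping a deployer pod's
    // phase to the status recorded on its deployment (an assumption, not the
    // OpenShift controller implementation).
    package main

    import "fmt"

    type PodPhase string
    type DeploymentStatus string

    const (
        PodPending   PodPhase = "Pending"
        PodRunning   PodPhase = "Running"
        PodSucceeded PodPhase = "Succeeded"
        PodFailed    PodPhase = "Failed"

        DeploymentPending  DeploymentStatus = "Pending"
        DeploymentRunning  DeploymentStatus = "Running"
        DeploymentComplete DeploymentStatus = "Complete"
        DeploymentFailed   DeploymentStatus = "Failed"
    )

    // statusFor maps a deployer pod phase to a deployment status; unknown
    // phases fall back to Pending.
    func statusFor(phase PodPhase) DeploymentStatus {
        switch phase {
        case PodRunning:
            return DeploymentRunning
        case PodSucceeded:
            return DeploymentComplete
        case PodFailed:
            return DeploymentFailed
        default:
            return DeploymentPending
        }
    }

    func main() {
        // The deployer pod above ended in phase Failed, so the deployment is
        // marked Failed while its scale stays at 0.
        fmt.Println(statusFor(PodFailed))
    }
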
I1209 18:17:48.896553 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:48.896606 20134 desired_state_of_world_populator.go:201] Pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)" has been removed from pod manager. However, it still has one or more containers in the non-exited state. Therefore, it will not be removed from volume manager.
I1209 18:17:48.984977 20134 manager.go:931] Destroyed container: "/system.slice/docker-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c.scope" (aliases: [k8s_POD.8e33df32_deployment-example-1-deploy_test_c8ca050c-be3b-11e6-8665-525400560f2f_b66d9549 8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c], namespace: "docker")
I1209 18:17:48.985066 20134 handler.go:325] Added event &{/system.slice/docker-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c.scope 2016-12-09 18:17:48.985051 +0000 UTC containerDeletion {<nil>}}
I1209 18:17:48.996864 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.097076 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.111162 20134 manager.go:931] Destroyed container: "/system.slice/var-lib-docker-containers-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c-shm.mount" (aliases: [k8s_POD.8e33df32_deployment-example-1-deploy_test_c8ca050c-be3b-11e6-8665-525400560f2f_b66d9549 8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c], namespace: "docker")
I1209 18:17:49.111233 20134 handler.go:325] Added event &{/system.slice/var-lib-docker-containers-8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c-shm.mount 2016-12-09 18:17:49.111207952 +0000 UTC containerDeletion {<nil>}}
I1209 18:17:49.195587 20134 docker_manager.go:1546] Container "8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c test/deployment-example-1-deploy" exited after 393.097734ms
I1209 18:17:49.197254 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.297449 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.397650 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.497890 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.598211 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.698450 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.795564 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:49.797792 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:49.798414 20134 generic.go:141] GenericPLEG: c8ca050c-be3b-11e6-8665-525400560f2f/8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c: running -> exited
I1209 18:17:49.798993 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:49.802449 20134 docker_manager.go:373] Container inspect result: {ContainerJSONBase:0xc82a2cf080 Mounts:[{Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo Destination:/var/run/secrets/kubernetes.io/serviceaccount Driver: Mode:ro,Z RW:false Propagation:rprivate} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/etc-hosts Destination:/etc/hosts Driver: Mode: RW:true Propagation:rprivate} {Name: Source:/var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/containers/deployment/e821506a Destination:/dev/termination-log Driver: Mode: RW:true Propagation:rprivate}] Config:0xc82c7759e0 NetworkSettings:0xc825625c00}
I1209 18:17:49.806874 20134 docker_manager.go:373] Container inspect result: {ContainerJSONBase:0xc82df86420 Mounts:[] Config:0xc82cd40120 NetworkSettings:0xc825706100}
I1209 18:17:49.808522 20134 generic.go:327] PLEG: Write status for deployment-example-1-deploy/test: &container.PodStatus{ID:"c8ca050c-be3b-11e6-8665-525400560f2f", Name:"deployment-example-1-deploy", Namespace:"test", IP:"", ContainerStatuses:[]*container.ContainerStatus{(*container.ContainerStatus)(0xc830a27ea0), (*container.ContainerStatus)(0xc82ce3e380)}} (err: <nil>)
I1209 18:17:49.808652 20134 kubelet.go:2328] SyncLoop (PLEG): "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)", event: &pleg.PodLifecycleEvent{ID:"c8ca050c-be3b-11e6-8665-525400560f2f", Type:"ContainerDied", Data:"8b611012de8ed25a1e5fbff8b398d356ecc26def016255572658a5f18b2ca77c"}
I1209 18:17:49.808840 20134 kubelet.go:2761] Generating status for "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:49.809809 20134 helpers.go:73] Already ran container "deployment" of pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)", do nothing
I1209 18:17:49.820698 20134 status_manager.go:425] Status for pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)" updated successfully: {status:{Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason: Message:} {Type:Ready Status:False LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason:ContainersNotReady Message:containers with unready status: [deployment]} {Type:PodScheduled Status:True LastProbeTime:{Time:{sec:0 nsec:0 loc:0x9506e40}} LastTransitionTime:{Time:{sec:63616904266 nsec:0 loc:0x9506e40}} Reason: Message:}] Message: Reason: HostIP:192.168.121.18 PodIP: StartTime:0xc82ccab180 InitContainerStatuses:[] ContainerStatuses:[{Name:deployment State:{Waiting:<nil> Running:<nil> Terminated:0xc8231e2690} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:openshift/origin-deployer:v1.5.0-alpha.0 ImageID:docker-pullable://docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 ContainerID:docker://94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04}]} version:3 podName:deployment-example-1-deploy podNamespace:test}
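
The GenericPLEG lines above (relist, "running -> exited", then a "ContainerDied" SyncLoop event) illustrate how the pod lifecycle event generator works: each relist diffs the previously observed container states against the current ones and emits one event per transition. The sketch below shows that diff under assumed types; it is an illustration only, not the kubelet's GenericPLEG code, and the container/pod IDs in main are taken from the log purely as sample input.

    // pleg_events.go - minimal sketch of turning container state transitions
    // observed during a relist into pod lifecycle events (assumed types, not
    // the kubelet implementation).
    package main

    import "fmt"

    type ContainerState string

    const (
        StateRunning ContainerState = "running"
        StateExited  ContainerState = "exited"
    )

    type PodLifecycleEvent struct {
        PodID string
        Type  string // e.g. "ContainerStarted", "ContainerDied"
        Data  string // container ID
    }

    // diff compares one pod's container states before and after a relist and
    // returns the events implied by each transition.
    func diff(podID string, prev, curr map[string]ContainerState) []PodLifecycleEvent {
        var events []PodLifecycleEvent
        for id, state := range curr {
            if prev[id] == StateRunning && state == StateExited {
                events = append(events, PodLifecycleEvent{podID, "ContainerDied", id})
            }
            if prev[id] != StateRunning && state == StateRunning {
                events = append(events, PodLifecycleEvent{podID, "ContainerStarted", id})
            }
        }
        return events
    }

    func main() {
        prev := map[string]ContainerState{"8b611012de8e": StateRunning}
        curr := map[string]ContainerState{"8b611012de8e": StateExited}
        fmt.Println(diff("c8ca050c-be3b-11e6-8665-525400560f2f", prev, curr))
    }
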
I1209 18:17:49.824369 20134 config.go:281] Setting pods for source api
I1209 18:17:49.824100 20134 deployment_controller.go:368] Pod deployment-example-1-deploy updated &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"531", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/deployment.name":"deployment-example-1", "openshift.io/scc":"restricted"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc827b3ce40), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc827b3ce70), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc8313a69c0), ActiveDeadlineSeconds:(*int64)(0xc8313a69c8), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"localhost", SecurityContext:(*api.PodSecurityContext)(0xc828c31300), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Failed", Conditions:[]api.PodCondition{api.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}, api.PodCondition{Type:"Ready", Status:"False", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [deployment]"}, api.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"192.168.121.18", PodIP:"172.17.0.2", StartTime:(*unversioned.Time)(0xc82a166580), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus{api.ContainerStatus{Name:"deployment", 
State:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(nil), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(0xc8235429a0)}, LastTerminationState:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(nil), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"openshift/origin-deployer:v1.5.0-alpha.0", ImageID:"docker-pullable://docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5", ContainerID:"docker://94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04"}}}} -> &api.Pod{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"deployment-example-1-deploy", GenerateName:"", Namespace:"test", SelfLink:"/api/v1/namespaces/test/pods/deployment-example-1-deploy", UID:"c8ca050c-be3b-11e6-8665-525400560f2f", ResourceVersion:"534", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"openshift.io/deployer-pod-for.name":"deployment-example-1"}, Annotations:map[string]string{"openshift.io/deployment.name":"deployment-example-1", "openshift.io/scc":"restricted"}, OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"deployer-token-gvtbo", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(nil), EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(0xc82f8aa9c0), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.PersistentVolumeClaimVolumeSource)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"deployment", Image:"openshift/origin-deployer:v1.5.0-alpha.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar{api.EnvVar{Name:"KUBERNETES_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_MASTER", Value:"https://192.168.121.18:8443", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"BEARER_TOKEN_FILE", Value:"/var/run/secrets/kubernetes.io/serviceaccount/token", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_CA_DATA", Value:"-----BEGIN 
CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMwHhcNMTYxMjA5MTgxNTMzWhcNMjExMjA4\nMTgxNTM0WjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0ODEzMDczMzMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJneo5Oz8Q1EA7BkGhSvFX\nDcdnYv+mvPZsUZOC5VbKaXSz84X1nLQj7n5lxWK4f9gmFiXscgxBOIxXD9e2Q6AM\ngIJEjMHzEXYOgbWqhbd6EtUrWEGEzA8dEAvPyDWe2jdovcTUkP7JxyqATCk7j4Ao\nv27HJFyyR3iVOleAPaT1ZICj+w4Jy1np60rmMCsn2A2vfKceu7T5lt3uGNEFhTMd\n7bbvFaIkv3lFWMkrRaDqAlWanYQLEGMF8Mh68ty9A/mqCAJUjVB363/i7IIG1nwI\nymFYS3IKsCk0W8WgC9p515lMH3d1rsT+UlF+BhRa3Boh+wD4q2NfrgkguqEQ4f+x\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQCBKeKsPueA4RHoEzLl+xNnrig0dBvbfcBCzCf4w221QHAT\nS3FcQvByLjzpUUl2lPtqA5+tvg+lxwKba8DTgI3IpP2J2Pu4eLu4jH/YnL+5u18t\nq3/CGSFBSL0UJ2cZQe+9AH7LOSSqyj86PBIjZhW3iSwQhjqB2p7dA25gj0wnoM77\nkh8VckM83HcbzVspgvpQ6zR2RuXfLA3Lcoi9wd8eojaaNwqN4AhCIdEu4nXz7FFM\nGv+2fHzxLRUFA0YPdMfYio3s6qF3Yg2G52S1dYNEGUNQTvMl38KFS6lU+jaj0qjn\nVVkXIA4NrteIMUC1NH2DcvTcWtWNtZWkga7aytT0\n-----END CERTIFICATE-----\n", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAME", Value:"deployment-example-1", ValueFrom:(*api.EnvVarSource)(nil)}, api.EnvVar{Name:"OPENSHIFT_DEPLOYMENT_NAMESPACE", Value:"test", ValueFrom:(*api.EnvVarSource)(nil)}}, Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"deployer-token-gvtbo", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(0xc82f8aa9f0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc82fc319b0), ActiveDeadlineSeconds:(*int64)(0xc82fc319b8), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"deployer", NodeName:"localhost", SecurityContext:(*api.PodSecurityContext)(0xc82dfabec0), ImagePullSecrets:[]api.LocalObjectReference{api.LocalObjectReference{Name:"deployer-dockercfg-cotcn"}}, Hostname:"", Subdomain:""}, Status:api.PodStatus{Phase:"Failed", Conditions:[]api.PodCondition{api.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}, api.PodCondition{Type:"Ready", Status:"False", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [deployment]"}, api.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63616904266, nsec:0, loc:(*time.Location)(0xa4624c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", HostIP:"192.168.121.18", PodIP:"", StartTime:(*unversioned.Time)(0xc828fdcd80), InitContainerStatuses:[]api.ContainerStatus(nil), ContainerStatuses:[]api.ContainerStatus{api.ContainerStatus{Name:"deployment", State:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(nil), 
Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(0xc8231d0e00)}, LastTerminationState:api.ContainerState{Waiting:(*api.ContainerStateWaiting)(nil), Running:(*api.ContainerStateRunning)(nil), Terminated:(*api.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"openshift/origin-deployer:v1.5.0-alpha.0", ImageID:"docker-pullable://docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5", ContainerID:"docker://94f86506bd763352f94fec132bedfa2b00fc94726ee3ad7e8cd530c3c74e8f04"}}}}.
I1209 18:17:49.824976 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:49.825011 20134 deployment_controller.go:334] Error: <nil>. No deployment found for Pod deployment-example-1-deploy, deployment controller will avoid syncing.
I1209 18:17:49.825442 20134 replication_controller.go:379] Pod deployment-example-1-deploy updated, objectMeta {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:531 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:} -> {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:534 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:}.
I1209 18:17:49.825610 20134 replication_controller.go:256] No controllers found for pod deployment-example-1-deploy, replication manager will avoid syncing
I1209 18:17:49.825630 20134 replica_set.go:362] Pod deployment-example-1-deploy updated, objectMeta {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:531 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:} -> {Name:deployment-example-1-deploy GenerateName: Namespace:test SelfLink:/api/v1/namespaces/test/pods/deployment-example-1-deploy UID:c8ca050c-be3b-11e6-8665-525400560f2f ResourceVersion:534 Generation:0 CreationTimestamp:2016-12-09 18:17:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[openshift.io/deployer-pod-for.name:deployment-example-1] Annotations:map[openshift.io/deployment.name:deployment-example-1 openshift.io/scc:restricted] OwnerReferences:[] Finalizers:[] ClusterName:}.
I1209 18:17:49.825747 20134 replica_set.go:238] No ReplicaSets found for pod deployment-example-1-deploy, ReplicaSet controller will avoid syncing
I1209 18:17:49.825776 20134 jobcontroller.go:166] No jobs found for pod deployment-example-1-deploy, job controller will avoid syncing
I1209 18:17:49.825794 20134 daemoncontroller.go:364] Pod deployment-example-1-deploy updated.
I1209 18:17:49.825819 20134 daemoncontroller.go:293] No daemon sets found for pod deployment-example-1-deploy, daemon set controller will avoid syncing
I1209 18:17:49.825836 20134 disruption.go:307] updatePod called on pod "deployment-example-1-deploy"
I1209 18:17:49.825852 20134 disruption.go:361] No PodDisruptionBudgets found for pod deployment-example-1-deploy, PodDisruptionBudget controller will avoid syncing.
I1209 18:17:49.825858 20134 disruption.go:310] No matching pdb for pod "deployment-example-1-deploy"
I1209 18:17:49.825919 20134 pet_set.go:238] No PetSets found for pod deployment-example-1-deploy, PetSet controller will avoid syncing
I1209 18:17:49.828907 20134 kubelet.go:2306] SyncLoop (RECONCILE, "api"): "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)"
I1209 18:17:49.902290 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.002736 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.102931 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.203185 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.303399 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.403639 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.503853 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.604316 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.693358 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:50.695404 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:50.696604 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:50.704628 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.804974 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:50.896589901 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:50.808687 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:50.810626 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:50.906241 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:50.906379 20134 desired_state_of_world_populator.go:209] Removing volume "kubernetes.io/secret/c8ca050c-be3b-11e6-8665-525400560f2f-deployer-token-gvtbo" (volSpec="deployer-token-gvtbo") for pod "deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)" from desired state.
I1209 18:17:50.953050 20134 reconciler.go:184] UnmountVolume operation started for volume "kubernetes.io/secret/c8ca050c-be3b-11e6-8665-525400560f2f-deployer-token-gvtbo" (spec.Name: "deployer-token-gvtbo") from pod "c8ca050c-be3b-11e6-8665-525400560f2f" (UID: "c8ca050c-be3b-11e6-8665-525400560f2f").
I1209 18:17:50.953305 20134 secret.go:276] Tearing down volume deployer-token-gvtbo for pod c8ca050c-be3b-11e6-8665-525400560f2f at /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo
I1209 18:17:50.953524 20134 empty_dir_linux.go:39] Determining mount medium of /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo
I1209 18:17:50.953567 20134 nsenter_mount.go:183] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/findmnt -o target,fstype --noheadings --first-only --target /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo]
I1209 18:17:50.976529 20134 nsenter_mount.go:196] IsLikelyNotMountPoint findmnt output for path /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo: /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo:
I1209 18:17:50.976591 20134 nsenter_mount.go:199] IsLikelyNotMountPoint: /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo is a mount point
I1209 18:17:50.976628 20134 empty_dir_linux.go:49] Statfs_t of /var/lib/origin/openshift.local.volumes/pods/c8ca050c-be3b-11e6-8665-525400560f2f/volumes/kubernetes.io~secret/deployer-token-gvtbo: {Type:61267 Bsize:4096 Blocks:10288208 Bfree:9823605 Bavail:9295234 Files:2621440 Ffree:2591467 Fsid:{X__val:[633347183 -1919301888]} Namelen:255 Frsize:4096 Flags:4128 Spare:[0 0 0 0]}
I1209 18:17:50.992515 20134 operation_executor.go:877] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ca050c-be3b-11e6-8665-525400560f2f-deployer-token-gvtbo" (OuterVolumeSpecName: "deployer-token-gvtbo") pod "c8ca050c-be3b-11e6-8665-525400560f2f" (UID: "c8ca050c-be3b-11e6-8665-525400560f2f"). InnerVolumeSpecName "deployer-token-gvtbo". PluginName "kubernetes.io/secret", VolumeGidValue ""
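
The volume teardown above first checks whether the secret directory is still a mount point by running findmnt inside the host mount namespace (the nsenter command logged at 18:17:50.953567) and only then unmounts it. The Go sketch below reproduces that probe; the nsenter/findmnt arguments are taken from the log, while the isLikelyNotMountPoint name and the single "target" output column are assumptions made for the example, not the kubelet's nsenter_mount.go.

    // mountcheck.go - sketch of a mount-point probe via findmnt in the host
    // mount namespace, mirroring the command the kubelet logs above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // isLikelyNotMountPoint returns true when findmnt does NOT resolve the
    // given path to itself, i.e. the path is probably not a mount point.
    func isLikelyNotMountPoint(path string) (bool, error) {
        args := []string{
            "--mount=/rootfs/proc/1/ns/mnt", "--",
            "/bin/findmnt", "-o", "target", "--noheadings", "--first-only", "--target", path,
        }
        out, err := exec.Command("nsenter", args...).CombinedOutput()
        if err != nil {
            // findmnt exits non-zero when no mount containing the path is found.
            return true, err
        }
        // findmnt prints the nearest mount point containing the path; the path
        // is itself a mount point only if that target equals the path.
        target := strings.TrimSpace(string(out))
        return target != path, nil
    }

    func main() {
        notMnt, err := isLikelyNotMountPoint("/var/lib/origin/openshift.local.volumes")
        fmt.Println("not a mount point:", notMnt, "err:", err)
    }
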
I1209 18:17:51.006731 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.047254 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:17:51.106973 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.207239 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.307459 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.407727 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.507924 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.608167 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.708496 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.808799 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:51.810906 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:51.814961 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:51.909070 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.009362 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.109595 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.209862 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.310129 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.410419 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.510696 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.611067 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.693434 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:52.694967 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:52.711370 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.812150 20134 desired_state_of_world_populator.go:123] Skipping findAndRemoveDeletedPods(). Not permitted until 2016-12-09 18:17:52.906308513 +0000 UTC (getPodStatusRetryDuration 2s).
I1209 18:17:52.815897 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:52.818462 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:53.818921 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:53.823592 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:54.405284 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:17:54.693407 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:54.695888 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:54.697529 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:54.823930 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:54.826710 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:55.827685 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:55.830106 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:56.185156 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:17:56.192550 20134 controller.go:113] Found 0 jobs
I1209 18:17:56.192602 20134 controller.go:116] Found 0 groups
I1209 18:17:56.539869 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:17:56.540026 20134 interface.go:93] Interface eth0 is up
I1209 18:17:56.541496 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:17:56.541845 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:17:56.541890 20134 interface.go:114] IP found 192.168.121.18
I1209 18:17:56.541905 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:17:56.541917 20134 interface.go:254] Choosing IP 192.168.121.18
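
The interface.go lines above find the interface carrying the default route, confirm it is up, list its addresses, and choose the first usable IPv4 (192.168.121.18, skipping the fe80:: link-local address). The sketch below shows that selection with Go's net package; it assumes the default-route interface is already known ("eth0" here), and the firstIPv4 helper is an invention for the example rather than the actual interface.go code.

    // nodeip.go - sketch of picking the first global-unicast IPv4 address on a
    // given interface, as in the "valid IPv4 address for interface ... found
    // as" log line above.
    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    func firstIPv4(ifaceName string) (net.IP, error) {
        iface, err := net.InterfaceByName(ifaceName)
        if err != nil {
            return nil, err
        }
        if iface.Flags&net.FlagUp == 0 {
            return nil, fmt.Errorf("interface %q is down", ifaceName)
        }
        addrs, err := iface.Addrs()
        if err != nil {
            return nil, err
        }
        for _, addr := range addrs {
            ipnet, ok := addr.(*net.IPNet)
            if !ok {
                continue
            }
            ip := ipnet.IP.To4()
            if ip != nil && ip.IsGlobalUnicast() {
                // e.g. 192.168.121.18; IPv6 link-local addresses are skipped.
                return ip, nil
            }
        }
        return nil, errors.New("no valid IPv4 address found")
    }

    func main() {
        ip, err := firstIPv4("eth0")
        fmt.Println(ip, err)
    }
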
I1209 18:17:56.642232 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:17:56.693222 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:56.707400 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:56.830368 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:56.832395 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:56.858900 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:56.858924 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:17:56.866117 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:17:57.832646 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:57.835138 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:58.693369 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:17:58.695891 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:58.697545 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:58.835386 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:58.838207 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:17:59.838442 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:17:59.841028 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:00.693276 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:00.695359 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:00.841370 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:00.843798 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:00.865743 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:18:00.865777 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:18:00.892472 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:00.892523 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:18:00.917544 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:00.917609 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:18:00.917625 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:18:00.937243 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:00.937521 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:18:00.966236 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:18:00.966653 20134 thin_pool_watcher.go:77] thin_ls(1481307480) took 100.912455ms
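
The thin-pool watcher above reserves a metadata snapshot on the Docker thin pool, runs thin_ls against the metadata device, and releases the snapshot again; here the run fails with exit status 127, which conventionally indicates the thin_ls binary (shipped in device-mapper-persistent-data) could not be found or executed. The sketch below outlines that reserve/run/release cycle; the dmsetup and thin_ls arguments come straight from the log, while the parsing, error handling, and the thinPoolUsage helper are assumptions for illustration, not cAdvisor's implementation.

    // thinls.go - sketch of the reserve/run/release cycle for measuring
    // per-device usage of a device-mapper thin pool with thin_ls.
    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    func dmsetupMessage(pool, msg string) error {
        return exec.Command("dmsetup", "message", pool, "0", msg).Run()
    }

    // thinPoolUsage reserves a metadata snapshot, runs thin_ls on the metadata
    // device, and always releases the snapshot; it returns exclusive bytes per
    // thin device ID.
    func thinPoolUsage(pool, metadataDev string) (map[string]uint64, error) {
        if err := dmsetupMessage(pool, "reserve_metadata_snap"); err != nil {
            return nil, fmt.Errorf("reserving metadata snapshot: %v", err)
        }
        defer dmsetupMessage(pool, "release_metadata_snap")

        out, err := exec.Command("thin_ls", "--no-headers", "-m",
            "-o", "DEV,EXCLUSIVE_BYTES", metadataDev).Output()
        if err != nil {
            return nil, fmt.Errorf("running thin_ls on %s: %v", metadataDev, err)
        }

        usage := make(map[string]uint64)
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            fields := strings.Fields(line)
            if len(fields) != 2 {
                continue
            }
            bytes, err := strconv.ParseUint(fields[1], 10, 64)
            if err != nil {
                continue
            }
            usage[fields[0]] = bytes
        }
        return usage, nil
    }

    func main() {
        u, err := thinPoolUsage("docker-252:1-262311-pool", "/dev/loop1")
        fmt.Println(u, err)
    }
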
I1209 18:18:01.195812 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:18:01.227810 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:18:01.228531 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:18:01.623632 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:56 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:18:01.844104 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:01.846906 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:02.693394 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:02.695468 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:02.697061 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:02.847214 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:02.849880 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:03.850157 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:03.860380 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:04.695396 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:04.707638 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:04.708716 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:04.860660 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:04.863588 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:05.864096 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:05.900118 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:06.205925 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:18:06.214107 20134 controller.go:113] Found 0 jobs
I1209 18:18:06.214131 20134 controller.go:116] Found 0 groups
I1209 18:18:06.648311 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:18:06.648650 20134 interface.go:93] Interface eth0 is up
I1209 18:18:06.648824 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:18:06.648903 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:18:06.648948 20134 interface.go:114] IP found 192.168.121.18
I1209 18:18:06.648978 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:18:06.649002 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:18:06.693287 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:06.694849 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:06.701551 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:18:06.900422 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:06.903084 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:06.907090 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:06.907219 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:06.928797 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:18:07.903401 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:07.906145 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:08.693292 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:08.695000 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:08.696027 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:08.841202 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:18:08.842475 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:18:08.906935 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:08.908863 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:09.909175 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:09.912042 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:10.693435 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:10.695012 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:10.696241 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:10.912482 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:10.915382 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:11.631496 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:17:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:17:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:17:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:17:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:06 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:18:11.916188 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:11.918840 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:12.693410 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:12.695907 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:12.697786 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:12.919233 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:12.923055 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:13.923990 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:13.926176 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:14.693320 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:14.695489 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:14.841124 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:18:14.848730 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:18:14.862122 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:18:14.862112 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:18:14.862496 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:18:14.862538 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:18:14.862758 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:18:14.863115 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:18:14.863362 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (6.274µs)
I1209 18:18:14.863473 20134 endpoints_controller.go:360] About to update endpoints for service "test/deployment-example"
I1209 18:18:14.865523 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:18:14.869846 20134 endpoints_controller.go:497] Update endpoints for test/deployment-example, ready: 0 not ready: 0
I1209 18:18:14.876809 20134 endpoints_controller.go:321] Finished syncing service "test/deployment-example" endpoints. (13.339084ms)
I1209 18:18:14.878288 20134 config.go:147] Setting endpoints (config.EndpointsUpdate) {
Endpoints: ([]api.Endpoints) (len=2 cap=2) {
(api.Endpoints) &TypeMeta{Kind:,APIVersion:,},
(api.Endpoints) &TypeMeta{Kind:,APIVersion:,}
},
Op: (config.Operation) 0
}
I1209 18:18:14.878392 20134 config.go:99] Calling handler.OnEndpointsUpdate()
I1209 18:18:14.878574 20134 proxier.go:758] Syncing iptables rules
I1209 18:18:14.878694 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:18:14.889171 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:18:14.889444 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:18:14.889514 20134 healthcheck.go:86] LB service health check mutation request Service: default/kubernetes - 0 Endpoints []
I1209 18:18:14.900443 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:18:14.915002 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:14.926508 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:14.929014 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:14.931568 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:14.948670 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:14.965544 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:18:14.981984 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:18:14.997973 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:18:15.021235 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:18:15.038577 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:18:15.038621 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:18:15.052902 20134 proxier.go:751] syncProxyRules took 174.324921ms
I1209 18:18:15.053080 20134 proxier.go:523] OnEndpointsUpdate took 174.57508ms for 2 endpoints
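(Annotation) The block above is one full pass of the iptables proxier: on an endpoints update it regenerates every KUBE-* chain and applies the whole ruleset atomically with iptables-restore --noflush --counters, which is why the REJECT rule for test/deployment-example:8080-tcp (a service with 0 ready endpoints, per the endpoints controller lines above) is rewritten on every pass. A minimal, read-only sketch of how the restored state could be inspected on a host like this one; assumes only that iptables-save is on PATH and the program runs with root privileges, and is illustrative rather than part of the proxier itself:

    package main

    // Read-only sketch: dump the nat-table rules that mention KUBE-* chains,
    // mirroring what the proxier's own "running iptables-save [-t nat]" step reads.
    // Assumes iptables-save is on PATH and the process has root privileges.

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("iptables-save", "-t", "nat").CombinedOutput()
        if err != nil {
            fmt.Println("iptables-save failed:", err)
            return
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.Contains(line, "KUBE-") {
                fmt.Println(line)
            }
        }
    }

Once the deployment-example pod reports ready, a later sync pass should drop the "has no endpoints" REJECT rule from the *filter table and point KUBE-SVC-7FAS7WLN46SI3LNQ at a KUBE-SEP-* DNAT chain instead.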
I1209 18:18:15.053231 20134 proxier.go:397] Received update notice: []
I1209 18:18:15.053380 20134 proxier.go:758] Syncing iptables rules
I1209 18:18:15.053479 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:18:15.068695 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:18:15.084433 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:15.099280 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:15.112824 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:15.127935 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:18:15.140202 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:18:15.151886 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:18:15.164163 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:18:15.177166 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:18:15.177223 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:18:15.190179 20134 proxier.go:751] syncProxyRules took 136.797116ms
I1209 18:18:15.190228 20134 proxier.go:391] OnServiceUpdate took 136.909341ms for 2 services
I1209 18:18:15.190280 20134 proxier.go:758] Syncing iptables rules
I1209 18:18:15.190300 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:18:15.202635 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:18:15.214039 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:15.224790 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:15.241173 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:15.257433 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:18:15.274868 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:18:15.290004 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:18:15.305815 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:18:15.321506 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:18:15.321534 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:18:15.346072 20134 proxier.go:751] syncProxyRules took 155.782655ms
I1209 18:18:15.346281 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:18:15.365164 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:18:15.389009 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:18:15.406299 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:18:15.433526 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:18:15.461507 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:18:15.490168 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:18:15.503839 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:18:15.514504 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:18:15.534134 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:18:15.929373 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:15.932282 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:15.967027 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:18:15.967062 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:18:15.997875 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:15.998797 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:18:16.028431 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:16.028475 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:18:16.028489 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:18:16.044397 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:16.044442 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:18:16.061980 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:18:16.062033 20134 thin_pool_watcher.go:77] thin_ls(1481307495) took 95.015824ms
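(Annotation) The "exit status 127" above means the thin_ls binary could not be found at all, so the thin-pool watcher reserves and releases the metadata snapshot but never refreshes per-device usage; that is likely also why the later "unable to get fs usage from thin pool for device 16: no cached value" messages appear. On CentOS/Fedora hosts like this one, thin_ls is provided by the device-mapper-persistent-data (thin-provisioning-tools) package. A minimal sketch of the PATH lookup the watcher's thin_ls invocation depends on, illustrative only:

    package main

    // Sketch: check whether thin_ls resolves on PATH. The watcher's
    // `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1` call
    // fails with exit status 127 when this lookup would fail.

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        path, err := exec.LookPath("thin_ls")
        if err != nil {
            fmt.Println("thin_ls not found on PATH; install device-mapper-persistent-data (thin-provisioning-tools)")
            return
        }
        fmt.Println("thin_ls found at", path)
    }

With the package installed, the next watcher cycle (roughly every 15 seconds in this log) should populate the per-device usage cache.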
I1209 18:18:16.196116 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:18:16.222912 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:18:16.230736 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:18:16.230887 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:18:16.230918 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:18:16.235774 20134 controller.go:113] Found 0 jobs
I1209 18:18:16.235807 20134 controller.go:116] Found 0 groups
I1209 18:18:16.693517 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:16.695779 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:16.697456 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:16.707554 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:18:16.707768 20134 interface.go:93] Interface eth0 is up
I1209 18:18:16.707895 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:18:16.707948 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:18:16.707974 20134 interface.go:114] IP found 192.168.121.18
I1209 18:18:16.707999 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:18:16.708023 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:18:16.760083 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:18:16.932538 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:16.936027 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:16.966204 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:16.966236 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:16.994747 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:18:17.936863 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:17.939532 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:18.693390 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:18.696626 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:18.939822 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:18.943175 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:19.944186 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:19.947288 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:20.693374 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:20.694973 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:20.696200 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:20.948063 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:20.950889 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:21.638387 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:06 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:16 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:18:21.951702 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:21.955208 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:22.693366 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:22.695690 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:22.697371 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:22.955977 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:22.959125 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:23.960014 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:23.963435 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:24.462031 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:18:24.693260 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:24.695276 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:24.964114 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:24.965816 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:25.040166 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:18:25.966143 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:25.969172 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:26.243936 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:18:26.249255 20134 controller.go:113] Found 0 jobs
I1209 18:18:26.249373 20134 controller.go:116] Found 0 groups
I1209 18:18:26.693279 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:26.695251 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:26.696589 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:26.764784 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:18:26.764959 20134 interface.go:93] Interface eth0 is up
I1209 18:18:26.765038 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:18:26.765075 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:18:26.765088 20134 interface.go:114] IP found 192.168.121.18
I1209 18:18:26.765101 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:18:26.765112 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:18:26.797525 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:18:26.971101 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:26.983867 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:27.027835 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:27.028573 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:27.051982 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:18:27.984480 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:27.986897 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:28.693530 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:28.695681 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:28.697283 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:28.987378 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:28.990375 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:29.990679 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:29.993045 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:30.694125 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:30.704263 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:30.706242 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:30.994395 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:30.997887 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:31.063287 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:18:31.063320 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:18:31.100054 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:31.100769 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:18:31.135570 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:31.135606 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:18:31.135619 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:18:31.152804 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:31.152831 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:18:31.171396 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:18:31.171445 20134 thin_pool_watcher.go:77] thin_ls(1481307511) took 108.167423ms
I1209 18:18:31.196426 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:18:31.231161 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:18:31.231173 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:18:31.654735 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:16 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:26 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:18:31.998234 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:32.001034 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:32.693354 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:32.695256 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:33.001897 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:33.004535 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:34.004876 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:34.006990 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:34.693256 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:34.695289 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:34.697088 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:35.007503 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:35.011051 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:35.234018 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:18:36.012026 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:36.015433 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:36.256609 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:18:36.264077 20134 controller.go:113] Found 0 jobs
I1209 18:18:36.264278 20134 controller.go:116] Found 0 groups
I1209 18:18:36.693269 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:36.695106 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:36.696492 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:36.801694 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:18:36.802266 20134 interface.go:93] Interface eth0 is up
I1209 18:18:36.802564 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:18:36.802749 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:18:36.802879 20134 interface.go:114] IP found 192.168.121.18
I1209 18:18:36.803001 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:18:36.803161 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:18:36.856281 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:18:37.016032 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:37.019244 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:37.099792 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:37.100026 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:37.107192 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:18:38.020071 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:38.023550 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:38.693368 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:38.695280 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:38.696544 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:39.024241 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:39.026513 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:40.026847 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:40.028754 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:40.693410 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:40.696010 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:41.029134 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:41.032114 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:41.661744 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:26 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:36 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:18:42.032997 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:42.035811 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:42.693419 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:42.694925 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:42.695860 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:43.036118 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:43.038713 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:43.953795 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:18:44.039030 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:44.041250 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:44.299442 20134 worker.go:45] 0 Health Check Listeners
I1209 18:18:44.299493 20134 worker.go:46] 0 Services registered for health checking
I1209 18:18:44.546516 20134 reconciler.go:142] Sources are all ready, starting reconstruct state function
I1209 18:18:44.693261 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:44.694959 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:44.841644 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:18:44.848990 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:18:44.862493 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:18:44.862535 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:18:44.862783 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:18:44.862834 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:18:44.863019 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:18:44.863364 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:18:44.863495 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (5.538µs)
I1209 18:18:44.863524 20134 endpoints_controller.go:360] About to update endpoints for service "test/deployment-example"
I1209 18:18:44.865759 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:18:44.870047 20134 endpoints_controller.go:497] Update endpoints for test/deployment-example, ready: 0 not ready: 0
I1209 18:18:44.876143 20134 endpoints_controller.go:321] Finished syncing service "test/deployment-example" endpoints. (12.617798ms)
I1209 18:18:45.042040 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:45.044064 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:45.168906 20134 proxier.go:758] Syncing iptables rules
I1209 18:18:45.169721 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:18:45.190519 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:18:45.207654 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:45.221840 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:45.230975 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:45.241821 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:18:45.255157 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:18:45.266494 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:18:45.277605 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:18:45.289377 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:18:45.300626 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:18:45.300682 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:18:45.317018 20134 proxier.go:751] syncProxyRules took 148.114173ms
I1209 18:18:45.317127 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:18:45.329424 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:18:45.341003 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:18:45.353650 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:18:45.364645 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:18:45.374699 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:18:45.383876 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:18:45.393053 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:18:45.402237 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:18:45.411552 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:18:45.729516 20134 iptables.go:362] running iptables -N [KUBE-MARK-DROP -t nat]
I1209 18:18:45.749349 20134 iptables.go:362] running iptables -C [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
I1209 18:18:45.764613 20134 iptables.go:362] running iptables -N [KUBE-FIREWALL -t filter]
I1209 18:18:45.778496 20134 iptables.go:362] running iptables -C [KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
I1209 18:18:45.789286 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -j KUBE-FIREWALL]
I1209 18:18:45.806524 20134 iptables.go:362] running iptables -C [INPUT -t filter -j KUBE-FIREWALL]
I1209 18:18:45.822244 20134 iptables.go:362] running iptables -N [KUBE-MARK-MASQ -t nat]
I1209 18:18:45.837936 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:18:45.852422 20134 iptables.go:362] running iptables -C [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
I1209 18:18:45.871317 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:18:45.884415 20134 iptables.go:362] running iptables -C [KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
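
The -N/-C invocations above are the idempotent ensure pattern: create the chain (tolerating "already exists"), check whether a rule is present, and append it only if the check fails. A minimal sketch of that sequence, assuming direct exec of the iptables binary rather than the real utiliptables wrapper:

// ensure_rule_sketch.go - illustrative only.
package main

import (
    "fmt"
    "os/exec"
)

// iptables runs a single iptables invocation and surfaces its combined output on failure.
func iptables(args ...string) error {
    out, err := exec.Command("iptables", args...).CombinedOutput()
    if err != nil {
        return fmt.Errorf("iptables %v: %v (%s)", args, err, out)
    }
    return nil
}

// ensureRule reproduces the -N / -C / -A sequence visible in the log for one rule.
func ensureRule(table, chain string, rule ...string) error {
    // Create the chain; an error here usually just means it already exists, so it is ignored.
    _ = iptables("-t", table, "-N", chain)
    // -C exits non-zero when the rule is absent; only append in that case.
    if err := iptables(append([]string{"-t", table, "-C", chain}, rule...)...); err == nil {
        return nil
    }
    return iptables(append([]string{"-t", table, "-A", chain}, rule...)...)
}

func main() {
    fmt.Println(ensureRule("nat", "KUBE-MARK-MASQ",
        "-j", "MARK", "--set-xmark", "0x00004000/0x00004000"))
}
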
I1209 18:18:45.904091 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-docker-netns-default.mount: invalid container name
I1209 18:18:45.904547 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
I1209 18:18:45.904579 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
I1209 18:18:45.904629 20134 manager.go:843] ignoring container "/system.slice/run-docker-netns-default.mount"
I1209 18:18:45.905136 20134 factory.go:104] Error trying to work out if we can handle /system.slice/-.mount: invalid container name
I1209 18:18:45.905403 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/-.mount"
I1209 18:18:45.905429 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
I1209 18:18:45.905451 20134 manager.go:843] ignoring container "/system.slice/-.mount"
I1209 18:18:45.905985 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-config.mount: invalid container name
I1209 18:18:45.906221 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
I1209 18:18:45.906247 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
I1209 18:18:45.906268 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-config.mount"
I1209 18:18:45.906747 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-user-1001.mount: invalid container name
I1209 18:18:45.906995 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-user-1001.mount"
I1209 18:18:45.907018 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-user-1001.mount", but ignoring.
I1209 18:18:45.907050 20134 manager.go:843] ignoring container "/system.slice/run-user-1001.mount"
I1209 18:18:45.907653 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-debug.mount: invalid container name
I1209 18:18:45.907676 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
I1209 18:18:45.907697 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
I1209 18:18:45.908403 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-debug.mount"
I1209 18:18:45.908447 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-hugepages.mount: invalid container name
I1209 18:18:45.908468 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
I1209 18:18:45.908482 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
I1209 18:18:45.908494 20134 manager.go:843] ignoring container "/system.slice/dev-hugepages.mount"
I1209 18:18:45.909380 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-mqueue.mount: invalid container name
I1209 18:18:45.909406 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
I1209 18:18:45.909458 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
I1209 18:18:45.909567 20134 manager.go:843] ignoring container "/system.slice/dev-mqueue.mount"
I1209 18:18:45.910611 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount: error inspecting container: Error: No such container: 31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f
I1209 18:18:45.910640 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:18:45.910673 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount", but ignoring.
I1209 18:18:45.911198 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:18:45.911449 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper.mount: invalid container name
I1209 18:18:45.911474 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:18:45.911505 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
I1209 18:18:45.911520 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
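
The factory messages above come from cAdvisor asking each registered factory whether it can manage a cgroup path; systemd .mount units are recognized but deliberately ignored. The following is a simplified, hypothetical sketch of that selection loop (the interface and type names here are stand-ins, not cAdvisor's actual API):

// factory_selection_sketch.go - hypothetical illustration of the pattern in the log.
package main

import (
    "fmt"
    "strings"
)

type factory interface {
    Name() string
    // CanHandleAndAccept reports whether the factory recognizes the container
    // name and whether it actually wants to manage it.
    CanHandleAndAccept(name string) (handle bool, accept bool, err error)
}

type dockerFactory struct{}

func (dockerFactory) Name() string { return "docker" }
func (dockerFactory) CanHandleAndAccept(name string) (bool, bool, error) {
    if !strings.HasPrefix(name, "/system.slice/docker-") {
        return false, false, fmt.Errorf("invalid container name")
    }
    return true, true, nil
}

type systemdFactory struct{}

func (systemdFactory) Name() string { return "systemd" }
func (systemdFactory) CanHandleAndAccept(name string) (bool, bool, error) {
    // systemd mount units are recognized but deliberately not watched.
    return true, false, nil
}

func newContainerHandler(name string, factories []factory) {
    for _, f := range factories {
        handle, accept, err := f.CanHandleAndAccept(name)
        if err != nil {
            fmt.Printf("Factory %q was unable to handle container %q: %v\n", f.Name(), name, err)
            continue
        }
        if handle && !accept {
            fmt.Printf("Factory %q can handle container %q, but ignoring.\n", f.Name(), name)
            return
        }
        if handle {
            fmt.Printf("Factory %q handles container %q\n", f.Name(), name)
            return
        }
    }
}

func main() {
    newContainerHandler("/system.slice/dev-mqueue.mount", []factory{dockerFactory{}, systemdFactory{}})
}
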
I1209 18:18:46.044423 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:46.046695 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:46.125618 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:18:46.171651 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:18:46.171686 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:18:46.194878 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:46.194916 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:18:46.202510 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:18:46.216443 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:46.216474 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:18:46.216484 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:18:46.231023 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:18:46.231639 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:18:46.231665 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:18:46.232726 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:18:46.232748 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:18:46.247363 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:18:46.247407 20134 thin_pool_watcher.go:77] thin_ls(1481307526) took 75.76244ms
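
Exit status 127 with empty output from the thin_ls command above usually means the binary could not be found or loaded on the host (thin_ls ships with the thin-provisioning-tools suite). A small pre-flight sketch, assuming only the same command line as the log and nothing about cAdvisor internals:

// thin_ls_check_sketch.go - illustrative only, not cAdvisor code.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Verify the binary exists before invoking it; a missing binary is the usual cause of 127.
    if _, err := exec.LookPath("thin_ls"); err != nil {
        fmt.Println("thin_ls not found on PATH; install the thin-provisioning-tools package:", err)
        return
    }
    out, err := exec.Command("thin_ls", "--no-headers", "-m", "-o", "DEV,EXCLUSIVE_BYTES", "/dev/loop1").CombinedOutput()
    if err != nil {
        fmt.Printf("thin_ls failed: %v\noutput: %s\n", err, out)
        return
    }
    fmt.Printf("per-device exclusive usage:\n%s", out)
}
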
I1209 18:18:46.255734 20134 reflector.go:284] github.com/openshift/origin/pkg/project/controller/factory.go:36: forcing resync
I1209 18:18:46.268716 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:18:46.271872 20134 controller.go:113] Found 0 jobs
I1209 18:18:46.271919 20134 controller.go:116] Found 0 groups
I1209 18:18:46.693535 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:46.695219 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:46.697137 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:46.859263 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:18:46.860163 20134 interface.go:93] Interface eth0 is up
I1209 18:18:46.860764 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:18:46.860855 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:18:46.860903 20134 interface.go:114] IP found 192.168.121.18
I1209 18:18:46.860931 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:18:46.861907 20134 interface.go:254] Choosing IP 192.168.121.18
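
The interface.go lines above pick the node IP by taking the default-route interface and scanning its addresses for a usable IPv4 address. A rough standard-library equivalent, assuming the interface name "eth0" from the log:

// choose_node_ip_sketch.go - a hedged approximation of the logged selection, not the upstream code.
package main

import (
    "fmt"
    "net"
)

// firstIPv4 returns the first non-loopback IPv4 address configured on the named interface.
func firstIPv4(ifaceName string) (net.IP, error) {
    iface, err := net.InterfaceByName(ifaceName)
    if err != nil {
        return nil, err
    }
    addrs, err := iface.Addrs()
    if err != nil {
        return nil, err
    }
    for _, addr := range addrs {
        ipnet, ok := addr.(*net.IPNet)
        if !ok {
            continue
        }
        if ip4 := ipnet.IP.To4(); ip4 != nil && !ip4.IsLoopback() {
            return ip4, nil // e.g. 192.168.121.18 from 192.168.121.18/24
        }
    }
    return nil, fmt.Errorf("no IPv4 address on %s", ifaceName)
}

func main() {
    ip, err := firstIPv4("eth0") // the default-route interface named in the log
    fmt.Println(ip, err)
}
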
I1209 18:18:47.092407 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:47.098031 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:18:47.099778 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:47.137877 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:47.137903 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:47.158710 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:18:48.100046 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:48.102213 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:48.693425 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:48.695151 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:49.102478 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:49.104678 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:50.105053 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:50.106957 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:50.693374 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:50.694775 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:50.696111 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:51.107310 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:51.109712 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:51.667553 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:36 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:46 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
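
These nodecontroller entries recur roughly every ten seconds because the old and new statuses dumped above differ only in LastHeartbeatTime, so the controller refreshes its probe timestamp rather than recording a real change. A sketch of that heartbeat-insensitive comparison, using simplified stand-in types rather than the Kubernetes API structs:

// node_heartbeat_diff_sketch.go - stand-in types for illustration only.
package main

import (
    "fmt"
    "reflect"
    "time"
)

type condition struct {
    Type              string
    Status            string
    LastHeartbeatTime time.Time
}

// sameExceptHeartbeat compares two condition lists while zeroing the heartbeat field.
func sameExceptHeartbeat(a, b []condition) bool {
    strip := func(in []condition) []condition {
        out := make([]condition, len(in))
        for i, c := range in {
            c.LastHeartbeatTime = time.Time{}
            out[i] = c
        }
        return out
    }
    return reflect.DeepEqual(strip(a), strip(b))
}

func main() {
    t0 := time.Date(2016, 12, 9, 18, 18, 36, 0, time.UTC)
    t1 := t0.Add(10 * time.Second)
    old := []condition{{Type: "Ready", Status: "True", LastHeartbeatTime: t0}}
    cur := []condition{{Type: "Ready", Status: "True", LastHeartbeatTime: t1}}
    fmt.Println("only the heartbeat moved:", sameExceptHeartbeat(old, cur))
}
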
I1209 18:18:52.110026 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:52.113404 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:52.693292 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:52.695351 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:53.114149 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:53.116944 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:54.117381 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:54.120480 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:54.693276 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:54.695828 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:54.697556 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:55.120857 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:55.124096 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:56.124394 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:56.126414 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:56.278895 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:18:56.286201 20134 controller.go:113] Found 0 jobs
I1209 18:18:56.286269 20134 controller.go:116] Found 0 groups
I1209 18:18:56.693359 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:56.695278 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:56.696643 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:56.867545 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:18:56.915738 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:18:56.915990 20134 interface.go:93] Interface eth0 is up
I1209 18:18:56.916156 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:18:56.916189 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:18:56.916201 20134 interface.go:114] IP found 192.168.121.18
I1209 18:18:56.916236 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:18:56.916252 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:18:56.964151 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:18:57.126836 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:57.129920 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:57.196313 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:57.196358 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:18:57.218375 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:18:57.693374 20134 kubelet.go:2347] SyncLoop (SYNC): 1 pods; deployment-example-1-deploy_test(c8ca050c-be3b-11e6-8665-525400560f2f)
I1209 18:18:58.130399 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:58.133063 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:58.693447 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:18:58.695142 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:58.696875 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:18:59.133515 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:18:59.135914 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:00.136754 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:00.139849 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:00.693298 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:00.695354 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:01.140737 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:01.143754 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:01.202834 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:19:01.231915 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:19:01.232013 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:19:01.247681 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:19:01.247880 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:19:01.271099 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:01.271316 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:19:01.296438 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:01.296646 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:19:01.296683 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:19:01.311510 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:01.311673 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:19:01.331920 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:19:01.332049 20134 thin_pool_watcher.go:77] thin_ls(1481307541) took 84.353839ms
I1209 18:19:01.502915 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:19:01.673097 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:46 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:56 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:19:02.144619 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:02.147634 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:02.693358 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:02.695060 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:02.696198 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:03.148408 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:03.150780 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:04.151103 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:04.153286 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:04.693312 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:04.695821 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:04.696891 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:05.154045 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:05.156192 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:06.156543 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:06.158642 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:06.293728 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:19:06.300897 20134 controller.go:113] Found 0 jobs
I1209 18:19:06.300949 20134 controller.go:116] Found 0 groups
I1209 18:19:06.693713 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:06.695206 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:06.696362 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:06.967245 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:19:06.967738 20134 interface.go:93] Interface eth0 is up
I1209 18:19:06.967957 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:19:06.968141 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:19:06.968228 20134 interface.go:114] IP found 192.168.121.18
I1209 18:19:06.968313 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:19:06.968409 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:19:07.017855 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:19:07.159105 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:07.161521 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:07.225618 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:07.225651 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:07.281677 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:19:08.162399 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:08.164762 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:08.693318 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:08.695350 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:09.165522 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:09.167469 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:09.874589 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:19:10.167770 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:10.169617 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:10.693251 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:10.695372 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:10.696433 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:11.169847 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:11.172607 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:11.678198 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:18:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:18:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:18:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:18:56 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:07 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:19:12.172913 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:12.175291 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:12.693439 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:12.695046 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:13.176169 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:13.178451 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:14.179376 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:14.181536 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:14.693358 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:14.694867 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:14.696316 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:14.841929 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:19:14.849268 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:19:14.862757 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:19:14.862910 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:19:14.862967 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:19:14.862983 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:19:14.863294 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:19:14.863551 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:19:14.863684 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (5.72µs)
I1209 18:19:14.863741 20134 endpoints_controller.go:360] About to update endpoints for service "test/deployment-example"
I1209 18:19:14.866595 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:19:14.870785 20134 endpoints_controller.go:497] Update endpoints for test/deployment-example, ready: 0 not ready: 0
I1209 18:19:14.877544 20134 endpoints_controller.go:321] Finished syncing service "test/deployment-example" endpoints. (13.805016ms)
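
The "ready: 0 not ready: 0" result for test/deployment-example above is what keeps the REJECT rule in the restore payloads: a service whose endpoint list is empty has its cluster IP rejected instead of DNATed. An illustrative, string-level sketch of how such a line can be derived (formatting only, not the proxier's rule writer):

// no_endpoints_rule_sketch.go - illustrative only.
package main

import "fmt"

// noEndpointsRule builds the filter-table REJECT line for a service port whose endpoint list
// is empty, matching the rule that appears in the restore payload above.
func noEndpointsRule(svcPort, clusterIP string, port int) string {
    comment := fmt.Sprintf("%s has no endpoints", svcPort)
    return fmt.Sprintf("-A KUBE-SERVICES -m comment --comment %q -m tcp -p tcp -d %s/32 --dport %d -j REJECT",
        comment, clusterIP, port)
}

func main() {
    // ready: 0, not ready: 0 for test/deployment-example:8080-tcp -> REJECT its cluster IP.
    fmt.Println(noEndpointsRule("test/deployment-example:8080-tcp", "172.30.122.15", 8080))
}
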
I1209 18:19:15.160228 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:19:15.168908 20134 proxier.go:758] Syncing iptables rules
I1209 18:19:15.169013 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:19:15.181864 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:15.184299 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:15.191169 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:19:15.210055 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:19:15.226173 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:19:15.244266 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:19:15.259791 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:19:15.276238 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:19:15.291295 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:19:15.307564 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:19:15.321250 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:19:15.321846 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:19:15.336294 20134 proxier.go:751] syncProxyRules took 167.386007ms
I1209 18:19:15.336479 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:19:15.349423 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:19:15.359220 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:19:15.372241 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:19:15.385799 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:19:15.399240 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:19:15.412923 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:19:15.423963 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:19:15.433725 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:19:15.443320 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:19:16.184773 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:16.187171 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:16.203126 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:19:16.231373 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:19:16.232308 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:19:16.232760 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:19:16.308270 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:19:16.314067 20134 controller.go:113] Found 0 jobs
I1209 18:19:16.314155 20134 controller.go:116] Found 0 groups
I1209 18:19:16.332420 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:19:16.332449 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:19:16.351179 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:16.351213 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:19:16.373209 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:16.373237 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:19:16.373246 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:19:16.387652 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:16.387674 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:19:16.403821 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:19:16.403873 20134 thin_pool_watcher.go:77] thin_ls(1481307556) took 71.458561ms
I1209 18:19:16.693366 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:16.694782 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:16.696404 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:17.020616 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:19:17.021760 20134 interface.go:93] Interface eth0 is up
I1209 18:19:17.022628 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:19:17.022721 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:19:17.022776 20134 interface.go:114] IP found 192.168.121.18
I1209 18:19:17.023705 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:19:17.024622 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:19:17.075757 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:19:17.187959 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:17.190561 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:17.333557 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:17.333782 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:17.344148 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:19:18.191372 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:18.193739 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:18.693373 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:18.695924 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:19.194735 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:19.198379 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:20.198690 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:20.201508 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:20.693436 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:20.694742 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:20.696045 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:21.201807 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:21.203783 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:21.683470 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:17 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:19:22.204171 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:22.206581 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:22.693408 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:22.694919 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:23.206877 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:23.209877 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:23.758537 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:19:24.210697 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:24.213097 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:24.693228 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:24.695579 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:24.697673 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:25.213933 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:25.216353 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:26.217066 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:26.220216 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:26.320817 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:19:26.327011 20134 controller.go:113] Found 0 jobs
I1209 18:19:26.327068 20134 controller.go:116] Found 0 groups
I1209 18:19:26.693260 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:26.694624 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:27.079921 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:19:27.080215 20134 interface.go:93] Interface eth0 is up
I1209 18:19:27.081222 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:19:27.081317 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:19:27.081385 20134 interface.go:114] IP found 192.168.121.18
I1209 18:19:27.082013 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:19:27.082063 20134 interface.go:254] Choosing IP 192.168.121.18
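(Annotation: the interface.go lines show how the node picks the IP it advertises: follow the default route to eth0, enumerate its addresses, and keep the first usable IPv4 address (192.168.121.18), skipping the link-local IPv6 one. A rough standard-library equivalent, with the interface name hard-coded purely for illustration:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        iface, err := net.InterfaceByName("eth0") // the log's default-route interface
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        addrs, err := iface.Addrs()
        if err != nil {
            fmt.Println("listing addresses failed:", err)
            return
        }
        for _, a := range addrs {
            ipnet, ok := a.(*net.IPNet)
            if !ok {
                continue
            }
            // Keep the first global IPv4 address, as interface.go does above.
            if ip4 := ipnet.IP.To4(); ip4 != nil && !ip4.IsLinkLocalUnicast() {
                fmt.Println("choosing IP", ip4)
                return
            }
        }
        fmt.Println("no usable IPv4 address found on eth0")
    }
)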
I1209 18:19:27.126767 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:19:27.220602 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:27.222849 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:27.398228 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:27.398313 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:27.402908 20134 eviction_manager.go:204] eviction manager: no resources are starved
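(Annotation: the conversion.go warnings mean cAdvisor could not attribute filesystem stats for the devicemapper-backed container, and the eviction manager then evaluates its thresholds with the signals it does have; "no resources are starved" means every observed signal is still above its eviction threshold. A toy check in the same spirit, with made-up numbers, not the real eviction logic:

    package main

    import "fmt"

    func main() {
        // Hypothetical observed signal and threshold, in bytes; the real eviction
        // manager compares several such signals (memory.available, nodefs.available, ...).
        memoryAvailable := int64(6 << 30)           // ~6 GiB free
        memoryEvictionThreshold := int64(100 << 20) // evict below ~100 MiB

        if memoryAvailable > memoryEvictionThreshold {
            fmt.Println("eviction manager: no resources are starved")
        } else {
            fmt.Println("eviction manager: memory under pressure, would start evicting pods")
        }
    }
)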
I1209 18:19:27.533527 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:19:28.223847 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:28.227416 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:28.693288 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:28.695589 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:28.697490 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:29.227985 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:29.230821 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:30.231754 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:30.234443 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:30.693366 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:30.695118 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:30.696445 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:31.203453 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:19:31.232687 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:19:31.233420 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:19:31.234782 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:31.236786 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:31.246880 20134 scheduler.go:74] DEBUG: scheduler: queue (2):
[]controller.bucket{controller.bucket{}, controller.bucket{}, controller.bucket{}, controller.bucket{}, controller.bucket{}}
I1209 18:19:31.246973 20134 scheduler.go:79] DEBUG: scheduler: position: 3 5
I1209 18:19:31.246982 20134 scheduler.go:56] DEBUG: scheduler: waiting for limit
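(Annotation: the "DEBUG: scheduler" lines come from OpenShift's rate-limited bucket scheduler that spreads periodic imagestream re-imports over time: work is distributed across a ring of buckets, one bucket is drained per tick, and "waiting for limit" means the loop is blocked on its rate limiter; here all five buckets are empty, so nothing is imported. A stripped-down sketch of that pattern, not the actual origin implementation:

    package main

    import (
        "fmt"
        "time"
    )

    // bucketRing is a hypothetical miniature of the bucketed scheduler: each tick
    // drains one bucket, then rotates, so a full pass takes len(buckets) ticks.
    type bucketRing struct {
        buckets  [][]string
        position int
    }

    func (r *bucketRing) tick() {
        for _, item := range r.buckets[r.position] {
            fmt.Println("processing", item)
        }
        r.buckets[r.position] = nil
        r.position = (r.position + 1) % len(r.buckets)
    }

    func main() {
        ring := &bucketRing{buckets: make([][]string, 5)}
        ring.buckets[0] = []string{"test/deployment-example"}

        limiter := time.NewTicker(200 * time.Millisecond) // stands in for the rate limiter
        defer limiter.Stop()
        for i := 0; i < 5; i++ {
            <-limiter.C // "waiting for limit"
            ring.tick()
            fmt.Println("position:", ring.position, len(ring.buckets))
        }
    }
)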
I1209 18:19:31.404132 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:19:31.404214 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:19:31.433865 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:31.433920 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:19:31.459724 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:31.459779 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:19:31.459794 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:19:31.474937 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:31.474985 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:19:31.491964 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:19:31.492018 20134 thin_pool_watcher.go:77] thin_ls(1481307571) took 87.894412ms
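(Annotation: exit status 127 from the thin_ls invocation above means the binary could not be found at all, so cAdvisor's thin-pool watcher reserves and releases the metadata snapshot but never gets per-device usage; the recurring "unable to get fs usage from thin pool for device 16: no cached value" lines elsewhere in this log are the downstream symptom, and the same failure repeats on the watcher's periodic refresh below. thin_ls ships with the thin-provisioning-tools / device-mapper-persistent-data package on CentOS/Fedora, though older builds may omit it. A small Go check using the same flags as the logged command to confirm whether the binary is present:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 127 usually means "command not found"; LookPath makes that explicit.
        path, err := exec.LookPath("thin_ls")
        if err != nil {
            fmt.Println("thin_ls not in PATH; install or update the package that provides it (e.g. device-mapper-persistent-data)")
            return
        }
        fmt.Println("thin_ls found at", path)

        // Reproduce the invocation from the log (needs root and a reserved
        // metadata snapshot on the thin pool to return real data).
        out, err := exec.Command(path, "--no-headers", "-m", "-o", "DEV,EXCLUSIVE_BYTES", "/dev/loop1").CombinedOutput()
        fmt.Printf("output: %q, err: %v\n", out, err)
    }
)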
I1209 18:19:31.689045 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:27 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:19:32.237576 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:32.239311 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:32.693400 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:32.695363 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:32.696938 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:33.239597 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:33.242666 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:34.243154 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:34.245781 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:34.693563 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:34.698055 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:34.699907 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:35.246559 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:35.249771 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:36.250085 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:36.252538 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:36.335545 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:19:36.343465 20134 controller.go:113] Found 0 jobs
I1209 18:19:36.343663 20134 controller.go:116] Found 0 groups
I1209 18:19:36.693283 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:36.694961 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:36.795283 20134 reflector.go:284] github.com/openshift/origin/pkg/user/cache/groups.go:38: forcing resync
I1209 18:19:37.134476 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:19:37.135621 20134 interface.go:93] Interface eth0 is up
I1209 18:19:37.136228 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:19:37.136305 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:19:37.137180 20134 interface.go:114] IP found 192.168.121.18
I1209 18:19:37.137241 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:19:37.138070 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:19:37.189220 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:19:37.252847 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:37.254850 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:37.427950 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:37.428004 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:37.473901 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:19:38.255133 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:38.257307 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:38.380604 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:19:38.693344 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:38.695229 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:38.696487 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:39.258172 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:39.260790 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:40.261367 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:40.263541 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:40.693432 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:40.696099 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:40.697689 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:41.263823 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:41.266150 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:41.694882 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:37 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:19:42.208863 20134 reflector.go:284] github.com/openshift/origin/pkg/project/auth/cache.go:189: forcing resync
I1209 18:19:42.266442 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:42.269994 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:42.693429 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:42.695830 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:43.270532 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:43.273381 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:44.273739 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:44.276086 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:44.299428 20134 worker.go:45] 0 Health Check Listeners
I1209 18:19:44.299622 20134 worker.go:46] 0 Services registered for health checking
I1209 18:19:44.693202 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:44.694742 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:44.696056 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:44.842187 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:19:44.849618 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:19:44.863012 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:19:44.863073 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:19:44.863773 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:19:44.863969 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (5.63µs)
I1209 18:19:44.864032 20134 endpoints_controller.go:360] About to update endpoints for service "test/deployment-example"
I1209 18:19:44.864888 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:19:44.865293 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:19:44.865875 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:19:44.866827 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:19:44.872151 20134 endpoints_controller.go:497] Update endpoints for test/deployment-example, ready: 0 not ready: 0
I1209 18:19:44.879298 20134 endpoints_controller.go:321] Finished syncing service "test/deployment-example" endpoints. (15.267388ms)
I1209 18:19:45.169014 20134 proxier.go:758] Syncing iptables rules
I1209 18:19:45.169057 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:19:45.190320 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:19:45.205573 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:19:45.219930 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:19:45.236297 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:45.240525 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:19:45.254245 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:19:45.270506 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:19:45.277184 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:45.279509 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:45.284522 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:19:45.297008 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:19:45.310187 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:19:45.310688 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:19:45.322865 20134 proxier.go:751] syncProxyRules took 153.858522ms
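(Annotation: the block above is one full kube-proxy iptables sync: each service port gets a KUBE-SVC-* chain with per-endpoint KUBE-SEP-* chains and "recent"-based session-affinity rules, and because test/deployment-example currently has no ready endpoints (see the endpoints controller lines earlier: "ready: 0 not ready: 0"), a filter-table REJECT rule is installed for 172.30.122.15:8080. Everything is applied in one shot via iptables-restore --noflush. A quick way to inspect the result on the host, sketched in Go around the same iptables-save calls the proxier itself logs (needs root):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Dump the nat and filter tables the proxier just rewrote and pull out the
        // Kubernetes-managed rules, including the "has no endpoints" REJECT rule.
        for _, table := range []string{"nat", "filter"} {
            out, err := exec.Command("iptables-save", "-t", table).CombinedOutput()
            if err != nil {
                fmt.Printf("iptables-save -t %s failed: %v\n", table, err)
                continue
            }
            for _, line := range strings.Split(string(out), "\n") {
                if strings.Contains(line, "KUBE-") {
                    fmt.Println(line)
                }
            }
        }
    }
)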
I1209 18:19:45.322933 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:19:45.332365 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:19:45.341274 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:19:45.349631 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:19:45.358271 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:19:45.380583 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:19:45.399137 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:19:45.415520 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:19:45.435932 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:19:45.453415 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
I1209 18:19:45.590629 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:19:45.883942 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-user-1001.mount: invalid container name
I1209 18:19:45.884032 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-user-1001.mount"
I1209 18:19:45.884057 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-user-1001.mount", but ignoring.
I1209 18:19:45.884158 20134 manager.go:843] ignoring container "/system.slice/run-user-1001.mount"
I1209 18:19:45.885316 20134 factory.go:104] Error trying to work out if we can handle /system.slice/run-docker-netns-default.mount: invalid container name
I1209 18:19:45.885372 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-default.mount"
I1209 18:19:45.885876 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-default.mount", but ignoring.
I1209 18:19:45.885921 20134 manager.go:843] ignoring container "/system.slice/run-docker-netns-default.mount"
I1209 18:19:45.886491 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-debug.mount: invalid container name
I1209 18:19:45.886520 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-debug.mount"
I1209 18:19:45.886950 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
I1209 18:19:45.887264 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-debug.mount"
I1209 18:19:45.887344 20134 factory.go:104] Error trying to work out if we can handle /system.slice/-.mount: invalid container name
I1209 18:19:45.887884 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/-.mount"
I1209 18:19:45.887958 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/-.mount", but ignoring.
I1209 18:19:45.888560 20134 manager.go:843] ignoring container "/system.slice/-.mount"
I1209 18:19:45.890431 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount: error inspecting container: Error: No such container: 31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f
I1209 18:19:45.890448 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:19:45.890467 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount", but ignoring.
I1209 18:19:45.890476 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper-mnt-31124fc842b59ad64bc83362cd3a95d06487dfe3da4abe9c3c25a17cf11f5a7f.mount"
I1209 18:19:45.890491 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-hugepages.mount: invalid container name
I1209 18:19:45.890496 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-hugepages.mount"
I1209 18:19:45.890502 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
I1209 18:19:45.890508 20134 manager.go:843] ignoring container "/system.slice/dev-hugepages.mount"
I1209 18:19:45.890518 20134 factory.go:104] Error trying to work out if we can handle /system.slice/sys-kernel-config.mount: invalid container name
I1209 18:19:45.890522 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/sys-kernel-config.mount"
I1209 18:19:45.890528 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
I1209 18:19:45.890534 20134 manager.go:843] ignoring container "/system.slice/sys-kernel-config.mount"
I1209 18:19:45.890542 20134 factory.go:104] Error trying to work out if we can handle /system.slice/dev-mqueue.mount: invalid container name
I1209 18:19:45.890546 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/dev-mqueue.mount"
I1209 18:19:45.890552 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
I1209 18:19:45.890557 20134 manager.go:843] ignoring container "/system.slice/dev-mqueue.mount"
I1209 18:19:45.890569 20134 factory.go:104] Error trying to work out if we can handle /system.slice/var-lib-docker-devicemapper.mount: invalid container name
I1209 18:19:45.890573 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-devicemapper.mount"
I1209 18:19:45.890582 20134 factory.go:108] Factory "systemd" can handle container "/system.slice/var-lib-docker-devicemapper.mount", but ignoring.
I1209 18:19:45.890588 20134 manager.go:843] ignoring container "/system.slice/var-lib-docker-devicemapper.mount"
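(Annotation: the factory.go noise above is cAdvisor walking every cgroup it can see: for each systemd .mount unit under /system.slice the docker factory rejects it ("invalid container name" or "No such container"), the systemd factory claims it but deliberately ignores mount units, and the manager then skips it, so these lines are harmless. A small sketch that lists the same .mount cgroups on a systemd host; the cgroup v1 hierarchy path is assumed and differs on cgroup v2:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const dir = "/sys/fs/cgroup/systemd/system.slice" // cgroup v1 layout, as on the host in this log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("could not read", dir, "-", err)
            return
        }
        for _, e := range entries {
            // These are the cgroups cAdvisor logs as "... but ignoring." above.
            if e.IsDir() && strings.HasSuffix(e.Name(), ".mount") {
                fmt.Println("ignoring container \"/system.slice/" + e.Name() + "\"")
            }
        }
    }
)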
I1209 18:19:45.898818 20134 iptables.go:362] running iptables -N [KUBE-MARK-DROP -t nat]
I1209 18:19:45.913041 20134 iptables.go:362] running iptables -C [KUBE-MARK-DROP -t nat -j MARK --set-xmark 0x00008000/0x00008000]
I1209 18:19:45.929151 20134 iptables.go:362] running iptables -N [KUBE-FIREWALL -t filter]
I1209 18:19:45.944870 20134 iptables.go:362] running iptables -C [KUBE-FIREWALL -t filter -m comment --comment kubernetes firewall for dropping marked packets -m mark --mark 0x00008000/0x00008000 -j DROP]
I1209 18:19:45.960746 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -j KUBE-FIREWALL]
I1209 18:19:45.975155 20134 iptables.go:362] running iptables -C [INPUT -t filter -j KUBE-FIREWALL]
I1209 18:19:45.991198 20134 iptables.go:362] running iptables -N [KUBE-MARK-MASQ -t nat]
I1209 18:19:46.007481 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:19:46.021681 20134 iptables.go:362] running iptables -C [KUBE-MARK-MASQ -t nat -j MARK --set-xmark 0x00004000/0x00004000]
I1209 18:19:46.037416 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:19:46.050078 20134 iptables.go:362] running iptables -C [KUBE-POSTROUTING -t nat -m comment --comment kubernetes service traffic requiring SNAT -m mark --mark 0x00004000/0x00004000 -j MASQUERADE]
I1209 18:19:46.203684 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:19:46.231828 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:19:46.232883 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:19:46.233557 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:19:46.234494 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:206: forcing resync
I1209 18:19:46.235016 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:209: forcing resync
I1209 18:19:46.235057 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:350: forcing resync
I1209 18:19:46.235595 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:302: forcing resync
I1209 18:19:46.236237 20134 image_change_controller.go:37] Build image change controller detected ImageStream change
I1209 18:19:46.255967 20134 reflector.go:284] github.com/openshift/origin/pkg/project/controller/factory.go:36: forcing resync
I1209 18:19:46.265168 20134 reflector.go:284] github.com/openshift/origin/pkg/build/controller/factory/factory.go:90: forcing resync
I1209 18:19:46.279856 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:46.281953 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:46.354800 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:19:46.362011 20134 controller.go:113] Found 0 jobs
I1209 18:19:46.362081 20134 controller.go:116] Found 0 groups
I1209 18:19:46.401191 20134 reflector.go:284] github.com/openshift/origin/pkg/service/controller/servingcert/secret_creating_controller.go:118: forcing resync
I1209 18:19:46.401581 20134 secret_creating_controller.go:103] Updating service kubernetes
I1209 18:19:46.401839 20134 secret_creating_controller.go:103] Updating service deployment-example
I1209 18:19:46.492262 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:19:46.492364 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:19:46.528930 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:46.529293 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:19:46.562143 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:46.562498 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:19:46.562682 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:19:46.584979 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:19:46.585227 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:19:46.607844 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:19:46.608200 20134 thin_pool_watcher.go:77] thin_ls(1481307586) took 115.933414ms
I1209 18:19:46.693489 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:46.695192 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:46.696444 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:47.192964 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:19:47.194034 20134 interface.go:93] Interface eth0 is up
I1209 18:19:47.194949 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:19:47.195084 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:19:47.195117 20134 interface.go:114] IP found 192.168.121.18
I1209 18:19:47.195935 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:19:47.196888 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:19:47.246965 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:19:47.282392 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:47.285720 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:47.518932 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:47.518972 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:47.536923 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:19:48.286029 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:48.288991 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:48.420463 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:19:48.693440 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:48.695695 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:49.289386 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:49.291201 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:50.291557 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:50.293758 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:50.693404 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:50.695291 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:50.696746 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:51.294630 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:51.297175 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:51.701139 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:47 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:47 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:47 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:47 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:19:52.297909 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:52.300269 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:52.693406 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:52.695864 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:52.697369 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:53.300500 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:53.303366 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:54.303678 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:54.306282 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:54.693262 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:54.695832 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:55.306573 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:55.308925 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:56.309261 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:56.311375 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:56.368508 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:19:56.375522 20134 controller.go:113] Found 0 jobs
I1209 18:19:56.376101 20134 controller.go:116] Found 0 groups
I1209 18:19:56.693376 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:56.694862 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:56.695867 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:57.249082 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:19:57.249389 20134 interface.go:93] Interface eth0 is up
I1209 18:19:57.250287 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:19:57.250376 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:19:57.250406 20134 interface.go:114] IP found 192.168.121.18
I1209 18:19:57.251045 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:19:57.251097 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:19:57.296074 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:19:57.311734 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:57.313430 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:57.590365 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:57.590394 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:19:57.593593 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:19:58.313785 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:58.316392 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:58.693469 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:19:58.695938 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:58.697781 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:19:59.317297 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:19:59.321691 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:00.322035 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:00.325468 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:00.693380 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:00.695728 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:01.203998 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:20:01.233095 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:20:01.233720 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:20:01.325669 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:01.327541 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:01.608588 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:20:01.608631 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:20:01.640622 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:01.641731 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:20:01.673955 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:01.674005 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:20:01.674022 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:20:01.697679 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:01.697757 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
I1209 18:20:01.709509 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:47 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:47 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:47 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:47 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:57 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:57 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:57 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:57 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
E1209 18:20:01.727935 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:20:01.728026 20134 thin_pool_watcher.go:77] thin_ls(1481307601) took 119.446264ms
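A hedged diagnostic sketch, not part of the log: exit status 127 from the shell normally means the binary was not found on PATH, and on CentOS 7 hosts thin_ls is shipped by the device-mapper-persistent-data package (assumed package name), so the missing tool can be checked and installed before the watcher retries.

# Check whether thin_ls is available on PATH; exit status 127 above suggests it is not.
command -v thin_ls || echo "thin_ls not found"
# Assumed package name on CentOS 7 / Fedora hosts:
sudo yum install -y device-mapper-persistent-data
# Re-run the exact command the watcher attempted:
sudo thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1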
I1209 18:20:01.985078 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:20:02.327810 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:02.330688 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:02.693296 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:02.694923 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:02.696275 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:03.331434 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:03.334025 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:04.334883 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:04.337576 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:04.694460 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:04.699815 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:04.701541 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:05.337846 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:05.342702 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:06.343036 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:06.347536 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:06.385563 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:20:06.394026 20134 controller.go:113] Found 0 jobs
I1209 18:20:06.394064 20134 controller.go:116] Found 0 groups
I1209 18:20:06.693378 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:06.695158 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:07.302835 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:20:07.303158 20134 interface.go:93] Interface eth0 is up
I1209 18:20:07.304269 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:20:07.304367 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:20:07.304395 20134 interface.go:114] IP found 192.168.121.18
I1209 18:20:07.305269 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:20:07.305358 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:20:07.348394 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:07.356281 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:20:07.356609 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:07.640779 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:20:07.640815 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:20:07.651253 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:20:07.664398 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:20:08.357704 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:08.359610 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:08.693316 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:08.695557 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:08.697049 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:09.359943 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:09.361731 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:10.027449 20134 node_auth.go:143] Node request attributes: namespace=, user=&user.DefaultInfo{Name:"system:openshift-node-admin", UID:"", Groups:[]string{"system:node-admins", "system:authenticated"}, Extra:map[string][]string(nil)}, attrs=authorizer.DefaultAuthorizationAttributes{Verb:"get", APIVersion:"v1", APIGroup:"", Resource:"nodes/proxy", ResourceName:"localhost", RequestAttributes:interface {}(nil), NonResourceURL:false, URL:"/containerLogs/test/deployment-example-1-deploy/deployment"}
I1209 18:20:10.029838 20134 authorizer.go:69] allowed=true, reason=allowed by cluster rule
I1209 18:20:10.029887 20134 authorizer.go:28] allowed=true, reason=allowed by cluster rule
I1209 18:20:10.032807 20134 server.go:971] GET /containerLogs/test/deployment-example-1-deploy/deployment: (5.60457ms) 200 [[Go-http-client/1.1] [::1]:49532]
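The GET above is the master proxying the deployer pod's logs through the kubelet; a hedged CLI equivalent (assumed mapping, not shown in the log) would be:

oc logs pod/deployment-example-1-deploy -c deployment -n test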
I1209 18:20:10.362086 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:10.364822 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:10.693310 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:10.694726 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:11.365218 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:11.368466 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:11.716018 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:19:57 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:19:57 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:19:57 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:19:57 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:20:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:20:07 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:20:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:20:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:20:12.369204 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:12.372822 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:12.693428 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:12.695707 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:12.697322 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:13.373308 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:13.376228 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:14.376777 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:14.379233 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:14.693303 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:14.695375 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:14.842530 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:20:14.849980 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:20:14.863322 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:20:14.863439 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:20:14.864045 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:20:14.864276 20134 endpoints_controller.go:360] About to update endpoints for service "test/deployment-example"
I1209 18:20:14.864976 20134 reflector.go:284] pkg/controller/daemon/daemoncontroller.go:237: forcing resync
I1209 18:20:14.864395 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (5.928µs)
I1209 18:20:14.865417 20134 reflector.go:284] pkg/controller/disruption/disruption.go:264: forcing resync
I1209 18:20:14.866242 20134 reflector.go:284] pkg/controller/disruption/disruption.go:267: forcing resync
I1209 18:20:14.866973 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:20:14.872965 20134 endpoints_controller.go:497] Update endpoints for test/deployment-example, ready: 0 not ready: 0
I1209 18:20:14.880300 20134 endpoints_controller.go:321] Finished syncing service "test/deployment-example" endpoints. (16.027962ms)
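With "ready: 0 not ready: 0", the service currently has no endpoints, which is why a REJECT rule for 172.30.122.15:8080 shows up in the iptables restore that follows; a hedged way to confirm this from the CLI (assumed commands, not in the log):

oc get endpoints deployment-example -n test
oc get pods -n test -o wide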
I1209 18:20:15.168939 20134 proxier.go:758] Syncing iptables rules
I1209 18:20:15.169040 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t filter]
I1209 18:20:15.193564 20134 iptables.go:362] running iptables -N [KUBE-SERVICES -t nat]
I1209 18:20:15.212311 20134 iptables.go:362] running iptables -C [OUTPUT -t filter -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:20:15.229770 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:20:15.247820 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment kubernetes service portals -j KUBE-SERVICES]
I1209 18:20:15.264183 20134 iptables.go:362] running iptables -N [KUBE-POSTROUTING -t nat]
I1209 18:20:15.280048 20134 iptables.go:362] running iptables -C [POSTROUTING -t nat -m comment --comment kubernetes postrouting rules -j KUBE-POSTROUTING]
I1209 18:20:15.296250 20134 iptables.go:298] running iptables-save [-t filter]
I1209 18:20:15.310738 20134 iptables.go:298] running iptables-save [-t nat]
I1209 18:20:15.323871 20134 proxier.go:1244] Restoring iptables rules: *filter
:KUBE-SERVICES - [0:0]
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp has no endpoints" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j REJECT
COMMIT
*nat
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-LLAPZ6I53VN3DC7H - [0:0]
:KUBE-SVC-3VQ6B3MLH7E2SZT4 - [0:0]
:KUBE-SEP-FQLCC2RAIW2XP5BY - [0:0]
:KUBE-SVC-BA6I5HTZKAAAJT56 - [0:0]
:KUBE-SEP-U462OSCUJL4Y6JKA - [0:0]
:KUBE-SVC-7FAS7WLN46SI3LNQ - [0:0]
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x00004000/0x00004000 -j MASQUERADE
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SERVICES -m comment --comment "default/kubernetes:https cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --rcheck --seconds 180 --reap -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment default/kubernetes:https -j KUBE-SEP-LLAPZ6I53VN3DC7H
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-LLAPZ6I53VN3DC7H -m comment --comment default/kubernetes:https -m recent --name KUBE-SEP-LLAPZ6I53VN3DC7H --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8443
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns cluster IP" -m udp -p udp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-3VQ6B3MLH7E2SZT4
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --rcheck --seconds 180 --reap -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SVC-3VQ6B3MLH7E2SZT4 -m comment --comment default/kubernetes:dns -j KUBE-SEP-FQLCC2RAIW2XP5BY
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-FQLCC2RAIW2XP5BY -m comment --comment default/kubernetes:dns -m recent --name KUBE-SEP-FQLCC2RAIW2XP5BY --set -m udp -p udp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "default/kubernetes:dns-tcp cluster IP" -m tcp -p tcp -d 172.30.0.1/32 --dport 53 -j KUBE-SVC-BA6I5HTZKAAAJT56
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --rcheck --seconds 180 --reap -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SVC-BA6I5HTZKAAAJT56 -m comment --comment default/kubernetes:dns-tcp -j KUBE-SEP-U462OSCUJL4Y6JKA
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -s 192.168.121.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-U462OSCUJL4Y6JKA -m comment --comment default/kubernetes:dns-tcp -m recent --name KUBE-SEP-U462OSCUJL4Y6JKA --set -m tcp -p tcp -j DNAT --to-destination 192.168.121.18:8053
-A KUBE-SERVICES -m comment --comment "test/deployment-example:8080-tcp cluster IP" -m tcp -p tcp -d 172.30.122.15/32 --dport 8080 -j KUBE-SVC-7FAS7WLN46SI3LNQ
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
COMMIT
I1209 18:20:15.324736 20134 iptables.go:339] running iptables-restore [--noflush --counters]
I1209 18:20:15.342316 20134 proxier.go:751] syncProxyRules took 173.380755ms
I1209 18:20:15.342552 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-CONTAINER -t nat]
I1209 18:20:15.355163 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-CONTAINER]
I1209 18:20:15.365012 20134 iptables.go:362] running iptables -N [KUBE-PORTALS-HOST -t nat]
I1209 18:20:15.374922 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUBE-PORTALS-HOST]
I1209 18:20:15.379592 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:15.381659 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:15.386828 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-CONTAINER -t nat]
I1209 18:20:15.397176 20134 iptables.go:362] running iptables -C [PREROUTING -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-CONTAINER]
I1209 18:20:15.406683 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-HOST -t nat]
I1209 18:20:15.416091 20134 iptables.go:362] running iptables -C [OUTPUT -t nat -m addrtype --dst-type LOCAL -m comment --comment handle service NodePorts; NOTE: this must be the last rule in the chain -j KUBE-NODEPORT-HOST]
I1209 18:20:15.428290 20134 iptables.go:362] running iptables -N [KUBE-NODEPORT-NON-LOCAL -t filter]
I1209 18:20:15.437984 20134 iptables.go:362] running iptables -C [INPUT -t filter -m comment --comment Ensure that non-local NodePort traffic can flow -j KUBE-NODEPORT-NON-LOCAL]
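A hedged inspection sketch, not part of the log, to view the NAT rules the proxier just restored on the host:

# Dump the nat table and filter for the kube-proxy chains referenced above.
sudo iptables-save -t nat | grep -E 'KUBE-(SERVICES|SVC|SEP|NODEPORTS)'
# Or list the top-level services chain with rule numbers:
sudo iptables -t nat -L KUBE-SERVICES -n --line-numbers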
I1209 18:20:16.204374 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:20:16.232163 20134 reflector.go:284] pkg/controller/petset/pet_set.go:147: forcing resync
I1209 18:20:16.233439 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:20:16.234022 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:20:16.382049 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:16.384653 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:16.401128 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:20:16.407046 20134 controller.go:113] Found 0 jobs
I1209 18:20:16.407138 20134 controller.go:116] Found 0 groups
I1209 18:20:16.693355 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:16.695565 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:16.696960 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:16.728532 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:20:16.728590 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:20:16.757218 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:16.757316 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:20:16.787172 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:16.787259 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:20:16.787284 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:20:16.805121 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:16.805183 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:20:16.823650 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:20:16.823730 20134 thin_pool_watcher.go:77] thin_ls(1481307616) took 95.203135ms
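The watcher's full snapshot cycle can be reproduced by hand with the same commands it logs (device names taken from this log; adjust for your host), which is a hedged way to see the thin_ls output and failure directly:

sudo dmsetup status docker-252:1-262311-pool
sudo dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
sudo thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
# Always release the snapshot afterwards, as the watcher does:
sudo dmsetup message docker-252:1-262311-pool 0 release_metadata_snap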
I1209 18:20:17.360577 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:20:17.360853 20134 interface.go:93] Interface eth0 is up
I1209 18:20:17.360979 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:20:17.362190 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:20:17.362248 20134 interface.go:114] IP found 192.168.121.18
I1209 18:20:17.362317 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:20:17.362384 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:20:17.384988 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:17.389145 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:17.412287 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:20:17.684242 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:20:17.684314 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:20:17.717724 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:20:18.389411 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:18.391994 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:18.693314 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:18.695274 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:19.392291 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:19.394365 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:20.375181 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:20:20.395182 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:20.398086 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:20.693352 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:20.696445 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:20.697758 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:21.399017 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:21.402127 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:21.721886 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:20:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:20:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:20:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:20:07 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:20:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:20:17 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:20:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:20:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:20:22.402946 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:22.406696 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:22.693257 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:22.695960 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:23.407002 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:23.411596 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:24.411842 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:24.414706 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:24.694296 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:20:24.694526 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:24.696802 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:24.698567 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:25.415040 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:25.417512 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:26.419734 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:26.427236 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:26.427582 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:20:26.433437 20134 controller.go:113] Found 0 jobs
I1209 18:20:26.433582 20134 controller.go:116] Found 0 groups
I1209 18:20:26.693305 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:26.695550 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:27.414867 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:20:27.415044 20134 interface.go:93] Interface eth0 is up
I1209 18:20:27.415136 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:20:27.415160 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:20:27.415169 20134 interface.go:114] IP found 192.168.121.18
I1209 18:20:27.415179 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:20:27.416630 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:20:27.428632 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:27.431690 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:27.473147 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:20:27.753890 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:20:27.753929 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:20:27.777356 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:20:28.432017 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:28.436907 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:28.693287 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:28.694702 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:28.696208 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:29.437291 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:29.440590 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:30.441020 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:30.444188 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:30.693359 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:30.695098 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:30.696619 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:31.204738 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:448: forcing resync
I1209 18:20:31.233850 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:153: forcing resync
I1209 18:20:31.234535 20134 reflector.go:284] pkg/controller/volume/persistentvolume/pv_controller_base.go:449: forcing resync
I1209 18:20:31.444563 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:31.447226 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:31.727367 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Allocatable:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:20:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:20:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:20:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:20:17 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:20:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:20:27 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:20:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:20:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:20:31.823983 20134 thin_pool_watcher.go:160] checking whether the thin-pool is holding a metadata snapshot
I1209 18:20:31.824029 20134 dmsetup_client.go:61] running dmsetup status docker-252:1-262311-pool
I1209 18:20:31.846477 20134 thin_pool_watcher.go:126] reserving metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:31.846522 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 reserve_metadata_snap
I1209 18:20:31.868984 20134 thin_pool_watcher.go:133] reserved metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:31.869022 20134 thin_pool_watcher.go:141] running thin_ls on metadata device /dev/loop1
I1209 18:20:31.869206 20134 thin_ls_client.go:56] running command: thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1
I1209 18:20:31.882075 20134 thin_pool_watcher.go:137] releasing metadata snapshot for thin-pool docker-252:1-262311-pool
I1209 18:20:31.882099 20134 dmsetup_client.go:61] running dmsetup message docker-252:1-262311-pool 0 release_metadata_snap
E1209 18:20:31.896802 20134 thin_pool_watcher.go:72] encountered error refreshing thin pool watcher: error performing thin_ls on metadata device /dev/loop1: Error running command `thin_ls --no-headers -m -o DEV,EXCLUSIVE_BYTES /dev/loop1`: exit status 127
output:
I1209 18:20:31.896845 20134 thin_pool_watcher.go:77] thin_ls(1481307631) took 72.872084ms
I1209 18:20:32.447570 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:32.450939 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:32.693322 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:32.695226 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:33.451256 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:33.454417 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:34.454840 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:34.457092 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:34.693309 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:34.695155 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:34.696585 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:35.457442 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:35.460726 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:36.059396 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:20:36.440377 20134 controller.go:105] Found 0 scheduledjobs
I1209 18:20:36.447615 20134 controller.go:113] Found 0 jobs
I1209 18:20:36.447703 20134 controller.go:116] Found 0 groups
I1209 18:20:36.461095 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:36.463450 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:36.693270 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:36.695022 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:36.696704 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:37.359600 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-hostnamed.service: invalid container name
I1209 18:20:37.359642 20134 factory.go:115] Factory "docker" was unable to handle container "/system.slice/systemd-hostnamed.service"
I1209 18:20:37.359666 20134 factory.go:104] Error trying to work out if we can handle /system.slice/systemd-hostnamed.service: /system.slice/systemd-hostnamed.service not handled by systemd handler
I1209 18:20:37.359676 20134 factory.go:115] Factory "systemd" was unable to handle container "/system.slice/systemd-hostnamed.service"
I1209 18:20:37.359686 20134 factory.go:111] Using factory "raw" for container "/system.slice/systemd-hostnamed.service"
I1209 18:20:37.360050 20134 manager.go:874] Added container: "/system.slice/systemd-hostnamed.service" (aliases: [], namespace: "")
I1209 18:20:37.360389 20134 handler.go:325] Added event &{/system.slice/systemd-hostnamed.service 2016-12-09 18:20:37.356252844 +0000 UTC containerCreation {<nil>}}
I1209 18:20:37.360520 20134 container.go:407] Start housekeeping for container "/system.slice/systemd-hostnamed.service"
I1209 18:20:37.465662 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:37.468100 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:37.476507 20134 interface.go:248] Default route transits interface "eth0"
I1209 18:20:37.477149 20134 interface.go:93] Interface eth0 is up
I1209 18:20:37.477680 20134 interface.go:138] Interface "eth0" has 2 addresses :[192.168.121.18/24 fe80::5054:ff:fe56:f2f/64].
I1209 18:20:37.477723 20134 interface.go:105] Checking addr 192.168.121.18/24.
I1209 18:20:37.477735 20134 interface.go:114] IP found 192.168.121.18
I1209 18:20:37.477746 20134 interface.go:144] valid IPv4 address for interface "eth0" found as 192.168.121.18.
I1209 18:20:37.477755 20134 interface.go:254] Choosing IP 192.168.121.18
I1209 18:20:37.531245 20134 attach_detach_controller.go:520] processVolumesInUse for node "localhost"
I1209 18:20:37.584884 20134 handler.go:300] unable to get fs usage from thin pool for device 16: no cached value for usage of device 16
I1209 18:20:37.806048 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:20:37.806085 20134 conversion.go:133] failed to handle multiple devices for container. Skipping Filesystem stats
I1209 18:20:37.839695 20134 eviction_manager.go:204] eviction manager: no resources are starved
I1209 18:20:38.468409 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:38.470941 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:38.693373 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:38.694963 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:39.471320 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:39.474287 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:40.475120 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:40.477705 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:40.693395 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:40.695696 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:40.697608 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:41.478158 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:41.481405 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:41.733117 20134 nodecontroller.go:816] Node localhost ReadyCondition updated. Updating timestamp: {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:20:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:20:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:20:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:20:27 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]} vs {Capacity:map[alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI} cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI}] Allocatable:map[cpu:{i:{value:2 scale:0} d:{Dec:<nil>} s:2 Format:DecimalSI} memory:{i:{value:8223051776 scale:0} d:{Dec:<nil>} s:8030324Ki Format:BinarySI} pods:{i:{value:20 scale:0} d:{Dec:<nil>} s:20 Format:DecimalSI} alpha.kubernetes.io/nvidia-gpu:{i:{value:0 scale:0} d:{Dec:<nil>} s:0 Format:DecimalSI}] Phase: Conditions:[{Type:OutOfDisk Status:False LastHeartbeatTime:2016-12-09 18:20:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientDisk Message:kubelet has sufficient disk space available} {Type:MemoryPressure Status:False LastHeartbeatTime:2016-12-09 18:20:37 +0000 
UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasSufficientMemory Message:kubelet has sufficient memory available} {Type:DiskPressure Status:False LastHeartbeatTime:2016-12-09 18:20:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:45 +0000 UTC Reason:KubeletHasNoDiskPressure Message:kubelet has no disk pressure} {Type:Ready Status:True LastHeartbeatTime:2016-12-09 18:20:37 +0000 UTC LastTransitionTime:2016-12-09 18:15:55 +0000 UTC Reason:KubeletReady Message:kubelet is posting ready status}] Addresses:[{Type:LegacyHostIP Address:192.168.121.18} {Type:InternalIP Address:192.168.121.18}] DaemonEndpoints:{KubeletEndpoint:{Port:10250}} NodeInfo:{MachineID:8629c58585b148cca9cc925e788be346 SystemUUID:8629C585-85B1-48CC-A9CC-925E788BE346 BootID:1a2f3964-acf4-4825-9949-2816c7af2060 KernelVersion:4.8.6-300.fc25.x86_64 OSImage:CentOS Linux 7 (Core) ContainerRuntimeVersion:docker://1.12.3 KubeletVersion:v1.4.0+776c994 KubeProxyVersion:v1.4.0+776c994 OperatingSystem:linux Architecture:amd64} Images:[{Names:[docker.io/openshift/origin@sha256:0c4dfc568b8dbb1ac288ab1feb955ee2a37e74fd60fc96f8efd4e35076ced06b docker.io/openshift/origin:latest] SizeBytes:540431972} {Names:[docker.io/openshift/origin-deployer@sha256:398189baa7877595df33f6da8dff1fad36a9e437883c5620b85bdf3d06a31fb5 docker.io/openshift/origin-deployer:v1.5.0-alpha.0] SizeBytes:488015134} {Names:[docker.io/openshift/origin-pod@sha256:6216367955502829e7270c3437810cd16e0e99d2973647e521cc9b871d8948f0 docker.io/openshift/origin-pod:v1.5.0-alpha.0] SizeBytes:1138998}] VolumesInUse:[] VolumesAttached:[]}.
I1209 18:20:42.482371 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:42.485209 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:42.693375 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:42.695880 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:42.697398 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:43.485657 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:43.488990 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:44.299412 20134 worker.go:45] 0 Health Check Listeners
I1209 18:20:44.299555 20134 worker.go:46] 0 Services registered for health checking
I1209 18:20:44.489865 20134 generic.go:177] GenericPLEG: Relisting
I1209 18:20:44.492396 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:44.608539 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:44.608712 20134 image_gc_manager.go:171] Pod test/deployment-example-1-deploy, container deployment uses image openshift/origin-deployer:v1.5.0-alpha.0(sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d)
I1209 18:20:44.608742 20134 image_gc_manager.go:171] Pod test/deployment-example-1-deploy, container POD uses image openshift/origin-pod:v1.5.0-alpha.0(sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de)
I1209 18:20:44.608787 20134 image_gc_manager.go:182] Adding image ID sha256:02998436bf31e2ccffbbbea70f8d713d6a785e0db7bb1c35ca831967b9a7a346 to currentImages
I1209 18:20:44.608799 20134 image_gc_manager.go:199] Image ID sha256:02998436bf31e2ccffbbbea70f8d713d6a785e0db7bb1c35ca831967b9a7a346 has size 540431972
I1209 18:20:44.608808 20134 image_gc_manager.go:182] Adding image ID sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d to currentImages
I1209 18:20:44.608816 20134 image_gc_manager.go:195] Setting Image ID sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d lastUsed to 2016-12-09 18:20:44.608779033 +0000 UTC
I1209 18:20:44.608842 20134 image_gc_manager.go:199] Image ID sha256:46451247dbc7a9a41cc208c3687553c2900c2c8dadb31e83948f5115110ea26d has size 488015134
I1209 18:20:44.608849 20134 image_gc_manager.go:182] Adding image ID sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de to currentImages
I1209 18:20:44.608856 20134 image_gc_manager.go:195] Setting Image ID sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de lastUsed to 2016-12-09 18:20:44.608779033 +0000 UTC
I1209 18:20:44.609095 20134 image_gc_manager.go:199] Image ID sha256:bb8660075f6b1f690983f8c0b119d0cc95e78e937e1982c2c0202cd97116f6de has size 1138998
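The image GC pass above enumerates the three cached origin images and their sizes; a hedged way to see the same digests on the node (assumed, not part of the log):

sudo docker images --digests | grep openshift/origin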
I1209 18:20:44.693362 20134 kubelet.go:2370] SyncLoop (housekeeping)
I1209 18:20:44.695191 20134 docker.go:500] Docker Container: /origin is not managed by kubelet.
I1209 18:20:44.842844 20134 reflector.go:284] pkg/controller/replicaset/replica_set.go:205: forcing resync
I1209 18:20:44.850357 20134 reflector.go:284] pkg/controller/deployment/deployment_controller.go:181: forcing resync
I1209 18:20:44.863876 20134 reflector.go:284] pkg/controller/disruption/disruption.go:266: forcing resync
I1209 18:20:44.864509 20134 reflector.go:284] pkg/controller/podautoscaler/horizontal.go:135: forcing resync
I1209 18:20:44.864543 20134 reflector.go:284] pkg/controller/endpoint/endpoints_controller.go:158: forcing resync
I1209 18:20:44.864768 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (7.313µs)
I1209 18:20:44.864828 20134 endpoints_controller.go:360] About to update endpoints for service "test/deployment-example"
I1209 18:20:44.867816 20134 endpoints_controller.go:321] Finished syncing service "default/kubernetes" endpoints. (5µs)
I1209 18:20:44.869394 20134 reflector.go:284] pkg/controller/disruption/disruption.go:268: forcing resync
I1209 18:20:44.869537 201