@TheYkk
Created March 14, 2019 15:17
Rancher issue: rancher/rancher v2.1.7 server log, captured while the server provisions cluster c-h5fkv; the capture ends in repeated websocket tunnel-disconnect errors (close 1006 unexpected EOF).
vagrant@ranc:~$ docker logs \
> --timestamps \
> $(docker ps | grep -E "rancher/rancher:|rancher/rancher " | awk '{ print $1 }')
2019-03-14T15:05:59.192796794Z 2019/03/14 15:05:59 [INFO] Rancher version v2.1.7 is starting
2019-03-14T15:05:59.192841549Z 2019/03/14 15:05:59 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0}
2019-03-14T15:05:59.192847794Z 2019/03/14 15:05:59 [INFO] Listening on /tmp/log.sock
2019-03-14T15:05:59.195793842Z 2019/03/14 15:05:59 [INFO] Running etcd --peer-client-cert-auth --client-cert-auth --listen-peer-urls=https://0.0.0.0:2380 --key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --peer-cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem --peer-key-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem --election-timeout=5000 --heartbeat-interval=500 --initial-cluster-token=etcd-cluster-1 --advertise-client-urls=https://127.0.0.1:2379,https://127.0.0.1:4001 --initial-advertise-peer-urls=https://127.0.0.1:2380 --initial-cluster=etcd-master=https://127.0.0.1:2380 --initial-cluster-state=new --name=etcd-master --data-dir=/var/lib/rancher/etcd/ --listen-client-urls=https://0.0.0.0:2379 --trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --peer-trusted-ca-file=/etc/kubernetes/ssl/kube-ca.pem --cert-file=/etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem
2019-03-14T15:05:59.196122457Z 2019-03-14 15:05:59.196004 I | etcdmain: etcd Version: 3.2.13
2019-03-14T15:05:59.196150694Z 2019-03-14 15:05:59.196021 I | etcdmain: Git SHA: Not provided (use ./build instead of go build)
2019-03-14T15:05:59.196153782Z 2019-03-14 15:05:59.196025 I | etcdmain: Go Version: go1.11
2019-03-14T15:05:59.196156218Z 2019-03-14 15:05:59.196027 I | etcdmain: Go OS/Arch: linux/amd64
2019-03-14T15:05:59.196158553Z 2019-03-14 15:05:59.196030 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2019-03-14T15:05:59.196160978Z 2019-03-14 15:05:59.196065 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-03-14T15:05:59.196163424Z 2019-03-14 15:05:59.196085 I | embed: peerTLS: cert = /etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem, key = /etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem, ca = , trusted-ca = /etc/kubernetes/ssl/kube-ca.pem, client-cert-auth = true
2019-03-14T15:05:59.196462841Z 2019-03-14 15:05:59.196412 I | embed: listening for peers on https://0.0.0.0:2380
2019-03-14T15:05:59.196516503Z 2019-03-14 15:05:59.196440 I | embed: listening for client requests on 0.0.0.0:2379
2019-03-14T15:05:59.201102079Z 2019-03-14 15:05:59.200985 I | etcdserver: name = etcd-master
2019-03-14T15:05:59.201118779Z 2019-03-14 15:05:59.201004 I | etcdserver: data dir = /var/lib/rancher/etcd/
2019-03-14T15:05:59.201121736Z 2019-03-14 15:05:59.201008 I | etcdserver: member dir = /var/lib/rancher/etcd/member
2019-03-14T15:05:59.201124241Z 2019-03-14 15:05:59.201011 I | etcdserver: heartbeat = 500ms
2019-03-14T15:05:59.201126557Z 2019-03-14 15:05:59.201014 I | etcdserver: election = 5000ms
2019-03-14T15:05:59.201128953Z 2019-03-14 15:05:59.201017 I | etcdserver: snapshot count = 100000
2019-03-14T15:05:59.201131248Z 2019-03-14 15:05:59.201027 I | etcdserver: advertise client URLs = https://127.0.0.1:2379,https://127.0.0.1:4001
2019-03-14T15:05:59.201139848Z 2019-03-14 15:05:59.201064 W | wal: ignored file 0000000000000000-0000000000000000.wal.broken in wal
2019-03-14T15:05:59.379254298Z 2019-03-14 15:05:59.378921 I | etcdserver: restarting member e92d66acd89ecf29 in cluster 7581d6eb2d25405b at commit index 38994
2019-03-14T15:05:59.380367647Z 2019-03-14 15:05:59.380270 I | raft: e92d66acd89ecf29 became follower at term 5
2019-03-14T15:05:59.380376919Z 2019-03-14 15:05:59.380289 I | raft: newRaft e92d66acd89ecf29 [peers: [], term: 5, commit: 38994, applied: 0, lastindex: 38994, lastterm: 5]
2019-03-14T15:05:59.380901741Z 2019-03-14 15:05:59.380851 I | mvcc: restore compact to 34598
2019-03-14T15:05:59.461433978Z 2019-03-14 15:05:59.461158 W | auth: simple token is not cryptographically signed
2019-03-14T15:05:59.462465268Z 2019-03-14 15:05:59.462366 I | etcdserver: starting server... [version: 3.2.13, cluster version: to_be_decided]
2019-03-14T15:05:59.462772227Z 2019-03-14 15:05:59.462732 I | embed: ClientTLS: cert = /etc/kubernetes/ssl/kube-etcd-127-0-0-1.pem, key = /etc/kubernetes/ssl/kube-etcd-127-0-0-1-key.pem, ca = , trusted-ca = /etc/kubernetes/ssl/kube-ca.pem, client-cert-auth = true
2019-03-14T15:05:59.463154900Z 2019-03-14 15:05:59.463072 I | etcdserver/membership: added member e92d66acd89ecf29 [https://127.0.0.1:2380] to cluster 7581d6eb2d25405b
2019-03-14T15:05:59.463177595Z 2019-03-14 15:05:59.463149 N | etcdserver/membership: set the initial cluster version to 3.2
2019-03-14T15:05:59.463238770Z 2019-03-14 15:05:59.463172 I | etcdserver/api: enabled capabilities for version 3.2
2019-03-14T15:06:02.882782684Z 2019-03-14 15:06:02.882632 I | raft: e92d66acd89ecf29 is starting a new election at term 5
2019-03-14T15:06:02.882800636Z 2019-03-14 15:06:02.882699 I | raft: e92d66acd89ecf29 became candidate at term 6
2019-03-14T15:06:02.882804446Z 2019-03-14 15:06:02.882714 I | raft: e92d66acd89ecf29 received MsgVoteResp from e92d66acd89ecf29 at term 6
2019-03-14T15:06:02.882807633Z 2019-03-14 15:06:02.882726 I | raft: e92d66acd89ecf29 became leader at term 6
2019-03-14T15:06:02.882932925Z 2019-03-14 15:06:02.882731 I | raft: raft.node: e92d66acd89ecf29 elected leader e92d66acd89ecf29 at term 6
2019-03-14T15:06:02.883140704Z 2019-03-14 15:06:02.883054 I | etcdserver: published {Name:etcd-master ClientURLs:[https://127.0.0.1:2379 https://127.0.0.1:4001]} to cluster 7581d6eb2d25405b
2019-03-14T15:06:02.883148042Z 2019-03-14 15:06:02.883083 I | embed: ready to serve client requests
2019-03-14T15:06:02.883332996Z 2019-03-14 15:06:02.883282 I | embed: serving client requests on [::]:2379
2019-03-14T15:06:02.890786116Z 2019/03/14 15:06:02 [INFO] Running kube-apiserver --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem --etcd-prefix=/registry --authorization-mode=Node,RBAC --secure-port=6443 --requestheader-allowed-names= --cloud-provider= --etcd-certfile=/etc/kubernetes/ssl/kube-node.pem --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --proxy-client-cert-file= --allow-privileged=true --requestheader-group-headers= --requestheader-username-headers= --service-account-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem --requestheader-client-ca-file= --advertise-address=10.43.0.1 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --etcd-cafile=/etc/kubernetes/ssl/kube-ca.pem --etcd-servers=https://127.0.0.1:2379 --insecure-bind-address=127.0.0.1 --bind-address=127.0.0.1 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix= --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --endpoint-reconciler-type=lease --storage-backend=etcd3 --service-cluster-ip-range=10.43.0.0/16 --etcd-keyfile=/etc/kubernetes/ssl/kube-node-key.pem --proxy-client-key-file= --insecure-port=0 --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota -v=1 --logtostderr=false --alsologtostderr=false
2019-03-14T15:06:02.891143546Z 2019/03/14 15:06:02 [INFO] Activating driver import
2019-03-14T15:06:02.891243113Z 2019/03/14 15:06:02 [INFO] Activating driver import done
2019-03-14T15:06:02.891261466Z 2019/03/14 15:06:02 [INFO] Activating driver rke
2019-03-14T15:06:02.891285633Z 2019/03/14 15:06:02 [INFO] Activating driver rke done
2019-03-14T15:06:02.891290394Z 2019/03/14 15:06:02 [INFO] Activating driver gke
2019-03-14T15:06:02.891292781Z 2019/03/14 15:06:02 [INFO] Activating driver gke done
2019-03-14T15:06:02.891295116Z 2019/03/14 15:06:02 [INFO] Activating driver aks
2019-03-14T15:06:02.891311405Z 2019/03/14 15:06:02 [INFO] Activating driver aks done
2019-03-14T15:06:02.891315033Z 2019/03/14 15:06:02 [INFO] Activating driver eks
2019-03-14T15:06:02.891329628Z 2019/03/14 15:06:02 [INFO] Activating driver eks done
2019-03-14T15:06:05.644570732Z [restful] 2019/03/14 15:06:05 log.go:33: [restful/swagger] listing is available at https://10.43.0.1:6443/swaggerapi
2019-03-14T15:06:05.644595691Z [restful] 2019/03/14 15:06:05 log.go:33: [restful/swagger] https://10.43.0.1:6443/swaggerui/ is mapped to folder /swagger-ui/
2019-03-14T15:06:07.064527499Z [restful] 2019/03/14 15:06:07 log.go:33: [restful/swagger] listing is available at https://10.43.0.1:6443/swaggerapi
2019-03-14T15:06:07.064563004Z [restful] 2019/03/14 15:06:07 log.go:33: [restful/swagger] https://10.43.0.1:6443/swaggerui/ is mapped to folder /swagger-ui/
2019-03-14T15:06:10.370934367Z 2019/03/14 15:06:10 [INFO] Running kube-controller-manager --enable-hostpath-provisioner=false --service-cluster-ip-range=10.43.0.0/16 --allow-untagged-cloud=true --leader-elect=true --cloud-provider= --configure-cloud-routes=false --node-monitor-grace-period=40s --allocate-node-cidrs=true --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-controller-manager.yaml --service-account-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem --address=0.0.0.0 --pod-eviction-timeout=5m0s --v=2 --cluster-cidr=10.42.0.0/16 --root-ca-file=/etc/kubernetes/ssl/kube-ca.pem --use-service-account-credentials=true -v=1 --logtostderr=false --alsologtostderr=false --controllers * --controllers -resourcequota --controllers -service
2019-03-14T15:06:10.373982877Z 2019/03/14 15:06:10 [INFO] Running in single server mode, will not peer connections
2019-03-14T15:06:10.457572827Z E0314 15:06:10.457453 6 memcache.go:134] couldn't get resource list for management.cattle.io/v3: the server could not find the requested resource
2019-03-14T15:06:10.457920643Z E0314 15:06:10.457829 6 memcache.go:134] couldn't get resource list for project.cattle.io/v3: the server could not find the requested resource
2019-03-14T15:06:10.504448279Z 2019/03/14 15:06:10 [INFO] Starting API controllers
2019-03-14T15:06:10.805669803Z 2019/03/14 15:06:10 [INFO] Listening on :443
2019-03-14T15:06:10.805726076Z 2019/03/14 15:06:10 [INFO] Listening on :80
2019-03-14T15:06:57.823022912Z 2019/03/14 15:06:57 [INFO] Starting catalog controller
2019-03-14T15:06:57.826365792Z 2019/03/14 15:06:57 [INFO] Starting management controllers
2019-03-14T15:06:59.653474331Z 2019/03/14 15:06:59 [INFO] deleting azureConfig from node schema
2019-03-14T15:06:59.662008164Z 2019/03/14 15:06:59 [INFO] deleting azureConfig from node schema
2019-03-14T15:06:59.666814901Z 2019/03/14 15:06:59 [INFO] Reconciling GlobalRoles
2019-03-14T15:06:59.667898850Z 2019/03/14 15:06:59 [INFO] uploading digitaloceanConfig to node schema
2019-03-14T15:06:59.668033249Z 2019/03/14 15:06:59 [INFO] Deleting schema vmwarevsphereconfig
2019-03-14T15:06:59.670025381Z 2019/03/14 15:06:59 [INFO] Deleting schema amazonec2config
2019-03-14T15:06:59.672209579Z 2019/03/14 15:06:59 [INFO] uploading digitaloceanConfig to node schema
2019-03-14T15:06:59.675050055Z 2019/03/14 15:06:59 [INFO] Reconciling RoleTemplates
2019-03-14T15:06:59.678159204Z 2019/03/14 15:06:59 [INFO] Deleting schema vmwarevsphereconfig done
2019-03-14T15:06:59.683778561Z 2019/03/14 15:06:59 [INFO] deleting vmwarevsphereConfig from node schema
2019-03-14T15:06:59.691412753Z 2019/03/14 15:06:59 [INFO] deleting vmwarevsphereConfig from node schema
2019-03-14T15:06:59.691620676Z 2019/03/14 15:06:59 [INFO] Deleting schema amazonec2config done
2019-03-14T15:06:59.889030648Z 2019/03/14 15:06:59 [INFO] Creating node driver amazonec2
2019-03-14T15:06:59.890979466Z 2019/03/14 15:06:59 [INFO] Creating node driver exoscale
2019-03-14T15:06:59.892616374Z 2019/03/14 15:06:59 [INFO] Creating node driver openstack
2019-03-14T15:06:59.894540023Z 2019/03/14 15:06:59 [INFO] Creating node driver otc
2019-03-14T15:06:59.896660891Z 2019/03/14 15:06:59 [INFO] Creating node driver packet
2019-03-14T15:06:59.900593856Z 2019/03/14 15:06:59 [INFO] Creating node driver rackspace
2019-03-14T15:06:59.903062404Z 2019/03/14 15:06:59 [INFO] Creating node driver softlayer
2019-03-14T15:06:59.904478909Z 2019/03/14 15:06:59 [INFO] Creating node driver aliyunecs
2019-03-14T15:06:59.906585491Z 2019/03/14 15:06:59 [INFO] Creating node driver vmwarevsphere
2019-03-14T15:06:59.908321475Z 2019/03/14 15:06:59 [INFO] Rancher startup complete
2019-03-14T15:06:59.987222068Z 2019/03/14 15:06:59 [INFO] uploading amazonec2Config to node schema
2019-03-14T15:06:59.990985155Z 2019/03/14 15:06:59 [INFO] uploading amazonec2Config to node schema
2019-03-14T15:06:59.993038471Z 2019/03/14 15:06:59 [INFO] uploading vmwarevsphereConfig to node schema
2019-03-14T15:06:59.994935356Z 2019/03/14 15:06:59 [INFO] uploading vmwarevsphereConfig to node schema
2019-03-14T15:07:02.826553100Z time="2019-03-14 15:07:02" level=info msg="Telemetry Client v0.5.1"
2019-03-14T15:07:02.826580786Z time="2019-03-14 15:07:02" level=info msg="Listening on 0.0.0.0:8114"
2019-03-14T15:07:03.944103123Z 2019/03/14 15:07:03 [INFO] deleting amazonec2Config from node schema
2019-03-14T15:07:03.946193774Z 2019/03/14 15:07:03 [INFO] deleting amazonec2Config from node schema
2019-03-14T15:07:05.542566287Z 2019/03/14 15:07:05 [INFO] deleting digitaloceanConfig from node schema
2019-03-14T15:07:05.545716719Z 2019/03/14 15:07:05 [INFO] deleting digitaloceanConfig from node schema
2019-03-14T15:07:05.667936255Z 2019/03/14 15:07:05 [INFO] Updating catalog helm
2019-03-14T15:07:06.905103270Z 2019/03/14 15:07:06 [INFO] deleting vmwarevsphereConfig from node schema
2019-03-14T15:07:06.907240285Z 2019/03/14 15:07:06 [INFO] deleting vmwarevsphereConfig from node schema
2019-03-14T15:09:17.937322600Z 2019/03/14 15:09:17 [INFO] [mgmt-cluster-rbac-delete] Creating namespace c-h5fkv
2019-03-14T15:09:17.940465195Z 2019/03/14 15:09:17 [INFO] [mgmt-cluster-rbac-delete] Creating Default project for cluster c-h5fkv
2019-03-14T15:09:17.944496720Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Creating namespace p-xvvkh
2019-03-14T15:09:17.945540840Z 2019/03/14 15:09:17 [INFO] [mgmt-cluster-rbac-delete] Creating System project for cluster c-h5fkv
2019-03-14T15:09:17.947023346Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Creating creator projectRoleTemplateBinding for user user-8n8vs for project p-xvvkh
2019-03-14T15:09:17.947591339Z 2019/03/14 15:09:17 [INFO] [mgmt-cluster-rbac-delete] Updating cluster c-h5fkv
2019-03-14T15:09:17.947930836Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Creating namespace p-n5dxn
2019-03-14T15:09:17.950018353Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Creating creator projectRoleTemplateBinding for user user-8n8vs for project p-n5dxn
2019-03-14T15:09:17.952044236Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Setting InitialRolesPopulated condition on project p-xvvkh
2019-03-14T15:09:17.953365353Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Creating creator clusterRoleTemplateBinding for user user-8n8vs for cluster c-h5fkv
2019-03-14T15:09:17.956397945Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Setting InitialRolesPopulated condition on project p-n5dxn
2019-03-14T15:09:17.956880471Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole p-xvvkh-projectowner
2019-03-14T15:09:17.957595961Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Updating project p-xvvkh
2019-03-14T15:09:17.959557720Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole p-n5dxn-projectowner
2019-03-14T15:09:17.962417453Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Updating project p-n5dxn
2019-03-14T15:09:17.964142810Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Creating clusterRole c-h5fkv-clusterowner
2019-03-14T15:09:17.967129893Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for membership in project p-xvvkh for subject user-8n8vs
2019-03-14T15:09:17.968029871Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for membership in project p-n5dxn for subject user-8n8vs
2019-03-14T15:09:17.969414852Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Setting InitialRolesPopulated condition on cluster
2019-03-14T15:09:17.969441284Z 2019/03/14 15:09:17 [INFO] [mgmt-cluster-rbac-delete] Updating cluster c-h5fkv
2019-03-14T15:09:17.971567343Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Creating clusterRoleBinding for membership in cluster c-h5fkv for subject user-8n8vs
2019-03-14T15:09:17.972492190Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole c-h5fkv-clustermember
2019-03-14T15:09:17.972612456Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating clusterRole c-h5fkv-clustermember
2019-03-14T15:09:17.975687228Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Updating project p-xvvkh
2019-03-14T15:09:17.976508890Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating clusterRoleBinding for membership in cluster c-h5fkv for subject user-8n8vs
2019-03-14T15:09:17.979042698Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Creating role cluster-owner in namespace c-h5fkv
2019-03-14T15:09:17.982680948Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Updating project p-n5dxn
2019-03-14T15:09:17.982717014Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating role project-owner in namespace c-h5fkv
2019-03-14T15:09:17.985628649Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject user-8n8vs with role cluster-owner in namespace
2019-03-14T15:09:17.987350789Z 2019/03/14 15:09:17 [INFO] [mgmt-project-rbac-create] Updating project p-n5dxn
2019-03-14T15:09:17.988171328Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-8n8vs with role project-owner in namespace
2019-03-14T15:09:17.989503292Z 2019/03/14 15:09:17 [ERROR] ProjectRoleTemplateBindingController p-xvvkh/creator-project-owner [mgmt-auth-prtb-controller] failed with : clusterroles.rbac.authorization.k8s.io "c-h5fkv-clustermember" already exists
2019-03-14T15:09:17.989660657Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Creating role cluster-owner in namespace p-xvvkh
2019-03-14T15:09:17.989700652Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Updating clusterRoleBinding clusterrolebinding-xljrz for cluster membership in cluster c-h5fkv for subject user-8n8vs
2019-03-14T15:09:17.992048397Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating role project-owner in namespace p-n5dxn
2019-03-14T15:09:17.993395026Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-8n8vs with role project-owner in namespace
2019-03-14T15:09:17.995872450Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject user-8n8vs with role cluster-owner in namespace
2019-03-14T15:09:17.998255329Z 2019/03/14 15:09:17 [INFO] [mgmt-auth-prtb-controller] Creating role admin in namespace p-n5dxn
2019-03-14T15:09:18.000626049Z 2019/03/14 15:09:18 [INFO] [mgmt-auth-crtb-controller] Creating role cluster-owner in namespace p-n5dxn
2019-03-14T15:09:18.000811780Z 2019/03/14 15:09:18 [INFO] [mgmt-auth-prtb-controller] Creating role project-owner in namespace p-xvvkh
2019-03-14T15:09:18.001562022Z 2019/03/14 15:09:18 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-8n8vs with role project-owner in namespace
2019-03-14T15:09:18.005381188Z 2019/03/14 15:09:18 [INFO] [mgmt-auth-prtb-controller] Creating role admin in namespace p-xvvkh
2019-03-14T15:09:18.007024866Z 2019/03/14 15:09:18 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-8n8vs with role admin in namespace
2019-03-14T15:09:18.008321115Z 2019/03/14 15:09:18 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-8n8vs with role project-owner in namespace
2019-03-14T15:09:18.008668300Z 2019/03/14 15:09:18 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject user-8n8vs with role cluster-owner in namespace
2019-03-14T15:09:18.010660531Z 2019/03/14 15:09:18 [INFO] [mgmt-auth-prtb-controller] Creating roleBinding for subject user-8n8vs with role admin in namespace
2019-03-14T15:09:18.018706671Z 2019/03/14 15:09:18 [INFO] [mgmt-cluster-rbac-delete] Updating cluster c-h5fkv
2019-03-14T15:09:39.311504284Z 2019/03/14 15:09:39 [INFO] Handling backend connection request [c-h5fkv:m-46051abce881]
2019-03-14T15:09:39.323044011Z 2019/03/14 15:09:39 [INFO] Provisioning cluster [c-h5fkv]
2019-03-14T15:09:39.323084949Z 2019/03/14 15:09:39 [INFO] Creating cluster [c-h5fkv]
2019-03-14T15:09:39.331350410Z 2019/03/14 15:09:39 [INFO] cluster [c-h5fkv] provisioning: Building Kubernetes cluster
2019-03-14T15:09:39.334412702Z 2019/03/14 15:09:39 [INFO] cluster [c-h5fkv] provisioning: [dialer] Setup tunnel for host [10.0.2.15]
2019-03-14T15:09:41.689226024Z 2019/03/14 15:09:41 [INFO] cluster [c-h5fkv] provisioning: [network] Deploying port listener containers
2019-03-14T15:09:42.155895244Z 2019/03/14 15:09:42 [INFO] cluster [c-h5fkv] provisioning: [network] Successfully started [rke-etcd-port-listener] container on host [10.0.2.15]
2019-03-14T15:09:42.156421384Z 2019/03/14 15:09:42 [INFO] Handling backend connection request [c-h5fkv:m-46051abce881]
2019-03-14T15:09:42.501787663Z 2019/03/14 15:09:42 [INFO] cluster [c-h5fkv] provisioning: [network] Successfully started [rke-cp-port-listener] container on host [10.0.2.15]
2019-03-14T15:09:42.867329932Z 2019/03/14 15:09:42 [INFO] cluster [c-h5fkv] provisioning: [network] Successfully started [rke-worker-port-listener] container on host [10.0.2.15]
2019-03-14T15:09:42.871879929Z 2019/03/14 15:09:42 [INFO] cluster [c-h5fkv] provisioning: [network] Port listener containers deployed successfully
2019-03-14T15:09:42.876637688Z 2019/03/14 15:09:42 [INFO] cluster [c-h5fkv] provisioning: [network] Running control plane -> etcd port checks
2019-03-14T15:09:43.105879878Z 2019/03/14 15:09:43 [INFO] cluster [c-h5fkv] provisioning: [network] Successfully started [rke-port-checker] container on host [10.0.2.15]
2019-03-14T15:09:43.234204044Z 2019/03/14 15:09:43 [INFO] cluster [c-h5fkv] provisioning: [network] Running control plane -> worker port checks
2019-03-14T15:09:43.578512881Z 2019/03/14 15:09:43 [INFO] cluster [c-h5fkv] provisioning: [network] Successfully started [rke-port-checker] container on host [10.0.2.15]
2019-03-14T15:09:43.714592728Z 2019/03/14 15:09:43 [INFO] cluster [c-h5fkv] provisioning: [network] Running workers -> control plane port checks
2019-03-14T15:09:43.962545881Z 2019/03/14 15:09:43 [INFO] cluster [c-h5fkv] provisioning: [network] Successfully started [rke-port-checker] container on host [10.0.2.15]
2019-03-14T15:09:44.107895472Z 2019/03/14 15:09:44 [INFO] cluster [c-h5fkv] provisioning: [network] Skipping kubeapi port check
2019-03-14T15:09:44.110453833Z 2019/03/14 15:09:44 [INFO] cluster [c-h5fkv] provisioning: [network] Removing port listener containers
2019-03-14T15:09:44.393326960Z 2019/03/14 15:09:44 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-etcd-port-listener] Successfully removed container on host [10.0.2.15]
2019-03-14T15:09:44.662909089Z 2019/03/14 15:09:44 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-cp-port-listener] Successfully removed container on host [10.0.2.15]
2019-03-14T15:09:44.926361610Z 2019/03/14 15:09:44 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-worker-port-listener] Successfully removed container on host [10.0.2.15]
2019-03-14T15:09:44.930747556Z 2019/03/14 15:09:44 [INFO] cluster [c-h5fkv] provisioning: [network] Port listener containers removed successfully
2019-03-14T15:09:44.934256535Z 2019/03/14 15:09:44 [INFO] cluster [c-h5fkv] provisioning: [certificates] Attempting to recover certificates from backup on [etcd,controlPlane] hosts
2019-03-14T15:09:45.290949692Z 2019/03/14 15:09:45 [INFO] cluster [c-h5fkv] provisioning: [certificates] Successfully started [cert-fetcher] container on host [10.0.2.15]
2019-03-14T15:09:45.817983269Z 2019/03/14 15:09:45 [INFO] cluster [c-h5fkv] provisioning: [certificates] No Certificate backup found on [etcd,controlPlane] hosts
2019-03-14T15:09:45.822548637Z 2019/03/14 15:09:45 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating CA kubernetes certificates
2019-03-14T15:09:46.235328362Z 2019/03/14 15:09:46 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating Kubernetes API server certificates
2019-03-14T15:09:46.392681382Z 2019/03/14 15:09:46 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating Kube Controller certificates
2019-03-14T15:09:46.551394141Z 2019/03/14 15:09:46 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating Kube Scheduler certificates
2019-03-14T15:09:47.104317006Z 2019/03/14 15:09:47 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating Kube Proxy certificates
2019-03-14T15:09:47.626825491Z 2019/03/14 15:09:47 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating Node certificate
2019-03-14T15:09:47.835662338Z 2019/03/14 15:09:47 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating admin certificates and kubeconfig
2019-03-14T15:09:48.056053056Z 2019/03/14 15:09:48 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating etcd-10.0.2.15 certificate and key
2019-03-14T15:09:48.158101525Z 2019/03/14 15:09:48 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
2019-03-14T15:09:48.275547499Z 2019/03/14 15:09:48 [INFO] cluster [c-h5fkv] provisioning: [certificates] Generating Kubernetes API server proxy client certificates
2019-03-14T15:09:48.628825555Z 2019/03/14 15:09:48 [INFO] cluster [c-h5fkv] provisioning: [certificates] Temporarily saving certs to [etcd,controlPlane] hosts
2019-03-14T15:09:53.950355400Z 2019/03/14 15:09:53 [INFO] cluster [c-h5fkv] provisioning: [certificates] Saved certs to [etcd,controlPlane] hosts
2019-03-14T15:09:53.959968571Z 2019/03/14 15:09:53 [INFO] cluster [c-h5fkv] provisioning: [reconcile] Reconciling cluster state
2019-03-14T15:09:53.959996678Z 2019/03/14 15:09:53 [INFO] cluster [c-h5fkv] provisioning: [reconcile] This is newly generated cluster
2019-03-14T15:09:53.961764066Z 2019/03/14 15:09:53 [INFO] cluster [c-h5fkv] provisioning: [certificates] Deploying kubernetes certificates to Cluster nodes
2019-03-14T15:09:59.354644627Z 2019/03/14 15:09:59 [INFO] cluster [c-h5fkv] provisioning: Successfully Deployed local admin kubeconfig at [management-state/rke/rke-882239139/kube_config_cluster.yml]
2019-03-14T15:09:59.358925815Z 2019/03/14 15:09:59 [INFO] cluster [c-h5fkv] provisioning: [certificates] Successfully deployed kubernetes certificates to Cluster nodes
2019-03-14T15:09:59.361722397Z 2019/03/14 15:09:59 [INFO] cluster [c-h5fkv] provisioning: Pre-pulling kubernetes images
2019-03-14T15:09:59.379757833Z 2019/03/14 15:09:59 [INFO] cluster [c-h5fkv] provisioning: Kubernetes images pulled successfully
2019-03-14T15:09:59.383426826Z 2019/03/14 15:09:59 [INFO] cluster [c-h5fkv] provisioning: [etcd] Building up etcd plane..
2019-03-14T15:09:59.740018006Z 2019/03/14 15:09:59 [INFO] cluster [c-h5fkv] provisioning: [etcd] Successfully started [etcd] container on host [10.0.2.15]
2019-03-14T15:09:59.744557547Z 2019/03/14 15:09:59 [INFO] cluster [c-h5fkv] provisioning: [etcd] Saving snapshot [etcd-rolling-snapshots] on host [10.0.2.15]
2019-03-14T15:09:59.941933563Z 2019/03/14 15:09:59 [INFO] cluster [c-h5fkv] provisioning: [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.0.2.15]
2019-03-14T15:10:05.266795557Z 2019/03/14 15:10:05 [INFO] cluster [c-h5fkv] provisioning: [certificates] Successfully started [rke-bundle-cert] container on host [10.0.2.15]
2019-03-14T15:10:05.581245350Z 2019/03/14 15:10:05 [INFO] cluster [c-h5fkv] provisioning: [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.0.2.15]
2019-03-14T15:10:05.929501493Z 2019/03/14 15:10:05 [INFO] cluster [c-h5fkv] provisioning: [etcd] Successfully started [rke-log-linker] container on host [10.0.2.15]
2019-03-14T15:10:06.174054271Z 2019/03/14 15:10:06 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-log-linker] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:06.183518905Z 2019/03/14 15:10:06 [INFO] cluster [c-h5fkv] provisioning: [etcd] Successfully started etcd plane..
2019-03-14T15:10:06.183637386Z 2019/03/14 15:10:06 [INFO] cluster [c-h5fkv] provisioning: [controlplane] Building up Controller Plane..
2019-03-14T15:10:06.507739653Z 2019/03/14 15:10:06 [INFO] cluster [c-h5fkv] provisioning: [controlplane] Successfully started [kube-apiserver] container on host [10.0.2.15]
2019-03-14T15:10:06.514330818Z 2019/03/14 15:10:06 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.0.2.15]
2019-03-14T15:10:16.684568005Z 2019-03-14 15:10:16.684405 W | wal: sync duration of 2.69979515s, expected less than 1s
2019-03-14T15:10:23.837411713Z 2019/03/14 15:10:23 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] service [kube-apiserver] on host [10.0.2.15] is healthy
2019-03-14T15:10:24.174371688Z 2019/03/14 15:10:24 [INFO] cluster [c-h5fkv] provisioning: [controlplane] Successfully started [rke-log-linker] container on host [10.0.2.15]
2019-03-14T15:10:24.389950241Z 2019/03/14 15:10:24 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-log-linker] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:24.628425826Z 2019/03/14 15:10:24 [INFO] cluster [c-h5fkv] provisioning: [controlplane] Successfully started [kube-controller-manager] container on host [10.0.2.15]
2019-03-14T15:10:24.634422649Z 2019/03/14 15:10:24 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.0.2.15]
2019-03-14T15:10:29.629713227Z 2019/03/14 15:10:29 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] service [kube-controller-manager] on host [10.0.2.15] is healthy
2019-03-14T15:10:29.917071285Z 2019/03/14 15:10:29 [INFO] cluster [c-h5fkv] provisioning: [controlplane] Successfully started [rke-log-linker] container on host [10.0.2.15]
2019-03-14T15:10:30.162550468Z 2019/03/14 15:10:30 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-log-linker] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:30.431355953Z 2019/03/14 15:10:30 [INFO] cluster [c-h5fkv] provisioning: [controlplane] Successfully started [kube-scheduler] container on host [10.0.2.15]
2019-03-14T15:10:30.434407681Z 2019/03/14 15:10:30 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.0.2.15]
2019-03-14T15:10:35.433067080Z 2019/03/14 15:10:35 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] service [kube-scheduler] on host [10.0.2.15] is healthy
2019-03-14T15:10:35.847843201Z 2019/03/14 15:10:35 [INFO] cluster [c-h5fkv] provisioning: [controlplane] Successfully started [rke-log-linker] container on host [10.0.2.15]
2019-03-14T15:10:36.070338306Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-log-linker] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:36.075820707Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [controlplane] Successfully started Controller Plane..
2019-03-14T15:10:36.078490648Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [authz] Creating rke-job-deployer ServiceAccount
2019-03-14T15:10:36.087045406Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [authz] rke-job-deployer ServiceAccount created successfully
2019-03-14T15:10:36.092807621Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [authz] Creating system:node ClusterRoleBinding
2019-03-14T15:10:36.097196750Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [authz] system:node ClusterRoleBinding created successfully
2019-03-14T15:10:36.114155193Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [certificates] Save kubernetes certificates as secrets
2019-03-14T15:10:36.127609718Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [certificates] Successfully saved certificates as kubernetes secret [k8s-certs]
2019-03-14T15:10:36.130449051Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [state] Saving cluster state to Kubernetes
2019-03-14T15:10:36.505788083Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state
2019-03-14T15:10:36.509614125Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [state] Saving cluster state to cluster nodes
2019-03-14T15:10:36.604103015Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [remove/cluster-state-deployer] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:36.905132729Z 2019/03/14 15:10:36 [INFO] cluster [c-h5fkv] provisioning: [state] Successfully started [cluster-state-deployer] container on host [10.0.2.15]
2019-03-14T15:10:37.234066620Z 2019/03/14 15:10:37 [INFO] cluster [c-h5fkv] provisioning: [remove/cluster-state-deployer] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:37.239565480Z 2019/03/14 15:10:37 [INFO] cluster [c-h5fkv] provisioning: [worker] Building up Worker Plane..
2019-03-14T15:10:37.259121025Z 2019/03/14 15:10:37 [INFO] cluster [c-h5fkv] provisioning: [remove/service-sidekick] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:37.503977565Z 2019/03/14 15:10:37 [INFO] cluster [c-h5fkv] provisioning: [worker] Successfully started [kubelet] container on host [10.0.2.15]
2019-03-14T15:10:37.508202996Z 2019/03/14 15:10:37 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] Start Healthcheck on service [kubelet] on host [10.0.2.15]
2019-03-14T15:10:42.515231432Z 2019/03/14 15:10:42 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] service [kubelet] on host [10.0.2.15] is healthy
2019-03-14T15:10:42.859281942Z 2019/03/14 15:10:42 [INFO] cluster [c-h5fkv] provisioning: [worker] Successfully started [rke-log-linker] container on host [10.0.2.15]
2019-03-14T15:10:43.098664967Z 2019/03/14 15:10:43 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-log-linker] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:43.289366586Z 2019/03/14 15:10:43 [INFO] cluster [c-h5fkv] provisioning: [worker] Successfully started [kube-proxy] container on host [10.0.2.15]
2019-03-14T15:10:43.293200041Z 2019/03/14 15:10:43 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.0.2.15]
2019-03-14T15:10:48.292257416Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [healthcheck] service [kube-proxy] on host [10.0.2.15] is healthy
2019-03-14T15:10:48.590516313Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [worker] Successfully started [rke-log-linker] container on host [10.0.2.15]
2019-03-14T15:10:48.846681777Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [remove/rke-log-linker] Successfully removed container on host [10.0.2.15]
2019-03-14T15:10:48.850196213Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [worker] Successfully started Worker Plane..
2019-03-14T15:10:48.853795853Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [sync] Syncing nodes Labels and Taints
2019-03-14T15:10:48.868952531Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [sync] Successfully synced nodes Labels and Taints
2019-03-14T15:10:48.873501404Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [network] Setting up network plugin: canal
2019-03-14T15:10:48.878123112Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [addons] Saving addon ConfigMap to Kubernetes
2019-03-14T15:10:48.888135651Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-network-plugin
2019-03-14T15:10:48.893849637Z 2019/03/14 15:10:48 [INFO] cluster [c-h5fkv] provisioning: [addons] Executing deploy job..
2019-03-14T15:10:53.909502713Z 2019/03/14 15:10:53 [INFO] cluster [c-h5fkv] provisioning: [addons] Setting up KubeDNS
2019-03-14T15:10:53.913254284Z 2019/03/14 15:10:53 [INFO] cluster [c-h5fkv] provisioning: [addons] Saving addon ConfigMap to Kubernetes
2019-03-14T15:10:53.916996802Z 2019/03/14 15:10:53 [INFO] cluster [c-h5fkv] provisioning: [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-kubedns-addon
2019-03-14T15:10:53.958809629Z 2019/03/14 15:10:53 [INFO] cluster [c-h5fkv] provisioning: [addons] Executing deploy job..
2019-03-14T15:10:58.959108026Z 2019/03/14 15:10:58 [INFO] cluster [c-h5fkv] provisioning: [addons] KubeDNS deployed successfully..
2019-03-14T15:10:58.962765332Z 2019/03/14 15:10:58 [INFO] cluster [c-h5fkv] provisioning: [addons] Setting up Metrics Server
2019-03-14T15:10:58.970087252Z 2019/03/14 15:10:58 [INFO] cluster [c-h5fkv] provisioning: [addons] Saving addon ConfigMap to Kubernetes
2019-03-14T15:10:58.977934018Z 2019/03/14 15:10:58 [INFO] cluster [c-h5fkv] provisioning: [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-metrics-addon
2019-03-14T15:10:58.982755133Z 2019/03/14 15:10:58 [INFO] cluster [c-h5fkv] provisioning: [addons] Executing deploy job..
2019-03-14T15:11:03.990359087Z 2019/03/14 15:11:03 [INFO] cluster [c-h5fkv] provisioning: [addons] KubeDNS deployed successfully..
2019-03-14T15:11:03.994946533Z 2019/03/14 15:11:03 [INFO] cluster [c-h5fkv] provisioning: [ingress] Setting up nginx ingress controller
2019-03-14T15:11:03.998822588Z 2019/03/14 15:11:03 [INFO] cluster [c-h5fkv] provisioning: [addons] Saving addon ConfigMap to Kubernetes
2019-03-14T15:11:04.003130068Z 2019/03/14 15:11:04 [INFO] cluster [c-h5fkv] provisioning: [addons] Successfully Saved addon to Kubernetes ConfigMap: rke-ingress-controller
2019-03-14T15:11:04.006649086Z 2019/03/14 15:11:04 [INFO] cluster [c-h5fkv] provisioning: [addons] Executing deploy job..
2019-03-14T15:11:09.018375660Z 2019/03/14 15:11:09 [INFO] cluster [c-h5fkv] provisioning: [ingress] ingress controller nginx is successfully deployed
2019-03-14T15:11:09.023850374Z 2019/03/14 15:11:09 [INFO] cluster [c-h5fkv] provisioning: [addons] Setting up user addons
2019-03-14T15:11:09.034915131Z 2019/03/14 15:11:09 [INFO] cluster [c-h5fkv] provisioning: [addons] no user addons defined
2019-03-14T15:11:09.041579110Z 2019/03/14 15:11:09 [INFO] cluster [c-h5fkv] provisioning: Finished building Kubernetes cluster successfully
2019-03-14T15:11:09.348988715Z 2019/03/14 15:11:09 [INFO] Provisioned cluster [c-h5fkv]
2019-03-14T15:11:09.374515698Z 2019/03/14 15:11:09 [INFO] Creating user for principal system://c-h5fkv
2019-03-14T15:11:09.383620092Z 2019/03/14 15:11:09 [INFO] Creating globalRoleBindings for u-he4kcz6k2f
2019-03-14T15:11:09.388061119Z 2019/03/14 15:11:09 [INFO] Registering project network policy
2019-03-14T15:11:09.388163973Z 2019/03/14 15:11:09 [INFO] Registering namespaceHandler for adding labels
2019-03-14T15:11:09.388640566Z 2019/03/14 15:11:09 [INFO] registering podsecuritypolicy cluster handler for cluster c-h5fkv
2019-03-14T15:11:09.388692659Z 2019/03/14 15:11:09 [INFO] registering podsecuritypolicy project handler for cluster c-h5fkv
2019-03-14T15:11:09.388790953Z 2019/03/14 15:11:09 [INFO] registering podsecuritypolicy namespace handler for cluster c-h5fkv
2019-03-14T15:11:09.388798169Z 2019/03/14 15:11:09 [INFO] registering podsecuritypolicy serviceaccount handler for cluster c-h5fkv
2019-03-14T15:11:09.388801678Z 2019/03/14 15:11:09 [INFO] registering podsecuritypolicy template handler for cluster c-h5fkv
2019-03-14T15:11:09.390428782Z 2019/03/14 15:11:09 [INFO] Creating new GlobalRoleBinding for GlobalRoleBinding grb-vk4fr
2019-03-14T15:11:09.390538503Z 2019/03/14 15:11:09 [INFO] [mgmt-auth-grb-controller] Creating clusterRoleBinding for globalRoleBinding grb-vk4fr for user u-he4kcz6k2f with role cattle-globalrole-user
2019-03-14T15:11:09.393559709Z 2019/03/14 15:11:09 [INFO] Starting cluster controllers for c-h5fkv
2019-03-14T15:11:09.563746019Z 2019/03/14 15:11:09 [INFO] Creating token for user u-he4kcz6k2f
2019-03-14T15:11:09.567314704Z 2019/03/14 15:11:09 [INFO] [mgmt-auth-crtb-controller] Creating clusterRoleBinding for membership in cluster c-h5fkv for subject u-he4kcz6k2f
2019-03-14T15:11:09.574425532Z 2019/03/14 15:11:09 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-he4kcz6k2f with role cluster-owner in namespace
2019-03-14T15:11:09.580407480Z 2019/03/14 15:11:09 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-he4kcz6k2f with role cluster-owner in namespace
2019-03-14T15:11:09.583526668Z 2019/03/14 15:11:09 [INFO] [mgmt-auth-crtb-controller] Creating roleBinding for subject u-he4kcz6k2f with role cluster-owner in namespace
2019-03-14T15:11:09.603641542Z 2019/03/14 15:11:09 [INFO] Starting cluster agent for c-h5fkv [owner=true]
2019-03-14T15:11:09.604430827Z 2019/03/14 15:11:09 [INFO] Updating workload [ingress-nginx/nginx-ingress-controller] with public endpoints [[{"nodeName":"c-h5fkv:m-46051abce881","addresses":["10.0.2.15"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-6m5lx","allNodes":false},{"nodeName":"c-h5fkv:m-46051abce881","addresses":["10.0.2.15"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-6m5lx","allNodes":false}]]
2019-03-14T15:11:09.635320370Z 2019/03/14 15:11:09 [INFO] Creating clusterRole for roleTemplate Cluster Owner (cluster-owner).
2019-03-14T15:11:09.671638307Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role cluster-owner
2019-03-14T15:11:09.674714909Z 2019/03/14 15:11:09 [INFO] clusterNetAnnHandler: updating EnableNetworkPolicy of cluster c-h5fkv to true
2019-03-14T15:11:09.684386339Z 2019/03/14 15:11:09 [INFO] Creating clusterRole for roleTemplate Project Owner (project-owner).
2019-03-14T15:11:09.684943327Z 2019/03/14 15:11:09 [INFO] Creating clusterRole for roleTemplate Project Owner (project-owner).
2019-03-14T15:11:09.692878378Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User u-he4kcz6k2f Role cluster-owner
2019-03-14T15:11:09.793398206Z 2019/03/14 15:11:09 [INFO] Creating clusterRole project-owner-promoted for project access to global resource.
2019-03-14T15:11:09.816079380Z 2019/03/14 15:11:09 [INFO] Updating clusterRole project-owner-promoted for project access to global resource.
2019-03-14T15:11:09.817939649Z 2019/03/14 15:11:09 [ERROR] ProjectRoleTemplateBindingController p-xvvkh/creator-project-owner [cluster-prtb-sync] failed with : couldn't ensure roles: couldn't create role project-owner: clusterroles.rbac.authorization.k8s.io "project-owner" already exists
2019-03-14T15:11:09.818373269Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role project-owner
2019-03-14T15:11:09.838326498Z 2019/03/14 15:11:09 [INFO] Creating clusterRole for roleTemplate Create Namespaces (create-ns).
2019-03-14T15:11:09.839571937Z 2019/03/14 15:11:09 [INFO] clusterNetAnnHandler: updating EnableNetworkPolicy of cluster c-h5fkv to true
2019-03-14T15:11:09.842324734Z 2019/03/14 15:11:09 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-n5dxn to namespace=kube-public
2019-03-14T15:11:09.842866552Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role admin
2019-03-14T15:11:09.843973069Z 2019/03/14 15:11:09 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-n5dxn to namespace=cattle-system
2019-03-14T15:11:09.845568097Z 2019/03/14 15:11:09 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-xvvkh to namespace=default
2019-03-14T15:11:09.864431419Z 2019/03/14 15:11:09 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-8n8vs role p-n5dxn-namespaces-edit.
2019-03-14T15:11:09.867253833Z 2019/03/14 15:11:09 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-n5dxn to namespace=ingress-nginx
2019-03-14T15:11:09.869200195Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role project-owner
2019-03-14T15:11:09.873302332Z 2019/03/14 15:11:09 [INFO] Creating clusterRole for roleTemplate Create Namespaces (create-ns).
2019-03-14T15:11:09.880746914Z 2019/03/14 15:11:09 [ERROR] ProjectRoleTemplateBindingController p-xvvkh/creator-project-owner [cluster-prtb-sync] failed with : couldn't create role create-ns: clusterroles.rbac.authorization.k8s.io "create-ns" already exists
2019-03-14T15:11:09.884990889Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role admin
2019-03-14T15:11:09.890440757Z 2019/03/14 15:11:09 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-8n8vs role project-owner-promoted.
2019-03-14T15:11:09.890988057Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role project-owner
2019-03-14T15:11:09.894820789Z 2019/03/14 15:11:09 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-8n8vs role project-owner-promoted.
2019-03-14T15:11:09.898172174Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role admin
2019-03-14T15:11:09.910699011Z 2019/03/14 15:11:09 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-8n8vs role create-ns.
2019-03-14T15:11:09.913284858Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role project-owner
2019-03-14T15:11:09.915889955Z 2019/03/14 15:11:09 [INFO] projectSyncer: createDefaultNetworkPolicy: successfully created default network policy for project: p-n5dxn
2019-03-14T15:11:09.925423979Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role admin
2019-03-14T15:11:09.926573975Z 2019/03/14 15:11:09 [INFO] projectSyncer: createDefaultNetworkPolicy: successfully created default network policy for project: p-xvvkh
2019-03-14T15:11:09.932825944Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role admin
2019-03-14T15:11:09.963086930Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role project-owner
2019-03-14T15:11:09.974369732Z 2019/03/14 15:11:09 [INFO] Creating roleBinding User user-8n8vs Role admin
2019-03-14T15:11:10.047002523Z 2019/03/14 15:11:10 [INFO] Deleting roleBinding clusterrolebinding-qf24s
2019-03-14T15:11:10.183583820Z 2019/03/14 15:11:10 [INFO] namespaceHandler: addProjectIDLabelToNamespace: adding label field.cattle.io/projectId=p-n5dxn to namespace=kube-system
2019-03-14T15:11:10.322839608Z 2019/03/14 15:11:10 [INFO] Creating clusterRoleBinding for project access to global resource for subject user-8n8vs role p-xvvkh-namespaces-edit.
2019-03-14T15:11:10.327602009Z 2019/03/14 15:11:10 [INFO] Updating clusterRoleBinding clusterrolebinding-d6g69 for project access to global resource for subject user-8n8vs role project-owner-promoted.
2019-03-14T15:11:10.332660773Z 2019/03/14 15:11:10 [INFO] Updating clusterRoleBinding clusterrolebinding-mlm78 for project access to global resource for subject user-8n8vs role project-owner-promoted.
2019-03-14T15:11:10.336821165Z 2019/03/14 15:11:10 [INFO] Updating clusterRoleBinding clusterrolebinding-vqtfl for project access to global resource for subject user-8n8vs role create-ns.
2019-03-14T15:11:12.074047992Z 2019-03-14 15:11:12.073944 W | wal: sync duration of 1.518709278s, expected less than 1s
2019-03-14T15:11:13.634310843Z 2019/03/14 15:11:13 [INFO] Handling backend connection request [c-h5fkv:m-46051abce881]
2019-03-14T15:11:13.900997014Z 2019/03/14 15:11:13 [INFO] Handling backend connection request [c-h5fkv]
2019-03-14T15:11:24.404276696Z 2019/03/14 15:11:24 [INFO] clusterHandler: calling sync to create network policies for cluster c-h5fkv
2019-03-14T15:11:27.331899752Z 2019-03-14 15:11:27.331626 W | wal: sync duration of 1.185778165s, expected less than 1s
2019-03-14T15:11:28.740823957Z 2019/03/14 15:11:28 [INFO] error in remotedialer server [400]: websocket: close 1006 unexpected EOF
2019-03-14T15:11:28.740856434Z E0314 15:11:28.733968 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740861837Z E0314 15:11:28.734014 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740865374Z E0314 15:11:28.734027 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740868442Z E0314 15:11:28.734072 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740871288Z E0314 15:11:28.734084 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740874076Z E0314 15:11:28.734108 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740876912Z E0314 15:11:28.734119 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740879608Z E0314 15:11:28.734131 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740882235Z E0314 15:11:28.734141 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740884951Z E0314 15:11:28.734277 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740887739Z E0314 15:11:28.734291 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740890414Z E0314 15:11:28.734318 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740893030Z E0314 15:11:28.734327 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740895687Z E0314 15:11:28.734344 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740898343Z E0314 15:11:28.734352 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740900959Z E0314 15:11:28.734380 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740903545Z E0314 15:11:28.734399 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740906663Z E0314 15:11:28.734431 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740909499Z E0314 15:11:28.734451 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740912146Z E0314 15:11:28.734459 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740914732Z E0314 15:11:28.734473 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740924385Z E0314 15:11:28.734481 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740927382Z E0314 15:11:28.734501 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.740930179Z E0314 15:11:28.734510 6 streamwatcher.go:109] Unable to decode an event from the watch stream: tunnel disconnect
2019-03-14T15:11:28.748228773Z 2019/03/14 15:11:28 [INFO] error in remotedialer server [400]: websocket: close 1006 unexpected EOF
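For reference, a minimal sketch for pulling only the failure lines back out of the same container (an illustrative variant of the capture command above, not part of the original report: it reuses the same container-lookup pattern; 2>&1 is needed because docker logs replays the container's stderr on stderr; the grep pattern is an assumption that matches the [ERROR] records and the klog E-lines such as the streamwatcher errors above):

vagrant@ranc:~$ docker logs \
> --timestamps \
> $(docker ps | grep -E "rancher/rancher:|rancher/rancher " | awk '{ print $1 }') 2>&1 \
> | grep -E "\[ERROR\]| E[0-9]{4} "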