Created June 30, 2015 03:50 (gist alexanderguzhva/9f46b5367f33eca29772)
kubernetes v0.20.1 cluster up log
(sorry for special & unicode characters, it's 11:45 PM)
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'minion-1' up with 'virtualbox' provider...
Bringing machine 'minion-2' up with 'virtualbox' provider...
==> master: Clearing any previously set forwarded ports...
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
master: Adapter 1: nat
master: Adapter 2: hostonly
==> master: Forwarding ports...
master: 22 => 2222 (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
master: SSH address: 127.0.0.1:2222
master: SSH username: vagrant
master: SSH auth method: private key
master: Warning: Connection timeout. Retrying...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
==> master: Configuring and enabling network interfaces...
==> master: Mounting shared folders...
master: /vagrant => /home/nop/k8s/201r/kubernetes
==> master: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> master: flag to force provisioning. Provisioners marked to run always will still run.
==> master: Running provisioner: shell...
master: Running: /tmp/vagrant-shell20150629-10932-1ydmqmn.sh
==> master: Verifying network configuration
==> master: It looks like the required network bridge has not yet been created
==> master: Installing, enabling prerequisites
==> master: Package openvswitch-2.3.1-3.git20150327.fc21.x86_64 already installed and latest version
==> master: Package bridge-utils-1.5-10.fc21.x86_64 already installed and latest version
==> master: Nothing to do
==> master: Create a new docker bridge
==> master: Cannot find device "cbr0"
==> master: bridge cbr0 doesn't exist; can't delete it
==> master: Add ovs bridge
==> master: ovs-vsctl: no port named gre0
==> master: Add tun device
==> master: ovs-vsctl: no port named tun0
==> master: Add oflow rules
==> master: Creating persistent gre tunnels
==> master: Created persistent gre tunnels
==> master: Add ip route rules such that all pod traffic flows through docker bridge
==> master: Network configuration verified
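The bridge and tunnel setup the provisioner reports above can be sketched as a command sequence. This is a hypothetical reconstruction from the log messages only: the device names cbr0, gre0, and tun0 appear in the output, but the OVS bridge name obr0, the peer IP, the pod CIDR, and the exact flags are assumptions. It is written in dry-run form (each step is printed, not executed) so it can be read without the VM:

```shell
#!/bin/sh
# Dry-run helper: print each step instead of executing it, since the real
# commands need root plus Open vSwitch inside the Fedora 21 guest.
run() { echo "+ $*"; }

# "Create a new docker bridge": remove any stale cbr0, then recreate it.
run ip link delete cbr0        # log: Cannot find device "cbr0"
run brctl delbr cbr0           # log: bridge cbr0 doesn't exist; can't delete it
run brctl addbr cbr0

# "Add ovs bridge" (bridge name obr0 is an assumption).
run ovs-vsctl del-port obr0 gre0    # log: no port named gre0
run ovs-vsctl add-br obr0

# "Add tun device": an internal port so the host can route into the overlay.
run ovs-vsctl del-port obr0 tun0    # log: no port named tun0
run ovs-vsctl add-port obr0 tun0 -- set interface tun0 type=internal

# "Creating persistent gre tunnels": one GRE port per peer node
# (remote_ip here is an example; the real script iterates over the minions).
run ovs-vsctl add-port obr0 gre1 -- set interface gre1 type=gre options:remote_ip=10.245.1.3

# "Add ip route rules such that all pod traffic flows through docker bridge"
# (the CIDR is an assumption, not taken from this log).
run ip route add 10.246.0.0/16 dev cbr0
```

The `ovs-vsctl: no port named ...` lines in the log are expected on a fresh VM: the script deletes ports before re-adding them, and the deletions fail harmlessly the first time through.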
==> master: Running release install script
==> master: /kube-install /home/vagrant
==> master: +++ Installing salt files into new trees
==> master: ‘./kubernetes/saltbase/salt’ -> ‘/srv/salt-new/salt’
==> master: ‘./kubernetes/saltbase/salt/fluentd-gcp’ -> ‘/srv/salt-new/salt/fluentd-gcp’
==> master: ‘./kubernetes/saltbase/salt/fluentd-gcp/init.sls’ -> ‘/srv/salt-new/salt/fluentd-gcp/init.sls’
==> master: ‘./kubernetes/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml’ -> ‘/srv/salt-new/salt/fluentd-gcp/fluentd-gcp.yaml’
==> master: ‘./kubernetes/saltbase/salt/docker’ -> ‘/srv/salt-new/salt/docker’
==> master: ‘./kubernetes/saltbase/salt/docker/init.sls’ -> ‘/srv/salt-new/salt/docker/init.sls’
==> master: ‘./kubernetes/saltbase/salt/docker/default’ -> ‘/srv/salt-new/salt/docker/default’
==> master: ‘./kubernetes/saltbase/salt/docker/docker-defaults’ -> ‘/srv/salt-new/salt/docker/docker-defaults’
==> master: ‘./kubernetes/saltbase/salt/helpers’ -> ‘/srv/salt-new/salt/helpers’
==> master: ‘./kubernetes/saltbase/salt/helpers/safe_format_and_mount’ -> ‘/srv/salt-new/salt/helpers/safe_format_and_mount’
==> master: ‘./kubernetes/saltbase/salt/helpers/init.sls’ -> ‘/srv/salt-new/salt/helpers/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-addons’ -> ‘/srv/salt-new/salt/kube-addons’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/kube-addon-update.sh’ -> ‘/srv/salt-new/salt/kube-addons/kube-addon-update.sh’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/init.sls’ -> ‘/srv/salt-new/salt/kube-addons/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/kube-addons.sh’ -> ‘/srv/salt-new/salt/kube-addons/kube-addons.sh’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/kube-addons.service’ -> ‘/srv/salt-new/salt/kube-addons/kube-addons.service’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/initd’ -> ‘/srv/salt-new/salt/kube-addons/initd’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/dns’ -> ‘/srv/salt-new/salt/kube-addons/dns’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/dns/skydns-rc.yaml.in’ -> ‘/srv/salt-new/salt/kube-addons/dns/skydns-rc.yaml.in’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/dns/skydns-svc.yaml.in’ -> ‘/srv/salt-new/salt/kube-addons/dns/skydns-svc.yaml.in’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/influxdb’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/influxdb’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/influxdb/heapster-controller.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/influxdb/heapster-controller.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/influxdb/influxdb-service.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/influxdb/influxdb-service.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/influxdb/heapster-service.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/influxdb/heapster-service.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/influxdb/grafana-service.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/influxdb/grafana-service.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/standalone’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/standalone’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/standalone/heapster-controller.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/standalone/heapster-controller.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/standalone/heapster-service.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/standalone/heapster-service.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/googleinfluxdb’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/googleinfluxdb’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/googleinfluxdb/heapster-controller-combined.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/google’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/google’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/google/heapster-controller.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/google/heapster-controller.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/cluster-monitoring/google/heapster-service.yaml’ -> ‘/srv/salt-new/salt/kube-addons/cluster-monitoring/google/heapster-service.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/fluentd-elasticsearch’ -> ‘/srv/salt-new/salt/kube-addons/fluentd-elasticsearch’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/fluentd-elasticsearch/es-service.yaml’ -> ‘/srv/salt-new/salt/kube-addons/fluentd-elasticsearch/es-service.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/fluentd-elasticsearch/es-controller.yaml’ -> ‘/srv/salt-new/salt/kube-addons/fluentd-elasticsearch/es-controller.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/fluentd-elasticsearch/kibana-service.yaml’ -> ‘/srv/salt-new/salt/kube-addons/fluentd-elasticsearch/kibana-service.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-addons/fluentd-elasticsearch/kibana-controller.yaml’ -> ‘/srv/salt-new/salt/kube-addons/fluentd-elasticsearch/kibana-controller.yaml’
==> master: ‘./kubernetes/saltbase/salt/kube-master-addons’ -> ‘/srv/salt-new/salt/kube-master-addons’
==> master: ‘./kubernetes/saltbase/salt/kube-master-addons/init.sls’ -> ‘/srv/salt-new/salt/kube-master-addons/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-master-addons/kube-master-addons.service’ -> ‘/srv/salt-new/salt/kube-master-addons/kube-master-addons.service’
==> master: ‘./kubernetes/saltbase/salt/kube-master-addons/kube-master-addons.sh’ -> ‘/srv/salt-new/salt/kube-master-addons/kube-master-addons.sh’
==> master: ‘./kubernetes/saltbase/salt/kube-master-addons/initd’ -> ‘/srv/salt-new/salt/kube-master-addons/initd’
==> master: ‘./kubernetes/saltbase/salt/kube-scheduler’ -> ‘/srv/salt-new/salt/kube-scheduler’
==> master: ‘./kubernetes/saltbase/salt/kube-scheduler/init.sls’ -> ‘/srv/salt-new/salt/kube-scheduler/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-scheduler/kube-scheduler.manifest’ -> ‘/srv/salt-new/salt/kube-scheduler/kube-scheduler.manifest’
==> master: ‘./kubernetes/saltbase/salt/openvpn’ -> ‘/srv/salt-new/salt/openvpn’
==> master: ‘./kubernetes/saltbase/salt/openvpn/init.sls’ -> ‘/srv/salt-new/salt/openvpn/init.sls’
==> master: ‘./kubernetes/saltbase/salt/openvpn/server.conf’ -> ‘/srv/salt-new/salt/openvpn/server.conf’
==> master: ‘./kubernetes/saltbase/salt/monit’ -> ‘/srv/salt-new/salt/monit’
==> master: ‘./kubernetes/saltbase/salt/monit/docker’ -> ‘/srv/salt-new/salt/monit/docker’
==> master: ‘./kubernetes/saltbase/salt/monit/kube-addons’ -> ‘/srv/salt-new/salt/monit/kube-addons’
==> master: ‘./kubernetes/saltbase/salt/monit/init.sls’ -> ‘/srv/salt-new/salt/monit/init.sls’
==> master: ‘./kubernetes/saltbase/salt/monit/kube-proxy’ -> ‘/srv/salt-new/salt/monit/kube-proxy’
==> master: ‘./kubernetes/saltbase/salt/monit/kubelet’ -> ‘/srv/salt-new/salt/monit/kubelet’
==> master: ‘./kubernetes/saltbase/salt/monit/monit_watcher.sh’ -> ‘/srv/salt-new/salt/monit/monit_watcher.sh’
==> master: ‘./kubernetes/saltbase/salt/debian-auto-upgrades’ -> ‘/srv/salt-new/salt/debian-auto-upgrades’
==> master: ‘./kubernetes/saltbase/salt/debian-auto-upgrades/20auto-upgrades’ -> ‘/srv/salt-new/salt/debian-auto-upgrades/20auto-upgrades’
==> master: ‘./kubernetes/saltbase/salt/debian-auto-upgrades/init.sls’ -> ‘/srv/salt-new/salt/debian-auto-upgrades/init.sls’
==> master: ‘./kubernetes/saltbase/salt/logrotate’ -> ‘/srv/salt-new/salt/logrotate’
==> master: ‘./kubernetes/saltbase/salt/logrotate/conf’ -> ‘/srv/salt-new/salt/logrotate/conf’
==> master: ‘./kubernetes/saltbase/salt/logrotate/init.sls’ -> ‘/srv/salt-new/salt/logrotate/init.sls’
==> master: ‘./kubernetes/saltbase/salt/logrotate/docker-containers’ -> ‘/srv/salt-new/salt/logrotate/docker-containers’
==> master: ‘./kubernetes/saltbase/salt/logrotate/cron’ -> ‘/srv/salt-new/salt/logrotate/cron’
==> master: ‘./kubernetes/saltbase/salt/nginx’ -> ‘/srv/salt-new/salt/nginx’
==> master: ‘./kubernetes/saltbase/salt/nginx/kubernetes-site’ -> ‘/srv/salt-new/salt/nginx/kubernetes-site’
==> master: ‘./kubernetes/saltbase/salt/nginx/init.sls’ -> ‘/srv/salt-new/salt/nginx/init.sls’
==> master: ‘./kubernetes/saltbase/salt/nginx/nginx.json’ -> ‘/srv/salt-new/salt/nginx/nginx.json’
==> master: ‘./kubernetes/saltbase/salt/nginx/nginx.conf’ -> ‘/srv/salt-new/salt/nginx/nginx.conf’
==> master: ‘./kubernetes/saltbase/salt/kube-proxy’ -> ‘/srv/salt-new/salt/kube-proxy’
==> master: ‘./kubernetes/saltbase/salt/kube-proxy/init.sls’ -> ‘/srv/salt-new/salt/kube-proxy/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-proxy/default’ -> ‘/srv/salt-new/salt/kube-proxy/default’
==> master: ‘./kubernetes/saltbase/salt/kube-proxy/kubeconfig’ -> ‘/srv/salt-new/salt/kube-proxy/kubeconfig’
==> master: ‘./kubernetes/saltbase/salt/kube-proxy/kube-proxy.service’ -> ‘/srv/salt-new/salt/kube-proxy/kube-proxy.service’
==> master: ‘./kubernetes/saltbase/salt/kube-proxy/initd’ -> ‘/srv/salt-new/salt/kube-proxy/initd’
==> master: ‘./kubernetes/saltbase/salt/kubelet’ -> ‘/srv/salt-new/salt/kubelet’
==> master: ‘./kubernetes/saltbase/salt/kubelet/kubelet.service’ -> ‘/srv/salt-new/salt/kubelet/kubelet.service’
==> master: ‘./kubernetes/saltbase/salt/kubelet/init.sls’ -> ‘/srv/salt-new/salt/kubelet/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kubelet/kubernetes_auth’ -> ‘/srv/salt-new/salt/kubelet/kubernetes_auth’
==> master: ‘./kubernetes/saltbase/salt/kubelet/default’ -> ‘/srv/salt-new/salt/kubelet/default’
==> master: ‘./kubernetes/saltbase/salt/kubelet/initd’ -> ‘/srv/salt-new/salt/kubelet/initd’
==> master: ‘./kubernetes/saltbase/salt/fluentd-es’ -> ‘/srv/salt-new/salt/fluentd-es’
==> master: ‘./kubernetes/saltbase/salt/fluentd-es/fluentd-es.yaml’ -> ‘/srv/salt-new/salt/fluentd-es/fluentd-es.yaml’
==> master: ‘./kubernetes/saltbase/salt/fluentd-es/init.sls’ -> ‘/srv/salt-new/salt/fluentd-es/init.sls’
==> master: ‘./kubernetes/saltbase/salt/_states’ -> ‘/srv/salt-new/salt/_states’
==> master: ‘./kubernetes/saltbase/salt/_states/container_bridge.py’ -> ‘/srv/salt-new/salt/_states/container_bridge.py’
==> master: ‘./kubernetes/saltbase/salt/kube-admission-controls’ -> ‘/srv/salt-new/salt/kube-admission-controls’
==> master: ‘./kubernetes/saltbase/salt/kube-admission-controls/init.sls’ -> ‘/srv/salt-new/salt/kube-admission-controls/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-admission-controls/limit-range’ -> ‘/srv/salt-new/salt/kube-admission-controls/limit-range’
==> master: ‘./kubernetes/saltbase/salt/kube-admission-controls/limit-range/limit-range.yaml’ -> ‘/srv/salt-new/salt/kube-admission-controls/limit-range/limit-range.yaml’
==> master: ‘./kubernetes/saltbase/salt/openvpn-client’ -> ‘/srv/salt-new/salt/openvpn-client’
==> master: ‘./kubernetes/saltbase/salt/openvpn-client/init.sls’ -> ‘/srv/salt-new/salt/openvpn-client/init.sls’
==> master: ‘./kubernetes/saltbase/salt/openvpn-client/client.conf’ -> ‘/srv/salt-new/salt/openvpn-client/client.conf’
==> master: ‘./kubernetes/saltbase/salt/README.md’ -> ‘/srv/salt-new/salt/README.md’
==> master: ‘./kubernetes/saltbase/salt/generate-cert’ -> ‘/srv/salt-new/salt/generate-cert’
==> master: ‘./kubernetes/saltbase/salt/generate-cert/init.sls’ -> ‘/srv/salt-new/salt/generate-cert/init.sls’
==> master: ‘./kubernetes/saltbase/salt/generate-cert/make-ca-cert.sh’ -> ‘/srv/salt-new/salt/generate-cert/make-ca-cert.sh’
==> master: ‘./kubernetes/saltbase/salt/generate-cert/make-cert.sh’ -> ‘/srv/salt-new/salt/generate-cert/make-cert.sh’
==> master: ‘./kubernetes/saltbase/salt/top.sls’ -> ‘/srv/salt-new/salt/top.sls’
==> master: ‘./kubernetes/saltbase/salt/etcd’ -> ‘/srv/salt-new/salt/etcd’
==> master: ‘./kubernetes/saltbase/salt/etcd/init.sls’ -> ‘/srv/salt-new/salt/etcd/init.sls’
==> master: ‘./kubernetes/saltbase/salt/etcd/etcd.manifest’ -> ‘/srv/salt-new/salt/etcd/etcd.manifest’
==> master: ‘./kubernetes/saltbase/salt/kube-client-tools.sls’ -> ‘/srv/salt-new/salt/kube-client-tools.sls’
==> master: ‘./kubernetes/saltbase/salt/cadvisor’ -> ‘/srv/salt-new/salt/cadvisor’
==> master: ‘./kubernetes/saltbase/salt/cadvisor/init.sls’ -> ‘/srv/salt-new/salt/cadvisor/init.sls’
==> master: ‘./kubernetes/saltbase/salt/base.sls’ -> ‘/srv/salt-new/salt/base.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-apiserver’ -> ‘/srv/salt-new/salt/kube-apiserver’
==> master: ‘./kubernetes/saltbase/salt/kube-apiserver/init.sls’ -> ‘/srv/salt-new/salt/kube-apiserver/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-apiserver/kube-apiserver.manifest’ -> ‘/srv/salt-new/salt/kube-apiserver/kube-apiserver.manifest’
==> master: ‘./kubernetes/saltbase/salt/static-routes’ -> ‘/srv/salt-new/salt/static-routes’
==> master: ‘./kubernetes/saltbase/salt/static-routes/if-down’ -> ‘/srv/salt-new/salt/static-routes/if-down’
==> master: ‘./kubernetes/saltbase/salt/static-routes/init.sls’ -> ‘/srv/salt-new/salt/static-routes/init.sls’
==> master: ‘./kubernetes/saltbase/salt/static-routes/refresh’ -> ‘/srv/salt-new/salt/static-routes/refresh’
==> master: ‘./kubernetes/saltbase/salt/static-routes/if-up’ -> ‘/srv/salt-new/salt/static-routes/if-up’
==> master: ‘./kubernetes/saltbase/salt/kube-controller-manager’ -> ‘/srv/salt-new/salt/kube-controller-manager’
==> master: ‘./kubernetes/saltbase/salt/kube-controller-manager/init.sls’ -> ‘/srv/salt-new/salt/kube-controller-manager/init.sls’
==> master: ‘./kubernetes/saltbase/salt/kube-controller-manager/kube-controller-manager.manifest’ -> ‘/srv/salt-new/salt/kube-controller-manager/kube-controller-manager.manifest’
==> master: ‘./kubernetes/saltbase/pillar’ -> ‘/srv/salt-new/pillar’
==> master: ‘./kubernetes/saltbase/pillar/cluster-params.sls’ -> ‘/srv/salt-new/pillar/cluster-params.sls’
==> master: ‘./kubernetes/saltbase/pillar/logging.sls’ -> ‘/srv/salt-new/pillar/logging.sls’
==> master: ‘./kubernetes/saltbase/pillar/docker-images.sls’ -> ‘/srv/salt-new/pillar/docker-images.sls’
==> master: ‘./kubernetes/saltbase/pillar/mine.sls’ -> ‘/srv/salt-new/pillar/mine.sls’
==> master: ‘./kubernetes/saltbase/pillar/README.md’ -> ‘/srv/salt-new/pillar/README.md’
==> master: ‘./kubernetes/saltbase/pillar/top.sls’ -> ‘/srv/salt-new/pillar/top.sls’
==> master: ‘./kubernetes/saltbase/pillar/privilege.sls’ -> ‘/srv/salt-new/pillar/privilege.sls’
==> master: ‘./kubernetes/saltbase/reactor’ -> ‘/srv/salt-new/reactor’
==> master: ‘./kubernetes/saltbase/reactor/highstate-new.sls’ -> ‘/srv/salt-new/reactor/highstate-new.sls’
==> master: ‘./kubernetes/saltbase/reactor/highstate-minions.sls’ -> ‘/srv/salt-new/reactor/highstate-minions.sls’
==> master: ‘./kubernetes/saltbase/reactor/README.md’ -> ‘/srv/salt-new/reactor/README.md’
==> master: ‘./kubernetes/saltbase/reactor/highstate-masters.sls’ -> ‘/srv/salt-new/reactor/highstate-masters.sls’
==> master: +++ Installing salt overlay files
==> master: ‘/srv/salt-overlay/salt/kube-proxy/kubeconfig’ -> ‘/srv/salt-new/salt/kube-proxy/kubeconfig’
==> master: ‘/srv/salt-overlay/salt/kubelet/kubernetes_auth’ -> ‘/srv/salt-new/salt/kubelet/kubernetes_auth’
==> master: ‘/srv/salt-overlay/salt/kubelet/kubeconfig’ -> ‘/srv/salt-new/salt/kubelet/kubeconfig’
==> master: ‘/srv/salt-overlay/salt/kube-apiserver/basic_auth.csv’ -> ‘/srv/salt-new/salt/kube-apiserver/basic_auth.csv’
==> master: ‘/srv/salt-overlay/salt/kube-apiserver/known_tokens.csv’ -> ‘/srv/salt-new/salt/kube-apiserver/known_tokens.csv’
==> master: ‘/srv/salt-overlay/pillar/cluster-params.sls’ -> ‘/srv/salt-new/pillar/cluster-params.sls’
==> master: +++ Install binaries from tar: kubernetes-server-linux-amd64.tar.gz
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/hyperkube’ -> ‘/srv/salt-new/salt/kube-bins/hyperkube’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-apiserver’ -> ‘/srv/salt-new/salt/kube-bins/kube-apiserver’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-apiserver.docker_tag’ -> ‘/srv/salt-new/salt/kube-bins/kube-apiserver.docker_tag’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-apiserver.tar’ -> ‘/srv/salt-new/salt/kube-bins/kube-apiserver.tar’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-controller-manager’ -> ‘/srv/salt-new/salt/kube-bins/kube-controller-manager’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-controller-manager.docker_tag’ -> ‘/srv/salt-new/salt/kube-bins/kube-controller-manager.docker_tag’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-controller-manager.tar’ -> ‘/srv/salt-new/salt/kube-bins/kube-controller-manager.tar’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kubectl’ -> ‘/srv/salt-new/salt/kube-bins/kubectl’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kubelet’ -> ‘/srv/salt-new/salt/kube-bins/kubelet’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-proxy’ -> ‘/srv/salt-new/salt/kube-bins/kube-proxy’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kubernetes’ -> ‘/srv/salt-new/salt/kube-bins/kubernetes’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-scheduler’ -> ‘/srv/salt-new/salt/kube-bins/kube-scheduler’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-scheduler.docker_tag’ -> ‘/srv/salt-new/salt/kube-bins/kube-scheduler.docker_tag’
==> master: ‘/tmp/kubernetes.Amk2kU/kubernetes/server/bin/kube-scheduler.tar’ -> ‘/srv/salt-new/salt/kube-bins/kube-scheduler.tar’
==> master: +++ Swapping in new configs
==> master: ‘/srv/salt-new/salt’ -> ‘/srv/salt’
==> master: ‘/srv/salt-new/pillar’ -> ‘/srv/pillar’
==> master: ‘/srv/salt-new/reactor’ -> ‘/srv/reactor’
==> master: /home/vagrant
==> master: Executing configuration
==> master: kubernetes-master:
==> master: True
==> master: kubernetes-minion-2:
==> master: Minion did not return. [Not connected]
==> master: kubernetes-minion-1:
==> master: Minion did not return. [Not connected]
==> master: kubernetes-master:
==> master:   Name: docker - Function: service.running - Result: Changed
==> master:   Name: kubelet - Function: service.running - Result: Changed
==> master:   Name: /srv/pillar/docker-images.sls - Function: file.touch - Result: Changed
==> master:   Name: kube-master-addons - Function: service.running - Result: Changed
==> master:   Name: /etc/kubernetes/addons - Function: file.absent - Result: Changed
==> master:   Name: /etc/kubernetes/addons - Function: file.directory - Result: Changed
==> master:   Name: /etc/kubernetes/addons/dns/skydns-svc.yaml - Function: file.managed - Result: Changed
==> master:   Name: /etc/kubernetes/addons/dns/skydns-rc.yaml - Function: file.managed - Result: Changed
==> master:   Name: kube-addons - Function: service.dead - Result: Changed
==> master:   Name: kube-addons - Function: service.running - Result: Changed
==> master: Summary
==> master: -------------
==> master: Succeeded: 52 (changed=10)
==> master: Failed: 0
==> master: -------------
==> master: Total states run: 52
==> master: kubernetes-minion-2:
==> master:   Minion did not return. [Not connected]
==> master: kubernetes-minion-1:
==> master:   Minion did not return. [Not connected]
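At this point the master's highstate has completed (52 states run, 0 failed), but both minions report "Minion did not return. [Not connected]": they simply had not booted yet, and the log shows them being brought up and provisioned next, with the final validation finding two Ready nodes. If minions were to stay disconnected, the usual SaltStack checks would look like the following. These are standard salt commands, shown in dry-run form (printed, not executed) because the salt CLI only exists inside the VMs:

```shell
#!/bin/sh
# Dry-run helper: print the commands one would run on the master
# (e.g. after `vagrant ssh master`) instead of executing them here.
run() { echo "+ $*"; }

run salt-run manage.status      # lists which minions are up and which are down
run salt '*' test.ping          # each connected minion should answer True
run salt '*' state.highstate    # re-apply the states a reconnected minion missed
```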
==> minion-1: Clearing any previously set forwarded ports...
==> minion-1: Fixed port collision for 22 => 2222. Now on port 2200.
==> minion-1: Clearing any previously set network interfaces...
==> minion-1: Preparing network interfaces based on configuration...
minion-1: Adapter 1: nat
minion-1: Adapter 2: hostonly
==> minion-1: Forwarding ports...
minion-1: 22 => 2200 (adapter 1)
==> minion-1: Running 'pre-boot' VM customizations...
==> minion-1: Booting VM...
==> minion-1: Waiting for machine to boot. This may take a few minutes...
minion-1: SSH address: 127.0.0.1:2200
minion-1: SSH username: vagrant
minion-1: SSH auth method: private key
minion-1: Warning: Connection timeout. Retrying...
==> minion-1: Machine booted and ready!
==> minion-1: Checking for guest additions in VM...
==> minion-1: Configuring and enabling network interfaces...
==> minion-1: Mounting shared folders...
minion-1: /vagrant => /home/nop/k8s/201r/kubernetes
==> minion-1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> minion-1: flag to force provisioning. Provisioners marked to run always will still run.
==> minion-1: Running provisioner: shell...
minion-1: Running: /tmp/vagrant-shell20150629-10932-y9s2o4.sh
==> minion-1: Verifying network configuration
==> minion-1: It looks like the required network bridge has not yet been created
==> minion-1: Installing, enabling prerequisites
==> minion-1: Package openvswitch-2.3.1-3.git20150327.fc21.x86_64 already installed and latest version
==> minion-1: Package bridge-utils-1.5-10.fc21.x86_64 already installed and latest version
==> minion-1: Nothing to do
==> minion-1: Create a new docker bridge
==> minion-1: Cannot find device "cbr0"
==> minion-1: bridge cbr0 doesn't exist; can't delete it
==> minion-1: Add ovs bridge
==> minion-1: ovs-vsctl: no port named gre0
==> minion-1: Add tun device
==> minion-1: ovs-vsctl: no port named tun0
==> minion-1: Add oflow rules
==> minion-1: Creating persistent gre tunnels
==> minion-1: Created persistent gre tunnels
==> minion-1: Add ip route rules such that all pod traffic flows through docker bridge
==> minion-1: Network configuration verified
==> minion-2: Clearing any previously set forwarded ports...
==> minion-2: Fixed port collision for 22 => 2222. Now on port 2201.
==> minion-2: Clearing any previously set network interfaces...
==> minion-2: Preparing network interfaces based on configuration...
minion-2: Adapter 1: nat
minion-2: Adapter 2: hostonly
==> minion-2: Forwarding ports...
minion-2: 22 => 2201 (adapter 1)
==> minion-2: Running 'pre-boot' VM customizations...
==> minion-2: Booting VM...
==> minion-2: Waiting for machine to boot. This may take a few minutes...
minion-2: SSH address: 127.0.0.1:2201
minion-2: SSH username: vagrant
minion-2: SSH auth method: private key
minion-2: Warning: Connection timeout. Retrying...
==> minion-2: Machine booted and ready!
==> minion-2: Checking for guest additions in VM...
==> minion-2: Configuring and enabling network interfaces...
==> minion-2: Mounting shared folders...
minion-2: /vagrant => /home/nop/k8s/201r/kubernetes
==> minion-2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> minion-2: flag to force provisioning. Provisioners marked to run always will still run.
==> minion-2: Running provisioner: shell...
minion-2: Running: /tmp/vagrant-shell20150629-10932-14bwojq.sh
==> minion-2: Verifying network configuration
==> minion-2: It looks like the required network bridge has not yet been created
==> minion-2: Installing, enabling prerequisites
==> minion-2: Package openvswitch-2.3.1-3.git20150327.fc21.x86_64 already installed and latest version
==> minion-2: Package bridge-utils-1.5-10.fc21.x86_64 already installed and latest version
==> minion-2: Nothing to do
==> minion-2: Create a new docker bridge
==> minion-2: Cannot find device "cbr0"
==> minion-2: bridge cbr0 doesn't exist; can't delete it
==> minion-2: Add ovs bridge
==> minion-2: ovs-vsctl: no port named gre0
==> minion-2: Add tun device
==> minion-2: ovs-vsctl: no port named tun0
==> minion-2: Add oflow rules
==> minion-2: Creating persistent gre tunnels
==> minion-2: Created persistent gre tunnels
==> minion-2: Add ip route rules such that all pod traffic flows through docker bridge
==> minion-2: Network configuration verified
Wrote config for vagrant to /home/nop/.kube/config
Each machine instance has been created/updated.
Now waiting for the Salt provisioning process to complete on each machine.
This can take some time based on your network, disk, and cpu speed.
It is possible for an error to occur during Salt provision of cluster and this could loop forever.
Validating master
Validating minion-1
Validating minion-2
Waiting for each minion to be registered with cloud provider
Validating we can run kubectl commands.
NAME                READY     REASON    RESTARTS   AGE
kube-dns-v4-4l4aw   3/3       Running   4          17m
Kubernetes cluster is running. The master is running at:
https://10.245.1.2
The user name and password to use is located in ~/.kubernetes_vagrant_auth.
Found 2 nodes.
NAME           LABELS                                STATUS
1  10.245.1.3  kubernetes.io/hostname=10.245.1.3    Ready
2  10.245.1.4  kubernetes.io/hostname=10.245.1.4    Ready
Validate output:
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   nil
scheduler            Healthy   ok                   nil
etcd-0               Healthy   {"health": "true"}   nil
Cluster validation succeeded
Kubernetes master is running at https://10.245.1.2
KubeDNS is running at https://10.245.1.2/api/v1/proxy/namespaces/default/services/kube-dns
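With the cluster validated, the kubeconfig written to /home/nop/.kube/config (and the credentials in ~/.kubernetes_vagrant_auth) can drive further kubectl commands. A few read-only checks that mirror the validation output above, shown in dry-run form since they need the running Vagrant cluster; note that kubectl flag spellings on v0.20 may differ from modern releases:

```shell
#!/bin/sh
# Dry-run helper: print the follow-up commands instead of executing them,
# since they require the live cluster at https://10.245.1.2.
run() { echo "+ $*"; }

run kubectl cluster-info          # master and service endpoints
run kubectl get nodes             # should list 10.245.1.3 and 10.245.1.4 as Ready
run kubectl get componentstatuses # the "Validate output" table above
run kubectl get pods              # e.g. the kube-dns-v4-4l4aw pod shown earlier
```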