Transcript of a successful Kubernetes cluster creation on Google Compute Engine
$ cluster/kube-up.sh
Starting cluster using provider: gce
... calling verify-prereqs
... calling kube-up
Project: kubernetes-satnam2
Zone: us-central1-b
+++ Staging server tars to Google Storage: gs://kubernetes-staging-a2dc4/devel
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl config view -o template --template={{$dot := .}}{{with $ctx := index $dot "current-context"}}{{$user := index $dot "contexts" $ctx "user"}}{{index $dot "users" $user "auth-path"}}{{end}}
Starting VMs and configuring firewalls
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#pdperformance.
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/zones/us-central1-b/disks/kubernetes-master-pd].
NAME ZONE SIZE_GB TYPE STATUS
kubernetes-master-pd us-central1-b 10 pd-standard READY
+++ Logging using Fluentd to elasticsearch
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/firewalls/kubernetes-master-https].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
kubernetes-master-https default 0.0.0.0/0 tcp:443 kubernetes-master
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/firewalls/kubernetes-minion-all].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
kubernetes-minion-all default 10.244.0.0/16 tcp,udp,icmp,esp,ah,sctp kubernetes-minion
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/zones/us-central1-b/instances/kubernetes-master].
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
kubernetes-master us-central1-b n1-standard-1 10.240.69.132 104.197.5.247 RUNNING
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#pdperformance.
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/instanceTemplates/kubernetes-minion-template].
NAME MACHINE_TYPE CREATION_TIMESTAMP
kubernetes-minion-template n1-standard-1 2015-03-04T11:39:24.718-08:00
Managed instance group kubernetes-minion-group is being created. Operation: operation-1425497981748-6e209b54-0983-4170-977b-48791a758277
Waiting for minions to run. 0 out of 4 running. Retrying.
Waiting for minions to run. 0 out of 4 running. Retrying.
Waiting for minions to run. 0 out of 4 running. Retrying.
Waiting for minions to run. 0 out of 4 running. Retrying.
Waiting for minions to run. 0 out of 4 running. Retrying.
Waiting for minions to run. 0 out of 4 running. Retrying.
Waiting for minions to run. 2 out of 4 running. Retrying.
MINION_NAMES=kubernetes-minion-2il2 kubernetes-minion-h5kt kubernetes-minion-hqhv kubernetes-minion-kmin
Updated [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/zones/us-central1-b/instances/kubernetes-minion-h5kt].
Updated [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/zones/us-central1-b/instances/kubernetes-minion-kmin].
Updated [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/zones/us-central1-b/instances/kubernetes-minion-hqhv].
Updated [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/zones/us-central1-b/instances/kubernetes-minion-2il2].
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/routes/kubernetes-minion-hqhv].
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
kubernetes-minion-hqhv default 10.244.2.0/24 us-central1-b/instances/kubernetes-minion-hqhv 1000
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/routes/kubernetes-minion-kmin].
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/routes/kubernetes-minion-2il2].
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
kubernetes-minion-2il2 default 10.244.0.0/24 us-central1-b/instances/kubernetes-minion-2il2 1000
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
kubernetes-minion-kmin default 10.244.3.0/24 us-central1-b/instances/kubernetes-minion-kmin 1000
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/routes/kubernetes-minion-h5kt].
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
kubernetes-minion-h5kt default 10.244.1.0/24 us-central1-b/instances/kubernetes-minion-h5kt 1000
Using master: kubernetes-master (external IP: 104.197.5.247)
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/regions/us-central1/addresses/kubernetes-master-ip].
NAME REGION ADDRESS STATUS
kubernetes-master-ip us-central1 104.197.5.247 IN_USE
Waiting for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This might loop forever if there was some uncaught error during start
up.
Kubernetes cluster created.
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl config set-cluster kubernetes-satnam2_kubernetes --server=https://104.197.5.247 --certificate-authority=/Users/satnam/.kube/kubernetes-satnam2_kubernetes/kubernetes.ca.crt --global
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl config set-credentials kubernetes-satnam2_kubernetes-admin --auth-path=/Users/satnam/.kube/kubernetes-satnam2_kubernetes/kubernetes_auth --global
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl config set-context kubernetes-satnam2_kubernetes --cluster=kubernetes-satnam2_kubernetes --user=kubernetes-satnam2_kubernetes-admin --global
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl config use-context kubernetes-satnam2_kubernetes --global
Wrote /Users/satnam/.kube/kubernetes-satnam2_kubernetes/kubernetes_auth
Sanity checking cluster...
Attempt 1 to check Docker on node kubernetes-minion-2il2 ... [working]
Attempt 1 to check Docker on node kubernetes-minion-h5kt ... [working]
Attempt 1 to check Docker on node kubernetes-minion-hqhv ... [working]
Attempt 1 to check Docker on node kubernetes-minion-kmin ... [working]
Kubernetes cluster is running. The master is running at:
https://104.197.5.247
The user name and password to use is located in /Users/satnam/.kube/kubernetes-satnam2_kubernetes/kubernetes_auth.
... calling validate-cluster
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl config view -o template --template={{$dot := .}}{{with $ctx := index $dot "current-context"}}{{$user := index $dot "contexts" $ctx "user"}}{{index $dot "users" $user "auth-path"}}{{end}}
Project: kubernetes-satnam2
Zone: us-central1-b
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl get minions -o template -t {{range.items}}{{.id}}
{{end}}
Found 4 nodes.
1 kubernetes-minion-2il2.c.kubernetes-satnam2.internal
2 kubernetes-minion-h5kt.c.kubernetes-satnam2.internal
3 kubernetes-minion-hqhv.c.kubernetes-satnam2.internal
4 kubernetes-minion-kmin.c.kubernetes-satnam2.internal
Attempt 1 at checking Kubelet installation on node kubernetes-minion-2il2 ... [working]
Attempt 1 at checking Kubelet installation on node kubernetes-minion-h5kt ... [working]
Attempt 1 at checking Kubelet installation on node kubernetes-minion-hqhv ... [working]
Attempt 1 at checking Kubelet installation on node kubernetes-minion-kmin ... [working]
Cluster validation succeeded
... calling setup-monitoring-firewall
Setting up firewalls to Heapster based cluster monitoring.
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/firewalls/kubernetes-monitoring-heapster].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
kubernetes-monitoring-heapster default 0.0.0.0/0 tcp:80,tcp:8083,tcp:8086 kubernetes-minion
Grafana dashboard will be available at https://104.197.5.247/api/v1beta1/proxy/services/monitoring-grafana/. Wait for the monitoring dashboard to be online.
... calling setup-logging-firewall
Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam2/global/firewalls/kubernetes-fluentd-elasticsearch-logging].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
kubernetes-fluentd-elasticsearch-logging default 0.0.0.0/0 tcp:5601,tcp:9200,tcp:9300 kubernetes-minion
waiting for logging services to be created by the master.
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl get services -l name=kibana-logging -o template -t {{range.items}}{{.id}}{{end}}
current-context: "kubernetes-satnam2_kubernetes"
Running: cluster/../cluster/gce/../../cluster/../cluster/gce/../../_output/dockerized/bin/darwin/amd64/kubectl get services -l name=elasticsearch-logging -o template -t {{range.items}}{{.id}}{{end}}
Cluster logs are ingested into Elasticsearch running at https://104.197.5.247/api/v1beta1/proxy/services/elasticsearch-logging/
Kibana logging dashboard will be available at https://104.197.5.247/api/v1beta1/proxy/services/kibana-logging/ (note the trailing slash)
Done
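The Grafana, Elasticsearch, and Kibana URLs printed near the end of the transcript all follow the same pattern: the service is reached through the master's API-server proxy at https://<master-ip>/api/v1beta1/proxy/services/<service-name>/ (note the trailing slash). A minimal sketch of that pattern, using the master IP from this run; the `proxy_url` helper name is hypothetical and the v1beta1 path is specific to the Kubernetes version shown here:

```shell
#!/bin/sh
# Master external IP as reported by kube-up.sh in the transcript above.
MASTER_IP="104.197.5.247"

# Hypothetical helper: build the API-server proxy URL for a named service.
# The trailing slash matters, as the transcript notes for Kibana.
proxy_url() {
  echo "https://${MASTER_IP}/api/v1beta1/proxy/services/$1/"
}

proxy_url monitoring-grafana
proxy_url elasticsearch-logging
proxy_url kibana-logging
```

Opening any of these URLs prompts for the basic-auth user name and password that kube-up.sh wrote to the kubernetes_auth file mentioned above.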