@LincolnBryant
Created June 7, 2018 15:11
coreos-23 ~ # kubeadm init --pod-network-cidr=192.168.0.0/16 --feature-gates CoreDNS=true
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[WARNING Hostname]: hostname "coreos-23.grid.uchicago.edu" could not be reached
[WARNING Hostname]: hostname "coreos-23.grid.uchicago.edu" lookup coreos-23.grid.uchicago.edu on 128.135.247.50:53: no such host
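The two Hostname warnings above mean the node's FQDN does not resolve via the configured DNS server (128.135.247.50). If fixing DNS isn't an option, one possible workaround (not part of the original transcript; 10.1.254.23 is the master's IP as shown later in the init output) is a static /etc/hosts entry on each node:

```shell
# Hypothetical workaround: pin the node's FQDN to its primary IP locally
# so kubeadm's hostname lookup succeeds without upstream DNS.
echo "10.1.254.23 coreos-23.grid.uchicago.edu coreos-23" >> /etc/hosts
```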
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [coreos-23.grid.uchicago.edu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.254.23]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [coreos-23.grid.uchicago.edu] and IPs [10.1.254.23]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 18.001672 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node coreos-23.grid.uchicago.edu as master by adding a label and a taint
[markmaster] Master coreos-23.grid.uchicago.edu tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: z6ygxi.s9e8u3ops7sd152w
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.1.254.23:6443 --token z6ygxi.s9e8u3ops7sd152w --discovery-token-ca-cert-hash sha256:b213f1fe2edd221d8662aaea3cf9432429c1ce3337e2e32133922aa950af83b8
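The bootstrap token in the join command above is short-lived (24 hours by default in kubeadm of this vintage). If more machines need to join after it expires, a fresh join command can be generated on the master; a sketch, assuming kubeadm v1.10 — these require a live cluster:

```shell
# Mint a new bootstrap token and print the complete join command,
# including the --discovery-token-ca-cert-hash.
kubeadm token create --print-join-command

# Inspect existing tokens and their expiry times.
kubeadm token list
```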
coreos-23 ~ # kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
clusterrole.rbac.authorization.k8s.io "calico-node" created
clusterrolebinding.rbac.authorization.k8s.io "calico-node" created
coreos-23 ~ # kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap "calico-config" created
service "calico-typha" created
deployment.apps "calico-typha" created
daemonset.extensions "calico-node" created
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgppeers.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "hostendpoints.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworksets.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" created
serviceaccount "calico-node" created
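After applying the Calico manifests, the calico-node DaemonSet has to roll out on every node before pod networking works; the NotReady node seen in the next command is waiting on exactly that. A hedged way to watch the rollout (standard kubectl, kube-system namespace assumed per the default manifest):

```shell
# Watch the calico-node pods schedule onto each node.
kubectl get pods -n kube-system -o wide

# Nodes flip from NotReady to Ready once the CNI plugin is in place.
kubectl get nodes -w
```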
coreos-23 ~ # kubectl get nodes
NAME                          STATUS     ROLES     AGE       VERSION
coreos-17.grid.uchicago.edu   NotReady   <none>    2s        v1.10.1
coreos-23.grid.uchicago.edu   Ready      master    3m        v1.10.1
coreos-9.grid.uchicago.edu    Ready      <none>    1m        v1.10.1
coreos-23 ~ # kubectl get nodes
NAME                          STATUS    ROLES     AGE       VERSION
coreos-17.grid.uchicago.edu   Ready     <none>    5m        v1.10.1
coreos-23.grid.uchicago.edu   Ready     master    9m        v1.10.1
coreos-9.grid.uchicago.edu    Ready     <none>    7m        v1.10.1
coreos-23 ~ # kubectl run nginx --image=nginx
deployment.apps "nginx" created
coreos-23 ~ # kubectl run nginx2 --image=nginx
deployment.apps "nginx2" created
coreos-23 ~ # kubectl run nginx3 --image=nginx
deployment.apps "nginx3" created
coreos-23 ~ # kubectl run nginx2 --image=nginx^C
coreos-23 ~ # kubectl get pods -o wide
NAME                      READY     STATUS      RESTARTS   AGE       IP             NODE
busybox                   0/1       Completed   0          24s       192.168.1.18   coreos-9.grid.uchicago.edu
nginx-65899c769f-5k67r    1/1       Running     0          10s       192.168.2.2    coreos-17.grid.uchicago.edu
nginx2-77b5c9d48c-zqks7   1/1       Running     0          8s        192.168.1.19   coreos-9.grid.uchicago.edu
nginx3-689c5cbd79-mfp7d   1/1       Running     0          6s        192.168.2.3    coreos-17.grid.uchicago.edu
coreos-23 ~ # kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
Error from server (AlreadyExists): pods "busybox" already exists
coreos-23 ~ # kubectl delete pod busybox
pod "busybox" deleted
coreos-23 ~ # kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # ping 192.168.2.2
PING 192.168.2.2 (192.168.2.2): 56 data bytes
^C
--- 192.168.2.2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ # curl 192.168.2.2
sh: curl: not found
/ # pod default/busybox terminated (Error)
coreos-23 ~ # kubectl run -i --tty netshoot --image=nicolaka/netshoot --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # curl
curl: try 'curl --help' or 'curl --manual' for more information
/ # curl 192.168.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ # ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10) 56(84) bytes of data.
^C
--- 10.96.0.10 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1040ms
/ # telnet 10.96.0.10 53
asdf
^C
/ # nmap 10.96.0.10 -p53
Starting Nmap 7.70 ( https://nmap.org ) at 2018-06-07 15:10 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.09 seconds
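The transcript ends with the cluster DNS ClusterIP (10.96.0.10) apparently unreachable from a pod. Note that the ping failure alone proves little: a ClusterIP is a virtual address programmed by kube-proxy and typically does not answer ICMP, which is also why nmap reports the host as down. The failed TCP connect on port 53 is the real symptom. A few hedged next steps for diagnosing it (names assume the default kubeadm CoreDNS deployment, which keeps the kube-dns service name and label; all commands need a live cluster):

```shell
# Are the CoreDNS pods actually running?
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

# Does the kube-dns Service have endpoints behind the ClusterIP?
kubectl get endpoints kube-dns -n kube-system

# From a debug pod (netshoot ships dig), query the DNS service directly:
dig @10.96.0.10 kubernetes.default.svc.cluster.local +short

# kube-proxy programs the ClusterIP -> pod translation; check its logs:
kubectl logs -n kube-system -l k8s-app=kube-proxy
```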