This doc records solutions to issues faced while setting up kubeadm on RHEL EC2, some useful kubectl commands, and Selenium Grid setup commands at the end.

Errors

1. The connection to the server localhost:8080 was refused - did you specify the right host or port?

SOLUTION:

(1)

cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
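Alternatively, the standard steps printed by kubeadm init (not part of the original notes) place the config under $HOME/.kube/config:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config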

(2) kubernetes/kubernetes#48378

export KUBECONFIG=/etc/kubernetes/kubelet.conf
kubectl get nodes

2. kubelet node not ready

SOLUTION:

Clear disk space on the node (disk pressure makes the kubelet report NotReady).
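To confirm it is a disk problem before cleaning up (generic checks, not from the original notes):

df -h
kubectl describe nodes | grep -i pressure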


3. Error: services "https:kubernetes-dashboard:" is forbidden: User "system:node:" cannot get resource "services/proxy" in API group "" in the namespace "kube-system"

OR

Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:node:ip-X-X-X-X.domain" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-system"

TROUBLESHOOTING:

kubectl get role,rolebinding -n kube-system
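These Forbidden errors usually mean kubectl is authenticating as the node user (the kubelet credentials from error 1, option 2); switching KUBECONFIG back to the admin config sidesteps the node's restricted RBAC (this is an inference, not a step from the original notes):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get rolebinding -n kube-system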


4. Unable to update cni config: No networks found in /etc/cni/net.d

May 30 16:32:44 ############################ 33 kubelet[12609]: E0530 16:32:44.965397 12609 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

OR

5. failed to "StartContainer" for "weave-npc" with CrashLoopBackOff

TROUBLESHOOTING :

systemctl -l status kubelet
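The kubelet journal can also be tailed for the underlying error (standard systemd tooling, not from the original notes):

journalctl -u kubelet -f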

SOLUTION:

Add a route to the cluster IP via the master node as gateway:

route add 10.96.0.1 gw x.x.x.x

where 10.96.0.1 is the ClusterIP of the kubernetes service (it can change) and x.x.x.x is the master node IP.
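On images without the legacy net-tools route command, the iproute2 equivalent would be (an assumption; substitute the master node IP):

ip route add 10.96.0.1/32 via x.x.x.x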

Also disable swap:

swapoff -a
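To keep swap disabled across reboots (a companion step, not in the original notes), comment out the swap entry in /etc/fstab:

sed -i '/ swap / s/^/#/' /etc/fstab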

TROUBLESHOOTING -

kubectl logs -n kube-system weave-net-hjpjv weave

SOLUTION - Add the minion's hostname and IP to the no_proxy environment variable in /etc/kubernetes/manifests/kube-apiserver.yaml:

export no_proxy=.............,ip,localhost
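In the static pod manifest this sits under the kube-apiserver container's env section, roughly like this (a sketch; the values are placeholders for the existing entries plus the minion's hostname and IP):

    env:
    - name: no_proxy
      value: "<existing-no_proxy-entries>,<minion-hostname>,<minion-ip>,localhost"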

7. ERROR

PS C:\Users\meghamehta> kubectl logs -n kube-system weave-net-hjpjv weave-npc
INFO: 2019/05/31 09:08:40.446482 Starting Weaveworks NPC 2.5.2; node name "x.x.x.x"
INFO: 2019/05/31 09:08:40.446653 Serving /metrics on :6781
Fri May 31 09:08:40 2019 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
Fri May 31 09:08:40 2019 <7> ulogd_inppkt_NFLOG.c:552 unable to bind to log group 86
Fri May 31 09:08:40 2019 <7> ulogd.c:813 error starting `log1'
Fri May 31 09:08:40 2019 <8> ulogd.c:1430 not even a single working plugin stack
Fatal error.
FATA: 2019/05/31 09:08:40.449348 ulogd terminated: exit status 1

SOLUTION:

sysctl net.ipv4.conf.all.forwarding=1
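To make the setting survive reboots (an addition consistent with the sysctl.conf edits in the Weave setup below):

echo 'net.ipv4.conf.all.forwarding = 1' >> /etc/sysctl.conf
sysctl -p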

8. The node had condition: [DiskPressure].

SOLUTION: Free disk space on the node: delete old logs under /var/log/audit and remove dead containers.
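A possible cleanup sequence (illustrative; prune also deletes stopped containers and unused images, so use with care):

rm -rf /var/log/audit/*
docker container prune -f
docker image prune -a -f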


9. 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.

SOLUTION:
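One common remedy on a two-node kubeadm cluster (an assumption; no fix was noted here) is that the master's NoSchedule taint is blocking scheduling, and it can be removed so pods may run on the master:

kubectl taint nodes --all node-role.kubernetes.io/master-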


WEAVE CNI SETUP

To set pod network using Weave -

nano /etc/sysctl.conf and add the following lines:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1

then enable the iptables bridge hook and reload the config:

sysctl net.bridge.bridge-nf-call-iptables=1
sysctl -p
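If the net.bridge.* keys are reported as unknown, the bridge netfilter kernel module probably needs loading first (a common prerequisite, not part of the original notes):

modprobe br_netfilter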

Spin up weave

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxNCIsIEdpdFZlcnNpb246InYxLjE0LjIiLCBHaXRDb21taXQ6IjY2MDQ5ZTNiMjFlZmUxMTA0NTRkNjdkZjRmYTYyYjA4ZWE3OWExOWIiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE5LTA1LTE2VDE2OjIzOjA5WiIsIEdvVmVyc2lvbjoiZ28xLjEyLjUiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjE0IiwgR2l0VmVyc2lvbjoidjEuMTQuMiIsIEdpdENvbW1pdDoiNjYwNDllM2IyMWVmZTExMDQ1NGQ2N2RmNGZhNjJiMDhlYTc5YTE5YiIsIEdpdFRyZWVTdGF0ZToiY2xlYW4iLCBCdWlsZERhdGU6IjIwMTktMDUtMTZUMTY6MTQ6NTZaIiwgR29WZXJzaW9uOiJnbzEuMTIuNSIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg=="
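To confirm the Weave pods come up after the apply (a simple check, not from the original notes):

kubectl get pods -n kube-system -o wide | grep weave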

To exclude the cluster service network from the HTTP proxy -

export no_proxy=10.96.0.0/12

To set cluster -

kubectl config set-cluster test-cluster --server=https://10.x.x.x:6443 --api-version=v1

Check weave network -

docker network ls

k8s dashboard -

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy

Dashboard link: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

For admin access to dashboard - https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges
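The linked wiki sets up an admin-user ServiceAccount bound to the cluster-admin role, which the bearer-token command below relies on; a rough sketch of that manifest, per the upstream docs (check the link for the current version):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply it with kubectl apply -f <file>.yaml before generating the token (see GENERATE BEARER TOKEN below).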


kubectl get pods,nodes --all-namespaces

Command to join a worker node to the master -

sudo kubeadm join 10.x.x.x:6443 --token xxxxxxxxxxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

GENERATE BEARER TOKEN

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

SELENIUM GRID SETUP

https://medium.com/@subbarao.pilla/k8s-selenium-grid-selenium-grid-with-docker-on-kubernetes-42af8b9a2cba

Starting Selenium Hub

kubectl run selenium-hub --image selenium/hub:3.10.0 --port 4444
kubectl get deployments selenium-hub
kubectl describe deployments selenium-hub
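To check that the hub pod is running (assuming the default run=selenium-hub label that kubectl run applies to the deployment):

kubectl get pods -l run=selenium-hub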

Create and expose service for hub

  1. As LoadBalancer - Note: won't work with kubeadm

kubectl expose deployment selenium-hub --type=LoadBalancer --name=selenium-hub-svc
kubectl get services selenium-hub-svc
kubectl create --filename=C:\work\GIT\personal\examples\staging\selenium\selenium-hub-deployment.yaml

  2. As NodePort - worked!

kubectl expose deployment selenium-hub --type=NodePort
kubectl get services

Output:

NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP          8d
selenium-hub   NodePort    10.104.142.0   <none>        4444:30142/TCP   2m19s

Hub is accessible at http://<node-ip>:30142/grid/console

Useful commands

Delete evicted pods (with --all-namespaces the first column is the namespace and the second the pod name, so both must be passed to the delete):

kubectl get pods --all-namespaces | grep Evicted | awk '{print $2, "-n", $1}' | xargs -L1 kubectl delete pod