1. The connection to the server localhost:8080 was refused - did you specify the right host or port?
SOLUTION:
(1)
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
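The same kubeconfig can also be installed persistently so new shells pick it up without the export; a sketch of the usual kubeadm post-init steps, assuming a default install under /etc/kubernetes:

```shell
# Persistent alternative to the export above (paths are the kubeadm defaults).
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```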
(2) kubernetes/kubernetes#48378
export KUBECONFIG=/etc/kubernetes/kubelet.conf
kubectl get nodes
SOLUTION:
Free up disk space on the node.
3. Error: services "https:kubernetes-dashboard:" is forbidden: User "system:node:" cannot get resource "services/proxy" in API group "" in the namespace "kube-system"
OR
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:node:ip-X-X-X-X.domain" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "kube-system"
kubectl get role,rolebinding -n kube-system
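The `system:node:` user in these errors usually means kubectl is still running with the kubelet's credentials (e.g. after the `KUBECONFIG=/etc/kubernetes/kubelet.conf` workaround from issue 1). Pointing KUBECONFIG back at admin.conf restores full access; a sketch, assuming a default kubeadm layout:

```shell
# Switch back to the cluster-admin kubeconfig, then verify the access
# that was previously forbidden.
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl auth can-i get services/proxy -n kube-system
```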
May 30 16:32:44 ############################ 33 kubelet[12609]: E0530 16:32:44.965397 12609 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
TROUBLESHOOTING:
systemctl -l status kubelet
SOLUTION:
Add the cluster IP to the route table with the master node as gateway:
route add 10.96.0.1 gw x.x.x.x
where 10.96.0.1 is the kubernetes service ClusterIP (it can change) and x.x.x.x is the master node IP.
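On systems without the legacy net-tools `route` command, the same entry can be added with iproute2 (a sketch; substitute the real master node IP for x.x.x.x):

```shell
# iproute2 equivalent of `route add 10.96.0.1 gw x.x.x.x`; run as root on the worker node.
ip route add 10.96.0.1/32 via x.x.x.x
```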
Also disable swap, which the kubelet requires:
swapoff -a
6. Error from server: Get https://10.x.x.x:10250/containerLogs/kube-system/weave-net-hjpjv/weave: Forbidden
TROUBLESHOOTING:
kubectl logs -n kube-system weave-net-hjpjv weave
SOLUTION: Add the minion's hostname and IP to the no_proxy environment variable in /etc/kubernetes/manifests/kube-apiserver.yaml:
export no_proxy=.............,ip,localhost
PS C:\Users\meghamehta> kubectl logs -n kube-system weave-net-hjpjv weave-npc
INFO: 2019/05/31 09:08:40.446482 Starting Weaveworks NPC 2.5.2; node name "x.x.x.x"
INFO: 2019/05/31 09:08:40.446653 Serving /metrics on :6781
Fri May 31 09:08:40 2019 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
Fri May 31 09:08:40 2019 <7> ulogd_inppkt_NFLOG.c:552 unable to bind to log group 86
Fri May 31 09:08:40 2019 <7> ulogd.c:813 error starting `log1'
Fri May 31 09:08:40 2019 <8> ulogd.c:1430 not even a single working plugin stack
Fatal error.
FATA: 2019/05/31 09:08:40.449348 ulogd terminated: exit status 1
SOLUTION:
sysctl net.ipv4.conf.all.forwarding=1
SOLUTION: Delete /var/log/audit and remove dead containers to free disk space.
SOLUTION:
To set up the pod network using Weave, first enable IP forwarding and bridge netfilter. Add to /etc/sysctl.conf:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Then reload the settings:
sysctl -p
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIxNCIsIEdpdFZlcnNpb246InYxLjE0LjIiLCBHaXRDb21taXQ6IjY2MDQ5ZTNiMjFlZmUxMTA0NTRkNjdkZjRmYTYyYjA4ZWE3OWExOWIiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE5LTA1LTE2VDE2OjIzOjA5WiIsIEdvVmVyc2lvbjoiZ28xLjEyLjUiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjE0IiwgR2l0VmVyc2lvbjoidjEuMTQuMiIsIEdpdENvbW1pdDoiNjYwNDllM2IyMWVmZTExMDQ1NGQ2N2RmNGZhNjJiMDhlYTc5YTE5YiIsIEdpdFRyZWVTdGF0ZToiY2xlYW4iLCBCdWlsZERhdGU6IjIwMTktMDUtMTZUMTY6MTQ6NTZaIiwgR29WZXJzaW9uOiJnbzEuMTIuNSIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg=="
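The long k8s-version value in that URL is just the output of `kubectl version` base64-encoded; on a live master it is usually generated inline as `kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"`. The encoding round-trip can be shown without a cluster using a stand-in string:

```shell
# Encode a stand-in version string the same way the Weave URL does,
# then decode it to show the round trip (no cluster needed).
encoded=$(printf 'Client Version: v1.14.2' | base64 | tr -d '\n')
printf '%s' "$encoded" | base64 -d
# prints: Client Version: v1.14.2
```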
To exclude the cluster service network from the HTTP proxy -
export no_proxy=10.96.0.0/12
To set cluster -
kubectl config set-cluster test-cluster --server=https://10.x.x.x:6443 --api-version=v1
Check weave network -
docker network ls
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy
Dashboard link: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
For admin access to dashboard - https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges
kubectl get pods,nodes --all-namespaces
sudo kubeadm join 10.x.x.x:6443 --token xxxxxxxxxxxxxxxxxxxxxxxx \
--discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
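The `$( )` sub-shell above just selects the NAME column of the admin-user secret; what the grep/awk pair does can be shown on canned `kubectl get secret` output (the secret names below are made up):

```shell
# Canned output standing in for `kubectl -n kube-system get secret`.
sample='NAME                     TYPE                                  DATA   AGE
admin-user-token-abc12   kubernetes.io/service-account-token   3      8d
default-token-xyz89      kubernetes.io/service-account-token   3      8d'
printf '%s\n' "$sample" | grep admin-user | awk '{print $1}'
# prints: admin-user-token-abc12
```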
kubectl run selenium-hub --image selenium/hub:3.10.0 --port 4444
kubectl get deployments selenium-hub
kubectl describe deployments selenium-hub
- As LoadBalancer (note: won't get an external IP on a bare kubeadm cluster, which has no cloud load balancer)
kubectl expose deployment selenium-hub --type=LoadBalancer --name=selenium-hub-svc
kubectl get services selenium-hub-svc
kubectl create --filename=C:\work\GIT\personal\examples\staging\selenium\selenium-hub-deployment.yaml
- As nodeport: Worked!
kubectl expose deployment selenium-hub --type=NodePort
kubectl get services
Output:
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP          8d
selenium-hub   NodePort    10.104.142.0   <none>        4444:30142/TCP   2m19s
Hub is accessible on http://<node-ip>:30142/grid/console
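The node port (30142 here) comes from the PORT(S) column, formatted as port:nodePort/protocol; pulling it out of a `kubectl get services` line can be sketched as:

```shell
# Split the PORT(S) field (column 5) on ':' and '/' to isolate the nodePort.
line='selenium-hub   NodePort    10.104.142.0   <none>        4444:30142/TCP   2m19s'
echo "$line" | awk '{split($5, p, "[:/]"); print p[2]}'
# prints: 30142
```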
kubectl get pods --all-namespaces | grep Evicted | awk '{print "-n "$1" "$2}' | xargs -L1 kubectl delete pod
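Note that with --all-namespaces the first column is NAMESPACE, not the pod name, so each delete has to carry both; a dry run on canned output (echo stands in for the real kubectl delete):

```shell
# Canned `kubectl get pods --all-namespaces` output with one Evicted pod.
sample='NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
default       web-7d4b9c6f-abcde        0/1     Evicted   0          1d
kube-system   coredns-fb8b8dccf-qwert   1/1     Running   2          8d'
printf '%s\n' "$sample" | grep Evicted | awk '{print "-n "$1" "$2}' | xargs -L1 echo kubectl delete pod
# prints: kubectl delete pod -n default web-7d4b9c6f-abcde
```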