kubernetes notepad
kubectl get componentstatuses
source <(kubectl completion bash)
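#To make completion permanent for the current user (standard kubectl convenience, added here as a note):
echo 'source <(kubectl completion bash)' >> ~/.bashrc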
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=${MASTER_IP}" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
cat > csr.conf <<EOF
[ req ]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
C = <country>
ST = <state>
L = <city>
O = <organization>
OU = <organization unit>
CN = <MASTER_IP>
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = <MASTER_IP>
IP.2 = <MASTER_CLUSTER_IP>
[ v3_ext ]
authorityKeyIdentifier=keyid,issuer:always
basicConstraints=CA:FALSE
keyUsage=keyEncipherment,dataEncipherment
extendedKeyUsage=serverAuth,clientAuth
subjectAltName=@alt_names
EOF
openssl req -new -key server.key -out server.csr -config csr.conf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 10000 -extensions v3_ext -extfile csr.conf
openssl x509 -noout -text -in ./server.crt
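#Optional sanity check that the server cert chains back to the CA generated above (same files as above):
openssl verify -CAfile ca.crt server.crt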
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
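#A health check against the same endpoint, reusing the TLS flags from the member list above:
sudo ETCDCTL_API=3 etcdctl endpoint health \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem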
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
kubectl exec -ti $POD_NAME -- nslookup kubernetes
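#The busybox pod queried above can be created with something like this (the 1.28 image tag is an assumption; newer busybox images have known nslookup quirks):
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600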
kubectl run nginx --image=nginx --port=80 --record
kubectl rollout history deployment nginx
kubectl rollout status deployment nginx
kubectl rollout undo deployment nginx --to-revision=2
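#Rollout history only shows revisions once something changed; a sketch that triggers a new revision (the image tag is an assumption):
kubectl set image deployment/nginx nginx=nginx:1.17 --record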
1) Start by creating a directory for user credentials: "mkdir -p ~/.kube/users". Generate a private key for each user and store it in that directory, then use the key to generate a CSR (the CSR carries the user name as CN and the groups as O). For example, the user "prudhvi" is added to the cluster with the following steps.
#Generate a private key: prudhvi.key
openssl genrsa -out prudhvi.key 2048
#Generate a CSR for user prudhvi: prudhvi.csr
openssl req -new -key prudhvi.key -out prudhvi.csr -subj "/CN=prudhvi/O=ops/O=example.org"
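#Confirm the CSR carries the expected user (CN) and groups (O) before signing:
openssl req -in prudhvi.csr -noout -subject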
2) Copy the cluster's CA certificate and key from /etc/kubernetes/pki to the ~/.kube/users directory created in the previous step (named ca.pem and ca-key.pem here, matching the signing command below) and use them to sign the user's CSR.
#Input is the CSR; output is the signed user certificate (.crt)
openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 730 -in prudhvi.csr -out prudhvi.crt
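#Likewise, inspect the issued cert's subject and validity window:
openssl x509 -in prudhvi.crt -noout -subject -issuer -dates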
3) Set up a kubeconfig context for the user "prudhvi". The namespace is optional and should be set only if the user is meant to work in a particular namespace; otherwise the "default" namespace is used.
kubectl config set-credentials prudhvi --client-certificate=/absolute/path/to/prudhvi.crt --client-key=/absolute/path/to/prudhvi.key
#Name the context "<user>-<nameofthecluster>"; here that gives "prudhvi-prod"
kubectl config set-context prudhvi-prod --cluster=prod --user=prudhvi --namespace=<>
#List the contexts, then switch to the one just created
kubectl config get-contexts
#Another example: kubectl config set-context yono-dev --namespace=development
kubectl config use-context prudhvi-prod
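#From an admin context you can confirm the new user has no access yet (the --as/--as-group impersonation check is an assumption; it requires impersonate rights):
kubectl auth can-i get pods --as prudhvi --as-group ops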
4) After switching the context to prudhvi-prod, the user (prudhvi) will still not have access to run "kubectl get pods". To grant access to cluster resources, create a Role and a RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-role
  #if required, specify a particular namespace
  namespace: <>
rules:
#the rules below apply to all resources inside the namespace
- apiGroups: ["*"]
  #resource objects, e.g. services, deployments, pv, pvc ...
  resources: ["*"]
  #read-only access
  verbs:
  - get
  - list
  - watch
#Map the role to a group by creating a RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rolebinding
  namespace: <>
roleRef:
  kind: Role
  #must match the name of the Role created above
  name: user-role
  apiGroup: rbac.authorization.k8s.io
subjects:
#bind to users, groups, or service accounts; Group subjects take no namespace field, and to cover user prudhvi the group must match an O in the cert (ops or example.org)
- kind: Group
  name: interns
  apiGroup: rbac.authorization.k8s.io
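#Assuming the two manifests above are saved as role.yaml and rolebinding.yaml (file names are assumptions), apply them and re-check access:
kubectl apply -f role.yaml
kubectl apply -f rolebinding.yaml
kubectl auth can-i list pods --as prudhvi --as-group interns --namespace=<>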
5) A ClusterRoleBinding grants access across all namespaces in the cluster; unlike a RoleBinding, its roleRef must be a ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  #cluster-scoped, so no namespace field
  name: interns
roleRef:
  #must reference a ClusterRole, e.g. a cluster-wide version of the user-role above
  kind: ClusterRole
  name: <nameofclusterrole>
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ops
  apiGroup: rbac.authorization.k8s.io
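#The roleRef above needs an actual ClusterRole; a minimal read-only sketch (the name is a placeholder) mirroring the user-role rules:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <nameofclusterrole>
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]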
etcdctl --endpoints=http://10.0.0.2:2379,http://10.0.0.3:2379 member list
etcdctl member remove 8211f1d0f64f3269
etcdctl member add member4 --peer-urls=http://10.0.0.4:2380
export ETCD_NAME="member4"
export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
export ETCD_INITIAL_CLUSTER_STATE=existing
#start the new member with the environment exported above plus its runtime flags
etcd [flags]
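#For example (peer URL follows the member add above; the client ports are an assumption):
etcd --listen-peer-urls http://10.0.0.4:2380 \
--initial-advertise-peer-urls http://10.0.0.4:2380 \
--listen-client-urls http://10.0.0.4:2379 \
--advertise-client-urls http://10.0.0.4:2379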
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshot.db
# alternatively, copy member/snap/db straight out of a member's data directory
ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshot.db
# restores snapshot.db into a new cluster; a restore always initializes a brand-new cluster
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
--name m1 \
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls http://host1:2380
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
--name m2 \
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls http://host2:2380
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
--name m3 \
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls http://host3:2380
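#After restoring on each host, start etcd against its restored data directory; a sketch for host1 (the m1.etcd path is an assumption, matching etcd's <name>.etcd default):
etcd --name m1 --data-dir m1.etcd \
--listen-peer-urls http://host1:2380 \
--initial-advertise-peer-urls http://host1:2380 \
--listen-client-urls http://host1:2379 \
--advertise-client-urls http://host1:2379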