- AWS
- Kubernetes
- CI/CD (Jenkins)
- Docker
- Vault
What exactly this project does: it automates the deployment of a web application into a Kubernetes (K8s) cluster, where the frontend of the application is deployed automatically while the backend (database) and the Vault cluster are already running inside K8s.
- In this project, I first created a source code repository on GitHub.
- Created a Kubernetes cluster using AWS EKS.
- Created a public Docker registry to store images.
- Created a Jenkins pipeline and a K8s user for Jenkins so it can manage K8s workloads seamlessly.
- Set up MongoDB as a StatefulSet inside the Kubernetes cluster.
- Set up a Vault cluster inside the K8s cluster and enabled Kubernetes authentication so that workloads running inside K8s can fetch secrets from the Vault cluster.
- On every push or update to the source code repository, the Jenkins pipeline gets triggered.
- Once triggered, Jenkins does the following:
  - Fetches the new/updated source code from the repository.
  - Builds a container (Docker) image from the fetched source code.
  - Pushes the built image to Docker Hub.
  - Deploys the YAML files (K8s resources) into the Kubernetes (AWS EKS) cluster.
- At the end, K8s pulls the latest build from Docker Hub (the image registry) and starts/restarts the application.
- The started application fetches database (MongoDB) credentials from the Vault cluster using its service account token (JWT), because Kubernetes authentication is enabled in the Vault cluster for that particular service account with a policy, to obtain access to the database.
- On accessing the application through the external service, the request goes to the database and the fetched data is displayed to the user via the application frontend.
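In concrete terms, the secret fetch in the last step boils down to two Vault HTTP calls. A minimal sketch with curl, assuming Vault is reachable at `http://vault:8200` inside the cluster and `jq` is available (both the address and the `jq` usage are assumptions, not part of the project):

```shell
# Log in to Vault's Kubernetes auth method using the pod's service account token (JWT)
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
VAULT_TOKEN=$(curl -s --request POST \
  --data "{\"role\": \"mongodb\", \"jwt\": \"$JWT\"}" \
  http://vault:8200/v1/auth/kubernetes/login | jq -r '.auth.client_token')

# Read the MongoDB credentials stored at secret/data/mongodb (KV v2 path)
curl -s --header "X-Vault-Token: $VAULT_TOKEN" \
  http://vault:8200/v1/secret/data/mongodb | jq '.data.data'
```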
- Firstly, created a source code repository.
- Then, created a public Docker registry to store the application container image on push.
- Created an EKS cluster in my AWS environment.
- Created a `jenkins` user in my K8s cluster by the following steps; it's used by Jenkins to manage (create/delete) workloads inside the K8s cluster.
- Generated a private key:

  ```shell
  $ openssl genrsa -out jenkins.key 2048
  ```
- Created a Certificate Signing Request (CSR):

  ```shell
  $ openssl req -new -key jenkins.key -out jenkins.csr -subj "/CN=jenkins"
  ```
- Created a `CertificateSigningRequest` resource for K8s:

  ```yaml
  # csr.yaml
  apiVersion: certificates.k8s.io/v1
  kind: CertificateSigningRequest
  metadata:
    name: jenkins
  spec:
    request: BASE64_ENCODED_CSR
    signerName: kubernetes.io/kube-apiserver-client
    usages:
    - client auth
  ```
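- The `BASE64_ENCODED_CSR` placeholder is filled with the base64-encoded contents of `jenkins.csr`, as a single line with no wrapping; for example:

  ```shell
  # jenkins.csr comes from the previous step; paste the output into csr.yaml as BASE64_ENCODED_CSR
  base64 < jenkins.csr | tr -d '\n'
  ```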
- Applied & approved the CSR in my Kubernetes cluster:

  ```shell
  # Applying
  $ kubectl apply -f csr.yaml
  # Approving
  $ kubectl certificate approve jenkins
  ```
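- Once approved, the signed client certificate can be pulled out of the CSR object's status (a standard pattern for the certificates API):

  ```shell
  # Extract the issued certificate for the jenkins user into jenkins.crt
  kubectl get csr jenkins -o jsonpath='{.status.certificate}' | base64 -d > jenkins.crt
  ```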
- Created a `ClusterRole` & `ClusterRoleBinding` for the `jenkins` user:

  ```yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: jenkins
  rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: jenkins
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: jenkins
  subjects:
  - kind: User
    name: jenkins
  ```
- Then, set up the created `key` and `crt` for the `jenkins` user inside Jenkins as `k8s-creadentials`.
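- For verifying the user locally before wiring it into Jenkins, the key/cert pair can be added to a kubeconfig context; a sketch, assuming the cluster is already defined in the local kubeconfig as `MY_CLUSTER_NAME`:

  ```shell
  # Register the jenkins user and a context for it in the local kubeconfig
  kubectl config set-credentials jenkins --client-key=jenkins.key --client-certificate=jenkins.crt --embed-certs=true
  kubectl config set-context jenkins-ctx --cluster=MY_CLUSTER_NAME --user=jenkins
  # Should answer "yes" given the ClusterRole above
  kubectl --context=jenkins-ctx auth can-i create deployments
  ```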
- Installed a Vault cluster inside the K8s cluster, stored credentials there, and enabled Kubernetes authentication by the following steps:
- Stored the database credentials inside Vault:

  ```shell
  # Connecting to the running pod
  $ kubectl exec --stdin=true --tty=true vault-0 -- /bin/sh
  # Creating a secret named `mongodb`
  $ vault kv put -mount=secret mongodb username=USERNAME password=PASSWORD
  ```
- Enabled K8s authentication and created/permitted a `ServiceAccount` so that workloads running inside K8s can fetch secrets using that service account's tokens and access the database:

  ```shell
  # Enable K8s authentication
  $ vault auth enable kubernetes
  # Configure the Kubernetes authentication
  $ vault write auth/kubernetes/config \
      kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443"
  # Write a policy named `mongodb` that enables the read capability for the secret at path `secret/data/mongodb`
  $ vault policy write mongodb - <<EOF
  path "secret/data/mongodb" {
    capabilities = ["read"]
  }
  EOF
  # Create a Kubernetes authentication role named `mongodb`
  $ vault write auth/kubernetes/role/mongodb \
      bound_service_account_names=vault \
      bound_service_account_namespaces=default \
      policies=mongodb \
      ttl=24h
  ```
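- The role above binds the `vault` ServiceAccount in the `default` namespace. If it isn't already present (the Vault Helm chart usually creates one), it would look like:

  ```yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: vault
    namespace: default
  ```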
- Then, set up the MongoDB StatefulSet by the following steps:
- Deployed the MongoDB StatefulSet inside K8s:

  ```yaml
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: mongodb-svc
    labels:
      app: mongodb
  spec:
    ports:
    - name: mongodb
      port: 27017
    clusterIP: None
    selector:
      app: mongodb
  ---
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: mongodb
  spec:
    selector:
      matchLabels:
        app: mongodb
    serviceName: mongodb-svc
    replicas: 3
    template:
      metadata:
        labels:
          app: mongodb
      spec:
        containers:
        - name: mongodb
          image: mongo:latest
          env:
          - name: MONGO_INITDB_ROOT_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: username
          - name: MONGO_INITDB_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: password
          command:
          - "mongod"
          - "--bind_ip"
          - "0.0.0.0"
          - "--replSet"
          - "MainRepSet"
          resources:
            requests:
              cpu: "0.2"
              memory: 200Mi
          ports:
          - name: mongodb
            containerPort: 27017
          volumeMounts:
          - name: data
            mountPath: /data/db
    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: mongodb
    labels:
      svc: mongodb
  spec:
    ports:
    - port: 27017
      name: db
      targetPort: 27017
    selector:
      statefulset.kubernetes.io/pod-name: mongodb-0
  ```
- Then, set up a MongoDB cluster out of the three StatefulSet pods (pod hostnames follow the `<pod>.<headless-service>` pattern, so they use the `mongodb-svc` service defined above):

  ```shell
  # Connecting to the mongodb-0 pod.
  $ kubectl exec -it mongodb-0 -- /bin/sh
  # Connecting to the mongo shell inside the mongodb-0 pod.
  $ mongosh
  # Initialising the MongoDB cluster.
  > rs.initiate({
      _id: "MainRepSet",
      version: 1,
      members: [
        { _id: 0, host: "mongodb-0.mongodb-svc.default.svc.cluster.local:27017" },
        { _id: 1, host: "mongodb-1.mongodb-svc.default.svc.cluster.local:27017" },
        { _id: 2, host: "mongodb-2.mongodb-svc.default.svc.cluster.local:27017" }
      ]});
  # Disconnecting & reconnecting
  > exit
  $ mongosh
  # Creating an admin user in MongoDB
  MainRepSet:PRIMARY> db.getSiblingDB("admin").createUser({
  ...   user: "USERNAME",
  ...   pwd: "PASSWORD",
  ...   roles: [ { role: "root", db: "admin" } ]
  ... });
  ```
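- The replica set state can then be checked from the same pod with the standard `rs.status()` helper; one member should report PRIMARY and the other two SECONDARY:

  ```shell
  # Print each member's name and state
  $ mongosh --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'
  ```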
- Then, I created a Jenkins pipeline with the following Jenkinsfile:

  ```groovy
  pipeline {
    environment {
      dockerimagename = "mdsahiloss/vault-simple-user-profile-page"
      dockerImage = ""
    }
    agent any
    stages {
      stage('Checkout Source') {
        steps {
          git 'https://github.com/MdSahil-oss/vault-simple-user-profile-page'
        }
      }
      stage('Build image') {
        steps {
          script {
            dockerImage = docker.build dockerimagename
          }
        }
      }
      stage('Pushing Image') {
        environment {
          registryCredential = 'DockerhubCredentials'
        }
        steps {
          script {
            docker.withRegistry('https://registry.hub.docker.com', registryCredential) {
              dockerImage.push("latest")
            }
          }
        }
      }
      stage('Deploying Application container to Kubernetes') {
        steps {
          withKubeConfig([
            clusterName: 'MY_CLUSTER_NAME',
            namespace: 'default',
            contextName: 'jenkins-ctx',
            serverUrl: 'KUBERNETES_URL',
            credentialsId: 'k8s-creadentials'
          ]) {
            sh 'kubectl delete -f ./k8s && kubectl apply -f ./k8s'
          }
        }
      }
    }
  }
  ```
Jenkins does the following using this Jenkinsfile:
- Builds the container image.
- Pushes the container image to Docker Hub (the image registry).
- Redeploys the YAML resources to the K8s cluster.
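For the deployed application pod to log in to Vault with the bound identity, its manifest in `./k8s` must run under the `vault` ServiceAccount. A hedged sketch of the relevant part of such a manifest (the resource name and labels are assumptions, not the repo's actual files):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      serviceAccountName: vault  # must match bound_service_account_names in the Vault role
      containers:
      - name: frontend
        image: mdsahiloss/vault-simple-user-profile-page:latest
```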
This is how I built this project 😉.