A list of DevOps projects

Gist guide

This gist lists the following DevOps projects that I've built so far:

Built more than 10 Addons for Kubernetes

I built these addons for KubeVela, a CNCF project, to simplify the installation of resources (such as Kubernetes operators) into a cluster using KubeVela's CLI tool, vela.
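
Each of these addons is packaged as a directory of CUE templates and Kubernetes resources together with a metadata.yaml that vela reads. As a rough illustration only (the addon name and field values below are hypothetical, not one of my actual addons), a minimal metadata.yaml looks like this:

    # metadata.yaml of a hypothetical addon that installs an operator
    name: example-operator        # enabled with: vela addon enable example-operator
    version: 1.0.0
    description: Installs the example Kubernetes operator into the cluster
    tags:
      - operator
    dependencies:                 # other addons this one needs, if any
      - name: fluxcd

With an addon registered like this, a user installs everything it bundles with a single vela addon enable command instead of applying the operator's manifests by hand.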

Used Technologies:

  • Kubernetes
  • Containers (Docker Containers)
  • Bash Scripting
  • CueLang
  • Golang

The addons I built were:

For more detail about each addon, please visit the addon-specific gist.

Implemented a DevOps lifecycle with AWS (Amazon Web Services)

AWS Developer Tools let you securely store your application's source code and then build, test, and deploy it to AWS or an on-premises environment. For this project, I built a continuous integration and delivery (CI/CD) pipeline using AWS CodeBuild, AWS CodeDeploy, and AWS EKS (Elastic Kubernetes Service) to build and deploy an application.


Used Technologies:

  • AWS
  • Kubernetes
  • Docker
  • ArgoCD
  • Bash

How did I build this project?

In short, I built the CI/CD pipeline with the following steps:

  • Firstly, I set up CodeBuild, ECR, and EKS in my AWS environment to implement a CI/CD pipeline that works with a public GitHub repository, personal web, from which the source code is fetched for the later steps.
  • To build a container image of my personal web continuously, I used AWS CodeBuild, which works as follows (see the buildspec sketch after this list):
    • On every push or merge it builds the latest container image of the source code, tagged with the version mentioned in the deployment YAML.
    • It then pushes the built container image to AWS ECR (Elastic Container Registry).
  • EKS (Elastic Kubernetes Service) running ArgoCD on AWS continuously checks the source code's deployment YAML for updates (see the Application sketch after this list).
  • On a change in the Deployment YAML, ArgoCD prunes the old Deployment of the web and creates a new one, fetching the latest image from AWS ECR with the version tag mentioned in the Deployment YAML.
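
The CodeBuild step is driven by a buildspec file kept in the repository. The exact file isn't shown in this gist; the sketch below is only an illustration, with placeholder account ID, region, image name, and tag:

    version: 0.2

    env:
      variables:
        IMAGE_TAG: v1.0.0   # placeholder; the version tag referenced in the Deployment YAML

    phases:
      pre_build:
        commands:
          # Log in to the ECR registry (placeholder account and region)
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
      build:
        commands:
          # Build the application image and tag it for the ECR repository
          - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/personal-web:$IMAGE_TAG .
      post_build:
        commands:
          # Push the freshly built image so the cluster can pull it during deployment
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/personal-web:$IMAGE_TAG

On the delivery side, ArgoCD watches the repository that holds the Deployment YAML. Again only as a rough sketch (repository URL, path, and namespaces below are placeholders), an ArgoCD Application for this looks roughly like:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: personal-web
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/<user>/personal-web.git   # public repo holding the manifests
        targetRevision: HEAD
        path: manifests
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        automated:
          prune: true   # remove the old Deployment when the manifest changes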

That's how this DevOps project was built ;)

Scalable Deployment in Kubernetes and more

In this project I deployed my personal web application on AWS EKS with automatic horizontal scaling: the number of application pods on a node is scaled up and down by an autoscaler based on the resource load on each pod.
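
Horizontal scaling of this kind is typically configured with a HorizontalPodAutoscaler and requires the metrics server to be running in the cluster. The manifest below is only a sketch with placeholder names and thresholds, not my exact configuration:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: personal-web-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: personal-web             # the Deployment running the application pods
      minReplicas: 1
      maxReplicas: 5
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # scale out once average CPU load crosses 70%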

Used Technologies:

  • AWS
  • Kubernetes
  • Docker
  • ArgoCD
  • Bash

For a full description of this project, please visit the project-specific gist here.

Monitoring an application with Prometheus

I configured a web application with Prometheus to monitor the following metrics:

  • version
  • http_requests_total
  • http_request_duration_seconds
  • http_request_duration_seconds_count
  • http_request_duration_seconds_sum

Used Technologies:

  • AWS
  • Kubernetes
  • Docker
  • Prometheus & Grafana (Monitoring)

Procedure to build this project:

  • First of all, the web application was instrumented with the Prometheus client library to expose all of its metrics data at the /metrics endpoint, so that a configured Prometheus can scrape the metrics from that endpoint at a regular interval.

  • Then a container image of the application was built, which can be found here.

  • Then I deployed a Deployment resource in my running Kubernetes cluster on AWS EKS:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app.kubernetes.io/name: prometheus-example-app
      name: prometheus-example-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: prometheus-example-app
      template:
        metadata:
          labels:
            app.kubernetes.io/name: prometheus-example-app
        spec:
          containers:
          - name: prometheus-example-app
            image: mdsahiloss/prometheus-example-app:latest
            ports:
            - name: web
              containerPort: 8080
    
  • And then I created a ClusterIP Service inside K8s, so that other resources in K8s can communicate with the application pods (a sketch of such a Service is shown below).
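
The Service itself isn't shown in this gist; a sketch with names and labels chosen to line up with the Deployment above and the ServiceMonitor below would look roughly like:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: prometheus-example-app-svc   # matched by the ServiceMonitor's selector
      name: prometheus-example-app-svc
    spec:
      type: ClusterIP
      selector:
        app.kubernetes.io/name: prometheus-example-app       # pods created by the Deployment above
      ports:
        - name: http          # port name referenced by the ServiceMonitor's endpoints
          port: 8080
          targetPort: web     # the container port named "web" in the Deployment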

  • And then I installed the Prometheus Operator in my running Kubernetes cluster on AWS EKS, by following the instructions available here.

  • And then I created a ServiceMonitor resource (the CRD for this resource is provided by the installed Prometheus Operator) inside the K8s cluster in the default namespace:

     apiVersion: monitoring.coreos.com/v1
     kind: ServiceMonitor
     metadata:
       labels:
         app.kubernetes.io/name: prometheus-example-app-svc-monitor
       name: prometheus-example-app-svc-monitor
     spec:
       selector:
         matchLabels:
           app.kubernetes.io/name: prometheus-example-app-svc
       endpoints:
         - port: http
       namespaceSelector:
         any: true
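
Whether metrics actually show up also depends on a Prometheus instance that selects this ServiceMonitor. The operator installation I followed may already provide one; if not, a minimal Prometheus custom resource (the service account name and resource request below are placeholders) would look roughly like:

    apiVersion: monitoring.coreos.com/v1
    kind: Prometheus
    metadata:
      name: prometheus
    spec:
      serviceAccountName: prometheus      # needs RBAC that allows reading services, endpoints, and pods
      serviceMonitorSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus-example-app-svc-monitor   # picks up the ServiceMonitor above
      resources:
        requests:
          memory: 400Mi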
    

That's it. After this, I was able to see all of the metrics generated by my application in the Prometheus web UI running in my K8s cluster.
