@gajjardarshithasmukhbhai
Last active December 19, 2023 20:37
In these notes I go through Docker, ECS, EKS, EFS, Load Balancing, EC2 and Kubernetes in detail.

Docker Commands Needed in Day-to-Day Life

  1. docker create -> create a container without starting it
  2. docker rename -> rename a container
  3. docker run -> create and start a container
  4. docker rm -> delete a container by container ID
  5. docker rmi <IMAGE_ID> -> delete a Docker image by image ID
  6. docker ps -> show all running containers
  7. docker logs -> fetch the logs of a container
  8. docker start -> start a container that was stopped by the user
  9. docker stop -> stop a running container
  10. docker push -> push an image to Docker Hub
  11. docker pull -> pull an image from Docker Hub
  12. docker images -> list all local images
  13. docker build . -> build your own image from the Dockerfile in the current directory
  14. docker run -p 3200:80 <image_id> -> run the container with a specific host port, e.g. 3200, mapped to container port 80
  15. docker run -p 3200:80 -d <image_id> -> run the container in detached mode, freeing up the terminal
  16. docker attach <container_id> -> after detaching, re-attach to the container to see its output again
  17. docker container prune -> remove all stopped containers at once
  18. docker image prune -a -> remove all images that don't have any container associated with them
  19. docker run -p 3200:80 -d --rm --name <container_name> <image_id> -> run the container detached on port 3200, give it a name, and remove it automatically when it stops
  20. docker start <container_name> -> start the container again by its name
  21. docker tag <old_name>:<old_tag> <new_name>:<new_tag> -> give an existing image a new name and tag
  22. docker login -> log in to Docker Hub
  23. docker run -v /home/darshitgajjar/DarshitGajjar_Docker_Practice/MERN_Docker_kubernetes/back_end:/app -p 3200:3500 --name goal_backend -v /app/node_modules react_back_end -> run the backend: the first -v bind-mounts the source code, --name sets the container name, -p publishes the port, and -v /app/node_modules is an anonymous volume that keeps node_modules from being overwritten by the bind mount
  24. docker run -v /home/darshitgajjar/DarshitGajjar_Docker_Practice/MERN_Docker_kubernetes/front_end/src:/app/src --rm --name goals-frontend -p 3000:3000 -it react_front_end -> run the front-end application; only src is bind-mounted, so the node_modules installed when the container was built stay untouched in the image/volume
  25. docker volume ls -> list the volumes
  26. docker volume rm <volume_name> -> remove a volume
  27. docker volume inspect <volume_name> -> inspect a volume's details
  28. docker volume prune -> remove all unused volumes
  29. docker network ls -> list the networks
  30. docker network create <network_name> -> create a network so multiple applications can be wrapped in the same network
  31. docker build -t <image_name> . -> build an image and give it a specific name
  32. sudo service docker start -> start Docker on Unix/Linux
  33. sudo service docker stop -> stop Docker on Linux
  34. docker container ls --all -> get info on all containers, stopped or not
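
A typical day-to-day cycle built from the commands above might look like the sketch below. Angle-bracket names are placeholders, and this assumes a running Docker daemon plus a Dockerfile in the current directory:

```shell
docker build -t <image_name> .                                      # 13/31: build an image from the Dockerfile
docker run -p 3200:80 -d --rm --name <container_name> <image_name>  # 19: run it detached on port 3200
docker logs <container_name>                                        # 7: check its output
docker stop <container_name>                                        # 9: stop it (--rm then removes it automatically)
docker container prune                                              # 17: clean up any other stopped containers
```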

DockerFile Commands

  • FROM sets the base image to build on, e.g. an OS or runtime layer.
  • RUN executes commands at build time, e.g. installing your application and the packages required for it.
  • COPY adds files from the Docker client’s current directory (the build context).
  • EXPOSE informs Docker that the container listens on the specified network port at runtime.
  • CMD specifies the default command to run when the container starts.

K8s Most Common and Daily-Used Commands

  • Format of the command: kubectl <command> <type> <name> <flags>
  • kubectl is the common and most-used command-line tool to manage a Kubernetes cluster
  • kubectl uses the Kubernetes API to view and manage the cluster, and is supported on various platforms

K8s Basic Commands That Are Helpful for a Full-Stack Dev

  1. kubectl cluster-info --> Display endpoint information regarding the services and master in the cluster
  2. kubectl version --> Show the Kubernetes version functioning on the client and server
  3. kubectl config view --> Get the configuration of the cluster
  4. kubectl api-resources --> Make a list of the available API resources
  5. kubectl api-versions --> Make a list of the available API versions
  6. kubectl get all --all-namespaces --> List everything

Node Operations:

  1. kubectl get node --> List one or more nodes
  2. kubectl delete node <node_name> --> Delete a node or multiple nodes
  3. kubectl top node --> Display Resource usage (CPU/Memory/Storage) for nodes
  4. kubectl describe nodes | grep Allocated -A 5 --> Resource allocation per node
  5. kubectl get pods -o wide | grep <node_name> --> Pods running on a node
  6. kubectl annotate node <node_name> --> Annotate a node
  7. kubectl cordon <node_name> --> Mark a node as unschedulable
  8. kubectl uncordon <node_name> --> Mark a node as schedulable
  9. kubectl drain <node_name> --> Drain a node in preparation for maintenance
  10. kubectl label node <node_name> <key>=<value> --> Add or update labels on one or more nodes
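
The cordon/drain/uncordon commands above are typically used together for node maintenance; a sketch (assumes kubectl is configured against a cluster, node name is a placeholder):

```shell
kubectl cordon <node_name>                     # stop new pods from being scheduled on this node
kubectl drain <node_name> --ignore-daemonsets  # evict the running pods before maintenance
# ... perform the maintenance on the node ...
kubectl uncordon <node_name>                   # allow scheduling on the node again
```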

Namespaces [Shortcode = ns]:

  1. kubectl create namespace <namespace_name> --> Create namespace
  2. kubectl describe namespace <namespace_name> --> Show the detailed state of one or more namespaces
  3. kubectl delete namespace <namespace_name> --> Delete a namespace
  4. kubectl edit namespace <namespace_name> --> Edit and modify the namespace’s definition
  5. kubectl top pod --namespace=<namespace_name> --> Display Resource (CPU/Memory) usage for the pods in a namespace (kubectl top works on nodes and pods, not on namespaces directly)

Deployments:

  1. kubectl get deployment --> List one or more deployments
  2. kubectl describe deployment <deployment_name> --> Show the in-depth state of one or more deployments
  3. kubectl edit deployment <deployment_name> --> Edit and revise the definition of one or more deployments on the server
  4. kubectl create deployment <deployment_name> --> Create a new deployment
  5. kubectl delete deployment <deployment_name> --> Delete deployments
  6. kubectl rollout status deployment <deployment_name> --> Check the rollout status of a deployment
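
The deployment commands above combine into a simple lifecycle; a sketch assuming a configured cluster (names and image are placeholders):

```shell
kubectl create deployment <deployment_name> --image=<image>  # create the deployment
kubectl scale deployment <deployment_name> --replicas=3      # scale it to 3 pods
kubectl rollout status deployment <deployment_name>          # watch the rollout finish
kubectl rollout undo deployment <deployment_name>            # roll back if the new version misbehaves
```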

Replication Controllers [Shortcode = rc]

  1. kubectl get rc --> Make a list of the replication controllers
  2. kubectl get rc --namespace=<namespace_name> --> Make a list of the replication controllers by namespace

ReplicaSets [Shortcode = rs]

  1. kubectl get replicasets --> List ReplicaSets
  2. kubectl describe replicasets <replicaset_name> --> Show the detailed state of one or more ReplicaSets
  3. kubectl scale --replicas=[x] replicaset <replicaset_name> --> Scale a ReplicaSet to x replicas [x is a number here]

Listing Resources

  1. kubectl get namespaces --> Create a plain-text list of all namespaces
  2. kubectl get pods --> Create a plain-text list of all pods
  3. kubectl get pods -o wide --> Create a comprehensive plain-text list of all pods
  4. kubectl get pods --field-selector=spec.nodeName=[server-name] --> Create a list of all pods running on a particular node server
  5. kubectl get replicationcontroller [replication-controller-name] --> In plain text, list a specific replication controller
  6. kubectl get replicationcontrollers,services --> Generate a plain-text list of all replication controllers and services

Printing Logs for a Pod

  1. kubectl logs <pod_name> --> Print the logs for a pod
  2. kubectl logs --since=1h <pod_name> --> Print the logs for a pod for the last hour
  3. kubectl logs --tail=20 <pod_name> --> Get the most recent 20 lines of logs
  4. kubectl logs -f <service_name> [-c <$container>] --> Get logs from a service, optionally choosing which container
  5. kubectl logs -f <pod_name> --> Follow new logs and print the logs for a pod
  6. kubectl logs -c <container_name> <pod_name> --> Print the logs for a container in a pod
  7. kubectl logs <pod_name> > pod.log --> Output the logs for a pod into a ‘pod.log’ file
  8. kubectl logs --previous <pod_name> --> View the logs for the previous (crashed) instance of a pod
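
These flags can be combined; for example, to follow only the recent logs of one container in a pod (sketch, assumes a configured cluster, names are placeholders):

```shell
kubectl logs -f --tail=20 --since=1h -c <container_name> <pod_name>
```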

Docker vs. VMs: the Differences, and Why Docker Is Often Far Better than a VM

  • Unlike VMs (virtual machines), which run a guest OS on top of a hypervisor, Docker containers run directly on the host server (for Linux) via the Docker engine, making them faster and more lightweight.
  • Docker containers can be integrated more easily than VMs.
  • With a fully virtualized system you get more isolation, but it requires more resources. With Docker you get less isolation, but as it requires fewer resources you can run thousands of containers on a host.
  • A VM can take a minute or more to start, while a Docker container usually starts in a fraction of a second.
  • Containers are easier to break out of than a virtual machine.
  • Unlike VMs, there is no need to preallocate RAM: only the amount of RAM that is actually required is used, so Docker containers utilize less RAM than VMs.


Docker Command Introduction

docker run --rm -p 3200:80 --name feedback-app -v feedback:/app/feedback -v "D:/ubuntu_Darshit/docker_practice/Docker_Kubernetes_practice/Docker_code_practice/data-volumes-02-added-dockerfile/data-volumes-02-added-dockerfile:/app" -v /app/node_modules feedback-node:volumes

  • --rm removes the container automatically as soon as it stops
  • -p publishes the port so the web app is reachable
  • --name gives the container a name
  • -v declares a volume
  • feedback:/app/feedback --> the part before :/app/feedback names the volume (a named volume)
  • to mount a specific host path into the container, put that path before ":/app" (the quoted Windows path above)
  • -v /app/node_modules adds an anonymous volume so node_modules is not overwritten by the bind mount
  • the host-path form is called a bind mount
  • if you want to use a NAMED volume instead, write it as <volume_name>:/app/node_modules
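
The three -v forms described above can be summarized in one sketch (paths and names here are only illustrative):

```shell
docker run -v feedback:/app/feedback ...   # named volume: "feedback" before the colon names it
docker run -v /host/path/to/code:/app ...  # bind mount: a host path before the colon
docker run -v /app/node_modules ...        # anonymous volume: no name, just a container path
```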

Dockerfile Intro and Database Connection

  • Front-End Dockerfile
FROM node:18 |-> start from the official node:18 base image, pulled from Docker Hub

WORKDIR /app  |-> the working directory inside the image; all following commands run here

COPY package*.json .   |-> copy package.json and package-lock.json into the /app folder first, so the install layer can be cached

RUN npm install  |-> install your node_modules as per package.json and the locked dependency versions in package-lock.json

COPY . .   |-> copy the remaining source files into the image

EXPOSE 3000   |-> document that the container listens on this port at runtime; you still bind and publish it with docker run, e.g. docker run --rm -p 3200:3000

CMD ["npm", "start"]      |-> the default command executed when the container starts
  • For reference, see the attached basic-commands screenshot from the Udemy course "Microservices with Node JS and React"

  • Back-End Dockerfile

FROM node

WORKDIR /app

COPY package*.json .

RUN npm install

COPY . .

CMD ["npm", "start"]

EXPOSE 3500

How to Connect the Front-End, Back-End and Database (MongoDB) Containers

  • Step 1: When we have multiple containers, it is very important that all of them are on the same network

  • Step 2: So we create one network and attach the database container, front-end container and back-end container to it

docker run --name mongodb --rm -d --network goals-net mongo
  • Step 3: We do the same for the Node.js backend
docker run -v /home/darshitgajjar/DarshitGajjar_Docker_Practice/MERN_Docker_kubernetes/back_end:/app -p 3200:3500 --name goal_backend -v /app/node_modules --network goals-net react_back_end
  • If the containers are not on the same network, you get an error along the lines of "could not find MongoDB host"
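
Once both containers share the user-defined network, the backend can reach MongoDB simply by using the container name as the hostname; a sketch (the database name is illustrative):

```shell
docker network create goals-net
docker run --name mongodb --rm -d --network goals-net mongo
# inside the backend code, the connection string can then use "mongodb" as the host:
#   mongodb://mongodb:27017/<database_name>
```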

Persisting MongoDB Data, and Security

  • If you remove the MongoDB container and start the container again, the old data is gone, which is a very serious issue
  • So we use a volume to persist our database data: the data lives in the volume, and it survives even if we remove the container
docker run --name mongodb -v data:/data/db --rm -d --network goals-net mongo
  • In the command above, -v data:/data/db attaches a named volume to persist our data

Data Security

  • We know that without a username and password our database is not secure, whether in the cloud or on a specific local server
  • To solve this we set the username and password on the Docker command side, and also add them on the Node server side
  • For MongoDB, we specify the username and password through the command below
docker run --name mongodb -v data:/data/db --rm -d --network goals-net -e MONGO_INITDB_ROOT_USERNAME=darshit -e MONGO_INITDB_ROOT_PASSWORD=gajjar mongo
  • We can also use a cloud-based database: copy the MongoDB connection link, paste it into the MongoDB connection code, and boom 🔥🔥, it works and connects to MongoDB in the cloud
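
With the root username and password set as above, the backend's connection string has to carry the same credentials; a sketch (the database name is illustrative, authSource=admin matches where MongoDB creates the root user):

```
mongodb://darshit:gajjar@mongodb:27017/<database_name>?authSource=admin
```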

Docker Compose 🔥🔥

  • Docker Compose is used to automate things like creating containers, creating volumes, binding volumes, and deploying the containers


  • Note:

    • With Docker Compose you don't need to mention a specific network like we do with plain docker; Compose puts all services in a shared network automatically
    • In Docker Compose, volumes are named under the top-level volumes: key (e.g. volumes: data:)
    • Docker Compose has two main command groups, building/pulling/starting and stopping/removing the containers:
      • docker-compose up -> builds, pulls and starts the containers
      • docker-compose down -> stops and removes the containers
  • Docker Compose script information:

    • docker compose up -d -> run the containers in detached mode
    • Example Docker Compose file
    version: "3.8"
    services:
      mongodb:
        image: "mongo"
        volumes:
          - data:/data/db
        # environment:
        #   MONGO_INITDB_ROOT_USERNAME: max
        #   MONGO_INITDB_ROOT_PASSWORD: secret
        #   - MONGO_INITDB_ROOT_USERNAME=max
        env_file:
          - ./backEnd.env
      backend:
        build: ./back_end
        # build:
        #   context: ./backend
        #   dockerfile: Dockerfile
        #   args:
        #     some-arg: 1
        ports:
          - "80:80"
        volumes:
          - logs:/app/logs
          - ./back_end:/app
          - /app/node_modules
        env_file:
          - ./backEnd.env
        depends_on:
          - mongodb
      frontend:
        build: ./front_end
        ports:
          - "3000:3000"
        volumes:
          - ./frontend/src:/app/src
        stdin_open: true
        tty: true
        depends_on:
          - backend

    volumes:
      data:
      logs:

```yml
version: "3.8" # the version is based on the Docker Engine release; all releases: https://docs.docker.com/compose/compose-file/compose-file-v3/
services:
  mongodb: # the service; Docker Compose creates the container from it
    image: "mongo" # the image the container is created from
    volumes: # volumes attached to this service
      - data:/data/db # access can also be restricted, e.g. data:/data/db:ro allows only read access
    # environment:
    #   MONGO_INITDB_ROOT_USERNAME: JOI
    #   MONGO_INITDB_ROOT_PASSWORD: BIDEN
    #   - MONGO_INITDB_ROOT_USERNAME=joi
    env_file:
      - ./backEnd.env
    # networks: # not needed; Docker Compose automatically puts all services in a shared network
    #   - goals-net
  # backend:
  # frontend:
volumes:
  data: # side note: if different services use the same volume name, the volume is shared between those services
```

Docker Deployment Tutorial (EC2, ECS and EKS)

In this section we will learn how to deploy our application to AWS ECS and ECR, how to set up Route 53, and also the Load Balancer, EC2 instances, and CI/CD to automate things in a much better way

  • We know containers are great because, as full-stack developers, we know migrations used to be a developer's nightmare; to make the developer's life easier we dockerize the app, so a migration doesn't take much time


NOTE:

1) Bind mounts shouldn't be used in production

2) Containerized apps might need a build step (e.g. React apps)

3) Multi-container projects might need to be split, or should be split, across multiple hosts

4) Security might be challenging when you handle multiple dockerized web apps

EC2 Instance and Information:


  • Amazon Elastic Compute Cloud, EC2 is a web service from Amazon that provides re-sizable compute services in the cloud.

  • They are re-sizable because you can quickly scale up or scale down the number of server instances you are using if your computing requirements change.

  • An instance is a virtual server for running applications on Amazon’s EC2. It can also be understood like a tiny part of a larger computer, a tiny part which has its own Hard drive, network connection, OS etc. But it is actually all virtual. You can have multiple “tiny” computers on a single physical machine, and all these tiny machines are called Instances.

  • AWS EC2 offers 5 types of instances:

    1) General instances: for applications that require a balance of performance and cost. Ex. t2, m4 and m3

    2) Compute instances: for applications that require a lot of processing from the CPU, e.g. analysis of a stream of data such as a Twitter stream. Ex. c4 and c3

    3) Memory instances: for RAM-heavy applications, e.g. when your system needs a lot of applications running in the background, i.e. multitasking. Ex. r3 and x1

    4) Storage instances: for applications that are huge in size or have a data set that occupies a lot of space. Ex. i2 and d2

    5) GPU instances: for applications that require heavy graphics rendering, e.g. 3D modeling. Ex. g2

Bind Mount in Production

  • Bind mount: we attach our local code to the container (e.g. for showing code changes in real time we mount a path like /home/darshitGajjar/mern_app/ as a volume, so when the user changes the code the server restarts immediately). This should not be used on the production server
  • On the production server we create the React production build and copy that build into the container


EC2 Instance Notes:
  • When you use an EC2 instance, always add a key pair and download the SSH key in a very secure way
  • Don't keep the key in a folder that gets pushed to GitHub
  • Periodically rotate the key pair
  • Enable MFA on the root account
  • Don't use the root account; instead make a role for yourself and limit the services it can access
  • Do not share access using the access key & secret key
  • Rotate credentials regularly
  • Use roles for applications that run on Amazon EC2 instances
  • Configure a strong password policy for your users

Steps to Install Docker on an EC2 Instance

  1. First launch the instance, create the key pair, and download the .pem file in a secure way

  2. Then go to Instances > Display All Instances > Click on Connect > SSH client, and go step by step:

    chmod 400 <your PEM file> (restrict the key's permissions; ssh refuses keys that are too open)

    Copy and paste the "ssh -i ..." command shown there

    Now your EC2 instance is remotely reachable from your laptop

    sudo yum update -y -> this command ensures all remote packages are properly updated and working fine

    We need Docker on the remote machine, so install it with: sudo amazon-linux-extras install docker

    sudo service docker start -> this command starts Docker. After that, check whether Docker is installed or not with the command below

    docker --help -> this command displays the various docker commands, so we're sure from our end that Docker works perfectly
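
The chmod 400 step can be tried locally before touching the real key; a minimal Linux demonstration (my-key.pem is a stand-in file, not a real key):

```shell
touch my-key.pem          # stand-in for the downloaded .pem file
chmod 400 my-key.pem      # owner read-only; ssh refuses keys with looser permissions
stat -c '%a' my-key.pem   # prints 400
```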

If Docker has stopped, do the following

  • Start the AWS instance by going to the AWS console and starting it
  • To re-connect to the AWS instance, run this command:
    • ssh -i file.pem username@ip-address [you will find the username by clicking the instance and opening EC2 Instance Connect]
    • If you want to log out, type exit
    • To run Docker again, use: sudo service docker start

Managing & Updating the Container Image/Container in EC2

  • When you want to update the code, remember one thing first:

  • We have not set up a CI/CD pipeline, so code changes pushed to Docker Hub will not automatically reflect on the AWS cloud

  • So to ship a code change we have to do the following:

  • Change the code locally, rebuild the image and container -> push the image to Docker Hub from the local CMD -> go to the EC2 instance CLI -> pull that image -> stop or remove the old container -> run a new container -> you will see the effect
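
The update flow above as a command sketch (image and container names are placeholders; assumes Docker Hub credentials and SSH access to the instance):

```shell
# on the local machine
docker build -t <dockerhub_user>/<image_name> .
docker push <dockerhub_user>/<image_name>

# on the EC2 instance (via ssh)
docker pull <dockerhub_user>/<image_name>
docker stop <container_name> && docker rm <container_name>
docker run -d --name <container_name> -p 80:<app_port> <dockerhub_user>/<image_name>
```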

Is ECS Better than EC2?

  • Yes, ECS is better than EC2 in terms of orchestration. With plain EC2 we buy the instances, install Docker ourselves, and run the containerized app through it.
  • With plain EC2, as developers we handle everything single-handedly: the security, the port exposure, and the timely OS updates. That is a lot of headache and extra mental burden. If we use ECS instead, all the security and the other stuff is handled by the AWS team itself, so we neutralize our server headache and focus more on the business solution.
  • But EC2 gives you the entire kitchen: if you want to make a custom application that runs on various tools and technologies, install advanced orchestration, or run a complex app, EC2 is a good option for that.

Difference Between ECS and Fargate

  • Fargate is a serverless technology for running containers
  • In Fargate, we have multiple tasks, grouped into a cluster
  • With ECS on EC2 we need to manage the various container instances, their ports and everything else
  • Fargate does a great job of handling that stuff for us
  • When you run your Amazon ECS tasks and services with the Fargate launch type or a Fargate capacity provider, you package your application in containers, specify the Operating System, CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.

VPC and Subnets Basics

  • A VPC gives an extra layer of security to AWS instances

  • VPC means Virtual Private Cloud

  • In a VPC we have two kinds of subnets: (1) private subnets and (2) public subnets

  • (1) Private subnets:

    • The database typically lives in a private subnet
    • Private subnets and public subnets sit under the one umbrella of the VPC
    • A private subnet has no relation with the outer network
    • The main issue with private subnets is exactly that: they have no relation with the outer internet, so it is very difficult to update the database drivers and their latest patches. For that we use a NAT gateway, which connects the subnet to the outer world. Ex. Private subnet -> NAT gateway -> outer network -> internet access through the internet gateway
    • Benefits:
      • If we want to make a highly secure cloud-native application, our IaC (Infrastructure as Code) runs the web server, and that web server has access to the database (set up in the private subnet). In that scenario an attacker cannot access the DB directly.
  • (2) Public subnets:

    • Public subnets are the subnets that run the web servers, like our Node.js or Tomcat server. These servers are accessible from the outside world, so our app works seamlessly.

ECS Is Better than EC2 for Focusing on Development, Not Handling the Infra

  • In ECS, to update your code, just follow these easy steps:
    • Change the code on your local machine > push the image to Docker Hub > go to your ECS service and update it. The code will update, but note that the IP will be changed 😃😎

Deploying the Image to ECR Instead of Docker Hub

  • Docker Hub has a drawback: say you deploy an image to Docker Hub, then it is publicly accessible

  • Someone who wants to keep it secret should not use Docker Hub; they must go to ECR (Elastic Container Registry) to safely hide their hard, sweat-and-blood code base

  • Follow these steps for going from ECR -----> ECS

Deploying Multiple Containers in ECS Fargate with a Load Balancer

  • To deploy multiple containers in ECS, we first need to understand the entire flow

  • Technical jargon and its applications:
  1. Task => basically the unit that defines the ports and names through which we handle the multiple containers in it. The advantage of a task is that under one umbrella we can use various services such as the DB, front end and back end.

  2. ECR => basically used to store multiple image repositories; afterwards we map those into ECS.

  3. Cluster => the cluster sits on top of the services and is used to handle multiple EC2 instances. When ECS runs on EC2 instances, each instance uses an ECS agent to communicate with ECS about its tasks.

Multi-million-dollar question: if we build our infra as per the image above and in future our traffic keeps increasing, how do we handle that? -> That is when we use a Load Balancer in front of the cluster to handle that challenge

-> When you create the cluster, it is important to choose the VPC for the cluster

  • What is a cluster? -> An ECS cluster is basically a logical grouping of tasks and AWS services

  • What is the difference between AWS Fargate and an on-premises server? -> Whatever your idea and whatever solution you build for it (either a web app or an Android app), if you don't want to handle your server or its security, AWS Fargate (serverless containerization) helps the business a lot with a secure environment. An on-prem server, on the other hand, is managed somewhere else and handled by the company itself; in my opinion going on-prem is a much more time- and money-consuming process

  • The main advantage of ECS: if you enable auto-scaling, it checks every minute whether a task needs more instances (scale out), then assigns and manages the tasks accordingly. That is the insight of ECS with a Load Balancer.

Problems Faced in a Multi-Container Application
  • Say we are working on a multi-container-based application locally. The main advantage of that setup is that it works on a specific port (3200), so the whole multi-container app (React, Node and database) works seamlessly on that port.
  • But that is not what happens in ECS. In that case we use **dynamic PORT mapping**, and to ensure that clients are not affected every time the port changes, we use a Load Balancer to do this job and also to control our traffic.
  • From the Load Balancer attached to the ECS service we get a domain name that does not change.
  • Note: Docker Compose is NOT a great tool for deploying multiple containers to the cloud, because you need to change the configuration per cloud provider.
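
With the EC2 launch type, dynamic port mapping is requested by setting hostPort to 0 (or omitting it) in the task definition, letting ECS pick a free host port that the Load Balancer's target group then tracks; a hedged fragment (names and ports are illustrative):

```json
"containerDefinitions": [
  {
    "name": "goals-backend",
    "portMappings": [
      { "containerPort": 3500, "hostPort": 0, "protocol": "tcp" }
    ]
  }
]
```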

Getting Started With Kubernetes

Module Introduction
  • Kubernetes is not a tool but more like a framework
More Problems with Manual Deployment
  • Why we need K8s to handle multiple Docker containers:

    • When we use containers, they might crash or go down and need to be replaced with new containers
    • This is something which can happen: something might go wrong inside your containerized application
    • Something might fail there, and therefore the entire container crashes and becomes unusable
    • If something like this happens, you want to replace it with a new container running your application again, because otherwise your application might not be reachable anymore
    • But we are not machines; we cannot monitor continuously. We need a system that handles that stuff, and that is where K8s comes into the picture
  • Say we have a gigantic web app. To manage it we cannot always sit in front of CloudWatch logs or health checks. K8s manages that stuff: it will increase the number of Docker containers if traffic spikes and decrease them if traffic drops, and if a container crashes, K8s replaces it with a new one

  • So for the containers we need a Load Balancer kind of thing that manages them: when traffic spikes, more and more containers are brought up, and for one task we can use multiple containers

  • Conclusion: we learn how to bring up multiple containers for one image. Previously we created one container per image

Why Kubernetes?
  • Actually AWS ECS does the same job: if we configure a Docker container and the container stops, ECS automatically brings the container up / starts it again

  • But the problem with ECS is that the configuration, say in a .yml file, is not portable to another cloud provider like Microsoft Azure or GCP. So we are locked in

  • Also, using a Load Balancer we can ensure that all incoming traffic is distributed equally across the EC2 instances; we can follow this kind of practice through the Load Balancer or ECS

  • But because of that lock-in to one service, we need to learn K8s

What Is K8s Exactly?
  • Kubernetes is a system that manages container deployment and orchestrates containers

  • Kubernetes helps with managing containers: if they fail they are replaced with new ones, and it monitors, scales and load-balances the containers

  • Kubernetes is an open-source project specially designed to handle multi-container systems

  • Kubernetes itself is not a paid service, but the infrastructure services it depends on are paid

  • Indeed, you can think of Kubernetes as Docker Compose for multiple machines, because at its core that is basically what it is about

  • Docker Compose is a tool we learned about which helps us manage multi-container projects easily on our local machine

  • We could even use it on single-container projects, just to avoid writing those long docker run commands. And Kubernetes does the same for multi-machine setups

  • Because when you deploy your application, you do that by running the application across multiple computers, multiple machines, not just one machine

  • And Kubernetes makes deploying your containers, and monitoring and restarting them automatically across multiple machines, very easy
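
The self-healing and scaling behavior described above is driven by declarative manifests; a minimal sketch of a Kubernetes Deployment (names are illustrative, the image name is taken from the earlier docker commands), applied with kubectl apply -f deployment.yml:

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goals-backend
spec:
  replicas: 3              # Kubernetes keeps 3 pods running, replacing any that crash
  selector:
    matchLabels:
      app: goals-backend
  template:
    metadata:
      labels:
        app: goals-backend
    spec:
      containers:
        - name: backend
          image: react_back_end
          ports:
            - containerPort: 3500
```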

Kubernetes: Architecture Core Concepts
  • In the managed-Kubernetes world, the master node is handled by EKS (Elastic Kubernetes Service) and the user can handle the worker nodes
  • Why we need worker nodes is the million-dollar ($) question: we use worker nodes to shrink and expand the EC2 instances' computation power
  • Say traffic to our web application spikes suddenly; as developers we have the responsibility to take care of the traffic and deliver the service seamlessly. That is taken care of by the worker nodes: when traffic spikes, the associated EC2 instances also increase, and we can likewise minimize the EC2 instances

Kubernetes will not manage the infrastructure


A Closer Look at the Worker Node