Nitish Tiwari (nitisht)

@nitisht
nitisht / rosbag-MinIO.py
Last active April 5, 2024 08:25
Use Spark to read / analyse / store Rosbag file formats for MinIO server
```python
from time import time
from pyspark import SparkContext, SparkConf
import pyrosbag
from functools import partial
import pandas as pd
import numpy as np
from PIL import Image
from io import BytesIO
import rosbag
import cv2
```
@nitisht
nitisht / Nginx_Minio_SSL_Term.md
Last active March 6, 2024 22:44
Self-signed certificate setup with Nginx proxying requests to Minio Server

Nginx SSL termination for a load-balanced Minio server setup

This document explains the steps required to set up an Nginx proxy and SSL termination for Minio servers running in the background.

Generate a self-signed certificate

Create a directory /etc/nginx/ssl/domain.abc, where domain.abc is the name of your website domain. Then run the commands below:

```shell
sudo openssl genrsa -out private.key 2048
sudo openssl req -new -x509 -days 3650 -key private.key -out public.crt -subj "/C=US/ST=state/L=location/O=organization/CN=domain.abc"
```
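
A minimal sketch of the Nginx server block that terminates SSL with this certificate and proxies to a Minio server is shown below; the upstream address localhost:9000 is an assumption, not taken from this gist preview:

```nginx
server {
    listen 443 ssl;
    server_name domain.abc;

    # Certificate and key generated in the previous step
    ssl_certificate     /etc/nginx/ssl/domain.abc/public.crt;
    ssl_certificate_key /etc/nginx/ssl/domain.abc/private.key;

    location / {
        # Preserve the original Host header so Minio signature checks work
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:9000;
    }
}
```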
@nitisht
nitisht / Minio_GCS_Gateway.md
Last active December 3, 2023 18:29
Run Minio GCS Gateway on Docker

Run Minio Gateway Binary

```shell
gcloud init
```

  • If you are re-authenticating, use this command instead:

```shell
gcloud beta auth application-default login
```
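
Once authenticated, the gateway itself can be launched. A minimal sketch, based on MinIO's published gateway usage rather than this gist preview; the project ID and credentials path are placeholders:

```shell
docker run -p 9000:9000 \
  -v /path/to/credentials.json:/credentials.json \
  -e GOOGLE_APPLICATION_CREDENTIALS=/credentials.json \
  minio/minio gateway gcs yourprojectid
```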
@nitisht
nitisht / healthcheck.md
Last active November 17, 2023 15:32
Minio Healthcheck endpoint

Minio Healthcheck

Liveness probe definition

Used to identify situations where the server is running but may not be behaving optimally, e.g. responding sluggishly or serving from a corrupt backend. Such situations can generally only be fixed by a restart.

Kubernetes kills and restarts the container when the liveness probe responds with a failure code.

Readiness probe definition

Used to identify situations where the server is not yet ready to accept requests. Such situations generally recover after waiting for some time.
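
As a sketch, the two probes can be wired to MinIO's health endpoints (/minio/health/live and /minio/health/ready); the timing values below are illustrative assumptions:

```yaml
livenessProbe:
  httpGet:
    path: /minio/health/live   # MinIO liveness endpoint
    port: 9000
  initialDelaySeconds: 10
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /minio/health/ready  # MinIO readiness endpoint
    port: 9000
  initialDelaySeconds: 10
  periodSeconds: 20
```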

@nitisht
nitisht / minio-docker-steps.md
Last active September 25, 2023 03:04
minio-docker-swarm
  • Pre-Conditions: https://docs.docker.com/engine/swarm/swarm-tutorial/#/three-networked-host-machines. For distributed Minio to run, you need 4 networked host machines.

  • Create a new swarm and set the manager. SSH to the host machine that you want to set as the manager and run: docker swarm init --advertise-addr <MANAGER-IP>

  • The current node should now be the manager. Verify with: docker node ls

  • Open a terminal and ssh into the machine where you want to run a worker node.

  • Run the join command printed when the manager was created. It will add the current machine (as a worker) to the swarm. Add all the workers similarly.

  • To check that all the machines were added as workers, SSH to the manager and run: docker node ls

Create an overlay network:
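
The gist preview truncates before the command; the usual form, with the network name as an assumption, would be:

```shell
# -d overlay creates a swarm-scoped network spanning all nodes
docker network create -d overlay minio_distributed
```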

Features of Minio Server

| Item | Specification |
| --- | --- |
| Custom access key | environment `MINIO_ACCESS_KEY` |
| Custom secret key | environment `MINIO_SECRET_KEY` |
| Turn off web browser | environment `MINIO_BROWSER=off` |
| Listening on bucket notifications | using an extended S3 API |
| Support for bucket notifications | postgres, amqp, nats, elasticsearch, redis, kafka (in-progress) |
| Shared Backend (FS) | In-progress |
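
The environment variables above are passed at startup; a minimal sketch with placeholder credentials (the values and the /data path are assumptions):

```shell
docker run -p 9000:9000 \
  -e MINIO_ACCESS_KEY=myaccesskey \
  -e MINIO_SECRET_KEY=mysecretkey \
  -e MINIO_BROWSER=off \
  minio/minio server /data
```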
@nitisht
nitisht / minio-distributed-statefulset.yaml
Last active November 23, 2021 14:44
Create a distributed Minio deployment based on StatefulSets
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  # Headless service: required for StatefulSet pod DNS records
  clusterIP: None
  ports:
  - port: 9000
```
@nitisht
nitisht / minio-standalone-deployment.yaml
Last active February 28, 2020 20:18
Create a standalone Minio deployment
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
```
@nitisht
nitisht / Cool mc scripts.md
Last active April 30, 2019 01:30
mc json and jq playground

Delete all the objects in a bucket

```shell
while read -r key; do
    mc rm "myminio/testb/${key}"
done < <(mc ls --json myminio/testb | jq --raw-output '.key')
```

Count objects in a bucket
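
The preview cuts off before the command; one way to count, sketched here against a simulated listing (the bucket name myminio/testb and jq availability are assumptions):

```shell
# Simulated output of `mc ls --json myminio/testb`: one JSON object per line,
# as mc emits it. A live run would pipe mc directly into jq instead.
mc_ls_output='{"key":"a.txt"}
{"key":"b.txt"}'

# -s (slurp) collects the JSON stream into a single array; length counts it.
count=$(printf '%s\n' "$mc_ls_output" | jq -s 'length')
echo "$count"   # prints 2
```

Against a live server this collapses to: mc ls --json myminio/testb | jq -s 'length'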

@nitisht
nitisht / federation.md
Last active January 16, 2018 18:27
MSF with Minio and Core DNS

CoreDNS vs kube-dns

  • On-the-fly DNSSEC signing of served data in CoreDNS.
  • kube-dns supports only etcd as the backend, CoreDNS on the other hand has several supported backends.
  • kube-dns records do not reflect the state of the cluster. Any query for w-x-y-z.namespace.pod.cluster.local will return an A record with w.x.y.z, even if that IP does not belong to the specified namespace or even to the cluster address space. CoreDNS integration offers the option pods verified, which verifies that the returned IP address w.x.y.z is in fact the IP of a pod in the specified namespace.
  • Plugin chaining and a pluggable architecture make CoreDNS better suited to adapt to various backends, as compared to kube-dns.
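
As a sketch, a Corefile enabling the kubernetes plugin's pods verified mode might look like this; the zone, cluster domain, and the other plugins listed are illustrative assumptions:

```
.:53 {
    errors
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        # Verify that pod A records resolve to real pod IPs in the namespace
        pods verified
    }
    forward . /etc/resolv.conf
    cache 30
}
```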

CoreDNS plugins