MinIO with SSL: a lightweight info dump (no docker / k8s needed, just a VM) w/ Rancher & Harvester
This is kind of a pain; the SSL part was hard to pin down and understand (it took forever to troubleshoot).
The standalone binaries of both MinIO & certgen are probably the easiest to work with (x86-64, from MinIO):
- certgen: https://github.com/minio/certgen
The loadout will have:
- a single drive leveraged as a folder for storage, since that's commonplace
- SSL out of the box, since that's absolutely needed when building custom RKE2 clusters that have backups created... plainly, we can't have MinIO without SSL if we're planning to use it for RKE2 backups
Have both binaries in /usr/local/bin *usually running a deb-based VM for building the integration point*.
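A minimal sketch of pulling the binaries into place (the MinIO download URL is the standard linux-amd64 one; the certgen asset name is an assumption, check the releases page for the current filename):
- wget https://dl.min.io/server/minio/release/linux-amd64/minio # official MinIO server binary
- sudo install -m 0755 minio /usr/local/bin/minio
- # grab the linux-amd64 asset from https://github.com/minio/certgen/releases, then:
- sudo install -m 0755 ./certgen-linux-amd64 /usr/local/bin/certgen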
Build a service user:
- sudo groupadd -r minio-user
- sudo useradd -m -d /home/minio-user -r -g minio-user minio-user
Switch to minio service user to build out directories with proper permissions:
- sudo su minio-user
- mkdir -p /home/minio-user/.minio/certs
- exit # switch back to the regular user, e.g. ubuntu
Switch back and build some certs using the certgen binary (you might rename it from the long amd64 name to just certgen):
- certgen -host "YOUR.IPV4.ADDRESS.FOR.THE.VM"
- sudo cp -v private.key /home/minio-user/.minio/certs
- sudo cp -v public.crt /home/minio-user/.minio/certs
- sudo chown minio-user /home/minio-user/.minio/certs/public.crt
- sudo chown minio-user /home/minio-user/.minio/certs/private.key
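To sanity-check the generated cert (optional, assumes openssl is installed) - the VM's IP should appear in the SAN:
- sudo openssl x509 -in /home/minio-user/.minio/certs/public.crt -noout -text | grep -A1 'Subject Alternative Name'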
Make sure the secondary disk is set up, adjust as desired:
- sudo gdisk /dev/vdb
- sudo mkfs.ext4 /dev/vdb1
- sudo mkdir -p /disks/vdb
- sudo mount /dev/vdb1 /disks/vdb
- sudo mkdir -p /disks/vdb/minio-data
- sudo chown -Rv minio-user:minio-user /disks/vdb/minio-data
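For reference, the gdisk step above is interactive; a non-interactive equivalent with sgdisk (assuming a single Linux partition spanning the whole disk is what you want):
- sudo sgdisk -n 1:0:0 -t 1:8300 /dev/vdb # partition 1, whole disk, type 8300 (Linux filesystem)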
Get the block ID (UUID) of the disk and add it to /etc/fstab so the mount persists / automounts across VM restarts:
Ex:
ubuntu@minio-vm:~$ sudo blkid
/dev/vdb1: UUID="07217878-eaee-4862-b8db-e31fd69b4455" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="fa14414e-a931-4e3e-95f8-197fc35ad231"
/dev/vdc: BLOCK_SIZE="2048" UUID="2023-07-18-20-38-08-00" LABEL="cidata" TYPE="iso9660"
/dev/vda15: LABEL_FATBOOT="UEFI" LABEL="UEFI" UUID="BE0B-96E8" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="d60bb161-6308-4521-9894-5990e2284dc2"
/dev/vda1: LABEL="cloudimg-rootfs" UUID="d9b62ce7-e586-4eed-b7b3-9fea96899fc6" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="2d164380-591e-4a91-95bd-30d3a463102a"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop0: TYPE="squashfs"
/dev/vda14: PARTUUID="5f559e9d-56d5-4325-a52e-58773733b29d"
/dev/loop3: TYPE="squashfs"
ubuntu@minio-vm:~$ sudo cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 discard,errors=remount-ro 0 1
LABEL=UEFI /boot/efi vfat umask=0077 0 1
UUID=07217878-eaee-4862-b8db-e31fd69b4455 /disks/vdb ext4 defaults 0 2
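To make sure the fstab entry is sane without rebooting:
- sudo umount /disks/vdb # drop the manual mount first
- sudo mount -a # remount from fstab; an error here means a bad entry
- findmnt /disks/vdb # confirm it came back where expected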
Build the systemd service file:
ubuntu@minio-vm:~$ sudo cat /etc/systemd/system/minio.service
[Unit]
Description=MinIO
Documentation=https://docs.min.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio
[Service]
WorkingDirectory=/usr/local
User=minio-user
Group=minio-user
ProtectProc=invisible
EnvironmentFile=-/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
# Let systemd restart this service always
Restart=always
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=1048576
# Specifies the maximum number of threads this process can create
TasksMax=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
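Before moving on, the unit file can be linted (systemd-analyze ships with systemd):
- sudo systemd-analyze verify /etc/systemd/system/minio.service # catches typos in directives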
Build the environment variables file:
ubuntu@minio-vm:~$ sudo cat /etc/default/minio
# Set the hosts and volumes MinIO uses at startup
# The command uses MinIO expansion notation {x...y} to denote a
# sequential series.
#
# The following example covers four MinIO hosts
# with 4 drives each at the specified hostname and drive locations.
# The command includes the port that each MinIO server listens on
# (default 9000)
# NOTE!
# MINIO_VOLUMES needs to reference the data directory on the disk that was partitioned & mounted above
MINIO_VOLUMES="/disks/vdb/minio-data"
# Set all MinIO server options
#
# The following explicitly sets the MinIO Console listen address to
# port 9001 on all network interfaces. The default behavior is dynamic
# port selection.
MINIO_OPTS="--certs-dir /home/minio-user/.minio/certs --console-address :9001"
# Set the root username. This user has unrestricted permissions to
# perform S3 and administrative API operations on any resource in the
# deployment.
#
# Defer to your organization's requirements for the superadmin user name.
MINIO_ROOT_USER=minioadmin
# Set the root password
#
# Use a long, random, unique string that meets your organization's
# requirements for passwords.
MINIO_ROOT_PASSWORD=minioadmin
# Set to the URL of the load balancer for the MinIO deployment
# This value *must* match across all MinIO servers. If you do
# not have a load balancer, set this value to any *one* of the
# MinIO hosts in the deployment as a temporary measure.
MINIO_SERVER_URL="https://YOUR.IPV4.VM.ADDRESS:9000"
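Since this file holds the root credentials, it's worth locking down (and obviously swap minioadmin/minioadmin for real values):
- sudo chown root:minio-user /etc/default/minio
- sudo chmod 640 /etc/default/minio # readable by the service user, not by everyone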
Enable & Start the service:
- sudo systemctl daemon-reload
- sudo systemctl enable minio
- sudo systemctl start minio
Then you should be able to tail the logs:
- sudo journalctl -u minio.service --follow
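And to confirm TLS is actually being served (the -k flag skips verification since the cert is self-signed; swap in your VM's IP):
- curl -ki https://YOUR.IPV4.VM.ADDRESS:9000/minio/health/live # an HTTP 200 means the server is up and serving HTTPS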
Then SSL should be present in the loadout.
The SSL piece was tricky to figure out: a service user with its own home directory simplifies things with systemd, and the docs on MinIO's site don't mention a home dir for the service user.
This also avoids:
- juggling docker & docker-compose
- having the VM run some sort of cluster & a cluster install
Then, when creating a custom RKE2 cluster in Rancher on Harvester with backups enabled, you'll just need to check the box:
- allow insecure certificates
And it should work great since the cert is self-signed - but backups with RKE2 S3 MUST have a cert.
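For what it's worth, the backup bucket can be created up front with the MinIO client (mc) - the alias and bucket names below are just placeholders:
- mc alias set miniovm https://YOUR.IPV4.VM.ADDRESS:9000 minioadmin minioadmin --insecure # --insecure because the cert is self-signed
- mc mb miniovm/rke2-backups --insecure # the bucket the RKE2 S3 backup config will point at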
TODO: bundle this up as a cloud-init script addition to the terraform harvester-vm provisioning scripts
