heartshare / BaseResource.php
Created June 25, 2024 21:17 — forked from nathandaly/BaseResource.php
Tenancy for Laravel & Filament V3 (Tenant per database)
<?php
/**
 * Below is the extended Filament resource your tenant panel resources
 * will have to extend so that the queries are scoped properly.
 */
namespace App\Filament;

use Filament\Resources\Resource;

class BaseResource extends Resource
{
    // Tenant-scoping overrides follow in the full gist (preview truncated here).
}
heartshare / proxmox-ceph.md
Created May 31, 2024 02:33 — forked from scyto/proxmox-ceph.md
Setting up the Ceph cluster

CEPH HA Setup

Note: this should only be done once you are sure you have a reliable TB mesh network.

This is because the Proxmox UI seems fragile with respect to changing the underlying network after Ceph has been configured.

All installation is done via the command line, because the GUI does not understand the mesh network.

This setup doesn't attempt to separate the Ceph public network and the Ceph cluster network (not the same as the Proxmox cluster network); the goal is an easy working setup.
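
As a rough sketch of the command-line steps (the 10.0.0.0/24 subnet below is an assumption; substitute your own TB mesh subnet):

# on every node: install the Ceph packages
pveceph install

# on the first node only: initialise Ceph and bind it to the mesh network
pveceph init --network 10.0.0.0/24

# on every node: create a monitor and a manager
pveceph mon create
pveceph mgr create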

This gist is part of this series.

Docker Swarm in LXC Containers

Part of collection: Hyper-converged Homelab with Proxmox

After struggling for some days, and since I really needed this to work (ignoring the "it can't be done" vibe everywhere), I managed to get Docker to work reliably in privileged Debian 12 LXC containers on Proxmox 8.

(Unfortunately, I couldn't get anything to work in unprivileged LXC Containers)

There are NO modifications required on the Proxmox host or the /etc/pve/lxc/xxx.conf file; everything is done on the Docker Swarm host. So the only obvious candidates that could break this setup are future Docker Engine updates!
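
A minimal sketch of that host-side setup, run inside each privileged Debian 12 container (the IP address is a placeholder):

# install Docker Engine via the convenience script
curl -fsSL https://get.docker.com | sh

# on the first container: initialise the swarm (placeholder mesh IP)
docker swarm init --advertise-addr 10.0.0.81

# on the remaining containers: join with the token printed above
docker swarm join --token <worker-token> 10.0.0.81:2377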

Docker Swarm in VMs with CephFS

Part of collection: Hyper-converged Homelab with Proxmox

One of the objectives of building my Proxmox HA Cluster was to store persistent Docker volume data inside CephFS folders.

There are many different options to achieve this: Docker Swarm in LXC using bind mounts, or third-party Docker volume plugins that are hard to use and often outdated.

Another option for Docker volumes was running GlusterFS, storing the disks on local NVMe storage and not using CephFS. Although appealing, it adds complexity and unnecessary resource consumption while I already have a highly available file system (CephFS) running!
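
A minimal sketch of the bind-mount approach, assuming CephFS is already mounted at /mnt/pve/cephfs on every swarm node (path and service name are placeholders):

# create a per-service folder on CephFS
mkdir -p /mnt/pve/cephfs/docker/volumes/myapp

# bind-mount it into the service so every node sees the same data
docker service create \
  --name myapp \
  --mount type=bind,source=/mnt/pve/cephfs/docker/volumes/myapp,target=/data \
  nginx:stable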

Mount Volumes into Proxmox VMs with Virtio-fs

Part of collection: Hyper-converged Homelab with Proxmox

Virtio-fs is a shared file system that lets virtual machines access a directory tree on the host. Unlike existing approaches, it is designed to offer local file system semantics and performance. The new virtiofsd-rs Rust daemon that Proxmox 8 uses is receiving the most attention for new feature development.

Performance is very good (in testing, almost the same as on the Proxmox host).

VM Migration is not possible yet, but it's being worked on!
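
A minimal sketch of the guest side, assuming the host directory was exported to the VM under the (placeholder) tag 'cephfs':

# mount the virtiofs share inside the VM
mkdir -p /mnt/cephfs
mount -t virtiofs cephfs /mnt/cephfs

# or make it persistent via /etc/fstab
echo 'cephfs /mnt/cephfs virtiofs defaults 0 0' >> /etc/fstab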

Create Erasure Coded CephFS Pools

Part of collection: Hyper-converged Homelab with Proxmox

How to create an Erasure Coded Pool in Ceph and use 'directory pinning' to connect it to the CephFS filesystem.

To use an Erasure Coded Pool with CephFS, a directory inside the CephFS filesystem needs to be connected to an Erasure Coded Pool; this is called 'directory pinning'.
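
A rough sketch of those steps (pool name, filesystem name, and path are placeholders):

# create the erasure coded pool and allow overwrites (required for CephFS)
ceph osd pool create cephfs_ec erasure
ceph osd pool set cephfs_ec allow_ec_overwrites true

# add it as an extra data pool to the existing filesystem
ceph fs add_data_pool cephfs cephfs_ec

# 'pin' a directory to the EC pool via its layout attribute
setfattr -n ceph.dir.layout.pool -v cephfs_ec /mnt/pve/cephfs/ec-data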


Proxmox High Available cluster with Ceph and Dynamic Routing - Managing and Troubleshooting

Part of collection: Hyper-converged Homelab with Proxmox

This is part 3, focusing on managing and troubleshooting Proxmox and Ceph.

See also Part 1 about setting up networking for a High Available cluster with Ceph, and Part 2 for how to set up the Proxmox and Ceph cluster itself.

WIP
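
While this part is still WIP, a sketch of typical first-line health checks on such a cluster:

# overall Ceph cluster state and detailed health messages
ceph -s
ceph health detail

# Proxmox cluster membership and quorum
pvecm status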

Build a Proxmox High Available cluster with Ceph

Part of collection: Hyper-converged Homelab with Proxmox

This is part 2, focusing on building the Proxmox Cluster and setting up Ceph.

See also Part 1 about setting up networking for a High Available cluster with Ceph, and Part 3 focusing on managing and troubleshooting Proxmox and Ceph.

If everything went well in part 1, setting up Proxmox and Ceph should be 'a walk in the park'!
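
As a minimal sketch (cluster name and IP are placeholders), the Proxmox side boils down to:

# on the first node: create the cluster
pvecm create homelab

# on each additional node: join it via the first node's address
pvecm add 10.0.0.81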