Steffen Barnau steffenba

@steffenba
steffenba / qnap_nfs4_opnsense.md
Created January 19, 2025 12:28
QNAP NFSv4+ through opnsense not working

Preface

I wanted to use my QNAP NAS as an NFSv4 Kubernetes nfs-csi target.

The cluster is running on RHEL 9. Mounting from either the cluster hosts or other RHEL 9 hosts didn't work.

The error was:

mount.nfs: mounting nas.example.com:/kubernetescsinfsslow failed, reason given by server: No such file or directory
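
For reference, a mount command along these lines is what produced the error on a RHEL 9 client (the hostname and export path are the ones from the error above; the local mount point is just an example):

root@rhel9:~# mount -t nfs4 nas.example.com:/kubernetescsinfsslow /mnt/test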

@steffenba
steffenba / aap2_proxmox_inventory.md
Last active January 16, 2025 17:53
AAP2 Proxmox Dynamic Inventory Plugin integration

Preface

Integration of the community.general.proxmox dynamic inventory source into AAP 2.5+.

Proxmox side

  • Add a PVE/LDAP/etc. user.
  • Add a permission for your tree (for the whole cluster, add it to "/").
  • For the inventory sync, the "PVEAuditor" role is sufficient (read-only).
  • If you want to use this user for automation tasks (creating, editing, etc. of VMs), you need higher permissions. You can separate those from your API token by using "Privilege Separation" on the token.
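
On the AAP side, the dynamic source ultimately consumes a small inventory file for the plugin. A minimal sketch of what that proxmox.yml could look like (URL, user, and token names are placeholders, and in AAP the token secret would normally be injected via a credential rather than stored in the file):

plugin: community.general.proxmox
url: https://pve.example.com:8006
user: inventory@pve
token_id: aap-inventory
token_secret: REPLACE_WITH_TOKEN_SECRET
validate_certs: true
want_facts: true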

[guide] Keycloak authentication for Proxmox

How to set up Proxmox to use Keycloak as an authentication realm.

Proxmox Setup

root@proxmox:/etc/pve# cat domains.cfg
pam: pam
        comment Linux PAM standard authentication
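
Once the Keycloak client exists, the OpenID realm ends up as an additional section in the same domains.cfg. A rough sketch with placeholder realm name, issuer URL, and client details (the exact values depend on your Keycloak realm and client):

openid: keycloak
        issuer-url https://keycloak.example.com/realms/example
        client-id proxmox
        client-key REPLACE_WITH_CLIENT_SECRET
        username-claim username
        autocreate 1
        comment Keycloak SSO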
@steffenba
steffenba / aap2_5_keycloak_auth.md
Last active January 11, 2025 14:01
Ansible Automation Platform 2.5 Keycloak Authentication

Preface

Authenticating to the AAP 2.5 web GUI using an OIDC-based SSO like Keycloak is a requirement for many organizations.

This is why I wanted to try it out. Sadly, the Red Hat documentation is very thin on details about how to configure Keycloak itself for a seamless integration.

Through some trial and error, I have found a solution that works. If there are any issues with this, please let me know!

Keycloak side

I assume you already have a functioning realm.
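
As a sketch of the client setup (realm name, client ID, and redirect URI below are placeholders, not necessarily what this gist uses), an OIDC client for AAP can be created with Keycloak's kcadm.sh after logging in with kcadm.sh config credentials:

kcadm.sh create clients -r example-realm \
  -s clientId=aap \
  -s protocol=openid-connect \
  -s publicClient=false \
  -s standardFlowEnabled=true \
  -s 'redirectUris=["https://aap.example.com/*"]'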

@steffenba
steffenba / cb_ad_config.md
Last active February 13, 2025 10:19
CloudBeaver Active Directory Configuration

Using Active Directory to authenticate to CloudBeaver (CE)

This applies to the Dockerized version 24.3.2.

CloudBeaver does not document very well how to configure authentication against Active Directory.

To do that, you have to edit the workspace/.data/.cloudbeaver.runtime.conf
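
If you run the official Docker image, the workspace is usually mounted at /opt/cloudbeaver/workspace inside the container, so the file can be inspected with something like the following (the container name is just an example); alternatively, edit it directly in the host directory you mounted as the workspace volume:

docker exec -it cloudbeaver cat /opt/cloudbeaver/workspace/.data/.cloudbeaver.runtime.conf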

The basic config looks like this:

@steffenba
steffenba / proxmox-nut-shutdown.md
Last active June 7, 2025 08:38
Proxmox: Shutdown via NUT with a NAS as UPS master

Scenario

Using a QNAP NAS connected to an Eaton 3S UPS via USB, I wanted to do the following.

In case of power loss:

  • Switch to UPS power
  • Start a timer
  • Abort the timer if power is restored
  • After 10 minutes, shut down Proxmox (which uses iSCSI storage provided by the QNAP)
  • After 15 minutes, shut down the QNAP
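
On the Proxmox side this boils down to running NUT as a network client against the NAS. A minimal sketch (UPS name, hostname, and credentials are placeholders; QNAP's built-in NUT server exposes its own fixed UPS name and monitor user, which you need to look up for your model/firmware):

root@proxmox:~# cat /etc/nut/nut.conf
MODE=netclient

root@proxmox:~# cat /etc/nut/upsmon.conf
MONITOR qnapups@nas.example.com 1 monuser secret slave
SHUTDOWNCMD "/sbin/shutdown -h now"

The delayed shutdown itself (the 10-minute timer) is handled separately, e.g. via upssched or the NAS-side shutdown delay, rather than by upsmon alone.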
@steffenba
steffenba / longhorn-replicas-eviction.md
Created September 23, 2024 15:14
Longhorn: Cannot evict pod as it would violate the pod's disruption budget

When upgrading (k3s) Kubernetes nodes and draining them with Ansible, the following error appears:

error when evicting pods/"instance-manager-xxxx" -n "longhorn" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.

I am using the CNPG operator to provision a few Postgres clusters in Kubernetes for my applications. These use Longhorn as a single-replica storage backend. More than one replica would be pointless, since Postgres itself manages data replication.

Since Longhorn by default will not allow single-replica PVs to be evicted when a node is drained, we need to change that behaviour for node upgrades.
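
One way to change it (a sketch, assuming Longhorn v1.4+ where the drain behaviour is controlled by the node-drain-policy setting, exposed as a settings.longhorn.io object in the longhorn-system namespace; it can also be changed in the Longhorn UI under Settings):

kubectl -n longhorn-system patch settings.longhorn.io node-drain-policy \
  --type=merge -p '{"value": "allow-if-replica-is-stopped"}'

This moves the policy away from the default block-if-contains-last-replica, so drains no longer hang on the last (and only) replica of those Postgres volumes.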

@steffenba
steffenba / Prometheus-kube-servicemonitor-not-picked-up.md
Last active August 31, 2024 08:54
ServiceMonitor resource not picked up by Prometheus kube-prometheus-stack Installation

As per prometheus-operator/kube-prometheus#1392. Thank you to KaiGrassnick!

The underlying issue is a configuration setting in the kube-prometheus-stack Helm chart, which by default expects your ServiceMonitor to carry a certain label. With no further configuration, the expected label was the Helm deployment's "release" label. Since I deployed my blackbox_exporter with a separate Helm chart in ArgoCD, that label wasn't applied and the ServiceMonitor wasn't picked up.

I am assuming you have already installed kube-prometheus-stack and that it's working. I am also assuming you're using the blackbox_exporter chart from https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-blackbox-exporter
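
One common fix (not necessarily the only one) is to tell kube-prometheus-stack to stop filtering ServiceMonitors by the release label, by setting the following in its Helm values:

prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false

The alternative is to add the expected release label (release: <name of your kube-prometheus-stack Helm release>) to the ServiceMonitor created by the blackbox_exporter chart.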