I hereby claim:
- I am frimik on github.
- I am frimik (https://keybase.io/frimik) on keybase.
- I have a public key ASDV4EMGtVVLqWX4UEYlqjXfmgc8XSt6O0btr6gEULmNxQo
To claim this, I am signing this object:
# $XDG_CONFIG_HOME/k9s/plugin.yml
plugin:
  # Annotate current object with a Flux ignore annotation
  fluxdepignore:
    # Define a mnemonic to invoke the plugin
    shortCut: Shift-I
    confirm: true
    # What will be shown on the K9s menu
    description: Flux Ignore
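The plugin definition above is cut off before its command/args section. For illustration only, this is the kind of command such a plugin would wrap, assuming the Flux v1 ignore annotation fluxcd.io/ignore is the one being set; the resource kind, name and namespace are placeholders that k9s would normally fill in from the selected object:

# Hypothetical underlying command for the plugin. fluxcd.io/ignore is an assumed
# Flux v1 annotation; Flux v2 uses a different reconcile annotation instead.
kubectl annotate --overwrite --namespace "$NAMESPACE" \
  deployment "$NAME" fluxcd.io/ignore=true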
.PHONY: vendor-cni
vendor-cni:
	vendir sync
	mkdir gitvendor/charts/linkerd2-cni/charts
	mv gitvendor/charts/partials gitvendor/charts/linkerd2-cni/charts/partials

patch-cni:
	patch -p0 <install-cni-delete-event.patch
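For completeness, how these targets are meant to be run in sequence; this assumes a vendir.yml describing the upstream linkerd2 charts and the install-cni-delete-event.patch file live next to the Makefile:

# Pull and rearrange the upstream charts, then apply the local patch on top.
make vendor-cni
make patch-cni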
Beginning Tiltfile execution
local: tk eval environments/default -e '_tilt'
→ {
→   "megaproxy": {
→     "port_forwards": [
→       {
→         "link_path": "/haproxy/stats",
→         "name": "health",
→         "port": 8404
→       }
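Outside of Tilt, the same expression can be evaluated by hand, for example to list the configured port-forwards. The jq filter below is illustrative; the key names are taken from the output above:

# Render the hidden `_tilt` object from the Tanka environment and extract the
# port-forward entries the Tiltfile consumes.
tk eval environments/default -e '_tilt' \
  | jq -r '.megaproxy.port_forwards[] | "\(.name)\t\(.port)\t\(.link_path)"'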
Going on and off VPN, things work, then they don't... generally annoying. Containers can't resolve names, and once you manage to make containers resolve, the containers inside containers (k3d) still can't...
It seems I finally got things working: I can go on and off VPN, and name resolution works essentially the same on the host as it does in Docker, in the Kubernetes (k3d) nodes, and in the containers running on those nodes.
On the host, per-interface DNS servers configured via systemd-resolved take care of it.
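A minimal sketch of that per-interface piece, assuming the VPN interface comes up as tun0; the DNS server address and the routing domain are placeholders:

# Point only the VPN interface at the corporate DNS server and route only the
# corporate domain through it; everything else keeps using the LAN resolver.
resolvectl dns tun0 10.8.0.1
resolvectl domain tun0 '~corp.example.com'

# Check what systemd-resolved ended up with per interface.
resolvectl status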
#!/bin/bash
set -ex

if [ $# -ne 2 ]; then
  echo "Usage: $0 EFS_ID_REGULAR EFS_ID_MAXIO"
  exit 1
fi

EFS_ID_REGULAR=$1
#!/bin/bash
#
# get_secret (c) 2019 Fulhack industries.
#
# Author: Mikael Fridh
#
# Half inspired, half stolen from somewhere, can't remember exactly.
#
# Use in scripts like:
# get_secret vpn_password "VPN password" [username]
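The body of the script isn't shown above. A minimal sketch of what a get_secret like this could look like, using libsecret's secret-tool; the attribute names, prompt handling and keyring backend are all assumptions, not the original implementation:

# Hypothetical sketch (not the original): look the secret up in the user keyring,
# prompt for it and store it on first use, then print it for the caller.
name=$1 prompt=$2 user=${3:-$USER}
secret=$(secret-tool lookup service "$name" user "$user" 2>/dev/null)
if [ -z "$secret" ]; then
  read -r -s -p "${prompt}: " secret
  echo >&2
  printf '%s' "$secret" | secret-tool store --label="$name" service "$name" user "$user"
fi
printf '%s\n' "$secret"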
#!/bin/bash

CLUSTER_NAME="k3s-default"

# Install k3d
k3d --version || wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash

# verify it
k3d check-tools

# create a volume and cluster
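The script is cut off at this point. A sketch of how the remaining steps could look with the k3d 1.x CLI this script targets; the volume name, worker count and flags are assumptions, and current k3d releases use `k3d cluster create` instead:

# Create a docker volume to share with the nodes, then the cluster itself (k3d 1.x syntax).
docker volume create "k3d-${CLUSTER_NAME}-images"
k3d create --name "$CLUSTER_NAME" --workers 2 --volume "k3d-${CLUSTER_NAME}-images:/images"

# Point kubectl at the new cluster.
export KUBECONFIG="$(k3d get-kubeconfig --name="$CLUSTER_NAME")"
kubectl cluster-info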
An Aurora Scheduler deployment usually consists of 5 instances across 5 nodes, typically with a ZooKeeper instance on each of those 5, and maybe 5 Mesos Master instances on the same nodes as well.
Now you want SSL ... and AUTH on those!? Tough luck ...
The documentation discusses making the scheduler listen on 127.0.0.1:8081
and putting an nginx on each of the nodes, listening on 0.0.0.0:8081
or the node IP (I forget which).
Now you'll think: ok, so what happens to "sane hostnames" and "hostnames matching the certificate CN"?
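To make the setup concrete, here is a minimal sketch of such an nginx TLS front for one node. It is not taken from the Aurora docs; the node IP, hostname and certificate paths are placeholders. Binding the node's own IP rather than 0.0.0.0 avoids colliding with the scheduler's 127.0.0.1:8081 listener:

# Hypothetical TLS front for one scheduler node; adjust IP, hostname and cert paths.
cat > /etc/nginx/conf.d/aurora-scheduler.conf <<'EOF'
server {
    # nginx owns the node-facing port; the scheduler itself keeps 127.0.0.1:8081.
    listen 192.0.2.11:8081 ssl;
    server_name aurora1.example.com;

    ssl_certificate     /etc/nginx/ssl/aurora1.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/aurora1.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
nginx -s reload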
""" | |
Aurora Scheduler check | |
Collects metrics from aurora scheduler. | |
""" | |
import requests | |
from checks import AgentCheck, CheckException | |
class AuroraCheck(AgentCheck): |