
@agracey
Last active November 2, 2023 14:24
mesh extension example yaml

How to run this:

Testing environment:

  • 3 NUCs -- SLE Micro 5.4 running K3s with no special configuration
  • 1 Raspberry Pi 3 -- openSUSE Tumbleweed

On the remote host:

  • Add routes to 10.4[1-3].0.0/16 via one of your cluster nodes
  • Create an SSH keypair and install the public key to allow root SSH access
    • Yeah... I know this is a bad idea. It's a demo; I'll fix it later.
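The remote-host prep above can be sketched as a script. The node IP is a placeholder for your own network, and the privileged commands are echoed rather than executed so they can be reviewed before running as root:

```shell
#!/bin/sh
# Placeholder: substitute a real cluster node IP for your network.
CLUSTER_NODE=192.168.1.10

# Route the K3s 10.41-43.x ranges via a cluster node.
# Drop the leading "echo" to actually install the routes (requires root).
for net in 10.41.0.0/16 10.42.0.0/16 10.43.0.0/16; do
  echo ip route add "$net" via "$CLUSTER_NODE"
done

# Demo keypair, authorized for root login on this host (demo only):
echo "ssh-keygen -t rsa -b 4096 -f ./id_rsa -N ''"
echo "cat ./id_rsa.pub >> /root/.ssh/authorized_keys"
```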

On the cluster:

  • Create a LoadBalancer Service for kube-dns
    • For K3s, use dns-external.yaml
  • Apply rbac.yaml
  • Create a ConfigMap named workload with an item named workload.yaml containing the contents of workload.yaml below
  • Create a Secret named ssh-keys with the SSH keypair from the remote host
    • The private key needs a newline at the end of the file; OpenSSL (or k8s) doesn't seem to add one.
  • Run the Job
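Those cluster-side steps might look like the sketch below. The file and key names are assumptions based on the notes above; the only part that runs here is the trailing-newline fix for the private key, with the kubectl calls left as comments since they need a live cluster:

```shell
#!/bin/sh
# Assumed path: the keypair copied over from the remote host.
KEY=./id_rsa

# The mounted private key must end in a newline or ssh rejects it;
# append one only when the last byte is not already a newline.
if [ -n "$(tail -c1 "$KEY" 2>/dev/null)" ]; then
  echo >> "$KEY"
fi

# Cluster objects, per the steps above (run against your cluster):
#   kubectl apply -f dns-external.yaml
#   kubectl apply -f rbac.yaml
#   kubectl create configmap workload --from-file=workload.yaml
#   kubectl create secret generic ssh-keys \
#     --from-file=ssh-privatekey="$KEY" --from-file=ssh-publickey="$KEY.pub"
#   kubectl apply -f job.yaml   # assumed name for the Job manifest below
```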

Work still to do:

  • Figure out rebooting...
  • Create a patch that adds the proxy to workload.yaml instead of hard-coding it
  • Look up the SSH key instead of hard-coding it
  • Clean up the extraneous errors/warnings
    • Use files instead of kubectl create
  • Use a non-default namespace
  • Integrate with Akri to run when a device is discovered
  • Integrate with Epinio to allow pushing code to non-k8s devices
# dns-external.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kube-dns
  name: dns-external
  namespace: kube-system
spec:
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  type: LoadBalancer
# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sme-setup-job
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sme-setup
rules:
- apiGroups:
  - ''
  resources:
  - secrets
  verbs:
  - create
  - get
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
- apiGroups:
  - ''
  resources:
  - serviceaccounts
  verbs:
  - create
  - get
- apiGroups:
  - ''
  resources:
  - configmaps
  verbs:
  - create
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: ServiceAccount
  name: sme-setup-job
  namespace: default # TODO template
roleRef:
  kind: ClusterRole
  name: sme-setup
  apiGroup: rbac.authorization.k8s.io
# The setup Job
apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  template:
    spec:
      serviceAccountName: sme-setup-job
      restartPolicy: Never
      volumes:
      - name: empty
        emptyDir: {}
      - name: ssh-keys
        secret:
          secretName: ssh-keys
          defaultMode: 0600
      - name: workload
        configMap:
          name: workload
          items:
          - key: workload.yaml
            path: workload.yaml
      containers:
      - image: docker.io/atgracey/sme-job:latest
        name: setup-remote
        volumeMounts:
        - name: ssh-keys
          mountPath: /root/.ssh2/
        - name: workload
          readOnly: true
          mountPath: /mnt/
        - name: empty
          mountPath: /tmp/
        - name: empty
          mountPath: /root/.ssh/
        env:
        - name: LINKERD2_PROXY_IDENTITY_DIR
          value: "/tmp/keys"
        - name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
          value: "external-client.default.serviceaccount.identity.linkerd.cluster.local"
        - name: WORKLOAD_NAME
          value: "external-client"
        - name: HOST
          value: "192.168.1.213"
        - name: REMOTEUSER
          value: "root"
        command: [ "/bin/bash" ]
        args:
        - "-c"
        - |
          mkdir -p $LINKERD2_PROXY_IDENTITY_DIR
          cp /root/.ssh2/ssh-privatekey /root/.ssh/id_rsa
          cp /root/.ssh2/ssh-publickey /root/.ssh/id_rsa.pub
          kubectl create serviceaccount $WORKLOAD_NAME
          kubectl create secret generic --dry-run=client --type="kubernetes.io/service-account-token" $WORKLOAD_NAME -oyaml | kubectl annotate --local=true -f - -oyaml kubernetes.io/service-account.name=$WORKLOAD_NAME | kubectl apply -f -
          kubectl get secret $WORKLOAD_NAME -ojsonpath='{.data.token}' | base64 -d > $LINKERD2_PROXY_IDENTITY_DIR/sa_token
          kubectl get configmap linkerd-identity-trust-roots -n linkerd -ojsonpath='{.data.ca-bundle\.crt}' > $LINKERD2_PROXY_IDENTITY_DIR/trustanchor.pem
          # Patch the pod template's serviceAccountName; double quotes so $WORKLOAD_NAME expands.
          kubectl create deployment --dry-run=client -oyaml $WORKLOAD_NAME-policy --image=busybox -- sleep 36000 | kubectl patch --local=true -f - -oyaml -p "{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"$WORKLOAD_NAME\"}}}}" | kubectl apply -f -
          LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS=$(cat $LINKERD2_PROXY_IDENTITY_DIR/trustanchor.pem) /usr/lib/linkerd/linkerd2-proxy-identity # This will "fail" after doing the work we need.
          # Create a configmap with all the trust and identity bits.
          # Should likely be a Secret, but this is just a POC...
          kubectl create configmap proxy-config --from-file=$LINKERD2_PROXY_IDENTITY_DIR --dry-run=client -oyaml > cm.yaml
          cat cm.yaml > out.yaml
          echo "---" >> out.yaml
          cat /mnt/workload.yaml >> out.yaml
          cat out.yaml | ssh $REMOTEUSER@$HOST -o StrictHostKeyChecking=no 'tee /root/workload.yaml | podman kube play --replace - ; systemctl enable "podman-kube@-root-workload.yaml"'
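The Job script above mints a long-lived ServiceAccount token by piping `kubectl create secret` through `kubectl annotate` into `kubectl apply`. The object that pipeline produces is equivalent to applying a Secret like this directly (a sketch, assuming `WORKLOAD_NAME=external-client` as in the env above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: external-client
  annotations:
    kubernetes.io/service-account.name: external-client
type: kubernetes.io/service-account-token
# Once the ServiceAccount exists, the token controller
# populates data.token, which the script then base64-decodes.
```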
# workload.yaml
apiVersion: v1
kind: Pod
metadata:
  name: workload
  annotations:
    io.podman.annotations.init.container.type: always
spec:
  initContainers:
  - name: setup-iptables
    image: registry.opensuse.org/home/atgracey/utilities/containerfile/iptables:latest
    command: [ "/bin/bash" ]
    args:
    - -c
    - |
      iptables -t nat -N PROXY_INIT_OUTPUT
      iptables -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 0 -j RETURN -m comment --comment ignore-proxy-user-id
      iptables -t nat -A PROXY_INIT_OUTPUT -o lo -j RETURN -m comment --comment ignore-loopback
      iptables -t nat -A PROXY_INIT_OUTPUT -p tcp -j REDIRECT --to-port 4140 -m comment --comment redirect-all-outgoing-to-proxy-port
      iptables -t nat -A OUTPUT -j PROXY_INIT_OUTPUT -m comment --comment install-proxy-init-output
  containers:
  - name: vote-bot
    image: docker.l5d.io/buoyantio/emojivoto-web:v11
    command:
    - emojivoto-vote-bot
    env:
    - name: WEB_HOST
      value: web-svc.emojivoto:80
    securityContext:
      runAsUser: 1000
  - name: linkerd-proxy
    image: cr.l5d.io/linkerd/proxy:stable-2.14.1
    env:
    - name: _pod_name
      value: votebotremote
    - name: _pod_ns
      value: remote
    - name: _pod_nodeName
      value: rpi
    - name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
      value: "linkerd-identity-headless.linkerd.svc.cluster.local.:8080"
    - name: LINKERD2_PROXY_IDENTITY_SVC_NAME
      value: "linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local"
    - name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
      valueFrom:
        configMapKeyRef:
          name: proxy-config
          key: trustanchor.pem
    - name: LINKERD2_PROXY_IDENTITY_DIR
      value: "/root/sme/keys"
    - name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
      value: "external-client.default.serviceaccount.identity.linkerd.cluster.local"
    - name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
      value: "/root/sme/keys/token"
    - name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
      value: "10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16"
    - name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
      value: "all-unauthenticated"
    - name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
      value: "linkerd-dst-headless.linkerd.svc.cluster.local.:8086"
    - name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
      value: "10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16"
    - name: LINKERD2_PROXY_DESTINATION_SVC_NAME
      value: "linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local"
    - name: LINKERD2_PROXY_POLICY_SVC_ADDR
      value: "linkerd-policy.linkerd.svc.cluster.local.:8090"
    - name: LINKERD2_PROXY_POLICY_WORKLOAD
      value: $(_pod_ns):$(_pod_name)
    - name: LINKERD2_PROXY_POLICY_SVC_NAME
      value: "linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local"
    - name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
      value: "127.0.0.1:4140"
    volumeMounts:
    - name: keys
      mountPath: /root/sme/keys
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 192.168.1.11
    searches:
    - local
    - cluster.local
    - svc.cluster.local
    options:
    - name: ndots
      value: "2"
  volumes:
  - name: keys
    configMap:
      name: proxy-config
      items:
      - key: "key.p8"
        path: "key.p8"
      - key: "csr.der"
        path: "csr.der"
      - key: "sa_token"
        path: "token"