@dionysius, last active August 13, 2019 14:16
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubefed-external-dns
  namespace: kube-federation-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kubefed-external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
- apiGroups: ["multiclusterdns.kubefed.io"]
  resources: ["dnsendpoints", "dnsendpoints/status"]
  verbs: ["get", "watch", "list", "update", "patch"]
  # full verb list for reference: "get", "list", "watch", "create", "update", "patch", "delete"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubefed-external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubefed-external-dns
subjects:
- kind: ServiceAccount
  name: kubefed-external-dns
  namespace: kube-federation-system
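# Sanity check for the binding above (a sketch; impersonates the service
# account via kubectl's --as flag, and should answer "yes"):
#   kubectl auth can-i list dnsendpoints.multiclusterdns.kubefed.io \
#     --as=system:serviceaccount:kube-federation-system:kubefed-external-dns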
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-1-ca
  namespace: kube-federation-system
data:
  ca: |
    -----BEGIN CERTIFICATE-----
    MI...xN
    -----END CERTIFICATE-----
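# Instead of inlining the PEM, the same ConfigMap can be generated from a
# local file (a sketch; ca.crt is a hypothetical file name):
#   kubectl -n kube-federation-system create configmap dns-1-ca --from-file=ca=ca.crt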
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubefed-external-dns
  namespace: kube-federation-system
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: kubefed-external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        imagePullPolicy: IfNotPresent
        args:
        - --source=service # or ingress, or both
        - --provider=pdns
        - --pdns-server=https://dns-1.example.io:8081
        - --pdns-tls-enabled
        - --tls-ca=/dns-1-ca/ca
        - --pdns-api-key=notallowedtobeempty
        - --txt-owner-id=my-external-dns
        - --log-level=debug
        - --interval=30s
        # some settings I think are good for us
        - --domain-filter=external-dns-test.example.io # makes external-dns see only zones matching this domain; omit to process all zones available in PowerDNS
        - --fqdn-template={{.Name}}.{{.Namespace}}.external-dns-test.example.io # pods don't need a special FQDN annotation; useful when one template covers all
        #- --combine-fqdn-annotation # let's see how this goes; technically we could optionally allow a mixture of these two FQDNs
        - --ignore-hostname-annotation # ignore hostname annotations on pods, since we want to enforce the fqdn-template
        - --publish-internal-services
        - --publish-host-ip
        # federated part
        # yes, --source can and must be given multiple times for multiple sources
        - --source=crd
        - --crd-source-apiversion=multiclusterdns.kubefed.io/v1alpha1
        - --crd-source-kind=DNSEndpoint
        # TODO: this software has a lot of options; let's go through them together
        volumeMounts:
        - name: dns-1-ca
          mountPath: /dns-1-ca
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - name: dns-1-ca
        configMap:
          name: dns-1-ca
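# Worked example of the flags above: with this fqdn-template, a Service "foo"
# in namespace "bar" is published as foo.bar.external-dns-test.example.io.
# A quick check against PowerDNS once a record exists (a sketch; "foo" and
# "bar" are hypothetical names):
#   dig +short foo.bar.external-dns-test.example.io A @dns-1.example.io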
---
apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: Domain
metadata:
  # Corresponds to <federation> in the resource records.
  name: cluster1
  # The namespace running kubefed-controller-manager.
  namespace: kube-federation-system
# The domain/subdomain that is set up in your external-dns provider.
domain: cluster1.external-dns-test.example.io
---
apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: Domain
metadata:
  # Corresponds to <federation> in the resource records.
  name: cluster2
  # The namespace running kubefed-controller-manager.
  namespace: kube-federation-system
# The domain/subdomain that is set up in your external-dns provider.
domain: cluster2.external-dns-test.example.io
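# Both Domain objects should be visible next to the kubefed control plane
# (a sketch; using the fully qualified resource name of the CRD):
#   kubectl -n kube-federation-system get domains.multiclusterdns.kubefed.io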
---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: onlyfederated
  namespace: test-namespace
spec:
  placement:
    clusterSelector:
      matchLabels: {}
  template:
    metadata:
      namespace: other-namespace
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx:1.7.9
            imagePullPolicy: IfNotPresent
            name: nginx
            ports:
            - containerPort: 80
              protocol: TCP
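# To confirm propagation, the plain Deployment should appear on each member
# cluster (a sketch; I assume it lands in the FederatedDeployment's own
# namespace, with the template's metadata.namespace overridden):
#   kubectl --context cluster1 -n test-namespace get deployment onlyfederated
#   kubectl --context cluster2 -n test-namespace get deployment onlyfederated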
---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
  name: onlyfederated-service
  namespace: test-namespace
spec:
  placement:
    clusterSelector:
      matchLabels: {}
  template:
    spec:
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
      type: LoadBalancer
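# Likewise for the Service (a sketch); each member cluster should get a
# LoadBalancer Service with its own metallb address:
#   kubectl --context cluster2 -n test-namespace get service onlyfederated-service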
---
apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: ServiceDNSRecord
metadata:
  # The name of the sample service.
  name: onlyfederated-service
  # The namespace of the sample deployment/service.
  namespace: test-namespace
spec:
  # The name of the corresponding Domain.
  domainRef: cluster1
  recordTTL: 300
# How do I set this for both cluster1 and cluster2? The FederatedService goes to both clusters
# (I assume metadata.name must exactly match the service name).
# Or must domainRef be valid for both clusters, creating an entry per cluster for the same
# record (so DNS round-robin happens)?
# ---
# apiVersion: multiclusterdns.kubefed.io/v1alpha1
# kind: ServiceDNSRecord
# metadata:
#   # The name of the sample service.
#   name: onlyfederated-service-cluster2
#   # The namespace of the sample deployment/service.
#   namespace: test-namespace
# spec:
#   # The name of the corresponding Domain.
#   domainRef: cluster2
#   recordTTL: 300
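# Either way, the controller's view can be inspected on the host cluster
# (a sketch; the status should list the load balancer IPs it collected):
#   kubectl -n test-namespace get servicednsrecords.multiclusterdns.kubefed.io onlyfederated-service -o yaml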
# If relevant: metallb is installed on all clusters using https://raw.githubusercontent.com/danderson/metallb/master/manifests/example-layer2-config.yaml
# Show installed helm charts and their versions
$ helm list
NAME     REVISION  UPDATED                   STATUS    CHART              APP VERSION  NAMESPACE
kubefed  1         Mon Aug 12 18:42:35 2019  DEPLOYED  kubefed-0.1.0-rc5               kube-federation-system
metallb  1         Tue Aug 13 14:57:00 2019  DEPLOYED  metallb-0.10.0     0.8.1        metallb-system
# On cluster1 this service was created by federating test-onlyfederated.yaml
$ kubectl --context cluster1 -n test-namespace get service onlyfederated-service -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-08-12T15:22:28Z"
  labels:
    kubefed.io/managed: "true"
  name: onlyfederated-service
  namespace: test-namespace
  resourceVersion: "2321468"
  selfLink: /api/v1/namespaces/test-namespace/services/onlyfederated-service
  uid: 5ef69c1d-4539-4702-bb89-21d65d9f8f9d
spec:
  clusterIP: 10.104.107.245
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32531
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.1.240
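# Quick smoke test against the metallb address from the status above
# (a sketch; assumes L2 reachability to the address pool):
$ curl -s http://192.168.1.240/ | head -n 4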
# Status of FederatedService
$ kubectl -n test-namespace get federatedservice onlyfederated-service -o yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
  creationTimestamp: "2019-08-13T12:47:22Z"
  finalizers:
  - kubefed.io/sync-controller
  generation: 1
  name: onlyfederated-service
  namespace: test-namespace
  resourceVersion: "3199836"
  selfLink: /apis/types.kubefed.io/v1beta1/namespaces/test-namespace/federatedservices/onlyfederated-service
  uid: a619d175-4877-4755-afa4-442db9018f7e
spec:
  placement:
    clusterSelector:
      matchLabels: {}
  template:
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
      type: LoadBalancer
status:
  clusters:
  - name: cluster1
  - name: cluster2
  conditions:
  - lastProbeTime: "2019-08-13T14:05:30Z"
    lastTransitionTime: "2019-08-13T12:47:30Z"
    status: "True"
    type: Propagation
# Status of DNSEndpoint
$ kubectl -n test-namespace get dnsendpoints.multiclusterdns.kubefed.io service-onlyfederated-service -o yaml
apiVersion: multiclusterdns.kubefed.io/v1alpha1
kind: DNSEndpoint
metadata:
  creationTimestamp: "2019-08-13T12:47:22Z"
  generation: 1
  name: service-onlyfederated-service
  namespace: test-namespace
  resourceVersion: "3188019"
  selfLink: /apis/multiclusterdns.kubefed.io/v1alpha1/namespaces/test-namespace/dnsendpoints/service-onlyfederated-service
  uid: eb2bd4b0-d8a1-466b-a292-b4cf35733f56
spec: {}
status:
  observedGeneration: 1
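# Note: spec is empty, so the crd source in external-dns has nothing to
# publish yet. Watching the object shows whether the DNSEndpoint controller
# ever fills it in (a sketch):
$ kubectl -n test-namespace get dnsendpoints.multiclusterdns.kubefed.io service-onlyfederated-service -o yaml -w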
$ kubectl -n kube-federation-system logs -f kubefed-external-dns-f55999f76-pknpn
...
time="2019-08-13T14:12:46Z" level=debug msg="Records fetched:\n[kubernetes.default.external-dns-test.example.io 300 IN TXT \"heritage=external-dns,external-dns/owner=my-external-dns,external-dns/resource=service/default/kubernetes\" [] kubernetes.default.external-dns-test.exampe.io 300 IN A 10.96.0.1 [] test-service.default.external-dns-test.example.io 300 IN TXT \"heritage=external-dns,external-dns/owner=my-external-dns,external-dns/resource=service/default/test-service\" [] test-service.default.external-dns-test.example.io 300 IN A 192.168.1.240 [] kubefed-admission-webhook.kube-federation-system.external-dns-test.example.io 300 IN TXT \"heritage=external-dns,external-dns/owner=my-external-dns,external-dns/resource=service/kube-federation-system/kubefed-admission-webhook\" [] kubefed-admission-webhook.kube-federation-system.external-dns-test.example.io 300 IN A 10.104.48.220 [] tiller-deploy.kube-system.external-dns-test.example.io 300 IN TXT \"heritage=external-dns,external-dns/owner=my-external-dns,external-dns/resource=service/kube-system/tiller-deploy\" [] tiller-deploy.kube-system.external-dns-test.example.io 300 IN A 10.106.97.94 [] kube-dns.kube-system.external-dns-test.example.io 300 IN TXT \"heritage=external-dns,external-dns/owner=my-external-dns,external-dns/resource=service/kube-system/kube-dns\" [] kube-dns.kube-system.external-dns-test.example.io 300 IN A 10.96.0.10 [] external-dns-test.example.io 300 IN SOA external-dns-test.example.io. support.automatic-server.com. 2019081203 28800 7200 604800 86400 []]"
time="2019-08-13T14:12:46Z" level=debug msg="Endpoints generated from service: kube-system/tiller-deploy: [tiller-deploy.kube-system.external-dns-test.example.io 0 IN A 10.106.97.94 []]"
time="2019-08-13T14:12:46Z" level=debug msg="Endpoints generated from service: kube-federation-system/kubefed-admission-webhook: [kubefed-admission-webhook.kube-federation-system.external-dns-test.example.io 0 IN A 10.104.48.220 []]"
time="2019-08-13T14:12:46Z" level=debug msg="Endpoints generated from service: default/test-service: [test-service.default.external-dns-test.example.io 0 IN A 192.168.1.240 []]"
time="2019-08-13T14:12:46Z" level=debug msg="Endpoints generated from service: default/kubernetes: [kubernetes.default.external-dns-test.example.io 0 IN A 10.96.0.1 []]"
time="2019-08-13T14:12:46Z" level=debug msg="Endpoints generated from service: kube-system/kube-dns: [kube-dns.kube-system.external-dns-test.example.io 0 IN A 10.96.0.10 []]"
time="2019-08-13T14:12:46Z" level=debug msg="Changes pushed out to PowerDNS in 608ns\n"
$ kubectl -n kube-federation-system logs -f kubefed-controller-manager-7f4858f959-pxrmj
KubeFed controller-manager version: version.Info{Version:"v0.1.0-rc5", GitCommit:"99be0218bf5ac7d560ec0d7c2cfbfcbb86ba2d61", GitTreeState:"clean", BuildDate:"2019-08-01T16:50:07Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
W0813 12:52:59.494790 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0813 12:53:00.115469 1 controller-manager.go:224] Setting Options with KubeFedConfig "kube-federation-system/kubefed"
I0813 12:53:00.115559 1 controller-manager.go:315] Using valid KubeFedConfig "kube-federation-system/kubefed"
I0813 12:53:00.115609 1 controller-manager.go:139] KubeFed will target all namespaces
I0813 12:53:00.182886 1 leaderelection.go:205] attempting to acquire leader lease kube-federation-system/kubefed-controller-manager...
I0813 13:07:28.230211 1 leaderelection.go:214] successfully acquired lease kube-federation-system/kubefed-controller-manager
I0813 13:07:28.286073 1 leaderelection.go:75] promoted as leader
I0813 13:07:29.499642 1 controller.go:91] Starting cluster controller
I0813 13:07:30.781349 1 controller.go:70] Starting scheduling manager
I0813 13:07:30.882212 1 controller.go:179] Starting schedulingpreference controller for ReplicaSchedulingPreference
I0813 13:07:33.781438 1 controller.go:81] Starting replicaschedulingpreferences controller
I0813 13:07:33.790364 1 controller.go:197] Starting plugin FederatedReplicaSet for ReplicaSchedulingPreference
I0813 13:07:33.983638 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:34.083967 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:35.390367 1 controller.go:88] Starting ServiceDNS controller
I0813 13:07:35.691766 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:35.694650 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:35.784517 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:35.787395 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:36.286452 1 controller.go:197] Starting plugin FederatedDeployment for ReplicaSchedulingPreference
I0813 13:07:36.581688 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:36.591278 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:37.487758 1 controller.go:113] Starting "service" DNSEndpoint controller
I0813 13:07:37.692979 1 controller.go:124] "service" DNSEndpoint controller synced and ready
I0813 13:07:38.501106 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:38.589496 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:39.584133 1 controller.go:79] Starting IngressDNS controller
I0813 13:07:39.690098 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:39.699828 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:40.909212 1 controller.go:113] Starting "ingress" DNSEndpoint controller
I0813 13:07:41.080942 1 controller.go:124] "ingress" DNSEndpoint controller synced and ready
I0813 13:07:42.481802 1 controller.go:70] Starting FederatedTypeConfig controller
I0813 13:07:43.904364 1 controller.go:101] Starting sync controller for "FederatedJob"
I0813 13:07:43.904487 1 controller.go:330] Started sync controller for "FederatedJob"
I0813 13:07:44.009698 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:44.082590 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:45.284328 1 controller.go:101] Starting sync controller for "FederatedServiceAccount"
I0813 13:07:45.285159 1 controller.go:330] Started sync controller for "FederatedServiceAccount"
I0813 13:07:45.684565 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:45.787269 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:46.783481 1 controller.go:101] Starting sync controller for "FederatedIngress"
I0813 13:07:46.783587 1 controller.go:330] Started sync controller for "FederatedIngress"
I0813 13:07:46.898749 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:46.982296 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:47.804478 1 controller.go:101] Starting sync controller for "FederatedClusterRole"
I0813 13:07:47.804548 1 controller.go:330] Started sync controller for "FederatedClusterRole"
I0813 13:07:48.082067 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:48.182859 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:49.898449 1 controller.go:101] Starting sync controller for "FederatedConfigMap"
I0813 13:07:49.990486 1 controller.go:330] Started sync controller for "FederatedConfigMap"
I0813 13:07:49.999518 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:50.081468 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:50.802901 1 controller.go:101] Starting sync controller for "FederatedService"
I0813 13:07:50.802970 1 controller.go:330] Started sync controller for "FederatedService"
I0813 13:07:50.982349 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:51.087396 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:52.686298 1 controller.go:101] Starting sync controller for "FederatedSecret"
I0813 13:07:52.687214 1 controller.go:330] Started sync controller for "FederatedSecret"
I0813 13:07:53.890249 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:53.985617 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:54.789921 1 controller.go:101] Starting sync controller for "FederatedReplicaSet"
I0813 13:07:54.790008 1 controller.go:330] Started sync controller for "FederatedReplicaSet"
I0813 13:07:54.923957 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:54.939449 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:55.806292 1 controller.go:101] Starting sync controller for "FederatedNamespace"
I0813 13:07:55.884023 1 controller.go:330] Started sync controller for "FederatedNamespace"
I0813 13:07:55.982776 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
I0813 13:07:56.082273 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:56.896493 1 controller.go:101] Starting sync controller for "FederatedDeployment"
I0813 13:07:56.896554 1 controller.go:330] Started sync controller for "FederatedDeployment"
I0813 13:07:57.082567 1 federated_informer.go:205] Cluster kube-federation-system/cluster1 is ready
I0813 13:07:57.103955 1 federated_informer.go:205] Cluster kube-federation-system/cluster2 is ready
W0813 13:29:47.083158 1 reflector.go:270] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: watch of <nil> ended with: too old resource version: 2320590 (2320979)
W0813 13:37:43.189063 1 reflector.go:270] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: watch of <nil> ended with: too old resource version: 2308444 (2308786)
W0813 13:46:51.488558 1 reflector.go:270] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: watch of <nil> ended with: too old resource version: 2322243 (2322250)
W0813 13:54:44.302383 1 reflector.go:270] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: watch of <nil> ended with: too old resource version: 2310131 (2310133)
W0813 14:09:21.782198 1 reflector.go:270] sigs.k8s.io/kubefed/pkg/controller/util/federated_informer.go:437: watch of <nil> ended with: too old resource version: 2323507 (2323918)