Triggermesh OnPrem

[WIP] Deploy TriggerMesh OnPrem

Cluster creation

For information, on GKE we create the cluster without HttpLoadBalancing because we use an nginx ingress:

gcloud container clusters create NAME --disable-addons HttpLoadBalancing

With autoscaling, a scope to write to Cloud DNS, and read-write access to storage for a persistent registry, a node pool looks like this:

gcloud container node-pools create tm --cluster munuprod --zone us-central1-a --num-nodes 5 -m n1-standard-8 --enable-autoscaling --min-nodes 3 --max-nodes 10 --scopes https://www.googleapis.com/auth/ndev.clouddns.readwrite,https://www.googleapis.com/auth/devstorage.read_write

We need to give our main account admin privileges (a GKE requirement before being able to create RBAC ClusterRoles):

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user runseb@gmail.com

Ingress deployment

We use the Helm chart to deploy the nginx ingress controller, following the instructions here.

Install Helm

Install Helm on your local machine and deploy Tiller in the cluster.

Get the service account and RBAC roles properly set, then deploy Tiller:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'      
helm init --service-account tiller --upgrade

Deploy the Ingress controller

helm install --name nginx-ingress stable/nginx-ingress --set rbac.create=true

Note that the service fronting the controller will be a LoadBalancer type service, hence you will get a public IP from Google:

$ kubectl get svc
NAME                                          TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
nginx-ingress-nginx-ingress-controller        LoadBalancer   10.31.247.106   35.184.224.62   80:30390/TCP,443:32244/TCP   1d
nginx-ingress-nginx-ingress-default-backend   ClusterIP      10.31.240.58    <none>          80/TCP                       1d

NOTE: Use that IP to create the A record
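For example, with Cloud DNS (the node pool already has the clouddns.readwrite scope) the record could be created roughly like this; the managed zone tm-zone and the hostname are placeholders:

gcloud dns record-sets transaction start --zone=tm-zone
gcloud dns record-sets transaction add 35.184.224.62 --name="*.triggermesh.example.org." --ttl=300 --type=A --zone=tm-zone
gcloud dns record-sets transaction execute --zone=tm-zone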

Annotate it to enable CORS with:

kubectl annotate svc nginx-ingress-controller nginx.ingress.kubernetes.io/enable-cors=true

Edit the ConfigMap to add a hide-headers entry, otherwise a basic auth prompt might show up. Set hide-headers: "Www-authenticate".
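One way to do that, assuming the chart created the controller ConfigMap as nginx-ingress-nginx-ingress-controller (check the exact name with kubectl get cm):

kubectl patch configmap nginx-ingress-nginx-ingress-controller --type merge -p '{"data":{"hide-headers":"Www-authenticate"}}'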

CAREFUL: Add the - --publish-service=default/nginx-ingress-controller option to the ingress controller or your Ingress objects will get the wrong IP address.
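A sketch of adding that flag, assuming the chart created the controller Deployment as nginx-ingress-nginx-ingress-controller (check with kubectl get deploy):

kubectl edit deploy nginx-ingress-nginx-ingress-controller
# then append under spec.template.spec.containers[0].args:
#   - --publish-service=default/nginx-ingress-controller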

Install Knative

We follow the core documentation

Let's go with the 0.2.3 release still, even though 0.3.0 has been out since last week.

Install Istio

kubectl apply --filename https://raw.githubusercontent.com/knative/serving/v0.2.2/third_party/istio-1.0.2/istio.yaml

Install Serving

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.2.2/release.yaml

Edit the config-domain ConfigMap if we want to expose routes on a domain other than example.com:

kubectl edit cm config-domain -n knative-serving
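In config-domain the keys are domain names (with empty values); a sketch of the change, using a placeholder domain:

kubectl -n knative-serving patch cm config-domain --type merge -p '{"data":{"triggermesh.example.org":""}}'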

Install Eventing

kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.2.0/release.yaml
kubectl apply --filename https://github.com/knative/eventing-sources/releases/download/v0.2.0/release.yaml

Create namespaces

kubectl create ns triggermesh
kubectl create ns registry

Local Docker Registry (Optional)

NOTE: We will have to see which registry we can use on-prem.

Our local registry setup is a copy of https://github.com/triggermesh/knative-local-registry/releases/tag/v0.2 but with GCS persistence enabled. We maintain a copy of the manifests here.
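For reference, GCS persistence can also be switched on through the registry's environment-variable config overrides; a sketch assuming the registry Deployment is named registry and the bucket <BUCKET_NAME> already exists:

kubectl -n registry set env deployment/registry REGISTRY_STORAGE=gcs REGISTRY_STORAGE_GCS_BUCKET=<BUCKET_NAME>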

Node pools must be created with the storage read-write scope (or we'd have to set up a service account for the registry). We currently have a tiny extra node pool: 3685734 should be reverted once the default node pool supports GCS write.

Thanks to bucket persistence we can scale the registry ReplicaSet up to any number of pods and access the same images from multiple clusters.
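For example, assuming the registry pods are managed by a Deployment named registry:

kubectl -n registry scale deployment registry --replicas=3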

For details and compatibility of the registry-etc-hosts-update DaemonSet, see https://github.com/triggermesh/knative-local-registry/s.

Triggermesh

At a high level we should be able to simply apply the two manifests, which contain Ingresses, Services, Deployments, RBAC rules, PVCs and Secrets.

kubectl apply -f console.yaml
kubectl apply -f app.yaml

Backend tokens

The backend application requires several secrets to be available in the namespace to be fully functional.

Auth0

First of all, the Auth0 service is used for user authorization, so the backend expects an auth0-token secret with API credentials to be available before it starts. To get those credentials, go to the Auth0 applications page, open (or create) an API application of the MACHINE TO MACHINE type, and copy the Client ID and Client Secret into a k8s secret:

kubectl -n triggermesh create secret generic auth0-token \
  --from-literal=client_id=<CLIENT_ID> \
  --from-literal=client_secret=<CLIENT_SECRET>

GitHub

The backend sets a payload secret to secure the webhook endpoint. The webhook secret name which the backend will use can be configured via the GIT_HOOK_SECRET_NAME environment variable (munusecret by default) and must contain a secret key holding a random string. Please note: setting a new key for an existing secret will make all GitHub webhooks with the old secret key invalid.

kubectl -n triggermesh create secret generic munusecret --from-literal=secret=<RANDOM_STRING>
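If a different secret name is wanted, the variable can be set on the backend workload; a sketch assuming the backend runs as a Deployment named backend (name is hypothetical):

kubectl -n triggermesh set env deployment/backend GIT_HOOK_SECRET_NAME=munusecret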

Bitbucket

If you have Bitbucket enabled in the Social Connections section of your Auth0 dashboard, you need to create a bitbucket-token secret with client_id and client_secret keys, which can be obtained from the Bitbucket settings in the Auth0 dashboard.

kubectl -n triggermesh create secret generic bitbucket-token \
  --from-literal=client_id=<CLIENT_ID> \
  --from-literal=client_secret=<CLIENT_SECRET>

Triggermesh service Admin

Triggermesh users whose default service account has been added as a subject to the triggermesh-admin-binding ClusterRoleBinding are considered Administrators with full access to resources across all namespaces. Setting up a user's ClusterRoleBinding is a manual operation and can be done by another Administrator. Please be aware that triggermesh-admin-binding grants the user the very wide access rights of the Cluster Admin role.
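A minimal sketch of adding a user's default service account as a subject, assuming the user's namespace is <USER_NAMESPACE> and the binding already exists:

kubectl patch clusterrolebinding triggermesh-admin-binding --type json \
  -p '[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "default", "namespace": "<USER_NAMESPACE>"}}]'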
