@tobybellwood
Last active October 16, 2020 07:01
Local Lagoon 2 setup
nfs-server-provisioner-clusterrole.yaml:

rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - watch
  - create
  - delete
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - watch
  - update
- apiGroups:
  - storage.k8s.io
  resources:
  - storageclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - list
  - watch
  - create
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
- apiGroups:
  - extensions
  resourceNames:
  - nfs-provisioner
  resources:
  - podsecuritypolicies
  verbs:
  - use
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - list
  - watch
  - create
  - delete
  - update
  - patch

Configuring local Kubernetes for Lagoon 2

Ensure you have the necessary tools installed locally: Docker, Helm, and kubectl. kubectx is also recommended for easy context switching, and Octant or Lens for observing cluster operations.

Configure the necessary chart repos in Helm

helm repo add lagoon-charts https://uselagoon.github.io/lagoon-charts/
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx/
helm repo add gitea-charts https://dl.gitea.io/charts/

Now we will configure a cluster using KinD

cat <<EOF | kind create cluster --name=lagoon-test --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.192.168.96.2.nip.io:32443".tls]
    insecure_skip_verify = true
EOF

Note that the IP address 192.168.96.2 may instead need to be 172.18.0.2, depending on your local Docker network setup.

To check which address is correct for you, run this command once the cluster is created:

kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}' --selector "kubernetes.io/hostname=kind-control-plane"

If necessary, delete with kind delete cluster --name lagoon-test and redo with the correct IP address.
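Since this node IP appears in several of the helm commands below, it can help to capture it once in a shell variable rather than repeating the jsonpath lookup. A minimal sketch, using a hard-coded placeholder IP for illustration:

```shell
# In practice, set this from the running cluster:
# NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')
NODE_IP=192.168.96.2   # placeholder for illustration
echo "registry.${NODE_IP}.nip.io:32443"
```

Later `--set` flags can then reference `registry.${NODE_IP}.nip.io:32443` directly.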

Configure ingress and registry for the cluster

helm upgrade \
  --install \
  --create-namespace \
  --namespace ingress-nginx \
  --wait \
  --timeout 15m \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=32080 \
  --set controller.service.nodePorts.https=32443 \
  --set controller.config.proxy-body-size=100m \
  ingress-nginx \
  ingress-nginx/ingress-nginx \
&& helm upgrade \
  --install \
  --create-namespace \
  --namespace registry \
  --wait \
  --timeout 15m \
  --set ingress.enabled=true \
  --set "ingress.hosts={registry.$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}').nip.io}" \
  --set "ingress.annotations.kubernetes\.io\/ingress\.class=nginx" \
  --set "ingress.annotations.nginx\.ingress\.kubernetes\.io\/proxy-body-size=0" \
  registry \
  stable/docker-registry
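The ingress hostname built above relies on nip.io wildcard DNS: any name of the form `<label>.<IP>.nip.io` resolves to the IP embedded in the name, so no local DNS configuration is needed. The embedded address can be seen by stripping the surrounding labels in plain shell:

```shell
host=registry.192.168.96.2.nip.io   # example hostname as built above
ip=${host#registry.}                # drop the leading label
ip=${ip%.nip.io}                    # drop the nip.io suffix
echo "$ip"                          # prints: 192.168.96.2 - the address nip.io resolves to
```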

Create an nfs-server-provisioner to provide RWX (ReadWriteMany)-emulating storage

helm install \
  --namespace default \
  --set persistence.enabled=true,persistence.size=20Gi,storageClass.name=bulk \
  nfs-server \
  stable/nfs-server-provisioner 

and then patch the provisioner's ClusterRole to add the get/list/watch permissions on the nodes resource, using the rules YAML above saved as nfs-server-provisioner-clusterrole.yaml:

kubectl patch clusterrole nfs-server-nfs-server-provisioner --patch "$(cat nfs-server-provisioner-clusterrole.yaml)"
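With the provisioner in place, workloads can request ReadWriteMany volumes through the bulk storage class created above. A minimal, hypothetical PVC (the name shared-files is illustrative only) would look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files       # hypothetical example name
spec:
  accessModes:
  - ReadWriteMany          # the NFS provisioner emulates RWX on local storage
  storageClassName: bulk   # the class created by the helm install above
  resources:
    requests:
      storage: 1Gi
```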

Installing Lagoon 2 into the cluster just built

Ensure you've got the correct cluster selected (if you have multiple) - use kubectx to ensure that lagoon-test is the current context.

Clone the repo https://github.com/uselagoon/lagoon-charts to your local machine, and cd to it.

Configure and install the lagoon-core chart, using the pr-2198 image tag to provide Gitea support in the webhook handlers. We also set some sensible local-development defaults from the linter-values.yaml file. This installs Lagoon 2 into the lagoon namespace of the lagoon-test cluster.

helm upgrade \
  --install \
  --create-namespace \
  --namespace lagoon \
  --wait \
  --timeout 15m \
  --values ./charts/lagoon-core/ci/linter-values.yaml \
  --set autoIdler.enabled=false \
  --set drushAlias.enabled=true \
  --set logs2email.enabled=false \
  --set logs2microsoftteams.enabled=false \
  --set logs2rocketchat.enabled=false \
  --set logs2slack.enabled=false \
  --set logsDBCurator.enabled=false \
  --set "registry=registry.$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}').nip.io:32443" \
  --set "lagoonAPIURL=http://localhost:7070/graphql" \
  --set "keycloakAPIURL=http://localhost:8080/auth" \
  --set "webhooks2tasks.image.tag=pr-2198" \
  --set "webhookHandler.image.tag=pr-2198" \
  --set storageCalculator.enabled=false \
  --set sshPortal.enabled=false \
  lagoon-core \
  ./charts/lagoon-core

The helm install process will generate a lagoonadmin user and a matching password - take note of it!

You can now set lagoon as the default namespace in your kubeconfig with kubens, if you'd like.

We configure lagoon-remote to install into the same namespace (for convenience)

helm upgrade \
  --install \
  --create-namespace \
  --namespace lagoon \
  --wait \
  --timeout 15m \
  --values ./charts/lagoon-remote/ci/linter-values.yaml \
  --set "rabbitMQPassword=$(kubectl -n lagoon get secret lagoon-core-broker -o json | jq -r '.data.RABBITMQ_PASSWORD | @base64d')" \
  --set "dockerHost.registry=registry.$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}').nip.io:32443" \
  lagoon-remote \
  ./charts/lagoon-remote
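The rabbitMQPassword lookup above works because Kubernetes stores secret data base64-encoded; jq's @base64d filter decodes it on the way out. The same round-trip can be sketched with a dummy value and the coreutils base64 tool:

```shell
# Encode a dummy password the way Kubernetes stores secret data...
encoded=$(printf 'dummy-password' | base64)
# ...then decode it back, as jq's @base64d does in the command above.
printf '%s' "$encoded" | base64 -d
echo
```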

Configure lagoon-test into the same namespace as well, here with the pr-2215 image override to provide Drush support for Drupal

helm upgrade \
  --install \
  --create-namespace \
  --namespace lagoon \
  --wait \
  --timeout 15m \
  --set "ingressIP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')" \
  --set "keycloakAuthServerClientSecret=$(kubectl -n lagoon get secret lagoon-core-keycloak -o json | jq -r '.data.KEYCLOAK_AUTH_SERVER_CLIENT_SECRET | @base64d')" \
  --set "routeSuffixHTTP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}').nip.io" \
  --set "routeSuffixHTTPS=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}').nip.io" \
  --set "token=$(kubectl -n lagoon get secret -o json | jq -r '.items[] | select(.metadata.name | match("lagoon-build-deploy-token")) | .data.token | @base64d')" \
  --set "tests.image.tag=pr-2215" \
  lagoon-test \
  ./charts/lagoon-test

Configure an additional Git server locally - not strictly necessary, but useful for visualising the entire process

Add and configure a Gitea instance in the Kubernetes cluster, in its own namespace, with a couple of config settings.

helm upgrade \
  --install \
  --create-namespace \
  --namespace gitea \
  --set "gitea.config.server.SSH_DOMAIN=gitea-ssh.gitea.svc.cluster.local" \
  --set "gitea.config.server.ROOT_URL=http://localhost:10080" \
  --set "ingress.hosts=localhost:10080" \
  gitea \
  gitea-charts/gitea 

In order to interact with the Lagoon install, forward the necessary ports from your local machine:

kubectl --namespace lagoon port-forward svc/lagoon-core-keycloak 8080 &
kubectl --namespace lagoon port-forward svc/lagoon-core-api 7070:80 &
kubectl --namespace lagoon port-forward svc/lagoon-core-ui 6060:3000 &
kubectl --namespace lagoon port-forward svc/lagoon-core-ssh 2020 &
kubectl --namespace lagoon port-forward svc/lagoon-core-api-db 3306 &
kubectl --namespace gitea port-forward gitea-0 10080:3000 &
kubectl --namespace gitea port-forward gitea-0 10022:22 &
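Because each port-forward runs as a background job, they can all be stopped from the same shell via its job list. A sketch, with dummy sleep jobs standing in for the kubectl processes:

```shell
# Two dummy background jobs standing in for the port-forwards above.
sleep 300 &
sleep 300 &
# Terminate every background job of the current shell in one go.
kill $(jobs -p)
wait 2>/dev/null
echo "all background jobs stopped"
```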

Congratulations, you've got Lagoon 2 up and running locally. There's still a bit of configuration to do.
