# Source: https://gist.github.com/dc4ba562328c1d088047884026371f1f
###########################################################
# Using Knative To Deploy And Manage Serverless Workloads #
###########################################################
######################
# Installing Knative #
######################
# GKE (gke-simple.sh): https://gist.github.com/ebe4ad31d756b009b2e6544218c712e4
# EKS (eks-simple.sh): https://gist.github.com/8ef7f6cb24001e240432cd6a82a515fd
# AKS (aks-simple.sh): https://gist.github.com/f3e6575dcefcee039bb6cef6509f3fdc
kubectl apply \
--filename https://github.com/knative/serving/releases/download/knative-v1.8.1/serving-crds.yaml
kubectl apply \
--filename https://github.com/knative/serving/releases/download/knative-v1.8.1/serving-core.yaml
kubectl --namespace knative-serving get pods
git clone https://github.com/vfarcic/devops-catalog-code.git
cd devops-catalog-code
git pull
cd knative/istio
kubectl apply \
--filename https://github.com/knative/net-istio/releases/download/knative-v1.8.1/istio.yaml \
--selector knative.dev/crd-install=true
kubectl apply \
--filename https://github.com/knative/net-istio/releases/download/knative-v1.8.1/istio.yaml
kubectl apply \
--filename https://github.com/knative/net-istio/releases/download/knative-v1.8.1/net-istio.yaml
kubectl --namespace istio-system get pods
kubectl label namespace knative-serving istio-injection=enabled
cat peer-auth.yaml
kubectl apply --filename peer-auth.yaml
# Only if GKE or AKS
export INGRESS_IP=$(kubectl --namespace istio-system \
get service istio-ingressgateway \
--output jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Only if GKE or AKS
export INGRESS_HOST=$INGRESS_IP.nip.io
# Only if EKS
export INGRESS_HOST=$(kubectl --namespace istio-system \
get service istio-ingressgateway \
--output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
kubectl --namespace knative-serving get configmap config-domain \
--output yaml
echo "apiVersion: v1
kind: ConfigMap
metadata:
name: config-domain
namespace: knative-serving
data:
$INGRESS_HOST: \"\"
" | kubectl apply --filename -
kubectl --namespace knative-serving get pods
############################
# Painting The Big Picture #
############################
kubectl create namespace production
kubectl label namespace production istio-injection=enabled
kn service create devops-toolkit --namespace production \
--image vfarcic/devops-toolkit-series --port 80
kubectl --namespace production get routes
# Only if EKS
curl -H "Host: devops-toolkit.production.example.com" \
http://$INGRESS_HOST
# Only if GKE or AKS
open http://devops-toolkit.production.$INGRESS_HOST
kubectl --namespace production \
get pods
# Only if EKS
curl -H "Host: devops-toolkit.production.example.com" \
http://$INGRESS_HOST
# Only if GKE or AKS
open http://devops-toolkit.production.$INGRESS_HOST
kn service delete devops-toolkit \
--namespace production
#########################################
# Defining Knative Applications As Code #
#########################################
cat devops-toolkit.yaml
kubectl --namespace production apply \
--filename devops-toolkit.yaml
# Only if EKS
curl -H "Host: devops-toolkit.production.example.com" \
http://$INGRESS_HOST
# Only if GKE or AKS
open http://devops-toolkit.production.$INGRESS_HOST
kubectl --namespace production \
get kservice
kubectl --namespace production \
get configuration
kubectl --namespace production \
get revisions
kubectl --namespace production \
get deployments
kubectl --namespace production \
get services,virtualservices
kubectl --namespace production \
get podautoscalers
kubectl --namespace production \
get routes
# Only if EKS
kubectl run siege \
--image yokogawa/siege \
-it --rm \
-- --concurrent 500 --time 60S \
--header "Host: devops-toolkit.production.example.com" \
"http://$INGRESS_HOST" \
&& kubectl --namespace production \
get pods
# Only if GKE or AKS
kubectl run siege \
--image yokogawa/siege \
-it --rm \
-- --concurrent 500 --time 60S \
"http://devops-toolkit.production.$INGRESS_HOST" \
&& kubectl --namespace production \
get pods
kubectl --namespace production \
get pods
############################
# Destroying The Resources #
############################
kubectl --namespace production delete \
--filename devops-toolkit.yaml
kubectl delete namespace production
cd ../../../
# Only if EKS
kubectl --namespace istio-system \
delete service istio-ingressgateway
tuxerrante commented Feb 17, 2021

Hi,
I get this error at line 84, installing on a CentOS VM running minikube v1.17:

[alex@centos7 echo]$ echo "apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  $INGRESS_HOST: | 
" | kubectl apply --filename -
The ConfigMap "config-domain" is invalid: data[192.168.49.2:31969]: Invalid value: "192.168.49.2:31969": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name',  or 'KEY_NAME',  or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+')

Is it possible to configure the DNS for minikube as well, like in point 4 of the installation docs (https://knative.dev/docs/install/any-kubernetes-cluster/)?
I was trying something like

istio_ingress_name=$(kubectl --namespace istio-system get service istio-ingressgateway --no-headers --output=jsonpath={.metadata.name})

# istio service DNS CNAME
# https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records
istio_ingress_dns="${istio_ingress_name}.svc.$(hostname)"

But I'm not sure how to proceed with configuring the CNAME in the ConfigMap.
Thanks

vfarcic commented Feb 17, 2021 via email

tuxerrante commented Feb 17, 2021

Very interesting, xip.io :)
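I guess the equivalent here would be something like this (my sketch, untested, assuming the config-domain key only needs the domain, without the port):

# Assumption (mine): use the minikube IP with xip.io as the Knative domain
export INGRESS_DOMAIN=$(minikube ip).xip.io
kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch "{\"data\":{\"$INGRESS_DOMAIN\":\"\"}}"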
Maybe I've discovered another way through nslookup:

## Test DNS 
kubectl run dnsutils --image=tutum/dnsutils --command -- sleep -- 3600

kubectl exec -it dnsutils -- nslookup $INGRESS_IP
# 192-168-49-2.kubernetes.default.svc.cluster.local

kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"192-168-49-2.kubernetes.default.svc.cluster.local":""}}'

vfarcic commented Feb 18, 2021

I think that should work, but I cannot confirm since I haven't (yet) tried it myself.

tuxerrante commented Feb 18, 2021 via email

vfarcic commented Feb 19, 2021

That's strange... I saw that the image exists, so that part is fine. Is it possible that you have a firewall or something similar that prevents your cluster from reaching Docker Hub? Is it creating Pods? If it is, can you paste the output of kubectl describe pod ___?

tuxerrante commented Feb 19, 2021

The VM firewall is disabled.
There is another firewall at the Aruba cloud provider, but that should only block suspicious ingress traffic, like port scans.

I've put some info here: https://gist.github.com/tuxerrante/3b23b75642d1778a21903e309e6fa1c7
The only new info I notice is:
86s Warning InternalError route/my-echo failed to remove route annotation to /, Kind= "my-echo": configurations.serving.knative.dev "my-echo" not found

EDIT:
From the minikube dashboard I just noticed the following in the events of istio-ingressgateway:

failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
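A possible fix, though this is just a guess on my part, assuming the error comes from minikube's metrics-server addon being disabled:

# Assumption: metrics.k8s.io is unavailable because the metrics-server addon is off
minikube addons enable metrics-server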

Thanks again

vfarcic commented Feb 22, 2021

I saw that the image you're using is public. Can you push the manifests into a GitHub repo and write the commands that would allow me to reproduce it?

tuxerrante commented Feb 22, 2021

Hi,
luckily I have all the commands saved in a note :)
I also saw from other tutorials that the commands to install Knative vary quite a bit; for example, in the official guide there is no explicit installation of the Build module, and here they replace all the LoadBalancer Services with NodePort: https://github.com/k8spatterns/examples/blob/master/INSTALL.adoc#knative

########################################################
# Resources:
# - Hands-on tutorials for Knative Build, Serving and Eventing: https://play.instruqt.com/public/topics/knative
# - https://github.com/k8spatterns/examples/blob/master/INSTALL.adoc#knative
# - https://github.com/knative/serving
# - A tutorial with a Java app:	https://knative.dev/docs/serving/samples/hello-world/helloworld-java-spring/index.html
# - https://gist.github.com/vfarcic/dc4ba562328c1d088047884026371f1f
########################################################
########################################################
# MINIKUBE

minikube start --memory=8192 --cpus=10 \
  --vm-driver=docker \
  --disk-size=30g \
  --extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook" \
  --addons metrics-server  

########################################################
########################################################	

git clone git@github.com:tuxerrante/echo.git

docker login
docker build --rm -t alessandroaffinito/echo:0.1 .
docker push alessandroaffinito/echo:0.1

### Run some test
docker run -p 8080:8080 -d --name my-echo --rm  alessandroaffinito/echo:0.1  -listen=:8080 -text="hello world"

curl http://$(minikube ip):8080

docker stop my-echo


########################################################
# KNATIVE + ISTIO
########################################################
# https://knative.dev/docs/install/any-kubernetes-cluster/

### INSTALL KNATIVE SERVING
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.20.0/serving-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.20.0/serving-core.yaml

kubectl get -n knative-serving pods
# NAME                          READY   STATUS              RESTARTS   AGE
# activator-85cd6f6f9-lxmz2     0/1     ContainerCreating   0          75s
# autoscaler-7959969587-hbdt9   1/1     Running             0          74s
# controller-577558f799-8tqkc   0/1     ContainerCreating   0          74s
# webhook-78f446786-fvxxg       0/1     ContainerCreating   0          74s


#### INSTALL KN CLI
curl -LO https://github.com/knative/client/releases/download/v0.20.0/kn-linux-amd64
chmod +x kn-linux-amd64 
sudo mv kn-linux-amd64 /usr/bin/kn
kn version

#### Install istioctl
curl -L https://istio.io/downloadIstio | sh -
istio_dir_name=$(ls | grep istio)
sudo mv $istio_dir_name/bin/istioctl /usr/bin

istioctl x precheck


## Installing Istio with sidecar injection 
## https://knative.dev/docs/install/installing-istio/#installing-istio-without-sidecar-injection
cat << EOF > ./istio-minimal-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        autoInject: enabled
      useMCP: false
      # The third-party-jwt is not enabled on all k8s.
      # See: https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens
      jwtPolicy: first-party-jwt

  addonComponents:
    pilot:
      enabled: true

  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
EOF

istioctl install -f istio-minimal-operator.yaml

kubectl label namespace default istio-injection=enabled

## Check
kubectl get pods --namespace istio-system

## Install the Knative Istio controller:
kubectl apply --filename https://github.com/knative/net-istio/releases/download/v0.20.0/release.yaml

## Fetch the CNAME:
# kubectl --namespace istio-system get service istio-ingressgateway
ISTIO_INGRESS_NAME=$(kubectl --namespace istio-system get service istio-ingressgateway --no-headers --output=jsonpath={.metadata.name})

# "Normal" (not headless) Services are assigned a DNS A or AAAA record, depending on the IP family of the service, for a name of the form my-svc.my-namespace.svc.cluster-domain.example. This resolves to the cluster IP of the Service.
ISTIO_INGRESS_CNAME="${ISTIO_INGRESS_NAME}.svc.$(hostname)"

# Some useful var for MINIKUBE
export INGRESS_IP=$(minikube ip)
export INGRESS_PORT=$(kubectl \
    --namespace istio-system \
    get service istio-ingressgateway \
    --output jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export INGRESS_HOST=$INGRESS_IP:$INGRESS_PORT


## Test DNS 
kubectl run dnsutils --image=tutum/dnsutils --command -- sleep -- 3600

kubectl exec -it dnsutils -- nslookup $INGRESS_IP
# > 192-168-49-2.kubernetes.default.svc.cluster.local
# 	put this record in the configmap

kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"192-168-49-2.kubernetes.default.svc.cluster.local":""}}'

kubectl get pods --namespace knative-serving

########################################################
#### DEPLOYING THE APP 

kn service create my-echo --image=docker.io/alessandroaffinito/echo:0.1 --env MY_API_URI="$INGRESS_HOST"

kubectl get ksvc my-echo --output=custom-columns=NAME:.metadata.name,URL:.status.url

vfarcic commented Feb 22, 2021

they replace all the LoadBalancer with NodePort

That really depends on the k8s provider. LoadBalancer Services are a superset of NodePort Services, so using the former gives you everything the latter offers, plus the ability to create an external LB. Now, some k8s clusters cannot create an external LB, so using NodePort avoids seeing errors in Service events.
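On a cluster that cannot provision an external LB, switching the Istio gateway to NodePort might look like this (a sketch, assuming the istio-ingressgateway Service from this gist; not something I ran here):

# Assumption: reaching the gateway through a node port is acceptable
kubectl --namespace istio-system \
    patch service istio-ingressgateway \
    --type merge \
    --patch '{"spec":{"type":"NodePort"}}'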

I'll try to replicate the issue you're experiencing as soon as possible, but I cannot say exactly when; it all depends on how many meetings and tasks others put on my agenda. In the worst-case scenario, I will get to it over the weekend. I hope that's not too late.

@tuxerrante

This is not urgent for me, please take your time 👍

vfarcic commented Mar 1, 2021

When I look at the logs of the Pod Knative created, I see "Missing -text option!". The kn service never starts because the Pods are throwing errors. Can you add -text and see whether that solves the issue?

@tuxerrante

Sure, where should I put it?

vfarcic commented Mar 2, 2021

I'm not sure. I've never used that image or whatever is inside it. I just saw in the container logs that the process failed to start because of that option. My best guess, without looking at the code behind that image, is that you need to change the init command or, maybe, pass it as an env var.
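If the image accepts the same flags as your earlier docker run test, something like this might work (a sketch on my part, not tested against that image):

# Assumption: the container parses -listen/-text like in the docker run example above;
# the --arg=... form avoids the leading dash being read as a kn flag
kn service update my-echo \
    --arg=-listen=:8080 \
    --arg='-text=hello world'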
