OpenShift Serverless & Knative Demo Scripts

Installation

Key points:

  • Installation and multitenancy
    • The OpenShift Serverless and Camel-K Operators can be installed in a multitenant mode, but
    • The Control Plane (i.e. KnativeServing, KnativeEventing, KnativeKafka, and IntegrationPlatform) needs to be installed in a designated namespace/project (e.g. knative-serving, knative-eventing), or in an application namespace for IntegrationPlatform (e.g. demo)
    • Once the Control Plane is installed, the corresponding OpenShift console menu item (e.g. Serverless) and the Kamel EventSource become visible.
    • Operator-based installation (use the Camel-K community operator); an OLM Subscription sketch follows this list
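
A hedged sketch of installing the OpenShift Serverless Operator through OLM instead of the web console (assumes the openshift-serverless namespace and a matching OperatorGroup already exist; the channel name can vary by cluster version):

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless
spec:
  channel: stable
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace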

Knative Serving

Key points:

  • Concepts: ksvc, revision, tag, route, configuration
  • Simple service creation
  • Autoscaling / Elasticity
  • Rollout / Traffic Control (Blue-Green deployment, canary release, A/B testing)
    • Tagging
    • Revision
    • Public vs Private service

Scripts:

# Create ksvc
kn service create --image docker.io/openshift/hello-openshift --env RESPONSE="Hello Knative Serving" hello

# Try ksvc, expected response: "Hello Knative Serving"
http $(kn service describe hello -o url)

# Now, we should see Knative and associated k8s resources; the revision name of ksvc/hello is auto-generated, e.g. "hello-00001"
oc get all

# The OCP route for ksvc/hello is created in the knative-serving-ingress namespace/project
oc -n knative-serving-ingress get route

# Before demonstrating ksvc autoscaling, update ksvc/hello with a concurrency limit. (A new revision will be created automatically)
kn service update --concurrency-limit 10 hello

# Now, send requests for 60s with 50 concurrent threads; the ksvc should autoscale to multiple pods
# You may need to adjust the concurrency based on the testing environment to trigger the autoscaling
hey -c 50 -z 60s $(kn service describe hello -ourl)
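
# In another terminal, watch the pods scale out (and back down once the load stops);
# Knative sets the serving.knative.dev/service label on every pod of the service
oc get pods -l serving.knative.dev/service=hello -w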

# By tagging a revision, OpenShift Serverless will create a new OCP Route at knative-serving-ingress
# A tagged revision offers a way to bypass traffic control from the default k-service ingress
# A revision may have multiple tags, but a tag cannot be assigned to multiple revisions
# A tag assigned to @latest is sticky
# Updating a ksvc with --tag or --traffic will not create a new revision
kn service update --tag hello-00001=init hello
kn service update --tag hello-00002=concurrency hello

kn service update --scale 1.. --env RESPONSE="Hello in blue" hello
kn service update --tag hello-00003=blue hello
kn service update --scale 1.. --env RESPONSE="Hello in green" hello
kn service update --tag hello-00004=green hello

# At the moment, all traffic goes to the 'green'
for i in {1..10}; do echo -n "$i - " && http --body $(kn service describe hello -ourl) | tail -1; done

# Switch all traffic back to 'blue', confirm by running previous command again
kn service update --traffic blue=100% hello

# Use OCP developer console to split the traffic
kn revision list
for (( i = 1; i <= 10; i++ )); do echo -n "$i - " && http --body $(kn service describe -ourl hello) | tail -1; done

# Set private ksvc
oc label ksvc hello networking.knative.dev/visibility=cluster-local

# All OCP routes associated with revision tags, including the base route, are gone
# Now, the ksvc/hello can only be accessed inside the cluster.
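
# Verify: the service URL now resolves to a cluster-local address
# (e.g. http://hello.<namespace>.svc.cluster.local)
kn service describe hello -o url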

Knative Eventing

Key Points:

  • Concepts: source, sink, channel, subscription, broker, trigger, filter
  • PingSource (generating a message every minute, per the schedule below)
  • ContainerSource (generating a heartbeat message every 30 seconds)
  • KnativeKafka/KafkaChannel with RHOSAK (Managed Kafka instance)
  • Demonstrate OCP Developer Console capabilities

Source to Service provides the simplest getting-started experience with Knative Eventing. It provides a single Sink (that is, an event-receiving service) with no queuing, backpressure, or filtering. Source to Service does not support replies, which means the response from the Sink service is ignored.

Scripts:

# Create RHOSAK instances (use your Red Hat SSO account to log in)
rhoas login
rhoas kafka create
rhoas kafka list
rhoas kafka describe --id 
rhoas status

# Create default broker (InMemory) via CLI, but it can also be created
# by adding the eventing.knative.dev/injection: enabled annotation to a Trigger, or
# by labeling a namespace with eventing.knative.dev/injection: enabled
kn broker create default
kn broker list
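
# A Trigger routes events from the broker to a sink, optionally filtering on CloudEvent
# attributes. A minimal sketch (assumes the event-display service created later in this demo):
kn trigger create ping-display --broker default --filter type=dev.knative.sources.ping --sink ksvc:event-display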

# Create a topic and test RHOSAK
# Use console.redhat.com to create test topic (kafkacat-topic)
# Save configuration information into local files: bootstrap, client-id, client-secret
export BOOTSTRAP_SERVER=$(cat bootstrap)
export USER=$(cat client-id)
export PASSWORD=$(cat client-secret)
export GROUP_ID=rhosak-$RANDOM
export TOPIC=kafkacat-topic

# Optional (Test RHOSAK Setup)
## Run a local kafkacat, or
alias kafkacat="docker run -it --rm --network host --name kafkacat$RANDOM edenhill/kafkacat:1.6.0 kafkacat"
## Run kafkacat on OCP
oc run kafkacat -it --rm --restart=Never --image=edenhill/kafkacat:1.6.0 --command -- /bin/sh
alias kafkacat="oc exec kafkacat -it -- kafkacat"

# Add '-P' for producer; '-G $GROUP_ID -C' for consumer
kafkacat -t "$TOPIC" -b "$BOOTSTRAP_SERVER" \
	 -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN \
	 -X sasl.username="$USER" \
	 -X sasl.password="$PASSWORD" 
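
# Example: produce a line typed on stdin, then consume it back
# (with -G, kafkacat takes the topic as a positional argument)
kafkacat -b "$BOOTSTRAP_SERVER" -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN -X sasl.username="$USER" -X sasl.password="$PASSWORD" -t "$TOPIC" -P
kafkacat -b "$BOOTSTRAP_SERVER" -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN -X sasl.username="$USER" -X sasl.password="$PASSWORD" -G "$GROUP_ID" "$TOPIC"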

# Create KnativeKafka and KafkaChannel
# Use a KnativeKafka instance with the Kafka event source disabled
oc create -n default secret generic rhosak-plain --from-literal=tls.enabled=true --from-literal=password="$PASSWORD" --from-literal=saslType="PLAIN" --from-literal=user="$USER"

# IMPORTANT! RECREATE KnativeKafka FOR ANY CONFIGURATION CHANGE
cat <<EOF | oc apply -f -
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  namespace: knative-eventing
  name: knative-kafka
spec:
  channel:
    authSecretName: rhosak-plain
    authSecretNamespace: default
    bootstrapServers: $(cat bootstrap)
    enabled: true
  source:
    enabled: false
EOF

# Check Knative Kafka pods
oc -n knative-eventing get pods

# Create a Knative KafkaChannel; never set replicationFactor to less than 2
cat <<EOF | oc apply -f -
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: kne-kafka-channel
spec:
  numPartitions: 3
  replicationFactor: 3
EOF
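
# Verify the channel becomes Ready; its status address can be used as a subscription target
oc get kafkachannel kne-kafka-channel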

# Set up CLI- and GUI-based event monitoring services
kn service create --image quay.io/openshift-knative/knative-eventing-sources-event-display --scale 1.. event-display

# Tips & Tricks: 
#   - Disable route with --annotation serving.knative.openshift.io/disableRoute=true
#   - Use port-forwarding
kn service create --image ruromero/cloudevents-player:latest --env BROKER_URL=http://broker-ingress.knative-eventing.svc.cluster.local/$(oc project -q)/default --scale 1.. --annotation serving.knative.openshift.io/disableRoute=true event-player
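
# With the route disabled, reach the event-player UI via port-forwarding; the deployment
# name below is illustrative (Knative names it <revision>-deployment, e.g. for revision event-player-00001)
oc port-forward deployment/event-player-00001-deployment 8080:8080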

# Add PingSource connecting to a sink directly first
cat <<EOF | oc -v 9 apply -f -
apiVersion: sources.knative.dev/v1beta2
kind: PingSource
metadata:
  name: ping-source
spec:
  contentType: application/json
  data: '{"message": "Hello from PingSource!"}'
  schedule: '* * * * *'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF

# Use web console to redirect events through channel or broker
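
# CLI alternative (a sketch): repoint the PingSource at the default broker
kn source ping update ping-source --sink broker:default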

# Add ContainerSource connecting to a sink directly first 
cat <<EOF | oc apply -f -
apiVersion: sources.knative.dev/v1
kind: ContainerSource
metadata:
  name: container-source
spec:
  ceOverrides:
    extensions:
      from: container-source
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
  template:
    spec:
      containers:
        - args:
            - '--period=30'
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: 'quay.io/openshift-knative/knative-eventing-sources-heartbeats:latest'
          name: heartbeats
          resources: {}
EOF
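
# Watch the heartbeat events arriving at the sink
oc logs -f -l serving.knative.dev/service=event-display -c user-container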

Kamel/Camel-K (Function-style integration)

Key points:

  • Concepts: IntegrationPlatform (ip), Integration (it), IntegrationKit (ik), Traits, Modeline, Properties, Configuration, Resources
  • The IntegrationKit is a builder image which needs to be generated only once, when you first run a new Integration.
  • To test without a cluster, use kamel local run
  • An Integration (aka Routes) can be scaled manually with oc scale --replicas=x it yyy

Scripts:

  • Use the OCP console to create projects and install the Kamel IntegrationPlatform
  • Basic
    • The key takeaway is that Kamel is smart enough to choose different K8S resources (e.g. Deployment vs CronJob) depending on the duration of timer settings
  • Knative
    • Demonstrate Kamel with the Knative trait
    • Control the behavior of a trait with properties
    • Deploy multiple Integrations with the same code but different settings (aka properties); a kamel run sketch follows the command below
    • Add the 'event-display' for better observability (adjust the filter to show various types of cloudevent)
kn service create --image quay.io/openshift-knative/knative-eventing-sources-event-display --scale 1.. event-display
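
A hedged sketch of such runs (the file name Basic.java and the timer property are illustrative, not from the demo repo): the same source is deployed twice with different properties; with the cron trait, a timer period on a whole-minute boundary may be materialized as a CronJob rather than a Deployment.

kamel run Basic.java --name basic-fast -p timer.period=5000
kamel run Basic.java --name basic-slow -p timer.period=60000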

Kamelet (Distilled view of a complex system)

Key points:

  • Concepts: knative (Camel component), kamelet (Camel component), Kamelet (K8S CR), KameletBinding (K8S CR)
  • A Kamelet defines a reusable building block in YAML, which is translated into a Camel routeTemplate.
  • A RouteTemplate/Kamelet can be defined in two ways:
    • With a DSL, e.g. Java, routeTemplate()
    • With a Kubernetes CR, kind: Kamelet (of course, it uses the Camel YAML DSL)
  • A materialized Route/Integration using a Kamelet can be created in two ways:
    • In a DSL (e.g. Java), use the kamelet component, e.g. from(kamelet:myTemplate?)
    • With a Kubernetes CR, kind: KameletBinding
  • Three types of RouteTemplate/Kamelet (a minimal source Kamelet sketch follows this list):
    • A source is an Event/Message generator, and must end with to.uri: "kamelet:sink"
    • A sink is an Event/Message consumer, and must start with from.uri: "kamelet:source"
    • An action is an Event/Message processor, and must start with from.uri: "kamelet:source"
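
A minimal source Kamelet sketch (the name and timer settings are illustrative): it generates a fixed message every second and, being a source, ends with kamelet:sink.

apiVersion: camel.apache.org/v1alpha1
kind: Kamelet
metadata:
  name: tick-source
spec:
  template:
    from:
      uri: timer:tick?period=1000
      steps:
        - set-body:
            constant: tick
        - to: kamelet:sink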

The demo scenario is described here. When creating the KameletBinding, it is better to use the CLI, but it is convenient to switch the destination sink via the Developer Console.

Scripts:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/postgresql
export POSTGRES_PASSWORD=$(oc get secret --namespace default my-release-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
oc run my-release-postgresql-client -it --rm --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.12.0-debian-10-r38 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host my-release-postgresql -U postgres -d postgres -p 5432
CREATE TABLE accounts (
user_id serial PRIMARY KEY,
username VARCHAR ( 50 ) UNIQUE NOT NULL,
city VARCHAR ( 50 ) NOT NULL
);
INSERT into accounts (username,city) VALUES ('andrea', 'Roma');
INSERT into accounts (username,city) VALUES ('John', 'New York');
SELECT * FROM accounts;

# Optional
wget https://raw.githubusercontent.com/apache/camel-kamelets/main/postgresql-source.kamelet.yaml

wget https://raw.githubusercontent.com/apache/camel-k-examples/main/kamelets/postgresql-to-log/log-sink.kamelet.yaml

# Update db password after download
wget https://raw.githubusercontent.com/apache/camel-k-examples/main/kamelets/postgresql-to-log/flow-binding.yaml

# Apply all YAMLs, except postgresql-source as it is preinstalled by Camel-K **community** Operator
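
For reference, a minimal KameletBinding sketch along the lines of the downloaded flow-binding.yaml (values are illustrative; substitute the password from the Helm install above):

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: postgresql-to-log
spec:
  source:
    ref:
      apiVersion: camel.apache.org/v1alpha1
      kind: Kamelet
      name: postgresql-source
    properties:
      serverName: my-release-postgresql
      username: postgres
      password: <POSTGRES_PASSWORD>
      databaseName: postgres
      query: SELECT * FROM accounts
  sink:
    ref:
      apiVersion: camel.apache.org/v1alpha1
      kind: Kamelet
      name: log-sink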

# Strimzi Kafka cluster definition
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  labels:
    app.kubernetes.io/managed-by: operator
  name: sample
spec:
  entityOperator:
    topicOperator: {}
    userOperator: {}
  kafka:
    config:
      default.replication.factor: 3
      offsets.topic.replication.factor: 3
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    listeners:
      - name: plain
        port: 9092
        tls: false
        type: internal
      - name: tls
        port: 9093
        tls: true
        type: internal
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    replicas: 3
    resources:
      limits:
        memory: 2Gi
      requests:
        memory: 2Gi
    storage:
      deleteClaim: true
      size: 5Gi
      type: persistent-claim
    template:
      statefulset:
        metadata:
          labels:
            app.kubernetes.io/component: kafka
            app.kubernetes.io/part-of: kafka
  kafkaExporter:
    groupRegex: ".*"
    topicRegex: ".*"
  zookeeper:
    livenessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    readinessProbe:
      initialDelaySeconds: 15
      timeoutSeconds: 5
    replicas: 3
    storage:
      deleteClaim: false
      size: 1Gi
      type: persistent-claim
    template:
      statefulset:
        metadata:
          labels:
            app.kubernetes.io/component: zookeeper
            app.kubernetes.io/part-of: kafka