@rafaeltuelho
Created November 14, 2022 14:53
RHOAS CLI commands to manage Kafka and Service Binding resources on OpenShift

Module 5.

Install rhoas on macOS with the zsh shell

curl -o- https://raw.githubusercontent.com/redhat-developer/app-services-cli/main/scripts/install.sh | bash
#make sure your $HOME/bin is included in your $PATH

#open a new terminal
rhoas --version

#add cli completion for zsh
rhoas completion zsh > "${fpath[1]}/_rhoas"
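Before logging in, it is worth confirming that the installer's target directory is actually on your PATH and that the binary resolves. A small sanity-check sketch (assumes the install script placed `rhoas` in `$HOME/bin`, as the comment above notes):

```shell
# Sanity checks after install (assumes the installer placed rhoas in $HOME/bin)
if ! printf '%s' ":$PATH:" | grep -q ":$HOME/bin:"; then
  echo 'WARNING: $HOME/bin is not on your PATH; add it in ~/.zshrc'
fi
command -v rhoas >/dev/null 2>&1 \
  && echo "rhoas found: $(command -v rhoas)" \
  || echo "rhoas not found on PATH"
```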

rhoas login
rhoas whoami
rhoas status

Create a Kafka instance

rhoas kafka create --name globex
rhoas context set-kafka --name=globex

Create your first topic

rhoas kafka topic create --name globex.tracking

Connect your OpenShift cluster to the Streams for Apache Kafka instance you created previously.

#log in to your OpenShift cluster (where your app is deployed)
oc login --token=<your dev user token> --server=https://api.cluster-<your openshift cluster domain>:6443

#make sure you are inside your app namespace
oc project globex-user1

Connect a Kafka instance to your cluster

#get your OpenShift Cluster Manager access token from https://console.redhat.com/openshift/token/show#
rhoas cluster connect --service-type kafka --service-name globex
This command will link your cluster with Cloud Services by creating custom resources and secrets.
In case of problems please execute "rhoas cluster status" to check if your cluster is properly configured

Connection Details:

Service Type:           kafka
Service Name:			globex
? Provide an offline token to be used by the Operator (to get a token, visit https://console.redhat.com/openshift/token)
 PASTE YOUR 'OpenShift Cluster Manager Access Token' HERE
✔️  Token Secret "rh-cloud-services-accesstoken" created successfully
✔️  Service Account Secret "rh-cloud-services-service-account" created successfully

Client ID:     332f90f4-3fcc-4bec-baf7-f57bd4e3c236

Make a copy of the client ID to store in a safe place. Credentials won't appear again after closing the terminal.

You will need to assign permissions to service account in order to use it.

You need to separately grant service account access to Kafka by issuing following command

  $ rhoas kafka acl grant-access --producer --consumer --service-account 332f90f4-3fcc-4bec-baf7-f57bd4e3c236 --topic all --group all

✔️  kafka resource "globex" has been created
Waiting for status from kafka resource.
Created kafka can be already injected to your application.

To bind you need to have Service Binding Operator installed:
https://github.com/redhat-developer/service-binding-operator

You can bind kafka to your application by executing "rhoas cluster bind"
or directly in the OpenShift Console topology view.

✔️  Connection to service successful.

Verify a new KafkaConnection resource was created

oc get kafkaconnection
NAME     AGE
globex   5m5s

Set permissions for the rh-cloud-services-service-account

rhoas kafka acl grant-access --producer --consumer --service-account 332f90f4-3fcc-4bec-baf7-f57bd4e3c236 --topic all --group all -y

Bind the Kafka service to the apps through the Service Binding Operator

rhoas cluster bind --namespace=globex-user1 --app-name=recommendation-engine --service-name=globex --service-type=kafka -y
rhoas cluster bind --namespace=globex-user1 --app-name=activity-tracking --service-name=globex --service-type=kafka -y
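Under the hood, `rhoas cluster bind` hands the work to the Service Binding Operator, which injects the Kafka connection details into the Deployment. As a rough sketch only, the kind of ServiceBinding resource involved might look like the following (the metadata name and the `rhoas.redhat.com` group for KafkaConnection are assumptions, not taken from this gist's output):

```yaml
# Sketch of a ServiceBinding linking a Deployment to the KafkaConnection.
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: recommendation-engine-globex   # hypothetical name
  namespace: globex-user1
spec:
  application:          # the workload to inject bindings into
    group: apps
    version: v1
    resource: deployments
    name: recommendation-engine
  services:             # the backing service providing the binding data
    - group: rhoas.redhat.com          # assumed API group for KafkaConnection
      version: v1alpha1
      kind: KafkaConnection
      name: globex
```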

Scale up both Kafka Streams apps

oc scale --replicas=1 deployment/recommendation-engine
oc scale --replicas=1 deployment/activity-tracking

Verify new topics get created in your Kafka instance

rhoas kafka topic list
  NAME (5)                                                                  PARTITIONS   RETENTION TIME (MS)   RETENTION SIZE (BYTES)
 ------------------------------------------------------------------------- ------------ --------------------- ------------------------
  globex.recommendation-KSTREAM-REDUCE-STATE-STORE-0000000003-changelog              1   604800000             -1 (Unlimited)
  globex.recommendation-KSTREAM-REDUCE-STATE-STORE-0000000003-repartition            1   -1 (Unlimited)        -1 (Unlimited)
  globex.recommendation-product-score-aggregated-changelog                           1   604800000             -1 (Unlimited)
  globex.recommendation-product-score-aggregated-repartition                         1   -1 (Unlimited)        -1 (Unlimited)
  globex.tracking                                                                    1   604800000             -1 (Unlimited)

Activity Tracking Simulation

curl -X 'POST' \
  'https://activity-tracking-simulator-globex-user1.apps.cluster-gnfvj.sandbox565.opentlc.com/simulate' \
  -H 'accept: text/plain' \
  -H 'Content-Type: application/json' \
  -d '{
  "count": 100
}'
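To exercise the pipeline with several batch sizes, the same request can be generated in a loop. A hypothetical dry-run helper (it only prints the payloads; uncomment the `curl` line to actually POST to the simulator URL above):

```shell
# Dry-run: print the simulate payloads for a few batch sizes.
SIM_URL='https://activity-tracking-simulator-globex-user1.apps.cluster-gnfvj.sandbox565.opentlc.com/simulate'
for count in 10 50 100; do
  payload=$(printf '{"count": %d}' "$count")
  echo "POST $payload"   # dry run; shows what would be sent
  # curl -s -X POST "$SIM_URL" -H 'Content-Type: application/json' -d "$payload"
done
```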

Lab 5: OpenShift Connectors

Create the orders topic

rhoas kafka topic create --name globex.orders

Bind the order-placement app to Kafka

rhoas cluster bind --namespace=globex-user1 --app-name=order-placement --service-name=globex --service-type=kafka -y

Scale up the order-placement app

oc scale --replicas=1 deployment/order-placement

Create a service account for OpenShift Connectors

rhoas service-account create --short-description='connectors' --file-format=secret --output-file=./connectors-service-acct-credentials

grant access to kafka topics

rhoas kafka acl grant-access --producer --consumer --service-account 573c757a-2b58-4f4e-85ba-30cccbad61a2 --topic all --group all -y

Create a connector namespace

rhoas connector namespace create --name mad-workshop1
# copy the .id from the json output
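Pulling the `.id` out of JSON output by eye is error-prone; a one-liner can extract it instead. This sketch demonstrates the parsing on a stand-in payload (the real field layout of the `rhoas connector namespace create` output is assumed to contain a top-level `id`, per the comment above) — in practice, pipe the actual command output instead of the `echo`:

```shell
# Extract the "id" field from JSON output (stand-in payload shown;
# pipe the real `rhoas ... -o json` output here instead of the echo)
echo '{"id": "ns-1234", "name": "mad-workshop1"}' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["id"])'
# prints ns-1234
```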

Create an HTTP sink connector config manifest

cat <<EOF > ./http-orders-sink-connectorConfig.json
{
  "name": "http-orders-sink",
  "kind": "ConnectorType",
  "channels": [
    "stable"
  ],
  "connector_type_id": "http_sink_0.1",
  "desired_state": "ready",
  "kafka": {
    "url": "YOUR KAFKA NAME.kafka.rhcloud.com:443"
  },
  "service_account": {
    "client_id": "YOUR SERVICE ACCOUNT ID",
    "client_secret": "YOUR SERVICE ACCOUNT SECRET"
  },
  "connector": {
    "data_shape": {
      "consumes": {
        "format": "application/octet-stream"
      }
    },
    "kafka_topic": "globex.orders",
    "http_url": "https://webhook.site/YOUR WEBHOOK ID",
    "error_handler": {
      "stop": {}
    }
  }
}
EOF
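Since the manifest above ships with `YOUR ...` placeholders, it is easy to submit it half-edited. A small guard sketch (shown here against a throwaway demo file; point the checks at `./http-orders-sink-connectorConfig.json` in practice):

```shell
# Guard against submitting the manifest with unreplaced placeholders.
# Demo file used here; swap in ./http-orders-sink-connectorConfig.json.
cat <<'EOF' > /tmp/connector-demo.json
{ "kafka": { "url": "YOUR KAFKA NAME.kafka.rhcloud.com:443" } }
EOF
if grep -q 'YOUR ' /tmp/connector-demo.json; then
  echo 'placeholders still present; edit the file before creating the connector'
fi
# also confirm the file is well-formed JSON
python3 -m json.tool /tmp/connector-demo.json >/dev/null && echo 'valid JSON'
```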

Copy the Kafka instance ID from the list and create the connector

rhoas kafka list
rhoas connector create --kafka your-kafka-instance-id --namespace your-name-space-id --file=./http-orders-sink-connectorConfig.json