Simon Woodman (sjwoodman) — Red Hat, Newcastle, UK

Strimzi CRD Upgrade

This describes (roughly) the steps Strimzi users will need to take to move to the v1beta2 Strimzi APIs, which in turn use the v1 CRD API (Kubernetes 1.21 is expected to have removed CRD v1beta1). We refer to Strimzi 0.21.0 and 0.22.0, but the changes described under those headings would not necessarily need to ship in exactly those versions; the real requirement is that what we are calling Strimzi 0.22 is released around the same time as Kubernetes 1.21.

Strimzi 0.20

This will be the last version of Strimzi which supports Kubernetes 1.11-1.15.
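As an illustration of where users are heading (a hypothetical fragment, not taken from the Strimzi docs — the cluster name and replica counts are placeholders), a migrated Kafka custom resource would declare the v1beta2 apiVersion:

```yaml
# Hypothetical example: a Kafka custom resource after migration to the
# v1beta2 Strimzi API. Name and replica counts are placeholders.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
  zookeeper:
    replicas: 3
```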

Users

apiVersion: v1
kind: Template
metadata:
  name: apicurio-registry-kafka
message: |-
  Congratulations on deploying Apicurio Registry (Kafka Storage) into OpenShift!
  All components have been deployed and configured.
objects:
exec java -D%prod.registry.streams.topology.bootstrap.servers=192.168.0.3:9093 -D%prod.registry.streams.storage-producer.bootstrap.servers=192.168.0.3:9093 -D%prod.registry.streams.topology.security.protocol=SSL -D%prod.registry.streams.topology.ssl.truststore.location=/config/truststore.p12 -D%prod.registry.streams.topology.ssl.truststore.password=Z_pkTh9xgZovK4t34cGB2o6afT4zZg0L -D%prod.registry.streams.topology.ssl.truststore.type=PKCS12 -D%prod.registry.streams.topology.ssl.endpoint.identification.algorithm= -D%prod.registry.streams.storage-producer.security.protocol=SSL -D%prod.registry.streams.storage-producer.ssl.truststore.location=/config/truststore.p12 -D%prod.registry.streams.storage-producer.ssl.truststore.password=Z_pkTh9xgZovK4t34cGB2o6afT4zZg0L -D%prod.registry.streams.storage-producer.ssl.truststore.type=PKCS12 -D%prod.registry.streams.storage-producer.ssl.endpoint.identification.algorithm= -javaagent:/opt/agent-bond/agent-bond.jar=jmx_exporter{{9779:/opt/agent-bond/jmx_exporter_config.yml}} -
[kafka@my-connect-cluster-connect-657d9b996d-lnhnd kafka]$ ls /opt/kafka/plugins/camel/
LICENSE.txt camel-telegram-3.1.0.jar jakarta.el-3.0.2.jar netty-codec-http-4.1.45.Final.jar
NOTICE.txt camel-telegram-kafka-connector-0.2.0-SNAPSHOT.jar jakarta.el-api-3.0.3.jar netty-codec-socks-4.1.45.Final.jar
README.adoc camel-timer-3.1.0.jar jakarta.inject-2.6.1.jar netty-common-4.1.45.Final.jar
async-http-client-2.10.5.jar camel-util-3.1.0.jar jakarta.validation-api-2.0.2.jar netty-handler-4.1.45.Final.jar
async-http-client-netty-utils-2.10.5.jar camel-webhook-3.1.0.jar jakarta.ws.rs-api-2.1.6.jar netty-handler-proxy-4.1.45.Final.jar
avro-1.9.2.jar classmate
2020-05-06 08:53:00,664 WARN Received body for GET https://api.telegram.org/bot1261735740:AAHykBjDgDnDGrPxozxwhAJrWzsoKazVy_A/getUpdates?offset=685677550&limit=100&timeout=30: {"ok":true,"result":[{"update_id":685677550,
"message":{"message_id":137,"from":{"id":754102435,"is_bot":false,"first_name":"Simon","last_name":"Woodman","language_code":"en"},"chat":{"id":754102435,"first_name":"Simon","last_name":"Woodman","type":"private"},"date":1588755180,"text":"2"}}]} (org.apache.camel.component.telegram.service.TelegramServiceRestBotAPIAdapter) [Camel (camel-2) thread #5 - telegram://bots]
2020-05-06 08:53:00,691 ERROR Failed delivery for (MessageId: ID-my-connect-cluster-connect-657d9b996d-lnhnd-1588753839129-0-2 on ExchangeId: ID-my-connect-cluster-connect-657d9b996d-lnhnd-1588753839129-0-2). Exhausted after delivery attempt: 1 caught: org.apache.camel.InvalidPayloadException: No body available of type: java.io.InputStream but has value: IncomingMessage{messageId=137, date=2020-05-06T08:53:00Z, from=User{id=75
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: anyuid
    operator.strimzi.io/generation: '0'
  creationTimestamp: '2018-10-22T12:54:02Z'
  generateName: my-cluster-zookeeper-
  labels:
    controller-revision-hash: my-cluster-zookeeper-7c8b76cc8f
package io.streamzi.cloudevents.kafka.util;

import io.streamzi.cloudevents.CloudEvent;
import io.streamzi.cloudevents.impl.CloudEventImpl;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.apache.kafka.common.header.internals.RecordHeaders;
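The imports above suggest a utility that maps CloudEvent attributes to and from Kafka record headers. The sketch below illustrates one common convention for that mapping (a `ce_` header-name prefix); the class name, the prefix, and the `Map`-based header model are assumptions made so the example is self-contained — they are not the actual `io.streamzi` API.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: carry CloudEvent attributes as Kafka record headers
// named "ce_<attribute>". A Map stands in for Kafka's Headers type so the
// example compiles without the kafka-clients dependency.
public class CloudEventHeaderSketch {
    static final String PREFIX = "ce_"; // assumed naming convention

    // Store one CloudEvent attribute as a UTF-8 header value.
    static void putAttribute(Map<String, byte[]> headers, String name, String value) {
        headers.put(PREFIX + name, value.getBytes(StandardCharsets.UTF_8));
    }

    // Read an attribute back, or null if the header is absent.
    static String getAttribute(Map<String, byte[]> headers, String name) {
        byte[] v = headers.get(PREFIX + name);
        return v == null ? null : new String(v, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        Map<String, byte[]> headers = new HashMap<>();
        putAttribute(headers, "id", "event-1");
        putAttribute(headers, "type", "com.example.created");
        System.out.println(getAttribute(headers, "id"));
        System.out.println(getAttribute(headers, "type"));
    }
}
```

A real implementation would apply the same put/get logic against `org.apache.kafka.common.header.Headers` on the `ConsumerRecord` instead of a `Map`.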