Event Delivery Guarantees with Knative Broker

By default, the Knative Broker performs a number of retry attempts based on the response code returned by the Addressable. The entire list of retried response codes is visible here.

The default is 10 retries with exponential backoff and a delay of 0.2 seconds. With the exponential policy the delay roughly doubles on each attempt (0.2s, 0.4s, 0.8s, and so on). Details about the event-delivery knobs for the Broker can be found here.

Tweaking the broker

Below is a broker that tweaks the default settings for its needs:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka
  name: my-broker
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: matze-kafka-broker-config
  delivery:
    retry: 5                    # up to 5 redelivery attempts
    backoffPolicy: exponential  # delay doubles on each retry
    backoffDelay: PT3S          # initial delay of 3 seconds (ISO-8601 duration)
    deadLetterSink:
      ref:
        apiVersion: eventing.knative.dev/v1alpha1
        kind: KafkaSink
        name: kafka-dls

NOTE: The DeliverySpec settings configured on the Broker are applied to every Trigger that does not define its own, individual DeliverySpec configuration, like the one sketched below.
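As a minimal sketch of such an override (the Trigger name, filter, and subscriber Service are made-up placeholders), a Trigger can carry its own delivery section, which then takes precedence over the Broker-level settings:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
spec:
  broker: my-broker
  filter:
    attributes:
      type: demo
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
  delivery:
    retry: 2
    backoffPolicy: linear
    backoffDelay: PT1S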

A Kafka-based dead letter sink

The above configures a dead letter sink, which can be any Addressable; in our case we use the KafkaSink, like:

apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: kafka-dls
  namespace: default
spec:
  topic: deadletters
  bootstrapServers:
   - my-cluster-kafka-bootstrap.kafka:9092

Once the retries are exhausted and the Broker could not deliver the event for the given Trigger, the event is sent to the registered deadLetterSink.

Using the KafkaSink ensures that the CloudEvents arriving at the deadLetterSink are stored as structured CloudEvents in the given Kafka topic. (The KafkaSink can be configured to store the CloudEvents in the binary format as well.)
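A sketch of that, assuming the contentMode field of the KafkaSink and reusing the names from above, could look like:

apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: kafka-dls
  namespace: default
spec:
  topic: deadletters
  bootstrapServers:
   - my-cluster-kafka-bootstrap.kafka:9092
  contentMode: binary   # store the CloudEvent attributes as Kafka record headers instead of structured JSON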

Having the missed events in a Kafka topic has benefits, since any third-party Kafka tool can be used to process those missed events afterwards.

Kafkacat logging

To print the CloudEvents on the topic, a kafkacat query like the following can be used:

kafkacat -C -b my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092 -t deadletters   -f '\nKey (%K bytes): %k
  Value (%S bytes): %s
  Timestamp: %T
  Partition: %p
  Offset: %o
  Headers: %h\n'

In case of a structured CloudEvent, the output would look like:

Key (-1 bytes): 
  Value (188 bytes): {"specversion":"1.0","id":"4711","source":"/my/curl/command","type":"demo","datacontenttype":"application/json","smartevent":"super-duper-event-extension","data":{"message":"Hallo World"}}
  Timestamp: 1647616707249
  Partition: 96
  Offset: 0
  Headers: content-type=application/cloudevents+json
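Since the value is a plain structured CloudEvent, a missed event can also be re-submitted to the Broker once the consuming side is fixed. A hedged sketch, assuming the default Kafka Broker ingress service (kafka-broker-ingress.knative-eventing) and the my-broker Broker in the default namespace from above:

curl -v "http://kafka-broker-ingress.knative-eventing.svc.cluster.local/default/my-broker" \
  -H "content-type: application/cloudevents+json" \
  -d '{"specversion":"1.0","id":"4711","source":"/my/curl/command","type":"demo","datacontenttype":"application/json","data":{"message":"Hallo World"}}'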