title: Consume and produce data to Apache Kafka using CLI
date: 2021-03-21
tags: kafka, consume, produce, kafka-cli
author: Adil
summary: In this series of articles, we will look at the different methods we can use to produce data to a topic and to consume it. We will start by setting up a local environment using Docker and docker-compose. Once the Kafka ecosystem is ready, we will create a topic, then produce some data and consume it via the CLI.

Requirements

Introduction

In this series of articles, we will look at the different methods we can use to produce data to a topic and to consume it. We will start by setting up a local environment using Docker and docker-compose. Once the Kafka ecosystem is ready, we will create a topic, then produce some data and consume it via the CLI.

Local environment

Using the following docker-compose file, we will be able to start a local environment that contains a broker and a ZooKeeper instance.

docker-compose.yml

https://gist.github.com/87c5814157a697e19162af7589af8072
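The linked gist is not reproduced here; as a rough sketch, a minimal single-broker setup based on the Confluent images could look like the following (the image versions, ports and environment values are assumptions, not the exact content of the gist):

```yaml
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  broker:
    image: confluentinc/cp-kafka:6.1.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Single listener exposed on localhost:9092, matching the CLI commands below
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      # Only one broker, so internal topics cannot be replicated more than once
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```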

Start the local environment by executing this command:

https://gist.github.com/446227fe77b5a4e94d7063772ab40b95
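The command in the gist essentially brings the stack up in the background, for example:

```bash
# Start ZooKeeper and the broker in detached mode
docker-compose up -d
```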

Check if the environment is ready:

https://gist.github.com/c76409e6f85dedd8408640d44c1b6e4f
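A simple way to check this (the exact command in the gist may differ) is to list the running containers and look at the broker logs:

```bash
# Both containers should be listed as "Up"
docker-compose ps

# The broker is ready once it logs that it has started
docker-compose logs broker | grep -i "started"
```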

Setup kafka CLI

To interact with the Kafka broker, Apache Kafka provides a set of client CLI tools:

https://gist.github.com/3ae545ce7b122afa7c142173a0a12a5a

Add this export to your shell profile; it will allow you to execute the binaries from any location on the system.
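As a sketch (the Kafka version and install path are assumptions, the gist may use different ones), the setup amounts to downloading the Apache Kafka distribution, extracting it, and putting its bin directory on the PATH:

```bash
# Download and extract the Apache Kafka distribution (version is an example)
curl -O https://archive.apache.org/dist/kafka/2.7.0/kafka_2.13-2.7.0.tgz
tar -xzf kafka_2.13-2.7.0.tgz

# Add the CLI scripts to the PATH (put this line in your shell profile)
export PATH="$PATH:$(pwd)/kafka_2.13-2.7.0/bin"
```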

Create the topic

To create the topic, we will use the kafka-topics.sh command and set the required parameters (a full command sketch follows the parameter list):

https://gist.github.com/daffea8989f5dec36be585b2b2ddc824

  • localhost:9092 - the broker address
  • newTopic - the topic name
  • 3 - the number of partitions for the topic
  • 1 - the replication factor for the topic; in our environment we have only one broker.
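Putting the parameters above together, the creation command looks like this (the topic name is taken from the parameter list; the linked gist may format it differently):

```bash
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic newTopic \
  --partitions 3 \
  --replication-factor 1
```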

To check that the topic has been created, we will use the previous command with the --list option.

https://gist.github.com/e138a494b0759ffa24866a372c198088
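For reference, the listing variant of the command:

```bash
# Should print newTopic among the existing topics
kafka-topics.sh --list --bootstrap-server localhost:9092
```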

To get more details about the topic that has been created, we can pass the --describe option to kafka-topics.sh.

https://gist.github.com/91da843c97962303023acb7fa777d2b5
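For reference, the describe variant:

```bash
# Prints the partition count, replication factor, leader and replicas per partition
kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic newTopic
```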

The output of the command gives us the number of partitions and the replicas. In our case we are using only one broker instance, which is why the number of replicas is 1.

In a future article we can discuss a setup that contains a cluster with multiple brokers.

Produce the data

Now that our topic is created in the broker and we have described its configuration, we can start producing messages.

In order to send messages to our topic we will use kafka-console-producer.sh. An interactive prompt will be shown and we can start writing our messages.

To confirm sending a message, we hit ENTER and continue with the next one.

Once we finish sending, we can quit the process using CTRL+C.

https://gist.github.com/cc09c27753f143e815cf4e3f77832093
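A typical invocation (the gist may include extra options) looks like this:

```bash
# Opens an interactive prompt; each line followed by ENTER becomes one message
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic newTopic
```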

DETAIL: We could have started producing data without creating the topic. So why did we take the time to create the topic before sending the messages? In production environments, topic auto-creation is usually disabled: organizations prefer to control and approve the creation of topics. If we want this behavior in our setup, we can set the following environment variable in our docker-compose.yml:

https://gist.github.com/534b00ee72cbcd71713166658bf842c0
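With the Confluent images (an assumption about the image in use, to be adapted to your compose file), the broker setting auto.create.topics.enable maps to an environment variable like this:

```yaml
  broker:
    environment:
      # Disable automatic topic creation on the broker
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
```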

To make the production of data more interesting, we can use the Meetup Streaming API. The following endpoint streams open events from the Meetup API:

https://gist.github.com/e8d55aa3d1dcb8c41b202e9e91739036

We need to execute this command to have a continuous flow from the Meetup API:

https://gist.github.com/6f9465ce139cbd9559210160d3eeb058

We execute a GET request via curl on the Meetup API and pipe the result to jq to map the output and keep the following fields (a sketch of the full pipeline is shown after the list):

  • id
  • event_url
  • name
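Putting it together, the pipeline could look like this (the Meetup endpoint and the exact jq filter are assumptions based on the description above, not the literal content of the gist):

```bash
# Stream open events from the Meetup API, keep only a few fields,
# and pipe each resulting JSON object into the topic as one message
curl -s https://stream.meetup.com/2/open_events \
  | jq -c '{id: .id, event_url: .event_url, name: .name}' \
  | kafka-console-producer.sh --bootstrap-server localhost:9092 --topic newTopic
```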

For each message produced to the topic, a > will be printed to the terminal. We keep this command running in a tab to feed our topic. We will end up with JSON objects in our topic that follow this structure:

https://gist.github.com/74003c77c84aa9fea64f1fc12429b15a

Consume the data

In this section we will spawn a new terminal to consume the data that has been produced previously. To do so, we will need the kafka-console-consumer.sh binary and the required parameters to start the consumption.

https://gist.github.com/921e776818af2b37c8893872c4957009

  • --from-beginning: start consuming the data from the first offset.
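A typical invocation (the linked gist may differ slightly):

```bash
# Read every message in the topic, starting from the earliest offset
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic newTopic \
  --from-beginning
```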

We can play with this command and consume from a specific offset of a specific partition, with a fixed number of messages before exiting.

https://gist.github.com/79a130ccb73ef0050a4e16ba79d14594

  • --offset: rewind the process to the specified offset.
  • --partition: consume from this specific partition.
  • --max-messages: total messages to consume before exiting the process.

--offset accepts an integer, earliest to start from the beginning, or latest (the default value), which means consuming from the end.
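As a sketch of such an invocation (the offset, partition and message count are arbitrary example values):

```bash
# Read 5 messages from partition 1, starting at offset 10, then exit
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic newTopic \
  --partition 1 \
  --offset 10 \
  --max-messages 5
```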

To get the status of each partition we can execute this command:

https://gist.github.com/3e35a35a4ffa5fb0ac398a190f7c9e2e
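One way to do this (the linked gist may use a different tool) is the GetOffsetShell class shipped with Kafka, which prints the latest offset of each partition:

```bash
# Prints one line per partition in the form topic:partition:latest-offset
kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 \
  --topic newTopic \
  --time -1
```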

We can clearly see that our topic has 3 partitions (0, 1 and 2) and, for each partition, the last offset that has been reached:

  • partition 0 offset 34
  • partition 1 offset 41
  • partition 2 offset 36

Conclusion

As you have seen, setting up a local Kafka environment is accessible to every developer interested in getting into the streaming world. In a few minutes, we managed to set up the cluster and start producing and consuming data.

For testing purposes, working with the CLI is fine and allows us to prototype quickly. But for production applications, the preferred way of producing/consuming data is via a programming language or a Kafka connector. In the next article we will discuss how to implement it with the Java SDK.

Stay tuned ✌!
