How to connect GetKafka to Kafka through Stunnel

Stunnel is a proxy that can make insecure network transmissions secure by wrapping them with SSL. This Gist contains examples and illustrations describing how it works and how to configure it.

Most of it is derived from this informative GitHub comment; I would not have been able to set this up without it. Thank you for sharing such a detailed example.

How does it work?

Let's see how it can be applied to NiFi GetKafka. I used two servers for this experiment: 0.server.aws.mine and 1.server.aws.mine. A single Zookeeper instance and a single Kafka broker run on 0.server. A GetKafka NiFi processor on 1.server consumes messages through Stunnel:

  1. The Kafka broker joins the Kafka cluster and advertises its address as 127.0.0.1:9092. If Zookeeper runs on a different server (recommended) and that connection needs to be secured via Stunnel as well, apply the same method as the one used between GetKafka and Zookeeper.
  2. GetKafka's Zookeeper Connection String is set to 127.0.0.1:2181, the address the local Stunnel is listening on. The local Stunnel on 1.server then proxies the request to 0.server:2181 over SSL. On 0.server, the request is proxied again by the Stunnel instance running there and finally arrives at Zookeeper (a quick connectivity check is sketched after this list).
  3. Since the Kafka broker running on 0.server advertises its address as 127.0.0.1:9092, GetKafka (the Kafka client) sends requests to 127.0.0.1:9092, and those requests are likewise transferred to the broker through the Stunnel pair.
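
A quick way to sanity-check the Zookeeper leg described in step 2 is to send Zookeeper's four-letter "ruok" command through the local Stunnel endpoint on 1.server; if the Stunnel pair is wired correctly, Zookeeper on 0.server answers "imok". This is a hypothetical spot check (assuming nc is installed), not part of the original setup:

$ echo ruok | nc 127.0.0.1 2181
# expected response: imok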

Here are the relevant configurations in 1.server's stunnel.conf (the entire file is available here):

client = yes

[zookeeper]
accept = 127.0.0.1:2181
connect = 0.server.aws.mine:2181

[kafka]
accept = 127.0.0.1:9092
connect = 0.server.aws.mine:9092

And this is for 0.server (the entire file is available here):

client = no

[zookeeper]
accept = 0.server.aws.mine:2181
connect = 127.0.0.1:2181

[kafka]
accept = 0.server.aws.mine:9092
connect = 127.0.0.1:9092

Kafka server.properties:

host.name=127.0.0.1
zookeeper.connect=127.0.0.1:2181

Zookeeper zookeeper.properties:

clientPort=2181
clientPortAddress=127.0.0.1
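
With host.name, zookeeper.connect and clientPortAddress all pointing at 127.0.0.1, the plain-text Kafka and Zookeeper ports are reachable only via the loopback interface, while the externally reachable 2181/9092 ports belong to Stunnel. One way to confirm this on 0.server (a hypothetical spot check assuming net-tools is installed; ss works similarly):

$ sudo netstat -tlnp | egrep ':(2181|9092)'
# Expect java listening on 127.0.0.1:2181 and 127.0.0.1:9092,
# and stunnel listening on 0.server's external address for the same ports.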

How to authorize client access?

Each Stunnel server has to have its own PEM file containing a private key and a certificate. A CA certificate file (or directory) is also needed to authorize client access.

  1. I used tls-toolkit.sh, which is available in the NiFi toolkit, to generate the required files. The toolkit generates three files for each server: keystore.jks, truststore.jks and nifi.properties. Each server's key and cert can be extracted from its keystore.jks. To do so, convert keystore.jks into a keystore.p12 file with the following commands (credit goes to this Stack Overflow answer):

    # It's not important which server to run the toolkit on.
    $ ./bin/tls-toolkit.sh standalone -n [0-1].server.aws.mine -C 'CN=server,OU=mine'
    
    # Password for keystore.jks can be found in generated nifi.properties 'nifi.security.keystorePasswd'.
    $ keytool -importkeystore -srckeystore keystore.jks -destkeystore keystore.p12 -srcstoretype jks -deststoretype pkcs12
  2. Then extract key and cert from the p12 file:

    $ openssl pkcs12 -in keystore.p12 -nokeys -out cert.pem
    $ openssl pkcs12 -in keystore.p12 -nodes -nocerts -out key.pem
  3. Concatenate the key and cert to create stunnel.pem, and deploy stunnel.pem to the servers (a sanity check on the extracted key and cert is sketched after this list):

    $ cat key.pem cert.pem >> stunnel.pem
  4. I used cert.pem as the CAFile for Stunnel on 0.server.
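
Before deploying stunnel.pem, it can be worth confirming that the extracted key and certificate actually belong together. The following is a hypothetical sanity check (assuming the toolkit generated an RSA key, which is its default), not part of the original steps:

$ openssl x509 -in cert.pem -noout -subject -dates
# The two digests below must match if key.pem and cert.pem are a pair.
$ openssl rsa -in key.pem -noout -modulus | openssl md5
$ openssl x509 -in cert.pem -noout -modulus | openssl md5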

In stunnel.conf on 0.server, the following settings are needed to enable client cert verification:

verify = 3
CAFile = /etc/stunnel/certs
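
According to the Stunnel manual, verify level 3 requires the client to present a certificate and accepts it only if that certificate is locally installed, i.e. present in CAFile. In other words, /etc/stunnel/certs simply holds the client certificate(s) that are allowed to connect. A minimal sketch of how the cert.pem extracted in step 2 could be installed at that path (hypothetical commands, not from the original Gist; multiple client certs can be concatenated into the same file):

$ sudo cp cert.pem /etc/stunnel/certs
$ sudo chmod 644 /etc/stunnel/certs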

Refer to the Stunnel manual for further descriptions of these configuration options.

I confirmed that GetKafka running on 1.server can consume messages through Stunnel. When I used a cert that is not configured in the certs file on 0.server, GetKafka got a timeout exception as follows:

2017-03-09 06:50:48,690 WARN [Timer-Driven Process Thread-5] o.apache.nifi.processors.kafka.GetKafka GetKafka[id=b0a21b5d-015a-1000-fbba-2648095ae625] Executor did not stop in 30 sec. Terminated.
2017-03-09 06:50:48,691 WARN [Timer-Driven Process Thread-5] o.apache.nifi.processors.kafka.GetKafka GetKafka[id=b0a21b5d-015a-1000-fbba-2648095ae625] Timed out after 60000 milliseconds while waiting to get connection
java.util.concurrent.TimeoutException: null
        at java.util.concurrent.FutureTask.get(FutureTask.java:205) [na:1.8.0_121]
        at org.apache.nifi.processors.kafka.GetKafka.onTrigger(GetKafka.java:348) ~[nifi-kafka-0-8-processors-1.1.2.jar:1.1.2]
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.2.jar:1.1.2]
        at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.2.jar:1.1.2]

Stunnel commands

# Install
sudo yum -y install stunnel
# Edit config
sudo vi /etc/stunnel/stunnel.conf
# Start
sudo stunnel
# Stop
sudo kill `cat /var/run/stunnel.pid`
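
After starting Stunnel on both servers, the tunnels can be spot-checked before pointing GetKafka at them. These are hypothetical checks (openssl s_client run from 1.server; ss from the iproute2 package), not part of the original commands:

# Confirm stunnel owns the expected listening ports
sudo ss -tlnp | grep stunnel
# Confirm the SSL endpoint on 0.server answers, presenting the client cert extracted earlier
openssl s_client -connect 0.server.aws.mine:2181 -cert cert.pem -key key.pem </dev/null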

Conclusion

Although Stunnel works with GetKafka and Kafka 0.8.x, I recommend using a newer version of Kafka and the ConsumeKafka NiFi processor with SSL if possible. As noted in the GitHub comment mentioned above, this workaround is complicated and does not scale well in terms of the required administration tasks.

For reference, here are the complete stunnel.conf files.

0.server (server side):

client = no
cert = /etc/stunnel/stunnel.pem
verify = 3
CAFile = /etc/stunnel/certs

[zookeeper]
accept = 0.server.aws.mine:2181
connect = 127.0.0.1:2181

[kafka]
accept = 0.server.aws.mine:9092
connect = 127.0.0.1:9092

1.server (client side):

cert = /etc/stunnel/stunnel.pem
# cert = /etc/stunnel/stunnel-unauthorized.pem
client = yes

[zookeeper]
accept = 127.0.0.1:2181
connect = 0.server.aws.mine:2181

[kafka]
accept = 127.0.0.1:9092
connect = 0.server.aws.mine:9092