Kafka SSL + OpenShift Routes

Secure Kafka brokers with SSL/TLS and expose them externally in OpenShift/Kubernetes via a passthrough Route.

To expose the Kafka port outside the cluster, enable SSL/TLS in the Kafka configuration, then follow the steps below.

  1. Build the image from the Dockerfile below.
  2. Generate all keys and certificates using gen.sh (below). Note: replace <YOUR_KAFKA_DOMAIN_HERE> and the passphrase (test1234 in these examples) with your own values.
  3. Create a secret to store all the generated certificates:
    oc create secret generic kafka-ssl --from-file=/absolute/path/to/generated/certs/dir
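    You can confirm that every generated file landed in the secret (an optional check; the key names depend on what gen.sh produced in your certs directory):
      # List the data keys stored in the kafka-ssl secret
      oc describe secret kafka-ssl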
  4. Update Kafka's StatefulSet to enable SSL (statefulset.yml below is an already-patched version of our template):
    • Configure the Kafka brokers (StatefulSet) to listen on the SSL port:
      # Add this to Statefulset/Deployment > containers > command
      # Remove advertised.host.name if already defined
      # Note KAFKA_ADVERTISED_HOST_NAME env is defined via downward API from podIP
      --override listeners=SSL://$KAFKA_ADVERTISED_HOST_NAME:9093 
    • Mount the kafka-ssl secret at the /var/private/ssl path of Kafka's StatefulSet (e.g. with oc set volume, as shown after this list).
    • Update containers > image to your newly built Kafka image. Note: replace <YOUR_PROJECT_NAME_HERE> in statefulset.yml.
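    A sketch of the oc set volume approach (assumes an OpenShift 3.x client and that your StatefulSet is named kafka):
      # Mount the kafka-ssl secret read-only at /var/private/ssl
      oc set volume statefulset/kafka --add --name=kafka-ssl \
        --type=secret --secret-name=kafka-ssl \
        --mount-path=/var/private/ssl --read-only=true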
  5. Create/update the kafka Service (service.yml below).
  6. Create a passthrough Route (e.g. kafka-ssl.abar.cloud) pointing to the kafka Service's port 9093, as shown below.
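    One way to do this from the CLI (the route name and hostname are illustrative; use your own domain):
      # Passthrough: the router forwards raw TLS straight to Kafka's SSL port
      oc create route passthrough kafka-ssl --service=kafka \
        --port=kafka-ssl --hostname=kafka-ssl.abar.cloud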
  7. Test the connection with Kafka's console producer/consumer utilities, using the correct paths for the certificates. You can run these inside one of the kafka-0/1/2 pods, since they already have the certificates mounted at /var/private/ssl:
    # Create client configuration file:
    cat >client-ssl.properties <<EOL
    bootstrap.servers=kafka-ssl.abar.cloud:443
    security.protocol=SSL
    ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
    ssl.truststore.password=test1234
    ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
    ssl.keystore.password=test1234
    ssl.key.password=test1234
    EOL
    
    # Run a producer, type something, then press ENTER
    ./bin/kafka-console-producer.sh --broker-list kafka-ssl.abar.cloud:443 --topic test --producer.config client-ssl.properties
    
    # Since the previous command blocks, run the command below in a separate terminal session.
    # You should see everything you type in the producer session.
    # (The deprecated --new-consumer flag is omitted; --bootstrap-server already implies the new consumer.)
    ./bin/kafka-console-consumer.sh --bootstrap-server kafka-ssl.abar.cloud:443 --topic test --consumer.config client-ssl.properties --from-beginning
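Independently of the Kafka tooling, you can sanity-check the TLS handshake through the route with openssl (assumes openssl is installed wherever you run this):

    # Inspect the certificate presented via the passthrough route;
    # the subject CN should match <YOUR_KAFKA_DOMAIN_HERE>
    openssl s_client -connect kafka-ssl.abar.cloud:443 -servername kafka-ssl.abar.cloud </dev/null 2>/dev/null | openssl x509 -noout -subject -dates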
Dockerfile

FROM centos:7

ENV JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk \
    KAFKA_VERSION=1.0.0 \
    SCALA_VERSION=2.11 \
    KAFKA_HOME=/opt/kafka

COPY fix-permissions /usr/local/bin

# Install Java and tooling, then download Kafka from the closest Apache mirror
RUN INSTALL_PKGS="gettext tar zip unzip hostname java-1.8.0-openjdk openssl" && \
    yum install -y $INSTALL_PKGS && \
    rpm -V $INSTALL_PKGS && \
    yum clean all && \
    mkdir -p $KAFKA_HOME && \
    curl -fSL $(curl -s http://www.apache.org/dyn/closer.cgi/kafka/$KAFKA_VERSION/kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz?as_json=1 | grep preferred | cut -f 4 -d \" -)/kafka/$KAFKA_VERSION/kafka_$SCALA_VERSION-$KAFKA_VERSION.tgz | tar xzf - --strip 1 -C $KAFKA_HOME/ && \
    mkdir -p $KAFKA_HOME/logs && \
    /usr/local/bin/fix-permissions $KAFKA_HOME

WORKDIR "/opt/kafka"

# Switch the broker to an SSL-only listener on port 9093
RUN sed -i "/listeners=/c listeners=SSL://:9093" config/server.properties
RUN sed -i "/listener.security.protocol.map=/c listener.security.protocol.map=SSL:SSL" config/server.properties

# Append the SSL configuration; the keystores are mounted from the kafka-ssl secret
RUN echo $'\n\
ssl.client.auth=required\n\
security.protocol=SSL\n\
security.inter.broker.protocol=SSL\n\
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks\n\
ssl.keystore.password=test1234\n\
ssl.key.password=test1234\n\
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks\n\
ssl.truststore.password=test1234\n'\
>> config/server.properties

EXPOSE 9093
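A minimal build-and-push sequence for this image (a sketch; the registry address comes from statefulset.yml below and will differ in your cluster):

    # Build the image and push it to OpenShift's internal registry
    # (you may first need: docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.150.55:5000)
    docker build -t 172.30.150.55:5000/<YOUR_PROJECT_NAME_HERE>/kafka:1.0.0 .
    docker push 172.30.150.55:5000/<YOUR_PROJECT_NAME_HERE>/kafka:1.0.0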
gen.sh

#!/bin/bash
set -e

## 1. Create certificate authority (CA)
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -passin pass:test1234 -passout pass:test1234 -subj "/CN=<YOUR_KAFKA_DOMAIN_HERE>/OU=DevOps/O=AbarCloud/L=FA/ST=Tehran/C=IR"
## 2. Create client keystore
keytool -noprompt -keystore kafka.client.keystore.jks -genkey -alias localhost -dname "CN=<YOUR_KAFKA_DOMAIN_HERE>, OU=DevOps, O=AbarCloud, L=FA, ST=Tehran, C=IR" -storepass test1234 -keypass test1234
## 3. Sign client certificate
keytool -noprompt -keystore kafka.client.keystore.jks -alias localhost -certreq -file cert-unsigned -storepass test1234
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-unsigned -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
## 4. Import CA and signed client certificate into client keystore
keytool -noprompt -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass test1234
keytool -noprompt -keystore kafka.client.keystore.jks -alias localhost -import -file cert-signed -storepass test1234
## 5. Import CA into client truststore (only for debugging with producer / consumer utilities)
keytool -noprompt -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass test1234
## 6. Import CA into server truststore
keytool -noprompt -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert -storepass test1234
## 7. Create a directory for PEM files for app clients (extracted in the sections below)
mkdir -p ssl
## 8. Create server keystore
keytool -noprompt -keystore kafka.server.keystore.jks -genkey -alias <YOUR_KAFKA_DOMAIN_HERE> -dname "CN=<YOUR_KAFKA_DOMAIN_HERE>, OU=DevOps, O=AbarCloud, L=FA, ST=Tehran, C=IR" -storepass test1234 -keypass test1234
## 9. Sign server certificate
keytool -noprompt -keystore kafka.server.keystore.jks -alias <YOUR_KAFKA_DOMAIN_HERE> -certreq -file cert-unsigned -storepass test1234
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-unsigned -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
## 10. Import CA and signed server certificate into server keystore
keytool -noprompt -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass test1234
keytool -noprompt -keystore kafka.server.keystore.jks -alias <YOUR_KAFKA_DOMAIN_HERE> -import -file cert-signed -storepass test1234
### Extract signed client certificate
keytool -noprompt -keystore kafka.client.keystore.jks -exportcert -alias localhost -rfc -storepass test1234 -file ssl/client_cert.pem
### Extract client key
keytool -noprompt -srckeystore kafka.client.keystore.jks -importkeystore -srcalias localhost -destkeystore cert_and_key.p12 -deststoretype PKCS12 -srcstorepass test1234 -storepass test1234
openssl pkcs12 -in cert_and_key.p12 -nocerts -nodes -passin pass:test1234 -out ssl/client_key.pem
### Extract CA certificate
keytool -noprompt -keystore kafka.client.keystore.jks -exportcert -alias CARoot -rfc -file ssl/ca_cert.pem -storepass test1234
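The PEM files under ssl/ are intended for non-JVM app clients. As a quick smoke test, a librdkafka-based tool such as kafkacat (not part of this gist; install it separately) can use them directly:

    # Fetch cluster metadata over SSL using the extracted PEM files
    kafkacat -L -b kafka-ssl.abar.cloud:443 \
      -X security.protocol=ssl \
      -X ssl.ca.location=ssl/ca_cert.pem \
      -X ssl.certificate.location=ssl/client_cert.pem \
      -X ssl.key.location=ssl/client_key.pem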
service.yml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka
    application: kafka
  name: kafka
spec:
  ports:
    - name: kafka-ssl
      port: 9093
      protocol: TCP
      targetPort: 9093
  selector:
    application: kafka
  type: ClusterIP
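Note that statefulset.yml references serviceName: kafka-headless, which is not included in this gist. If you don't already have it, a minimal headless Service along these lines is needed for the StatefulSet's per-pod DNS (a sketch, assuming the same application: kafka selector):

    cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: kafka-headless
      labels:
        application: kafka
    spec:
      clusterIP: None
      ports:
      - name: kafka-ssl
        port: 9093
      selector:
        application: kafka
    EOF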
statefulset.yml

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    app: kafka
    application: kafka
  name: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      application: kafka
  serviceName: kafka-headless
  template:
    metadata:
      labels:
        application: kafka
    spec:
      containers:
        - command:
            - sh
            - '-c'
            - >-
              bin/kafka-server-start.sh config/server.properties --override
              zookeeper.connect=$ZOOKEEPER_HOST --override
              listeners=SSL://$KAFKA_ADVERTISED_HOST_NAME:9093 --override
              broker.id=$(hostname | awk -F'-' '{print $2}') --override
              log.dirs=/opt/kafka/data --override
              num.partitions=$KAFKA_DEFAULT_PARTITIONS --override
              default.replication.factor=$KAFKA_DEFAULT_REPLICATION_FACTOR
              --override
              offsets.topic.replication.factor=$KAFKA_DEFAULT_REPLICATION_FACTOR
              --override min.insync.replicas=$KAFKA_MIN_INSYNC_REPLICAS
          env:
            - name: KAFKA_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: KAFKA_HEAP_OPTS
              value: '-Xmx500m -Xms500m'
            - name: KAFKA_DEFAULT_PARTITIONS
              value: '1'
            - name: KAFKA_DEFAULT_REPLICATION_FACTOR
              value: '3'
            - name: KAFKA_MIN_INSYNC_REPLICAS
              value: '2'
            - name: KAFKA_LOG4J_ROOT_LOGLEVEL
              value: DEBUG
            - name: ZOOKEEPER_HOST
              value: 'zookeeper:2181'
          image: '172.30.150.55:5000/<YOUR_PROJECT_NAME_HERE>/kafka:1.0.0'
          imagePullPolicy: Always
          name: kafka
          ports:
            - containerPort: 9093
              protocol: TCP
          resources:
            limits:
              memory: 1000Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /opt/kafka/data
              name: datadir
            - mountPath: /var/private/ssl
              name: kafka-ssl
              readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
        - name: kafka-ssl
          secret:
            defaultMode: 420
            secretName: kafka-ssl
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        labels:
          application: kafka
        name: datadir
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
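After applying the StatefulSet, you can watch the brokers roll and confirm the SSL listener came up (the exact log line format varies by Kafka version):

    # Watch the pods restart and become Ready
    oc get pods -l application=kafka -w
    # Look for the SSL endpoint registration in a broker's log
    oc logs kafka-0 | grep -i ssl | head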