# Sample config for Kafka with SASL/SCRAM authentication
# (Kafka broker server.properties, followed by the matching ZooKeeper server config)
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,SSL://:9093,SASL_SSL://:9094
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# advertised.host.name is deprecated and is ignored when advertised.listeners is set
#advertised.host.name=172.20.20.20
advertised.listeners=PLAINTEXT://172.20.20.20:9092,SSL://172.20.20.20:9093,SASL_SSL://172.20.20.20:9094
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/var/lib/kafka
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# Increasing this value is recommended for installations with data dirs located in a RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string: a comma separated list of host:port pairs.
zookeeper.connect=172.20.20.10:2181,172.20.20.20:2181,172.20.20.30:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=60000
############################# Group Coordinator Settings #############################
# The delay, in milliseconds, that the GroupCoordinator applies before the initial consumer rebalance.
group.initial.rebalance.delay.ms=0
############################# Security (SSL / SASL) #############################
# An empty value disables hostname verification of server certificates.
ssl.endpoint.identification.algorithm=
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=password
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=password
ssl.key.password=password
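#
# The keystore and truststore referenced above are assumed to already exist. As a
# rough sketch (not part of this config), they can be created with the standard
# keytool/openssl flow from the Kafka security docs; the file names, aliases and
# the 365-day validity below are placeholders:
#
#   keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA
#   openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
#   keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
#
# The full flow (exporting a CSR, signing it with the CA and importing the signed
# certificate back into the keystore) is omitted here.
#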
security.inter.broker.protocol=SASL_SSL
# List of enabled SASL mechanisms; more than one may be enabled
sasl.enabled.mechanisms=SCRAM-SHA-256,PLAIN
# Specify one of the enabled SASL mechanisms to use for inter-broker communication
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
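#
# SASL/SCRAM also requires a JAAS config on each broker and SCRAM credentials stored
# in ZooKeeper. A minimal sketch, assuming an inter-broker user "admin" with password
# "admin-secret" (both placeholders) and a file kafka_server_jaas.conf passed via
# -Djava.security.auth.login.config in KAFKA_OPTS:
#
#   KafkaServer {
#       org.apache.kafka.common.security.scram.ScramLoginModule required
#       username="admin"
#       password="admin-secret";
#   };
#   Client {
#       org.apache.zookeeper.server.auth.DigestLoginModule required
#       username="admin"
#       password="admin-secret";
#   };
#
# The Client section is only needed because the ZooKeeper ensemble below enables SASL;
# if the PLAIN mechanism is actually used, a PlainLoginModule entry would also be needed.
# The SCRAM credentials themselves are created with kafka-configs.sh, e.g.:
#
#   kafka-configs.sh --zookeeper 172.20.20.10:2181 --alter \
#     --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret]' \
#     --entity-type users --entity-name admin
#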
ssl.client.auth=required
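#
# For reference, a client connecting on the SASL_SSL listener (9094) would use
# properties along these lines; the user "alice", the client store paths and all
# passwords are placeholders:
#
#   security.protocol=SASL_SSL
#   sasl.mechanism=SCRAM-SHA-256
#   ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
#   ssl.truststore.password=password
#   ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
#   ssl.keystore.password=password
#   ssl.key.password=password
#   sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="alice" password="alice-secret";
#
# The client keystore entries are included on the assumption that ssl.client.auth=required
# above also causes the broker to request a client certificate on this listener.
#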
############################# Authorization (ACLs) #############################
# ACL-based authorization. SimpleAclAuthorizer applies to Kafka 2.x;
# Kafka 2.4+ prefers kafka.security.authorizer.AclAuthorizer.
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# When true, resources without any ACLs remain accessible to everyone.
allow.everyone.if.no.acl.found=true
super.users=User:admin
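#
# With allow.everyone.if.no.acl.found=true, ACLs only take effect once a resource
# has at least one ACL. As a sketch, granting a hypothetical user "alice" read/write
# access to a hypothetical topic "test-topic" would look like:
#
#   kafka-acls.sh --authorizer-properties zookeeper.connect=172.20.20.10:2181 \
#     --add --allow-principal User:alice \
#     --operation Read --operation Write \
#     --topic test-topic
#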
############################# ZooKeeper Server Config #############################
# Everything below belongs in the ZooKeeper configuration file (e.g. zoo.cfg),
# not in the broker's server.properties.
# The basic time unit in milliseconds used by ZooKeeper.
# It is used for heartbeats, and the minimum session timeout will be twice the tickTime.
tickTime=2000
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# Uncomment to disable the per-IP limit on the number of connections (only appropriate for non-production configs)
# maxClientCnxns=0
# The number of ticks that the initial synchronization phase can take
initLimit=5
# The number of ticks that can pass between sending a request and getting an acknowledgement
syncLimit=2
# The ZooKeeper ensemble: server.<myid>=<host>:<peer port>:<leader election port>
server.1=172.20.20.10:2888:3888
server.2=172.20.20.20:2888:3888
server.3=172.20.20.30:2888:3888
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours; set to 0 to disable auto purge
autopurge.purgeInterval=24
# Enable SASL authentication support on the ZooKeeper servers
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
authProvider.2=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
authProvider.3=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
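#
# With SASLAuthenticationProvider enabled, each ZooKeeper server also needs a JAAS
# file (e.g. zookeeper_jaas.conf, passed via -Djava.security.auth.login.config in
# SERVER_JVMFLAGS). A minimal sketch, assuming the same placeholder "admin" user as
# in the brokers' Client section:
#
#   Server {
#       org.apache.zookeeper.server.auth.DigestLoginModule required
#       user_admin="admin-secret";
#   };
#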