Kafka-Distributed

Kafka broker settings (if you are running multiple brokers on the same machine):

config/server.properties

broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/broker1_logs
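
If the other brokers run on the same machine, each one needs its own id, port and log directory. A minimal sketch of the remaining two configs (the ports and directories here are illustrative assumptions, not from the original notes):

config/server-2.properties

broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/broker2_logs

config/server-3.properties

broker.id=3
listeners=PLAINTEXT://:9095
log.dirs=/tmp/broker3_logs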

Start Zookeeper

bin/zookeeper-server-start.sh config/zookeeper.properties
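
ZooKeeper has to be running on every node of the ensemble before the brokers come up. The start script shipped with Kafka also accepts a -daemon flag if you prefer to run it in the background, for example:

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties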

Start the individual brokers (e.g. if we have three brokers):

bin/kafka-server-start.sh config/server-1.properties
bin/kafka-server-start.sh config/server-2.properties
bin/kafka-server-start.sh config/server-3.properties
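
If the per-broker config files do not exist yet, one way to create them (a sketch, matching the file names used above) is to copy the default config and then edit each copy:

cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
cp config/server.properties config/server-3.properties

Then adjust broker.id, listeners and log.dirs in each file so that no two brokers share an id, a port or a log directory.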

Extra Configuration

In the ZooKeeper data directory (the dataDir set in zookeeper.properties), create a myid file containing that server's ID:

vim /tmp/zookeeper/myid
1
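
On a multi-node ensemble, the myid file lives under dataDir and must match that node's server.N entry in zookeeper.properties. A quick sketch, assuming the dataDir=/home/hadoop/zookeeper configured below:

# on server 1
echo 1 > /home/hadoop/zookeeper/myid
# on server 2
echo 2 > /home/hadoop/zookeeper/myid
# on server 3
echo 3 > /home/hadoop/zookeeper/myid
# on server 4
echo 4 > /home/hadoop/zookeeper/myid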

vim config/zookeeper.properties

Apply this configuration to the zookeeper.properties file on every server in the ensemble:

server-1:

dataDir=/home/hadoop/zookeeper

the port at which the clients will connect

clientPort=2181

disable the per-ip limit on the number of connections since this is a non-production config

maxClientCnxns=0
server.1=server1-internal-ip-1:2888:3888
server.2=server1-internal-ip-2:2888:3888
server.3=server1-internal-ip-3:2888:3888
server.4=server1-internal-ip-4:2888:3888
initLimit=5
syncLimit=2
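
Once ZooKeeper is running on all four servers, you can check that the ensemble has elected a leader using ZooKeeper's four-letter-word commands (this assumes nc is installed; on newer ZooKeeper versions these commands may need to be whitelisted):

echo srvr | nc localhost 2181 | grep Mode

One node should report Mode: leader and the rest Mode: follower.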

Apply this configuration to the server.properties file on every broker:

The minimum age of a log file to be eligible for deletion

log.retention.hours=1

A size-based retention policy for logs. Segments are pruned from the log as long as the remaining segments don't drop below log.retention.bytes.

log.retention.bytes=80000000

The maximum size of a log segment file. When this size is reached a new log segment will be created.

log.segment.bytes=536870912

The interval at which log segments are checked to see if they can be deleted according to the retention policies.

log.retention.check.interval.ms=60000

By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires. If log.cleaner.enable=true is set, the cleaner will be enabled and individual logs can then be marked for log compaction.

log.cleaner.enable=false

Zookeeper

Zookeeper connection string (see zookeeper docs for details). This is a comma-separated list of host:port pairs, each corresponding to a zk server, e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002". You can also append an optional chroot string to the urls to specify the root directory for all kafka znodes.

zookeeper.connect=server1-internal-ip-1:2181,server1-internal-ip-2:2181,server1-internal-ip-3:2181,server1-internal-ip-4:2181

Timeout in ms for connecting to zookeeper

zookeeper.connection.timeout.ms=1000000
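
To verify the whole cluster, one common check (a sketch; the topic name is an arbitrary example, and this uses the --zookeeper flag expected by the kafka-topics.sh tool of that era) is to create a replicated topic and then describe it:

bin/kafka-topics.sh --create --zookeeper server1-internal-ip-1:2181 --replication-factor 3 --partitions 3 --topic test-distributed
bin/kafka-topics.sh --describe --zookeeper server1-internal-ip-1:2181 --topic test-distributed

The describe output should show a leader and in-sync replicas spread across the broker ids.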
