Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.
Apache Kafka Notes
List all topics:
~/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181
Show partition details for topics:
~/kafka/bin/kafka-topics.sh --describe --zookeeper localhost:2181
Create a topic:
~/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --partitions 1 --replication-factor 1 --topic topic-name
Delete a topic:
~/kafka/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic [topic_name]
ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [192.168.5.130:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 6000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384 # Upper limit, in bytes, on how much data the producer will batch per partition before sending (default is 16K bytes)
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 3000 # How long the producer will wait before sending, to allow more messages to accumulate in the same batch.
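The listing above is the configuration dump a Java producer logs at startup. The tunables called out above (acks, batch.size, linger.ms) can be overridden by pointing a producer client at a plain properties file; a minimal sketch, assuming the broker address from the listing and an arbitrary /tmp path:

```shell
# Write a minimal producer override file; keys and values mirror the listing above.
cat > /tmp/producer.properties <<'EOF'
bootstrap.servers=192.168.5.130:9092
acks=all
batch.size=16384
linger.ms=3000
compression.type=none
EOF
# Sanity-check that the file was written.
grep -q 'linger.ms=3000' /tmp/producer.properties && echo OK
```

Any key not present in such a file keeps the default shown in the dump.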
Install Apache Kafka on Ubuntu 14.04
Step 1 — Create a User for Kafka
1. Create a user called kafka using the useradd command.
2. Set its password using passwd.
3. Add it to the sudo group so that it has the privileges required to install Kafka's dependencies. This can be done using the adduser command.
4. Your Kafka user is now ready. Log into it using su.
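The four steps above can be sketched as the following commands, run as root (a sketch assuming Ubuntu's default sudo group name):

```shell
useradd kafka -m     # 1. create the kafka user with a home directory
passwd kafka         # 2. set its password interactively
adduser kafka sudo   # 3. add it to the sudo group
su - kafka           # 4. log in as the new user
```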
Step 2 — Install Java
Install the default-jre package.
Step 3 — Install ZooKeeper
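On Ubuntu 14.04 this can be sketched as installing the distribution package below, which pulls in ZooKeeper and registers it as a daemon (the package name is the Ubuntu archive's — an assumption if you install from elsewhere):

```shell
sudo apt-get update
sudo apt-get install -y zookeeperd
```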
After the installation completes, ZooKeeper will be started as a daemon automatically. By default, it will listen on port 2181.
To make sure that it is working, connect to it via Telnet:
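The check can be sketched as follows, assuming ZooKeeper's default client port 2181 (nc is a non-interactive alternative to telnet):

```shell
telnet localhost 2181
# at the Telnet prompt, type: ruok
# Non-interactive equivalent using netcat:
echo ruok | nc localhost 2181   # a healthy ZooKeeper replies "imok"
```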
At the Telnet prompt, type in ruok and press ENTER. If everything's fine, ZooKeeper will say imok and end the Telnet session.
Step 4 — Download and Extract Kafka Binaries
1. To start, create a directory called Downloads to store all your downloads.
4. Extract the archive you downloaded using the tar command.
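These steps can be sketched as follows; the archive filename is an assumption — substitute the one you actually downloaded:

```shell
mkdir -p ~/Downloads ~/kafka
cd ~/kafka
# --strip-components 1 drops the versioned top-level directory so that
# the binaries land directly under ~/kafka/bin, matching the paths above.
tar -xzf ~/Downloads/kafka.tgz --strip-components 1
```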
Step 5 — Configure the Kafka Server
Open server.properties using vi to configure the Kafka server:
Step 6 — Start the Kafka Server
Run the kafka-server-start.sh script using nohup to start the Kafka server (also called Kafka broker) as a background process that is independent of your shell session.
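A sketch of that invocation, assuming Kafka lives in ~/kafka as in the commands above (the log path is where the next step looks for startup messages):

```shell
# Start the broker in the background, detached from the shell session.
nohup ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties > ~/kafka/kafka.log 2>&1 &
```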
Wait for a few seconds for it to start. You can be sure that the server has started successfully when you see the following messages in ~/kafka/kafka.log.
You now have a Kafka server which is listening on port 9092.
Step 7 — Test the Installation
Publish the string "Hello, World" to a topic called TutorialTopic by typing in the following:
The following command consumes messages from the topic we published to. Note the use of the --from-beginning flag, which is present because we want to consume a message that was published before the consumer was started.
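Both commands can be sketched as follows (broker and ZooKeeper addresses match the earlier commands; the flags are those of the ZooKeeper-era console tools used throughout this note):

```shell
# Publish "Hello, World" to TutorialTopic via the console producer.
echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic
# Read everything in the topic from the start via the console consumer.
~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TutorialTopic --from-beginning
```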