Written by Itay Bittan.

Apache Kafka

Background / Motivation

Our primary product contains many modules (separate processes) that talk to each other. In the beginning we wrote our own IMP (internal message protocol) in C. It allowed us to send buffers (C structures) between modules over a TCP connection. In addition, we developed a communication module - a process responsible for passing messages between all of the system's modules. Here are some of the pitfalls we suffered from:

  1. Limited queue size - working with TCP socket queues, we found out that we were limited to a relatively small size. If you want to see your buffer sizes in the terminal, take a look at /proc/sys/net/ipv4/tcp_rmem (for read) and /proc/sys/net/ipv4/tcp_wmem (for write), thanks to this answer on Stack Overflow (see the sketch after this list). After running our load test we ended up with full buffer queues and a lot of missing messages.

  2. Malformed messages / Segmentation faults - each message was preceded by four bytes representing the message length. Once the queue buffer was full, part of the data went missing and newly arriving messages were read at shifted offsets. In other cases, large messages were not ready to be pulled (only half of the message had arrived), and pulling them caused a lot of pain.

  3. No persistency - once a service went down (because of a server failure, for example), all of its waiting messages were lost, because they were only held in the operating system's socket buffers.

  4. No replicas - if our communication module failed, there was no backup module and all other system modules were unable to connect.

  5. Peer to peer - if a module wanted to send a message to all other modules of the same type (all of the web servers, for example), it had to send each one of them the message separately. We built an automated mechanism to solve this issue (we called it a broadcast message), but it was still not effective (see the Kafka solution for this issue below).

  6. Debug - peeking into the operating system buffers and trying to figure out how many messages were waiting in a queue, and of which kind, was not easy.
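
As a quick reference for pitfall 1, here is a minimal Python sketch (Linux only, nothing Kafka-specific) that reads those two /proc files and prints the min/default/max buffer sizes in bytes:

```python
# Print the kernel's TCP buffer limits (min, default, max - all in bytes).
# Linux only: these are the same /proc files mentioned in pitfall 1.

def read_tcp_buffer_limits(path):
    with open(path) as f:
        # each file holds three whitespace-separated integers: min default max
        return dict(zip(("min", "default", "max"), map(int, f.read().split())))

if __name__ == "__main__":
    for label, path in (("read  (tcp_rmem)", "/proc/sys/net/ipv4/tcp_rmem"),
                        ("write (tcp_wmem)", "/proc/sys/net/ipv4/tcp_wmem")):
        print(label, read_tcp_buffer_limits(path))
```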

The solution

It turns out that there are a lot of queues out there: RabbitMQ, NSQ, Apache Kafka and more. I found the great queue.io web site only after we had already started using Kafka. We had heard about a lot of companies around us using Kafka, so we decided to give it a try. We were also looking for a configuration service (replacing ConfD with Zookeeper), but that will be covered in a different summary.

Our design

Now, regarding the pitfalls above:

  1. Queue size - queues (called topics) are written to disk, so they can be much bigger; the limits here suit our needs (and are configurable, of course).

  2. Solved - message framing and delivery are managed by Kafka, so we no longer parse raw socket buffers ourselves.

  3. Messages are persisted on disk and replicated within the cluster to prevent data loss.

  4. See section 3 above.

  5. Different concept - modules of the same type now subscribe as consumers to one shared queue (topic), instead of us maintaining a queue per module. Each module consumes the same message, instead of the sender publishing the same message multiple times (see the producer/consumer sketch after this list).

  6. Debugging is much easier - we know how many messages are in a topic and can iterate over them simply with the API (see the offsets sketch after this list).
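
To make item 5 concrete, here is a minimal sketch using the kafka-python client. The broker address, topic name and group id are illustrative assumptions, not our actual setup:

```python
from kafka import KafkaProducer, KafkaConsumer

BOOTSTRAP = "localhost:9092"   # assumption: a broker reachable locally
TOPIC = "web-server-events"    # hypothetical topic name

# Producer side: publish the message once, no matter how many modules listen.
producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
producer.send(TOPIC, b"reload-configuration")
producer.flush()

# Consumer side: each web-server instance runs something like this.
# With a distinct group_id per instance, every instance receives every message
# (the old "broadcast" case); with a shared group_id, the instances split the
# topic's partitions between them and load-balance instead.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    group_id="web-server-1",       # hypothetical; unique per instance here
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.partition, record.offset, record.value)
```

Either way the sender publishes each message exactly once; the delivery pattern is decided only by how the consumers pick their group ids.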
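
And for item 6, a sketch of how to see how many messages a topic currently holds, again with kafka-python and the same hypothetical names (actual consumer lag would also compare these numbers against the group's committed offsets):

```python
from kafka import KafkaConsumer, TopicPartition

BOOTSTRAP = "localhost:9092"   # assumption: a broker reachable locally
TOPIC = "web-server-events"    # hypothetical topic name

# A consumer with no subscription - used only to inspect offsets.
consumer = KafkaConsumer(bootstrap_servers=BOOTSTRAP)
partitions = [TopicPartition(TOPIC, p)
              for p in consumer.partitions_for_topic(TOPIC)]

start = consumer.beginning_offsets(partitions)  # oldest offsets still retained
end = consumer.end_offsets(partitions)          # offsets of the next messages

total = sum(end[p] - start[p] for p in partitions)
print("messages currently retained in %s: %d" % (TOPIC, total))
```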

Cons

  • Complicated configuration.
  • Reliance on third-party code.