KAFKA for Mere Mortals: Topic replication factor

After learning about topics and partitions in theory and in practice, I believe the final piece of the puzzle is the "Topic Replication Factor". This is the third post about topics and partitions, this time with more detail and a focus on the concepts behind distribution and fault-tolerance support.
If you want to learn KAFKA from scratch, please follow the links below in the given order:

  1. Introduction to KAFKA
  2. KAFKA and ETL
  3. Installing KAFKA and Zookeeper
  4. Cluster and Brokers
  5. Topics and Partitions
  6. Topics and Partitions in practice

After reading my article series dedicated to KAFKA, you should already realize that two of the main attributes of KAFKA are "distribution" and "fault-tolerance" support. KAFKA provides fault tolerance through brokers and, of course, through reserved copies of the exact same data, called Replicas (the copies that are fully caught up with the leader are ISRs – in-sync replicas). The Kafka cluster, as a "container of brokers", stores not only the original data but also these reserved copies.

Why exactly do we need Replicas?

When one broker goes down, KAFKA should continue processing, because a broker is only one part of a higher-level concept called the "cluster". Losing a broker doesn't mean that we have completely lost our cluster. So the cluster somehow has to keep living and stay consistent, without losing data. Replicas are those reserved copies, and they help us not to lose information.

When a topic is created, it contains one or more partitions. Every partition, as a physical unit, stores multiple messages in order. As we discussed before, a partition is an ordered, append-only container (a log) that consumers read from in that same order; there is no way to read data from a partition in an unordered way. When partitions are created, KAFKA tries to distribute them between brokers. For instance, if you have 2 brokers and one topic with 5 partitions, KAFKA's round-robin partition assignment will usually spread your partitions across the brokers roughly evenly. It can look like this:

Broker 1 will have 2 partitions

Broker 2 will have 3 partitions
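As a quick sanity check, you can create such a topic and ask KAFKA where its partitions ended up. The sketch below assumes a 2-broker cluster reachable at localhost:9092 and uses a hypothetical topic name "mytopic"; the replication factor is left at 1 because we will get to replication in a moment:

kafka-topics.sh --create --topic mytopic --partitions 5 --replication-factor 1 --bootstrap-server localhost:9092

kafka-topics.sh --describe --topic mytopic --bootstrap-server localhost:9092

The --describe output prints one line per partition, and with a replication factor of 1 the Leader column is simply the broker that hosts that partition, so you can see the 2/3 split for yourself.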

As we learned before, partition spreading depends on multiple factors. But what if Broker 1 goes down? At first glance, you might think: "Yes, KAFKA will continue responding, but the partitions on Broker 1 will no longer be accessible, and we could lose that data." Well, don't you think that would be just too easy? And why would you use KAFKA at all if it couldn't handle such cases? Of course it can. Using the "Topic Replication Factor", KAFKA handles such cases, and even if one of the brokers fails, we will not lose any data.

Let's dive into the details of the "Topic Replication Factor."

After creating a topic with several partitions, KAFKA distributes the partitions across brokers. Let's take the above example: we have a topic with 5 partitions and 2 brokers in the cluster. Here is the result of the partition distribution:

Broker 1 will have 2 partitions

Broker 2 will have 3 partitions

After the topic creation process, depending on the configuration, KAFKA, with the help of Zookeeper, triggers the "leader election process". For every partition, only one broker can be the leader. Being the leader means that, by default, all Producers and Consumers interact with the leader partition. After receiving data from a producer, the leader broker stores it and spreads a copy of the message to the replicas; the replicas that are fully caught up are called ISRs (in-sync replicas). The process is similar to data replication in database systems. The replication factor can be different for each topic, which allows you to tailor it to each topic's specific needs. It can also be changed after a topic has been created, but this should be done with caution, as it can lead to data loss.
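By the way, changing the replication factor of an existing topic is not done through "kafka-topics.sh"; you submit a new replica assignment with the kafka-reassign-partitions.sh tool. Below is a minimal sketch under some assumptions: a hypothetical file name increase-rf.json, broker ids 1 and 2, and the "mytopic" topic from before, whose partition 0 we want to grow from one replica to two. Depending on your KAFKA version, the tool talks to the cluster via --bootstrap-server (newer versions) or --zookeeper (older ones):

increase-rf.json:
{"version":1,"partitions":[{"topic":"mytopic","partition":0,"replicas":[1,2]}]}

kafka-reassign-partitions.sh --reassignment-json-file increase-rf.json --bootstrap-server localhost:9092 --execute

Every partition you want to change has to be listed in the file with its complete new replica set, which is one more reason to treat this operation carefully.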

KAFKA Replicas

In the above image, the partitions marked with a star symbol are the leaders; the rest are replicas.
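You can see the same picture on the command line. For a replicated topic, kafka-topics.sh --describe shows, for every partition, which broker is the leader, which brokers hold replicas, and which of those replicas are currently in sync. The output below is only illustrative (assume "mytopic" was recreated with --replication-factor 2 on our 2-broker cluster, with broker ids 1 and 2):

kafka-topics.sh --describe --topic mytopic --bootstrap-server localhost:9092

Topic: mytopic  Partition: 0  Leader: 1  Replicas: 1,2  Isr: 1,2
Topic: mytopic  Partition: 1  Leader: 2  Replicas: 2,1  Isr: 2,1

The broker in the Leader column plays the role of the starred partition from the image; the other entries in Replicas are its followers.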

The "Leader Election Process" is specific and handled by Zookeeper. It is one of the few things that bring Zookeeper to the table. Zookeeper can also be a listener and notify brokers about other brokers joining or dying, topic creation or deletion, etc. We'll have another article about Zookeeper's role, but for now, just focus on the "leader election process" and the replication factor.

When creating a topic using "kafka-topics.sh", it is possible to provide a replication factor. This is the exact argument that defines how many copies of each message will be stored in KAFKA. The command looks like this:

kafka-topics.sh --create --topic myfirsttopic --partitions 8 --replication-factor 3 --bootstrap-server localhost:9092

The value 3 means that, besides the original (leader) broker, the system will use 2 other brokers (3 - 1 = 2) to store the reserved copies. In our case, the reserved data is an exact copy of the messages on the leader broker.
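One practical note: the replication factor can never be larger than the number of available brokers, because two copies of the same partition are never placed on the same broker. On a 2-broker cluster, the command above would be rejected with an error along these lines (the exact wording depends on your KAFKA version):

org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 2.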

"When the leader broker goes down":

Of course, the leader broker itself can go down. In that case, a new "leader election" takes place: one of the ISRs (in-sync replicas) is selected as the new leader and takes over all of the leader's responsibilities. Once the old leader is back up, the system can reassign the leader role to it again.
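A simple way to watch this in action is to run kafka-topics.sh --describe before and after stopping a broker. The lines below are purely illustrative (the broker ids are assumptions), but they show the pattern you would observe for the "myfirsttopic" topic created above:

Before broker 1 goes down:
Topic: myfirsttopic  Partition: 0  Leader: 1  Replicas: 1,2,3  Isr: 1,2,3

After broker 1 goes down:
Topic: myfirsttopic  Partition: 0  Leader: 2  Replicas: 1,2,3  Isr: 2,3

The Replicas list still remembers broker 1, but the Isr list shrinks to the brokers that are actually in sync, and one of them has been promoted to leader. Once broker 1 comes back and catches up, it rejoins the ISR, and leadership can be handed back to it.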

