Setting Up Kafka Locally Using Docker

Introduction

This article is part of the Learning Apache Kafka series. If you are new to Kafka, I highly recommend first learning why Kafka came into existence, its key terminologies, and the different client APIs it provides. These topics are covered in the earlier articles in this series:

  1. Getting Started With Apache Kafka.
  2. Kafka Terminologies and Client APIs.

In this article, we will set up Zookeeper and a Kafka broker on our local machine.

A local Kafka environment requires two components: Zookeeper and the Kafka broker. We need Zookeeper because it maintains the metadata about the Kafka broker as well as Kafka client information.

When we dive deeper into Kafka consumers, we will see why Zookeeper plays a vital role in maintaining Kafka consumer information.

Any time we spin up a Kafka broker, the broker registers itself with Zookeeper. From that point on, Zookeeper keeps track of the health of the Kafka broker.

We can think of Zookeeper as a centralized service that maintains configuration information and the health of the Kafka brokers, and that provides synchronization when we have multiple brokers, which is also referred to as a Kafka cluster.
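You can actually see this registration yourself once the containers from the Docker Compose setup later in this article are running. A minimal sketch, assuming the container name zoo1 from that setup:

# List the broker IDs currently registered in Zookeeper
docker exec zoo1 zookeeper-shell localhost:2181 ls /brokers/ids
# The output should end with something like [1], the ID of our broker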

From a developer's perspective, we don't have to worry about how Zookeeper works behind the scenes or how the Kafka broker interacts with it; all of this happens automatically.

From a knowledge perspective, we just need to be aware that Zookeeper is one of the components of the Kafka architecture. With this information, we can go ahead and set up the Kafka environment locally.

The next step is to take a look at the Docker Compose files you will find in the project folder.

Using Docker

We are going to be using Docker and Docker Compose to set up the local Kafka cluster.

Docker Compose is an amazing tool. If you navigate to this folder, you will notice a docker-compose.yaml file and a docker-compose-multi-broker.yaml file.

We'll start with the docker-compose.yaml file. We'll set up the local Kafka cluster by launching this Docker Compose file.

services:
  zoo1:
    image: confluentinc/cp-zookeeper:7.3.2
    hostname: zoo1
    container_name: zoo1
    ports:
      - "2181:2181"    # Zookeeper client port
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zoo1:2888:3888

  kafka1:
    image: confluentinc/cp-kafka:7.3.2
    hostname: kafka1
    container_name: kafka1
    ports:
      - "9092:9092"    # EXTERNAL listener for clients on the host machine
      - "29092:29092"  # DOCKER listener for other containers
    environment:
      # Addresses the broker advertises to clients, one per listener.
      # Keep this on a single line; stray spaces between the entries
      # prevent the broker from matching the listener names.
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:19092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092,DOCKER://host.docker.internal:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      # Single broker, so internal topics can only have replication factor 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    depends_on:
      - zoo1

File Explanation

  • The zoo1 service is the configuration for Zookeeper.
  • Both services use the official Docker images from Confluent (confluentinc).
  • The kafka1 service is the configuration for the Kafka broker.
  • At the bottom, the depends_on entry declares that the broker depends on Zookeeper.

This means the Kafka container waits until the Zookeeper container is up before it starts. We also expose ports such as 9092 so that we can use localhost to interact with the broker.

This setup makes sure your application can interact with the broker through port 9092. For internal communication, such as inter-broker traffic, the broker uses kafka1, the container's hostname, with the internal port 19092. For external access, we use 127.0.0.1, which maps to localhost, and the DOCKER listener on port 29092 lets other Docker containers reach the broker through host.docker.internal.
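To make the listener mapping concrete, here is a small sketch of how you would reach the same broker from each location. The kafka-topics tool ships inside the Confluent image; the first example assumes you also have the Kafka CLI installed on your host:

# From the host machine, via the EXTERNAL listener
kafka-topics --bootstrap-server localhost:9092 --list

# From another container on the same Docker network, via the DOCKER listener
kafka-topics --bootstrap-server host.docker.internal:29092 --list

# From inside the kafka1 container itself, via the INTERNAL listener
docker exec kafka1 kafka-topics --bootstrap-server kafka1:19092 --list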

So this is the configuration setup for Zookeeper and the Kafka broker. The next step is to launch this Docker Compose file, which will spin up the Kafka broker and Zookeeper for you.

Launch the Docker Compose File

All we need to do is run docker-compose up. This launches Zookeeper and the Kafka broker on your machine and creates a Docker network for them. zoo1 and kafka1 are our reference names for the Zookeeper and Kafka containers.

docker-compose up
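A few handy variations of this command, assuming you run them from the folder containing docker-compose.yaml:

# Run the containers in the background (detached mode)
docker-compose up -d

# Check that both containers are up
docker-compose ps

# Follow the broker logs
docker-compose logs -f kafka1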


SCREENSHOT 1. This snapshot shows the intermediate state where the Kafka image is being pulled from Confluent.


SCREENSHOT 2. Here we can see that all the images have been downloaded, extracted, and pulled from the registries referenced in docker-compose.yaml.


SCREENSHOT 3. This snapshot shows that Zookeeper has started, which we can see in the logs.


SCREENSHOT 4. Open the terminal and you will notice a lot of logs scrolling by, including a message saying the Kafka server with ID 1 has started. The ID 1 comes from the KAFKA_BROKER_ID we provided in the docker-compose.yaml file. With this snapshot, we can see that the Kafka broker with ID 1 is up and running.
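If you would rather not scan the logs by eye, you can filter for the startup message directly; the exact wording of the log line can vary between Kafka versions:

# Search the broker logs for the "started" message
docker logs kafka1 2>&1 | grep "started (kafka.server.KafkaServer)"
# Typical output: ... [KafkaServer id=1] started (kafka.server.KafkaServer)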


The status of the Kafka broker is up, and the status of Zookeeper is also up; this is a clear signal that our local setup is ready. The next step is to start interacting with this Kafka cluster and produce and consume messages from it.
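As a first interaction, here is a minimal sketch that creates a topic and exchanges a message using the console tools bundled inside the kafka1 container; the topic name test-topic is just an example:

# Create a topic (replication factor 1, since we have a single broker)
docker exec kafka1 kafka-topics --bootstrap-server kafka1:19092 --create --topic test-topic --partitions 1 --replication-factor 1

# Produce messages: type a few lines, then press Ctrl+C
docker exec -it kafka1 kafka-console-producer --bootstrap-server kafka1:19092 --topic test-topic

# Consume the messages from the beginning (Ctrl+C to stop)
docker exec -it kafka1 kafka-console-consumer --bootstrap-server kafka1:19092 --topic test-topic --from-beginning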

SCREENSHOT 5. After all this, we can verify that Kafka is running in the Docker Desktop application. Here we can see two containers running: one is zoo1, i.e. our Zookeeper server running on 2181:2181, which manages Kafka, and the other is kafka1, i.e. our Kafka broker, which is running on three different ports, one of which is 29092:29092.


SCREENSHOT 6. In this snapshot, we can verify the Kafka and Zookeeper images that were pulled.
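The same check works from the command line; the image names come from the compose file above:

# List the Confluent images pulled by Docker Compose
docker images | grep confluentinc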


Conclusion

This brings us to the end of the article. In this article, we learned how to set up Kafka locally using Docker and Docker Compose.

