Integrate Azure AD OAuth2 SSO Authentication and RBAC for Kafka-UI

Why does Kafka require a UI?

Kafka, a distributed streaming platform, has gained immense popularity due to its scalability, reliability, and fault tolerance. However, managing and monitoring Kafka clusters can be complex, especially for large-scale deployments. To address this challenge, several Kafka UI tools have emerged, simplifying cluster management and providing valuable insights into Kafka's performance.

Understanding Kafka UI Tools

Kafka UI tools are web-based applications that provide a graphical interface for interacting with Kafka clusters. These tools typically offer features such as:

  • Cluster Management: Creating, deleting, and configuring Kafka clusters.
  • Topic Management: Creating, deleting, and listing topics.
  • Consumer Group Management: Viewing consumer group information, offsets, and lag.
  • Message Inspection: Viewing and searching for messages within topics.
  • Performance Metrics: Monitoring key performance indicators (KPIs) like throughput, latency, and consumer lag.
  • Security Management: Configuring authentication and authorization for Kafka access.

Choosing the Right Kafka UI Tool

The best Kafka UI tool for your needs depends on factors such as:

  • Scale and complexity of your Kafka cluster: Larger clusters may require more advanced features offered by tools like Confluent Control Center.
  • Required features: Consider the specific features you need, such as schema registry, KSQL, or Kafka Connect.
  • Cost: Some tools, like Confluent Control Center, may have licensing costs associated with them.
  • Ease of use: If you're new to Kafka, a user-friendly interface like Kafka Manager might be a good choice.

The Importance of Authentication and Authorization in Kafka UI Tools

Apache Kafka has become a cornerstone for building real-time event streaming systems, enabling organizations to handle vast amounts of data efficiently. However, Kafka’s inherent complexity and its critical role in managing sensitive data pipelines make security a top priority. When using a Kafka UI tool—such as AKHQ, Kafka UI by Provectus, or Conduktor—to simplify the management and monitoring of Kafka clusters, authentication and authorization mechanisms become essential for protecting the Kafka infrastructure from unauthorized access and operations.

Kafka UI tools, designed to simplify cluster administration, play a crucial role in this regard. However, without proper authentication and authorization mechanisms, these tools can pose significant security risks.

Why is Authentication Essential?

  • Preventing Unauthorized Access: Authentication ensures that only authorized users can access the Kafka UI tool. This prevents unauthorized individuals from gaining control over the cluster and potentially compromising sensitive data.
  • Identifying Users: Authentication allows the system to track who is accessing the cluster, making it easier to audit and troubleshoot issues.
  • Enforcing Policies: Authentication can be used to enforce access policies, such as limiting access to certain features or restricting access during specific times.

Why Authorization is Critical?

  • Role-Based Access Control (RBAC): Authorization enables the implementation of RBAC, granting different users or groups varying levels of access to Kafka resources. This ensures that users only have the permissions necessary to perform their tasks.
  • Preventing Data Breaches: By limiting access to sensitive data, authorization helps prevent data breaches and unauthorized data manipulation.
  • Enhancing Security: Authorization can be combined with other security measures, such as encryption and firewalls, to create a more robust security posture.

Common Authentication and Authorization Methods

  • Username and Password: A simple but effective method that requires users to provide a valid username and password to gain access.
  • OAuth: A widely used standard for authorization that allows users to grant access to applications without sharing their credentials.
  • API Keys: A lightweight authentication mechanism that involves generating unique API keys for each user or application.
  • Certificates: A more secure method that uses digital certificates to verify the identity of users and applications.

Best Practices for Authentication and Authorization in Kafka UI Tools

  • Use Strong Authentication Methods: Avoid weak authentication methods like default passwords or easily guessable usernames.
  • Implement RBAC: Grant users only the permissions they need to perform their tasks.
  • Regularly Review and Update Access Controls: As roles and responsibilities change, ensure that access controls are kept up-to-date.
  • Enable Auditing: Track user activity to identify potential security threats.
  • Educate Users: Train users on best practices for security and password management.

Provectus Kafka-UI: A Versatile Kafka Management Tool

Provectus Kafka-UI is a powerful and intuitive web-based interface designed to simplify the management and monitoring of Apache Kafka clusters. It offers a comprehensive set of features, making it a valuable tool for both beginners and experienced Kafka users.

Key Features of Provectus

  1. Cluster Management
    • Create, delete, and configure Kafka clusters.
    • Manage brokers, topics, and consumer groups.
    • Monitor cluster health and performance metrics.
  2. Topic Management
    • Create, delete, and list topics.
    • View topic details, including partitions, replicas, and retention settings.
    • Produce and consume messages directly from the UI.
  3. Consumer Group Management
    • View consumer group information, offsets, and lag.
    • Rebalance consumer groups.
    • Monitor consumer performance and identify bottlenecks.
  4. Message Inspection
    • View and search for messages within topics.
    • Filter messages based on key, value, or timestamp.
    • Inspect message headers and payloads.
  5. Performance Metrics
    • Monitor key performance indicators (KPIs) like throughput, latency, and consumer lag.
    • Visualize metrics using charts and graphs.
    • Identify performance issues and optimize cluster configuration.
  6. Security Management
    • Configure authentication and authorization for Kafka access.
    • Implement role-based access control (RBAC).
    • Protect sensitive data with encryption.

Benefits of Using Provectus

  • Simplified Management: Provectus provides a user-friendly interface that simplifies complex Kafka management tasks.
  • Enhanced Visibility: Gain deeper insights into your Kafka cluster's health and performance.
  • Improved Efficiency: Streamline workflows and reduce manual tasks.
  • Centralized Control: Manage multiple Kafka clusters from a single console.
  • Cost-Effective: Provectus is a free and open-source tool.

Installation and Configuration

Provectus can be installed on various operating systems, including Linux, macOS, and Windows. The installation process typically involves cloning the repository from GitHub, installing dependencies, and running the application. Configuration options allow you to customize Provectus to your specific needs.

Configuring Provectus Kafka-UI using Docker-Compose without Authentication.

services:
  kafka-ui:
    container_name: provectus_kafka_ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 9091:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'

Create a docker-compose.yaml file and paste the above code snippet. The 'docker-compose-KafkaUIOnly.yaml' file is attached for reference.

Open PowerShell, navigate to the file location, and execute 'docker-compose up'.

The container will now be running in Docker Desktop.

Browse to 'http://localhost:9091/' and Kafka-UI will be displayed.

To configure the new Kafka cluster, click on Configure new Cluster and add the cluster name and Bootstrap server with port.

A Kafka instance is running on my local machine.

Add the Kafka cluster details and click Submit.

Once the cluster is added successfully, its details appear on the Dashboard, and we can navigate to the different options to view or modify Brokers, Topics, and Consumers.

Configuring Provectus Kafka-UI using Docker-Compose with Basic Authentication.

services:
  kafka-ui:
    container_name: provectus_kafka_ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 9091:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
      AUTH_TYPE: "LOGIN_FORM"
      SPRING_SECURITY_USER_NAME: admin
      SPRING_SECURITY_USER_PASSWORD: pass

The 'docker-compose-BasicAuth.yaml' file is attached for reference.

Follow similar steps as above to bring the docker-compose file up in Docker Desktop. In the docker-compose file, note the environment variables: AUTH_TYPE is set to LOGIN_FORM, which enables basic (form-based) authentication, while SPRING_SECURITY_USER_NAME and SPRING_SECURITY_USER_PASSWORD set the username and password.

When we browse to 'http://localhost:9091/', a login page prompts for user credentials.

As this is Basic Authentication, the username and password are configured directly in the docker-compose file. Enter admin as the username and pass as the password, and the user will be logged in to Provectus Kafka UI.

Configuring Provectus Kafka-UI using Docker-Compose with Azure AD/OAuth2/SSO Authentication.

Configuring Single Sign-On (SSO) with Azure Active Directory (Azure AD) for Kafka UI by Provectus involves several steps.

  • Registering an Application in Azure AD
  • Configuring Kafka UI to Use Azure AD for Authentication

Step 1. Register an Application in Azure Active Directory.

First, you need to register your Kafka UI application with Azure AD to obtain the necessary credentials for OAuth 2.0 authentication.

1.1. Sign in to the Azure Portal.

  • Navigate to Azure Portal and sign in with your administrator account.

1.2. Register a New Application.

  1. In the Azure Portal, select Azure Active Directory from the left-hand navigation pane.
  2. In the Azure AD page, select App registrations from the menu.
  3. Click on New Registration.
  4. Configure the application registration.
    • Name: Enter a name for your application (e.g., Kafka UI SSO).
    • Supported account types: Choose Accounts in this organizational directory only if you want only users in your organization to access Kafka UI.
    • Redirect URI: Set the redirect URI to http://localhost:9091/login/oauth2/code/ (you will adjust this later if needed).
  5. Click Register to create the application.

1.3. Configure Authentication Settings

  1. After registration, you will be redirected to the application's Overview page.
  2. Select Authentication from the left-hand menu.
  3. Under Platform Configurations, click Add a Platform and select Web.
  4. Configure Web Platform
    • Redirect URIs: Add the redirect URI where Kafka UI will receive authentication responses. For example: http://localhost:9091/login/oauth2/code/. Ensure that this URI matches the SERVER_SERVLET_CONTEXT_PATH (if set) and the port mapping in your Docker Compose file.
    • Logout URL (optional): You can specify a logout URL if needed.
    • Implicit grant and hybrid flows: Check Access tokens and ID tokens to enable them.
  5. Click Configure to save the settings.
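At runtime, Spring Security (which Kafka-UI uses for OAuth2) expands the `{baseUrl}` and `{registrationId}` placeholders in the redirect-uri template, so the effective callback URL ends with the client registration name. The sketch below illustrates that expansion; the base URL and the registration name "azure" are illustrative values matching this walkthrough, not output from Kafka-UI itself.

```python
# Sketch of how Spring Security's redirect-uri template is expanded.
# "http://localhost:9091/" and "azure" are this article's example values.
def expand_redirect_uri(template: str, base_url: str, registration_id: str) -> str:
    return (template
            .replace("{baseUrl}", base_url.rstrip("/"))
            .replace("{registrationId}", registration_id))

template = "{baseUrl}/login/oauth2/code/{registrationId}"
print(expand_redirect_uri(template, "http://localhost:9091/", "azure"))
# -> http://localhost:9091/login/oauth2/code/azure
```

This is why the redirect URI registered in Azure AD must match the expanded form (including the client name segment) and the port mapping from your Docker Compose file.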

1.4. Create a Client Secret.

  1. On the application page, select Certificates & secrets.
  2. Under Client secrets, click New client secret.
  3. Add a client secret.
    • Description: Enter a description (e.g., Kafka UI Client Secret).
    • Expires: Select an appropriate expiration period.
  4. Click Add.
  5. Copy the client secret: after creating the secret, copy the Value immediately. You will need it later, and it will not be shown again.

1.5. Gather Necessary Information

Make a note of the following information.

  • Application (client) ID: Found on the application's Overview page.
  • Directory (tenant) ID: Also found on the Overview page.
  • Client Secret: The value you copied in the previous step.

Step 2. Configure Kafka UI to Use Azure AD for Authentication.

Now that you have the necessary credentials, you can configure Kafka UI to use Azure AD for authentication.

2.1. Update the Docker Compose File.

Modify your docker-compose.yml file to include the OAuth 2.0 configuration.

version: '3'
services:
  kafka-ui:
    container_name: kafka-ui-provectus
    image: 'provectuslabs/kafka-ui:latest'
    ports:
      - "9091:8080"
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
      AUTH_TYPE: "OAUTH2"
      AUTH_OAUTH2_CLIENT_AZURE_CLIENTID: "**Add Azure ClientID**"
      AUTH_OAUTH2_CLIENT_AZURE_CLIENTSECRET: "**Add Azure Client Secret**"
      AUTH_OAUTH2_CLIENT_AZURE_SCOPE: "openid"
      AUTH_OAUTH2_CLIENT_AZURE_CLIENTNAME: "azure"
      AUTH_OAUTH2_CLIENT_AZURE_PROVIDER: "azure"
      AUTH_OAUTH2_CLIENT_AZURE_REDIRECTURI: "{baseUrl}/login/oauth2/code/{registrationId}"
      AUTH_OAUTH2_CLIENT_AZURE_ISSUERURI: "https://login.microsoftonline.com/{**TenantID**}/v2.0"
      AUTH_OAUTH2_CLIENT_AZURE_JWKSETURI: "https://login.microsoftonline.com/{**TenantID**}/discovery/v2.0/keys"

Attached is 'docker-compose-WithAzureAD Authonly.yaml' for reference.
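Kafka-UI is a Spring Boot application, so these environment variables reach configuration properties via Spring Boot's relaxed binding: a property path such as auth.oauth2.client.azure.client-id maps to an uppercase env var with dashes removed and dots turned into underscores. A small sketch of that mapping rule (assuming standard Spring relaxed binding, which is how the variable names above were derived):

```python
# Sketch of Spring Boot "relaxed binding": property path -> env var name.
# Rule: uppercase everything, drop dashes, turn dots into underscores.
def to_env_var(property_path: str) -> str:
    return property_path.upper().replace("-", "").replace(".", "_")

print(to_env_var("auth.oauth2.client.azure.client-id"))
# -> AUTH_OAUTH2_CLIENT_AZURE_CLIENTID
print(to_env_var("auth.oauth2.client.azure.issuer-uri"))
# -> AUTH_OAUTH2_CLIENT_AZURE_ISSUERURI
```

This same rule explains why the env-var form of the compose file and the config.yaml form shown later in this article are interchangeable.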

2.2. Replace Placeholders with Your Values.

  • **Add Azure ClientID**: Replace with the Application (client) ID from Azure AD.
  • **Add Azure Client Secret**: Replace with the Client Secret you created.
  • **TenantID**: Replace with your Directory (tenant) ID.

Step 3. Verify the Configuration.

3.1. Access Kafka UI.

Open your browser and navigate to http://localhost:9091 (or the appropriate URL based on your configuration).

3.2. Sign In with Azure AD.

  • You should be redirected to the Azure AD login page.
  • Enter your Azure AD credentials to sign in.
  • Upon successful authentication, you will be redirected back to Kafka UI and have access to the interface.

Configuring Provectus Kafka-UI using Docker-Compose with Azure AD/OAuth2/SSO Authentication and RBAC for Authorization.

To configure Azure AD, follow the same steps mentioned above and continue to configure RBAC from here.

Restricting Access to Specific Users or Groups

  1. By default, any user in your Azure AD tenant can authenticate.
  2. You can restrict access to specific users or groups.
    • Use Azure AD Group Claims to include group information in the ID token.
    • Configure Role-Based Access Control (RBAC) in Kafka UI if supported.
    • Adjust your application to check for specific roles or group membership.
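The RBAC mechanism works because Azure AD embeds the user's app roles in the ID token, and Kafka-UI reads them from the claim named by roles-field (roles in this article). The hypothetical sketch below shows where that claim lives inside a JWT's base64url-encoded payload; it is an illustration, not Kafka-UI's actual code, and real code must verify the token signature first.

```python
import base64
import json

# Hypothetical sketch: extract a roles claim from a JWT ID token's payload.
# Real code must validate the signature before trusting any claim; this only
# illustrates the "roles" field that roles-field points at.
def roles_from_id_token(id_token: str, roles_field: str = "roles") -> list:
    payload_b64 = id_token.split(".")[1]          # header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get(roles_field, [])
```

For example, a token whose payload contains {"roles": ["UIAdmin"]} yields ["UIAdmin"], which Kafka-UI can then match against an RBAC subject of type role.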

In this approach, the authentication and authorization details live in a config.yaml file. To incorporate a config.yaml file into your Docker Compose setup for Kafka-UI, follow these steps.

You can organize your project folder like this:

project-directory/
│
├── docker-compose.yml
├── config/
│   └── config.yaml

This structure includes:

  • docker-compose.yml: The Docker Compose file you've shared.
  • config/config.yaml: The configuration file for kafka-ui.

Content of config.yaml

auth:
  type: OAUTH2
  oauth2:
    client:
      azure:
        clientId: "clientId"
        clientSecret: "ClientSecret"
        scope: openid
        client-name: azure
        provider: azure
        authorization-grant-type: authorization_code
        issuer-uri: "https://login.microsoftonline.com/{TenantID}/v2.0"
        jwk-set-uri: "https://login.microsoftonline.com/{TenantID}/discovery/v2.0/keys"
        user-name-attribute: name  # email
        custom-params:
          type: oauth
          roles-field: roles

rbac:
  roles:
    - name: "admins"
      clusters:
        - cluster 1
      subjects:
        - provider: oauth
          type: role
          value: "UIAdmin"
      permissions:
        - resource: applicationconfig
          actions: [ VIEW, EDIT ]
        - resource: clusterconfig
          actions: [ VIEW, EDIT ]
        - resource: topic
          value: ".*"
          actions:
            - VIEW
            - CREATE
            - EDIT
            - DELETE
            - MESSAGES_READ
            - MESSAGES_PRODUCE
            - MESSAGES_DELETE
        - resource: consumer
          value: ".*"
          actions: [ VIEW, DELETE, RESET_OFFSETS ]
        - resource: schema
          value: ".*"
          actions: [ VIEW, CREATE, DELETE, EDIT, MODIFY_GLOBAL_COMPATIBILITY ]
        - resource: connect
          value: ".*"
          actions: [ VIEW, EDIT, CREATE, RESTART ]
        - resource: ksql
          actions: [ EXECUTE ]
        - resource: acl
          actions: [ VIEW, EDIT ]

    - name: "readonly"
      clusters:
        - cluster 1
      subjects:
        - provider: oauth
          type: role
          value: "Viewer"
      permissions:
        - resource: clusterconfig
          actions: [ VIEW ]
        - resource: topic
          value: ".*"
          actions:
            - VIEW
            - MESSAGES_READ
        - resource: consumer
          value: ".*"
          actions: [ VIEW ]
        - resource: schema
          value: ".*"
          actions: [ VIEW ]
        - resource: connect
          value: ".*"
          actions: [ VIEW ]
        - resource: acl
          actions: [ VIEW ]
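Conceptually, Kafka-UI evaluates a request by finding roles whose subjects match the user's OAuth roles and whose resource (plus the optional regex in value) covers the target, then checking whether the requested action is listed. The simplified, hypothetical sketch below mimics that evaluation against a config shaped like the one above; names and structure are illustrative, not Kafka-UI internals.

```python
import re

# Simplified, hypothetical sketch of RBAC evaluation over a config shaped
# like the rbac.roles section above. Kafka-UI's real implementation differs;
# this only shows how subjects and regex "value" patterns conceptually combine.
def is_permitted(roles_config, user_roles, resource, action, value=None):
    for role in roles_config:
        subject_values = {s["value"] for s in role["subjects"]}
        if not subject_values & set(user_roles):
            continue  # user does not hold this role
        for perm in role["permissions"]:
            if perm["resource"] != resource:
                continue
            pattern = perm.get("value")
            if pattern is not None and (value is None or not re.fullmatch(pattern, value)):
                continue  # regex value filter did not match the target
            if action in perm["actions"]:
                return True
    return False

admins = {
    "name": "admins",
    "subjects": [{"provider": "oauth", "type": "role", "value": "UIAdmin"}],
    "permissions": [
        {"resource": "topic", "value": ".*",
         "actions": ["VIEW", "CREATE", "EDIT", "DELETE", "MESSAGES_READ"]},
        {"resource": "clusterconfig", "actions": ["VIEW", "EDIT"]},
    ],
}

print(is_permitted([admins], ["UIAdmin"], "topic", "EDIT", "orders"))  # True
print(is_permitted([admins], ["Viewer"], "topic", "EDIT", "orders"))   # False
```

The value: ".*" patterns in the config are regular expressions, which is why a single permission entry can cover every topic, consumer group, or schema at once.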

Modify docker-compose.yml to Mount config.yaml.

In the docker-compose.yml file, mount the config.yaml file as a volume into the kafka-ui container: the volumes entry below maps the local config/config.yaml file to /config.yaml inside the container, and SPRING_CONFIG_ADDITIONAL-LOCATION tells the application to load it.

version: '3'
services:
  kafka-ui:
    container_name: kafka-ui-provectus
    image: 'provectuslabs/kafka-ui:latest'
    ports:
      - "9091:8080"
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
      LOGGING_LEVEL_ROOT: DEBUG
      LOGGING_LEVEL_REACTOR: DEBUG
      SPRING_CONFIG_ADDITIONAL-LOCATION: /config.yaml
    volumes:
      - ./config/config.yaml:/config.yaml

The 'docker-compose-withAzureAD_RBAC.yaml' file and the config.yaml file (inside the config folder) are attached for reference.

To run with both docker-compose.yml and config.yaml, navigate to your project directory (where docker-compose.yml is located) and execute 'docker-compose up'.

Configure Azure AD authentication and Role-Based Access Control (RBAC) for Provectus Kafka-UI using Helm.

Why Use Helm over Docker Compose for Kafka-UI?

If your infrastructure is based on Kubernetes, Helm is the natural choice. It’s designed to work with Kubernetes, making it easier to manage complex deployments and services like Kafka, especially in cloud environments.

  1. Helm enables better scalability and reliability due to Kubernetes’ native features for clustering, scaling, and self-healing. This is crucial for production environments and large Kafka clusters.
  2. Helm Charts make it easier to manage environment-specific configurations and secrets, which is beneficial when working with complex configurations like Kafka, SSL, RBAC, and OAuth2.
  3. Helm’s built-in versioning and rollback features make it more suited for managing Kafka-UI in production, where you may need to deploy updates frequently or revert changes.

Attached is a values file named values-AzureAD_rbac_withHELM.yaml, which contains the Azure AD and RBAC configurations for Helm.

To configure Azure AD authentication and Role-Based Access Control (RBAC) for Provectus Kafka-UI using Helm, you'll need to follow these steps.

Prerequisites

  • Azure AD application setup with OAuth2 credentials (client-id, client-secret, tenant-id, etc.).
  • Kafka-UI Helm repository added to your local setup.
  • Reference Provectus_HelmChart

In PowerShell, execute the command below to add the Kafka-UI Helm repo.

helm repo add kafka-ui https://provectus.github.io/kafka-ui-charts

Prepare the Helm values file. The configuration below goes in values.yaml.

replicaCount: 1
image:
  registry: docker.io
  repository: provectuslabs/kafka-ui
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

existingConfigMap: ""
yamlApplicationConfig:
  # kafka:
  # clusters:
  # - name: Reimbursement
  # bootstrapServers: kafka-service:9092
  spring:
    security:
    
  auth:
    type: OAUTH2
    oauth2:
      client:
        azure:
          clientId: "ClientID"   
          clientSecret: "ClientSecret"   
          scope: openid
          client-name: azure
          provider: azure
          authorization-grant-type: authorization_code
          issuer-uri: "https://login.microsoftonline.com/{TenantID}/v2.0"   
          jwk-set-uri: "https://login.microsoftonline.com/{TenantID}/discovery/v2.0/keys"
          user-name-attribute: name
          custom-params:
            type: oauth
            roles-field: roles
              
  rbac:
    roles:
      - name: "admins"   # Role 1
        clusters:    # add the name(s) of the cluster(s) this role applies to
          - Cluster 1
          - Cluster 2
        subjects:
          - provider: oauth
            type: role
            value: "UIAdmin"
        permissions:
          - resource: applicationconfig
            actions: [ VIEW, EDIT ]
          - resource: clusterconfig
            actions: [ VIEW, EDIT ]
          - resource: topic
            value: ".*"
            actions:
              - VIEW
              - CREATE
              - EDIT
              - DELETE
              - MESSAGES_READ
              - MESSAGES_PRODUCE
              - MESSAGES_DELETE
          - resource: consumer
            value: ".*"
            actions: [ VIEW, DELETE, RESET_OFFSETS ]
          - resource: schema
            value: ".*"
            actions: [ VIEW, CREATE, DELETE, EDIT, MODIFY_GLOBAL_COMPATIBILITY ]
          - resource: connect
            value: ".*"
            actions: [ VIEW, EDIT, CREATE, RESTART ]
          - resource: ksql
            actions: [ EXECUTE ]
          - resource: acl
            actions: [ VIEW, EDIT ]
            
      - name: "readonly"   # Role 2
        clusters:         # add the name(s) of the cluster(s) this role applies to
          - Cluster 1
          - Cluster 2
        subjects:
          - provider: oauth
            type: role
            value: "Viewer"
        permissions:
          - resource: clusterconfig
            actions: [ VIEW ]
          - resource: topic
            value: ".*"
            actions:
              - VIEW
              - MESSAGES_READ
          - resource: consumer
            value: ".*"
            actions: [ VIEW ]
          - resource: schema
            value: ".*"
            actions: [ VIEW ]
          - resource: connect
            value: ".*"
            actions: [ VIEW ]
          - resource: acl
            actions: [ VIEW ]

yamlApplicationConfigConfigMap:
  {}
  # keyName: config.yml
  # name: configMapName
existingSecret: ""
envs:
  secret: {}
  config: {}

networkPolicy:
  enabled: false
  egressRules:
    ## Additional custom egress rules
    customRules: []
  ingressRules:
    ## Additional custom ingress rules
    customRules: []

podAnnotations: {}
podLabels: {}

annotations: {}

probes:
  useHttpsScheme: false

podSecurityContext:
  {}
  # fsGroup: 2000

securityContext:
  {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
  ingressClassName: ""
  path: "/"
  pathType: "Prefix"
  host: ""
  tls:
    enabled: false
    secretName: ""

resources:
  {}
  # limits:
  #   cpu: 200m
  #   memory: 512Mi
  # requests:
  #   cpu: 200m
  #   memory: 256Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

env:
  - name: DYNAMIC_CONFIG_ENABLED
    value: 'true'
  - name: LOGGING_LEVEL_ROOT
    value: DEBUG
  - name: LOGGING_LEVEL_REACTOR
    value: DEBUG

initContainers: {}

volumeMounts: {}

volumes: {}

Install Kafka-UI using Helm: navigate to the folder where values.yaml is saved and execute the command below. This pulls the Provectus image into the Kubernetes cluster and applies the Azure AD and RBAC configuration.

helm install kafka-ui kafka-ui/kafka-ui -f values.yaml

With Helm installed on your Kubernetes cluster, access Kafka-UI by port forwarding.

Execute the command below to check that the Pod is created and running.

kubectl get pods

Use the command below to do the port forwarding, replacing $POD_NAME with the pod name returned by the previous command.

kubectl port-forward --namespace default $POD_NAME 9091:8080

Browse to 'http://localhost:9091/' to open Kafka-UI.

Conclusion

Kafka UI tools are essential for effectively managing and monitoring Kafka clusters. By providing a graphical interface, these tools simplify tasks that would otherwise be complex and time-consuming. Whether you're a Kafka beginner or an experienced administrator, a well-chosen Kafka UI tool can greatly enhance your productivity and efficiency.

Provectus is a valuable tool for anyone working with Apache Kafka. Its intuitive interface, comprehensive features, and open-source nature make it a popular choice among developers and administrators. By using Provectus, you can effectively manage your Kafka clusters, optimize performance, and ensure data reliability.

By implementing robust authentication and authorization mechanisms, organizations can protect their Kafka clusters from unauthorized access and data breaches. This not only safeguards sensitive information but also ensures the reliability and integrity of the streaming platform.