Integrate Azure AD OAuth2 SSO Authentication and RBAC for AKHQ Kafka-UI

Kafka UI is a powerful web-based interface designed to facilitate interaction with Apache Kafka, a distributed event-streaming platform. Kafka is widely used for building real-time data pipelines and streaming applications, making Kafka UI an invaluable tool for administrators and developers. This article provides an in-depth look into Kafka UI, its features, use cases, and configuration aspects.

Why does Kafka require a UI?

Apache Kafka, while extremely efficient for event streaming, comes with a steep learning curve. Managing topics, consumers, and producers, as well as monitoring data flow, can be overwhelming, especially in large-scale systems. Kafka UI solves this by offering a user-friendly graphical interface that helps simplify common operations.

  • Topic management: Create, delete, and manage topics.
  • Message browsing: View and filter messages within topics.
  • Consumer group monitoring: View consumer group lags, offsets, and states.
  • Schema registry management: Manage and monitor schema changes in Kafka.
  • Kafka broker configurations: Review the configuration settings of brokers.

This tool is especially useful for developers, DevOps engineers, and system administrators who need to manage Kafka clusters efficiently.

Key Features

  1. Topic Management
    • View a list of Kafka topics.
    • Create, delete, or modify topic configurations.
    • Inspect partition details (e.g., leader, ISR, and replication factors).
    • Browse messages and apply filters for debugging purposes.
  2. Consumer Groups
    • Monitor consumer group activity and offsets.
    • Visualize consumer group lags, helping identify slow consumers.
    • Reset offsets for topics in specific consumer groups.
  3. Schema Registry Integration
    • Easily inspect and manage Avro schemas used in Kafka.
    • Register and delete schema versions.
    • View schema compatibility and updates across topics.
  4. ACLs & Authorization
    • Manage Access Control Lists (ACLs) to control access to topics, consumer groups, and brokers.
    • View role-based access permissions.
  5. Cluster and Broker Management
    • Inspect cluster health and broker configurations.
    • View broker logs and metrics.
  6. Real-time Monitoring and Debugging
    • Kafka UI provides real-time visibility into the status of the cluster.
    • It simplifies debugging by allowing users to inspect messages within topics, visualize the flow of messages, and track down errors related to producers or consumers.

Security Considerations

When exposing Kafka UI in production environments, it’s critical to secure the interface with proper authentication and authorization. Some common approaches include:

  • Basic Authentication: Using HTTP basic authentication to restrict access.
  • OAuth2: Kafka UI supports OAuth2 for authentication, which is useful for integrating with identity providers like Azure AD.
  • SSL/TLS Encryption: Ensure that all communications between Kafka UI and Kafka brokers are encrypted using SSL/TLS.

Monitoring and Troubleshooting

Kafka UI provides real-time metrics and logs, but additional monitoring tools, such as Prometheus and Grafana, can be used alongside Kafka UI for comprehensive monitoring. Some advanced use cases include the following (a sample Prometheus alert rule for consumer lag is sketched after the list):

  • Setting up alerting when consumer lag exceeds a certain threshold.
  • Monitoring broker health for CPU, memory, and disk usage.
  • Debugging message flow issues by tracing partitions and visualizing consumer activity.
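
As an illustration of the alerting use case, a minimal Prometheus alerting rule might look like the sketch below. This is only an illustration: it assumes consumer lag is already exported to Prometheus (for example via a Kafka exporter exposing a kafka_consumergroup_lag metric), and the threshold and labels are placeholders.

groups:
  - name: kafka-consumer-lag
    rules:
      - alert: KafkaConsumerGroupLagHigh
        # Fire when total lag for a consumer group/topic stays above 10000 for 10 minutes
        expr: sum by (consumergroup, topic) (kafka_consumergroup_lag) > 10000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Consumer group {{ $labels.consumergroup }} is lagging on topic {{ $labels.topic }}"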

The Importance of Authentication and Authorization in Kafka UI Tools

Kafka UI tools provide a graphical interface for managing Kafka clusters. While these tools significantly enhance the user experience by providing a visual layer to Kafka operations, they also expose critical control points that can affect the Kafka infrastructure if accessed by unauthorized users.

Why Authentication and Authorization are Crucial
 

1. Protecting Sensitive Data and Operations

Kafka often handles real-time event data, including financial transactions, user activity logs, or business-critical information. With a Kafka UI tool, users can:

  • Browse message content on topics that may include sensitive data.
  • Modify topics, delete data, or reset consumer offsets, which could disrupt services or lead to data loss.

Without authentication (confirming the identity of users) and authorization (defining permissions for users), anyone with access to the Kafka UI could potentially:

  • View confidential messages.
  • Perform destructive operations (e.g., deleting topics or altering configurations).
  • Impact the stability of services consuming Kafka streams.

Example Scenario. Imagine a situation where a user with no access controls inadvertently deletes a topic that stores critical transaction records. This could cause downtime, financial loss, or data integrity issues. Authentication ensures that only authorized personnel can perform such actions.

2. Preventing Unauthorized Access and Attacks

Without proper security controls, Kafka UI tools become vulnerable to various types of attacks.

  • Unauthorized access: Without authentication, any user on the network can gain access to Kafka’s management interface.
  • Insider threats: Even within an organization, there may be different roles (e.g., admin, developer, analyst) that should have varying levels of access to Kafka clusters.
  • External attacks: Exposing Kafka UI without secure authentication can invite external attackers to exploit the system and tamper with Kafka data streams.

Example Scenario. A malicious actor gains access to a Kafka UI tool and changes the consumer offsets or reconfigures broker settings, leading to service outages or severe disruptions in data flow.

3. Ensuring Role-Based Access Control (RBAC)

In enterprise environments, different teams (such as developers, operators, and data analysts) often work with Kafka, but their required permissions will vary significantly. Role-Based Access Control (RBAC) helps ensure that each user only has access to the resources necessary for their role. Without RBAC in Kafka UI tools, everyone would have unrestricted access to all Kafka resources, which can result in:

  • Accidental Misconfigurations: Developers might accidentally alter production topics or consumer groups.
  • Data Breaches: Analysts might gain access to sensitive data they aren’t authorized to view.

Example Scenario. A developer working on a test environment might need read-only access to Kafka topics, whereas an administrator requires full control to manage configurations, topics, and brokers. Without RBAC, the developer could mistakenly perform administrative tasks, leading to potential misconfigurations or data exposure.

4. Compliance with Security Standards

Organizations that deal with sensitive data, such as healthcare (HIPAA), finance (PCI-DSS), or consumer data (GDPR), need to implement stringent security practices. This includes securing all entry points to the infrastructure, including Kafka UI tools.

Without proper authentication and authorization, organizations could be non-compliant with regulatory standards, resulting in legal ramifications, fines, and loss of customer trust.

Types of Authentication and Authorization Mechanisms for Kafka UI
 

1. Basic Authentication

The simplest form of authentication, basic authentication, requires users to provide a valid username and password before gaining access to the Kafka UI. This prevents unauthorized users from accessing the interface.

  1. Pros
    • Simple to configure and use.
    • Effective for smaller environments or internal access.
  2. Cons
    • Passwords are vulnerable if not encrypted.
    • No role-based access control; everyone with valid credentials gets full access.

2. OAuth2 Authentication

For more complex environments, OAuth2 allows Kafka UI tools to integrate with Identity Providers (IdPs) such as Google, GitHub, or Azure Active Directory (Azure AD). This method is commonly used for organizations that want to enforce Single Sign-On (SSO) and centralized access control.

Benefits of OAuth2

  • Allows integration with existing user management systems.
  • Can enforce multi-factor authentication (MFA) for added security.
  • Provides detailed control over user roles and permissions.

Example. An enterprise using Azure AD for identity management can configure AKHQ or Kafka UI to authenticate users via OAuth2, ensuring that only authorized users with specific roles can access the Kafka UI.

3. SASL/SSL for Kafka Cluster Authentication

In production, Kafka brokers are often configured with SASL (Simple Authentication and Security Layer) or SSL (Secure Sockets Layer) for authentication. Kafka UI tools should support these authentication methods to ensure a secure connection between the tool and the Kafka cluster.

SASL/SSL allows Kafka UI to interact securely with Kafka brokers, ensuring that:

  • Only authenticated users can interact with the Kafka brokers via the UI.
  • Data in transit between Kafka UI and Kafka brokers is encrypted, preventing eavesdropping.

Example. If a Kafka cluster is configured to use SASL/SSL for securing communication, the Kafka UI (such as AKHQ) must be configured with valid SSL certificates and authentication credentials.
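
For reference, a minimal AKHQ connection block for a SASL-over-TLS cluster might look like the sketch below. The cluster name, mechanism (SCRAM-SHA-512 here), truststore path, and credentials are placeholders; the exact properties depend on how your brokers are secured.

akhq:
  connections:
    secured-cluster:
      properties:
        bootstrap.servers: "broker1:9093,broker2:9093"
        security.protocol: "SASL_SSL"
        sasl.mechanism: "SCRAM-SHA-512"
        # JAAS configuration for the SCRAM credentials used by AKHQ
        sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="akhq-user" password="change-me";
        # Truststore containing the broker CA certificate
        ssl.truststore.location: "/app/truststore.jks"
        ssl.truststore.password: "change-me"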

4. Role-Based Access Control (RBAC)

RBAC provides fine-grained control over what actions users can perform in the Kafka UI. It allows you to define roles such as:

  • Admin: Full access to all Kafka operations, including creating/deleting topics and managing brokers.
  • Developer: Limited access to create topics and view messages but restricted from modifying sensitive configurations.
  • Analyst: Read-only access to topics and message browsing for analytics purposes.

Implementing RBAC ensures that users only have the necessary permissions required for their roles, reducing the risk of accidental or malicious changes to Kafka configurations.

Technical Overview of AKHQ: A Kafka UI Tool

AKHQ (formerly known as KafkaHQ) is a robust open-source Kafka UI tool designed to provide developers and administrators with an easy-to-use interface for managing and monitoring Kafka clusters. With AKHQ, users can interact with Apache Kafka, view real-time data, and manage critical aspects of their Kafka infrastructure with ease. This article offers a technical dive into AKHQ, including its key features, deployment options, and configuration.

What is AKHQ?

AKHQ is a web-based interface that simplifies the management and monitoring of Apache Kafka. Kafka, a distributed streaming platform, is used to build real-time data pipelines and applications. However, managing Kafka at scale can be complex. AKHQ addresses this by providing a user-friendly dashboard to handle tasks such as:

  • Managing Kafka topics, partitions, and replicas.
  • Browsing and filtering Kafka messages.
  • Monitoring consumer group lags and offset tracking.
  • Managing access control lists (ACLs) and role-based permissions.
  • Monitoring Kafka Connect, Schema Registry, and more.

AKHQ is especially useful for Kafka administrators, DevOps engineers, and developers who need real-time visibility and control over Kafka clusters.

Key Features of AKHQ

  1. Topic Management
    • Create, update, and delete Kafka topics.
    • View detailed information on topics, including partitions, replication, and ISR (In-Sync Replicas) status.
    • Browse and filter messages within topics by key, value, or timestamp.
    • Support is available for both Avro and JSON message formats.
  2. Consumer Groups
    • Visualize consumer group lags.
    • Monitor offsets for each partition and consumer group.
    • Reset offsets to a specific value, earliest, or latest.
  3. Schema Registry
    • View and manage Avro schemas.
    • Register and delete schema versions.
    • View schema compatibility and version history for topics.
  4. Kafka Connect Integration
    • Manage Kafka Connect connectors.
    • View connector status and configuration.
    • Restart or pause connectors as needed.
  5. Security and Access Management
    • Manage Access Control Lists (ACLs) to control permissions for topics, consumer groups, and brokers.
    • Role-based access control (RBAC) to assign users different permission levels based on their roles.
  6. Monitoring & Alerting
    • Real-time monitoring of Kafka clusters, brokers, topics, and consumer groups.
    • Set up alerts based on consumer lag or broker performance issues.
    • Integration with external monitoring tools like Prometheus and Grafana.
  7. Kafka Cluster Health & Metrics
    • Monitor Kafka broker health, CPU/memory usage, and disk metrics.
    • View detailed logs and monitor event streams in real time.

Use Cases for AKHQ

  1. Real-time Monitoring and Debugging: AKHQ allows teams to monitor Kafka clusters in real time. This is particularly useful for identifying issues with consumer groups, such as lag, and for debugging message flows between producers and consumers.
  2. Schema Management: For applications relying on Avro schemas, AKHQ integrates with Schema Registry, enabling users to manage schemas directly from the UI.
  3. Streamlining Operations: Instead of using the Kafka command line tools, AKHQ allows users to create, delete, and manage Kafka topics and consumer groups through a graphical interface, saving time and reducing the learning curve.
  4. DevOps Automation: With its role-based access control (RBAC) and monitoring capabilities, AKHQ can be part of a DevOps pipeline, where different teams get access to monitor and debug Kafka clusters without needing full admin rights.

Installation and Configuration

AKHQ can be installed on various operating systems, including Linux, macOS, and Windows. The installation process typically involves cloning the repository from GitHub, installing dependencies, and running the application. Configuration options allow you to customize AKHQ to your specific needs.

Configuring AKHQ Kafka-UI using Docker-Compose without Authentication.

version: '3.6'
services:
  akhq:
    image: tchiotludo/akhq
    restart: unless-stopped
    environment:
      AKHQ_CONFIGURATION: |
        akhq:
          server:
            base-path: /
          connections:
            placeholder-cluster:
              properties:
                bootstrap.servers: "localhost:29092"  # Dummy server, no need to be real
    ports:
      - 9091:8080

Note. AKHQ needs at least one Kafka cluster to load the UI, so a dummy cluster is added here. Clusters must be added via the configuration file; there is no option to add a Kafka cluster from the UI.

Create a docker-compose.yaml file and paste the above code snippet. The 'docker-compose-KafkaUIOnly.yaml' file is attached for reference.

Open PowerShell, navigate to the file location, and execute 'docker-compose up'.

The AKHQ container will be running in Docker Desktop.

Browse to 'http://localhost:9091/' and the AKHQ Kafka UI will be displayed.

Configuring AKHQ Kafka-UI using an application configuration file with Basic Authentication and Readonly mode.

The 'application-AuthOnly.yaml' configuration file is attached for reference. Place the file inside the app folder and replace the bootstrap server details, keystore location, and password as needed.

In the security block, usernames and passwords are defined. Passwords must be stored in hashed form; the URLs in the comments below point to online generators that can produce the hashes.
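
If you prefer to generate the hashes locally instead of using an online generator, the commands below are one way to do it. This is a sketch, assuming a Linux/macOS shell with coreutils and the htpasswd tool from apache2-utils; note that htpasswd emits a $2y$ bcrypt prefix while the sample hash below uses $2a$, so verify which variant your AKHQ version accepts.

# SHA-256 hash (used for the entries without an explicit passwordHash in the configuration below)
echo -n 'pass123' | sha256sum

# bcrypt hash (used with passwordHash: BCRYPT); -C 12 sets the cost factor
htpasswd -bnBC 12 "" 'pass123' | tr -d ':\n'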

micronaut:
  security:
    enabled: true
akhq:
  connections:
    rbac:
      properties:
        bootstrap.servers: "*bootstrap server URL"* 
        security.protocol: "SSL"
        ssl.keystore.location: "*location for Keystore*"
        ssl.keystore.password: "*password of keystore*"

    reimb:
      properties:
        bootstrap.servers: "*bootstrap server URL"* 
        security.protocol: "SSL"
        ssl.keystore.location: "*location for Keystore*"
        ssl.keystore.password: "*password of keystore*"

  security:
    default-group: reader
    basic-auth:
      - username: admin
        password: "$2a$12$WtkG2l5pXgCdQdLr9Gbsz.Sg3PAobtKt4hzXS4g4wNaUKA2kDJSva"  #pass123 - https://bcrypt-generator.com/
        passwordHash: BCRYPT
        groups:
        - admin
      - username: reader
        password: "9b8769a4a742959a2d0298c36fb70623f2dfacda8436237df08d8dfd5b37374c"  #pass123 - https://tools.keycdn.com/sha256-online-generator
        groups:
        - reader
      - username: viewer
        password: "9b8769a4a742959a2d0298c36fb70623f2dfacda8436237df08d8dfd5b37374c"  #pass123 - https://tools.keycdn.com/sha256-online-generator

In PowerShell, navigate to the folder where the application file has been saved and execute the command below.

docker run -d `
  -p 9091:8080 `
  -v "C:\AKHQKafkaUI\app\application.yml:/app/application.yml" `
  -v "C:\AKHQKafkaUI\app:/app/config" `
  --name kafka_UI_akhq tchiotludo/akhq

The container will be running.
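
To confirm the container started and picked up the configuration file, tail its logs:

docker logs -f kafka_UI_akhq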

When we browse to 'http://localhost:9091/', a login page prompts for user credentials.

Log in as an admin user.

Log in as a Reader user.

Configuring AKHQ Kafka-UI using application.yaml with Azure AD/OAuth2/SSO Authentication and RBAC for Authorization.

Configuring Single Sign-On (SSO) with Azure Active Directory (Azure AD) for Kafka UI by AKHQ involves several steps:

  • Registering an Application in Azure AD
  • Configuring Kafka UI to Use Azure AD for Authentication

Step 1. Register an Application in Azure Active Directory.

First, you need to register your Kafka UI application with Azure AD to obtain the necessary credentials for OAuth 2.0 authentication.

1.1. Sign in to the Azure Portal.

Navigate to Azure Portal and sign in with your administrator account.

1.2. Register a New Application.

  1. In the Azure Portal, select Azure Active Directory from the left-hand navigation pane.
  2. In the Azure AD page, select App registrations from the menu.
  3. Click on New Registration.
  4. Configure the application registration.
    • Name: Enter a name for your application (e.g., Kafka UI SSO).
    • Supported account types: Choose Accounts in this organizational directory only if you want only users in your organization to access Kafka UI.
    • Redirect URI: Set the redirect URI to http://localhost:9091/oauth/callback/azure (you will adjust this later if needed).
  5. Click Register to create the application.

1.3. Configure Authentication Settings

  1. After registration, you will be redirected to the application's Overview page.
  2. Select Authentication from the left-hand menu.
  3. Under Platform Configurations, click Add a Platform and select Web.
  4. Configure Web Platform
    • Redirect URIs: Add the redirect URI where Kafka UI will receive authentication responses, for example http://localhost:9091/oauth/callback/azure.
    • Logout URL (optional): You can specify a logout URL if needed.
    • Implicit grant and hybrid flows: Check Access tokens and ID tokens to enable them.
  5. Click Configure to save the settings.

1.4. Create a Client Secret

  1. On the application page, select Certificates & secrets.
  2. Add a client secret.
    • Description: Enter a description (e.g., Kafka UI Client Secret).
    • Expires: Select an appropriate expiration period.
  3. Click Add.
  4. Copy the client secret: After creating the secret, copy the Value immediately. You will need this value later, and it will not be shown again.

1.5. Gather Necessary Information

Make a note of the following information.

  • Application (client) ID: Found on the application's Overview page.
  • Directory (tenant) ID: Also found on the Overview page.
  • Client Secret: The value you copied in the previous step.
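
If you prefer scripting over the portal, the same registration can be done with the Azure CLI. The commands below are a sketch and assume a recent Azure CLI version (2.37 or later); the display name and redirect URI mirror the values used above.

# Register the application with the redirect URI used by AKHQ
az ad app create --display-name "Kafka UI SSO" --sign-in-audience AzureADMyOrg --web-redirect-uris "http://localhost:9091/oauth/callback/azure"

# Create a client secret for the registered application (copy the returned password value)
az ad app credential reset --id <application-client-id> --append

# Look up the Directory (tenant) ID
az account show --query tenantId -o tsv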

Step 2. Configure Kafka UI to Use Azure AD for Authentication.

Now that you have the necessary credentials, you can configure Kafka UI to use Azure AD for authentication.

2.1. Update the application.yml file.

Modify your application.yml file to include the OAuth 2.0 configuration.

micronaut:  
  security:
    enabled: true
    oauth2:
      enabled: true
      clients:
        azure:
          client-id: "clientID"
          client-secret: "ClientSecret"
          scopes:
            - openid
          openid:
            issuer: "https://login.microsoftonline.com/{TenantID}/v2.0"  

akhq:
  server:
    access-log:
      enabled: true
      name: org.akhq.log.access  
  connections:
    rbac:
      properties:
        bootstrap.servers: "bootstrap URL"  
        security.protocol: "SSL"
        ssl.keystore.location: "location of Keystore"
        ssl.keystore.password: "password of Keystore"

    reimb:
      properties:
        bootstrap.servers: "bootstrap URL"  
        security.protocol: "SSL"
        ssl.keystore.location: "location of Keystore"
        ssl.keystore.password: "password of Keystore"
  security:
    default-group: no-roles
    roles:
      Viewer:
        - resources: [ "TOPIC" ]
          actions: [ "READ" ]
        - resources: [ "TOPIC_DATA" ]
          actions: [ "READ" ]
        - resources: [ "CONSUMER_GROUP" ]
          actions: [ "READ" ]   
        - resources: [ "CONNECT_CLUSTER" ]
          actions: [ "READ" ]
        - resources: [ "CONNECTOR" ]
          actions: [ "READ" ]
        - resources: [ "SCHEMA" ]
          actions: [ "READ" ]          
        - resources: [ "NODE" ]
          actions: [ "READ" ]              
        - resources: [ "ACL" ]
          actions: [ "READ" ]                        
        - resources: [ "KSQLDB" ]
          actions: [ "READ" ]
      UIAdmin:
        - resources: [ "TOPIC" ]
          actions: [ "READ", "CREATE", "UPDATE", "DELETE", "READ_CONFIG", "ALTER_CONFIG" ]
        - resources: [ "TOPIC_DATA" ]
          actions: [ "READ", "CREATE", "UPDATE", "DELETE" ]
        - resources: [ "CONSUMER_GROUP" ]
          actions: [ "READ", "DELETE", "UPDATE_OFFSET", "DELETE_OFFSET" ]   
        - resources: [ "CONNECT_CLUSTER" ]
          actions: [ "READ" ]
        - resources: [ "CONNECTOR" ]
          actions: [ "READ", "CREATE", "DELETE", "UPDATE_STATE" ]
        - resources: [ "SCHEMA" ]
          actions: [ "READ", "CREATE", "UPDATE", "DELETE", "DELETE_VERSION" ]          
        - resources: [ "NODE" ]
          actions: [ "READ", "READ_CONFIG", "ALTER_CONFIG" ]              
        - resources: [ "ACL" ]
          actions: [ "READ" ]                        
        - resources: [ "KSQLDB" ]
          actions: [ "READ", "EXECUTE" ]
    groups:
      viewer-group:
        - role: Viewer
          patterns: [ "fdn.*", "cdh.*" ]
          clusters: [ "rbac.*", "reimb.*" ]
      admin-group:
        - role: UIAdmin
          patterns: [ "fdn.*", "cdh.*" ]
          clusters: [ "rbac.*", "reimb.*" ] 
    oidc:
      enabled: true
      providers:
        azure:
          label: "Click here to Login as SSO"
          username-field: email
          groups-field: roles
          default-group: Viewer
          groups:
            - name: UIAdmin
              groups:
                - admin-group
            - name: Viewer
              groups:
                - viewer-group

The 'application-AzureAD_RBAC.yaml' file is attached for reference.

2.2. Replace Placeholders with Your Values

  • clientID: Replace with the Application (client) ID from Azure AD.
  • ClientSecret: Replace with the client secret you created.
  • {TenantID}: Replace with your Directory (tenant) ID.
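
Rather than hard-coding the client secret in the file, you can also rely on Micronaut's property placeholders and inject the values as environment variables. This is a sketch; the variable names are arbitrary and assume the variables are set in the container's environment.

micronaut:
  security:
    oauth2:
      clients:
        azure:
          # Resolved from environment variables at startup
          client-id: "${AZURE_CLIENT_ID}"
          client-secret: "${AZURE_CLIENT_SECRET}"
          scopes:
            - openid
          openid:
            issuer: "https://login.microsoftonline.com/${AZURE_TENANT_ID}/v2.0"

The values can then be supplied at run time, for example with docker run -e AZURE_CLIENT_ID=... -e AZURE_CLIENT_SECRET=... -e AZURE_TENANT_ID=...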

Step 3. Verify the Configuration

3.1. Access Kafka UI

Open your browser and navigate to http://localhost:9091 (or the appropriate URL based on your configuration).

3.2. Sign In with Azure AD

  • You should be redirected to the Azure AD login page.
  • Enter your Azure AD credentials to sign in.
  • Upon successful authentication, you will be redirected back to Kafka UI and have access to the interface.

Screenshots: the Azure AD login page and the AKHQ dashboard after signing in.

Configure Azure AD authentication and Role-Based Access Control (RBAC) for AKHQ Kafka-UI using Helm.

Why Use Helm over Docker Compose for Kafka-UI?

  • If your infrastructure is based on Kubernetes, Helm is the natural choice. It’s designed to work with Kubernetes, making it easier to manage complex deployments and services like Kafka, especially in cloud environments.
  • Helm enables better scalability and reliability due to Kubernetes’ native features for clustering, scaling, and self-healing. This is crucial for production environments and large Kafka clusters.
  • Helm Charts make it easier to manage environment-specific configurations and secrets, which is beneficial when working with complex configurations like Kafka, SSL, RBAC, and OAuth2.
  • Helm’s built-in versioning and rollback features make it more suited for managing Kafka-UI in production, where you may need to deploy updates frequently or revert changes.
  • Attached is a values file named values-AzureAD_rbac_withHELM.yaml, which contains Azure AD and RBAC configuration using Helm.

To configure Azure AD authentication and Role-Based Access Control (RBAC) for AKHQ Kafka-UI using Helm, you'll need to follow these steps.

Prerequisites

  • Azure AD application setup with OAuth2 credentials (client-id, client-secret, tenant-id, etc.).
  • The Kafka-UI Helm repository has been added to your local setup.

Reference: AKHQ_HelmChart

In PowerShell, execute the command below to add the AKHQ Helm repository.

helm repo add akhq https://akhq.io/
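
After adding the repository, refresh the local chart index and confirm the chart is visible:

helm repo update
helm search repo akhq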

Prepare the Helm values file: the configuration below goes into values.yaml.

# imagePullSecrets:
#  - name: my-repository-secret
image:
  repository: tchiotludo/akhq
  tag: "" # uses Chart.AppVersion by default
# Number of old deployment ReplicaSets to retain. The rest will be garbage collected.
revisionHistoryLimit: 10

# custom annotations (example: for prometheus)
annotations: {}
  # prometheus.io/scrape: 'true'
  # prometheus.io/port: '8080'
  # prometheus.io/path: '/prometheus'

podAnnotations: {}

configmapAnnotations: {}
  # vault.security.banzaicloud.io/vault-role: akhq
  # vault.security.banzaicloud.io/vault-serviceaccount: akhq

# custom labels
labels: {}
  # custom.label: 'true'

podLabels: {}

## You can put directly your configuration here... or add java opts or any other env vars
extraEnv: []
# - name: AKHQ_CONFIGURATION
#   value: |
#       akhq:
#         secrets:
#           docker-kafka-server:
#             properties:
#               bootstrap.servers: "kafka:9092"
# - name: JAVA_OPTS
#   value: "-Djavax.net.ssl.trustStore=/opt/java/openjdk/lib/security/cacerts -Djavax.net.ssl.trustStorePassword=password"
# - name: CLASSPATH
#   value: "/any/additional/jars/desired.jar:/go/here.jar"

## Or you can also use configmap for the configuration...
configuration:
  # Micronaut Configuration
  micronaut:
    security:
      enabled: true
      oauth2:
        enabled: true
        clients:
          azure:
            client-id: "client ID"
            client-secret: "Client Secret"
            scopes:
              - openid
            openid:
              issuer: "https://login.microsoftonline.com/{TenantID}/v2.0"

  akhq:
    security:
      enabled: true
      default-group: no-roles
      roles:
        Viewer: # Role 1
          - resources: ["TOPIC"]
            actions: ["READ"]
          - resources: ["TOPIC_DATA"]
            actions: ["READ"]
          - resources: ["CONSUMER_GROUP"]
            actions: ["READ"]
          - resources: ["CONNECT_CLUSTER"]
            actions: ["READ"]
          - resources: ["CONNECTOR"]
            actions: ["READ"]
          - resources: ["SCHEMA"]
            actions: ["READ"]
          - resources: ["NODE"]
            actions: ["READ"]
          - resources: ["ACL"]
            actions: ["READ"]
          - resources: ["KSQLDB"]
            actions: ["READ"]
        UIAdmin: # Role 2
          - resources: ["TOPIC"]
            actions: ["READ", "CREATE", "UPDATE", "DELETE", "READ_CONFIG", "ALTER_CONFIG"]
          - resources: ["TOPIC_DATA"]
            actions: ["READ", "CREATE", "UPDATE", "DELETE"]
          - resources: ["CONSUMER_GROUP"]
            actions: ["READ", "DELETE", "UPDATE_OFFSET", "DELETE_OFFSET"]
          - resources: ["CONNECT_CLUSTER"]
            actions: ["READ"]
          - resources: ["CONNECTOR"]
            actions: ["READ", "CREATE", "DELETE", "UPDATE_STATE"]
          - resources: ["SCHEMA"]
            actions: ["READ", "CREATE", "UPDATE", "DELETE", "DELETE_VERSION"]
          - resources: ["NODE"]
            actions: ["READ", "READ_CONFIG", "ALTER_CONFIG"]
          - resources: ["ACL"]
            actions: ["READ"]
          - resources: ["KSQLDB"]
            actions: ["READ", "EXECUTE"]
      groups:
        viewer-group:
          - role: Viewer
            patterns: ["fdn.*", "cdh.*"] # need to include topic regular expression
            clusters: ["rbac.*", "reimb.*"] # need to include cluster regular expression
        admin-group:
          - role: UIAdmin
            patterns: ["fdn.*", "cdh.*"] # need to include topic regular expression
            clusters: ["rbac.*", "reimb.*"] # need to include cluster regular expression
      oidc:
        enabled: true
        providers:
          azure:
            label: "Click here to Login as SSO"
            username-field: email
            # specifies the field name in the oidc claim containing the user-assigned role
            groups-field: roles
            default-group: Viewer
            groups:
              - name: UIAdmin
                groups:
                  - admin-group
              - name: Viewer
                groups:
                  - viewer-group

    server:
      access-log:
        enabled: true
        name: org.akhq.log.access

    connections:
      rbac: # Kafka Cluster 1
        properties:
          bootstrap.servers: "bootstrap URL"
          security.protocol: "SSL"
          ssl.keystore.location: "keystore location"
          ssl.keystore.password: "keystore password"

      reimb: # Kafka Cluster 2
        properties:
          bootstrap.servers: "bootstrap URL"
          security.protocol: "SSL"
          ssl.keystore.location: "keystore location"
          ssl.keystore.password: "keystore password"

##... and secret for connection information
existingSecrets: ""
# name of the existingSecret
secrets: {}

# Provide extra base64 encoded kubernetes secrets (keystore/truststore)
kafkaSecrets: {}
#  truststore.jks: MIIIE...
#  keystore.jks: MIIIE...

# Any extra volumes to define for the pod (like keystore/truststore)
extraVolumes:
  - name: certstore-secret
    secret:
      secretName: keystore1
      items:
        - key: "Key1.jka"
          path: "key2.jka"
  - name: certstore-secret-1
    secret:
      secretName: keystore2
      items:
        - key: "keystore2.jks"
          path: "keystore2.jks"

# Any extra volume mounts to define for the akhq container
extraVolumeMounts:
  - name: certstore-secret
    mountPath: "/app/keystore.jks"
    subPath: "keystore.jks"
    readOnly: true
  - name: certstore-secret-1
    mountPath: "/app/keystore2.jks"
    subPath: "keystore2.jks"
    readOnly: true

# Specify ServiceAccount for pod
serviceAccountName: null
serviceAccount:
  create: false
  # annotations:
  #  eks.amazonaws.com/role-arn: arn:aws:iam::123456789000:role/iam-role-name-here

# Add your own init container or uncomment and modify the example.
initContainers: {}
#   create-keystore:
#     image: "eclipse-temurin:11-jre"
#     command: ['sh', '-c', 'keytool']
#     volumeMounts:
#      - mountPath: /tmp
#        name: certs

# Configure the Pod Security Context
securityContext: {}
  # runAsNonRoot: true
  # runAsUser: 1000

# Configure the Container Security Context
containerSecurityContext: {}
  # allowPrivilegeEscalation: false
  # privileged: false
  # capabilities:
  #   drop:
  #     - ALL
  # runAsNonRoot: true
  # runAsUser: 1001
  # readOnlyRootFilesystem: true

service:
  enabled: true
  type: ClusterIP
  port: 80
  managementPort: 28081
  # httpNodePort: 32551
  # managementNodePort: 32552
  labels: {}
  loadBalancerIP: ""
  annotations:
    # cloud.google.com/load-balancer-type: "Internal"

ingress:
  enabled: false
  ingressClassName: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths:
    - /
  pathType: "ImplementationSpecific"
  hosts:
    - akhq.demo.com
  tls: []
  #  - secretName: akhq-tls
  #    hosts:
  #      - akhq.demo.com

### Readiness / Liveness probe config.
readinessProbe:
  enabled: true
  prefix: "" # set same as `micronaut.server.context-path`
  path: /health
  port: management
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3
  httpGetExtra: {}

livenessProbe:
  enabled: true
  prefix: "" # set same as `micronaut.server.context-path`
  path: /health
  port: management
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3
  httpGetExtra: {}

resources: {}
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

networkPolicy:
  enabled: true

If you are using a keystore, add it as a Kubernetes secret and reference that secret in the values.yaml file in the extraVolumes and extraVolumeMounts blocks.

kubectl create secret generic cdc-keystore --from-file=C:/app/keystore1.jks
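
To confirm the secret exists and to see which keys it contains (the key names are what extraVolumes refers to):

kubectl get secret cdc-keystore -o yaml
# or just the key names and sizes
kubectl describe secret cdc-keystore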

Install Kafka-UI Using Helm: Navigate to the folder where the values.yaml file is saved and execute the command below to deploy the AKHQ image to the Kubernetes cluster with the Azure AD and RBAC configuration applied.

helm install my-akhq akhq/akhq --version 0.25.1 -f values.yaml
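
The release can then be inspected, upgraded after changing values.yaml, or rolled back using Helm's built-in revision history (the commands below use the release name from the install step):

helm status my-akhq
helm upgrade my-akhq akhq/akhq --version 0.25.1 -f values.yaml
helm rollback my-akhq 1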

Access the AKHQ UI running in your Kubernetes cluster by port forwarding.

Execute the command below to check that the pod is created and running.

kubectl get pods
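
The $POD_NAME variable used below can be set from the pod listing, for example with a label selector in PowerShell (a sketch that assumes the chart applies the standard app.kubernetes.io/name=akhq label):

$POD_NAME = kubectl get pods --namespace default -l "app.kubernetes.io/name=akhq" -o jsonpath="{.items[0].metadata.name}"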

Use the command below to forward the port.

kubectl port-forward --namespace default $POD_NAME 9091:8080

Browse to 'http://localhost:9091/'.

Conclusion

Kafka UI simplifies the management of Kafka clusters by offering an intuitive interface for topic management, message inspection, consumer group monitoring, and schema registry integration. It can be deployed quickly using Docker or Kubernetes, and with its advanced configuration options, it's suitable for both development and production environments.

For large-scale Kafka deployments, Kafka UI helps streamline administrative tasks, allowing teams to focus more on business logic than on infrastructure complexities.

AKHQ is an essential tool for anyone managing large-scale Kafka clusters. Its intuitive user interface simplifies many of Kafka’s complex operations, making it easier for teams to monitor, manage, and secure their event-streaming architecture. By supporting flexible deployment options and comprehensive configurations, AKHQ is versatile enough to be deployed in both development and production environments.

Authentication and authorization are critical security measures for Kafka UI tools to ensure that only authorized users can access, modify, or monitor Kafka clusters. Without proper security controls, an organization’s Kafka infrastructure could be exposed to unauthorized access, resulting in potential data breaches, service disruptions, and regulatory non-compliance. Implementing robust security mechanisms—such as OAuth2, SASL/SSL, and RBAC—ensures that Kafka remains secure while still allowing users to efficiently manage and monitor Kafka resources through the UI.

