Introduction
Suppose you are planning to deploy a two-tier application in an Azure Kubernetes Service cluster. You have a customer-facing web interface deployed to Pods, and your customers should be able to access it over HTTP from their browsers. These front-end Pods must communicate with a database hosted on a set of backend Pods. Customers should never reach the backend Pods directly; instead, they access the web interface on the front-end Pods, which in turn query the backend database Pods and display the data. By default, the cluster does not expose Pods to the outside world, and Pod IP addresses change as Pods are recreated, so you need to plan the networking components to suit your application's needs.
Networking is an important aspect to take into consideration while designing an Azure Kubernetes Service cluster. It defines how the Pods communicate with each other and how traffic reaches them. You need to design the network before building the cluster; otherwise, you may have to tear down and recreate the cluster from scratch to accommodate unplanned networking needs.
In the previous article, we learned about the Azure Kubernetes Service architecture.
In this article, let us explore the concept of a Service and how it defines the way you access the Pods in an Azure Kubernetes Service cluster.
Service
An Azure Kubernetes Service cluster is self-contained: you cannot access the applications hosted on your Pods directly. A Service comes to your rescue here. A Service groups your Pods logically and defines how you access them. Depending on its type, a Service either exposes the Pods outside the cluster or makes them accessible only inside it. The following Service types are available in Azure Kubernetes Service.
- ClusterIP
- NodePort
- LoadBalancer
Let us discuss each of these Service Types in detail.
ClusterIP
Let's try to understand ClusterIP using the two-tier application we discussed at the start of the article. You need to secure the backend database Pods so that they are accessible only within the cluster: the front-end web interface Pods should access them, but customers should not. You can achieve this with a ClusterIP Service, which exposes the Pods on an internal IP address that is reachable only from within the cluster. In Figure 1, the identical front-end Pods send requests to the ClusterIP Service, which forwards them to the identical backend Pods. The ClusterIP Service thus keeps the backend Pods accessible to the front-end Pods inside the cluster while making them unreachable from outside.
Figure 1
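To make this concrete, here is a minimal sketch of a ClusterIP Service manifest for the backend database Pods. The name, labels, and port numbers (backend-db, 5432) are illustrative assumptions, not values from the article.

```yaml
# A minimal ClusterIP Service sketch for the backend database Pods.
# The name, labels, and ports are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: backend-db
spec:
  type: ClusterIP          # the default type; reachable only inside the cluster
  selector:
    app: backend-db        # matches the labels on the backend database Pods
  ports:
    - port: 5432           # port the Service exposes inside the cluster
      targetPort: 5432     # port the database container listens on
```

The front-end Pods can then reach the database at backend-db:5432 through the cluster's internal DNS, while no external client can resolve or route to that address.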
NodePort
Now, let's plan how to expose the front-end Pods to external traffic so that customers can access the web interface hosted on these Pods. We can achieve this with a NodePort Service, which opens a static port (by default in the 30000-32767 range) on every cluster Node. In Figure 2, customers access the web interface hosted on the identical front-end Pods by sending requests to a Node's IP address on that port; the NodePort Service then forwards each request to one of the front-end Pods.
Figure 2
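The following is a minimal sketch of a NodePort Service for the front-end Pods; the name, labels, and ports (frontend-web, 30080) are illustrative assumptions.

```yaml
# A minimal NodePort Service sketch for the front-end web Pods.
# The name, labels, and ports are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: frontend-web
spec:
  type: NodePort
  selector:
    app: frontend-web      # matches the labels on the front-end Pods
  ports:
    - port: 80             # cluster-internal port of the Service
      targetPort: 8080     # port the web container listens on
      nodePort: 30080      # static port opened on every Node (30000-32767)
```

Customers would then browse to http://<node-ip>:30080, which makes NodePort convenient for testing but awkward for production, where a stable public endpoint is usually preferred.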
Load Balancer
In Figure 2, the customer accesses a cluster Node directly. During peak hours, the number of incoming requests increases, and a single Node may struggle to handle the traffic. To handle substantial incoming traffic, we use the LoadBalancer Service. In AKS, a LoadBalancer Service provisions an Azure Load Balancer with a public IP address and distributes incoming requests across the cluster Nodes. In Figure 3, the Load Balancer receives the customer request and redirects it to an Azure Kubernetes Service cluster Node.
Figure 3
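A minimal sketch of a LoadBalancer Service for the same front-end Pods follows; on AKS, applying it causes Azure to provision a public load balancer. The name and ports are illustrative assumptions.

```yaml
# A minimal LoadBalancer Service sketch; on AKS this provisions an
# Azure Load Balancer with a public IP. Name and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: frontend-web-lb
spec:
  type: LoadBalancer
  selector:
    app: frontend-web      # matches the labels on the front-end Pods
  ports:
    - port: 80             # public port exposed by the load balancer
      targetPort: 8080     # port the web container listens on
```

Once the Service is created, kubectl get service frontend-web-lb shows the assigned public address under EXTERNAL-IP, and customers can reach the application without knowing any Node address.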
Ingress Controller
In a real-world scenario, you can have multiple applications hosted in the Azure Kubernetes Service cluster, each running on its own set of identical Pods. Whenever a customer requests an application, the request should be redirected to the set of Pods hosting that application. You need some intelligence in the cluster to inspect each request and route it to the appropriate Pods. An Ingress Controller comes to your rescue here. Using an Ingress Controller, you can define Ingress rules that match incoming requests (for example, by hostname or URL path) and redirect them to the right Service, which can be any of the Service types we discussed. In Figure 4, the Ingress Controller receives customer requests and redirects each one to the appropriate Service. Note that an Ingress Controller is not a Service type; it is like a router that smartly directs requests to the appropriate Service.
Figure 4
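Here is a minimal sketch of an Ingress resource that routes requests by URL path. It assumes an Ingress Controller (for example, NGINX) is already installed in the cluster; the hostname, paths, and Service names are illustrative assumptions.

```yaml
# A minimal Ingress sketch routing two URL paths to two Services.
# Hostname, paths, and Service names are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-routing
spec:
  rules:
    - host: apps.contoso.example      # hypothetical hostname
      http:
        paths:
          - path: /web                # requests for the web application
            pathType: Prefix
            backend:
              service:
                name: frontend-web    # Service in front of the web Pods
                port:
                  number: 80
          - path: /api                # requests for a second application
            pathType: Prefix
            backend:
              service:
                name: api-backend     # Service in front of the API Pods
                port:
                  number: 80
```

Each rule maps a request pattern to a backend Service; the Ingress Controller watches these rules and configures its underlying proxy to route traffic accordingly.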
Conclusion
In this article, we learned how a Service controls access to Pods in the Azure Kubernetes Service cluster. We explored the different Service types, and we also explored the concept of the Ingress Controller.