Azure Kubernetes Service Networking - Azure CNI Virtual Network

Introduction

 
In the last article, we learned about Basic Networking for the Azure Kubernetes Cluster using Kubenet.
Let us quickly recap what we need to consider while planning out the Network for an Azure Kubernetes Cluster.
  • Does my Azure Kubernetes Cluster need to integrate with components outside the Cluster, or is it self-contained?
  • Do you need to access the Pods in the Cluster directly, or should the Pods be isolated so that no one can access them directly?
  • What strategy should you follow to communicate with the Pods?
  • Do you need to configure network policies and firewalls at the Pod level?
  • How many networking components and Pods are you planning for the Cluster, and do you have enough IP addresses to assign to them?
These considerations form the basic building blocks of your Azure Kubernetes Cluster Network design. You need to plan out the Network for your Azure Kubernetes Cluster before provisioning it, or you may end up recreating the Cluster to accommodate network-level changes at a later point in time.
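The last consideration above, having enough IP addresses, can be estimated up front. The following is a minimal Python sketch of such an estimate for Azure CNI, where each Node consumes one IP Address for itself plus one per Pod it can host; the node count and subnet base address are illustrative assumptions, and 30 is the Azure CNI default for maximum Pods per Node.

```python
import ipaddress
import math

def required_ips(node_count: int, max_pods_per_node: int) -> int:
    """With Azure CNI, each Node needs one IP for itself plus one IP
    per Pod it can host, all drawn from the cluster subnet."""
    return node_count * (1 + max_pods_per_node)

def smallest_subnet_prefix(ip_count: int) -> int:
    """Smallest subnet prefix that still fits, allowing for the
    5 addresses Azure reserves in every subnet."""
    needed = ip_count + 5
    return 32 - math.ceil(math.log2(needed))

# Example: 3 Nodes, 30 Pods per Node (the Azure CNI default).
ips = required_ips(3, 30)            # 3 * 31 = 93 IP addresses
prefix = smallest_subnet_prefix(ips) # a /25 (128 addresses) suffices
subnet = ipaddress.ip_network(f"10.240.0.0/{prefix}")
```

Running this sizing exercise before provisioning helps you avoid recreating the Cluster later because the subnet ran out of addresses.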
 
The challenge with Basic Kubernetes Networking using Kubenet is that it does not allow you to connect to a Pod directly. With Kubenet, the Control Plane creates and manages the Virtual Network and assigns the Pods IP Addresses that do not fall within the IP range of the Cluster's Virtual Network. As a result, the Pods are not accessible from outside the Cluster. With Azure CNI, however, you can access the Pods directly. Let's explore Advanced Networking for Azure Kubernetes Service using Azure CNI in this article.
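The contrast can be sketched with Python's `ipaddress` module. The ranges below are assumptions for illustration (10.244.0.0/16 is a commonly used default Pod CIDR for Kubenet on AKS; check your own cluster's configuration):

```python
import ipaddress

# Assumed example ranges: the cluster's Virtual Network subnet and the
# separate Pod CIDR that Kubenet typically uses.
vnet_subnet = ipaddress.ip_network("10.240.0.0/16")
kubenet_pod_cidr = ipaddress.ip_network("10.244.0.0/16")

# With Kubenet, a Pod's IP lives outside the Virtual Network range,
# so the Virtual Network cannot route to it directly.
pod_ip_kubenet = ipaddress.ip_address("10.244.1.25")
print(pod_ip_kubenet in vnet_subnet)   # False

# With Azure CNI, a Pod's IP comes from the subnet itself,
# making the Pod directly reachable.
pod_ip_cni = ipaddress.ip_address("10.240.0.37")
print(pod_ip_cni in vnet_subnet)       # True
```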
 

Advanced Networking using Azure CNI

 
In the case of Advanced Networking using Azure CNI, you control and manage the Virtual Network of the Cluster. Both the Nodes and the Pods get IP Addresses from the Virtual Network IP Address range. The IP Address of each Pod is assigned as a Secondary IP Address on the Network Interface Card of its Node. This makes the Pods discoverable outside the Cluster, and you can access them directly from outside the Cluster.
 
Figure 1 depicts an Azure Kubernetes Cluster using Azure CNI Networking. Both the Nodes and all the Pods inside the Cluster get assigned IP Addresses from the IP Address range of the Virtual Network. The IP Address of each Pod is assigned as a Secondary IP Address on the Network Interface Card of its Node. The Bridge inside a Node enables the Pods on that Node to talk to each other, while all Network traffic beyond the Node gets routed through the Router. The User Defined Route plays a vital role in delivering the traffic to the destination Node and Pod. Since the IP Addresses of the Pods are available as Secondary IP Addresses on the Network Interface Card of the Node, the Network knows the whereabouts of the source and the destination Pods and can route the traffic to the destination with ease.
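The Secondary IP Address arrangement described above can be modeled in a few lines of Python. The `NodeNic` class and the allocation below are purely hypothetical illustrations, not an Azure API; they show only that the Node's primary IP and the Pods' Secondary IPs all come from the same subnet, which is what makes direct routing possible.

```python
import ipaddress
from dataclasses import dataclass, field

@dataclass
class NodeNic:
    """Toy model of a Node's Network Interface Card under Azure CNI:
    one primary IP for the Node, Secondary IPs pre-allocated for Pods."""
    primary_ip: ipaddress.IPv4Address
    secondary_ips: list = field(default_factory=list)

subnet = ipaddress.ip_network("10.240.0.0/24")  # assumed cluster subnet
hosts = subnet.hosts()

# Hypothetical allocation: the primary IP first, then Secondary IPs
# for up to 4 Pods -- every address is drawn from the same subnet.
nic = NodeNic(primary_ip=next(hosts))
for _ in range(4):
    nic.secondary_ips.append(next(hosts))

# Every Pod IP is a real subnet address, so the Virtual Network can
# route traffic straight to the Pod without any extra translation.
assert all(ip in subnet for ip in nic.secondary_ips)
```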
 
Figure 1
 
You can use Calico Network Policies to configure Firewall and Network rules for the Pods in the Cluster.
 

Conclusion

 
In this article, we learned about the Advanced Networking option available for the Azure Kubernetes Cluster using Azure CNI. In the next article, we will explore the Storage options provided by the Azure Kubernetes Cluster.