In this article, we’ll learn about Azure Kubernetes Service in brief and walk through a step-by-step process to deploy a Kubernetes cluster in Azure, scale the cluster, upgrade it, and finally delete it once it is no longer needed.
Let us imagine a scenario where developers are preparing for a demo. The software runs fine on the machine it was built on but fails on other machines, and “It was working on my machine” is the most common line we hear. In a nutshell, an application that has been built and tested on one system might not function properly in a new environment, so we need a proper way to package and run it anywhere. The key solutions that address this problem are Virtual Machines and Containers. Learn about Virtual Machines from the previous article, Azure Compute.
Containers
Containers can be defined as executable units of software in which the application code is packaged together with the dependencies and libraries it needs, so that it can run wherever required, from the cloud to on-premises services. You can learn more about Containers, and briefly about container orchestration, from the previous article, Containers And Container-Orchestration In Azure.
Container Challenges
Source: CNCF 2020 Survey Report
We can see that the majority agree that complexity is the biggest challenge with deploying containers. This isn’t an easy topic by any means and can be overwhelming, but it also means we can make it easier as we understand it a bit better.
Differences between Virtual Machines and Containers
| Virtual Machines (VM) | Containers |
| --- | --- |
| A virtual machine is the emulation or virtualization of an entire computer system; the VM performs like a complete physical computer system. | Containers are executable units of software in which the application code is packaged together with the dependencies and libraries it needs, so that it can run wherever required, from the cloud to on-premises services. |
| The entire computer system is virtualized. | Only the operating system is virtualized. |
| VMs are extremely large in size. | Containers are extremely lightweight, often just a few megabytes in size. |
| Eg: Azure Virtual Machine, VMware, etc. | Eg: Docker |
Benefits of Using Containers
There are huge benefits to using containers. They support different operating systems, both on-premises and cloud services, and can be used for applications with both monolith and microservices architectures. Moreover, multiple languages such as Java, .NET, Python, and Node are all supported. Basically: any OS, anywhere, for any type of application architecture, programmed in any language.
Docker
Docker is a platform that supports developing, shipping, and running applications by delivering software in packages known as containers.
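For instance, here is a minimal sketch of the basic Docker workflow, assuming a project directory that contains a Dockerfile and using a hypothetical image name, my-app:

docker build -t my-app:1.0 .          # build an image from the Dockerfile in the current directory
docker run -d -p 8080:80 my-app:1.0   # run the image as a detached container, mapping host port 8080 to container port 80
docker ps                             # list the running containers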
Kubernetes
Kubernetes provides the services to automatically deploy, scale, and manage our containerized applications. This open-source container-orchestration software was developed and released by Google in 2014 and is currently maintained by the Cloud Native Computing Foundation (CNCF). With the rise of microservices, there has also been a surge in the use of containers, and thus of container orchestration with Kubernetes.
Kubernetes is an orchestration tool that enables high availability, i.e. no downtime. Moreover, it provides scalability and high performance, and supports disaster recovery, i.e. backup and restore. More about Disaster Recovery and Business Continuity here.
At present, the analogy of cattle vs pets is imperative for understanding how Kubernetes should be used. The convention is that workloads on Kubernetes should be approached and treated like cattle and not like pets. The analogy comes from the idea that pets are individually cared for and irreplaceable, while cattle are interchangeable. Shifting the production infrastructure to the cattle approach helps create high availability, reduced failure, and a rapid disaster recovery system with minimal downtime. Kubernetes has become the gold standard for container orchestration mainly because it enables DevOps engineers to treat containers like cattle.
Basic Architecture of Kubernetes
Source: Kubernetes
A Kubernetes cluster consists of Nodes, the Control Plane, Controllers, the Cloud Controller Manager, and Garbage Collection.
Node
A node can be a physical or a virtual machine, depending on the cluster. Kubernetes runs the workload by placing containers into Pods and scheduling those Pods onto nodes; the containers therefore live inside and run on the nodes. We can find out the status of a node with the following command.
kubectl describe node <insert-node-name-here>
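To find the node names to pass to the command above, list the nodes first; this is a minimal sketch with no cluster-specific assumptions:

kubectl get nodes            # list all nodes with their status, roles, age, and Kubernetes version
kubectl get nodes -o wide    # include extra details such as internal IPs and OS image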
Control Plane
The control plane runs the cluster and includes the following components; a quick way to inspect them is sketched after the list.
- API Server
- Controller Manager
- Scheduler
- etcd
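A minimal sketch for confirming which control plane a cluster is pointed at and peeking at the visible system components; note that in a managed service such as AKS the API server, scheduler, controller manager, and etcd are operated by Azure and hidden from you:

kubectl cluster-info             # print the address of the control plane (API server) and core cluster services
kubectl get pods -n kube-system  # list the system pods that are visible to you in the kube-system namespace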
Communication from the nodes to the control plane follows a “hub-and-spoke” API pattern: all API usage from the nodes terminates at the API Server. For communication from the control plane to the nodes, there are two paths. One is from the API Server to the kubelet process that runs on each node in the cluster, and the other is from the API Server to any node, service, or pod through the API Server’s proxy functionality.
Virtual Network between the control plane and node
Moreover, Kubernetes used to provide SSH tunnels to protect the communication paths between the control plane and the nodes. SSH tunnels have since been deprecated and the Konnectivity service, a TCP-level proxy, has replaced them.
Monolith vs Microservice
You can learn more about Monolith and Microservice architectures from the previous article.
| Monolith | Microservice |
| --- | --- |
| It is the traditional approach to designing software. | It is a comparatively modern approach. |
| A single-tiered software program designed so that data access and the user interface are all combined in one program on a single platform. | The application tiers are managed independently, each as a separate unit in its own repository with its own allocated processor and memory resources. |
| All the application tiers lie in the same unit, are managed in a single repository, and share system resources such as processor and memory. | Well-defined APIs connect the units to each other. |
| The entire application relies on a single programming language, and the product is released as a single binary. | The application might use one or more programming languages, and each unit is released as its own binary. |
Azure Kubernetes Service (AKS)
Azure Kubernetes Service provides the facility to deploy and manage our containerized applications with a fully managed Kubernetes service. It offers integrated Continuous Integration and Continuous Delivery (CI/CD), serverless Kubernetes, and enterprise-grade security for our applications. It provides features for microservices and DevOps, and even supports training machine learning models. AKS simplifies the deployment, management, and operation of Kubernetes, so applications can scale and run with confidence. Furthermore, the Kubernetes environment is secured and containerized application development is accelerated. Developers and DevOps engineers can work the way they want with open-source tools and APIs, and setting up CI/CD with AKS takes just a few clicks.
The elements of orchestration include the following; a small sketch of a coordinated app upgrade follows the list.
- Scheduling
- Affinity / Anti-affinity
- Health Monitoring
- Failover
- Scaling
- Networking
- Service Discovery
- Coordinated App Upgrades
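For example, health monitoring, failover, and coordinated app upgrades come together in a rolling update. This is a minimal sketch assuming the azure-vote-front deployment and container name from the sample application used later in this article, and a hypothetical v2 image tag:

kubectl set image deployment/azure-vote-front azure-vote-front=mcr.microsoft.com/azuredocs/azure-vote-front:v2   # roll the deployment to a new (assumed) image version
kubectl rollout status deployment/azure-vote-front   # watch the coordinated, rolling upgrade complete
kubectl rollout undo deployment/azure-vote-front     # roll back if the new version is unhealthy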
Azure Container Instances (ACI)
With Azure Container Instances, it is easy to run containers on Azure without managing servers: agility is increased, and the applications are secured by hypervisor isolation. More about ACI in Containers and Container Orchestration.
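As an illustration, here is a minimal sketch of running a single container in ACI, assuming a resource group named myResourceGroup, a container name of mycontainer, and the public aci-helloworld sample image:

az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ports 80 \
  --dns-name-label my-aci-demo   # the DNS label is illustrative and must be unique within the Azure region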
How do we Deploy a Kubernetes Cluster in Azure?
We can deploy a Kubernetes cluster in Azure by simply following the steps below. All these commands are executed in the Azure CLI.
Create a resource group
az group create --name myResourceGroup --location eastus
Create AKS cluster
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys
Connect to Cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
Now, the application can be run, as sketched below.
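Here is a minimal sketch of running an application on the new cluster, assuming a Kubernetes manifest file named azure-vote.yaml that describes the sample voting application whose pods appear in the scaling examples later in this article:

kubectl apply -f azure-vote.yaml               # create the deployments and services described in the manifest
kubectl get service azure-vote-front --watch   # wait for the front-end service (name assumed from the sample) to get an external IP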
How can we scale a Cluster?
The following tutorial works on Azure CLI version 2.0.53 and later. To check the current version of your Azure CLI, run az --version.
To see the number and state of the pods,
kubectl get pods
This command produces output showing the front-end and back-end pods.
NAME                               READY     STATUS    RESTARTS   AGE
azure-vote-back-2549590172-4d2r5   1/1       Running   0          29m
azure-vote-front-840927080-tf34m   1/1       Running   0          28m
To manually change the number of pods,
kubectl scale --replicas=8 deployment/azure-vote-front
This command scales the front-end pods to 8.
Verify with,
kubectl get pods
This command verifies that AKS successfully created the additional pods and updated the cluster.
kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
azure-vote-back-2606967446-nmpcf    1/1       Running   0          15m
azure-vote-front-3309479140-2hfh0   1/1       Running   0          5m
azure-vote-front-3309479140-bzt05   1/1       Running   0          3m
azure-vote-front-3309479140-fvcvm   1/1       Running   0          2m
azure-vote-front-3309479140-hrbf2   1/1       Running   0          11m
azure-vote-front-3309479140-qphz8   1/1       Running   0          4m
azure-vote-front-3309479140-mnsf2   1/1       Running   0          1m
azure-vote-front-3309479140-rtyz8   1/1       Running   0          3m
azure-vote-front-3309479140-uwqf2   1/1       Running   0          6m
Automation
Kubernetes supports horizontal pod autoscaling to adjust the number of pods automatically, and AKS provides the cluster autoscaler to adjust the number of nodes. A sketch of both follows.
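This minimal sketch assumes the azure-vote-front deployment from above and the myResourceGroup and myAKSCluster names used later in this article; the minimum and maximum counts are illustrative:

kubectl autoscale deployment azure-vote-front --cpu-percent=50 --min=3 --max=10   # scale pods between 3 and 10 based on CPU usage
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3   # let AKS add or remove nodes between 1 and 3 as demand changes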
How do we update a cluster?
First of all, find out which upgrades are available.
az aks get-upgrades
Example
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster
Then, when an upgrade is available, run the following command to upgrade.
az aks upgrade \
--resource-group myResourceGroup \
--name myAKSCluster \
--kubernetes-version KUBERNETES_VERSION
Next, validate the upgrade with the az aks show command.
az aks show --resource-group myResourceGroup --name myAKSCluster --output table
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
------------ ---------- --------------- ------------------- ------------------- ----------------------------------------------------------------
myAKSCluster eastus myResourceGroup 1.19.1 Succeeded myaksclust-myresourcegroup-23da35-rd59a4dh.hcp.eastus.azmk8s.io
Finally, to delete the cluster along with its resource group and all related resources,
az group delete --name myResourceGroup --yes --no-wait
Conclusion
Thus, in this article, we learned in brief about Azure Kubernetes Service, containers, the differences between virtual machines and containers, the benefits of using containers, and Docker. Then we dove into Kubernetes and its basic architecture, and briefly compared monolith and microservice architectures. Finally, we went deeper into Azure Kubernetes Service and learned to deploy a Kubernetes cluster in Azure, scale the cluster, upgrade it, and delete it when it is no longer needed.