Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. A Kubernetes cluster is a group of nodes that run containerized applications and services. It provides a unified way to manage and orchestrate containers, making it easier to deploy, scale, and manage microservices.
[https://kubernetes.io/docs/concepts/overview/components/]
Kubernetes is designed to work with containers, which are isolated, lightweight, and portable units that contain all the necessary code and dependencies for an application to run. Containers can be run on any platform, making them ideal for deployment in cloud-native environments.
In a Kubernetes cluster, nodes run the applications and services in containers. The nodes communicate with each other to coordinate the deployment and scaling of the containers. The nodes in a Kubernetes cluster are divided into two types: worker nodes and master nodes.
The worker nodes are responsible for running the containers and providing resources such as CPU, memory, and storage. The master nodes manage the state of the cluster and make decisions about the deployment and scaling of containers.
The key components of a Kubernetes cluster are the API server, the scheduler, the controller manager, and the etcd database. The API server is the central control point for the cluster and exposes the Kubernetes API. The scheduler places pods on nodes based on resource availability and other constraints. The controller manager runs the control loops that continuously reconcile the cluster's actual state with its desired state, handling tasks such as replica scaling and node lifecycle management. The etcd database stores the state of the cluster, including the configuration and status of the pods and nodes.
In Kubernetes, containers are deployed as pods, which are the smallest and simplest units in the platform. Pods can contain one or more containers, and they provide a way to manage the containers as a single unit. Pods are created and managed by higher-level abstractions called controllers, such as Deployments and ReplicaSets.
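As an illustration, a minimal Pod manifest might look like the following (the pod name and container image here are placeholders, not anything prescribed by Kubernetes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.25      # any container image works here
    ports:
    - containerPort: 80
```

You would submit this to the API server with `kubectl apply -f pod.yaml`, though in practice pods are almost always created indirectly through a controller rather than by hand.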
Deployments provide a way to declaratively manage the state of a set of replicas, making it easy to perform rolling updates and rollbacks. ReplicaSets ensure that a specified number of replicas of a pod are running at all times.
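As a sketch, a Deployment that keeps three replicas of a pod running could be declared like this (the names and image are illustrative); Kubernetes creates and manages the underlying ReplicaSet automatically:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3              # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  template:                # pod template used for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Changing the image tag in this manifest and re-applying it triggers a rolling update; `kubectl rollout undo deployment/web-deployment` rolls it back.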
Kubernetes also provides a number of other features for managing containers, including:
- Services - which provide a way to expose the pods to the network and load balance traffic to them.
- ConfigMaps and Secrets - which provide a way to manage configuration data and secrets separately from the containers.
- Volumes - which provide a way to persist data across containers and across restarts.
- Namespaces - which provide a way to partition resources in a cluster and enforce access controls.
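For example, a Service that exposes the pods labeled `app: web` inside the cluster and load-balances traffic across them might look like this (names are illustrative and assume the pod labels above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP          # internal, cluster-only virtual IP
  selector:
    app: web               # routes to pods carrying this label
  ports:
  - port: 80               # port the Service listens on
    targetPort: 80         # port on the backing pods
```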
Kubernetes is designed to be highly scalable and available, making it a popular choice for deploying cloud-native applications and services. It provides automatic self-healing, rolling updates, and rollbacks, making it easier to deploy and manage applications and services in a production environment.
In conclusion, a Kubernetes cluster provides a unified way to manage and orchestrate containers, making it easier to deploy, scale, and manage microservices. With its powerful features and scalability, it has become a popular choice for deploying cloud-native applications and services, and is widely used in production environments.
Master and slave
In a master-slave setup, the master node coordinates the cluster while the worker nodes do the actual work of running containers. The worker nodes also communicate with each other to provide network connectivity between containers and to facilitate communication between containers within the same pod or between pods.
In a master-slave Kubernetes cluster, the worker nodes are typically stateless, meaning they do not store any persistent cluster data. This allows worker nodes to be easily replaced or upgraded without affecting the state of the cluster. The master node, on the other hand, stores the state of the cluster in the etcd database and is typically deployed on a separate, highly available node so that the cluster state survives individual node failures.
One of the benefits of the master-slave setup in Kubernetes is that it allows for easy scaling of the cluster. To add a worker node, simply spin up a new machine and join it to the cluster with kubeadm join; the master node will automatically recognize the new worker and begin scheduling containers on it. To remove a worker node, first drain it with kubectl drain so that its pods are rescheduled elsewhere, then delete it with kubectl delete node; the master will stop scheduling containers on it.
Another benefit of the master-slave setup in Kubernetes is that it provides a high degree of fault tolerance. If a worker node fails, the master node will automatically reschedule the containers that were running on that node to other available nodes. This ensures that the cluster continues to operate normally even in the event of a failure.
In conclusion, the master-slave setup in a Kubernetes cluster provides a flexible, scalable, and fault-tolerant infrastructure for deploying containers. The master node acts as the central control point for the cluster, managing the deployment and scaling of containers and maintaining the state of the cluster, while the worker nodes run the containers and provide resources.
Here's a script sketching how to set up a 3-master, 3-worker Kubernetes cluster with kubeadm. Note that each node must be a separate Linux machine or VM: the installation steps run on every node, the kubeadm init command runs on the first master, and each kubeadm join command runs on the machine that is joining.
# Install a container runtime (containerd); kubeadm requires a CRI runtime,
# and Docker's dockershim support was removed in Kubernetes 1.24
sudo apt-get update
sudo apt-get install -y containerd
# Install kubeadm, kubelet and kubectl from the official pkgs.k8s.io repository
# (the legacy apt.kubernetes.io repository has been shut down);
# adjust v1.30 to the Kubernetes minor version you want
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# Initialize the cluster on the first master node.
# <control-plane-endpoint> is a stable address (typically a load balancer)
# that fronts all three masters and is reachable from every node
sudo kubeadm init --control-plane-endpoint "<control-plane-endpoint>:6443" --upload-certs --node-name "node-1" --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Deploy the pod network
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# Join 2 more master nodes (run each command on the machine that is joining)
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key> --node-name "node-2"
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key> --node-name "node-3"
# Join 3 worker nodes (run each command on the machine that is joining)
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name "worker-node-1"
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name "worker-node-2"
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name "worker-node-3"
Note
Replace the <token>, <hash>, and <certificate-key> placeholders with the actual values printed by the kubeadm init command.