Introduction
What we will cover:
- What is the Kubernetes Service?
- Create and expose Redis master service
- Deploy the Redis slaves
- Deploy the front end of the application
- Expose the front-end service
- What is Azure Load Balancer?
- Let's play with the application
Prerequisites
In the previous article, Application Deployment on AKS Part 2, we configured the Redis master to load configuration data from a ConfigMap. Having covered the dynamic configuration of applications using a ConfigMap, we will now return to deploying the rest of the guestbook application. You will once again come across the concepts of Deployments, ReplicaSets, and Pods for the back end and front end. Apart from this, you will be introduced to another key concept, called a service.
What is the Kubernetes Service?
A service is a grouping of Pods that are running on the cluster. Like a Pod, a Kubernetes service is a REST object. A service is both an abstraction that defines a logical set of Pods and a policy for accessing that set. It helps Pods scale very easily. You can have many services within the cluster, and Kubernetes services can efficiently power a microservice architecture.
To start the complete end-to-end deployment, we are going to create a service to expose the Redis master.
Create and expose Redis master service
When exposing a port in plain Docker, the exposed port is constrained to the host it is running on. With Kubernetes networking, there is network connectivity between different Pods in the cluster. However, Pods themselves are unstable in nature, meaning they can be shut down, restarted, or even moved to other hosts without maintaining their IP address. If you were to connect to the IP of a Pod directly, you might lose connectivity if that Pod was moved to a new host.
Kubernetes provides the service object, which handles this exact problem. Using label matching selectors, it proxies traffic to the right Pods and does load balancing. In this case, the master has only one Pod, so it just ensures that the traffic is directed to the Pod independent of the node the Pod runs on. To create the Service, run the following command.
kubectl apply -f redis-master-service.yaml
The Redis master Service has the following content.
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Let's now see what you have created using the preceding code.
Lines 1-8
These lines tell Kubernetes that we want a service called redis-master, which has the same labels as our redis-master server Pod.
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
Lines 10-12
These lines indicate that the service should handle traffic arriving at port 6379 and forward it to port 6379 of the Pods that match the selector defined between lines 13 and 16.
spec:
  ports:
  - port: 6379
    targetPort: 6379
Lines 13-16
These lines are used to find the Pods to which the incoming traffic needs to be proxied. So, any Pod with labels matching app: redis, role: master, and tier: backend will receive this traffic on port 6379. If you look back at the previous example, those are the exact labels we applied to that deployment.
selector:
  app: redis
  role: master
  tier: backend
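To sanity-check which Pods the selector matches, you can list Pods using the same labels; the label values below come straight from the manifest above.
kubectl get pods -l app=redis,role=master,tier=backend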
We can check the properties of the service by running the following command.
kubectl get service
This will give you an output as shown in the below screenshot.
You see that a new service, named redis-master, has been created. It has a cluster-wide IP of 10.0.183.90 (in your case, the IP will likely be different).
Note. This IP will work only within the cluster (hence the ClusterIP type).
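Although the IP is cluster-internal, you can verify the service from within the cluster. As an optional check (the pod name redis-test is just an illustrative throwaway), run a temporary Redis container and ping the master through its service name.
kubectl run redis-test --rm -it --image=redis -- redis-cli -h redis-master ping
The reply should be PONG.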
We have now exposed the Redis master service. Next, let's deploy the Redis slaves.
Deploying the Redis slaves
Running a single back end on the cloud is not recommended for production. Instead, you can configure Redis in a master-slave setup: a master serves write traffic, while multiple slaves handle read traffic. This is useful for handling increased read traffic and for high availability.
The following steps will help us to deploy the Redis slaves.
Step 1. Create the deployment by running the following command.
kubectl apply -f redis-slave-deployment.yaml
Step 2. Let's check all the resources that have been created now.
kubectl get all
This will give you an output as shown in the screenshot.
Step 3. Based on the preceding output, you can see that you created two replicas of the Redis-slave Pods. This can be confirmed by examining the redis-slave-deployment.yaml file.
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Everything is the same except for the following.
Line 13
The number of replicas is 2.
replicas: 2
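As an aside, if you later need more read capacity, you can scale the deployment without editing the file; the replica count of 3 below is just an arbitrary example.
kubectl scale deployment/redis-slave --replicas=3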
Line 23
You are now using a specific slave image.
image: gcr.io/google_samples/gb-redisslave:v1
Lines 29-30
These lines set GET_HOSTS_FROM to dns. As you saw in the previous example, service names resolve via DNS within the cluster.
- name: GET_HOSTS_FROM
  value: dns
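This means the slave image can reach the master simply by its service name, redis-master. The fully qualified DNS name, assuming the default namespace, would be the following.
redis-master.default.svc.cluster.local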
Step 4. Like the master service, you need to expose the slave service by running the following.
kubectl apply -f redis-slave-service.yaml
The only difference between this service and the redis-master service is that this service proxies traffic to Pods that have the role: slave label.
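For reference, the manifest is essentially the master's service with the role label changed; a sketch based on the standard guestbook example looks like this.
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend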
Step 5. Check the Redis-slave service by running the following command.
kubectl get service
This will give you an output as shown in the screenshot below.
Now we have a Redis cluster up and running, with a single master and two slaves. Let's deploy and expose the front end.
Deploy the Front End of the Application
Until now, we have focused on the Redis back end. We are now ready to deploy the front end, which will add a graphical web page to our application that we can interact with.
Step 1. You can create the front end using the following command.
kubectl apply -f frontend-deployment.yaml
Step 2. To verify the deployment, run this code.
kubectl get pods
This will display the output shown in the below screenshot.
You will notice that this deployment specifies 3 replicas. The deployment has the usual aspects with minor changes, as shown in the following code.
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80
Let's see these changes.
Line 12
The replica count is set to 3.
replicas: 3
Lines 9-11 and 15-17
The labels are set to app: guestbook and tier: frontend.
matchLabels:
  app: guestbook
  tier: frontend

labels:
  app: guestbook
  tier: frontend
Line 21
gb-frontend:v4 is used as an image.
image: gcr.io/google-samples/gb-frontend:v4
We have now created the front-end deployment. Next, you need to expose it as a service.
Expose the Front-end Service
There are multiple ways to define a Kubernetes service.
ClusterIP
This default type exposes the service on a cluster-internal IP. You can reach the service only from within the cluster. The two Redis services we created are of the ClusterIP type, meaning they are exposed on an IP that is reachable only from within the cluster, as shown in the earlier screenshot.
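As an aside, during development you can still reach a ClusterIP service from your local machine through a temporary port-forward tunnel; this is handy for debugging but is not a substitute for properly exposing a service.
kubectl port-forward service/redis-master 6379:6379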
NodePort
This type exposes the service on each node's IP at a static port. A ClusterIP service is created automatically, and the NodePort service routes to it. From outside the cluster, you can contact the NodePort service by using <NodeIP>:<NodePort>. The service is exposed on a static port on each node, as shown in the screenshot.
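For illustration, turning the front end into a NodePort service would only require a different type in the spec; the nodePort value below is an arbitrary pick from the default 30000-32767 range.
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080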
LoadBalancer
The final type, which we will use in our example, is the LoadBalancer type. This service type exposes the service externally using your cloud provider's load balancer. The external load balancer routes to the NodePort and ClusterIP services, which are created automatically. In other words, Kubernetes will create an Azure load balancer with a public IP that we can use to connect to the application, as shown in the screenshot. But first, let me explain the Azure Load Balancer.
What is the Azure Load Balancer?
The load balancer distributes incoming traffic across a pool of virtual machines and stops routing traffic to any failed virtual machine in the pool. In this way, we can make our application resilient to software or hardware failures in that pool. Azure Load Balancer operates at layer four of the Open Systems Interconnection (OSI) model. It is the single point of contact for clients and distributes inbound flows that arrive at its front end to backend pool instances.
- Load balancing: Azure Load Balancer uses a five-tuple hash composed of source IP, source port, destination IP, destination port, and protocol. We can configure load-balancing rules so that traffic from a given source IP address and port is distributed across the backend pool.
- Port forwarding: The load balancer also offers port forwarding. If we have a pool of web servers, we don't need to associate a public IP address with each server in the pool; for maintenance activities such as RDP sessions, we can forward a port on the load balancer's public IP to an individual server instead.
- Application agnostic and transparent: The load balancer doesn't directly interact with TCP or UDP payloads or the application layer. If we need to route traffic based on URL or host multiple sites behind one endpoint, we should use Azure Application Gateway instead.
- Automatic reconfiguration: The load balancer reconfigures itself when we scale instances up or down. If we add more virtual machines to the backend pool, the load balancer is reconfigured automatically.
- Health probes: As discussed earlier, the load balancer can recognize failed virtual machines in the backend pool and stop routing traffic to them. It does this using health probes; we can configure a probe to determine the health of the instances in the backend pool (see the sketch after this list).
- Outbound connections: All outbound flows from a private IP address inside our virtual network to public IP addresses on the Internet can be translated to a frontend IP of the load balancer.
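When you create a Kubernetes service of type LoadBalancer on AKS, these rules and probes are provisioned for you automatically. Purely for illustration, creating a TCP health probe by hand with the Azure CLI looks roughly like this; the resource group, load balancer, and probe names are placeholders.
az network lb probe create \
  --resource-group myResourceGroup \
  --lb-name kubernetes \
  --name myHealthProbe \
  --protocol tcp \
  --port 80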
Now we need to expose the front-end service. The following code will help us understand how that is done.
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  # type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Now that you have seen how a front-end service is exposed, let's make the guestbook application ready for use with the following steps.
Step 1. To create the service, run the following command.
kubectl create -f frontend-service.yaml
This step takes some time to execute when you run it for the first time. In the background, Azure must perform a couple of actions to make it seamless. It has to create an Azure load balancer and a public IP and set the port-forwarding rules to forward traffic on port 80 to the internal ports of the cluster.
Step 2. Run the following until there is a value in the EXTERNAL-IP column.
kubectl get service
This should display the output shown in the mentioned screenshot.
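Instead of re-running the command manually, you can also watch for the change; press Ctrl+C to stop once the external IP appears.
kubectl get service frontend --watch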
Step 3. In the Azure portal, if you click on All Resources and filter on Load balancer, you will see a Kubernetes load balancer. Clicking on it shows you something similar to the attached screenshot. The highlighted sections show that there is a load-balancing rule accepting traffic on port 80 and that you have two public IP addresses.
If you click through on the two public IP addresses, you'll see both IP addresses linked to your cluster. One of those will be the IP address of your actual service; the other one is used by AKS to make outbound connections.
We're finally ready to put our guestbook app into action.
Let's Play with the Application
Type the public IP of the service in your favorite browser. You should get the output shown in the below screenshot.
Go ahead and record your messages; they will be saved. Open another browser and enter the same IP, and you will see all the messages you typed.
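You can also verify this from a terminal; replace <EXTERNAL-IP> with the value reported by kubectl get service.
curl http://<EXTERNAL-IP>/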
Congratulations
You have completed your first fully deployed, multi-tier, cloud-native Kubernetes application.
To conserve resources on your free-trial virtual machines, it is better to delete the deployments you created before running the next round of deployments. Use the following commands.
kubectl delete deployment frontend redis-master redis-slave
kubectl delete service frontend redis-master redis-slave
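You can confirm that the cleanup worked; after a few moments, only the default kubernetes service should remain.
kubectl get all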
Conclusion
Over the three parts of Application Deployment on Azure Kubernetes Service, you have deployed a Redis cluster and a publicly accessible web application. You have learned how Deployments, ReplicaSets, and Pods are linked, and how Kubernetes uses the service object to route network traffic.