Teaching How to Roll Out Deployments with Zero Downtime

End users expect services to always be available and responsive. Even a small degradation in an application’s performance might be enough to lose customers. Kubernetes makes it easy to scale applications and roll out deployments with zero downtime. It does this by incrementally updating Pod instances, running different application versions side by side, and maintaining a history of rolled-out revisions, so that we can easily roll back to a specific version if needed.

In this blog, I am going to show you how to scale an application and roll it out across various deployment versions, with speed and, of course, with zero downtime.

In this blog, I assume the following:

  • You are familiar with containers and Kubernetes in general. If you need a refresher, have a look at this previous blog.
  • You have already provisioned a Kubernetes cluster environment somewhere. I am going to be using a simple local minikube, but if you are using a managed Kubernetes service such as EKS, AKS or GKE, the steps are mostly identical. If you need help installing minikube, have a look at this reference.
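
If you are going the minikube route, spinning up a local cluster and pointing kubectl at it is usually a one-liner (a minimal sketch, assuming minikube is already installed and the Docker driver is available on your machine):

minikube start --driver=docker

kubectl config current-context

The second command should print “minikube”, confirming where kubectl is pointing.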

Let’s deploy our Application in Kubernetes

Applications in Kubernetes run inside “Pods”, which are logical runtime groupings of the containers that make up a whole application. A Kubernetes “Deployment” is a higher-level construct that not only creates the Pods, but also offers the ability to perform ops-like tasks on a full application deployment, for example:

  • Roll out a full deployment, consisting of one or multiple Pods, where each Pod consists of one or multiple containers.
  • Report on the actual status of deployments.
  • Pause and resume deployments.
  • Define a “ReplicaSet” that provides the flexibility to scale applications in and out, by declaring a “desired state” of a number of replicas. Once defined, Kubernetes’ internal constructs will ensure that this desired state is maintained.
  • Roll back to an earlier deployment revision, or to any specific revision in history.
  • Scale up deployments to facilitate growth.

OK, you get it: Kubernetes Deployments are awesome. Let’s play with them.

·   SSH into the environment where you installed kubectl, pointed at your Kubernetes cluster. In my case, that is my own laptop, as I am running a local single-node minikube, but it could be anywhere else, like a dev environment or a management host for a managed Kubernetes cluster on EKS/AKS/GKE.

·   Make sure your Kubernetes cluster is up and running and that kubectl is pointing to it:

kubectl get nodes

      You should see at least 1 worker node in Ready status.

·   Clone a Git repository that contains a sample Deployment YAML file, already configured to pull our Docker image and roll out a deployment into Kubernetes:

git clone https://github.com/mulethunder/hello-microservices

·   Move into the kubernetes directory:

cd hello-microservices/nodejs-ms/kubernetes

·   Inside, there is a file called “hello-nodejs-dpl.yaml_sample”, which is the Deployment definition for our Hello World NodeJS demo app.

·   Rename this file to remove the trailing “_sample”:

mv hello-nodejs-dpl.yaml_sample hello-nodejs-dpl.yaml

·   Replace ENTER_IMAGE_TAG_NAME_HERE with a premade container image:

mulethunder/hello-microservices-k8s:1.0

Note: Feel free to point to your own image if you want to.
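
For reference, the key parts of such a Deployment descriptor typically look like the sketch below. The names mirror those used elsewhere in this blog (deployment hello-nodejs-deployment, 2 replicas, container hello-nodejs-microservices-k8s); the labels and container port are assumptions for illustration, so your copy of the file may differ slightly:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nodejs-deployment
  namespace: hello-microservices
spec:
  replicas: 2                          # desired state: 2 pod instances
  selector:
    matchLabels:
      app: hello-nodejs                # assumed label, must match the pod template below
  template:
    metadata:
      labels:
        app: hello-nodejs
    spec:
      containers:
        - name: hello-nodejs-microservices-k8s
          image: mulethunder/hello-microservices-k8s:1.0
          ports:
            - containerPort: 3000      # assumed listening port of the NodeJS app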

·   Before we apply the Deployment definition, let’s create the namespace where we want our Deployment to run:

kubectl create namespace hello-microservices

·   Now, use kubectl to apply the Deployment definition:

kubectl apply -f hello-nodejs-dpl.yaml

·   You should see a message saying that your Deployment was created. However, give it some time to become ready, as your container image needs to be downloaded first.

·   Validate the status of your new pod:

kubectl get pods -n hello-microservices -w

·   After a minute or two, 2 pods should be up and running. This is because in the Deployment descriptor we defined 2 replicas.

·   Once this is the case, break the watch with Ctrl + C.

·   Also, make sure that the whole Deployment is up and available:

kubectl get deployment -n hello-microservices

·   If you want more details about your pods, you can describe them:

kubectl describe pod [YOUR_POD_NAME] -n [NS]

For example (adjust your pod name accordingly):

kubectl describe pod hello-nodejs-deployment-5bf65f8f7-gmmtn -n hello-microservices
  • Now let’s test our Hello World NodeJS application running on Kubernetes. For this, we are going to apply a NodePort Service definition that is also in the same folder as the Deployment descriptor, as sketched below. A NodePort Service assigns an external port on each worker node, so that we can test our application from outside the cluster. In addition, by definition, the Service construct maintains a real-time registry of the running pods at any point in time, so that regardless of whether pods get terminated or spawned, we can always reach all of them seamlessly, without any manual load-balancing intervention.
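
A minimal sketch of what such a NodePort Service definition might look like (the Service name matches the file applied below; the selector and ports are assumptions that need to line up with your Deployment’s pod labels and container port):

apiVersion: v1
kind: Service
metadata:
  name: hello-nodejs-svc
  namespace: hello-microservices
spec:
  type: NodePort                 # exposes the Service on a port of every worker node
  selector:
    app: hello-nodejs            # assumed label, must match the pods of the Deployment
  ports:
    - port: 80                   # in-cluster Service port (assumed)
      targetPort: 3000           # assumed container port of the NodeJS app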
  • Apply the file:
kubectl apply -f hello-nodejs-svc.yaml 
  • If you are also running your Kubernetes cluster with minikube, you can get the address of the service with:
minikube service [SERVICE_NAME] -n [NS] --url

For example:

minikube service hello-nodejs-svc -n hello-microservices --url
  • Run a simple curl command or open the URL in a browser to confirm the application is accessible.
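
For example, you can combine the two steps into a single command (assuming a minikube setup):

curl $(minikube service hello-nodejs-svc -n hello-microservices --url)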


Let’s scale our Application out/in

So far, we have applied a Kubernetes Deployment that points to version 1.0 of our NodeJS application, across 2 replicas. Before we scale our application, let’s prove the self-healing aspect of running a ReplicaSet inside Kubernetes. The assumption is that we always want Kubernetes to maintain a desired state of 2 replicas.

  • Let’s simulate a fire in one of the pods by deleting a random pod. First, let’s retrieve the current running pods:
kubectl get pods -n hello-microservices
  • Then, let’s delete one of them:
kubectl delete pod [POD_NAME] -n hello-microservices
  • If you are quick to run another get pods command, you will notice that one of the pods is terminating while another one is already being spawned.
  • Just a few seconds later, you will notice that Kubernetes has restored the predictable desired state of 2 replicas.

Notice that at the moment we are not applying any specific affinity or anti-affinity rules to tell Kubernetes where to run the pods. However, in a normal production cluster, we would want to ensure that our pods are always distributed across different availability zones, for example, in order to achieve a higher degree of availability. A sketch of such a rule follows.
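
As a reference, such a rule in the pod template of the Deployment could look like the sketch below. It is a “soft” anti-affinity preference that asks the scheduler to spread pods across zones where possible (the app label is an assumption):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: hello-nodejs                      # assumed pod label
          topologyKey: topology.kubernetes.io/zone   # spread across availability zones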

  • Now, let’s play with the ReplicaSet. Let’s dynamically modify the number of replicas from 2 to 4 and see how Kubernetes takes that into account immediately.
  • First, let’s scale our Deployment by modifying the implicit ReplicaSet. That is, let’s scale from the current “desired state” of 2 replicas to 4:
kubectl scale --replicas=4 deployment/hello-nodejs-deployment -n hello-microservices
  • After a few seconds, verify that now there are 4 pods running:
kubectl get pods -n hello-microservices
  • If you describe the Deployment, you will notice that the replica count is now set to 4:
kubectl describe deployment hello-nodejs-deployment -n hello-microservices
  • Let’s scale it back in, from 4 replicas to only 2:
kubectl scale --replicas=2 deployment/hello-nodejs-deployment -n hello-microservices
  • If you get the pods quickly enough, you will see that 2 random pods are being terminated:
kubectl get pods -n hello-microservices
  • Try again in a few seconds and you will only see 2 running pods.
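
As a side note, kubectl scale is the imperative way of scaling. The declarative alternative, which plays nicer with version control and CI/CD, is to edit the replicas field in the YAML descriptor and re-apply it:

# in hello-nodejs-dpl.yaml
spec:
  replicas: 2

kubectl apply -f hello-nodejs-dpl.yaml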

 

Let’s roll out our application across various versions

The last thing I want to cover in this blog is the ability to roll out different application versions with ease and speed, and with zero downtime.

  • Let’s first list the rollout history for our Deployment:
kubectl rollout history deployment/hello-nodejs-deployment -n hello-microservices

If this is a new Deployment, there should be only 1 revision so far.

  • Every time we apply a Deployment to Kubernetes, this action gets tracked as a rollout. You can analyse a deployment’s rollout status:
kubectl rollout status deployment/hello-nodejs-deployment -n hello-microservices

Note: Normally, setting images as part of a deployment is an ultra-sensitive process that involves zero human intervention; it is fully automated as part of a CI/CD pipeline. It is a terrible practice to let human operators decide the right image to run as part of a Deployment, especially in a production environment. However, in this blog we are going to apply a direct command to change the underlying running image for a Deployment, so that we can analyse the rollout mechanics in Kubernetes.

  • Let’s change the running application image that we originally specified in the YAML descriptor. For this, we are going to set it to version 2.0, which is also available in my public Docker Hub repo for you to play with, but you are free to use your own:
kubectl set image deployment/[YOUR_DEPLOYMENT] -n [YOUR_NAMESPACE] [YOUR_CONTAINER_NAME]=[IMAGE_USER_PATH]/[IMAGE_NAME]:[IMAGE_VERSION]

For example:

kubectl set image deployment/hello-nodejs-deployment -n hello-microservices hello-nodejs-microservices-k8s=mulethunder/hello-microservices-k8s:2.0

Notice that the only BIG thing I am changing from the image originally set in the descriptor is its version, from 1.0 to 2.0.

  • Let’s retrieve the rollout status again and make sure that the rollout was successful:
kubectl rollout status deployment/hello-nodejs-deployment -n hello-microservices
  • Hit the same URL that we tested before and notice the change.
  • Perfect, we are running version 2.0, and we didn’t have to do much. Behind the scenes, Kubernetes incrementally updated the running pod instances, making sure that there was always a running one to serve requests. To prove this, get a new list of your pods and you will see that they are very new. All of them!
kubectl get pods -n hello-microservices
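
The pace of this incremental replacement is governed by the Deployment’s update strategy, which is RollingUpdate by default. A sketch of how it could be tuned in the descriptor for strictly zero downtime (these specific values are an assumption; Kubernetes defaults to 25% for both):

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1          # allow at most 1 extra pod above the desired count during the rollout
    maxUnavailable: 0    # never drop below the desired count of available pods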
  • Now, let’s bring up the rollout history again:
kubectl rollout history deployment/hello-nodejs-deployment -n hello-microservices

Ok, so now there are 2 revisions in the history of this Deployment.
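
You can also inspect the details of a particular revision before deciding to roll back, for example:

kubectl rollout history deployment/hello-nodejs-deployment -n hello-microservices --revision=2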

  • What if something goes bad with a deployment? Surely we need a way to quickly roll back to the previous version. Let’s apply a quick rollback. Again, this is normally not something carried out by a human operator, as the risk of human error is way too high, but since we are just building a proof of concept here, let’s go with it:
kubectl rollout undo deployment/hello-nodejs-deployment -n hello-microservices

Once again, Kubernetes will terminate and replace pods incrementally to carry out the whole rollback process.

  • Within a few seconds, the full rollback will be complete. Reload the test page to make sure that we are not on version 2.0 anymore.
  • You can also roll back to a specific revision in history by using the --to-revision=[REV] flag. Let’s play with this. Manually set image versions 2.0 and 1.0 a couple of times:
kubectl set image deployment/hello-nodejs-deployment -n hello-microservices hello-nodejs-microservices-k8s=mulethunder/hello-microservices-k8s:2.0

kubectl set image deployment/hello-nodejs-deployment -n hello-microservices hello-nodejs-microservices-k8s=mulethunder/hello-microservices-k8s:1.0
  • After a couple of tries, pick one revision in the history that you want to roll back to and simply point to it. For example, let’s roll back to revision 6:
kubectl rollout undo deployment/hello-nodejs-deployment -n hello-microservices --to-revision=6

Congratulations!!! I hope you found this blog useful. I will keep publishing more advanced topics on Kubernetes and Cloud Native in general, so stay tuned.

If you have any questions or comments, feel free to contact me directly at https://www.linkedin.com/in/citurria/

Thanks for your time.

Published by Carlos Rodriguez Iturria

