Kubernetes Rolling Update Strategies in Action

Being able to react to any kind of anomaly in a production environment is the key to success.
Kubernetes has good features that let you revert your deployments with a simple command. If you have
ever heard a scenario like the following within your company, this session will be the medicine for you.
“The Payment microservice is in an unstable state after the last deployment, what can we do?”
“I realized that the Cart service had an incorrect version while the deployment was half done, and I need to revert it…”

My “Microservices Best Practices on Kubernetes” Talk on booking.com

Booking @ Amsterdam

At the end of April 2018, I visited Amsterdam for a good reason: visiting the good places that I couldn’t
visit 3 years ago. After recharging that way, I gave a talk about Microservice Best Practices
on Kubernetes. The event was hosted by Booking, and I was very happy with their hospitality. Thank you again!

Let me provide a brief summary of each topic I covered at the event. You can see my slides here, if you are not the patient type 🙂

1. Glory of REST

Microservices are like humans: they need to communicate with each other through well-structured interfaces. Richardson’s Maturity Model is a good reference for this.

2. Power of HATEOAS

Hypermedia As The Engine Of Application State provides navigable resources, so you will find all the information you need within the response itself. Forget about trying to generate links in different kinds of client applications just to navigate from one resource to the next.
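As a sketch, a HATEOAS-style response for a hypothetical order resource might embed its own navigation links. The field names here are illustrative (loosely following the HAL convention), not from a specific API:

```json
{
  "orderId": 42,
  "status": "PAID",
  "_links": {
    "self":    { "href": "/orders/42" },
    "cancel":  { "href": "/orders/42/cancel" },
    "invoice": { "href": "/orders/42/invoice" }
  }
}
```

The client never hardcodes URL patterns; it simply follows the links the server returns.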

3. Distributed Configuration

When you switch to a Microservice Architecture, you will need to configure multiple services at the same time, and those configs must be applied to the applications in real time. Distributed configuration can be handled with Consul as a key/value store, with git2consul synchronizing configurations into Consul, and you may want to keep those configurations in a simple git project.

4. Client Code Generation

For inter-service communication, you have at least 2 options.
If you are already using service discovery, you can consider Feign Client. Otherwise, you can use swagger-codegen to generate a client library whenever you deploy your app to any kind of environment. Do not even think about writing client libraries manually for your hundreds of microservices; just trigger a Jenkins job and take a REST!

5. Kubernetes Warm-up

You can create a k8s folder to keep the k8s resource definitions used in your deployment pipeline. A typical microservice has at least a deployment and a service definition: the deployment to roll out your application, and the service to expose it to the outside world, or at least to the load balancer.

6. CI/CD

If you have Kubernetes specifications within your project, you are ready to deploy your app by using Jenkins with a simple kubectl configuration
on the Jenkins servers. To reduce complexity, you can use multi-stage builds to build the Docker image used in your k8s deployment.
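As an illustration, a multi-stage Dockerfile for a Node.js service could look like the sketch below. The stage names and file names are assumptions (they mirror the sample app later in this post), not a prescribed layout:

```dockerfile
# Stage 1: install all dependencies (including dev dependencies for build/test)
FROM node:alpine AS build
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY index.js ./

# Stage 2: slim runtime image with production dependencies only
FROM node:alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install --production
COPY --from=build /usr/src/app/index.js ./
CMD ["npm", "start"]
```

Only the final stage ends up in the pushed image, so build-time tooling never ships to the cluster.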

7. Monitoring

Even if you are in a stable environment like k8s, you need to track your infrastructure and application insights. To collect metrics, you can use
Prometheus, and to serve them in a good dashboard, you can use Grafana. The CoreOS team developed a good project called prometheus-operator
that comes with built-in Kubernetes configurations. One-click monitoring!

8. Logging

There are several types of logging architecture on Kubernetes, and I mainly focused on cluster-level logging with DaemonSet agents. You can send your logs to a logging backend like Elasticsearch to view them on a Kibana dashboard, or, if you don’t want to maintain an ELK stack, you can use
https://humio.com/ for a fast, centralized, real-time logging and monitoring system. Just use their Kubernetes integration.
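The cluster-level pattern deploys the log agent as a DaemonSet so one agent pod runs on every node and tails the container log files from the host. A minimal sketch is below; the image and mount paths are illustrative, not a specific agent’s official manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluent/fluentd:latest   # illustrative log collector image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log    # node-level logs mounted into the agent pod
```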

9. APM & Service Mesh

Monitoring and logging may not help you all the time; you may need deeper insights into your application. When it comes to
the microservice and container world, Instana is a good choice for handling tracing and monitoring with a simple sensor integration. You can see your
infrastructure map, view traces and spans for a request lifecycle, and even watch real-time service requests on a simple dashboard.

10. API Gateway

If you are planning to expose your services to the public, you should definitely manage your APIs with an API Gateway to perform Authentication,
Authorization, Rate Limiting, API Versioning, etc. I have used the Tyk API Gateway in Kubernetes to route traffic to microservices
after requests are successfully validated by the gateway.

11. Event Sourcing & CQRS

In a synchronous world, you can only change 1 object in 1 transaction at a time. When you switch to distributed systems, you would need
2-phase commits in an extended architecture. Moreover, with that strategy, whenever you update the current state of an object, all
the previous states are gone. Instead, you can use Event Sourcing, with asynchronous events stored in an event store like Apache Kafka, Hazelcast, etc.
You can also separate reads (queries) and writes (commands) in order to handle events asynchronously and populate the desired views in a database, to be served
via queries later.
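To make the idea concrete, here is a minimal in-memory sketch of the pattern in Node.js. The array stands in for an event store like Kafka, the object stands in for a read-side view, and all names are illustrative:

```javascript
// Write side (commands): every change is appended as an immutable event,
// so the full history is preserved instead of overwriting current state.
const eventStore = [];

function handleCommand(command) {
  if (command.type === 'Deposit') {
    eventStore.push({ type: 'Deposited', account: command.account, amount: command.amount });
  } else if (command.type === 'Withdraw') {
    eventStore.push({ type: 'Withdrawn', account: command.account, amount: command.amount });
  }
}

// Read side (queries): a view is populated by replaying the events.
function buildBalanceView(events) {
  const balances = {};
  for (const e of events) {
    balances[e.account] = balances[e.account] || 0;
    balances[e.account] += e.type === 'Deposited' ? e.amount : -e.amount;
  }
  return balances;
}

handleCommand({ type: 'Deposit', account: 'acc-1', amount: 100 });
handleCommand({ type: 'Withdraw', account: 'acc-1', amount: 30 });

console.log(eventStore.length);                      // 2 events retained, nothing lost
console.log(buildBalanceView(eventStore)['acc-1']);  // 70
```

In a real system the view would be rebuilt asynchronously by a consumer of the event stream and persisted to its own database.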

I hope the above sections will be a good reference for your next Microservice Architecture design.

Kubernetes Service, Pod, Deployment Simplified

In this session, we had a look at some of the most important concepts of Kubernetes (Services, Pods, Deployments) in order to understand the application lifecycle in a typical Kubernetes environment. Some of the topics covered are below:

– Creating a Deployment
– Single and Multi-Container Concepts in a Deployment
– Monitoring and Debugging Pods
– Service Types
– Exposing Services to the Internet
– Managing Environment Variables of a Deployment
– Service-to-Service Communication

Kubernetes Cluster from Scratch

In this session, we create a Kubernetes cluster from scratch and deep-dive into its components. You can see the schedule below:

  1. Create machines by using Docker Machine to use as Kubernetes cluster nodes
  2. Install a Kubernetes cluster with 1 master and 2 worker nodes
  3. Install and configure kubectl to run operations on the cluster
  4. Review the cluster components to gain a good insight into Kubernetes
  5. Deploy a Spring Boot app onto the cluster

Deploy Auto-Scalable Node.js Application on Kubernetes Cluster — Part 2

In the previous article, we set up a Kubernetes cluster by using minikube and ran some kubectl commands to deploy a sample Node.js application to the cluster. In this article, we will configure our application so that it is auto-scaled according to CPU load. Fasten your seat belts!

Horizontal Pod Autoscaler

Kubernetes helps us scale the desired pods in a replication controller, deployment, or replica set according to observed CPU utilization.

HPA Scheme

The logic behind the above architecture is very simple. The controller manager queries the system resources at a specific period defined by the option --horizontal-pod-autoscaler-sync-period (30 seconds by default), and if resource usage exceeds the threshold you provided, the pods will be automatically scaled up. Conversely, when resource usage returns to normal, the pods will be scaled back down.
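The scale-up/scale-down decision roughly follows a proportional formula, clamped to the min/max replica bounds. The sketch below (in Node.js) mirrors the calculation described in the Kubernetes HPA documentation; it is a simplification that ignores tolerances and stabilization windows:

```javascript
// desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization),
// clamped between minReplicas and maxReplicas.
function desiredReplicas(current, currentUtilization, targetUtilization, min, max) {
  const desired = Math.ceil(current * (currentUtilization / targetUtilization));
  return Math.min(max, Math.max(min, desired));
}

// Load spikes to 50% against a 1% target: scale up to the max of 5.
console.log(desiredReplicas(1, 50, 1, 1, 5)); // 5
// Load drops back to 0%: scale down to the min of 1.
console.log(desiredReplicas(5, 0, 1, 1, 5)); // 1
```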


There are several ways to configure an application for auto-scaling, and we will define this config inside our project, in the k8s folder. I assume you have minikube started and have already deployed the Node.js application to the cluster. Go to the project, add a file named hpa.yml to the k8s folder, and put the following content into it:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: node-example
  namespace: default
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: node-example
  targetCPUUtilizationPercentage: 1

With the targetCPUUtilizationPercentage option, we are saying: once the observed CPU usage goes above 1%, scale this pod up.

After this configuration, apply your changes with the following:

kubectl apply -f k8s/hpa.yml

HPA apply

You can verify the HPA configuration with the following command:

kubectl get hpa

HPA resources

Metric Collection

As you can see, there is a strange thing above: in the TARGETS column, there is no current metric data. In order to fix that, you need to check the addon list on minikube:

minikube addons list

Here, the heapster addon is disabled by default. In order to let the controller manager query your resources, you need to enable this addon:

minikube addons enable heapster

And now we can see the TARGETS value.

CPU usage

CPU usage is 0%, so let’s put some load on this application.

Hardening CPU

ab -c 5 -n 1000 -t 100000 http://<minikube-ip>:30001/

I assume you already have Apache Benchmark installed on your computer. With the above command, a total of 1000 requests will be sent to our application within 100 seconds at a concurrency level of 5 (replace <minikube-ip> with the output of minikube ip; our service is exposed on node port 30001). You can also see the demo video below;


To sum up, we added a simple configuration to our project and applied it to enable the Horizontal Pod Autoscaler. By using the Apache Benchmark tool, we sent some requests to our application to stress the CPU. If you want to see the HPA configs, you can access them here.

Deploy Auto-Scalable Node.js Application on Kubernetes Cluster — Part 1

We are facing crazy technologies every day, and we, as developers, need to decide which ones are production-ready. During this decision period, there are several criteria we use to convince ourselves, and being able to simulate production-like behaviour on a developer machine is a must. In this tutorial, we will create a Kubernetes cluster with Minikube on our local computer and then deploy a sample Node.js application to this cluster in a way that scales according to outside load.


Kubectl is a command-line tool for running commands against a Kubernetes cluster. You can install it via:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl

It will be usable after granting execute permission and moving it to the user’s local bin folder:

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

You can verify it with kubectl version.

Kubectl Version

From now on, we will be able to access the Kubernetes cluster with kubectl. If you are using a different operating system, you can refer to the installation instructions here.


Minikube is a tool that lets us create a Kubernetes cluster on our local computer. You can install minikube with the following command:

brew cask install minikube

If you are using a different operating system, you can refer to the installation instructions here.

We can start a Kubernetes cluster locally by executing minikube start.

Minikube start

As you can see in the output, our kubectl client is automatically configured to connect to the local Kubernetes cluster. To test this, you can list the services with:

kubectl get services

Kubectl services

Sample Node.js Application

Here is yet another sample Hello World Node.js application.

const http = require('http');
const port = process.env.PORT || 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, () => {
  console.log(`Server running on port: ${port}`);
});

This application runs on port 3000 if there is no environment variable with the key PORT.

Docker Image Preparation

In order to deploy this app on Kubernetes, we can prepare a Dockerfile to build a Docker image for future use. The Dockerfile is as follows:

FROM node:alpine

RUN mkdir -p /usr/src/app

WORKDIR /usr/src/app

COPY index.js ./

COPY package.json ./

RUN npm install

CMD ["npm", "start"]

I assume you already have a Docker Hub account; when you execute the following, you will be able to push your image to Docker Hub. Do not forget to replace the <username> sections below.

docker login
docker build -t <username>/node-example .
docker push <username>/node-example

Now we are ready to use this docker image on our Kubernetes deployments.

Kubernetes Deployments & Services

The conventions I follow for my Kubernetes projects on a daily basis are as follows:

  • Create a folder k8s inside your project
  • Create deployment.yml inside k8s
  • Create service.yml inside k8s

In the deployment file, we simply define our project metadata and the container definitions used to manage pods. Here is our deployment.yml file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-example-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-example
    spec:
      containers:
      - name: node-example
        image: huseyinbabal/node-example
        imagePullPolicy: Always
        ports:
        - containerPort: 3000

You can see this is a Deployment object by looking at the kind keyword. With the replicas keyword, I am saying that this will have only one instance behind the service. In the containers section, we provide our Docker image and the port number for the container internals.

We are done with the deployment, so let’s expose our app to the real world. This time we will have a service file like the one below;

apiVersion: v1
kind: Service
metadata:
  name: node-example
  labels:
    app: node-example
spec:
  selector:
    app: node-example
  ports:
  - port: 3000
    protocol: TCP
    nodePort: 30001
  type: LoadBalancer

We are simply exposing our port 3000 as node port 30001 to the outside, and this is a service of type LoadBalancer. Now go to your project folder and execute the following;

kubectl apply -f k8s

This command will create/update the service and deployment on the Kubernetes cluster by using the service and deployment definitions inside the k8s folder of the project.

Kubectl apply

In order to check your deployment, services, and pods, you can use kubectl get services, kubectl get deployments, and kubectl get pods;

Kubectl service

Kubectl deployments

Kubectl pods

As you can see, we have 1 running pod behind our service. You can also see the above status by using minikube;

minikube dashboard

Minikube dashboard

Now we are ready to access our service by using;

minikube service node-example

This will open our service in a browser by using Kubernetes internals.

You can access the GitHub project here.


To sum up: we created a Kubernetes cluster by using minikube, which enabled us to use kubectl to connect to the cluster. The Node.js application helped us test our deployments. In this tutorial, I mainly focused on Kubernetes preparation and application deployment. In the next tutorial, I will show you how to scale your application manually/automatically, supported by benchmark operations.