My “Microservices Best Practices on Kubernetes” Talk at Booking.com

Booking @ Amsterdam

At the end of April 2018, I visited Amsterdam for a good reason: to revisit the nice places I couldn't
see 3 years ago, and, once recharged, to give a talk about Microservice Best Practices
on Kubernetes. The event was hosted by Booking, and I was very happy with their hospitality. Thank you again!

Let me provide a brief summary of each topic I mentioned at the event. You can see my slides here if you are impatient 🙂

1. Glory of REST

Microservices are like humans: they need to communicate with each other through well-structured interfaces. Richardson's Maturity Model is a good reference for this.

2. Power of HATEOAS

Hypermedia As The Engine Of Application State (HATEOAS) provides navigable resources, so that you can find all the information you need within the response itself. Forget about generating links in every kind of client application just to navigate from one resource to the next.
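For example, a booking resource can carry the links to its related actions itself; here is a sketch of what such a payload might look like (all fields are illustrative):

{
  "id": 42,
  "status": "CONFIRMED",
  "_links": {
    "self":   { "href": "/bookings/42" },
    "cancel": { "href": "/bookings/42/cancel" },
    "guest":  { "href": "/guests/7" }
  }
}

Clients simply follow the hrefs they are given instead of constructing URLs themselves.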

3. Distributed Configuration

When you switch to a microservice architecture, you need to configure multiple services at the same time, and those configs must be applied to the applications in real time. Distributed configuration can be handled with Consul as a key/value store, git2consul for synchronizing configurations into Consul, and a simple git project to keep those configurations under version control.
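If memory serves, a git2consul configuration that polls a git repo and mirrors it into Consul looks roughly like this (the repository name and URL are made up; check the project README for the exact schema):

{
  "version": "1.0",
  "repos": [
    {
      "name": "service-configs",
      "url": "https://github.com/acme/service-configs.git",
      "branches": ["master"],
      "hooks": [{ "type": "polling", "interval": "1" }]
    }
  ]
}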

4. Client Code Generation

To make your microservices communicate, you have at least 2 options for inter-service communication.
If you are already using service discovery, you can consider Feign Client. Otherwise, you can use swagger-codegen to generate a client library whenever you deploy your app to any kind of environment. Do not even think about writing client libraries manually for your hundreds of microservices; just trigger a Jenkins job and take a REST!
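For instance, such a Jenkins job can boil down to a single swagger-codegen call; the spec URL and output folder below are placeholders:

swagger-codegen generate -i http://orders-service/v2/api-docs -l java -o ./orders-client

The generated project can then be published to your internal artifact repository like any other library.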

5. Kubernetes Warm-up

You can create a k8s folder to keep your Kubernetes resource definitions for use in the deployment pipeline. A typical microservice has at least a deployment and a service definition: one for deploying the application, and one for exposing it to the outside world, or at least to the load balancer.
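Concretely, the layout is nothing more than this (the file names are just my convention):

k8s/
├── deployment.yml
└── service.yml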

6. CI/CD

If you keep your Kubernetes specifications within your project, you are ready to deploy your app using Jenkins with a simple kubectl configuration
on the Jenkins servers. To reduce complexity, you can use multi-stage builds to build the docker image used in your k8s deployment.
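Stripped of the Jenkins boilerplate, the essence of such a pipeline is three commands (the image name is a placeholder; BUILD_NUMBER is Jenkins' built-in variable):

docker build -t myrepo/my-service:$BUILD_NUMBER .
docker push myrepo/my-service:$BUILD_NUMBER
kubectl apply -f k8s/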

7. Monitoring

Even in a stable environment like k8s, you need to track your infrastructure and application insights. To collect metrics you can use
Prometheus, and to serve them on a good dashboard you can use Grafana. The CoreOS team developed a nice project called prometheus-operator
that comes with built-in Kubernetes configurations. One-click monitoring!
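At the time of writing, trying it out can be as simple as applying the kube-prometheus manifests shipped in the operator's repository (the path below is from the version I used; it moves between releases, so check the README):

git clone https://github.com/coreos/prometheus-operator.git
kubectl apply -f prometheus-operator/contrib/kube-prometheus/manifests/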

8. Logging

There are several types of logging architectures on Kubernetes, and I mainly focused on cluster-level logging with DaemonSet agents. You can ship your logs to a logging backend like Elasticsearch and view them on a Kibana dashboard, or, if you don't want to maintain an ELK stack, you can use
https://humio.com/ for a fast, centralized, real-time logging and monitoring system. Just use their Kubernetes integration.
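For reference, here is a trimmed-down sketch of the DaemonSet approach with fluentd (the image tag and the Elasticsearch address are assumptions, and a production manifest would also need RBAC and more mounts):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: elasticsearch.logging.svc
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

One agent pod per node tails the container logs under /var/log and forwards them to the backend.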

9. APM & Service Mesh

Monitoring and logging may not help you all the time; sometimes you need deeper insight into your application. In the
microservice and container world, Instana is a good choice to handle tracing and monitoring with a simple sensor integration. You can build your
infrastructure map, see the traces and spans of a request lifecycle, and even watch real-time service requests on a simple dashboard.

10. API Gateway

If you are planning to expose your services to the public, you should definitely manage your APIs with an API Gateway to perform authentication,
authorization, rate limiting, API versioning, etc. I used the Tyk API Gateway on Kubernetes to route traffic to the microservices
once requests have been successfully validated by the gateway.

11. Event Sourcing & CQRS

In a synchronous world, you can only change 1 object in 1 transaction at a time. When you switch to distributed systems, you need
2-phase commits in an extended architecture, and with that strategy, whenever you update the current state of an object, all
of its previous states are gone. Instead, you can use Event Sourcing, with asynchronous events stored in an event store like Apache Kafka, Hazelcast, etc.
You can also separate reads (queries) from writes (commands) in order to handle events asynchronously and populate the desired views in the database,
to be served via queries later.

Hope the sections above serve as a good reference for your next microservice architecture design.

Docker Multi-Stage Builds


We use docker images to run the same application with its dependencies on any kind of environment. Having both compile-time and runtime dependencies is the nature of a developer's life. For example, in the previous article, we had a Golang dependency to build our Golang REST API. We need to design our dependencies carefully to eliminate unnecessary dependencies from the running container. In this tutorial, we will see how to use docker multi-stage builds to bundle our application in real life.

Ways to Handle Dependencies

Let's say you have a Golang application and you are developing a docker image to run it on a production environment. You can follow 2 ways to handle this:

  1. Use a build tool like Jenkins to compile your Golang application into a binary and add this binary to the docker image. You need Golang installed on your Jenkins machine to build the project.
  2. Add Golang as a dependency to your docker image, then either run the application within the container with go run …, or build the application inside the docker image and execute the binary.

In the 1st strategy, you need Golang installed in order to build docker images, since you need a compiled binary. When a new joiner of your team clones the project, he/she needs to install Golang before starting to work.

In the 2nd strategy, there is no need to install anything; just clone the project and build the docker image, since everything is included inside the Dockerfile definition.

Wait! Why do we have the Golang dependency inside the docker image when we only need it once, at compile time? Because we are too lazy to install Golang locally, build the project, and use only the output inside the docker image 🙂 Let's find a proper way to be happy with our laziness and still eliminate the dependency.

Multi-Stage Builds

Multi-stage builds help us keep our Dockerfile clean and reduce the image size by not including dependencies you will not need at run time. To do this, we simply do some operations in the first stage and then pass the output of the first stage to the second stage.


FROM instrumentisto/glide as builder
WORKDIR /go/src/bitbucket.org/kloiahuseyin/flowmon-projects
COPY . .
RUN glide install
RUN CGO_ENABLED=0 GOOS=linux go build -a -tags flowmon -o build/flowmon-projects -ldflags '-w' .

FROM scratch
COPY --from=builder /go/src/bitbucket.org/kloiahuseyin/flowmon-projects/build/flowmon-projects app
ENV PORT 3000
EXPOSE 3000
ENTRYPOINT ["/app"]

In the first stage, we pull the image we depend on for glide and give it the alias builder. Since this Dockerfile lives inside our project, we simply add the project files to the docker image. With glide install, we install all the dependencies of the Golang project. Finally, we compile our Golang application into an executable binary with CGO_ENABLED=0 GOOS=linux go build -a -tags flowmon -o build/flowmon-projects -ldflags '-w' .

At the end of the first stage, we have an intermediate image that contains our binary. In the second stage, we start from scratch, which is an empty image, and copy the binary out of the first stage with COPY --from=builder /go/src/bitbucket.org/kloiahuseyin/flowmon-projects/build/flowmon-projects app. Now we have our binary at /app, and the only thing left to do is provide it as the entrypoint of our image.
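Building the image stays a single, dependency-free command on any machine with docker installed (the image tag is up to you):

docker build -t flowmon-projects .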

Conclusion

There are several ways to create a docker image, and the best one keeps dependencies to a minimum across platforms. Instead of requiring everyone to install build dependencies, it is better to handle them within the Dockerfile by using multi-stage builds. For example, you no longer need to go to the Jenkins server to figure out the application's build dependencies in order to create the docker image locally. If you want to see this in action, you can refer to the source code here.

Micro Docker Images for Go Applications

Micro Docker Image for Go Application

Nowadays, with Docker, we are able to run an application in seconds just by grabbing a specific image from the many official or custom docker images available, and that's it. However, this easiness of using ready-made images may leave us with docker images hundreds of megabytes in size! In this article, I will show you how to create damn small images for a Go application and run them. Let's rock!

Sample Go REST API

I assume you already have Golang and the necessary tools installed on your system. This project has no external library dependencies, so just create a file called main.go and put the following content inside:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
		fmt.Fprint(writer, "Hello, World")
	})
	log.Fatal(http.ListenAndServe(":3000", nil))
}

This is a sample application that listens for requests on port 3000 and responds with Hello, World. You can try it yourself by executing go run main.go and going to http://localhost:3000.

Dockerize Go Project with Official Image

What is your first step when dockerizing a project? Search for the official image you need on Docker Hub and use it inside a Dockerfile, right? Let's use this approach first. Create a Dockerfile inside the project folder and use the official golang image to run your project inside a container.

FROM golang
ADD . ./
RUN go build -o main
ENV PORT 3000
EXPOSE 3000
CMD ["./main"]

Build your docker image with docker build -t my-go-app-golang .

Docker Build

Now check your image with docker images | grep my-go-app-golang

Docker Image Size

739 MB! If you are happy with this size, you don't need to go any further; thanks for reading so far 🙂 If you want to make it smaller, continue reading…

Dockerize Go Project with Official Alpine Image

Docker Alpine images are images with minimal dependencies, meaning you will not find lots of tools inside when you exec in and look around. A typical Golang application does not need that many dependencies, so an alpine image may fit our needs well.

Use the Alpine image inside your Dockerfile like below:

FROM golang:alpine
ADD . ./
RUN go build -o main
ENV PORT 3000
EXPOSE 3000
ENTRYPOINT ["./main"]

Build it with docker build -t my-go-app-alpine .

Docker Build Alpine

Now check your image with docker images | grep my-go-app-alpine

Docker Image Size Alpine

275 MB. Is it small enough for you? OK, keep reading for the next surprise docker image size.

Dockerize Go Project with Docker Scratch Image

Think of a docker image with nothing inside it. Yes, that is the Docker scratch image. You cannot pull this image, but you can refer to it in your Dockerfile. The very first line after the reference becomes the 1st layer of the filesystem. The main strategy here is to provide your binary as the entrypoint of this scratch image, and that's all.

We need a binary of our Golang application to provide as an entrypoint to our scratch image. In order to build the binary, go to your project folder and execute:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

Here, with CGO_ENABLED=0, we disable cgo and build the Golang application statically, which means the binary carries all of its dependencies once you copy it into an image. The -a flag forces a rebuild of all packages, to be sure everything is included. After this execution, you will have a binary inside your project folder.

Golang Build

We have the binary; now create a Dockerfile with the following content.

FROM scratch
ADD main ./
ENV PORT 3000
EXPOSE 3000
ENTRYPOINT ["/main"]

You can create your image with docker build -t my-go-app-scratch .

Docker Build Scratch

And when you check your image size docker images | grep my-go-app-scratch

Docker Image Size for Scratch

6.1 MB! If this is not small enough for you, keep reading …

Just joking 🙂 This is the most minimal image I can offer for this application. We have made the image 100 times smaller than the initial one. This minimal image will keep our motivation very high, because you will be able to deploy it in a very short time on any kind of environment.
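As a quick sanity check that the tiny image still serves traffic (assuming the app listens on its default port 3000):

docker run --rm -p 3000:3000 my-go-app-scratch
curl http://localhost:3000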

Conclusion

Using an official docker image is a very good first choice for an application, but if you have a good devops mindset, I know you will try to make the docker image smaller. The next step will most probably be an alpine image or another small image like busybox, etc. And if you can compile your project into a single binary, do use the docker scratch image to get the smallest docker image possible.

You can find the application on Github here.

Deploy Auto-Scalable Node.js Application on Kubernetes Cluster — Part 2


In the previous article, we set up a Kubernetes cluster using minikube and applied some kubectl commands to deploy a sample Node.js application to the cluster. In this article, we will configure our application so that it auto-scales according to CPU load. Fasten your seat belts!

Horizontal Pod Autoscaler

Kubernetes helps us scale the pods in a replication controller, deployment, or replica set according to observed CPU utilization.

HPA Scheme

The logic behind the architecture above is very simple. The controller manager queries the system resources at an interval defined by the --horizontal-pod-autoscaler-sync-period option (30 seconds by default); if resource usage exceeds the threshold you provided, the pods are scaled up. Conversely, once resource usage returns to a normal state, the pods are scaled down.

Configuration

There are several ways to configure an application for auto-scaling; we will define this config inside our project, namely in the k8s folder. I assume you have minikube started and the Node.js application already deployed to the Kubernetes cluster. Go to the project, add a file named hpa.yml to the k8s folder, and put the following content into it:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: node-example
  namespace: default
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: node-example-deployment
  targetCPUUtilizationPercentage: 1

With the targetCPUUtilizationPercentage option we are saying: once the observed CPU load goes above 1%, scale this pod up. (1% is absurdly low for real workloads, but it makes the demo easy to trigger.)

After this configuration, apply your changes with the following:

kubectl apply -f k8s/hpa.yml

HPA apply

You can make sure the HPA configuration is in place with the following command:

kubectl get hpa

HPA resources

Metric Collection

As you can see, there is something strange above: in the TARGETS column there is no current metric data. In order to fix that, check the addon list on minikube with:

minikube addons list

By default, the heapster addon is disabled here. To let the controller manager query your resources, you need to enable this addon:

minikube addons enable heapster

And now we can see the TARGETS value.

CPU usage

CPU usage is 0%, so let's put some load on this application.

Hardening CPU

ab -c 5 -n 1000 -t 100000 http://192.168.99.100:30001/

I assume you already have Apache Benchmark installed on your computer. With the command above, up to 1000 requests will be sent to our application at a concurrency level of 5 (the -t flag caps the total run time). You can also see the demo video below.
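While the benchmark runs, you can watch the autoscaler react from another terminal with plain kubectl:

kubectl get hpa -w
kubectl get pods -w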

Conclusion

To sum up, we added a simple configuration to our project and applied it to enable the Horizontal Pod Autoscaler. Using the Apache Benchmark tool, we sent requests to our application to stress the CPU. If you want to see the HPA configs, you can access them here.

Deploy Auto-Scalable Node.js Application on Kubernetes Cluster — Part 1


We face crazy technologies every day, and we, as developers, need to decide which ones are production ready. During this decision period, there are several criteria we use to convince ourselves, and being able to simulate production behaviour on a developer machine is a must. In this tutorial, we will create a Kubernetes cluster with Minikube on our local computer and then deploy a sample Node.js application to it, in a way that scales according to outside load.

Kubectl

Kubectl is a command line tool for running commands against a Kubernetes cluster. On macOS, you can install it via:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl

It becomes usable after granting it execution permission and moving it to the user's local bin folder:

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

You can verify it with kubectl version

Kubectl Version

From now on, we will be able to access a Kubernetes cluster with kubectl. If you are using a different operating system, you can refer to the installation instructions here.

Minikube

Minikube is a tool that lets us create a Kubernetes cluster on our local computer. You can install minikube with the following command:

brew cask install minikube

If you are using a different operating system, you can refer to the installation instructions here.

We can start a Kubernetes cluster locally by executing minikube start

Minikube start

As you can see in the output, our kubectl client is automatically configured to connect to the local Kubernetes cluster. To test this, you can list the services with:

kubectl get services

Kubectl services

Sample Node.js Application

Here is yet another sample Hello World Node.js application:


const http = require('http');
const port = process.env.PORT || 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, () => {
  console.log(`Server running on port: ${port}`);
});

This application listens on port 3000 unless an environment variable with the key PORT is set.
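For a quick local smoke test (assuming you saved the file as index.js, which the Dockerfile below also expects):

node index.js
# or override the port:
PORT=8080 node index.js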

Docker Image Preparation

In order to deploy this app on Kubernetes, we prepare a Dockerfile to build a docker image for later use. The Dockerfile is as follows:

FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD index.js ./
ADD package.json ./
RUN npm install
CMD ["npm", "start"]

I assume you already have a Docker Hub account; when you execute the following, you will be able to push your image to Docker Hub. Do not forget to replace the <username> sections below.

docker login
docker build -t <username>/node-example .
docker push <username>/node-example

Now we are ready to use this docker image on our Kubernetes deployments.
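Before moving on, you may want to verify the image locally (the port mapping assumes the app's default):

docker run --rm -p 3000:3000 <username>/node-example
curl http://localhost:3000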

Kubernetes Deployments & Services

My day-to-day conventions for Kubernetes projects are as follows:

  • Create a folder k8s inside your project
  • Create deployment.yml inside k8s
  • Create service.yml inside k8s

In the deployment file, we simply define our project metadata and the container definitions used to manage the pods. Here is our deployment.yml file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-example-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-example
    spec:
      containers:
      - name: node-example
        image: huseyinbabal/node-example
        imagePullPolicy: Always
        ports:
        - containerPort: 3000

You can tell this is a deployment object by looking at the kind keyword. With the replicas keyword, I am saying there will be only one instance behind the service. In the containers section, we provide our docker image and the container's internal port number.

We are done with the deployment, so let's expose our app to the real world. This time we have a service file like the one below:

apiVersion: v1
kind: Service
metadata:
  name: node-example
  labels:
    app: node-example
spec:
  selector:
    app: node-example
  ports:
  - port: 3000
    protocol: TCP
    nodePort: 30001
  type: LoadBalancer

We are simply exposing our port 3000 to the outside as node port 30001, through a service of type LoadBalancer. Now go to your project folder and execute the following:

kubectl apply -f k8s

This command will create/update the service and deployment on the Kubernetes cluster, using the definitions inside the files in the project's k8s folder.

Kubectl apply

In order to check your service, deployments, and pods, you can use the command line.
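The usual read-only trio is enough here:

kubectl get services
kubectl get deployments
kubectl get pods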

Kubectl service

Kubectl deployments

Kubectl pods

As you can see, we have 1 running pod behind our service. You can also see the status above by using minikube:

minikube dashboard

Minikube dashboard

Now we are ready to access our service by using:

minikube service node-example

This will open our service in a browser, resolving the address through Kubernetes internals.


You can access the Github project here.

Conclusion

To sum up: we created a Kubernetes cluster using minikube, which let us connect to it with kubectl, and the Node.js application helped us test our deployments. In this tutorial, I mainly focused on preparing Kubernetes and deploying the application. In the next tutorial, I will show you how to scale your application manually and automatically, supported by benchmark operations.

Ultimate IoT with Docker, Jenkins, and Raspberry Pi — Installation

Jenkins Raspberry Pi Deployment

Docker is very popular nowadays, and lots of companies try to use containerization technologies on all kinds of platforms. Raspberry Pi has its own special place in this containerization world. In this tutorial, I will show how to deploy a containerized Node.js application to a Raspberry Pi 3 by using Jenkins.

Prerequisites


  1. Raspberry Pi (In this tutorial I will use Raspberry Pi 3)
  2. Mini SD Card
  3. USB Adapter for Mini SD Card
  4. Power Cable for Raspberry Pi
  5. External Display with HDMI support

Prepare Operating System Image

The first thing we need to do is transfer an OS to the Raspberry Pi with an OS installation manager called NOOBS (New Out Of Box Software). Download it from here and transfer the contents of the zip file to the empty SD card. Make sure the files extracted from the zip end up at the root level of the SD card, not inside another folder.

First Boot

Connect the HDMI display to the Raspberry Pi, insert the SD card into the correct socket on the Raspberry Pi, and connect your keyboard to a USB socket on the Raspberry Pi.

Raspberry Pi 3

Once you connect the power cable to the Raspberry Pi, the installer on the SD card boots up automatically.

NOOBS

Select Raspbian with PIXEL here and continue with the installation.

Overwrite Data

Installing Raspbian

SSH Configuration

The OS installation is finished and we are able to connect to the Raspberry Pi. To do that, we first need to allow SSH connections. Go to Raspberry Pi > Preferences > Raspberry Pi Configuration.

Raspberry Pi Config

On this screen, click the Interfaces tab, then enable SSH.

SSH

Connecting to Raspberry Pi

Now we need to detect the IP address of the Raspberry Pi. Connect the Raspberry Pi to your internet modem with a Cat5 cable (you can see the network socket on the Raspberry Pi). Go to your modem's management web UI and grab the IP address of the connected Raspberry Pi. Then connect to it with:

ssh pi@<ip_address_of_raspberry_pi>

You need to provide the default password, which is raspberry.

SSH connect

Docker Installation

After connecting to the Raspberry Pi, we are ready to install docker with a single command:

curl -sSL https://get.docker.com | sh

Docker Install

After a successful installation, you can see the Docker client and server versions with the command:

docker version

Docker Version
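As a quick smoke test that containers really run on the Pi's ARM processor, the official hello-world image should do, since it ships ARM variants:

docker run --rm hello-world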

We are all set, and now we can use our Raspberry Pi as our own dedicated server 🙂

Conclusion

To sum up, we prepared a Raspberry Pi device with Docker installed, ready for our DevOps fantasies 🙂 In the following articles, I will talk about how we managed to replace hard-wired circuits and special devices with Raspberry Pis, using microservices for intercommunication instead of writing libraries and drivers to make the devices talk to each other.