Docker Multi-Stage Builds

We use Docker images to run the same application, with its dependencies, in any kind of environment. Dealing with compile-time and runtime dependencies is part of every developer's life. For example, in the previous article we needed Golang installed to build our Golang REST API. We should design our images carefully so that dependencies needed only at build time do not live inside the running container. In this tutorial, we will see how to use Docker multi-stage builds to bundle a real-life application.

Ways to Handle Dependencies

Let's say you have a Golang application, and you are developing a Docker image to run it on a production environment. There are two ways to handle this:

  1. Use a build tool like Jenkins to compile your Golang application into a binary, and add that binary to the Docker image. You need Golang installed on your Jenkins machine to build the project.
  2. Add Golang as a dependency of your Docker image, and either run the application inside the container with go run … or build the binary inside the image and execute it.

In the 1st strategy, you need Golang installed in order to build the Docker image, since you need a compiled binary. When a new member joins your team and clones the project, they need to install Golang before they can continue to work.

In the 2nd strategy, there is no need to install anything: just clone the project and build the Docker image, since everything is included in the Dockerfile definition.

Wait! Why do we have a Golang dependency inside the Docker image when we only need it once, at compile time? Because we are too lazy to install Golang locally, build the project, and use only the resulting binary inside the image 🙂 Let's find a proper way to stay lazy and still eliminate the dependency.

Multi-Stage Builds

Multi-stage builds help us keep our Dockerfile clean and reduce the image size by not including dependencies that are not needed at run time. To do this, we simply do some operations in the first stage and then pass the output of the first stage to the second stage.


FROM instrumentisto/glide as builder
WORKDIR /go/src/bitbucket.org/kloiahuseyin/flowmon-projects
COPY . .
RUN glide install
RUN CGO_ENABLED=0 GOOS=linux go build -a -tags flowmon -o build/flowmon-projects -ldflags '-w' .
FROM scratch
COPY --from=builder /go/src/bitbucket.org/kloiahuseyin/flowmon-projects/build/flowmon-projects app
ENV PORT 3000
EXPOSE 3000
ENTRYPOINT ["/app"]

In the first stage, we pull the base image for glide and give it the alias builder. Since this Dockerfile lives inside our project, we simply copy the project files into the image. With glide install, we install all the dependencies of the Golang project. Finally, we compile the application into an executable binary with CGO_ENABLED=0 GOOS=linux go build -a -tags flowmon -o build/flowmon-projects -ldflags '-w' .

At the end of the first stage, we have an image that contains our binary. In the second stage, we start FROM scratch, which is an empty image, and copy the binary from the first stage with COPY --from=builder /go/src/bitbucket.org/kloiahuseyin/flowmon-projects/build/flowmon-projects app. Now we have our binary app, and the only thing left to do is provide it as the entrypoint of our image.

Conclusion

There are several ways to create a Docker image, and the best one is the way with the minimum of dependencies across platforms. Instead of requiring dependencies to be installed on the build machine, it is better to handle them inside the Dockerfile by using multi-stage builds. For example, you no longer need to check the Jenkins setup to see which dependencies are required to build the Docker image locally. If you want to see this in action, you can refer to the source code here

Micro Docker Images for Go Applications

Nowadays we can run an application in seconds with Docker: just grab a specific image from the many official or custom Docker images available, and that's it. However, this convenience of using ready-made images may leave us with Docker images that are hundreds of megabytes in size! In this article, I will show you how to create tiny images for a Go application and run them. Let's rock!

Sample Go REST API

I assume you already have Golang and the necessary tools installed on your system. This project depends on no external library, so just create a file called main.go and put the following content inside:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
		fmt.Fprint(writer, "Hello, World")
	})
	log.Fatal(http.ListenAndServe(":3000", nil))
}

This is a sample application that listens for requests on port 3000 and responds with Hello, World. You can try it yourself by executing go run main.go and going to http://localhost:3000

Dockerize Go Project with Official Image

What is your first step when dockerizing a project? Search for the official image you need on Docker Hub and use it inside your Dockerfile, right? Let's try this approach first. Create a Dockerfile inside the project folder and use the official golang image to run your project inside a container.

FROM golang
ADD . ./
RUN go build -o main
ENV PORT 3000
EXPOSE 3000
CMD ["./main"]

Build your docker image with docker build -t my-go-app-golang .

Docker Build

Now check your image with docker images | grep my-go-app-golang

Docker Image Size

739 MB! If you are happy with this size, you don't need to go further, thanks for reading so far 🙂 If you want to make this smaller, continue reading…

Dockerize Go Project with Official Alpine Image

Docker Alpine images are minimal images, which means you will not see many tools inside when you exec into a container and look around. A typical Golang application does not need many dependencies, so an Alpine image may fit our needs well.

Use the Alpine image inside your Dockerfile like below:

FROM golang:alpine
ADD . ./
RUN go build -o main
ENV PORT 3000
EXPOSE 3000
ENTRYPOINT ["./main"]

Build it with docker build -t my-go-app-alpine .

Docker Build Alpine

Now check your image with docker images | grep my-go-app-alpine

Docker Image Size Alpine

275 MB, is it small enough for you? OK, keep reading for the next surprising Docker image size.

Dockerize Go Project with Docker Scratch Image

Think of a Docker image with nothing inside it. Yes, that is the Docker scratch image. You cannot pull this image, but you can refer to it in your Dockerfile. The very next line after the reference will be the first layer of your filesystem. The main strategy here is to provide your binary as the entrypoint of this scratch image, and that's all.

We need a binary of our Golang application to provide as an entrypoint to our scratch image. In order to build a binary, go to your project folder and execute:

CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

Here, with CGO_ENABLED=0, we disable cgo and build the Golang application statically, which means the binary carries all of its dependencies once you copy it into the image. -a forces a rebuild of all packages to be sure everything is linked in. After this execution, you will have a binary inside your project folder.

Golang Build

We have a binary and now create Dockerfile with following content.

FROM scratch
ADD main ./
ENV PORT 3000
EXPOSE 3000
ENTRYPOINT ["/main"]

You can create your image with docker build -t my-go-app-scratch .

Docker Build Scratch

And check your image size with docker images | grep my-go-app-scratch

Docker Image Size for Scratch

6.1 MB! If this is not small enough for you, keep reading …

Just joking 🙂 This is the smallest image I can offer for this application. We have made the image more than 100 times smaller than the initial one. This minimal image will keep our motivation very high, because you will be able to deploy it in a very short time on any kind of environment.

Conclusion

Using an official Docker image for an application is a very good first choice, but if you have a good DevOps mindset, I know you will try to make the image smaller. The next step will most probably be using an Alpine image or another small image like busybox. If you are able to compile your project into a single binary, do use the Docker scratch image to get the smallest possible image.

You can find application on Github here

Deploy Auto-Scalable Node.js Application on Kubernetes Cluster — Part 2


In the previous article, we set up a Kubernetes cluster using minikube and applied some kubectl commands to deploy a sample Node.js application to the cluster. In this article, we will configure our application so that it auto-scales according to CPU load. Fasten your seat belts!

Horizontal Pod Autoscaler

Kubernetes helps us scale the pods in a replication controller, deployment, or replica set according to observed CPU utilization.

HPA Scheme

The logic behind the architecture above is very simple. The controller manager queries the resource metrics periodically, with a period defined by the --horizontal-pod-autoscaler-sync-period option (30 seconds by default). If the observed utilization exceeds the threshold you provided, the pods are scaled up. Conversely, when utilization returns to normal, the pods are scaled down.
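The scaling rule itself can be sketched in a few lines. This is only an illustration of the idea, with hypothetical parameter names; the real controller also applies tolerances and cooldown windows:

```python
import math

def desired_replicas(current_replicas, current_util, target_util,
                     min_replicas, max_replicas):
    # Approximation of the HPA rule:
    # desired = ceil(currentReplicas * currentUtilization / targetUtilization),
    # clamped to the [minReplicas, maxReplicas] range.
    desired = math.ceil(current_replicas * current_util / target_util)
    return max(min_replicas, min(desired, max_replicas))

# A single pod observed at 4% CPU against a 1% target grows to 4 replicas,
# while a very high load is capped at maxReplicas.
print(desired_replicas(1, 4, 1, 1, 5))    # -> 4
print(desired_replicas(1, 100, 1, 1, 5))  # -> 5
```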

Configuration

There are several ways to configure an application for auto-scaling; we will define this config inside our project, in the k8s folder. I assume you have minikube started and have already deployed the Node.js application to the Kubernetes cluster. Go to the project, add a file named hpa.yml to the k8s folder, and put the following content in it:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: node-example
  namespace: default
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: node-example
  targetCPUUtilizationPercentage: 1

With the targetCPUUtilizationPercentage option we are saying: once the observed CPU load goes above 1%, scale this pod up.

After this configuration, apply your changes with the following:

kubectl apply -f k8s/hpa.yml

HPA apply

You can verify the HPA configuration with the following command:

kubectl get hpa

HPA resources

Metric Collection

As you can see, there is something strange above: in the TARGETS column there is no current metric data. To fix that, you need to check the addon list on minikube:

minikube addons list

By default, the heapster addon is disabled here. To let the controller manager query your resource metrics, you need to enable this addon. You can enable it via the following:

minikube addons enable heapster

And now, we can see TARGETS value.

CPU usage

CPU usage is 0%, so let's put some load on this application.

Stressing the CPU

ab -c 5 -n 1000 -t 100000 http://192.168.99.100:30001/

I assume you already have Apache Benchmark installed on your computer. With the above command, a total of 1000 requests will be sent to our application within 100 seconds, with a concurrency level of 5. You can also see the demo video below:
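If you don't have ab at hand, the same kind of concurrent load can be sketched in a few lines of Python. This is only an illustration, not a benchmark tool; it spins up a throwaway local server so the snippet is self-contained, but in real use you would point the URL at your service (e.g. the minikube address above):

```python
import http.server
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Throwaway local server standing in for the deployed service.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_port

def hit(_):
    # One GET request; return the HTTP status code.
    with urllib.request.urlopen(url) as resp:
        return resp.status

# 100 requests with a concurrency level of 5 (ab above used -c 5 -n 1000).
with ThreadPoolExecutor(max_workers=5) as pool:
    statuses = list(pool.map(hit, range(100)))

print(sum(1 for s in statuses if s == 200), "successful requests")
server.shutdown()
```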

Conclusion

To sum up, we added a simple configuration to our project and applied it to enable the Horizontal Pod Autoscaler. Using the Apache Benchmark tool, we sent some requests to our application to stress the CPU. If you want to see the HPA configs, you can access them here

Deploy Auto-Scalable Node.js Application on Kubernetes Cluster — Part 1


We face crazy technologies every day, and we, as developers, need to decide which ones are production ready. During this decision period, there are several criteria we use to convince ourselves, and being able to simulate production environment behaviours on a developer machine is a must. In this tutorial, we will create a Kubernetes cluster with Minikube on our local computer and then deploy a sample Node.js application to this cluster in a way that scales according to outside load.

Kubectl

Kubectl is a command line tool for running commands against a Kubernetes cluster. You can install it via:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/darwin/amd64/kubectl

It will be usable after you grant execution permission and move it to the user local folder:

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

You can verify it with kubectl version

Kubectl Version

From now on, we will be able to access Kubernetes cluster with kubectl. If you are using different operating system, you can refer installation instructions here

Minikube

Minikube is a tool that lets us create a Kubernetes cluster on our local computer. You can install minikube with the following command:

brew cask install minikube

If you are using different operating system, you can refer installation instructions here

We can start a Kubernetes cluster locally by executing minikube start

Minikube start

As you can see in the output, our kubectl client is automatically configured to connect to the local Kubernetes cluster. To test this, you can list the services with:

kubectl get services

Kubectl services

Sample Node.js Application

Here is yet another sample Hello World Node.js application:


const http = require('http');
const port = process.env.PORT || 3000;
const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});
server.listen(port, () => {
  console.log(`Server running on port: ${port}`);
});

This application runs on port 3000 unless an environment variable with the key PORT is set.

Docker Image Preparation

In order to deploy this app on Kubernetes, we prepare a Dockerfile to build a Docker image for later use. The Dockerfile is as follows:

FROM node:alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD package.json ./
RUN npm install
ADD index.js ./
CMD ["npm", "start"]

I assume you already have a Docker Hub account; when you execute the following, your image will be pushed to Docker Hub. Do not forget to replace the <username> section below.

docker login
docker build -t <username>/node-example .
docker push <username>/node-example

Now we are ready to use this docker image on our Kubernetes deployments.

Kubernetes Deployments & Services

The conventions I follow for my day-to-day Kubernetes projects are as follows:

  • Create a folder k8s inside your project
  • Create deployment.yml inside k8s
  • Create service.yml inside k8s

In the deployment file, we simply define the project metadata and the container definitions used to manage the pods. Here is our deployment.yml file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-example-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-example
    spec:
      containers:
      - name: node-example
        image: huseyinbabal/node-example
        imagePullPolicy: Always
        ports:
        - containerPort: 3000

You can see that this is a Deployment object by looking at the kind keyword. With the replicas keyword, I am saying that there will be only one instance behind the service. In the containers section, we provide our Docker image and the container's internal port number.

We are done with the deployment, so let's expose our app to the real world. This time we will have a service file like below:

apiVersion: v1
kind: Service
metadata:
  name: node-example
  labels:
    app: node-example
spec:
  selector:
    app: node-example
  ports:
  - port: 3000
    protocol: TCP
    nodePort: 30001
  type: LoadBalancer

We simply expose our internal port 3000 as 30001 to the outside world, via a service of type LoadBalancer. Now go to your project folder and execute the following:

kubectl apply -f k8s

This command will create/update the service and deployment on the Kubernetes cluster by using the definitions inside the k8s folder of the project.

Kubectl apply

To check your deployments and services, you can use the command line:

Kubectl service

Kubectl deployments

Kubectl pods

As you can see, we have 1 running pod behind our service. You can also see the status above by using minikube:

minikube dashboard

Minikube dashboard

Now we are ready to access our service by using;

minikube service node-example

This will open our service in a browser by using Kubernetes internals.


You can access Github project here

Conclusion

To sum up: we created a Kubernetes cluster by using minikube, which enabled us to connect to the cluster with kubectl. A Node.js application helped us test our deployments. In this tutorial, I mainly focused on Kubernetes preparation and application deployment. In the next tutorial, I will show you how to scale your application manually and automatically, with supporting benchmark operations.

Ultimate IoT with Docker, Jenkins, and Raspberry Pi — Installation

Jenkins Raspberry Pi Deployment

Docker is very popular nowadays, and lots of companies try to use containerization technologies on all kinds of platforms. Raspberry Pi has its own special place in this containerization world. In this tutorial, I will explain how to deploy a containerized Node.js application to a Raspberry Pi 3 by using Jenkins.

Prerequisites


  1. Raspberry Pi (In this tutorial I will use Raspberry Pi 3)
  2. Mini SD Card
  3. USB Adapter for Mini SD Card
  4. Power Cable for Raspberry Pi
  5. External Display with HDMI support

Prepare Operating System Image

The first thing we need to do is transfer an OS to the Raspberry Pi with an OS installation manager called NOOBS (New Out Of Box Software). Download it from here and transfer the contents of the zip file to an empty SD card. Make sure the files extracted from the zip are in the root folder of the SD card, not in a subfolder.

First Boot

Connect the HDMI display to the Raspberry Pi, insert the SD card into the correct socket, and connect your keyboard to a USB socket on the Raspberry Pi.

Raspberry Pi 3

Once you connect the power cable to the Raspberry Pi, the operating system on the SD card will boot up automatically.

NOOBS

Select Raspbian with PIXEL here and continue with the installation.

Overwrite Data

Installing Raspbian

SSH Configuration

The OS installation is finished, and we are now able to connect to the Raspberry Pi. In order to do that, we need to allow SSH connections first. Go to Raspberry Pi > Preferences > Raspberry Pi Configuration.

Raspberry Pi Config

On that screen, click the Interfaces tab and enable SSH.

SSH

Connecting to Raspberry Pi

We now need to detect the IP address of the Raspberry Pi. Connect the Raspberry Pi to your internet modem with a Cat5 cable (you can see the network socket on the Raspberry Pi). Go to your modem's management web UI and grab the IP address of the connected Raspberry Pi. Then connect to it with:

ssh pi@<ip_address_of_raspberry_pi>

You need to provide the default password, which is raspberry.

SSH connect

Docker Installation

After connecting to the Raspberry Pi, we are ready to install Docker with a single command:

curl -sSL https://get.docker.com | sh

Docker Install

After a successful installation, you will see the Docker client and server versions with the command:

docker version

Docker Version

We are all set, and now we can use our Raspberry Pi as our own dedicated server 🙂

Conclusion

To sum up, we have prepared a Raspberry Pi device with Docker installed, ready for our DevOps fantasies 🙂 In the following articles, I will talk about how we replaced hard-wired circuits and special devices with Raspberry Pis, using microservices for intercommunication instead of writing libraries and drivers to make the devices communicate with each other.

The Hitchhiker’s Guide to Rancher — Installation


What is Rancher?

We are in the cloud age right now, and several companies provide cloud solutions. Besides this, containerisation technologies like Docker are now very simple to use, but for production-ready systems we need orchestration. Neither cloud platforms nor orchestration systems share a single standard: you need to specialise in each cloud provider and each orchestration tool. You can see some of the cloud providers here, and Mesos, Kubernetes, Swarm, and Cattle are some of the orchestration tools you can use. Rancher provides a clean interface to manage your container system in a cloud-agnostic way, isolating you from cloud providers and orchestration tools.

How ?

We have 2 options to run Rancher on our system.

RancherOS

RancherOS is a minimal operating system built by Rancher Labs to provide a production-ready system for running Rancher-related services.

Rancher Server

With this option, we simply run the Docker container located here on a modern Linux distribution with a supported Docker version. RancherOS, Ubuntu, and RHEL/CentOS 7 are the most-tested ones.

We will continue with RancherOS during this tutorial.

Requirements

We will create a virtual machine by using a prebuilt RancherOS image, so only Docker Machine is needed. I assume you all have Docker Machine installed.

Go!

In order to use the Rancher goodies, we need a proper environment. Let's create a machine with the pre-built RancherOS image:

docker-machine create -d virtualbox --virtualbox-boot2docker-url https://releases.rancher.com/os/latest/rancheros.iso kloia

Create Machine with Rancher OS

After the machine is created, you can ssh into it with the following command:

docker-machine ssh kloia

At this first stage, we have no running containers:

Initial RancherOS

Our environment is ready, so we can start the Rancher server with the following command:

sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

Now we have Rancher containers on our system.


As you may have realised, we have exposed port 8080. This port is for the Rancher UI, shown below.

Rancher UI

In order to access the UI, you can get the IP address with the following command.

docker-machine ip <machine_name>

Most probably, the first thing you want to do is add an authentication/authorisation system to your Rancher UI. On the main menu, you will see a warning indicator on the Admin menu. In this tutorial, we will continue with Github login, for which you will need a Github application. To create one, please follow the instructions on that screen.

Github Login Description



Now we are able to log in to the Rancher UI with our Github account.

We have installed the brain of Rancher, and it will drive everything we configure later.

Rancher Environment

Rancher comes with a Default environment once installed on your system. This environment uses Cattle for orchestration. We will create another environment to set everything up from scratch. Go to Manage Environments > Add Environment



While adding your environment, you will be able to select your orchestration tool; we will continue with Kubernetes.

Kubernetes as Orchestration Tool

Since we have no cluster host yet, you will see the following in the newly created environment:


Add Hosts for Cluster

In order to run your containers, services, and stacks on Rancher, you need to add hosts to your cluster to have a stable environment. To do that, you run a pre-configured script, which you can grab from the Rancher UI, on the host you set up. We will create another machine for the Rancher host with the following command:

docker-machine create -d virtualbox rancher-host-1

We already created an environment in the previous step, and when you go to that environment you will see a message in the header warning you to Add Host. Click that link and select Custom. Get the IP address of your Rancher host with the following:

docker-machine ip rancher-host-1

This will give you something like 192.168.99.101. On the Add Host UI, once you enter the IP address of your host, it will generate an agent script that you can run on your cluster host. It will look something like below:

Rancher Cluster Host Add

Copy the generated docker run command and execute it on your cluster host. You can ssh into the machine via the following:

docker-machine ssh rancher-host-1

Running Rancher Agent

After a successful operation, you will see your host on the Rancher UI.

Rancher Host Initializing

Validate Kubernetes

If you managed to add the host successfully, you will see the Kubernetes processes finish successfully on the Environment page:

Kubernetes About to Finish

Kubernetes Initialising Completed

Conclusion

In this tutorial we focused on the complete installation of Rancher, to have a stable, scalable, and manageable containerised environment. We will cover running stacks, services, and containers in upcoming Rancher tutorials.

Zero Down Time Microservices

At GDG Devfest Istanbul, I talked about Zero Down Time Microservices. The main concept was Node.js microservices dispatched by another Node.js application, with service discovery backed by MongoDB. All the requests handled by the dispatcher are synchronized to Elasticsearch, so that all the logs can be analyzed in Kibana and API requests can be monitored against defined thresholds.

Don’t Try to Unit Test Controller Request Validation

Don't try to unit test controller request validation, because MockMvc is a container-less Spring environment, which means the exception resolver will not be activated during a controller unit test. MethodArgumentNotValidException will not be resolved, and you will get an empty response body with the desired status (400 Bad Request) in your unit-test response. The best way to test controller request validation for bad-request assertions is in the integration tests of the controllers.

DevFest Istanbul 2015 — Infinite Scalable Systems with Docker, Docker Swarm, Docker Machine

My talk about Infinite Scalable Systems with Docker, Docker Machine, and Docker Swarm at GDG Istanbul '15. I gave a quick history and introduction to Docker, then set up a native clustering system with Docker Machine and Docker Swarm. Finally, I scaled Node.js microservices across the entire architecture. Hope you have fun.

https://www.youtube.com/embed/XaSdCGvTSFk