Being able to react to any kind of anomaly in a production environment is key to success. Kubernetes provides good features that let you revert your deployments with a simple command. If you have ever heard a scenario like the following in your company, this session will be the medicine for you. “The Payment microservice is in an unstable state after the last deployment, what can we do?” “I realized that the Cart service has an incorrect version while the deployment is half done, I need to revert it…”
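For the scenarios above, the built-in rollback commands are usually enough. The sketch below assumes a deployment named payment; the name is illustrative:

```shell
# Inspect the revision history of the deployment
kubectl rollout history deployment/payment

# Roll back to the previous revision with a single command
kubectl rollout undo deployment/payment

# Or roll back to a specific revision from the history
kubectl rollout undo deployment/payment --to-revision=2

# Watch the rollback progress
kubectl rollout status deployment/payment
```

These commands need a live cluster and an existing deployment, so treat them as a pattern rather than a copy-paste recipe.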
At the end of April 2018, I visited Amsterdam for a good reason: visiting the good places that I couldn’t visit 3 years ago. After recharging with that energy, I gave a talk about Microservice Best Practices on Kubernetes. The event was hosted by Booking, and I was very happy with their hospitality. Thank you again!
Let me provide a brief summary of each topic I mentioned at the event. You can see my slides here, if you are not the patient type 🙂
1. Glory of REST
Microservices are like humans: they need to communicate with each other through well-structured interfaces. Richardson’s Maturity Model is a good reference for this.
2. Power of HATEOAS
Hypermedia As The Engine Of Application State provides navigable resources, so you will find all the information you need within the response itself. Forget about generating links in every kind of client application just to navigate from one resource to the next.
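A hypermedia response could look like the sketch below; the resource and link names are illustrative, not a specific API:

```json
{
  "orderId": "12345",
  "status": "PAID",
  "_links": {
    "self":   { "href": "/orders/12345" },
    "cancel": { "href": "/orders/12345/cancel" },
    "items":  { "href": "/orders/12345/items" }
  }
}
```

The client simply follows the `_links` it receives instead of constructing URLs itself, so the server can change its URL layout without breaking clients.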
3. Distributed Configuration
When you switch to a microservice architecture, you will need to configure multiple services at the same time, those configs must be applied to the applications in real time, and so on. Distributed configuration can be handled with Consul as a key/value store, git2consul for synchronizing configurations into Consul, and a simple git project to keep those configurations under version control.
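A git2consul setup can be driven by a small JSON config along these lines; the repo URL and names are placeholders, so check the git2consul documentation for the exact options:

```json
{
  "repos": [
    {
      "name": "app-config",
      "url": "https://git.example.com/config.git",
      "branches": ["master"],
      "hooks": [
        { "type": "polling", "interval": "1" }
      ]
    }
  ]
}
```

With polling enabled, every commit to the config repo is mirrored into Consul's key/value store, and services watching those keys pick up changes in near real time.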
4. Client Code Generation
To make your microservices communicate, you have at least 2 options for inter-service calls. If you are already using service discovery, you can consider Feign Client. Otherwise, you can use swagger-codegen to generate a client library whenever you deploy your app to any kind of environment. Do not even think about writing client libraries manually for your hundreds of microservices; just trigger a Jenkins job and take a REST!
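A Jenkins job for this can boil down to a single swagger-codegen invocation; the spec URL and package names below are illustrative:

```shell
# Generate a Java client from a service's Swagger/OpenAPI spec
java -jar swagger-codegen-cli.jar generate \
  -i http://payment-service/v2/api-docs \
  -l java \
  -o payment-client \
  --api-package com.example.payment.api \
  --model-package com.example.payment.model
```

The generated project can then be published to your internal artifact repository so consumers just add a dependency.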
5. Kubernetes Warm-up
You can create a k8s folder to keep the Kubernetes resource definitions used by your deployment pipeline. A typical microservice has at least a deployment and a service definition, for deploying the application and exposing it to the outside world, or at least to the load balancer.
6. Continuous Deployment
If you keep the Kubernetes specifications within your project, you are ready to deploy your app by using Jenkins with a simple kubectl configuration on the Jenkins servers. To reduce complexity, you can use multi-stage builds to produce the Docker image used in your k8s deployment.
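A multi-stage Dockerfile for a Node.js service could look like the sketch below; the base image version, paths, and start command are assumptions:

```dockerfile
# Build stage: install all dependencies and run the build
FROM node:8-alpine AS build
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Runtime stage: ship only what the app needs to run
FROM node:8-alpine
WORKDIR /app
COPY --from=build /app /app
EXPOSE 3000
CMD ["node", "server.js"]
```

The build tooling stays in the first stage, so the final image pushed to your registry stays small.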
7. Monitoring
Even if you are in a stable environment like k8s, you need to track your infrastructure and application insights. To collect metrics, you can use Prometheus, and to serve them on a good dashboard, you can use Grafana. The CoreOS team developed a good project called prometheus-operator that comes with built-in Kubernetes configurations. One-click monitoring!
8. Logging
There are several types of logging architecture on Kubernetes, and I mainly focused on cluster-level logging with DaemonSet agents. You can send your logs to a logging backend like Elasticsearch to show them on a Kibana dashboard, or, if you don’t want to maintain an ELK stack, you can use https://humio.com/ for a fast, centralized, real-time logging and monitoring system. Just use their Kubernetes integration.
9. APM & Service Mesh
Monitoring and logging may not help you all the time; you may need to see deeper insights about your application. When it comes to the microservice and container world, Instana is a good choice for handling tracing and monitoring with a simple sensor integration. You can create your infrastructure map, see traces and spans for a request lifecycle, and even watch real-time service requests on a simple dashboard.
10. API Gateway
If you are planning to expose your services to the public, you should definitely manage your APIs with an API Gateway to perform authentication, authorization, rate limiting, API versioning, etc. I have used Tyk API Gateway in Kubernetes to route traffic to microservices after requests are successfully validated by the gateway.
11. Event Sourcing & CQRS
In a synchronous world, you can change only 1 object in 1 transaction at a time. When you switch to distributed systems, you need 2-phase commits in an extended architecture. Moreover, with this strategy, whenever you update the current state of an object, all the previous states are gone. Instead, you can use Event Sourcing, with asynchronous events stored in an event store like Apache Kafka, Hazelcast, etc. You can also separate reads (queries) and writes (commands) in order to handle events asynchronously and populate the desired views in a database to serve queries later.
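The idea can be sketched in a few lines of Python. This is a toy model, not a framework API: the in-memory list stands in for Kafka/Hazelcast, and all names are illustrative.

```python
# Minimal sketch of Event Sourcing with a CQRS-style read model.
event_store = []   # append-only log of events (stand-in for Kafka/Hazelcast)
balance_view = {}  # query-side view, populated from events

def handle_command(account_id, kind, amount):
    """Command side: append an event instead of overwriting current state."""
    event = {"account": account_id, "kind": kind, "amount": amount}
    event_store.append(event)
    project(event)

def project(event):
    """Projection: update the read model from each event."""
    delta = event["amount"] if event["kind"] == "deposited" else -event["amount"]
    balance_view[event["account"]] = balance_view.get(event["account"], 0) + delta

def query_balance(account_id):
    """Query side: serve reads from the pre-computed view."""
    return balance_view.get(account_id, 0)

handle_command("acc-1", "deposited", 100)
handle_command("acc-1", "withdrawn", 30)
print(query_balance("acc-1"))  # current state comes from the view
print(len(event_store))        # while the full history stays preserved
```

Note that the current balance is derived, while every past state remains reconstructable by replaying the event log, which is exactly what a plain UPDATE would have destroyed.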
I hope the sections above will be a good reference for your next microservice architecture design.
In this session, we will cover how external access to the internal cluster can be architected in a Kubernetes environment. Besides some theoretical information, we will apply some production-ready dojos. The outline will be:
What is Ingress?
Ingress Operations on Production Environment
Isolated and Non-Isolated Pods with Network Policies
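The two building blocks in the outline can be sketched as manifests; hosts, names, and labels below are placeholders (the Ingress API group matches the pre-1.14 era of this post):

```yaml
# Ingress routing an external host to an internal service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80
---
# NetworkPolicy isolating backend pods: only frontend pods may reach them
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
```

Once a NetworkPolicy selects a pod, that pod becomes isolated and only the listed sources can reach it; pods not selected by any policy remain non-isolated.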
In a typical Kubernetes cluster, you can do lots of things to make your DevOps mindset come true. If you plan to grow your cluster from a personal one into an enterprise one, you need to apply authorization rules to restrict some operations according to your needs. In this session, you will see a Kubernetes dojo especially for authorization and secret management.
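An authorization rule of that kind is typically expressed with RBAC. The namespace, user, and role names below are illustrative:

```yaml
# Role: allow read-only access to pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: grant that role to a specific user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: User
  name: developer@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The user can list pods in team-a but cannot delete them or touch secrets, which is the kind of restriction an enterprise cluster needs.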
What would you do if you needed to upgrade your machine types in a production environment on Google Cloud? If you are using Google Kubernetes Engine, it is a piece of cake. I upgraded the machine types from n1-standard-1 to n1-standard-2 for all of the Kubernetes cluster nodes, and you can see my adventure in the following video.
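The usual pattern is to migrate workloads to a new node pool rather than resize nodes in place. Cluster and pool names below are placeholders:

```shell
# 1. Create a new node pool with the bigger machine type
gcloud container node-pools create pool-n1-standard-2 \
  --cluster=my-cluster --machine-type=n1-standard-2 --num-nodes=3

# 2. Cordon and drain each old node so pods reschedule onto the new pool
kubectl cordon <old-node>
kubectl drain <old-node> --ignore-daemonsets

# 3. Delete the old pool once it is empty
gcloud container node-pools delete default-pool --cluster=my-cluster
```

Draining one node at a time keeps enough replicas running for a smooth, near-zero-downtime migration.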
Application configuration is one of the most important parts of a production-ready environment. Kubernetes lets us create and manage these configurations in several ways. Additionally, we need to share application-specific data in a clustered environment, on cloud or on premise. In this webinar, you can see a Kubernetes dojo for production-ready cases.
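One of those ways is a ConfigMap consumed as environment variables; the keys, values, and image below are illustrative:

```yaml
# ConfigMap holding application-specific settings
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: "db.internal"
  LOG_LEVEL: "info"
---
# Pod injecting every key of the ConfigMap as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: example/app:1.0
    envFrom:
    - configMapRef:
        name: app-config
```

Because the ConfigMap lives in the cluster, every replica sees the same configuration regardless of which node it lands on.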
In this session, we had a look at some of the most important concepts of Kubernetes (Services, Pods, and Deployments) in order to understand the application lifecycle in a typical Kubernetes environment. Some of the topics covered are below:
– Creating Deployment
– Single and Multi-Container Concept in Deployment
– Monitoring, Debugging Pods
– Service Types
– Exposing Services to Internet
– Manage Environment Variables of Deployment
– Service-to-Service Communication
In the previous article, we set up a Kubernetes cluster by using minikube and applied some kubectl commands to deploy a sample Node.js application to the cluster. In this article, we will configure our application so that it auto-scales according to CPU load. Fasten your seat belts!
Horizontal Pod Autoscaler
Kubernetes helps us scale the pods of a replication controller, deployment, or replica set according to observed CPU utilization.
The logic behind the architecture above is very simple. The controller manager queries the system resources at a period defined by the --horizontal-pod-autoscaler-sync-period option (30 seconds by default), and if resource usage exceeds the threshold you provided, the pods are scaled up. Conversely, once resource usage returns to normal, the pods are scaled down.
There are several ways to configure an application for auto-scaling, and we will define this config inside our project, in the k8s folder. I assume you have minikube started and the Node.js application already deployed to the Kubernetes cluster. Go to the project, add a file named hpa.yml to the k8s folder, and put the following content into it.
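The original snippet is not reproduced here, so this is a sketch of what hpa.yml could look like; the deployment name and thresholds are assumptions:

```yaml
# k8s/hpa.yml: keep between 1 and 5 replicas, targeting 50% CPU utilization
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: node-app
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: node-app
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
```

After applying it with kubectl apply -f k8s/hpa.yml, you can stress the CPU with Apache Benchmark; the URL and node port are placeholders:

```shell
ab -n 1000 -c 5 http://$(minikube ip):<node-port>/
```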
I assume you already have Apache Benchmark installed on your computer. With the above command, a total of 1000 requests will be sent to our application within 100 seconds at a concurrency level of 5. You can also see the demo video below.
To sum up, we added a simple configuration to our project and applied it to enable the Horizontal Pod Autoscaler. By using the Apache Benchmark tool, we sent some requests to our application to stress the CPU. If you want to see the HPA configs, you can access them here.
We face crazy technologies every day, and we, as developers, need to decide on the ones that are more production-ready. During this decision period, there are several parameters we use to convince ourselves, and being able to simulate production environment behaviours on a developer machine is a must. In this tutorial, we will create a Kubernetes cluster with Minikube on our local computer and then deploy a sample Node.js application to this cluster in a way that scales according to load from outside.
Kubectl is a command-line tool for running commands against a Kubernetes cluster. You can install it via:
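The original install commands are not shown here; two common options are a package manager or a direct binary download (the binary URL below follows the release layout from this period, so check the official docs for your platform):

```shell
# On macOS, via Homebrew
brew install kubectl

# Or download the latest stable binary directly (Linux amd64 shown)
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```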
You can tell this is a Deployment object by looking at the kind keyword. With the replicas keyword, I am saying that there will be only one instance behind the service. In the containers section, we provide our Docker image and the port number for the container internals.
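The original manifest is not reproduced here; a deployment matching that description could look like this (names, image, and port are placeholders, using the pre-1.9 API group of this period):

```yaml
# k8s/deployment.yml: one replica of the sample Node.js app
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app
        image: example/node-app:1.0
        ports:
        - containerPort: 3000
```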
We are done with the deployment, so let’s expose our app to the real world. This time we will have a service file like the one below:
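The service file itself is missing from this copy of the article; a NodePort service along these lines matches the setup (names and ports are placeholders):

```yaml
# k8s/service.yml: expose the deployment on a node port via minikube
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  type: NodePort
  selector:
    app: node-app
  ports:
  - port: 80
    targetPort: 3000
```

The selector ties the service to the pods labeled app: node-app, and the NodePort type makes the app reachable at the minikube VM's IP.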
To sum up, we simply created a Kubernetes cluster by using minikube, which enabled us to use kubectl to connect to the cluster. The Node.js application helped us test our deployments. In this tutorial, I mainly focused on Kubernetes preparation and application deployment. In the next tutorial, I will show you how to scale your application manually and automatically, with supporting benchmark operations.