Releases, Deployments and Traffic Mirroring

During my journey to learn Istio and its stack, I've discovered some interesting concepts around deployments. The first one was the difference between a Deployment and a Release: I didn't know it, and if you don't either, no problem, I'll explain it in detail in this blog post. In the next post, I'll explain how Istio can help us achieve these strategies.

Deployment vs Release

The first thing to know, before diving deep into strategies, is the difference between these two concepts. I discovered it while reading Christian E. Posta's excellent book Istio in Action. The book is still in production, so some chapters are yet to be released.

Deployment

A deployment can be described as the act of installing new code into production or another runtime environment. The important point is that it must not affect users in any way, because no traffic is routed to the new artifacts yet. That means we can deploy multiple versions without problems.

Release

A release happens when we shift traffic to a deployment made previously; it can affect the system's users, so we should plan it carefully. There are ways to minimize the impact on users during our releases, and we'll discuss them in detail in this blog post. These techniques also help us avoid “Big Bang” releases, much like blue-green deployments do.

Request Level or Traffic Shifting

Now that we know the main difference between a Deployment and a Release, we can discuss another critical topic: request distribution strategies, or how to best split traffic during our deployments.

There are two ways to achieve it: Traffic Shifting or Request Level routing. Understanding both is super important, because it is how you choose the best option for your use case.

Request Level

The name is somewhat self-explanatory: with this technique we split traffic based on request attributes, such as headers, and control production traffic exactly as we want. This strategy gives us fine-grained control over production traffic during our deployments.

For example, we can route traffic based on the client-id from the OAuth protocol, where a specific client might be a partner who tests our application in the real world.

Traffic Shifting

Traffic shifting can be an excellent option when we do not expect to “identify” users by something in the request. In this strategy, we split traffic between different versions based on a percentage of the calls. It is a bit simpler than the Request Level strategy, but it can still be an exciting option to test our deployments.

Let’s talk about Release Strategies!!!

Dark Launch Releases

In this kind of release, we shift traffic to a new deployment for a minor portion of users, based on some rule, a percentage for instance. The important note here is that most of the users, a.k.a. the production traffic, should keep going to the “stable” version. The main idea is to test new features with a small set of premium users and then measure adoption, or whatever else matters for your company.

Let’s see an example using the Traffic Shifting strategy.

Dark Launch Example
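As a reference for what such a rule could look like, here is a minimal sketch of weight-based traffic shifting using an Istio VirtualService (networking.istio.io/v1alpha3). The service name my-service and the subsets v1 and v2 are hypothetical, and the subsets are assumed to be defined in a matching DestinationRule.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1   # stable version keeps most of the production traffic
      weight: 90
    - destination:
        host: my-service
        subset: v2   # new deployment is exposed to a small percentage of users
      weight: 10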

Canary Releases

The idea is very similar to Dark Launches, but there is a small difference: in a Canary Release we want to test a new version of our deployment and observe its performance and system behavior. It is not necessarily about something new, such as a feature or a significant change; sometimes we just want to test a new version that brings performance improvements, for example. In this example we'll use the Request Level strategy, so let's see it.

Canary Release
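Here is a minimal sketch of such a request-level rule with an Istio VirtualService, assuming the client id travels in a client-id header; my-service and the subsets v1/v2 are hypothetical names defined in a DestinationRule.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - match:
    - headers:
        client-id:
          exact: "10"
    route:
    - destination:
        host: my-service
        subset: v2   # canary version, only for client-id = 10
  - route:
    - destination:
        host: my-service
        subset: v1   # stable version for every other client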

In the example above, we route production traffic to the new version only for client-id = 10. All other client-ids go to the stable version of our application.

Traffic Mirroring

The idea of Traffic Mirroring is pretty simple: we serve the real production traffic as usual and route a copy of the production requests to a new deployment or experimental version. The copy of the request follows the fire-and-forget principle and won't impact the real users' requests. Mirroring traffic is an interesting technique to deliver code into production with more confidence.

The image below shows the Traffic Mirroring strategy.

Traffic Mirroring Flow
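And here is a minimal sketch of a mirroring rule with an Istio VirtualService, again with hypothetical names: live traffic keeps going to the stable subset while a copy of each request is mirrored to the new one.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1   # real users are still served by the stable version
    mirror:
      host: my-service
      subset: v2     # mirrored copy, fire and forget, responses are discarded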

These concepts are very important to know; they help us choose the correct strategy for our deployments.

In my opinion, this knowledge is the key to choosing a successful deployment strategy.

In the next post we will learn how to do it using Istio, an open-source service mesh implementation.

References

Istio in Action by Christian E. Posta

Blue Green Deployments by Martin Fowler

Install ISTIO on AZURE AKS

Hello,

During my learning path to understanding Service Mesh and Istio, I decided to try some different cloud vendors. I chose Azure and Google.

I started with Google Kubernetes Engine (GKE). It was my first experience with Google Cloud Platform components, and it was amazing. The command line is well documented and makes it easy to interact with the Kubernetes APIs.

Today I will explain how to install the Istio components on Azure Kubernetes Service (AKS), which offers managed Kubernetes on Azure infrastructure. In this post, I will use Helm to install Istio on Kubernetes.

Let’s start with some requirements:

  • HELM Client (installation instructions can be found here)
  • Azure CLI (installation instructions can be found here)
  • kubectl (installation instructions can be found here)

Creating the AKS Cluster and Preparing HELM

To create the AKS Cluster we can use the following statement:
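The original snippet is not reproduced here, but a sketch of the commands would look like the following; the resource group, cluster name, region and node count are placeholders, and the flag names reflect the Azure CLI at the time of writing.

# create a dedicated resource group for the cluster (placeholder names)
az group create --name istio-rg --location eastus

# create the AKS cluster with RBAC enabled
az aks create \
  --resource-group istio-rg \
  --name istio-aks \
  --node-count 3 \
  --enable-rbac \
  --generate-ssh-keys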

Some considerations about this command:

  • I strongly recommend creating your own resource group
  • The --enable-rbac flag is mandatory to deploy Istio.

Then we need to configure our kubectl against Azure; we can do it using the az command line, like this:
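Assuming the placeholder names from the cluster creation sketch above, it would be something like:

# merge the AKS cluster credentials into the local kubeconfig
az aks get-credentials --resource-group istio-rg --name istio-aks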

Now that our kubectl is fully configured, we can start installing Istio in our AKS cluster.

Let's start by downloading the Istio release. The archive can be found here. We are using version 0.8.0, which is the stable version at the time of writing. You need to choose the release that matches your operating system.
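If you prefer the command line, the Istio documentation of that era also provided a download helper script; a sketch of fetching and unpacking version 0.8.0 would be:

# download and unpack the 0.8.0 release, then enter its root folder
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=0.8.0 sh -
cd istio-0.8.0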

Go to the Istio root folder; there we need to create a service account for Helm, which can be done using the following command:
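The release ships a manifest for this; assuming the standard path from the Istio Helm install docs, the command looks like:

# create the tiller service account and its cluster role binding
kubectl create -f install/kubernetes/helm/helm-service-account.yaml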

Good, you should see the following output:

Awesome, our service account is ready.

Now let's deploy Tiller. Run the command below:
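This is the standard Helm bootstrap, pointing Tiller at the service account we just created:

# install Tiller into the cluster using the tiller service account
helm init --service-account tiller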

Then we can see the following output:

Awesome, our Helm client is ready to start deploying Istio.

Installing ISTIO

Go to the Istio root folder, and then we can install Istio in our Kubernetes cluster. It can be achieved with this command:
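A sketch of that command, using the Helm chart path bundled with the release, would be:

# install the Istio chart shipped with the release into the istio-system namespace
helm install install/kubernetes/helm/istio --name istio --namespace istio-system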

Afterwards, we can check the Istio components using the following command:
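The Istio components live in the istio-system namespace used above, so:

# list the Istio control-plane pods
kubectl get pods -n istio-system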

All the pods need to be in the Running state, like in the image below:

Well done, Istio is installed in our cluster and ready to receive some microservices.

Next week I will explain how to interact with our cluster, create some microservices, and manage the cluster monitoring tools like Grafana, Jaeger, and others.

References:

Install ISTIO: https://istio.io/docs/setup/kubernetes/helm-install/

Create a cluster in AKS: https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough