Microservices Authentication In Action with Service Mesh

Context

Microservices architecture is one of the most popular patterns in software architecture today, and there are many articles on the internet that try to explain the benefits and drawbacks of this architectural style.

I’ve been working with microservices for a couple of years and, for me, the most problematic part is applying security in this scenario. There are many patterns and tools that try to solve it, but most of them are tied to specific languages and frameworks.

That gives us a number of problems, because we need to apply security in the “software” layer, and it can cause issues like:

  • Incorrect implementation by developers
  • Different implementations, because frameworks like Spring Security or Apache Shiro impose their own patterns
  • Languages usually impose their own patterns as well
  • Adding certificates in the application layer adds extra complexity for managing those certificates
  • The framework must support certificate configuration
  • Security implemented with different patterns across the solution
  • Developers who don’t know the security standards in depth, which can introduce security leaks

Security is a non-functional requirement, and in most companies there is a specific area that defines standards and manages security across the whole ecosystem. That means defining network firewalls, network privileges, and so on.

We need to find a way to create a standard for security in our microservices solution. Istio, a Service Mesh implementation, can help us add security at the platform layer (a.k.a. Kubernetes) in an easy way.

Let’s understand that!!!

Istio Service Mesh Implementation

A full explanation of Istio is out of the scope of this blog post. If you need an introduction, you can find it in my previous article.

Authentication types in ISTIO

Istio offers two types of authentication. The first one targets end users, and the second one addresses service-to-service authentication with certificates. Let’s understand these two models a little more deeply.

End-User Authentication

This feature aims to authenticate the end user, meaning a person, device, or application that is trying to access our solution.

Istio enables request-level authentication through the JWT specification, one of the most widely used security specifications for cloud-native applications. Istio also integrates easily with the OpenID Connect specification, another relevant security standard.

With a couple of configurations, like the JWT issuer, the JWKS URI, and some paths to include and exclude, we are able to protect our microservices with an OAuth authentication flow.

It is configured through YAML files, which are very simple and intuitive.
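As a sketch, a request-level authentication policy in Istio’s older v1alpha1 API might look like the following (the target service name, issuer, and JWKS URI are hypothetical placeholders):

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: bet-jwt-policy
  namespace: default
spec:
  targets:
    - name: bet                  # hypothetical service name
  origins:
    - jwt:
        issuer: "https://auth.example.com"                        # hypothetical issuer
        jwksUri: "https://auth.example.com/.well-known/jwks.json" # hypothetical JWKS URI
  principalBinding: USE_ORIGIN
```

With this policy applied, requests to the target service without a valid JWT from the configured issuer are rejected at the sidecar, before they ever reach the application code.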

Service-to-Service Authentication

This kind of authentication is sometimes called transport authentication; it verifies the client connection to secure the communication. Istio offers mutual TLS (mTLS) as a full-stack solution for transport authentication.

Citadel is the component that provides the digital certificates, based on the SPIFFE standard, for each sidecar (a.k.a. Envoy proxy) present in the Data Plane.

In the next sections, I’ll explain how it works at configuration time and how authentication is executed at runtime. Let’s do it!!!!

Configuration Flow

Let’s understand how the Citadel component distributes digital certificates to each pod present in the Data Plane.

The following steps describe these interactions:

  • Citadel watches the Kubernetes API server, creates a SPIFFE certificate and key pair for each of the existing and new service accounts. Citadel stores the certificate and key pairs as Kubernetes secrets.
  • When you create a pod, Kubernetes mounts the certificate and key pair to the pod according to its service account via Kubernetes secret volume.
  • Citadel watches the lifetime of each certificate and automatically rotates the certificates by rewriting the Kubernetes secrets.
  • Pilot generates secure naming information, which defines what service account or accounts can run a certain service. Pilot then passes the secure naming information to the sidecar Envoy.

As we can see in the steps above, the digital certificates are fully managed by the Citadel component in the Istio infrastructure.

That is very important because it makes the digital certificates easy to manage and gives us a centralized point to act in case something goes wrong with the certificates.

Authentication Flow in Data Plane

The service-to-service authentication flow follows the steps below:

  • Istio re-routes the outbound traffic from a client to the client’s local sidecar Envoy.
  • The client-side Envoy starts a mutual TLS handshake with the server-side Envoy. During the handshake, the client-side Envoy also does a secure naming check to verify that the service account presented in the server certificate is authorized to run the target service.
  • The client-side Envoy and the server-side Envoy establish a mutual TLS connection, and Istio forwards the traffic from the client-side Envoy to the server-side Envoy.
  • After authorization, the server-side Envoy forwards the traffic to the server service through local TCP connections.

As we can see, the flow is strongly based on Envoy (sidecar) features.
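To require mTLS for the whole mesh in the same v1alpha1 API generation, the configuration is roughly the following pair of resources (a sketch based on Istio’s documented mesh-wide defaults):

```yaml
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
    - mtls: {}          # require mutual TLS for all services in the mesh
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system
spec:
  host: "*.local"       # apply to all services in the cluster
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL  # clients present the Citadel-issued certificates
```

The MeshPolicy tells the server side to require mTLS, and the DestinationRule tells the client-side Envoys to initiate mTLS using the certificates distributed by Citadel.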

Conclusions

In this post, I tried to explain a little bit about microservices authentication with Istio.

As we can see, it centralizes our security configuration in a couple of files, but the authentication itself happens in a distributed way, in the Envoys of the data plane.

In the next post, we will create the YAML files and apply them in a real cluster to see these behaviors.

See you there!!!!

Authentication & Authorization for microservices in Service Mesh World

Motivation

I’ve been studying Service Mesh infrastructure in the last year. There are many exciting features like how to achieve observability with minor changes in the code.

Or how to use the right deployment strategy to deliver the real value of microservices architecture to our customers, the business.
People also talk about how to handle network issues with the Mesh, which is very easy and very helpful as well.

I’ve been working with enterprise systems, and in most cases these companies have problems with security in their microservices architecture.

There are two main complaints about this topic.

The first one is about the different ways to implement security in microservices. Different frameworks and languages have mixed thoughts about security.

And the second one is about how we can provide security at the platform layer. That means removing security concerns from developers, because in general there is a department that defines the security patterns for the company, and this department has specific requirements to meet business needs as well.

That is the main reason I decided to study security in the Service Mesh context. I’ve found different ways to solve the problems described above, and I’ll try to explain how to achieve that using Istio.

Blog Post Series

This blog post series will cover the full authentication and authorization features present in ISTIO.

The series will have four posts, and the main idea is to cover best practices regarding security for microservices architecture using a service mesh; we will use ISTIO for that.

The first post will cover the authentication concepts present in ISTIO. We will explain how it works in detail, to understand the right use cases for it.

The second post will cover authentication in ISTIO in a practical way; in that post, we will have a lot of YAML and examples.

The third post will cover the authorization concepts implemented in ISTIO, which are very important to understand.

In the last post, we will implement authorization with YAML files; this is the practical part of the authorization topic.

Now our context is very clear. Let’s start right now!!!!!

Use-case for this Blog Series

The idea of this blog series is to show how to use Istio to enable authentication and authorization in the microservices world, but to achieve that we need a use case that shows how it works in real life.

We will configure a simple bet solution that enables users to create bets in our system. The solution is composed of four microservices: bet, matches, championship, and players.

The bet microservice will call the other ones, matches, championships, and players, to validate the data, such as the championship date and so on.

This is not a real use case; it is something we use to clarify the ideas about security in the microservices world and the benefits of a Service Mesh infrastructure.

Let’s see our simple solution diagram:

As we can see, there are different profiles in our solution: the manager, who will manage the championship and match data, and the users, who will manage their own profiles.

In the next posts, we will cover all the details about how Istio can help us deliver security, authentication, and authorization in a centralized way.

See you there!!!

Let’s GO!!! Dependency Injection in GOlang

Starting in GOlang (a few points about Java)

After almost ten years of programming in Java, I decided to start to learn a different language.

But why??? Is Java not good enough??

A big NO here. For me, Java is a brilliant and awesome language, and the community is vibrant as well. I love coding in Java, and I love how people working in Java practice Object-Oriented Programming.

But in my opinion, nowadays Java addresses the Enterprise World. I mean it is an excellent language for coding business requirements, like CRMs and other business-related systems. There are several exciting frameworks for persistence and the web which increase developer productivity and help deliver code to production.

Nowadays, my challenge is creating cloud-native systems for infrastructure. I mean systems whose main purpose is helping developers to create amazing microservices and deliver them in a secure and managed way. That is the reason why I chose the Go language.

Dependency Injection Pattern

I will not deep dive into the dependency injection pattern, because there are a lot of incredible blog posts, articles, and discussions about it.

Look at Martin Fowler’s blog to find an amazing article about it.

For now, it is enough to say that dependency injection is important to create decoupled and well-designed code.

We will use Wire to help us implement the dependency injection pattern in Go.

Requirements

I’ve created the Go project using Go Modules, which is an interesting way to manage dependencies.

Installing Wire

Wire generates the necessary code at compilation time, so we need to install the wire command-line tool to be able to do it. Easy peasy lemon squeezy:
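Assuming Go and git are already installed, the installation is a single go get:

```shell
go get github.com/google/wire/cmd/wire
```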

Then ensure that $GOPATH/bin is added to your $PATH.

You can find the full instructions here.

Let’s code a little bit

Go Dependencies

We will create a simple random word generator using Babler, “a small utility to generate random words in #golang”.

I’m using IntelliJ IDEA. The IDE has an autocomplete feature to add dependencies automatically to the go.mod file, but if you are using VS Code, which is good as well, you can add the go.mod described below.
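A go.mod along these lines would work; the module path is a hypothetical placeholder, and the versions are illustrative (run go mod tidy to resolve the real ones):

```
module github.com/example/wire-demo

go 1.13

require (
	github.com/google/wire v0.4.0
	github.com/tjarratt/babble v0.0.0 // "Babler"; let go mod tidy pin the version
)
```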

There are a couple of dependencies; the two most important are Wire and Babler.

Business Code

Now we will create a simple file with our “business code”; it will be very simple. Let’s look at the code.
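The original snippet is not reproduced here, so below is a minimal self-contained sketch of the idea; I’ve replaced the Babler dependency with a tiny in-memory word list so the example compiles on its own, and the names Dictionary, WordGenerator, NewDictionary, and NewWordGenerator follow the text:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Dictionary stands in for the Babler word source in this sketch.
type Dictionary struct {
	words []string
}

// NewDictionary is a producer: it builds a ready-to-use *Dictionary.
func NewDictionary() *Dictionary {
	return &Dictionary{words: []string{"bet", "match", "championship", "player"}}
}

// RandomWord returns a random word from the dictionary.
func (d *Dictionary) RandomWord() string {
	return d.words[rand.Intn(len(d.words))]
}

// WordGenerator holds our "business logic" and declares its dependency.
type WordGenerator struct {
	dictionary *Dictionary
}

// NewWordGenerator is a producer with an injection point: Wire will
// supply the *Dictionary argument for us.
func NewWordGenerator(d *Dictionary) WordGenerator {
	return WordGenerator{dictionary: d}
}

// GenerateWord is the business operation exposed to callers.
func (g WordGenerator) GenerateWord() string {
	return g.dictionary.RandomWord()
}

func main() {
	// Manual wiring, just to show the pieces; Wire automates this later.
	gen := NewWordGenerator(NewDictionary())
	fmt.Println(gen.GenerateWord())
}
```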

There are a few important things here; let’s discuss them.

Producers

Let’s look at the New* functions. They are our producers; the main goal of these functions is to produce things to be injected. This is very important because we will need to instruct Wire how to create, or produce, them, as we will see soon. The code here is very simple: we create an instance for each struct.

Injection Points or Clients

Another important part of the code is the WordGenerator struct declaration. As we can see, it needs a Dictionary pointer; let’s understand how it is able to receive it.

In the NewWordGenerator function, we receive the pointer to Dictionary; that is our injection point. Wire will “inject” the dictionary reference here.

We don’t care about how the Dictionary was created and injected; the only thing we need to think is, in simple words, “I want the instance here, and this instance should be ready for me.” The Wire framework takes care of the injection for us; that is the important principle of dependency injection.

Wire Configuration

Now that we know the main characteristics of our application, let’s instruct Wire how to create our objects.

In the root folder, create a file called wire.go; this name is mandatory.

Let’s look at the file content.
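A sketch of such a wire.go, assuming the producers NewDictionary and NewWordGenerator described above (the build tag in the first line is what tells go build to skip the file):

```go
//+build wireinject

package main

import "github.com/google/wire"

// SetupApplication wires the object graph together. The body is never
// executed; it only tells Wire which providers to use.
func SetupApplication() WordGenerator {
	wire.Build(NewDictionary, NewWordGenerator)
	return WordGenerator{}
}
```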

In the first line, we instruct go build to ignore this file. The file is only needed to generate the compiled file with all dependencies configured; it is fantastic.

Then there are some imports and, finally, the SetupApplication function, which is the core of our application.

Let’s look at the return type. It produces a WordGenerator, the struct that contains our business logic; our main.go will invoke this struct to generate random words. You can also create your own application configuration struct; it is up to you.

In the function body, we’ve used wire.Build to declare our container, I mean, our dependencies, so that other structs can use them.

Look at the builder functions: as we saw before, these functions produce pre-configured structs (see the Producers section).

The return value of this function doesn’t matter; the important part here is that we explain to Wire how to create our application container.

Wire Code Generation

Now we are ready to generate our code. As we saw before, Wire generates the file used to build our application at compilation time, so let’s do it.

In the root folder, at the same level as wire.go, type:
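Assuming $GOPATH/bin is on your $PATH, this is just:

```shell
wire
```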

The tool will create a file called wire_gen.go; let’s analyze the content.
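For the providers sketched earlier, the generated file looks roughly like this:

```go
// Code generated by Wire. DO NOT EDIT.

//go:generate wire
//+build !wireinject

package main

// SetupApplication builds the whole object graph by calling our
// producers in dependency order.
func SetupApplication() WordGenerator {
	dictionary := NewDictionary()
	wordGenerator := NewWordGenerator(dictionary)
	return wordGenerator
}
```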

In the first line we have a warning, “Do not edit”; that is a very important thing to notice.

Then we have the application configuration. Look at the declaration: all of our dependencies are built by the tool, which is an amazing thing. All of the structs are configured and ready to use.

Wire did the “dirty job” for us. I’m so proud... hahaha

Main.go

Now our dependencies are ready to use, so let’s use them. Create a file called main.go:
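A minimal main.go for this sketch; it calls the SetupApplication function that Wire generated:

```go
package main

import "fmt"

func main() {
	// SetupApplication comes from the Wire-generated wire_gen.go.
	generator := SetupApplication()
	fmt.Println(generator.GenerateWord())
}
```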

Look at the SetupApplication() invocation: it produces our WordGenerator, and then we can call the GenerateWord function. Easy, easy.

Conclusion

I like what Java programmers are creating using some important patterns like dependency injection, and for me, Wire is a vital library if you are thinking of working professionally with Go.

It will increase your productivity and also help you to create decoupled applications.

The GitHub repo is available here.

More complicated stuff

If you want something more real, I’ve coded a simple application which receives HTTP requests and persists to a PostgreSQL database using Wire.

The code can be found here.

References

Wire tutorial

Wire Userguide

What is Service Mesh and Why you should consider it

Brief about Software Architecture History

Before starting the explanation about Service Mesh infrastructure, let’s understand a little bit about what we, as software architects and developers, have created before.

I think it is essential to know why Service Mesh might be useful for you.

We’ll talk a little bit about software architecture history. I promise it will be quick, but it will be worth it.

Let’s analyze the image below

There are different architecture models in the timeline, such as the MVC pattern, which is still present in our architectures today.

SOA and EDA were popularized in 2000 and 2003, respectively; things started to be distributed. We started at that time to break things into small pieces of software, for several reasons.

And finally Microservices and Serverless, in 2013 and 2015 respectively, brought different approaches to software development.

Conway’s law and the Inverse Conway Maneuver sprang onto the scene to explain how to work with software architecture in terms of business and teams.

There is a typical behavior if we look at this timeline: we are trying to divide our software into smaller and smaller parts, as small as we can.

But why is this important, and how is it related to Service Mesh infrastructure???

Microservices are distributed systems, and distributed systems mean handling network problems.

That is exactly where Service Mesh infrastructure can help us: it abstracts network issues away from developers.

Service Mesh

In a nutshell.

A Service Mesh can be defined as a dedicated infrastructure to handle a high volume of inter-process communication (IPC) traffic, usually called East-West traffic in a microservices architecture.

In a few words, a service mesh can be considered a “layer” that abstracts the network for service communications.

These abstractions solve most of the network handling, like load balancing, circuit breakers, retries, timeouts, and smart routing, which enables advanced deployment techniques like canary releases, dark launches, and others.

Then we can take these responsibilities out of our application code. We can also remove these roles from developers, which is very important because developers should code for the business, not for infrastructure requirements.

Another important characteristic of a Service Mesh is telemetry; some implementations integrate easily with Jaeger and Prometheus.

Famous libraries in the Java ecosystem related to network handling, like Netflix Ribbon, Hystrix, and Eureka, can be replaced by Service Mesh implementations like ISTIO.

Service Mesh & Microservices Architecture

In general, in the Microservices Architecture, service-to-service communication is quite complex.

It usually involves different communication patterns, like REST and gRPC over HTTP, or AMQP for asynchronous and durable communication.

As we can see, microservices are distributed systems; that is the reason why Service Mesh infrastructure fits them very well.

Practical example

Let’s look at a simple and pretty standard microservices architecture.

Standard Microservices Architecture

There are some important things to look at here.

North & South Traffic

North-South traffic usually happens between different networks; look at Network A and Network B in the image. This kind of traffic comes from outside our infrastructure, from our clients, and it is not trusted because the external network is out of our control.

We need heavy security here; that is what our Gateway is for, protecting our applications.

Usually we have an API platform to manage our external APIs; API Management techniques and processes can help us with this task.

East & West Traffic

On the other hand, East-West traffic generally happens on the same network; as we saw before, it is normally called service-to-service communication, or IPC.

That is the place where Service Mesh lives.

gRPC is a very interesting framework if you are looking for high-throughput applications or service-to-service communications.

Conclusions

A Service Mesh is an interesting option if you are playing with microservices architecture, but I strongly recommend you understand it a little more deeply before adding a Service Mesh to your architecture stack.

There is no silver bullet when you think about software architecture, but we, as software architects, developers, and others, need to understand and propose the right solution considering the company context.

Kubernetes Patterns – Sidecar

Motivation

Last week, I blogged about the Ambassador Pattern.

That pattern is very important when we are trying to solve network issues in a microservices architecture; in a few words, the Ambassador is a kind of proxy that helps with service-to-service communication.

Today we’ll talk about the Sidecar Pattern. It’s an interesting pattern when we are looking for help with network issues, but as we will see during this post, there are more features that this pattern enables for us.


Context

In the containers world, we need to follow the container golden rule: a container should have one single purpose. That is the most important thing to follow.

When we are developing applications using microservices as an architectural guide, we shouldn’t worry about infrastructure concerns, like log collection, network handling, and other orthogonal concerns. These concerns are more related to the platform where we are running our service than to our application code.

We should use our platform to help us with these activities. Kubernetes is the “de facto” platform to run container workloads. We can use Kubernetes to deploy a dedicated infrastructure to handle internal network traffic, ISTIO for example. In this case, ISTIO is our “platform” to help with network handling.

I’ve blogged about my first impressions of ISTIO and Service Mesh.

Kubernetes has a primitive called the Pod, the smallest unit of computational resources in the Kubernetes ecosystem. A Pod is able to have multiple containers, and in that scenario the Sidecar Pattern is a perfect solution to help the main container.

Let’s look at the Pod anatomy (the yellow one).

Solution

The sidecar container should add some additional functionality to the microservice container. The important part to pay attention to here is that the sidecar runs in a different process and is not able to change anything in the microservice container.

In the same Pod, containers are able to share volumes and the same network; it means the containers can reach each other via “localhost”, for example.

Let’s analyze an example.

In a real microservices architecture, we might have different services and many instances of those services, but how are we able to look at the logs effectively?

We need a centralized tool that collects these logs, and also an effective way to query this data to find something that helps us troubleshoot and debug distributed systems.

Is it the role of the main container to send these logs to a service in the cloud? Perhaps a sidecar container is better placed to collect these logs, since the containers share volumes, and send this data to the cloud.

The sidecar “enriches” the main container’s functionality by sending data to the cloud systems. That is the main role of a sidecar container.

Look at the image below:

As we can see, the logger container sends the data to cloud storage; the logger reads data from the Pod volumes, because the containers share the disk.
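A minimal sketch of that Pod; the image names and mount path are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bet-service
spec:
  volumes:
    - name: logs               # shared by both containers
      emptyDir: {}
  containers:
    - name: microservice       # main container: business logic only
      image: example/bet-service:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: logger             # sidecar: ships the shared logs to the cloud
      image: example/log-shipper:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
```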

The microservice container doesn’t care about the logs; the main container should only serve our business.

That is one example where a sidecar container can help us by adding extra functionality to our main container.

In a Service Mesh infrastructure, the sidecar container adds extra functionality to help us handle network issues; that is another example.

Conclusion

The Sidecar Pattern is very useful when we are working on distributed systems, especially in the containers world.

It will increase our productivity, because we don’t need to pay attention to infrastructure stuff, and it makes our code more concise than ever, without infrastructure handling.

Then it is time to say goodbye to Netflix Ribbon, Netflix Eureka, and Netflix Hystrix, and move their responsibilities to the sidecar container.

References

Kubernetes Pattern Book

Microsoft Azure Docs

Kubernetes Patterns – Ambassador


Motivation

Recently, I’ve been studying Kubernetes in depth, mainly the part about how to use platform features to help me work with distributed architectures.

During this journey, to my surprise, I’ve found many books about Kubernetes patterns, and my god, these books opened my mind about how to use Kubernetes effectively.

My favorite one is Kubernetes Patterns; the book is awesome, and it’s a kind of guide for me right now. The book describes many patterns and categorizes them into principles like Predictable Demands, Declarative Deployments, Health Probe, Managed Lifecycle, and Automated Placement.

Today, I’ll talk about an important pattern related to network management techniques.

Let’s talk about Ambassador Pattern.

Ambassador

Context

When we are working with distributed systems, the network is the biggest challenge to solve; remember the Fallacies of Distributed Computing.

We need an effective strategy to deal with outages, service discovery, circuit breakers, intelligent and dynamic routing rules, and timeouts.

In general, these things require a lot of configuration files involving connection, authentication, and authorization. These configurations should be dynamic as well, because in distributed systems, instance addresses change a lot within a certain timebox.

Of course, sometimes we are not able to handle these issues because our “application” is not able to handle them; the framework in which the application is coded doesn’t support these features.

Also, we need to remember the container golden rule: a container should exist for one single and small reason.

So handling these challenges inside our application code may not be a good idea, especially because sometimes we need to integrate with legacy applications.

The Ambassador helps us exactly at this point; let’s see how it happens.

Solution

The Ambassador acts as a “proxy” and hides all the complexity of accessing external services.

We will put the ambassador container between our main application and the external service connections. Just to remind you, the ambassador container should be deployed in the same Kubernetes Pod where our main application container resides.

Using this simple approach, we are able to handle network failures, security, and resiliency in the ambassador container; a simple and effective way to handle these hard problems.

Look at the image below

The ambassador container should handle the configuration related to service discovery, timeouts, circuit breakers, smart routing, and security.
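A sketch of such a Pod, with hypothetical images and port: the main container talks only to localhost:9000, and the ambassador forwards the call to the external service, applying retries and timeouts on the way.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bet-service
spec:
  containers:
    - name: main-app           # talks only to localhost:9000
      image: example/bet-service:1.0
    - name: ambassador         # proxies localhost:9000 to the external service
      image: example/proxy:1.0
      ports:
        - containerPort: 9000
```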

Conclusion

The Ambassador Pattern is very useful when we are working with distributed systems. It reduces our main application’s complexity by taking network management out of our application code.

Remember: it will add some latency overhead. If network latency is a critical point for you, you may need to think carefully about adopting the ambassador.


References

Kubernetes Patterns book

https://docs.microsoft.com/pt-br/azure/architecture/patterns/ambassador

Releases, Deployments and Traffic Mirroring

During my journey to learn ISTIO and its stack, I’ve discovered some interesting concepts about deployments. The first one: I didn’t know the difference between Deployment and Release; if you don’t know it either, no problem, I’ll explain it in detail during this blog post. In the next post, I’ll explain how ISTIO can help us achieve it.

Deployment vs Release

The first thing to know, before deep diving into strategies, is the difference between these concepts. I discovered it reading Christian E. Posta’s excellent book Istio in Action. The book is still in production, so there are some chapters yet to be released.

Deployment

Deployment can be described as the activity of installing new code into production, or another environment, at runtime. The important thing here is that it can’t affect users in any way; we don’t shift traffic to these artifacts. So we can deploy multiple versions without problems.

Release

A release happens when we shift traffic to a deployment done previously. It can affect system users, so we should plan it carefully. There are some ways to minimize the users’ impact during our releases; we’ll discuss them in detail in this blog post. Separating releases from deployments also avoids “Big Bang” deployments, enabling techniques like blue-green deployments.

Request Level or Traffic Shifting

Now that we know the main difference between Deployment and Release, we can discuss another critical topic: request distribution strategies, or the best way to split traffic during deployments.

There are two ways to achieve it: Traffic Shifting or Request Level routing. It is super important to understand them, because based on that you should choose the best option for your use case.

Request Level

This one is kind of self-explanatory: with this technique, we can split traffic based on request header attributes and then control production traffic as we want. This strategy gives us more fine-grained control over production traffic during our deployments.

For example, we can route traffic based on the client-id, from the OAuth protocol, where this specific client can be a partner testing our application in the real world.

Traffic Shifting

Traffic shifting can be an excellent option when we don’t expect to “identify” users by something in the request. In this strategy, you split traffic between versions based on a percentage of calls. This strategy is a little simpler than Request Level routing but can be an exciting option to test our deployments.

Let’s talk about Release Strategies!!!

Dark Launch Releases

In this kind of release, we shift traffic to a new deployment for a minor part of the users, based on some rule, a percentage for instance. The important note here is that most of the users, a.k.a. production traffic, should still go to the “stable” version. The main idea is to test new features with a set of premium users and then measure adoption, or something else important for your company.

Let’s see an example using the Traffic Shifting strategy.

Dark Launch Example
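The dark launch above can be sketched as an Istio VirtualService; the service host and version subsets are hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bet
spec:
  hosts:
    - bet
  http:
    - route:
        - destination:
            host: bet
            subset: v1        # stable version keeps most of the traffic
          weight: 90
        - destination:
            host: bet
            subset: v2        # dark-launched version gets a small slice
          weight: 10
```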

Canary Releases

The idea is very similar to dark launches, but there is a small difference: in a canary release we want to test a new version of our deployment and observe its performance and system behavior. It is not necessarily related to something new, a feature, or a significant change; sometimes we want to test new versions with performance improvements, for example. In this example we’ll use the Request Level strategy; let’s see it.

Canary Release

In the example above, we shift the production traffic to the new version only for client-id = 10. Other client-ids go to the stable version of our application.
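A sketch of that rule as a VirtualService; the header name, host, and subsets are hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bet
spec:
  hosts:
    - bet
  http:
    - match:
        - headers:
            client-id:
              exact: "10"     # canary users only
      route:
        - destination:
            host: bet
            subset: v2        # canary version
    - route:
        - destination:
            host: bet
            subset: v1        # everyone else stays on stable
```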

Traffic Mirroring

The idea of traffic mirroring is pretty simple: we route a copy of the real production requests to a new deployment or experimental version. The copy of the request is based on the fire-and-forget principle and won’t impact the real users’ requests. Mirroring traffic is an interesting technique to deliver code into production with more confidence.

The image below shows the Traffic Mirroring strategy.

Traffic Mirroring Flow
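In Istio, mirroring is a field on the VirtualService route; a sketch with hypothetical host and subsets:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bet
spec:
  hosts:
    - bet
  http:
    - route:
        - destination:
            host: bet
            subset: v1        # all live traffic stays on the stable version
      mirror:
        host: bet
        subset: v2            # fire-and-forget copy goes to the experiment
```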

These concepts are very important to know; they help us choose the correct strategy for our deployments.

In my opinion, this knowledge is the key to choosing a successful deployment strategy.

In the next post, we will learn how to do it using ISTIO, an open-source service mesh implementation.

References

ISTIO in Action by Christian E. Posta

Blue Green Deployments by Martin Fowler

Install ISTIO on AZURE AKS

Hello,

During my learning path to understand Service Mesh and ISTIO, I decided to use a few different cloud vendors. I chose Azure and Google.

I started with Google Kubernetes Engine (GKE). It was my first experience with Google Cloud Platform components, and it was amazing. The command line is well documented, and it is easy to interact with the Kubernetes APIs.

Today I will explain how to install the Istio components in the Azure cloud, on the Azure Kubernetes Service (AKS), which offers managed Kubernetes on Azure infrastructure. In this post, I will use HELM to install Istio on Kubernetes.

Let’s start with some requirements:

  • HELM Client ( installation instructions can be found here )
  • Azure CLI  (installation instructions can be found here )
  • kubectl ( installation instructions can be found here )

Creating the AKS Cluster and Preparing HELM

To create the AKS cluster, we can use the following statement:
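Something along these lines; the resource group, cluster name, and node count are placeholders, and note that on newer Azure CLI versions RBAC is enabled by default, so the flag may no longer be needed:

```shell
az aks create \
  --resource-group my-istio-rg \
  --name my-istio-cluster \
  --node-count 3 \
  --enable-rbac
```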

Some considerations about this command:

  • I strongly recommend creating your own resource group
  • The --enable-rbac flag is mandatory to deploy Istio.

Then we need to configure our kubectl to talk to the cluster on Azure. We can do it using the az command line, like this:
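A sketch of that command; the resource group and cluster names are placeholders of mine:

```shell
# Merge the AKS cluster credentials into ~/.kube/config
az aks get-credentials \
  --resource-group my-istio-rg \
  --name my-istio-cluster
```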

Now our kubectl is fully configured, and we can start to install Istio in our AKS cluster.

Let's start by downloading the Istio release. The zip can be found here. We are using version 0.8.0, which is the stable version. You need to choose the archive matching your target OS.

Go to the Istio root folder; then we need to create a service account for Helm. It can be done using the following command:
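In the Istio 0.8.0 release archive, the Helm install guide ships a manifest for this service account, so the command is likely:

```shell
# Create the "tiller" service account and cluster role binding for Helm
kubectl create -f install/kubernetes/helm/helm-service-account.yaml
```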

Good, you should see the following output:

Awesome, our service account is ready.

Let's deploy Tiller, the Helm server side. Run the command below:
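Following the Helm workflow, Tiller is initialized with the service account created in the previous step:

```shell
# Install Tiller into the cluster using the tiller service account
helm init --service-account tiller
```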

Then we can see the following output:

Awesome, our Helm client is ready to deploy Istio.

Installing Istio

Go to the Istio root folder, and then we can install Istio in our Kubernetes cluster. It can be achieved with this command:
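Based on the Istio 0.8 Helm chart layout, the install command probably looked like this (the release name is my choice):

```shell
# Install the Istio chart shipped inside the release archive
helm install install/kubernetes/helm/istio \
  --name istio \
  --namespace istio-system
```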

Afterwards, we can check the Istio components using the following command:
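For example:

```shell
# List the Istio control-plane pods
kubectl get pods -n istio-system
```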

All the pods need to be in the Running state, as in the image below:

Well done, Istio is installed in our cluster and ready to receive some microservices.

Next week I will explain how to interact with our cluster, creating some microservices and managing the cluster monitoring tools like Grafana, Jaeger, and others.

References:

Install ISTIO: https://istio.io/docs/setup/kubernetes/helm-install/

Create a cluster in AKS: https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough


My first impressions of Istio and Service Mesh

Today I will talk about Istio and Service Mesh. I'm on a learning path with these concepts, aiming to apply them in my job and to help other people interested in software architecture and development.

I'm so excited to learn it because I think it may change the way we develop microservices, specifically how we can use the infrastructure to get more insights about our running applications: metrics, health, and routing.

This post intends to be a simple explanation of some Istio features, with some comparisons to the tools provided by the amazing Netflix OSS components.

Intro

I'm a Java developer, and for most of my working time I've been creating solutions with Java frameworks and libraries. The Spring Framework is an amazing framework to create web applications, and it has a brilliant and vibrant ecosystem.

Also, the Spring Cloud projects provide interesting solutions to help us developers build distributed applications.

In this post, we will use Kubernetes as the container orchestrator.

I have some experience coding with the Spring Cloud Netflix projects, which integrate the Spring ecosystem with Netflix OSS tools like Netflix Ribbon, Netflix Hystrix, and Netflix Eureka. These projects are really amazing and make our infrastructure tasks as developers so easy: with a couple of annotations and some configuration, a project gets interesting features like Service Discovery, Client-Side Load Balancing, and Circuit Breaking.

This kind of application, which uses the Netflix OSS tools, usually handles the network layer inside the application layer. For instance, the circuit breaker feature provided by Hystrix needs to be configured in the application layer: the retried calls are handled by our component (application) because we have Hystrix inside it. Ribbon works the same way; we need to have it on our classpath.

It generally works well, but imagine the following situation. Remember that Netflix OSS only works for Java stacks; it uses the Java ecosystem to achieve these features. Now we need to change the application because it has to handle more load with minimal resource usage, and Golang fits well in this case.

The advent of microservices helps developers create small and independent applications.

We can’t lose these important features in the microservices architectural style.

Because of this situation, the Service Mesh pattern has been gaining traction in the development world. Using this pattern, we can isolate the network stack in another container (in the same pod), called a sidecar container, which is responsible for handling all network calls independently of the language the application was built in.

In this context, Istio, as a service mesh implementation, gives us interesting features like Intelligent Routing, Circuit Breaking, and Fault Injection outside of our application, with no extra code needed to achieve them. It is amazing.

Installation

I tried different approaches to installing Istio in my cluster; currently, I'm using Google Kubernetes Engine (GKE) to learn and try the Istio features.

I decided to use the Helm installation, which proved to be an easier and faster way to install and delete Istio in my Kubernetes cluster. I followed the Istio installation with Helm; you can find the instructions in the Istio installation guide.

The Helm installation disables mutual TLS authentication by default; of course, you can enable it using the Helm command-line flags.
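With the 0.8 chart, the relevant value should be global.mtls.enabled; a hedged sketch (the release name is my choice):

```shell
# Enable mutual TLS at install time via a Helm value
helm install install/kubernetes/helm/istio \
  --name istio \
  --namespace istio-system \
  --set global.mtls.enabled=true
```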

If you are using the Istio 0.8.0 version, the installation automatically spins up a pod which provides automatic sidecar injection into our pods. It can make our lives easier because we can mark a namespace to enable the sidecar automatically. I used this feature and it works very well.
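Marking a namespace for automatic injection is a single label (shown here for the default namespace):

```shell
# Tell the Istio sidecar injector to act on this namespace
kubectl label namespace default istio-injection=enabled
```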

I strongly recommend installing the "infra" services, which Istio refers to as "add-ons", like Grafana, Prometheus, Service Graph, and Jaeger. These add-ons help us see the cluster metrics, which Istio collects and stores in these services.

That is all we need to start playing with Istio!!

Experience

I created a couple of applications with no complex business rules. The idea here is to try the service-to-service communication and get some metrics about it. Then I deployed these applications in the Kubernetes cluster; the important thing here is that I created standard Deployments in Kubernetes.

These deployments have Services which export the ports; nothing special here. The crucial thing is that the Service needs to use the metadata section with a label called "app"; if you forget this, Istio will not work as expected.

Look at the following example:
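A minimal sketch of such a Service; the name, port, and label values are placeholders of mine, the important part is the app label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service   # the label Istio expects
spec:
  selector:
    app: my-service
  ports:
    - name: http      # Istio also expects named service ports
      port: 8080
```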

Pay attention to the metadata section.

For Istio to work as expected, we need to follow some requirements; these requirements can be found here.

After the deployments worked, I decided to try some requests to my services. Like magic, without any configuration, my Grafana instance started gathering service metrics. I was impressed at this moment by how fast it was; with a simple yaml configuration I had almost complete and interesting metrics.

Look at the Grafana instance: at this moment I hadn't created any graphs or configuration, and the dashboards show my whole application ecosystem.


There are also other interesting services, like Jaeger, which allows us to collect traces of the communication between services and measure the time of service calls.

Prometheus stores our metrics in its time-series database; it is a kind of backend for Grafana. Prometheus basically collects the service metrics and stores them.

Service Graph shows our service dependencies in real time. It is an awesome app.

Conclusion

I started playing with Istio (Service Mesh) a few weeks ago, and I think these tools will help developers and DevOps people in different ways: to get insight into the infrastructure and the applications' metrics usage. Another important characteristic for me is how Istio collects the application metrics: it is not intrusive, and it makes the development lifecycle easier.

Also, these tools can help companies innovate faster than ever because they offer awesome implementations like Canary Deployments; companies can try out their deployments without downtime and with reasonable safety.

I'm excited to study more about it; in the next weeks I will discover more features and share them on this blog.

If you can help me on my journey to learn service mesh, please share interesting articles about the topic.

Thank you


Spring Boot 2 Meets Kotlin

Hello Guys.

Today we will talk about a new feature added in Spring 5.0 and Spring Boot 2: the Kotlin support for Spring Boot applications.

Kotlin is a language created by the JetBrains team. It is a JVM language, which means it compiles to bytecode that runs on the Java Virtual Machine.

As we can see, the primary inspiration is the Scala language. There are many similar constructs in both languages, the data class concept for instance.

There are some interesting advantages when we adopt Kotlin. The most exciting is reducing boilerplate and making our code more concise, which brings more maintainability and makes our code more readable.

We will understand these topics in the next examples. It's time to code!!!

Create the Tasks Project with Spring Initializr

We will create a simple project to manage Tasks. The main idea here is to explain some Kotlin interesting points on this project.

Let's go to the Spring Initializr page. The project should be created with the configurations below:

The interesting points here are:

  • Maven Project
  • Kotlin Language
  • Spring Boot 2
  • Dependencies: Reactive Web, Actuator and Reactive MongoDB

Creating our TaskEntity

We need to create our main entity. The entity is pretty simple and easy to implement; the Task class should look like this:
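A sketch of the entity; the field names are assumptions of mine, not necessarily the ones from the original project:

```kotlin
import org.springframework.data.annotation.Id
import org.springframework.data.mongodb.core.mapping.Document

// Task entity persisted in MongoDB; field names are illustrative
@Document
data class Task(
    @Id val id: String? = null,
    val description: String,
    val done: Boolean = false
)
```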

As we can see, there are some interesting Kotlin points here. The data class keyword means the class's purpose is to hold data. Kotlin will add some important behaviors automatically, like:

  • equals() and hashCode()
  • toString()
  • copy()

There are some restrictions: data classes cannot be abstract, open, sealed, or inner.

The full data classes documentation can be found here.
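The generated members can be exercised in a standalone snippet (plain Kotlin, with an illustrative Task of my own):

```kotlin
data class Task(val id: String? = null, val description: String, val done: Boolean = false)

fun main() {
    val task = Task(description = "Write the blog post")
    // copy() creates a new instance with only the chosen property changed
    val finished = task.copy(done = true)
    // toString() is generated automatically
    println(finished)
    // equals()/hashCode() give structural equality
    println(task == Task(description = "Write the blog post"))
}
```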

Creating the Reactive Task Repository

Now, we will use the Spring Data MongoDB Reactive implementation. The behavior is similar to the blocking version, but it will not block, because it is the reactive version. The way of thinking is similar: there is a DSL that uses object properties to create queries automatically.

The TaskRepository should look like this:
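A sketch of the repository, assuming the Task entity uses a String id:

```kotlin
import org.springframework.data.mongodb.repository.ReactiveMongoRepository

// Reactive repository: Task is the entity type, String the id type
interface TaskRepository : ReactiveMongoRepository<Task, String>
```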

The interface keyword is the same as in the Java language, but the way to extend in Kotlin is slightly different: in Kotlin we use ":" instead of extends.

Creating the TaskService

Let's create our TaskService class; it will invoke the repository implementations. The code of TaskService should look like this:
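A sketch of the service; the exact method set is an assumption of mine (the post only mentions tasks()):

```kotlin
import org.springframework.stereotype.Service
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

// Service delegating to the reactive repository
@Service
class TaskService(val taskRepository: TaskRepository) {

    // single-expression function: no body block, just the expression
    fun tasks(): Flux<Task> = taskRepository.findAll()

    fun save(task: Task): Mono<Task> = taskRepository.save(task)
}
```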

There are a couple of interesting things here. Let's start with the injection: there is no need to use @Autowired on a single constructor since Spring Framework 4.3, and as we can see, it works as expected here as well. We use val in favor of immutability.

Let's understand the tasks() function. The function has no body block because the implementation is a single expression. In this case, the function return type can be omitted as well. It makes the code more concise and easy to understand.

We have used the same features in our other functions.

The full documentation about functions can be found here.

The REST Layer

Our REST layer should be reactive, so we need to return Flux or Mono from our methods. We will use single-expression functions and omit these declarations; keep in mind that to achieve the reactive functionality we need Flux or Mono in our methods.

The TaskResource class should look like this:
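A sketch of the resource; the /tasks route and the method names are assumptions of mine:

```kotlin
import org.springframework.web.bind.annotation.*

// REST resource delegating to TaskService; return types
// (Flux<Task> / Mono<Task>) are inferred by the compiler
@RestController
@RequestMapping("/tasks")
class TaskResource(val taskService: TaskService) {

    @GetMapping
    fun tasks() = taskService.tasks()

    @PostMapping
    fun create(@RequestBody task: Task) = taskService.save(task)
}
```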

As we can see, we are doing the same as before: we have not declared the methods' return types, as the compiler can infer them for us, and it prevents developer errors as well.

We are using @GetMapping and @PostMapping instead of the @RequestMapping annotation; it makes our code more readable.

Configuring the MongoDB connections

We will use a yaml file; it makes our configuration more readable and introduces semantics into the file. The configuration file should look like this:
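A sketch of the application.yml; the host, database name, and port values are assumptions of mine:

```yaml
spring:
  data:
    mongodb:
      host: localhost
      port: 27017
      database: tasks

server:
  port: 8080
```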

There is nothing special here: a couple of configurations for MongoDB and the Tomcat server.

The Final Project Structure

Let's analyze the final project structure. You can use your preferred structure; I suggest the following one:


Excellent, now we can run it.

Run it and try your pretty new API using Kotlin Language!!!

Awesome Job, well done!!!

The full source code can be found here.

Tip

I recommend using Docker to run a MongoDB instance; it makes your life extremely easy.
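For instance, something like:

```shell
# Run a throwaway MongoDB on the default port
docker run -d --name mongodb -p 27017:27017 mongo
```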

Book

You can find more detailed implementations and different use cases in my book (Spring 5.0 By Example), published by Packt.


Thank you. In the next post, I will write about Spring Cloud Gateway and how it can help developers work with routes.

BYE.