Microservices Authentication In Action with Service Mesh

Context

Microservices architecture is one of the most popular patterns in software architecture today, and there are many articles on the internet that try to explain the benefits and drawbacks of this style.

I've been working with microservices for a couple of years and, for me, the most problematic part is applying security in this scenario. There are many patterns and tools that try to solve it, and most of them are tied to specific languages and frameworks.

This gives us a set of problems, because we need to apply security in the "software" layer, which can cause issues such as:

  • Incorrect implementations by developers
  • Different implementations, because frameworks like Spring Security or Apache Shiro impose their own patterns
  • Languages usually impose their own patterns as well
  • Adding certificates in the application layer adds extra complexity for managing those certificates
  • The framework must support certificate configuration
  • Security implemented with different patterns across the solution
  • Developers who don't know the security standards in depth, which causes security leaks

Security is a non-functional requirement, and in most companies there is a dedicated area that defines standards and manages security across the whole ecosystem: network firewalls, network privileges, and so on.

We need a way to standardize security in our microservices solution. Istio, a Service Mesh implementation, can help us add security at the platform level (a.k.a. Kubernetes) in an easy way.

Let’s understand that!!!

Istio Service Mesh Implementation

A full explanation of Istio is out of scope for this blog post. If you need an introduction, you can find one in my previous article.

Authentication types in ISTIO

Istio offers two types of authentication. The first targets end users, while the second covers service-to-service authentication with certificates. Let's dig a little deeper into these two models.

End-User Authentication

This feature authenticates the end user: a person, device, or application that is trying to access our solution.

Istio enables request-level authentication through the JWT specification, the most widely used security specification for cloud-native applications. Istio also integrates easily with the OpenID Connect specification, another relevant security standard.

With a couple of configurations, such as the JWT issuer, the JWKS URI, and some paths to include and exclude, we are able to protect our microservices with an OAuth authentication flow.

It is configured through YAML files, which are very simple and intuitive.
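As a sketch, such a configuration might look like the policy below. It uses the RequestAuthentication kind from recent Istio versions (security.istio.io/v1beta1); the workload label, issuer, and JWKS URI are placeholder values, not settings from a real cluster.

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: default
spec:
  selector:
    matchLabels:
      app: bet            # placeholder workload label
  jwtRules:
  - issuer: "https://auth.example.com"                          # placeholder issuer
    jwksUri: "https://auth.example.com/.well-known/jwks.json"   # placeholder JWKS endpoint
```

Note that, on its own, this policy only validates tokens that are present; to reject requests without a valid token you would pair it with an AuthorizationPolicy.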

Service-to-Service Authentication

This kind of authentication is sometimes called transport authentication; it verifies the client connection to secure the communication. Istio offers mutual TLS (mTLS) as a full-stack solution for transport authentication.
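As a minimal sketch, assuming the PeerAuthentication kind from recent Istio versions (security.istio.io/v1beta1), mesh-wide mTLS can be enabled like this:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying it here makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT            # only mutual-TLS traffic is accepted
```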

Citadel is the component that provides the digital certificates, based on the SPIFFE standard, for each sidecar (a.k.a. Envoy proxy) present in the Data Plane.

In the next sections, I'll explain how it works, both at configuration time and during runtime authentication. Let's do it!!!!

Configuration Flow

Let's understand how the Citadel component distributes digital certificates to each pod present in the Data Plane.

The following steps describe these interactions:

  • Citadel watches the Kubernetes API server, creates a SPIFFE certificate and key pair for each of the existing and new service accounts. Citadel stores the certificate and key pairs as Kubernetes secrets.
  • When you create a pod, Kubernetes mounts the certificate and key pair to the pod according to its service account via Kubernetes secret volume.
  • Citadel watches the lifetime of each certificate and automatically rotates the certificates by rewriting the Kubernetes secrets.
  • Pilot generates secure naming information, which defines what service account or accounts can run a certain service. Pilot then passes the secure naming information to the sidecar Envoy.

As we can see in the steps above, the digital certificates are fully managed by the Citadel component in the Istio infrastructure.

That is very important, because it makes the digital certificates easy to manage and gives us a centralized point to act if something is wrong with the certificates.

Authentication Flow in Data Plane

The service-to-service authentication flow follows the steps below:

  • Istio re-routes the outbound traffic from a client to the client’s local sidecar Envoy.
  • The client-side Envoy starts a mutual TLS handshake with the server-side Envoy. During the handshake, the client-side Envoy also does a secure naming check to verify that the service account presented in the server certificate is authorized to run the target service.
  • The client-side Envoy and the server-side Envoy establish a mutual TLS connection, and Istio forwards the traffic from the client-side Envoy to the server-side Envoy.
  • After authorization, the server-side Envoy forwards the traffic to the server service through local TCP connections.

As we can see, the flow is strongly based on Envoy (sidecar) features.

Conclusions

In this post, I tried to explain a little bit about microservices authentication with Istio.

As we can see, it centralizes our security configuration in a couple of files, but the authentication itself happens in a distributed way, in the Envoys of the Data Plane.

In the next post, we will write the YAML files and apply them to a real cluster so we can see this behavior in action.

See you there!!!!

Authentication & Authorization for microservices in Service Mesh World

Motivation

I've been studying Service Mesh infrastructure over the last year. There are many exciting features, like how to achieve observability with only minor changes in the code.

Or how to use the right deployment strategy to deliver the real value of a microservices architecture to our customers, the business. People also talk about how to handle network issues with the mesh, which is very easy and very helpful as well.

I've been working with enterprise systems, and in most cases these companies have problems with security in their microservices architecture.

There are two main complaints about this topic.

The first is about the different ways to implement security in microservices. Different frameworks and languages have mixed ideas about security.

The second is about how we can provide security in the platform layer. This means removing security concerns from developers, because in general there is a department that defines the security patterns for the company, and this department has specific requirements to meet business requirements as well.

That is the main reason I decided to study security in the Service Mesh context. I've found different ways to solve the problems described above, and I'll try to explain how to achieve that using Istio.

Blog Post Series

This blog post series will cover the full set of authentication and authorization features present in Istio.

The series will have four posts, and the main idea is to cover best practices regarding security for a microservices architecture using a service mesh; we will use Istio for that.

The first post will cover the authentication concepts present in Istio. We will explain how they work in detail so we can understand the right use cases for them.

The second post will cover authentication in Istio in a practical way; it will have a lot of YAML and examples.

The third post will cover the authorization concepts implemented in Istio, which are very important to understand.

In the last post, we will implement authorization with YAML files; this is the practical part for authorization.

Now our context is very clear. Let's start right now !!!!!

Use-case for this Blog Series

The idea of this blog series is to show how to use Istio to enable authentication and authorization in the microservices world, but to achieve that we need a use case that shows how it works in practice.

We will configure a simple betting solution that enables users to create bets in our system. The solution is composed of four microservices: bet, matches, championship, and players.

The bet microservice will reach out to the other ones (matches, championships, and players) to validate the data, such as the championship date and so on.

This is not a real use case; it is just something we'll use to clarify ideas about security in the microservices world and the benefits we can get from Service Mesh infrastructure.

Let's see our simple solution diagram:

As we can see, there are different profiles in our solution: the manager, who maintains the championship and match data, and the users, who manage their own profiles.

In the next posts, we will cover all the details about how Istio can help us deliver security, authentication, and authorization in a centralized way.

See you there!!!

Let’s GO!!! Dependency Injection in GOlang

Starting in Golang (a few points about Java)

After almost ten years of programming in Java, I decided to start learning a different language.

But why??? Isn't Java good enough??

A big NO here. For me, Java is a brilliant and awesome language, and the community is vibrant as well. I love coding in Java, and I love how Java developers practice Object-Oriented Programming.

But in my opinion, Java nowadays addresses the enterprise world. I mean, it is an excellent language for coding business requirements, like CRMs and other business-related systems. There are several exciting frameworks for persistence and the web that increase developer productivity and help deliver code to production.

Nowadays, my challenge is creating cloud-native systems for infrastructure. I mean systems whose main purpose is helping developers create amazing microservices and deliver them in a secure and managed way. That is why I chose the Go language.

Dependency Injection Pattern

I will not deep dive into the dependency injection pattern, because there are a lot of incredible blog posts, articles, and discussions about it.

Look at Martin Fowler's blog to find an amazing article about it.

For now, it is enough to say that dependency injection is important for creating decoupled and well-designed code.

We will use Wire to help us implement the dependency injection pattern in Go.

Requirements

I've created the Go project using Go Modules, which is an interesting way to manage dependencies.

Installing Wire

Wire generates the necessary code at compilation time, so we need to install the wire tool to be able to do it. Easy peasy lemon squeezy: install it with `go get github.com/google/wire/cmd/wire` (or `go install github.com/google/wire/cmd/wire@latest` on recent Go versions), ensuring that $GOPATH/bin is added to your $PATH.

You can find the full instructions here.

Let’s code a little bit

Go Dependencies

We will create a simple random word generator using Babler, a small utility for generating random words in #golang.

I'm using IntelliJ IDEA. The IDE has an autocomplete feature that adds dependencies to go.mod automatically, but if you are using VS Code, which is good as well, you can add the go.mod described below.

There are a couple of dependencies; the two most important are Wire and Babler.
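The original go.mod isn't reproduced here; a sketch of what it might look like follows. The module path and versions are illustrative placeholders, not values from the real project.

```
module github.com/example/wordgenerator // hypothetical module path

go 1.13

require (
	github.com/google/wire v0.3.0     // DI code generator
	github.com/tjarratt/babble v1.0.0 // "Babler" random-word library; placeholder version
)
```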

Business Code

Now, we will create a simple file with our "business code"; it will be very simple. Let's look at the code.
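The original snippet isn't included here, so below is a self-contained sketch of what such "business code" might look like. To keep it runnable on its own, the Babler dependency is replaced by a plain word slice; the struct and function names match the ones discussed next, and the main function hand-wires the pieces the way Wire will later do for us.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Dictionary holds the words we can pick from. In the original post this
// data comes from the Babler library; here it is a plain slice so the
// sketch stays self-contained.
type Dictionary struct {
	words []string
}

// NewDictionary is a producer: Wire calls functions like this one to
// build the values it injects.
func NewDictionary() *Dictionary {
	return &Dictionary{words: []string{"mesh", "sidecar", "envoy"}}
}

// WordGenerator is the injection point: it declares that it needs a
// *Dictionary without knowing how one is created.
type WordGenerator struct {
	dictionary *Dictionary
}

// NewWordGenerator receives the *Dictionary; Wire "injects" it here.
func NewWordGenerator(d *Dictionary) *WordGenerator {
	return &WordGenerator{dictionary: d}
}

// GenerateWord returns a random word from the injected dictionary.
func (g *WordGenerator) GenerateWord() string {
	return g.dictionary.words[rand.Intn(len(g.dictionary.words))]
}

func main() {
	// Hand-wired for illustration; Wire generates this call chain for us.
	generator := NewWordGenerator(NewDictionary())
	fmt.Println(generator.GenerateWord())
}
```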

There are a few important things here; let's discuss them.

Producers

Let's look at the New* functions. These are our producers: the main goal of these functions is to produce the things that will be injected. This is very important, because we will need to instruct Wire how to create, or produce, them, as we will see soon. The code is very simple: we create a pointer for each struct.

Injection Points or Clients

Another important part of the code is the WordGenerator struct declaration. As we can see, it needs a Dictionary pointer; let's understand how it receives one.

In the NewWordGenerator function, we receive the pointer to a Dictionary: that is our injection point. Wire will "inject" the dictionary reference here.

We don't care about how the Dictionary was created and injected; in simple words, the only thing we need to think is "I want the instance here, and it should be ready for me." The Wire framework takes care of the injection for us; that is the important principle behind dependency injection.

Wire Configuration

Now that we know the main characteristics of our application, let's instruct Wire how to create our objects.

In the root folder, create a file called wire.go; this name is mandatory.

Let's look at the file content.
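The file content isn't reproduced here; a minimal sketch of what such a wire.go might look like, assuming the NewDictionary and NewWordGenerator producers from the business code, is:

```go
//go:build wireinject
// +build wireinject

package main

import "github.com/google/wire"

// SetupApplication declares, for the wire tool, how to assemble a
// *WordGenerator. The body is never executed; wire only reads the
// provider list inside wire.Build.
func SetupApplication() *WordGenerator {
	wire.Build(NewDictionary, NewWordGenerator)
	return nil
}
```

Returning nil here is the Wire convention: the function is a declaration, excluded from normal builds by the wireinject build tag.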

In the first line, we instruct go build to ignore this file. The file exists only to generate the compiled code with all dependencies configured, which is fantastic.

There are some imports and, finally, the SetupApplication function, which is the core of our configuration.

Let's look at the return type. It produces a WordGenerator, the struct that contains our business logic. Our main.go will invoke this struct to generate random words. You could also return your own application configuration struct; that is up to you.

In the function body, we've used wire to declare our container, I mean our dependencies, so that other structs can use them.

Look at the builder functions: as we saw before, these functions produce pre-configured structs (see the Producers section).

The body's return value doesn't matter; the important part is that we explain to Wire how to create our application container.

Wire Code Generation

Now we are ready to generate our code. As we saw before, Wire generates the file that builds our application at compilation time, so let's do it.

In the root folder, at the same level as wire.go, run the `wire` command with no arguments.

The tool will create a file called wire_gen.go; let's analyze its content.
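The generated content isn't shown here either; for a provider set like the one above, the output would look roughly like this sketch: plain constructor calls, with no reference to Wire left at runtime.

```go
// Code generated by Wire. DO NOT EDIT.

package main

// SetupApplication builds the whole object graph that we declared
// in wire.go, now as ordinary constructor calls.
func SetupApplication() *WordGenerator {
	dictionary := NewDictionary()
	wordGenerator := NewWordGenerator(dictionary)
	return wordGenerator
}
```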

In the first line we have a warning, "Do not edit", which is a very important thing to notice.

Then we have the application configuration. Look at the declaration: all of our dependencies are built by the tool, which is amazing. All of the structs are configured and ready to use.

Wire did the "dirty job" for us. I'm so proud... hahaha.

Main.go

Now our dependencies are ready to use, so let's use them. Create a file called main.go.
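The file isn't reproduced here; a minimal sketch, assuming the SetupApplication generated in wire_gen.go, might be:

```go
package main

import "fmt"

func main() {
	// SetupApplication comes from the Wire-generated wire_gen.go.
	generator := SetupApplication()
	fmt.Println(generator.GenerateWord())
}
```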

Look at the SetupApplication() invocation: it produces our WordGenerator, and then we can call the GenerateWord function. Easy peasy.

Conclusion

I like what Java programmers have built with important patterns like dependency injection, and for me, Wire is a vital library if you are thinking of working professionally with Go.

It will increase your productivity and also help you create decoupled applications.

The GitHub repo is available here.

More complicated stuff

If you want something more realistic, I've coded a simple application that receives HTTP requests and persists them to a PostgreSQL database using Wire.

The code can be found here.

References

Wire tutorial

Wire Userguide

What is Service Mesh and Why you should consider it

Brief about Software Architecture History

Before starting the explanation of Service Mesh infrastructure, let's understand a little bit about what we, as software architects and developers, have created before.

I think it is essential to know why a Service Mesh might be useful for you.

We'll talk a little bit about software architecture history. I promise it will be quick, but it will be worth it.

Let's analyze the image below.

There are different architecture models in the timeline, such as the MVC pattern, which is still present in our architectures today.

SOA and EDA were popularized in 2000 and 2003, respectively, and things started to become distributed. At that time we started to break systems into small pieces of software, for several reasons.

And finally Microservices and Serverless arrived, in 2013 and 2015 respectively, bringing different approaches to software development.

Conway's Law and the Inverse Conway Maneuver sprang onto the scene to explain how to align software architecture with the business and its teams.

There is a typical behavior if we look at this timeline: we are trying to divide our software into smaller and smaller pieces, as small as we can.

But why is this important, or related to Service Mesh infrastructure???

Microservices are distributed systems, and distributed systems mean handling network problems.

That is exactly where Service Mesh infrastructure can help us: it abstracts network issues away from developers.

Service Mesh

In a nutshell:

A Service Mesh can be defined as a dedicated infrastructure layer to handle a high volume of traffic based on IPC (inter-process communication), usually called East-West traffic in a microservices architecture.

In a few words, a service mesh can be considered a "layer" that abstracts the network for service-to-service communications.

These abstractions solve most of the network handling concerns, like load balancing, circuit breakers, retries, timeouts, and smart routing, which enables advanced deployment techniques like canary releases, dark launches, and others.

Then we can take these responsibilities out of our application code. We can also remove these roles from developers, which is very important because developers should code for the business, not for infrastructure requirements.
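As a small illustration of moving these responsibilities out of application code, retries and timeouts in Istio can be declared in a VirtualService like the sketch below; the service name is a placeholder:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bet
spec:
  hosts:
  - bet                    # placeholder service
  http:
  - timeout: 3s            # fail the call after 3 seconds overall
    retries:
      attempts: 3
      perTryTimeout: 1s
    route:
    - destination:
        host: bet
```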

Another important characteristic of a Service Mesh is telemetry; some implementations integrate easily with Jaeger and Prometheus.

Famous libraries in the Java ecosystem related to network handling, like Netflix Ribbon, Hystrix, and Eureka, can be replaced by Service Mesh implementations like Istio.

Service Mesh & Microservices Architecture

In general, in a microservices architecture, service-to-service communication is quite complex.

It usually involves different communication patterns, like REST and gRPC over HTTP, or AMQP for asynchronous and durable communication.

As we can see, microservices are distributed systems; that is why Service Mesh infrastructure fits so well.

Practical example

Let's look at a simple and pretty standard microservices architecture.

Standard Microservices Architecture

There are some important things to look at here.

North & South Traffic

North-South traffic usually happens between different networks; look at Network A and Network B in the image. This kind of traffic comes from outside our infrastructure, from our clients, and it is not trusted because the external network is out of our control.

We need heavy security here; that is where our gateway protects our applications.

Usually, we have an API platform to manage our external APIs; API management techniques and processes can help us with this task.

East & West Traffic

On the other hand, East-West traffic generally happens on the same network; as we saw before, it is normally called service-to-service communication, or IPC.

That is the place where Service Mesh lives.

gRPC is a very interesting framework if you are looking at high-throughput applications for service-to-service communication.

Conclusions

A Service Mesh is an interesting option if you are trying to play with a microservices architecture, but I strongly recommend that you understand it a little deeper before adding a Service Mesh to your architecture stack.

There is no silver bullet when you think about software architecture, but we, as software architects, developers, and others, need to understand the company context and propose the right solution for it.

Kubernetes Patterns – Sidecar

Motivation

Last week, I blogged about the Ambassador Pattern.

That pattern is very important when we are trying to solve network issues in a microservices architecture; in a few words, an Ambassador is a kind of proxy that helps with service-to-service communication.

Today we'll talk about the Sidecar Pattern. It is an interesting pattern when we are looking for help with network issues, but as we will see during this post, it enables more features than that.


Context

In the containers world, we need to follow the container Golden Rule: a container should have one single purpose. That is the most important thing to follow.

When we develop applications using microservices as an architectural guide, we shouldn't worry about infrastructure concerns like log collection, network handling, and other orthogonal concerns. These concerns are more related to the platform where we run our service than to our application code.

We should use our platform to help us with these activities. Kubernetes is the de facto platform for running container workloads. We can use Kubernetes to deploy a dedicated infrastructure to handle internal network traffic, Istio for example. In this case, Istio is our "platform" helping with network handling.

I've blogged about my first impressions of Istio and Service Mesh.

Kubernetes has a primitive called the Pod, the smallest unit of computational resources in the Kubernetes ecosystem. A Pod is able to hold multiple containers, and in that scenario the Sidecar Pattern is a perfect way to help the main container.

Let's look at the Pod anatomy (the yellow one).

Solution

The sidecar container should add some additional functionality to the microservice container. The important thing to pay attention to here is that the sidecar runs in a different process and is not able to change anything in the microservice container.

In the same Pod, containers are able to share volumes and the same network; this means the containers can reach each other via "localhost", for example.

Let’s analyze an example.

In a real microservices architecture, we might have different services and many instances of those services; but how can we look at the logs effectively?

We need a centralized tool that collects these logs, and we also need an effective way to query this data to find what helps us troubleshoot and debug distributed systems.

Is it the role of the main container to send these logs to a service in the cloud? Probably not: a sidecar container can collect these logs, since the containers share volumes, and send the data to the cloud.

The sidecar "enriches" the main container's functionality by sending data to the cloud systems. That is the main role of a sidecar container.
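A minimal Pod manifest for this example might look like the sketch below; the image names are placeholders. Both containers mount the same emptyDir volume, so the logger can read whatever the microservice writes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}           # shared scratch space for the two containers
  containers:
  - name: microservice     # the main container: business logic only
    image: example/bet-service:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: logger           # the sidecar: ships the logs to the cloud
    image: example/log-shipper:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
```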

Look at the image below:

As we can see, the logger container sends the data to the cloud storage; the logger reads the data from the Pod's volumes, because the containers share the disk.

The microservice container doesn't care about the logs; the main container should serve our business only.

That is one example where a sidecar container can help us by adding extra functionality to our main container.

In Service Mesh infrastructure, the sidecar container adds extra functionality that helps us handle network issues; that is another example.

Conclusion

The Sidecar Pattern is very useful when we work on distributed systems, especially in the containers world.

It will increase our productivity, because we don't need to pay attention to infrastructure concerns, and it makes our code more concise than ever, free of infrastructure handling.

Then, it is time to say goodbye to Netflix Ribbon, Netflix Eureka, and Netflix Hystrix, and move their responsibilities to sidecar containers.

References

Kubernetes Pattern Book

Microsoft Azure Docs

Kubernetes Patterns – Ambassador


Motivation

Recently, I've been studying Kubernetes in depth, mainly how to use platform features to help me work with distributed architectures.

During this journey, to my surprise, I found many books on Kubernetes patterns, and my god, these books opened my mind about how to use Kubernetes effectively.

My favorite one is Kubernetes Patterns. The book is awesome; it is a kind of guide for me right now. It describes many patterns and categorizes them under principles like Predictable Demands, Declarative Deployments, Health Probe, Managed Lifecycle, and Automated Placement.

Today, I'll talk about an important pattern related to network management techniques.

Let’s talk about Ambassador Pattern.

Ambassador

Context

When we work with distributed systems, the network is the biggest challenge to solve; remember the Fallacies of Distributed Computing.

We need an effective strategy to deal with outages, service discovery, circuit breakers, intelligent and dynamic routing rules, and timeouts.

In general, these things require a lot of configuration files involving connection, authentication, and authorization. These configurations should be dynamic as well, because in distributed systems the addresses of instances change a lot within a certain timebox.

Of course, sometimes we are not able to handle these issues in the application itself, because the framework the application is coded in doesn't support these features.

Also, we need to remember the container Golden Rule: a container should exist for one single, small reason.

Handling these challenges in our application code may not be a good idea, especially because sometimes we need to integrate with legacy applications.

The Ambassador pattern helps us exactly at this point; let's see how.

Solution

An Ambassador acts as a "proxy" and hides all the complexity of accessing external services.

We put the ambassador container between our main application and its external service connections. Just to remind you, the ambassador container should be deployed in the same Kubernetes Pod where our main application container resides.

Using this simple approach, we are able to handle network failures, security, and resiliency in the ambassador container: a simple and effective way to deal with these hard problems.

Look at the image below.

The ambassador container should handle configuration related to service discovery, timeouts, circuit breaking, smart routing, and security.
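A minimal sketch of such a Pod, with placeholder image names: the main application talks to the ambassador on localhost, and the ambassador handles the external connections:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: main-app
    image: example/legacy-app:1.0   # talks only to localhost:9000
  - name: ambassador
    image: example/proxy:1.0        # proxies localhost:9000 to the external services
    ports:
    - containerPort: 9000
```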

Conclusion

The Ambassador Pattern is very useful when we work with distributed systems. It reduces our main application's complexity by taking network management out of our application code.

Remember: it will add some latency overhead. If network latency is a critical point for you, you may need to think carefully before adopting an ambassador.


References

Kubernetes Patterns book

https://docs.microsoft.com/pt-br/azure/architecture/patterns/ambassador

Releases, Deployments and Traffic Mirroring

During my journey learning Istio and its stack, I've discovered some interesting concepts about deployments. At first I didn't know the difference between a deployment and a release; if you don't either, no problem, I'll explain it in detail in this blog post. In the next post, I'll explain how Istio can help us put it into practice.

Deployment vs Release

The first thing to do, before deep diving into strategies, is to understand the difference between these concepts. I discovered it while reading Christian E. Posta's excellent book, Istio in Action. The book is still in production, so there are some chapters yet to be released.

Deployment

A deployment can be described as the activity of installing new code into production, or another environment, at runtime. The important thing here is that it can't affect users in any way; we don't shift any traffic to these artifacts yet. That means we can deploy multiple versions without problems.

Release

A release happens when we shift traffic to a deployment done previously. It can affect the system's users, so we should plan it carefully. There are some ways to minimize user impact during our releases; we'll discuss them in detail in this blog post. Separating releases from deployments also helps us avoid "big bang" cutovers, as in blue-green deployments.

Request Level or Traffic Shifting

Now that we know the main difference between a deployment and a release, we can discuss another critical topic: request distribution strategies, or how best to split traffic during deployments.

There are two ways to achieve it: traffic shifting or request-level routing. It is super important to understand them, because based on that you will choose the best option for your use case.

Request Level

This one is kind of self-explanatory: with this technique, we split traffic based on request header attributes and can control production traffic exactly as we want. This strategy gives us finer-grained control over production traffic during our deployments.

For example, we can route traffic based on the client-id from the OAuth protocol, where that specific client might be a partner testing our application in the real world.

Traffic Shifting

Traffic shifting can be an excellent option when we don't expect to "identify" users by something in the request. In this strategy, we split traffic across versions based on a percentage of calls. It is a little simpler than request-level routing, but it can be an exciting option for testing our deployments.
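A traffic-shifting rule in Istio can be sketched as the VirtualService below, which sends 90% of the calls to the stable subset and 10% to the new one. The service name is a placeholder, and it assumes a DestinationRule defining the v1 and v2 subsets:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bet
spec:
  hosts:
  - bet                 # placeholder service
  http:
  - route:
    - destination:
        host: bet
        subset: v1      # stable version
      weight: 90
    - destination:
        host: bet
        subset: v2      # new version
      weight: 10
```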

Let's talk about Release Strategies!!!

Dark Launch Releases

In this kind of release, we shift traffic to a new deployment for a minor part of the users, based on some rule, a percentage for instance. The important note here is that most of the users, i.e. most production traffic, should still go to the "stable" version. The main idea is to test new features with a set of premium users and then measure adoption, or whatever matters for your company.

Let's see an example using the traffic shifting strategy.

Dark Launch Example

Canary Releases

The idea is very similar to dark launches, but there is a small difference: in a canary release we want to test a new version of our deployment and observe its performance and system behavior. It is not only about a new feature or a significant change; sometimes we want to test new versions that bring performance improvements, for example. In this example we'll use the request-level strategy; let's see it.

Canary Release

In the example above, we route the production traffic to the new version only for client-id = 10; all other client-ids go to the stable version of our application.
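The example above can be sketched as a VirtualService that matches on a client-id header; the service name and subsets are placeholders and assume a matching DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bet
spec:
  hosts:
  - bet
  http:
  - match:
    - headers:
        client-id:
          exact: "10"   # only this client reaches the new version
    route:
    - destination:
        host: bet
        subset: v2
  - route:              # everyone else stays on the stable version
    - destination:
        host: bet
        subset: v1
```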

Traffic Mirroring

The idea of traffic mirroring is pretty simple: we route a copy of the real production requests to a new deployment or experimental version. The copy of the request is handled on a fire-and-forget basis and won't impact the real users' requests. Mirroring traffic is an interesting technique for delivering code into production with more confidence.

The image below shows the traffic mirroring strategy.

Traffic Mirroring Flow
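A mirroring rule can be sketched as below: all traffic is routed to v1, while a copy of each request is sent, fire and forget, to v2. Placeholders again, and it assumes a DestinationRule with these subsets:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bet
spec:
  hosts:
  - bet
  http:
  - route:
    - destination:
        host: bet
        subset: v1      # real responses come from here
    mirror:
      host: bet
      subset: v2        # receives a copy of every request; responses are discarded
```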

These concepts are very important to know; they help us choose the correct strategy for our deployments.

In my opinion, this knowledge is the key to choosing a successful deployment strategy.

In the next post, we will learn how to do it using Istio, an open-source service mesh implementation.

References

ISTIO in Action by Christian E. Posta

Blue Green Deployments by Martin Fowler