How Kubernetes Operators Can Help Us with the API-First Approach

During my career, I’ve worked with several concepts, architecture models and new technologies, but sometimes I’m impressed by how people can think “outside of the box”.

When I encountered the Kubernetes Operator pattern, that is exactly what I thought: how were people able to create such a simple tool that helps so much with my development process, deployments and everyday work with Kubernetes?

I’ve been working with API design and microservices for a couple of years, so I decided to study Kubernetes Operators and how I could use them in my current job. My friends and I decided to create an open-source project that helps developers create mocks and tests to enable the API-first approach.

In this blog post, I’ll explain the concepts behind the API-first approach and Kubernetes Operators and, of course, the project itself.

Kubernetes Operators

Following the Kubernetes documentation, Operators can be defined as “software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop.”

The Operator pattern tries to automate the work of the human operator: it captures the knowledge of how to manage services and translates it into software. It is like a tireless sysadmin for your applications.

So the core idea behind Operators is to automate everything, but how is that possible in the Kubernetes ecosystem?

Kubernetes controllers provide the ability to extend Kubernetes functionality, and that is the feature we use to create an Operator. Let’s see how Kubernetes controllers work.

Kubernetes Controller

In Kubernetes, a controller is an application that takes care of routine tasks to ensure that the observed state of the cluster matches the desired state.

For example, when we use the Deployment kind, we want to run an application on Kubernetes. There is an attribute called replicas in the spec section: for Kubernetes it is the number of instances that should be running on the cluster, in other words, our desired state. The ReplicaSet controller spends its time maintaining the correct number of Pods and thereby keeps the cluster in the desired state.
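
For illustration, a minimal Deployment manifest declaring that desired state could look like this (the names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-api            # placeholder name
spec:
  replicas: 3                # desired state: three Pods running at all times
  selector:
    matchLabels:
      app: hello-api
  template:
    metadata:
      labels:
        app: hello-api
    spec:
      containers:
        - name: hello-api
          image: nginx:1.19  # placeholder image
```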

The important part here is that a controller should take care of one specific Kubernetes object. It is a best practice I recommend following to keep our code simple and concise.

Kubernetes objects are the entities in the Kubernetes system; Kubernetes uses them to represent the state of the cluster. In general, we use a YAML manifest to put that information into our cluster.

Kubernetes controllers work as a control loop, a non-terminating loop that regulates the state of a system.

A real-world example is a thermostat in a room: when we set the temperature, we are declaring our desired state.

Now that we have seen how Kubernetes controllers work, let’s look at custom resources.

Kubernetes Custom Resource Definitions

Custom resources are extensions of the Kubernetes API.

A resource is an endpoint in the Kubernetes API that stores a collection of objects of a certain kind. For example, the built-in services resource contains a collection of Service objects.

A custom resource is an extension point of the Kubernetes API that is not present in a vanilla Kubernetes installation; we need to “deploy” it through a Custom Resource Definition (CRD).

Once a custom resource is installed, users can create and access its objects using kubectl, just as they do for built-in resources like Pods.
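
As a sketch, a minimal CRD that registers a brand-new resource type could look like this (the Widget group and kind are made up for illustration):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # must be <plural>.<group>
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```

Once the CRD is applied, kubectl get widgets works just like kubectl get pods does for a built-in resource.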

Putting together Kubernetes Controllers and CRDs

The combination of Kubernetes controllers and Custom Resource Definitions (CRDs), plus the ability to manage the desired state of something, is what we call the Operator pattern.

We can use Kubernetes to automate deploying and running workloads, and we can also automate how Kubernetes does that.

Operators, then, are clients of the Kubernetes API that act as controllers for a custom resource. Bingo.

That is very important because we use all of these concepts in our project to manage our API mocks.

But before explaining how it works in practice, let’s revisit the API-first definition.

API-First

The API-first approach means the development process is guided by APIs: they are treated as first-class citizens in our products.

Before starting the development flow, we should have our API designed, defined and well documented.

This is very important because we can think about the API as a product, with product goals, market segmentation and so on, before starting our development flow.

You can find some insight into business and APIs here.

It gives us, as API product creators, some important benefits, such as:

Development speed: our teams can start the integration based on the contracts, because they are created at the beginning of the project.

Improved developer experience: we can test our design before going to market or starting development; exposing the design to developers can help us improve it.

These are just simple examples of API-first advantages; you can find many more on the internet.

APIrator – Operator For APIs

Using the Kubernetes Operator pattern and the API-first approach, we have created a project called APIrator, an open-source project which helps developers test, mock and validate their APIs.

Based on the OpenAPI Specification, developers can create their mocks and tests in an easy manner, using only the CRD installed previously during the operator deployment.

The operator aims to help developers mock and test their APIs; it is a developer tool to improve speed during the development process. It is not designed to expose your mocks to the external world; keep in mind that doing so can be a security vulnerability, so do not use it for that purpose.

The idea is very simple: we provide a custom resource to the Kubernetes cluster, the operator “operates”, and finally it deploys the OpenAPI definition as a mock.

It will deploy a mock container and a doc container. In the mock container you will find the “mocked” API, which responds based on the examples section of the OpenAPI document; yes, you should provide an example for each operation. It is a best practice and helps the mock expose the API correctly.
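
For instance, in the OpenAPI document an example can be attached to each response; this is the payload the mock will serve back (the endpoint below is hypothetical):

```yaml
paths:
  /hello:
    get:
      summary: Say hello
      responses:
        '200':
          description: A greeting
          content:
            application/json:
              example:               # the payload the mock answers with
                message: "Hello, world!"
```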

The doc container serves the UI for the API documentation, which can be useful to give developers and testers a visual reference.

The custom resource is pretty simple; let’s see an example.
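
Here is a rough sketch of such a manifest. The apiVersion group and the exact placement of the definition field are illustrative, so check the project’s sample manifests for the real schema:

```yaml
apiVersion: apirator.io/v1alpha1       # illustrative group/version
kind: APIMock
metadata:
  name: hello-mock
spec:
  definition: |                        # the OpenAPI document, inline as YAML
    openapi: "3.0.0"
    info:
      title: Hello API
      version: "1.0"
    paths:
      /hello:
        get:
          responses:
            '200':
              description: A greeting
              content:
                application/json:
                  example:
                    message: "Hello, world!"
```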

At the top of the file are our specific kind, APIMock, and the API version; that is the usual Kubernetes boilerplate. The most important part for us is the definition field, which holds the OpenAPI definition in YAML format.

Then you apply the custom resource and watch the result.
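
Assuming the manifest above is saved as apimock.yaml, that looks roughly like this (the output shown is illustrative):

```
$ kubectl apply -f apimock.yaml
apimock.apirator.io/hello-mock created

$ kubectl get apimock
NAME         AGE
hello-mock   10s
```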

As we can see, our mock is up and running. Awesome. In a couple of seconds, we have a simple mock for development and testing purposes.

If you look at the Pod resources in the namespace, you can see the Pods that were created.
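
Something along these lines; the Pod names are illustrative, with one Pod serving the mock and one serving the documentation:

```
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
hello-mock-mock-7c9f6b5d8-xk2lp   1/1     Running   0          30s
hello-mock-doc-5d7b9c6f4-q8wzv    1/1     Running   0          30s
```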

Time to play: you can test your mock on port 8000 and the doc container on port 8080.
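
For a quick check from your workstation you can port-forward one of the Pods and curl the mock; the Pod name below is the illustrative one from the listing above:

```
$ kubectl port-forward pod/hello-mock-mock-7c9f6b5d8-xk2lp 8000:8000
$ curl http://localhost:8000/hello
{"message":"Hello, world!"}
```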

Enjoy!!!!

This post is about how to use and take advantage of Kubernetes Operators; we will post a deeper dive into how to use APIrator soon.

Conclusions

Kubernetes Operators can be useful in the development process from different perspectives, but mostly for automating things, and we as developers can take advantage of this kind of solution.

APIrator is a simple solution that helps us developers test and explore the API world and, of course, exercise our Kubernetes skills as well.

Studying different things helps me a lot to understand how I can take advantage of certain patterns or approaches; maybe it can help you as well.

That is why I think Kubernetes Operators can help us achieve the API-first approach.

You can find APIrator on GitHub.

References

https://kubernetes.io/docs/concepts/architecture/controller/

https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/

https://swagger.io/resources/articles/adopting-an-api-first-approach/

Let’s GO!!! Dependency Injection in GOlang

Starting with Golang (a few points about Java)

After almost ten years of programming in Java, I decided to start learning a different language.

But why??? Is Java not good enough?

A big NO here: for me, Java is a brilliant and awesome language, and the community is vibrant as well. I love coding in Java. I love how people working in Java practice object-oriented programming.

But in my opinion, nowadays Java addresses the enterprise world. I mean it is an excellent language for coding business requirements, like CRMs and other business-related systems. There are several exciting frameworks for persistence and the web which increase developer productivity and help deliver code to production.

Nowadays, my challenge is creating cloud-native systems for infrastructure. The main purpose of these systems is helping developers create amazing microservices and deliver them in a secure and managed way. That is the reason why I chose Go.

Dependency Injection Pattern

I will not deep dive into the dependency injection pattern, because there are a lot of incredible blog posts, articles and discussions about it.

Look at Martin Fowler’s blog to find an amazing article about it.

For now, it is enough to say that dependency injection is important for creating decoupled and well-designed code.

We will use Wire to help us implement the dependency injection pattern in Go.

Requirements

I’ve created the Go project using Go Modules, which is an interesting way to manage dependencies.

Installing Wire

Wire generates the necessary code at build time, so we need to install the wire command-line tool first. Easy peasy lemon squeezy.
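
At the time of writing this is a single command (on newer Go versions, go install github.com/google/wire/cmd/wire@latest does the same job):

```
go get github.com/google/wire/cmd/wire
```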

Then ensure that $GOPATH/bin is added to your $PATH.

You can find the full instructions here.

Let’s code a little bit

Go Dependencies

We will create a simple random-word generator using Babble, “a small utility to generate random words in #golang”.

I’m using IntelliJ IDEA. The IDE can add dependencies to go.mod automatically, but if you are using VS Code, which is good as well, you can use a go.mod along the lines of the one below.
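
A sketch of it; the module path is a placeholder, and go mod tidy will pin the exact versions for you:

```
module github.com/example/word-generator // placeholder module path

go 1.14

require github.com/google/wire v0.4.0

// "go mod tidy" adds github.com/tjarratt/babble (with its resolved
// version) once the import is in place.
```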

There are a couple of dependencies; the two most important are Wire and Babble.

Business Code

Now we will create a simple file with our “business code”; it will be very simple. Let’s look at the code.
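
The original listing is not reproduced here, so below is a minimal sketch. The type and function names (Dictionary, WordGenerator, NewWordGenerator, GenerateWord) are the ones discussed in the next sections; wiring in the tjarratt/babble package as the word source is my assumption:

```go
package main

import "github.com/tjarratt/babble"

// Dictionary wraps the random-word source.
type Dictionary struct {
	babbler babble.Babbler
}

// NewDictionary is a producer: it returns a ready-to-use *Dictionary.
func NewDictionary() *Dictionary {
	return &Dictionary{babbler: babble.NewBabbler()}
}

// WordGenerator holds our "business logic" and depends on a Dictionary.
type WordGenerator struct {
	dictionary *Dictionary
}

// NewWordGenerator is a producer too, but it is also an injection point:
// Wire will supply the *Dictionary argument for us.
func NewWordGenerator(dictionary *Dictionary) *WordGenerator {
	return &WordGenerator{dictionary: dictionary}
}

// GenerateWord returns a random word from the dictionary.
func (w *WordGenerator) GenerateWord() string {
	return w.dictionary.babbler.Babble()
}
```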

There are a few important things here; let’s discuss them.

Producers

Let’s look at the New* functions. These are our producers (providers, in Wire terminology); their main goal is to produce the things to be injected. This matters because we will need to instruct Wire how to create each of them, as we will see soon. The code is very simple: we create a pointer for each struct.

Injection Points or Clients

Another important part of the code is the WordGenerator struct declaration. As we can see, it needs a Dictionary pointer; let’s understand how it receives one.

In the NewWordGenerator function, we receive the pointer to Dictionary; that is our injection point. Wire will “inject” the Dictionary reference here.

We don’t care about how the Dictionary was created and injected; in simple words, the only thing we need to think is “I want the instance here, and it should be ready for me”. The Wire framework will take care of the injection for us; that is the important principle behind dependency injection.

Wire Configuration

Now that we know the main characteristics of our application, let’s instruct Wire how to create our objects.

In the root folder, create a file called wire.go; this is the conventional name.

Let’s look at the file content.
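
A sketch of it, following the wireinject build-tag convention Wire expects:

```go
// +build wireinject

package main

import "github.com/google/wire"

// SetupApplication declares how the object graph is assembled.
// The body is never executed; wire.Build only lists the producers
// that participate in building a *WordGenerator.
func SetupApplication() *WordGenerator {
	wire.Build(NewDictionary, NewWordGenerator)
	return nil
}
```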

In the first line, we instruct go build to ignore this file; it is only read by the wire tool. This file is what Wire uses to generate the compiled file with all dependencies configured, which is fantastic.

There are some imports and, finally, the SetupApplication function, which is the core of our application wiring.

Let’s look at the return type. It produces a WordGenerator, the struct that contains our business logic. Our main.go will use this struct to generate random words. You could also return an application configuration struct instead; it is up to you.

In the function body, we use Wire to declare our container, that is, the set of dependencies available for other structs to use.

Look at the builder functions: as we saw before, these functions produce pre-configured structs (see the Producers section).

The return value of this function doesn’t matter, since it is never executed; the important part is that we explain to Wire how to create our application container.

Wire Code Generation

Now we are ready to generate our code. As we saw before, Wire generates the file that builds our application at compile time, so let’s do it.

In the root folder, at the same level as wire.go, run the wire command.
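
Assuming the binary installed earlier is on your PATH, that is simply:

```
wire
```

Running wire with no arguments processes the package in the current directory.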

The tool will create a file called wire_gen.go; let’s analyze its content.
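
It should look roughly like this (generated code, reproduced here as a sketch; the exact header depends on your Wire version):

```go
// Code generated by Wire. DO NOT EDIT.

//go:generate wire
//+build !wireinject

package main

// SetupApplication builds the whole object graph by hand,
// exactly as we would have written it ourselves.
func SetupApplication() *WordGenerator {
	dictionary := NewDictionary()
	wordGenerator := NewWordGenerator(dictionary)
	return wordGenerator
}
```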

In the first line we have a warning, “DO NOT EDIT”, which is a very important thing to notice.

Then we have the application configuration. Look at the declarations: all of our dependencies are built by the tool, which is amazing. All of the structs are configured and ready to use.

Wire did the “dirty work” for us. I’m so proud... hahaha

Main.go

Now our dependencies are ready; let’s use them. Create a file called main.go.
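
A minimal sketch:

```go
package main

import "fmt"

func main() {
	// SetupApplication is the Wire-generated injector: it returns a
	// fully wired *WordGenerator, ready to use.
	generator := SetupApplication()
	fmt.Println(generator.GenerateWord())
}
```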

Look at the SetupApplication() invocation: it produces our WordGenerator, and then we can call the GenerateWord function. Easy.

Conclusion

I like what Java programmers have created using important patterns like dependency injection, and for me, Wire is a vital library if you are thinking of working professionally with Go.

It will increase your productivity and also help you create decoupled applications.

The GitHub repo is available here.

More complicated stuff

If you want something more realistic, I’ve coded a simple application which receives an HTTP request and persists it in a PostgreSQL database, using Wire.

The code can be found here.

References

Wire tutorial

Wire User Guide