Let’s GO!!! Dependency Injection in GOlang

Starting with Go (a few points about Java)

After almost ten years of programming in Java, I decided to start to learn a different language.

But why??? Isn’t Java good enough?

A big NO here. For me, Java is a brilliant and awesome language, and the community is vibrant as well. I love coding in Java, and I love how Java developers practice Object-Oriented Programming.

But in my opinion, nowadays Java addresses the enterprise world. I mean it is an excellent language for coding business requirements, like CRMs and other business-related systems. There are several exciting frameworks for persistence and the web which increase developer productivity and help deliver code to production.

Nowadays, my challenge is creating cloud-native systems for infrastructure. I mean systems whose main purpose is helping developers create amazing microservices and deliver them in a secure and managed way. That is the reason why I chose Go.

Dependency Injection Pattern

I will not deep dive into the dependency injection pattern, because there are a lot of incredible blog posts, articles, and discussions about it.

Look at Martin Fowler’s blog to find an amazing article about it.

For now, it is enough to say that dependency injection is important for creating decoupled and well-designed code.

We will use Wire to help us implement the dependency injection pattern in Go.

Requirements

I’ve created the Go project using Go Modules, which is an interesting way to manage dependencies.

Installing Wire

Wire generates the necessary code at compile time, so we need to install the wire tool to be able to do it. Easy peasy lemon squeezy.
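The original install snippet isn’t reproduced here; at the time of writing, the Wire project documented installing the tool with go get:

```shell
go get github.com/google/wire/cmd/wire
```

This drops a wire binary into $GOPATH/bin.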

Also ensure that $GOPATH/bin is added to your $PATH.

You can find the full instructions here.

Let’s code a little bit

Go Dependencies

We will create a simple random-word generator using Babler, a small utility to generate random words in Go.

I’m using IntelliJ IDEA. The IDE has an autocomplete feature to add dependencies to go.mod automatically, but if you are using VS Code, which is good as well, you can write the go.mod described below.
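The original go.mod isn’t shown here; a sketch of it could look like the one below. The module path is hypothetical and the version strings are placeholders, since go build or go mod tidy resolves the real ones:

```
module github.com/example/wordgen

go 1.12

require (
	github.com/google/wire v0.2.1
	github.com/tjarratt/babble v0.0.0-20190228064736-abcdef123456
)
```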

There are a couple of dependencies; the two most important are Wire and Babler.

Business Code

Now, we will create a simple file with our “business code”; it will be very simple. Let’s look at the code.
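The original listing isn’t reproduced here, so below is a minimal, self-contained sketch of what such a file could look like. The names (Dictionary, WordGenerator) follow the prose of this post, but the hard-coded word list is illustrative; the real project delegates word generation to Babler:

```go
package main

import "math/rand"

// Dictionary is our source of random words. A real implementation
// would delegate to the Babler library instead of a fixed slice.
type Dictionary struct {
	words []string
}

// NewDictionary is a producer: it builds the *Dictionary that Wire injects.
func NewDictionary() *Dictionary {
	return &Dictionary{words: []string{"gopher", "wire", "cloud", "mesh"}}
}

// WordGenerator holds the business logic; the *Dictionary field is the
// injection point.
type WordGenerator struct {
	dictionary *Dictionary
}

// NewWordGenerator is another producer; Wire supplies the *Dictionary.
func NewWordGenerator(d *Dictionary) WordGenerator {
	return WordGenerator{dictionary: d}
}

// GenerateWord returns one random word from the dictionary.
func (g WordGenerator) GenerateWord() string {
	return g.dictionary.words[rand.Intn(len(g.dictionary.words))]
}
```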

There are a few important things here; let’s discuss them.

Producers

Let’s look at the New* functions. These are our producers; the main goal of these functions is to produce the objects to be injected. This is very important because we will need to instruct Wire how to create, or produce, them, as we will see soon. The code here is very simple: we create a pointer for each struct.

Injection Points or Clients

Another important part of the code is the WordGenerator struct declaration. As we can see, it needs a Dictionary pointer; let’s understand how it receives one.

In the NewWordGenerator function, we receive the pointer to the Dictionary; that is our injection point. Wire will “inject” the Dictionary reference here.

We don’t care about how the Dictionary was created and injected; the only thing we need to think, in simple words, is “I want the instance here, and it should be ready for me.” Wire will take care of the injection for us; that is the important principle behind dependency injection.

Wire Configuration

Now that we know the main characteristics of our application, let’s instruct Wire how to create our objects.

In the root folder, create a file called wire.go; this name is mandatory.

Let’s look at the file content.
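The file isn’t reproduced here; a wire.go along those lines could look like the sketch below, assuming the producers from the business-code section. This file is consumed by the wire tool rather than compiled into the final binary:

```go
//+build wireinject

package main

import "github.com/google/wire"

// SetupApplication tells Wire which producers to combine to build
// our WordGenerator. The body is only a template; Wire replaces it
// in the generated file.
func SetupApplication() WordGenerator {
	wire.Build(NewDictionary, NewWordGenerator)
	return WordGenerator{}
}
```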

In the first line, we instruct go build to ignore this file. The file is only used to generate the compiled file with all dependencies configured; it is fantastic.

There are some imports and finally the SetupApplication function, which is the core of our application.

Let’s look at the return type. The function produces a WordGenerator, the struct that contains our business logic. Our main.go will invoke this struct to generate random words. Sometimes you may prefer to return your own application configuration struct; it is up to you.

In the function body, we use wire to create our container, that is, our dependencies, ready for other structs to use.

Look at the builder functions: as we saw in the Producers section, these functions produce pre-configured structs.

The return value of this function doesn’t matter; the important part here is that we explain to Wire how to create our application container.

Wire Code Generation

Now we are ready to generate our code. As we saw before, Wire generates the file used to build our application at compile time, so let’s do it.

In the root folder, at the same level as wire.go, type:
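That is the wire command itself, installed earlier into $GOPATH/bin:

```shell
wire
```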

The tool will create a file called wire_gen.go; let’s analyze the content.
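The generated file is plain Go: Wire writes out, by hand as it were, the calls we would otherwise write ourselves. This is a sketch of its shape, not the literal tool output:

```go
// Code generated by Wire. DO NOT EDIT.

//+build !wireinject

package main

// SetupApplication builds the whole object graph explicitly.
func SetupApplication() WordGenerator {
	dictionary := NewDictionary()
	wordGenerator := NewWordGenerator(dictionary)
	return wordGenerator
}
```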

In the first line we have a warning, “Do not edit”; that is a very important thing to notice.

Then we have the application configuration. Look at the declaration: all of our dependencies are built by the tool, which is amazing. All of the structs are configured and ready to use.

Wire did the “dirty job” for us. I’m so proud..hahaha

Main.go

Now our dependencies are ready; let’s use them. Create a file called main.go.
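A minimal main.go could look like this, assuming the SetupApplication function generated into wire_gen.go:

```go
package main

import "fmt"

func main() {
	// SetupApplication comes from the Wire-generated wire_gen.go.
	generator := SetupApplication()
	fmt.Println(generator.GenerateWord())
}
```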

Look at the SetupApplication() invocation: it produces our WordGenerator, and then we can call the GenerateWord function. Easy, easy.

Conclusion

I like what Java programmers have created using important patterns like dependency injection, and for me, Wire is a vital library if you are thinking of working professionally with Go.

It will increase your productivity and also help you create decoupled applications.

The GitHub repo is available here.

More complicated stuff

If you want something more realistic, I’ve coded a simple application which receives an HTTP request and persists it in a PostgreSQL database using Wire.

The code can be found here.

References

Wire tutorial

Wire User Guide

What is Service Mesh and Why you should consider it

Brief about Software Architecture History

Before starting the explanation about Service Mesh infrastructure, let’s understand a little bit about what we have created before, as software architects and developers.

I think it is essential to know why Service Mesh might be useful for you.

We’ll talk a little bit about software architecture history. I promise it will be quick, but it will be worth it.

Let’s analyze the image below

There are different architecture models in the timeline, such as the MVC pattern, which is still present in our architectures today.

SOA and EDA were popularized in 2000 and 2003, respectively; things started to become distributed. At that time we started to break things into small pieces of software, for several reasons.

And finally Microservices and Serverless, in 2013 and 2015 respectively, brought different approaches to software development.

Conway’s Law and the Inverse Conway Maneuver sprang onto the scene to explain how to work with software architecture in terms of business and teams.

There is a typical behavior if we look at this timeline: we are trying to divide our software into smaller and smaller parts, as small as we can.

But why is that important, and how is it related to Service Mesh infrastructure?

Microservices are distributed systems, and distributed systems mean handling network problems.

That is exactly where Service Mesh infrastructure can help us: it abstracts network issues away from developers.

Service Mesh

In a nutshell.

A Service Mesh can be defined as a dedicated infrastructure to handle a high volume of traffic based on inter-process communication (IPC). In a microservices architecture, this is usually called East-West traffic.

In a few words, a service mesh can be considered a “layer” that abstracts the network for service communications.

These abstractions solve most of the network handling, like load balancing, circuit breakers, retries, timeouts, and smart routing, which can enable advanced deployment techniques like canary releases, dark launches, and others.
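For instance, with Istio (one service mesh implementation) retries and timeouts become declarative configuration instead of application code. The service name below is hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
      timeout: 2s          # callers fail fast instead of hanging
      retries:
        attempts: 3        # transparently retry transient failures
        perTryTimeout: 500ms
```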

Then we can take these responsibilities out of our application code. Also, we can remove these roles from developers, which is very important, because developers should code for the business, not for infrastructure requirements.

Another important characteristic of a Service Mesh is telemetry; some implementations integrate with Jaeger and Prometheus easily.

Famous libraries in the Java ecosystem related to network handling, like Netflix Ribbon, Hystrix, and Eureka, can be replaced by Service Mesh implementations like Istio.

Service Mesh & Microservices Architecture

In general, in the Microservices Architecture, service-to-service communication is quite complex.

It usually involves different communication patterns, like REST and gRPC over HTTP, or AMQP for asynchronous and durable communications.

As we can see, microservices are distributed systems; that is the reason why Service Mesh infrastructure fits very well.

Practical example

Let’s look at a simple and pretty standard microservices architecture.

Standard Microservices Architecture

There are some important things to look at here.

North & South Traffic

North-South traffic usually happens between different networks; look at the image, Network A and Network B. This kind of traffic comes from outside our infrastructure, from our clients, and it is not trusted because the external network is out of our control.

We need heavy security here; that is why our gateway protects our applications.

Usually, we have an API Platform to manage our external APIs. API Management techniques and processes can help us with this task.

East & West Traffic

On the other hand, East-West traffic generally happens on the same network; as we saw before, it is normally called service-to-service communication, or IPC.

That is the place where Service Mesh lives.

gRPC is a very interesting framework if you are looking for high throughput applications or service-to-service communications.

Conclusions

Service Mesh is an interesting thing if you are playing with a microservices architecture, but I strongly recommend you understand it a little more deeply before adding a Service Mesh to your architecture stack.

There is no silver bullet when you think about software architecture, but we as software architects, developers, and others need to understand and propose the right solution considering the company context.

Kubernetes Patterns – Sidecar

Motivation

Last week, I blogged about the Ambassador Pattern.

This pattern is very important when we are trying to solve network issues in a microservices architecture; in a few words, the Ambassador is a kind of proxy that helps with service-to-service communications.

Today we’ll talk about the Sidecar Pattern. It is an interesting pattern when we are looking for help with network issues, but as we will see during this post, it enables more features for us than that.

 

Context

In the container world, we need to follow the container golden rule: a container should have one single purpose to exist. That is the most important thing to follow.

When we are developing applications using microservices as an architectural guide, we shouldn’t worry about concerns related to infrastructure, like log collection, network handling, and other orthogonal concerns. These concerns are more related to the platform where we run our service than to our application code.

We should use our platform to help us with these activities. Kubernetes is the de-facto platform to run container workloads. We can use Kubernetes to deploy a dedicated infrastructure to handle internal network traffic, Istio for example. In this case, Istio is our “platform” to help with network handling.

I’ve blogged about my first impressions of Istio and Service Mesh.

Kubernetes has a primitive called the Pod, the smallest unit of computational resources in the Kubernetes ecosystem. A Pod is able to hold multiple containers, and in that scenario the Sidecar Pattern is a perfect solution to help the main container.

Let’s look at the POD anatomy (the yellow one)

Solution

The sidecar container should add some additional functionality to the microservice container. The important part to pay attention to here is that the sidecar runs in a different process and is not able to change anything in the microservice container.

In the same Pod, containers are able to share volumes and the same network; it means the containers can reach each other via “localhost”, for example.

Let’s analyze an example.

In a real microservices architecture, we might have different services and many instances of these services; but how are we able to look at the logs effectively?

We need a centralized tool that collects these logs; we also need an effective way to query this data to find something that helps us troubleshoot and debug distributed systems.

Is it the role of the main container to send these logs to a service in the cloud? Maybe a sidecar container is able to collect these logs, since the containers share volumes, and send the data to the cloud.

The sidecar “enriches” the main container’s functionality by sending data to cloud systems. That is the main role of the sidecar container.

Look at the image below:

As we can see, the logger container sends the data to cloud storage; the logger reads data from the Pod volumes, because the containers share the disk.
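The same picture can be sketched as a Pod manifest; the container images, volume name, and paths below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logger
spec:
  volumes:
    - name: logs                        # shared by both containers
      emptyDir: {}
  containers:
    - name: microservice                # main container: business logic only
      image: example/microservice:1.0   # hypothetical image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: logger                      # sidecar: ships logs to cloud storage
      image: example/log-shipper:1.0    # hypothetical image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
```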

The microservice container doesn’t care about the logs; the main container should serve our business only.

That is one example where a sidecar container can help us by adding extra functionality to our main container.

In the Service Mesh infrastructure, the sidecar container helps us handle network issues; that is another example.

Conclusion

The Sidecar Pattern is very useful when we are working on distributed systems, especially in the container world.

It will increase our productivity because we don’t need to pay attention to infrastructure concerns, and it makes our code more concise than ever, free of infrastructure handling.

Then it is time to say goodbye to Netflix Ribbon, Netflix Eureka, and Netflix Hystrix, and move their responsibilities to the sidecar container.

References

Kubernetes Pattern Book

Microsoft Azure Docs

Kubernetes Patterns – Ambassador

 

Motivation

Recently, I’ve been studying Kubernetes in depth, mainly the part about how to use platform features to help me work with distributed architectures.

During this journey, to my surprise, I’ve found many books on Kubernetes patterns, and my god, these books opened my mind about “how to use Kubernetes effectively”.

My favorite one is Kubernetes Patterns; the book is awesome, and it is a kind of guide for me right now. The book describes many patterns and categorizes them under principles like Predictable Demands, Declarative Deployments, Health Probe, Managed Lifecycle, and Automated Placement.

Today, I’ll talk about an important pattern related to network management techniques.

Let’s talk about Ambassador Pattern.

Ambassador

Context

When we are working with distributed systems, the network is the biggest challenge to solve, remember The Fallacies of Distributed Computing.

We need an effective strategy to deal with outages, service discovery, circuit breakers, intelligent and dynamic routing rules, and time-outs.

In general, these things require a lot of configuration files involving connection, authentication, and authorization. These configurations should be dynamic as well, because in distributed systems the addresses of instances change a lot during a given timebox.

Of course, sometimes we are not able to handle these issues because our “application” cannot handle them; the framework the application is coded in doesn’t support these features.

Also, we need to remember the container golden rule: a container should exist for one single and small reason.

Maybe handling these challenges in our application code is not a good idea, especially because sometimes we need to integrate with legacy applications.

The Ambassador helps us exactly at this point; let’s see how it happens.

Solution

The Ambassador acts as a “proxy” and hides all the complexity of accessing external services.

We will put the ambassador container between our main application and its external service connections. Just to remind you, the ambassador container should be deployed in the same Kubernetes Pod where our main application container resides.

Using this simple approach, we are able to handle network failures, security, and resiliency in the ambassador container: a simple and effective way to solve these hard problems.

Look at the image below

The ambassador container should handle configuration related to service discovery, time-outs, circuit breakers, smart routing, and security.
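A sketch of that layout as a Pod manifest (the images and port are hypothetical): the main container talks to localhost, and the ambassador makes the real remote call on its behalf:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
    - name: main-app
      image: example/app:1.0       # hypothetical; connects to localhost:9000
    - name: ambassador
      image: example/proxy:1.0     # hypothetical proxy image
      ports:
        - containerPort: 9000      # the proxy adds retries, discovery,
                                   # and security before calling out
```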

Conclusion

The Ambassador Pattern is very useful when we are working with distributed systems. It reduces our main application’s complexity by taking network management out of our application code.

Remember: it will add some latency overhead. If network latency is a critical point for you, maybe you need to think carefully about adopting the ambassador.

 

References

Kubernetes Patterns book

https://docs.microsoft.com/pt-br/azure/architecture/patterns/ambassador

Releases, Deployments and Traffic Mirroring

During my journey to learn Istio and its stack, I’ve discovered some interesting concepts about deployments. The first one: I didn’t know the difference between a deployment and a release; if you don’t know it either, no problem, I’ll explain it in detail in this blog post. Also, in the next post, I’ll explain how Istio can help us achieve it.

Deployment vs Release

The first thing to do, before deep diving into strategies, is to understand the difference between these concepts. I discovered it reading Christian E. Posta’s excellent book Istio in Action. The book is still in production, so some chapters are yet to be released.

Deployment

A deployment can be described as the activity of installing new code into production, or another environment, at runtime. The important thing here is that it can’t affect users in any way; we don’t shift traffic to these artifacts yet. So we can deploy multiple versions without problems.

Release

A release is when we shift traffic to a deployment we made previously; it can affect system users, so we should plan it carefully. There are some ways to minimize the impact on users during our releases; we’ll discuss them in detail in this blog post. Working by versions also helps avoid “Big Bang” deployments, as in blue-green deployments.

Request Level or Traffic Shifting

Now that we know the main difference between deployment and release, we can discuss another critical topic: request distribution strategies, or what the best strategy is to split traffic during deployments.

There are two ways to achieve it: Traffic Shifting or Request Level routing. It is super important to understand both, because based on that you should choose the best option for your use case.

Request Level

It is kind of self-explanatory: with this technique, we can split traffic based on request header attributes and then control production traffic as we want. This strategy gives us fine-grained control over production traffic during our deployments.

For example, we can route traffic based on the client-id, from the OAuth protocol, where this specific client can be a partner testing our application in the real world.

Traffic Shifting

Traffic shifting can be an excellent option when we do not expect to “identify” users by something in the request. In this strategy, you split traffic between versions based on a percentage of calls. This strategy is a little bit simpler than Request Level routing, but it can be an exciting option to test our deployments.

Let’s talk about release strategies!!!

Dark Launch Releases

With this kind of release, we shift traffic to a new deployment for a minor part of users based on some rule, a percentage for instance. The important note here is that most users, a.k.a. most production traffic, should go to the “stable” version. The main idea is to test new features with a set of premium users and then measure adoption, or something else important for your company.

Let’s see an example using the Traffic Shifting strategy.

Dark Launch Example
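The same rule, sketched as Istio configuration (the host, subset names, and percentages are illustrative): 95% of the traffic stays on the stable version, 5% goes to the new one:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: stable
          weight: 95
        - destination:
            host: myapp
            subset: new
          weight: 5
```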

Canary Releases

The idea is very similar to dark launches, but there is a small difference: in a canary release we want to test a new version of our deployment and observe its performance and system behavior. It is not only about something new, a feature or significant change; sometimes we want to test new versions that bring performance improvements, for example. In this example we’ll use the Request Level strategy; let’s see it.

Canary Release

In the example above, we route production traffic to the new version only for client-id = 10. Other client-ids go to the stable version of our application.
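With Istio, that request-level rule could be expressed roughly like this (the host, header name, and subset names are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - match:
        - headers:
            client-id:
              exact: "10"   # only this client reaches the canary
      route:
        - destination:
            host: myapp
            subset: v2
    - route:                # everyone else stays on the stable version
        - destination:
            host: myapp
            subset: v1
```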

Traffic Mirroring

The idea of traffic mirroring is pretty simple. We keep routing real production traffic to the stable version and send a copy of the production requests to a new deployment or experimental version. The copy of the request follows the fire-and-forget principle and won’t impact real users’ requests. Mirroring traffic is an interesting technique to deliver code into production with more confidence.

The image below will show the Traffic Mirroring strategy

Traffic Mirroring Flow
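In Istio terms, a mirroring rule looks roughly like this (names illustrative): v1 still answers every request, while v2 receives a fire-and-forget copy whose response is discarded:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: v1    # real traffic is answered by v1
      mirror:
        host: myapp
        subset: v2        # copy sent to v2; response is discarded
```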

These concepts are very important to know. They help us choose the correct strategy for our deployments.

In my opinion, this knowledge is the key point guiding us to choose a successful deployment strategy.

In the next post we will learn how to do it using Istio, an open-source service mesh implementation.

References

ISTIO in Action by Christian E. Posta

Blue Green Deployments by Martin Fowler

Install ISTIO on AZURE AKS

Hello,

During my learning path to understanding Service Mesh and Istio, I decided to use some different cloud vendors. I chose Azure and Google.

I started with Google Kubernetes Engine (GKE). It was my first experience with Google Cloud Platform components, and it was amazing. The command line is well documented, and it is easy to interact with the Kubernetes APIs.

Today I will explain how to install the Istio components on Azure Cloud (Azure Kubernetes Service, or AKS), which offers managed Kubernetes on Azure infrastructure. In this post, I will use Helm to install Istio on Kubernetes.

Let’s start with some requirements:

  • Helm client (installation instructions can be found here)
  • Azure CLI (installation instructions can be found here)
  • kubectl (installation instructions can be found here)

Creating the AKS Cluster and Preparing HELM

To create the AKS Cluster we can use the following statement:
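The original command isn’t reproduced here; a sketch of it, with an illustrative resource group, cluster name, and node count, would be:

```shell
# resource group, cluster name, and node count are illustrative
az aks create \
  --resource-group istio-rg \
  --name istio-aks \
  --node-count 3 \
  --enable-rbac \
  --generate-ssh-keys
```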

Some considerations about this command:

  • I strongly recommend creating your own resource group
  • The --enable-rbac flag is mandatory to deploy Istio.

Then we need to configure our kubectl for Azure; we can do it using the az command line, like this:
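This merges the cluster credentials into our kubeconfig (the resource group and cluster names are illustrative):

```shell
az aks get-credentials --resource-group istio-rg --name istio-aks
```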

Now our kubectl is fully configured, and we can start to install Istio in our AKS cluster.

Let’s start by downloading the Istio release. The zip can be found here. We are using version 0.8.0, which is the stable version. You need to choose the release matching your target OS.

Go to the Istio root folder; then we need to create a service account for Helm, which can be done using the following command:
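The exact snippet isn’t shown here; one common way to create the Tiller service account with cluster-admin rights is:

```shell
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
```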

Good, you should see the following output:

Awesome our service account is ready.

Let’s deploy Tiller. Run the command below:
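With Helm 2, deploying Tiller with that service account is done via helm init:

```shell
helm init --service-account tiller
```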

Then we can see the following output:

Awesome, our Helm client is ready to start deploying Istio.

Installing ISTIO

Go to the Istio root folder, and then we can install Istio in our Kubernetes cluster. It can be achieved with this command:
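For Istio 0.8 the Helm chart ships inside the downloaded release, so the install looks roughly like this (run from the Istio root folder; the release and namespace names follow the Istio docs of that era):

```shell
helm install install/kubernetes/helm/istio \
  --name istio \
  --namespace istio-system
```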

Afterwards, we can check the Istio components using the following command:
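Istio installs everything into the istio-system namespace:

```shell
kubectl get pods --namespace istio-system
```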

All the pods need to be in the Running state, like in the image below:

Well done, Istio is installed in our cluster and ready to receive some microservices.

Next week I will explain how to interact with our cluster, creating some microservices and managing the cluster monitoring tools like Grafana, Jaeger, and others.

References:

Install ISTIO: https://istio.io/docs/setup/kubernetes/helm-install/

Create a cluster in AKS: https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough

 

My first impression about ISTIO and Service Mesh

Today I will talk about Istio and Service Mesh. I’m on a learning path with these concepts, to apply them in my job and to help other people interested in software architecture and development.

I’m so excited to learn it because I think this may change the way we develop microservices, specifically how we can use infrastructure to get more insights about our running applications, a.k.a. metrics, health, and routing.

This post intends to be a simple explanation of some Istio features, with some comparisons with the tools provided by the amazing Netflix OSS components.

Intro

I’m a Java developer, and for most of my working time I’ve been creating solutions with Java frameworks and libraries. The Spring Framework is an amazing framework to create web applications, and it has a brilliant and vibrant ecosystem as well.

Also, the Spring Cloud projects provide interesting solutions to help us developers build distributed applications.

In this post, we will use Kubernetes as the container orchestrator.

I had some experience coding with the Spring Cloud Netflix projects, which integrate the Spring ecosystem with Netflix OSS tools like Netflix Ribbon, Netflix Hystrix, and Netflix Eureka. These projects are really amazing and make developers’ infrastructure tasks so easy: with a couple of annotations and some configuration, a project gets interesting features like service discovery, client-side load balancing, and circuit breakers.

This kind of application, which uses the Netflix OSS tools, usually handles the network layer inside the application layer. For instance, the circuit breaker feature provided by Hystrix needs to be configured in the application layer. The retried calls are handled by our component (application) because we have Hystrix inside it. Ribbon works the same way: we need to have it on our classpath.

It generally works well, but imagine the following situation. Remember that the Netflix OSS tools only work for Java stacks; they use the Java ecosystem to achieve these features. But now we need to change the application, because it needs to handle more load with minimal resource usage, and Go fits well in this case.

The advent of microservices helps developers create small and independent applications.

We can’t lose these important features in the microservices architectural style.

Because of this situation, the Service Mesh pattern has been gaining attention in the development world. Using this pattern, we can isolate the network stack in another container (in the same Pod), called a sidecar container, which is responsible for handling all network calls independently of the language the application was built in.

In this context, Istio, as a service mesh implementation, gives us some interesting features like intelligent routing, circuit breakers, and fault injection outside of our application, with no extra code needed to achieve these features. It is amazing.

Installation

I tried different approaches to install Istio in my cluster; currently, I’m using Google Kubernetes Engine (GKE) to learn and try Istio features.

I decided to use the Helm installation, which proved to be an easier and faster way to install and delete Istio in my Kubernetes cluster. I followed the Istio installation with Helm; you can find the instructions in the Istio installation guide.

The Helm installation disables Mutual TLS authentication by default; of course, you can enable it using the Helm command line flags.

If you are using Istio version 0.8.0, the installation automatically spins up a pod which provides automatic sidecar injection into our Pods. It can make our lives easier because we can mark a namespace to enable the sidecar automatically. I used this feature and it works very well.

I strongly recommend you install the “infra” services, which Istio refers to as “add-ons”, like Grafana, Prometheus, Service Graph, and Jaeger. These add-ons help us see the cluster metrics, which Istio collects and stores in these services.

That is all we need to start playing with Istio!!

Experience

I’ve created a couple of applications; there are no complex business rules. The idea here is to try the service-to-service communications and get some metrics about them. Then I deployed these applications in the Kubernetes cluster; the important thing here is that I created standard Deployments in Kubernetes.

These Deployments have Services which export the ports; nothing special here. The crucial thing is that the Service needs to use the metadata section with a label called “app”; if you forget this, Istio will not work as expected.

Look at the following example:
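A sketch of such a Service, with the required app label (the service name and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders
  labels:
    app: orders        # the label Istio expects
spec:
  selector:
    app: orders
  ports:
    - name: http       # Istio also relies on named ports
      port: 8080
```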

Pay attention to the metadata section.

For Istio to work as expected, we need to follow some requirements, which can be found here.

After the deployments worked, I decided to try some requests to my services. And like magic, without any configuration, my Grafana instance started gathering service metrics. I was impressed at this moment by how fast it was; with a simple YAML configuration I had almost complete and interesting metrics.

Look at the Grafana instance: at this moment I hadn’t created any graphics or configuration, and the graphs already show my whole application ecosystem.

 

There are also other interesting services, like Jaeger, which allows us to collect traces of inter-service communications and measure the time of service calls.

Prometheus stores our metrics in its time series database; it is a kind of backend for Grafana. Prometheus basically collects the service metrics and stores them.

Service Graph shows our service dependencies in real time. It is an awesome app.

Conclusion

I started to play with Istio (Service Mesh) a few weeks ago, and I think these tools will help developers and DevOps folks in different ways: to get insight into the infrastructure, and maybe into application metrics usage. Another important characteristic for me is how Istio collects the application metrics. It is not intrusive, and that makes the development lifecycle easier.

Also, these tools can help companies innovate faster than ever, because they offer awesome implementations like canary deployments; companies can try their deployments without downtime and with reasonable safety.

I’m excited to study more about it; in the next weeks I will discover more features and share them on this blog.

If you can help me on the journey to learn service mesh, please share interesting articles about the topic.

Thank you

References

 

Spring Boot 2 Meets Kotlin

Hello Guys.

Today we will talk about a new feature added in Spring 5.0 and Spring Boot 2: the Kotlin support for Spring Boot applications.

Kotlin is a language created by the JetBrains team. It is a JVM language, which means it compiles to bytecode that runs on the Java Virtual Machine.

As we can see, its primary inspiration is the Scala language. There are many similar constructions in both languages, the data class concept for instance.

There are some interesting advantages when we adopt Kotlin. The most exciting is that it reduces boilerplate and makes our code more concise, which brings more maintainability and makes our code more readable.

We will understand these topics in the next examples, so it’s time to code!!!

Create the Tasks Project with Spring Initializr

We will create a simple project to manage tasks. The main idea here is to explain some interesting Kotlin points in this project.

Let’s go to Spring Initiliazr page. The project should be created with these configurations below:

The interesting points here are:

  • Maven Project
  • Kotlin Language
  • Spring Boot 2
  • Dependencies: Reactive Web, Actuator and Reactive MongoDB

Creating our TaskEntity

We need to create our main entity. The entity is pretty simple and easy to implement; the Task class should look like this:

As we can see, there are some interesting Kotlin features here. The data class keyword means the class's purpose is to hold data. Kotlin automatically adds some important behaviors, like:

  • equals() and hashCode()
  • toString()
  • copy()

There are some restrictions: data classes cannot be abstract, open, sealed or inner.
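As a quick illustration of those generated behaviors, here is a hypothetical Task data class (not the project's actual entity):

```kotlin
// Hypothetical Task data class (not the project's actual entity).
data class Task(val name: String, val done: Boolean = false)

val task = Task("write post")
val finished = task.copy(done = true) // copy() changes only the given fields

// Generated automatically by the data class:
println(task)                       // toString(): Task(name=write post, done=false)
println(task == Task("write post")) // equals(): true (structural equality)
println(finished.done)              // true
```

Note that equality is structural (field by field), not reference-based as a plain Java class would give you.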

The full data classes documentation can be found here.

Creating the Reactive Task Repository

Now we will use the Spring Data MongoDB Reactive implementation. Its behaviors are similar to the blocking version, but it does not block, because it is reactive. The way of thinking is similar: there is a DSL that uses object properties to create queries automatically.

The TaskRepository should be like this:

The interface keyword is the same as in the Java language, but the way to extend is slightly different: in Kotlin we use “:” instead of extends.
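To show that syntax outside Spring, here is a minimal, hypothetical pure-Kotlin example (the names are made up) of an interface and a class that extends it with “:”:

```kotlin
// A plain interface; the keyword works just like in Java.
interface Repository<T> {
    fun findAll(): List<T>
}

// ":" replaces Java's "implements"/"extends".
class InMemoryTaskRepository(private val tasks: List<String>) : Repository<String> {
    override fun findAll(): List<String> = tasks
}

println(InMemoryTaskRepository(listOf("write", "review")).findAll()) // [write, review]
```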

Creating the TaskService

Let's create our TaskService class; it will invoke the repository implementations. The code of TaskService should look like this:

There are a couple of interesting things here. Let's start with injection: since Spring Framework 4.3 there is no need to use @Autowired on a single class constructor, and as we can see it works as expected here as well. We use val in favor of immutability.
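A minimal sketch of the idea (hypothetical stub classes, no Spring involved): the dependency arrives through the constructor as an immutable val; in a Spring application the container would call the constructor for us, with no @Autowired needed:

```kotlin
// Hypothetical stand-in for the real repository.
class TaskRepositoryStub {
    fun count(): Long = 2L
}

// Constructor injection: the dependency is an immutable "val" property.
// Spring would call this constructor itself; here we do it by hand.
class TaskService(private val repository: TaskRepositoryStub) {
    fun taskCount(): Long = repository.count()
}

println(TaskService(TaskRepositoryStub()).taskCount()) // 2
```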

Let's understand the tasks() function. The function has no block body because the implementation is a single expression. In this case, the return type can be omitted as well. It makes the code more concise and easier to understand.
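The same mechanics can be shown with a plain toy function, unrelated to the project:

```kotlin
// Block-body form: braces, explicit return type and "return".
fun doubleIt(x: Int): Int {
    return x * 2
}

// Single-expression form: the body follows "=", the "return" keyword
// disappears, and the return type is inferred by the compiler.
fun doubleItConcise(x: Int) = x * 2

println(doubleIt(21))        // 42
println(doubleItConcise(21)) // 42
```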

We have used the same features in our other functions.

The full documentation about functions can be found here.

The REST Layer

Our REST layer should be reactive, so our methods need to return Flux or Mono. We will use single-expression functions and omit these declarations; keep in mind that to achieve reactive behavior our methods must return Flux or Mono.

The TaskResource class should be like this:

As we can see, we are doing the same as before: we have not declared the methods' return types, since the compiler can infer them for us, and this also helps prevent developer errors.

We are using @GetMapping and @PostMapping instead of @RequestMapping; it makes our code more readable.

Configuring the MongoDB connections

We will use a YAML file; it makes our configuration more readable and introduces semantics into the file. The configuration file should look like this:

There is nothing special here, just a couple of configuration entries for MongoDB and the web server.
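A sketch of what such a file can look like (the database name and ports are assumptions, not the project's actual values):

```yaml
spring:
  data:
    mongodb:
      host: localhost
      port: 27017
      database: tasks
server:
  port: 8080
```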

The Final Project Structure

Let's analyze the final project structure. You can use whatever structure you prefer; I suggest the following one:

 

Excellent, now we can run it.

Run it and try your pretty new API using Kotlin Language!!!

Awesome Job, well done!!!

The full source code can be found here.

Tip

I recommend Docker to run a MongoDB instance; it makes your life extremely easy.

Book

You can find more detailed implementations and different use cases in my book (Spring 5.0 By Example), published by Packt.

 

Thank you. In the next post, I will write about Spring Cloud Gateway and how it can help developers work with routes.

BYE.

Continuous Query With Spring Data Reactive MongoDB


In this blog post, we will take a look at how to implement continuous queries in MongoDB. We will also see a dash of Spring WebFlux in action.

Let’s do it, right now!!!

What is a Continuous Query???

It is a kind of active query: as data arrives in the database, if a piece of data matches our query, an event carrying that piece of data is emitted to our application. We can think of it as a kind of event-driven programming, using the database as an event trigger.

It is a powerful feature and enables us to add interactive behaviors to our application.
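As a toy illustration of the idea (plain Kotlin, no MongoDB involved): the query is registered once up front, and each arriving record that matches it triggers a handler:

```kotlin
// A "continuous query": a predicate registered up front plus a handler
// that fires for every arriving record matching the predicate.
class ContinuousQuery<T>(
    private val matches: (T) -> Boolean,
    private val onMatch: (T) -> Unit
) {
    // Simulates a record arriving in the database.
    fun insert(record: T) {
        if (matches(record)) onMatch(record)
    }
}

val seen = mutableListOf<Int>()
val query = ContinuousQuery<Int>({ it > 30 }, { seen.add(it) })
listOf(25, 31, 18, 40).forEach { query.insert(it) }
println(seen) // [31, 40]
```

In the real feature, MongoDB plays the role of the insert() side and our application only supplies the query and the handler.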

Capped Collections and Tailable Cursors

In MongoDB, there is a feature called Capped Collections. This kind of collection has a fixed size, supports high-throughput operations, and retrieves documents based on insertion order. We will use this kind of collection to store our data and simulate the “continuous query.”
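For reference, creating a capped collection in the mongo shell looks roughly like this (the collection name and size are assumptions):

```
db.createCollection("temperatures", { capped: true, size: 1048576 })
```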

Also, MongoDB has an exciting feature called Tailable Cursors, which we are able to use on capped collections. This kind of cursor is similar to the tail -f Unix command, which means we will retrieve the documents in natural order as they arrive.

Spring Data Reactive MongoDB

Now that we know how Capped Collections and Tailable Cursors work, it is time to use the Spring Data Reactive MongoDB stuff.

Use Case

We will simulate a kind of data collector for IoT components. The main feature of our project is to collect component data and provide an API that exposes max and min temperatures when the temperature reaches the limits configured for a specific device. The frontend team can then build a fantastic reactive page to display the device information.

Our API will have five principal operations:

  • Create Device
  • Input Temperature
  • Device Temperature Stream
  • Device Min Temperature Stream
  • Device Max Temperature Stream

Let's start with our primary entity, Temperature. The Temperature class should look like this:

As we can see, there is nothing special here; we have used Lombok annotations to remove boilerplate code.

Now that we have the Temperature class, we can create our TemperatureRepository. The interface can be declared like this:

Awesome, there is some interesting stuff here. The first thing is the @Tailable annotation, which instructs Spring to use MongoDB Tailable Cursors: when data arrives in MongoDB and matches a specific query, an event is emitted. In this case, we will watch events from a specific device. Also, take a look at our return type; we should always use Flux<T> to make the application reactive.

Time to create our service class, the implementation should be like this:

In our init() method, we declare a capped collection. First we drop the previously created collection, then we reactively create and configure the temperatures collection. Remember, this method is called only once. The other methods are simple; the main idea is to call the repository layer to retrieve the documents. The important part here is the methods' return types: always Flux<T> when we expect events.

In our RestController we will use Server-Sent Events (SSE) to add more interactive behavior to our API. The main idea is to push data to the client over a half-duplex HTTP connection.

Our DeviceResource should look like this:

Take a look at the method declarations: we have used MediaType.APPLICATION_STREAM_JSON_VALUE, which is very important to add stream behavior to our application.

Simple as that: with a couple of classes and a few lines of code, we have added a powerful feature like the “continuous query” to our application. Spring Data Reactive brings reactive characteristics to our application and makes it more resilient and cost-effective.

The full source code can be found on GitHub. There is a client that adds some data to make testing easier; the client sends data at a pre-configured interval.

The Spring 5.0 By Example Book

Recently I launched the Spring 5.0 By Example book. It contains a lot of Spring concepts and examples, using a microservices architecture and plenty of code with Spring Boot 2.

The key features of the book are:

  • Learn reactive programming by implementing a reactive application with Spring Webflux
  • Create a robust and scalable messaging application with Spring messaging support
  • Apply your knowledge to build three real-world projects in Spring

You can find the book on Amazon.

Requirements:

A MongoDB instance up and running. I recommend Docker to spin up a MongoDB container.

MongoDB should be listening on “localhost”; otherwise you can configure the connection using application.yaml or application.properties.

References:

https://docs.mongodb.com/manual/core/capped-collections/

https://docs.mongodb.com/manual/core/tailable-cursors/

https://en.wikipedia.org/wiki/Server-sent_events

 

See you. BYE.

Introduce the Spring WebFlux – A Practical Guide

Hello guys,

In my first blog post at this new address, I chose a hot topic in Java programming: Spring 5. Pivotal recently launched the new version of the framework and promoted it to GA, which means this version is production-ready; you can find the full list of features here.

The highlights for me are the full compatibility with Java 9, the Kotlin support, and finally the most talked-about feature: the new module called Spring WebFlux, which was built on a reactive foundation.

In this post, I will explain Spring WebFlux, which can help us write fully non-blocking applications based on event-loop concepts.

Before coding, I would like to explain how the event loop works and how this model differs from a traditional large thread pool with a thread-per-request execution model... let's go.

Event-loop Model

Popularized by Node.js, this style allows us to scale with a small number of threads instead of a thread pool. The concept is quite simple: a request arrives at the event loop, which, instead of blocking on resources, emits events and dispatches them to the corresponding handlers and callbacks.

Keep in mind one important thing: you should never block the execution, otherwise the whole application will be blocked.
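A heavily simplified, single-threaded Kotlin sketch of the dispatch idea (a toy, not how Netty or Node.js are actually implemented):

```kotlin
// Toy event loop: one thread drains a queue and dispatches each event
// to its handler. A handler that blocks would stall every queued event,
// which is why handlers must return quickly.
class EventLoop {
    private val queue = ArrayDeque<Pair<String, () -> Unit>>()

    fun submit(name: String, handler: () -> Unit) {
        queue.addLast(name to handler)
    }

    // Drain the queue, returning the names of the events processed, in order.
    fun run(): List<String> {
        val processed = mutableListOf<String>()
        while (queue.isNotEmpty()) {
            val (name, handler) = queue.removeFirst()
            handler()
            processed.add(name)
        }
        return processed
    }
}

val loop = EventLoop()
var handled = 0
loop.submit("request-1") { handled++ }
loop.submit("request-2") { handled++ }
println(loop.run()) // [request-1, request-2]
println(handled)    // 2
```

A single slow handler here would delay request-2 as well, which is exactly the "never block" warning above.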

This figure can help us understand the concept:

 

Some considerations before code

The Spring Framework team chose Reactor as the Reactive Streams implementation; it is quite similar to RxJava.

The most important classes are Mono, which represents a sequence of 0 or 1 elements, and Flux, which represents a sequence of 0 to N elements.

Let’s do something amazing…

My choice as a datastore was Cassandra, a popular NoSQL database, and Spring has a reactive repository for it. Let's create the reactive repository:

This is a simple repository; pay attention to one thing: you should extend the reactive version of the Cassandra repository.

Let's go to the service layer. This layer is like what we did in previous versions of Spring Framework; the only difference is that you should return Mono or Flux from your methods.

The interesting thing here is that the @Autowired annotation isn't necessary anymore; Spring detects the constructor dependency by itself.

And finally, we will create a REST layer for our application. This is quite similar to the versions before 5. Remember, we are using Reactor Netty as the web server, which makes our application reactive.

@RequestMapping, @GetMapping, and @PostMapping are unchanged; just pay attention to the return types, which need to be changed to Mono or Flux.

Conclusions

Spring renews its portfolio one more time, bringing concepts of reactive programming to your projects; you can use your previous knowledge of Spring to create amazing reactive applications.

Advice: keep in mind that reactive programming is declarative, which is totally different from imperative programming.

You can find the full code on my GitHub.

In the next post, I will explain the Kotlin support in Spring 5. See you there. BYE.