Client-side load balancing on Kubernetes

This document will show you how to perform client-side load balancing on Kubernetes with a microservice.

What you will build

You will build a Vert.x microservice which:

  • listens to HTTP requests for the / URI

  • makes an HTTP request to a back-end service using a load balancer

  • relays the back-end service's HTTP response content to the client

The application consists of a single service, named microservice, which communicates with another pod deployed in Kubernetes.

What you need

  • A text editor or IDE

  • Java 11 or higher

  • Maven or Gradle

  • Minikube or any Kubernetes cluster

  • kubectl command-line tool

Create the project

The microservice project contains Maven and Gradle build files that are functionally equivalent.

Dependencies

The project depends on the Vert.x core library and the Vert.x Service Resolver library.

The Service Resolver library is a plugin that lets Vert.x clients call services using logical service names instead of network addresses. It is also capable of performing client-side load balancing with the usual strategies.

Containerization

To create containers we will use Jib because:

  • it creates images with distinct layers for dependencies, resources and classes, thus saving build time and deployment time

  • it supports both Maven and Gradle

  • it requires neither Docker nor Podman

Using Maven

Here is the content of the pom.xml file you should be using for the microservice:
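
A minimal sketch of such a pom.xml, assuming Vert.x 5 with the vertx-service-resolver module and the Jib Maven plugin (the coordinates, versions and image name are illustrative placeholders to adapt):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>io.vertx.howtos</groupId>
  <artifactId>microservice</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <maven.compiler.release>11</maven.compiler.release>
    <!-- Illustrative versions, adapt to the latest stable releases -->
    <vertx.version>5.0.0</vertx.version>
    <jib.version>3.4.4</jib.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-core</artifactId>
      <version>${vertx.version}</version>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-service-resolver</artifactId>
      <version>${vertx.version}</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>${jib.version}</version>
        <configuration>
          <to>
            <!-- Must match the image referenced in deployment.yml -->
            <image>microservice</image>
          </to>
          <container>
            <mainClass>io.vertx.howtos.k8s.MicroServiceVerticle</mainClass>
            <ports>
              <port>8080</port>
            </ports>
          </container>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>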

Using Gradle

Assuming you use Gradle with the Kotlin DSL, here is what your build.gradle.kts file should look like for the microservice:
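
Again a minimal sketch under the same assumptions (Vert.x 5, Java 11, the Jib Gradle plugin; versions and names are placeholders to adapt):

plugins {
  java
  id("com.google.cloud.tools.jib") version "3.4.4"
}

repositories {
  mavenCentral()
}

dependencies {
  // Illustrative versions, adapt to the latest stable releases
  implementation("io.vertx:vertx-core:5.0.0")
  implementation("io.vertx:vertx-service-resolver:5.0.0")
}

java {
  toolchain {
    languageVersion.set(JavaLanguageVersion.of(11))
  }
}

jib {
  to {
    image = "microservice" // must match the image referenced in deployment.yml
  }
  container {
    mainClass = "io.vertx.howtos.k8s.MicroServiceVerticle"
    ports = listOf("8080")
  }
}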

Implement the service

Let’s implement the microservice and then test it on the development machine.

The frontend service is encapsulated in a MicroServiceVerticle class.
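
A minimal skeleton of the class, assuming Vert.x 5 (the package name is illustrative); the snippets in the rest of this section go inside the start() method:

package io.vertx.howtos.k8s;

// Imports used by the snippets in this section; the packages assume
// Vert.x 5 and the vertx-service-resolver module, adjust to your versions
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpMethod;
import io.vertx.core.http.RequestOptions;
import io.vertx.core.net.AddressResolver;
import io.vertx.core.net.endpoint.LoadBalancer;
import io.vertx.serviceresolver.ServiceAddress;
import io.vertx.serviceresolver.kube.KubeResolver;
import io.vertx.serviceresolver.kube.KubeResolverOptions;

public class MicroServiceVerticle extends AbstractVerticle {

  @Override
  public void start() {
    // Client and server setup, shown step by step below
  }

  public static void main(String[] args) {
    // Local entry point; also the container main class configured for Jib
    Vertx.vertx().deployVerticle(new MicroServiceVerticle());
  }
}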

The service will call another pod of the Kubernetes cluster using a service address. The microservice verticle creates an HttpClient configured with a load balancer and a resolver.

To that end, we create an address resolver that takes a logical ServiceAddress as input and returns a list of concrete addresses the HTTP client can use.

The KubeResolver is the resolver to use when deploying in Kubernetes. Notice that the resolver is created with new KubeResolverOptions(), which is configured from the pod environment variables set by Kubernetes.
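
In code, a sketch assuming the vertx-service-resolver API (the AddressResolver type and the KubeResolver.create factory are assumptions based on the Vert.x 5 and service-resolver modules):

// The default options pick up the Kubernetes API server location, namespace
// and credentials from the environment Kubernetes sets up in each pod
AddressResolver resolver = KubeResolver.create(new KubeResolverOptions());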

The load-balancing part is straightforward: we use a round-robin strategy.

There are other available strategies.
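
Putting it together, the client construction might look like this sketch (it assumes the Vert.x 5 HttpClientBuilder API; constants such as LoadBalancer.LEAST_REQUESTS or LoadBalancer.RANDOM can replace the round-robin strategy):

HttpClient client = vertx.httpClientBuilder()
  .withLoadBalancer(LoadBalancer.ROUND_ROBIN)  // cycle through the resolved pods
  .withAddressResolver(resolver)
  .build();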

We also need to create and bind a web server for our service, which is straightforward.
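
A minimal sketch; the 8080 port is an assumption, kept consistent with the Jib and deployment configuration:

vertx.createHttpServer()
  .requestHandler(request -> {
    // Request handling, detailed below
  })
  .listen(8080);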

Finally let’s have a look at service request handling.

First, we create an HTTP client request to the back-end server. Instead of passing the back-end server socket address, we use the logical service address, which is the name of the service in Kubernetes (hello-node).

Then we implement the handling of the back-end server response. We send the original response content back as part of our own response, decorated with the remote socket address so we can determine which server the service interacted with.
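
Inside the request handler, both steps could look like this sketch (ServiceAddress.of, RequestOptions.setServer and the fluent Future composition are assumptions based on the Vert.x 5 and service-resolver APIs):

client.request(new RequestOptions()
    .setMethod(HttpMethod.GET)
    .setServer(ServiceAddress.of("hello-node"))  // logical address, resolved by the KubeResolver
    .setURI("/"))
  .compose(clientRequest -> clientRequest.send())
  .compose(clientResponse -> clientResponse.body()
    // Decorate the payload with the socket address of the pod that answered
    .map(body -> "Hello from: " + clientResponse.remoteAddress() + " with: " + body))
  .onSuccess(payload -> request.response().end(payload))
  .onFailure(err -> request.response().setStatusCode(500).end());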

Deploy to Kubernetes

First, make sure Minikube has started with minikube status.

If you don’t use Minikube, verify that kubectl is connected to your cluster.

Push container image

There are different ways to push container images to Minikube.

In this document, we will push directly to the in-cluster Docker daemon.

To do so, we must point our shell to Minikube’s docker-daemon:

eval $(minikube -p minikube docker-env)

Then, within the same shell, we can build the images with Jib:

  • with Maven: mvn compile jib:dockerBuild, or

  • with Gradle: ./gradlew jibDockerBuild (Linux, macOS) or gradlew jibDockerBuild (Windows).

Jib will not use the Docker daemon to build the image but only to push it.
If you don’t use Minikube, please refer to the Jib Maven or Jib Gradle plugin documentation for details about how to configure them when pushing to a registry.

Back-end service deployment

For the sake of simplicity, we will reuse the HTTP server from the Minikube tutorial.

We simply need to create a deployment; we will scale it to 3 pods afterwards.

kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080

Verify the pods have started successfully:

kubectl get pods --selector=app=hello-node

You should see something like:

NAME                          READY   STATUS    RESTARTS   AGE
hello-node-66d457cb86-vndgc   1/1     Running   0          45s

Let’s increase the number of replicas to 3 for the purpose of this how-to.

kubectl scale deployment hello-node --replicas=3

And verify the new pods have started successfully:

kubectl get pods --selector=app=hello-node

You should see something like:

NAME                          READY   STATUS    RESTARTS   AGE
hello-node-66d457cb86-m9nsr   1/1     Running   0          11s
hello-node-66d457cb86-vndgc   1/1     Running   0          2m51s
hello-node-66d457cb86-z6x26   1/1     Running   0          11s

Finally, we need to expose the pods as a service:

kubectl expose deployment hello-node --type=LoadBalancer --port=8080

Again, verify the service has been successfully created:

kubectl get services hello-node

You should see something like:

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.101.56.23   <pending>     8080:32159/TCP   2m31s

Microservice deployment

Now we can deploy our microservice in Kubernetes.
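
A deployment.yml along these lines should work; the image name is an assumption that must match the Jib configuration, and imagePullPolicy is set so Kubernetes uses the image already present in Minikube's Docker daemon:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microservice
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
        - name: microservice
          image: microservice            # must match the Jib image name
          imagePullPolicy: IfNotPresent  # use the image from Minikube's Docker daemon
          ports:
            - containerPort: 8080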

Apply this configuration:

kubectl apply -f deployment.yml

Verify the pods have started successfully:

kubectl get pods --selector=app=microservice

You should see something like:

NAME                                       READY   STATUS    RESTARTS   AGE
microservice-deployment-69dfcbc79c-kk85f   1/1     Running   0          117s

We also need a service to load-balance the HTTP traffic.

Pods will be selected by the label app:microservice that was defined in the deployment:
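
A sketch of such a service.yml; port 80 matches the service listing shown below, while targetPort 8080 assumes the container port used earlier:

apiVersion: v1
kind: Service
metadata:
  name: microservice
spec:
  type: LoadBalancer
  selector:
    app: microservice
  ports:
    - port: 80
      targetPort: 8080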

Apply this configuration:

kubectl apply -f service.yml

Verify the service has been created successfully:

kubectl get services microservice

You should see something like:

NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
microservice   LoadBalancer   10.109.6.50   <pending>     80:30336/TCP   2m3s

Finally, we need to configure the default service account to grant the Vert.x service resolver permission to observe the service endpoints.
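
A sketch of such a roles.yml, assuming the resolver needs to read and watch the core Endpoints API (the role and binding names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpoints-reader
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: endpoints-reader-binding
subjects:
  - kind: ServiceAccount
    name: default
roleRef:
  kind: Role
  name: endpoints-reader
  apiGroup: rbac.authorization.k8s.io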

Apply this configuration:

kubectl apply -f roles.yml

Test the microservice

Now it is time to test our microservice and observe client-side load balancing in action.

If you use Minikube, open another terminal window and run:

minikube service microservice

This opens a web browser and shows our microservice in action. You should see something like:

Hello from: 10.244.0.48:8080 with: NOW: 2024-11-27 17:18:37.179191424 +0000 UTC m=+1267.971197286

You can refresh the page to see that the IP address of the back-end server our microservice interacted with changes from request to request: the client-side load balancer picks a different pod each time.

Follow-up activities

You can go beyond this document and implement the following features:

  • Use different load-balancing strategies

  • Come up with your own load-balancer implementation

  • Make the microservice interact with a gRPC service instead of a generic HTTP server

Summary

This document covered:

  • the dependencies required to build a microservice that load-balances requests between Kubernetes pods

  • containerization of Vert.x services with Jib