<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>io.vertx.howtos</groupId>
  <artifactId>k8s-client-slide-lb-microservice</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <vertx.version>5.0.0.CR2</vertx.version>
    <main.verticle>io.vertx.howtos.clientsidelb.MicroServiceVerticle</main.verticle>
    <launcher.class>io.vertx.launcher.application.VertxApplication</launcher.class>
  </properties>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-stack-depchain</artifactId>
        <version>${vertx.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-core</artifactId>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-launcher-application</artifactId>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-service-resolver</artifactId>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.13.0</version>
        <configuration>
          <release>11</release>
        </configuration>
      </plugin>
      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>3.4.4</version>
        <configuration>
          <to>
            <image>client-side-lb/microservice</image>
          </to>
          <container>
            <mainClass>${launcher.class}</mainClass>
            <args>
              <arg>${main.verticle}</arg>
            </args>
            <ports>
              <port>8080</port>
            </ports>
          </container>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
Client-side load balancing on Kubernetes
This document will show you how to perform client-side load balancing on Kubernetes with a microservice.
What you will build
You will build a Vert.x microservice which:
- listens to HTTP requests for the / URI
- makes an HTTP request to a back-end service using a load balancer
- sends back the back-end service HTTP response content
It consists of a single component, named microservice, that communicates with another pod deployed in Kubernetes.
What you need
- A text editor or IDE
- Java 11 or higher
- Maven or Gradle
- Minikube or any Kubernetes cluster
- The kubectl command-line tool
Create the project
The code of the microservice project contains Maven and Gradle build files that are functionally equivalent.
Dependencies
The project depends on:
- vertx-core
- vertx-launcher-application
- vertx-service-resolver
The Service Resolver library is a Vert.x plugin that lets clients call services using logical service names instead of network addresses. It is also capable of performing client-side load balancing with the usual strategies.
Containerization
To create containers we will use Jib because:
- it creates images with distinct layers for dependencies, resources and classes, thus saving build time and deployment time
- it supports both Maven and Gradle
- it requires neither Docker nor Podman
Using Maven
Here is the content of the pom.xml
file you should be using for the microservice:
pom.xml
Using Gradle
Assuming you use Gradle with the Kotlin DSL, here is what your build.gradle.kts
file should look like for the microservice:
build.gradle.kts
plugins {
  java
  application
  id("com.google.cloud.tools.jib") version "3.4.4"
}

repositories {
  mavenCentral()
}

val vertxVersion = "5.0.0.CR2"
val verticle = "io.vertx.howtos.clientsidelb.MicroServiceVerticle"

dependencies {
  implementation("io.vertx:vertx-core:${vertxVersion}")
  implementation("io.vertx:vertx-service-resolver:${vertxVersion}")
  implementation("io.vertx:vertx-launcher-application:${vertxVersion}")
}

jib {
  to {
    image = "client-side-lb/microservice"
  }
  container {
    mainClass = "io.vertx.launcher.application.VertxApplication"
    args = listOf(verticle)
    ports = listOf("8080")
  }
}

tasks.wrapper {
  gradleVersion = "8.11.1"
}
Implement the service
Let’s implement the microservice and then test it on the development machine.
The frontend service is encapsulated in a MicroServiceVerticle
class.
The service will send requests to another pod of the Kubernetes cluster using a service address. To that end, the microservice verticle creates an HttpClient
configured with a load balancer and an address resolver.
src/main/java/io/vertx/howtos/clientsidelb/MicroServiceVerticle.java
client = vertx
  .httpClientBuilder()
  .withLoadBalancer(loadBalancer)
  .withAddressResolver(resolver)
  .build();
The address resolver takes a logical ServiceAddress
as input and returns the list of addresses the HTTP client can actually use.
The KubeResolver
is the resolver to use when deploying in Kubernetes. Notice that the resolver is created with new KubeResolverOptions()
, which is configured from the pod environment variables set by Kubernetes.
src/main/java/io/vertx/howtos/clientsidelb/MicroServiceVerticle.java
AddressResolver resolver = KubeResolver.create(new KubeResolverOptions());
The load balancer part is straightforward: we use a round-robin strategy.
src/main/java/io/vertx/howtos/clientsidelb/MicroServiceVerticle.java
LoadBalancer loadBalancer = LoadBalancer.ROUND_ROBIN;
There are other available strategies.
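Round-robin simply cycles through the resolved endpoints in order. To make the idea concrete, here is a minimal standalone sketch of round-robin selection; this is an illustration of the concept only, not the Vert.x implementation, and the RoundRobin class and pod names are made up for this example:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustration only: cycles through a fixed endpoint list the way a
// round-robin load balancer would. Vert.x resolves this list dynamically
// from the Kubernetes endpoints instead.
class RoundRobin {
  private final List<String> endpoints;
  private final AtomicInteger index = new AtomicInteger();

  RoundRobin(List<String> endpoints) {
    this.endpoints = endpoints;
  }

  String next() {
    // Math.floorMod keeps the index non-negative even after int overflow.
    return endpoints.get(Math.floorMod(index.getAndIncrement(), endpoints.size()));
  }
}
```

Calling next() repeatedly on endpoints pod-a, pod-b, pod-c yields pod-a, pod-b, pod-c, pod-a, and so on.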
We also need to create and bind an HTTP server for our service, which is straightforward:
src/main/java/io/vertx/howtos/clientsidelb/MicroServiceVerticle.java
return vertx.createHttpServer()
  .requestHandler(request -> handleRequest(request))
  .listen(8080);
Finally, let's have a look at service request handling.
First we create an HTTP client request to the back-end server. Instead of passing the back-end server socket address, we use the logical service address, which is the name of the service in Kubernetes (hello-node).
src/main/java/io/vertx/howtos/clientsidelb/MicroServiceVerticle.java
ServiceAddress serviceAddress = ServiceAddress.of("hello-node");
Future<HttpClientRequest> fut = client.request(new RequestOptions()
.setMethod(HttpMethod.GET)
.setServer(serviceAddress)
.setURI("/"));
Then we implement the back-end server response handling. We send back the original response as part of our response, decorated with the remote socket address so we can tell which pod the service interacted with.
src/main/java/io/vertx/howtos/clientsidelb/MicroServiceVerticle.java
fut.compose(r -> r.send()
    .expecting(HttpResponseExpectation.SC_OK)
    .compose(resp -> resp.body())
    .map(body -> "Response of pod " + r.connection().remoteAddress() + ": " + body + "\n"))
  .onSuccess(res -> {
    request.response()
      .putHeader("content-type", "text/plain")
      .end(res);
  })
  .onFailure(cause -> {
    request.response()
      .setStatusCode(500)
      .putHeader("content-type", "text/plain")
      .end("Error: " + cause.getMessage());
  });
Deploy to Kubernetes
First, make sure Minikube has started with minikube status
.
If you don’t use Minikube, verify that kubectl is connected to your cluster.
Push container image
There are different ways to push container images to Minikube.
In this document, we will push directly to the in-cluster Docker daemon.
To do so, we must point our shell to Minikube’s docker-daemon:
eval $(minikube -p minikube docker-env)
Then, within the same shell, we can build the images with Jib:
- with Maven: mvn compile jib:dockerBuild, or
- with Gradle: ./gradlew jibDockerBuild (Linux, macOS) or gradlew jibDockerBuild (Windows).
Jib will not use the Docker daemon to build the image but only to push it.
If you don’t use Minikube, please refer to the Jib Maven or Jib Gradle plugin documentation for details about how to configure them when pushing to a registry.
Back-end service deployment
For the sake of simplicity, we will reuse the HTTP server from the Minikube tutorial.
We simply need to create a deployment, which we will then scale to 3 pods for the purpose of this how-to.
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080
Verify the pods have started successfully:
kubectl get pods --selector=app=hello-node
You should see something like:
NAME                          READY   STATUS    RESTARTS   AGE
hello-node-66d457cb86-vndgc   1/1     Running   0          45s
Let’s increase the number of replicas to 3 for the purpose of this how-to.
kubectl scale deployment hello-node --replicas=3
And verify the new pods have started successfully:
NAME                          READY   STATUS    RESTARTS   AGE
hello-node-66d457cb86-m9nsr   1/1     Running   0          11s
hello-node-66d457cb86-vndgc   1/1     Running   0          2m51s
hello-node-66d457cb86-z6x26   1/1     Running   0          11s
Finally, we need to expose the pods as a service:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
Again, verify the service has been successfully created:
kubectl get services hello-node
You should see something like:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-node   LoadBalancer   10.101.56.23   <pending>     8080:32159/TCP   2m31s
Microservice deployment
Now we can deploy our microservice in Kubernetes.
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
  labels:
    app: microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microservice
  template:
    metadata:
      labels:
        app: microservice
    spec:
      containers:
        - name: microservice
          image: client-side-lb/microservice:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: HTTP_PORT
              value: "8080"
          ports:
            - containerPort: 8080
Apply this configuration:
kubectl apply -f deployment.yml
Verify the pods have started successfully:
kubectl get pods --selector=app=microservice
You should see something like:
NAME                                       READY   STATUS    RESTARTS   AGE
microservice-deployment-69dfcbc79c-kk85f   1/1     Running   0          117s
We also need a service to load-balance the HTTP traffic.
Pods will be selected by the label app:microservice
that was defined in the deployment:
service.yml
apiVersion: v1
kind: Service
metadata:
  name: microservice
spec:
  selector:
    app: microservice
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
Apply this configuration:
kubectl apply -f service.yml
Verify the service has been created successfully:
kubectl get services microservice
You should see something like:
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
microservice   LoadBalancer   10.109.6.50   <pending>     80:30336/TCP   2m3s
Finally, we need to configure the default service account to grant the Vert.x service resolver permission to observe the endpoints.
roles.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: observe-endpoints
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: observe-endpoints
  namespace: default
roleRef:
  kind: Role
  name: observe-endpoints
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
Apply this configuration:
kubectl apply -f roles.yml
Test the microservice
Now it is time to test our microservice and observe client side load balancing in action.
If you use Minikube, open another terminal window and run:
minikube service microservice
This opens a web browser and shows our microservice in action. You should see something like:
Hello from: 10.244.0.48:8080 with: NOW: 2024-11-27 17:18:37.179191424 +0000 UTC m=+1267.971197286
You can refresh the page to see that the IP address of the back-end pod our microservice interacted with changes.
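You can also observe the rotation from a terminal. The snippet below is a sketch: the pod_of helper (a made-up name for this example) extracts the pod address from the "Response of pod …" prefix that the verticle adds to each response, and the commented loop assumes the URL printed by minikube service microservice --url:

```shell
# Extract the pod address from a microservice response line, e.g.
# "Response of pod 10.244.0.48:8080: NOW: ..." -> "10.244.0.48:8080"
pod_of() {
  echo "$1" | sed -n 's/^Response of pod \([0-9.:]*\):.*/\1/p'
}

# Hypothetical usage against a running cluster:
#   URL=$(minikube service microservice --url)
#   for i in 1 2 3 4 5 6; do pod_of "$(curl -s "$URL")"; done | sort | uniq -c
# With 3 replicas and round-robin, the requests should spread over 3 addresses.
```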
Follow-up activities
You can go beyond and implement the following features:
- Use different load-balancing strategies
- Come up with your own load-balancer implementation
- Make the microservice interact with a gRPC service instead of a generic HTTP server
Summary
This document covered:
- the dependencies required to deploy a microservice load balancing between Kubernetes pods
- the containerization of Vert.x services with Jib