Deploying clustered Vert.x apps on Kubernetes with Infinispan
This document will show you how to deploy clustered Vert.x apps on Kubernetes with Infinispan.
What you will build
You will build a clustered Vert.x application which:
- listens to HTTP requests for the /hello URI
- extracts the HTTP query param name
- replies with a greeting such as "Hello <name> from <pod>", where
  - <name> is the query param value
  - <pod> is the name of the Kubernetes pod that generated the greeting
It consists of two parts (or microservices) communicating over the Vert.x event bus:
- The frontend handles HTTP requests. It extracts the name param, sends a request on the bus to the greetings address and forwards the reply to the client.
- The backend consumes messages sent to the greetings address, generates a greeting and replies to the frontend.
What you need
- A text editor or IDE
- Java 11 or higher
- Maven or Gradle
- Minikube or any Kubernetes cluster
- The kubectl command-line tool
Create the projects
The code of the frontend and backend projects contains Maven and Gradle build files that are functionally equivalent.
Dependencies
Both projects depend on:
- Vert.x Infinispan is a cluster manager for Vert.x based on the Infinispan in-memory key/value data store. In Vert.x a cluster manager is used for various functions. In particular, it provides discovery/membership of cluster nodes and stores event bus subscription data.
- Vert.x Web is a set of building blocks which make it easy to create HTTP applications.
- Vert.x Health Check is a component that standardizes the process of checking the different parts of your system, deducing a status and exposing it.
Containerization
To create containers we will use Jib because:
- it creates images with distinct layers for dependencies, resources and classes, thus saving build time and deployment time
- it supports both Maven and Gradle
- it requires neither Docker nor Podman
Using Maven
Here is the content of the pom.xml file you should be using for the frontend:
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>io.vertx.howtos</groupId>
  <artifactId>clustering-kubernetes-frontend</artifactId>
  <version>1.0-SNAPSHOT</version>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <verticle>io.vertx.howtos.cluster.FrontendVerticle</verticle>
  </properties>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-stack-depchain</artifactId>
        <version>5.0.0.CR2</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-launcher-application</artifactId>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-web</artifactId>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-infinispan</artifactId>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-health-check</artifactId>
    </dependency>
    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>1.5.12</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.13.0</version>
        <configuration>
          <release>11</release>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>3.5.0</version>
        <configuration>
          <systemProperties>
            <systemProperty>
              <key>java.net.preferIPv4Stack</key>
              <value>true</value>
            </systemProperty>
            <systemProperty>
              <key>vertx.jgroups.config</key>
              <value>default-configs/default-jgroups-udp.xml</value>
            </systemProperty>
          </systemProperties>
          <mainClass>${verticle}</mainClass>
        </configuration>
      </plugin>
      <plugin>
        <groupId>com.google.cloud.tools</groupId>
        <artifactId>jib-maven-plugin</artifactId>
        <version>2.4.0</version>
        <configuration>
          <to>
            <image>clustering-kubernetes/frontend</image>
          </to>
          <container>
            <mainClass>io.vertx.launcher.application.VertxApplication</mainClass>
            <args>
              <arg>${verticle}</arg>
              <arg>-cluster</arg>
            </args>
            <ports>
              <port>8080</port>
              <port>7800</port>
            </ports>
          </container>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
For the backend, the content is similar:
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.vertx.howtos</groupId>
<artifactId>clustering-kubernetes-backend</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<verticle>io.vertx.howtos.cluster.BackendVerticle</verticle>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-stack-depchain</artifactId>
<version>5.0.0.CR2</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-launcher-application</artifactId>
</dependency>
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-web</artifactId>
</dependency>
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-infinispan</artifactId>
</dependency>
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-health-check</artifactId>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.5.12</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.13.0</version>
<configuration>
<release>11</release>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>3.5.0</version>
<configuration>
<systemProperties>
<systemProperty>
<key>java.net.preferIPv4Stack</key>
<value>true</value>
</systemProperty>
<systemProperty>
<key>vertx.jgroups.config</key>
<value>default-configs/default-jgroups-udp.xml</value>
</systemProperty>
</systemProperties>
<mainClass>${verticle}</mainClass>
</configuration>
</plugin>
<plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>jib-maven-plugin</artifactId>
<version>2.4.0</version>
<configuration>
<to>
<image>clustering-kubernetes/backend</image>
</to>
<container>
<mainClass>io.vertx.launcher.application.VertxApplication</mainClass>
<args>
<arg>${verticle}</arg>
<arg>-cluster</arg>
</args>
<ports>
<port>8080</port>
<port>7800</port>
</ports>
</container>
</configuration>
</plugin>
</plugins>
</build>
</project>
Using Gradle
Assuming you use Gradle with the Kotlin DSL, here is what your build.gradle.kts
file should look like for the frontend:
build.gradle.kts
plugins {
java
application
id("com.google.cloud.tools.jib") version "2.4.0"
}
repositories {
mavenCentral()
}
val vertxVersion = "5.0.0.CR2"
val verticle = "io.vertx.howtos.cluster.FrontendVerticle"
dependencies {
implementation("io.vertx:vertx-launcher-application:${vertxVersion}")
implementation("io.vertx:vertx-web:${vertxVersion}")
implementation("io.vertx:vertx-infinispan:${vertxVersion}")
implementation("io.vertx:vertx-health-check:${vertxVersion}")
implementation("ch.qos.logback:logback-classic:1.5.12")
}
application {
applicationDefaultJvmArgs =
listOf("-Djava.net.preferIPv4Stack=true", "-Dvertx.jgroups.config=default-configs/default-jgroups-udp.xml")
mainClass = verticle
}
jib {
to {
image = "clustering-kubernetes/frontend"
}
container {
mainClass = "io.vertx.launcher.application.VertxApplication"
args = listOf(verticle, "-cluster")
ports = listOf("8080", "7800")
}
}
For the backend, the content is similar:
build.gradle.kts
plugins {
java
application
id("com.google.cloud.tools.jib") version "2.4.0"
}
repositories {
mavenCentral()
}
val vertxVersion = "5.0.0.CR2"
val verticle = "io.vertx.howtos.cluster.BackendVerticle"
dependencies {
implementation("io.vertx:vertx-launcher-application:${vertxVersion}")
implementation("io.vertx:vertx-web:${vertxVersion}")
implementation("io.vertx:vertx-infinispan:${vertxVersion}")
implementation("io.vertx:vertx-health-check:${vertxVersion}")
implementation("ch.qos.logback:logback-classic:1.5.12")
}
application {
applicationDefaultJvmArgs =
listOf("-Djava.net.preferIPv4Stack=true", "-Dvertx.jgroups.config=default-configs/default-jgroups-udp.xml")
mainClass = verticle
}
jib {
to {
image = "clustering-kubernetes/backend"
}
container {
mainClass = "io.vertx.launcher.application.VertxApplication"
args = listOf(verticle, "-cluster")
ports = listOf("8080", "7800")
}
}
Implement the services
Let’s start with the backend service. We will continue with the frontend and then test them on the development machine.
Backend service
The backend service is encapsulated in a BackendVerticle
class.
It is configured with environment variables:
backend/src/main/java/io/vertx/howtos/cluster/BackendVerticle.java
private static final int HTTP_PORT = Integer.parseInt(System.getenv().getOrDefault("HTTP_PORT", "0"));
private static final String POD_NAME = System.getenv().getOrDefault("POD_NAME", "unknown");
When the verticle starts, it registers an event bus consumer, sets up a Vert.x Web Router
and binds an HTTP server:
backend/src/main/java/io/vertx/howtos/cluster/BackendVerticle.java
@Override
public Future<?> start() {
Future<Void> registration = registerConsumer();
Router router = setupRouter();
Future<HttpServer> httpServer = vertx.createHttpServer()
.requestHandler(router)
.listen(HTTP_PORT)
.onSuccess(server -> log.info("Server started and listening on port {}", server.actualPort()));
return Future.join(registration, httpServer);
}
The event bus consumer takes messages sent to the greetings
address and formats a reply:
backend/src/main/java/io/vertx/howtos/cluster/BackendVerticle.java
private Future<Void> registerConsumer() {
return vertx.eventBus().<String>consumer("greetings", msg -> {
msg.reply(String.format("Hello %s from %s", msg.body(), POD_NAME));
}).completion();
}
The Router
exposes health and readiness checks over HTTP:
backend/src/main/java/io/vertx/howtos/cluster/BackendVerticle.java
private Router setupRouter() {
Router router = Router.router(vertx);
router.get("/health").handler(rc -> rc.response().end("OK"));
Handler<Promise<Status>> procedure = ClusterHealthCheck.createProcedure(vertx, false);
HealthChecks checks = HealthChecks.create(vertx).register("cluster-health", procedure);
router.get("/readiness").handler(HealthCheckHandler.createWithHealthChecks(checks));
return router;
}
Vert.x Infinispan provides a cluster health check out of the box. io.vertx.ext.cluster.infinispan.ClusterHealthCheck verifies the underlying Infinispan cluster status.
For local testing, a main
method is an easy way to start the verticle from the IDE:
backend/src/main/java/io/vertx/howtos/cluster/BackendVerticle.java
public static void main(String[] args) {
Vertx.clusteredVertx(new VertxOptions())
.compose(vertx -> vertx.deployVerticle(new BackendVerticle()))
.await();
}
On startup, Vert.x Infinispan uses the default networking stack which combines IP multicast for discovery and TCP connections for group messaging. This networking stack is fine for testing on our development machine. We will see later on how to switch to a stack that is suitable when deploying to Kubernetes.
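For reference, the switch boils down to a few JVM system properties, which we will set later through JAVA_TOOL_OPTIONS in the Kubernetes manifests. Below is a minimal sketch (not part of the project sources) that sets the same properties programmatically before creating the clustered Vert.x instance; the class name is hypothetical and the DNS name assumes the clustered-app headless service created later in this document, in the default namespace:
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class KubernetesStackExample {
  public static void main(String[] args) {
    // Same property names as in the exec-maven-plugin configuration above.
    System.setProperty("java.net.preferIPv4Stack", "true");
    // Kubernetes-friendly JGroups stack shipped with vertx-infinispan: DNS-based discovery plus TCP.
    System.setProperty("vertx.jgroups.config", "default-configs/default-jgroups-kubernetes.xml");
    // DNS name of the headless service used for member discovery.
    System.setProperty("jgroups.dns.query", "clustered-app.default.svc.cluster.local");

    Vertx.clusteredVertx(new VertxOptions())
      .compose(vertx -> vertx.deployVerticle(new BackendVerticle()))
      .await();
  }
}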
Frontend service
The frontend service is encapsulated in a FrontendVerticle
class.
It is configured with an environment variable:
frontend/src/main/java/io/vertx/howtos/cluster/FrontendVerticle.java
private static final int HTTP_PORT = Integer.parseInt(System.getenv().getOrDefault("HTTP_PORT", "8080"));
When the verticle starts, it sets up a Vert.x Web Router
and binds an HTTP server:
frontend/src/main/java/io/vertx/howtos/cluster/FrontendVerticle.java
@Override
public Future<?> start() {
Router router = Router.router(vertx);
setupRouter(router);
return vertx.createHttpServer()
.requestHandler(router)
.listen(HTTP_PORT)
.onSuccess(server -> log.info("Server started and listening on port {}", server.actualPort()));
}
The Router defines a GET handler for the /hello URI; it also exposes health and readiness checks over HTTP:
frontend/src/main/java/io/vertx/howtos/cluster/FrontendVerticle.java
private void setupRouter(Router router) {
router.get("/hello").handler(this::handleHelloRequest);
router.get("/health").handler(rc -> rc.response().end("OK"));
Handler<Promise<Status>> procedure = ClusterHealthCheck.createProcedure(vertx, false);
HealthChecks checks = HealthChecks.create(vertx).register("cluster-health", procedure);
router.get("/readiness").handler(HealthCheckHandler.createWithHealthChecks(checks));
}
The HTTP request handler for the /hello URI extracts the name parameter, sends a request over the event bus and forwards the reply to the client:
frontend/src/main/java/io/vertx/howtos/cluster/FrontendVerticle.java
private void handleHelloRequest(RoutingContext rc) {
vertx.eventBus().<String>request("greetings", rc.queryParams().get("name"))
.map(Message::body)
.onSuccess(reply -> rc.response().end(reply))
.onFailure(rc::fail);
}
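If no backend consumer is available, the event bus request fails once the default delivery timeout (30 seconds) expires and the router answers with an error. As a variation (not part of the project sources), the timeout could be shortened with DeliveryOptions; the method name below is hypothetical:
// Requires io.vertx.core.eventbus.DeliveryOptions.
private void handleHelloRequestWithTimeout(RoutingContext rc) {
  // Fail fast if no backend replies within 5 seconds instead of the 30-second default.
  DeliveryOptions options = new DeliveryOptions().setSendTimeout(5000);
  vertx.eventBus().<String>request("greetings", rc.queryParams().get("name"), options)
    .map(Message::body)
    .onSuccess(reply -> rc.response().end(reply))
    .onFailure(rc::fail);
}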
For local testing, a main
method is an easy way to start the verticle from the IDE:
frontend/src/main/java/io/vertx/howtos/cluster/FrontendVerticle.java
public static void main(String[] args) {
Vertx.clusteredVertx(new VertxOptions())
.compose(vertx -> vertx.deployVerticle(new FrontendVerticle()))
.await();
}
Test locally
You can start each service:
- straight from your IDE, or
- with Maven: mvn compile exec:java, or
- with Gradle: ./gradlew run (Linux, macOS) or gradlew run (Windows).
The frontend service should print a message similar to the following:
2020-07-16 16:29:39,478 [vert.x-eventloop-thread-2] INFO i.v.howtos.cluster.FrontendVerticle - Server started and listening on port 8080
The backend:
2020-07-16 16:29:40,770 [vert.x-eventloop-thread-2] INFO i.v.howtos.cluster.BackendVerticle - Server started and listening on port 38621
Take note of the backend HTTP server port. By default, it uses a random port to avoid conflict with the frontend HTTP server.
The following examples use the HTTPie command line HTTP client. Please refer to the installation documentation if you don’t have it installed on your system yet.
First, let’s send a request to the frontend for the /hello URI with the name query param set to Vert.x Clustering:
http :8080/hello name=="Vert.x Clustering"
You should see something like:
HTTP/1.1 200 OK
content-length: 36

Hello Vert.x Clustering from unknown
unknown is the default pod name used by the backend when the POD_NAME environment variable is not defined.
We can also verify the readiness of the frontend:
http :8080/readiness

HTTP/1.1 200 OK
content-length: 65
content-type: application/json;charset=UTF-8

{
  "checks": [
    {
      "id": "cluster-health",
      "status": "UP"
    }
  ],
  "outcome": "UP"
}
And the backend:
http :38621/readiness

HTTP/1.1 200 OK
content-length: 65
content-type: application/json;charset=UTF-8

{
  "checks": [
    {
      "id": "cluster-health",
      "status": "UP"
    }
  ],
  "outcome": "UP"
}
Deploy to Kubernetes
First, make sure Minikube has started with minikube status.
If you don’t use Minikube, verify that kubectl is connected to your cluster.
Push container images
There are different ways to push container images to Minikube.
In this document, we will push directly to the in-cluster Docker daemon. To do so, we must point our shell to Minikube’s docker-daemon:
eval $(minikube -p minikube docker-env)
Then, within the same shell, we can build the images with Jib:
- with Maven: mvn compile jib:dockerBuild, or
- with Gradle: ./gradlew jibDockerBuild (Linux, macOS) or gradlew jibDockerBuild (Windows).
Jib will not use the Docker daemon to build the image but only to push it.
If you don’t use Minikube, please refer to the Jib Maven or Jib Gradle plugin documentation for details about how to configure them when pushing to a registry.
Clustered app headless service
On Kubernetes, Infinispan shouldn’t use the default networking stack because most often IP multicast is not available.
Instead, we will configure it to use a stack that relies on headless service lookup for discovery and TCP connections for group messaging.
Let’s create a clustered-app headless service which selects member pods having the cluster: clustered-app label:
headless-service.yml
apiVersion: v1
kind: Service
metadata:
name: clustered-app
spec:
selector:
cluster: clustered-app
ports:
- name: jgroups
port: 7800
protocol: TCP
publishNotReadyAddresses: true
clusterIP: None
The headless service must account for pods even when not ready (publishNotReadyAddresses set to true).
Apply this configuration:
kubectl apply -f headless-service.yml
Then verify it was successful:
kubectl get services clustered-app
You should see something like:
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
clustered-app   ClusterIP   None         <none>        7800/TCP   63m
Frontend deployment and service
Let’s deploy the frontend service now.
We want at least two replicas for high availability.
To configure Vert.x Infinispan, we need to start the JVM with a few system properties:
- java.net.preferIPv4Stack
- vertx.jgroups.config: networking stack configuration file, set to default-configs/default-jgroups-kubernetes.xml
- jgroups.dns.query: the DNS name of the headless service we just created
Kubernetes also needs to know the URIs of the liveness, readiness and startup probes.
The startup probe can point to the readiness URI with different timeout settings.
frontend/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-deployment
labels:
app: frontend
spec:
replicas: 2
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
cluster: clustered-app
spec:
containers:
- name: frontend
image: clustering-kubernetes/frontend:latest
imagePullPolicy: IfNotPresent
env:
- name: JAVA_TOOL_OPTIONS
value: "-Djava.net.preferIPv4Stack=true -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml -Djgroups.dns.query=clustered-app.default.svc.cluster.local"
- name: HTTP_PORT
value: "8080"
ports:
- containerPort: 8080
- containerPort: 7800
livenessProbe:
httpGet:
path: /health
port: 8080
failureThreshold: 1
periodSeconds: 10
readinessProbe:
httpGet:
path: /readiness
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
startupProbe:
httpGet:
path: /readiness
port: 8080
failureThreshold: 30
periodSeconds: 10
Apply this configuration:
kubectl apply -f frontend/deployment.yml
Verify the pods have started successfully:
kubectl get pods
You should see something like:
NAME                                  READY   STATUS    RESTARTS   AGE
frontend-deployment-8cfd4d966-lpvsb   1/1     Running   0          4m58s
frontend-deployment-8cfd4d966-tctgv   1/1     Running   0          4m58s
We also need a service to load-balance the HTTP traffic. Pods will be selected by the app: frontend label that was defined in the deployment:
frontend/service.yml
apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 8080
type: LoadBalancer
Apply this configuration:
kubectl apply -f frontend/service.yml
Verify the service has been created successfully:
kubectl get services frontend
You should see something like:
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
frontend   LoadBalancer   10.106.16.88   <pending>     80:30729/TCP   62s
If you use Minikube, open another terminal window and run:
minikube tunnel
Minikube tunnel runs as a separate process and exposes the service to the host operating system.
If you run kubectl get services frontend
again, then the external IP should be set:
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
frontend   LoadBalancer   10.100.254.64   10.100.254.64   80:30660/TCP   30m
Take note of the external IP.
Minikube tunnel requires privilege escalation. If you are not allowed to do this, you can still access the service via its NodePort.
If you don’t use Minikube and no external IP has been assigned to your service, please refer to your cluster documentation.
Backend deployment
The backend service deployment is similar to the frontend one.
Notice that in this case:
- 3 replicas should be created
- the POD_NAME environment variable will be set in the container
backend/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-deployment
labels:
app: backend
spec:
replicas: 3
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
cluster: clustered-app
spec:
containers:
- name: backend
image: clustering-kubernetes/backend:latest
imagePullPolicy: IfNotPresent
env:
- name: JAVA_TOOL_OPTIONS
value: "-Djava.net.preferIPv4Stack=true -Dvertx.jgroups.config=default-configs/default-jgroups-kubernetes.xml -Djgroups.dns.query=clustered-app.default.svc.cluster.local"
- name: HTTP_PORT
value: "8080"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
ports:
- containerPort: 8080
- containerPort: 7800
livenessProbe:
httpGet:
path: /health
port: 8080
failureThreshold: 1
periodSeconds: 10
readinessProbe:
httpGet:
path: /readiness
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
startupProbe:
httpGet:
path: /readiness
port: 8080
failureThreshold: 30
periodSeconds: 10
Apply this configuration:
kubectl apply -f backend/deployment.yml
Verify the pods have started successfully:
kubectl get pods
You should see something like:
NAME                                  READY   STATUS    RESTARTS   AGE
backend-deployment-74d7f45c67-h7h9c   1/1     Running   0          63s
backend-deployment-74d7f45c67-r45bc   1/1     Running   0          63s
backend-deployment-74d7f45c67-r75ht   1/1     Running   0          63s
frontend-deployment-8cfd4d966-lpvsb   1/1     Running   0          15m
frontend-deployment-8cfd4d966-tctgv   1/1     Running   0          15m
Testing remotely
We can now send a request to the frontend for the /hello URI with the name query param set to Vert.x Clustering:
http 10.100.254.64/hello name=="Vert.x Clustering"
You should see something like:
HTTP/1.1 200 OK
content-length: 64

Hello Vert.x Clustering from backend-deployment-74d7f45c67-6r2g2
Notice that we can now see the name of the pod instead of the default value (unknown).
Also, if you send requests repeatedly, you will see that the backend services receive event bus requests in a round-robin fashion.
Summary
This document covered:
- dependencies required to deploy clustered Vert.x apps on Kubernetes with Infinispan
- containerization of Vert.x services with Jib
- configuration of the Vert.x Infinispan cluster manager for local testing and deployment on Kubernetes