plugins {
    java
    application
    id("com.github.johnrengelman.shadow") version "5.0.0"
}

repositories {
    mavenCentral()
}

dependencies {
    val vertxVersion = "3.7.0"
    implementation("io.vertx:vertx-web:${vertxVersion}")
}

application {
    mainClassName = "io.vertx.howtos.openj9.Main"
}

tasks.wrapper {
    gradleVersion = "5.4.1"
}
Running Eclipse Vert.x applications with Eclipse OpenJ9
This how-to provides some tips for running Vert.x applications with OpenJ9, an alternative Java Virtual Machine built on top of OpenJDK that is gentle on memory usage.
Vert.x is a resource-efficient toolkit for building all kinds of modern distributed applications, and OpenJ9 is a resource-efficient runtime that is well-suited for virtualized and containerized deployments.
What you will build and run
- You will build a simple micro-service that computes the sum of two numbers through an HTTP JSON endpoint.
- We will look at the options for improving startup time with OpenJ9.
- We will measure the resident set size (RSS) memory footprint of OpenJ9 under a workload.
- You will build a Docker image for the micro-service and OpenJ9.
- We will discuss how to improve the startup time of Docker containers and how to tune OpenJ9 in that environment.
What you need
- A text editor or IDE
- Java 8 or higher
- OpenJ9 (we recommend a build from AdoptOpenJDK)
- Maven or Gradle
- Docker
- Locust to generate some workload
Create a project
The code of this project contains Maven and Gradle build files that are functionally equivalent.
With Gradle
The build.gradle.kts file that you should be using is reproduced at the top of this document.
With Maven
Here is the content of the pom.xml file that you should be using:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>io.vertx.howtos</groupId>
  <artifactId>openj9-howto</artifactId>
  <version>1.0-SNAPSHOT</version>

  <properties>
    <vertx.version>3.7.0</vertx.version>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>io.vertx</groupId>
        <artifactId>vertx-stack-depchain</artifactId>
        <version>${vertx.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

  <dependencies>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-core</artifactId>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-web</artifactId>
    </dependency>
    <dependency>
      <groupId>io.vertx</groupId>
      <artifactId>vertx-web-api-contract</artifactId>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.2.1</version>
        <configuration>
          <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
              <mainClass>io.vertx.howtos.openj9.Main</mainClass>
            </transformer>
          </transformers>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

</project>
Writing the service
The service exposes an HTTP server and fits within a single Java class:
package io.vertx.howtos.openj9;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;
import io.vertx.ext.web.handler.BodyHandler;

public class Main extends AbstractVerticle {

  private static long startTime;

  @Override
  public void start(Future<Void> future) {
    Router router = Router.router(vertx);
    router.post().handler(BodyHandler.create());
    router.post("/sum").handler(this::sum);
    vertx.createHttpServer()
      .requestHandler(router)
      .listen(8080, ar -> {
        if (ar.succeeded()) {
          System.out.println("Started in " + (System.currentTimeMillis() - startTime) + "ms");
          future.complete();
        } else {
          future.fail(ar.cause());
        }
      });
  }

  private void sum(RoutingContext context) {
    JsonObject input = context.getBodyAsJson();
    Integer a = input.getInteger("a", 0);
    Integer b = input.getInteger("b", 0);
    JsonObject response = new JsonObject().put("sum", a + b);
    context.response()
      .putHeader("Content-Type", "application/json")
      .end(response.encode());
  }

  public static void main(String[] args) {
    startTime = System.currentTimeMillis();
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new Main());
  }
}
We can run the service:
$ ./gradlew run
and then test it with HTTPie:
$ http :8080/sum a:=1 b:=2
HTTP/1.1 200 OK
Content-Type: application/json
content-length: 9

{
    "sum": 3
}
We can also build a JAR archive with all dependencies bundled, then execute it:
$ ./gradlew shadowJar
$ java -jar build/libs/openj9-howto-all.jar
Improving startup time
The micro-service reports the startup time by measuring the time between the main
method entry, and the callback notification when the HTTP server has started.
We can do a few runs of java -jar build/libs/openj9-howto-all.jar
and pick the best time. On my machine the best I got was 639ms.
OpenJ9 offers both an ahead-of-time compiler and a shared class data cache for improving startup time as well as reducing memory consumption. The first run is typically costly, but all subsequent runs benefit from the cache, which is also regularly updated.
The relevant OpenJ9 flags are the following:
- -Xshareclasses: enables class sharing
- -Xshareclasses:name=NAME: a name for the cache, typically one per application
- -Xshareclasses:cacheDir=DIR: a folder for storing the cache files
Let us do a few runs of:
$ java -Xshareclasses -Xshareclasses:name=sum -Xshareclasses:cacheDir=_cache -jar build/libs/openj9-howto-all.jar
On my machine the first run takes 1098ms, which is way more than 639ms! However, all subsequent runs are near 300ms, with a best of 293ms, which is very good for a JVM application start time.
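To put those figures in perspective, a quick back-of-the-envelope check (the 639ms and 293ms values are the best times measured above; your machine will differ):

```python
# Best observed start times from the runs above (machine-dependent)
baseline_ms = 639  # plain java -jar, no shared classes cache
cached_ms = 293    # warm -Xshareclasses cache

improvement = 1 - cached_ms / baseline_ms
print(f"{improvement:.0%} faster")  # roughly half the startup time shaved off
```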
Memory usage
Let us now measure the memory usage of the micro-service with OpenJ9 and compare with OpenJDK.
This is not a rigorous benchmark. You have been warned 😉
Generate some workload
We are using Locust to generate some workload. The locustfile.py
file contains the code to simulate users that perform sums of random numbers:
from locust import *
import random
import json

class UserTasks(TaskSet):

    @task
    def ping(self):
        data = json.dumps({"a": random.randint(1, 100), "b": random.randint(1, 100)})
        self.client.post("http://localhost:8080/sum", data=data, name="Sum", headers={"content-type": "application/json"})

class Client(HttpLocust):
    task_set = UserTasks
    host = "localhost"
    min_wait = 500
    max_wait = 1000
We can then run locust, and connect to http://localhost:8089 to start a test. Let us simulate 100 users with a hatch rate of 10 new users per second. This gives us about 130 requests per second.
Measuring RSS
The Quarkus team has a good guide on measuring RSS. On Linux you can use either ps or pmap to measure RSS, while on macOS ps will do. I am using macOS, so once I have the process id of a running application I can get its RSS as follows:
$ ps x -o pid,rss,command -p 99425
  PID    RSS COMMAND
99425  89844 java -jar build/libs/openj9-howto-all.jar
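Note that ps reports RSS in kilobytes, so the 89844 figure above corresponds to roughly 88 MB:

```python
rss_kb = 89844  # RSS column from the ps output above, in kilobytes
print(f"{rss_kb / 1024:.0f} MB")
```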
For all measures we start Locust and let it warm up the micro-service. After a minute we reset the stats and restart a test, then look at the RSS and the 99% latency. We will run the application first with no tuning, then with a limited maximum heap size (see the -Xmx flag).
With OpenJDK 11 and no tuning:
- RSS: ~446 MB
- 99% latency: 8ms

With OpenJDK 11 and -Xmx8m:
- RSS: ~111 MB
- 99% latency: 8ms

With OpenJ9/OpenJDK 11 and no tuning:
- RSS: ~84 MB
- 99% latency: 8ms

With OpenJ9/OpenJDK 11 and -Xmx8m:
- RSS: ~63 MB
- 99% latency: 9ms
OpenJ9 is clearly very efficient with respect to memory consumption, without compromising the latency.
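For a rough comparison, here are the same figures relative to the untuned OpenJDK baseline (the numbers are the RSS measurements listed above):

```python
# RSS measurements from above, in MB
rss_mb = {
    "OpenJDK 11": 446,
    "OpenJDK 11 -Xmx8m": 111,
    "OpenJ9": 84,
    "OpenJ9 -Xmx8m": 63,
}
baseline = rss_mb["OpenJDK 11"]
for name, mb in rss_mb.items():
    print(f"{name}: {mb} MB ({mb / baseline:.0%} of the untuned OpenJDK RSS)")
```

Even without tuning, OpenJ9 uses about a fifth of the untuned HotSpot footprint here.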
As usual, take these numbers with a grain of salt and perform your own measurements on your own services, with a workload that is representative of your usage.
Building and running a Docker image
We have seen how gentle OpenJ9 is on memory, even without tuning. Let us now package the micro-service as a Docker image.
Here is the Dockerfile you can use:
FROM adoptopenjdk/openjdk12-openj9:alpine-slim
RUN mkdir -p /app/_cache
COPY build/libs/openj9-howto-all.jar /app/app.jar
VOLUME /app/_cache
EXPOSE 8080
CMD ["java", "-Xvirtualized", "-Xshareclasses", "-Xshareclasses:name=sum", "-Xshareclasses:cacheDir=/app/_cache", "-jar", "/app/app.jar"]
You can note that:
- -Xvirtualized is a flag for virtualized / container environments, telling OpenJ9 to reduce CPU consumption when idle
- /app/_cache is a volume that will have to be mounted for containers to share the OpenJ9 shared classes cache.
The image can be built as in:
$ docker build . -t openj9-app
We can then create containers from the image:
$ docker run -it --rm -v /tmp/_cache:/app/_cache -p 8080:8080 openj9-app
Again the first container is slower to start, while the next ones benefit from the cache.
Summary
- We wrote a micro-service with Vert.x.
- We ran this micro-service on OpenJ9.
- We improved startup time using class data sharing.
- We put the micro-service under some workload, then checked that the memory footprint remained low with OpenJ9 compared to OpenJDK with HotSpot.
- We built a Docker image with OpenJ9, using class data sharing for fast container boot times and reduced CPU usage when idle.