Vert.x JDBC client

This client allows you to interact with any JDBC compliant database using an asynchronous API from your Vert.x application.

The client API is represented by the interface JDBCClient.

To use this project, add the following dependency to the dependencies section of your build descriptor:

  • Maven (in your pom.xml):

<dependency>
 <groupId>io.vertx</groupId>
 <artifactId>vertx-jdbc-client</artifactId>
 <version>4.5.10</version>
</dependency>
  • Gradle (in your build.gradle file):

compile 'io.vertx:vertx-jdbc-client:4.5.10'

Using the Sql Client API

The SQL Client is the Vert.x reactive API for communicating with SQL databases. There are already several implementations of the API for popular engines such as MySQL, PostgreSQL, MSSQL and IBM DB2. However, many other engines still have no asynchronous drivers, so the JDBC Client implements the same API on top of JDBC drivers until such asynchronous support becomes available (eventually).

Creating a DB Pool

It all starts with a pool. Creating a pool is simple. The shortest example would be:

final JsonObject config = new JsonObject()
  .put("jdbcUrl", "jdbc:h2:~/test")
  .put("datasourceName", "pool-name")
  .put("username", "sa")
  .put("password", "")
  .put("max_pool_size", 16);

JDBCPool pool = JDBCPool.pool(vertx, config);

In this case we are reusing the JDBC client API to create the pool. The configuration is free form and users need to look up the properties required for the underlying connection pool of choice.

For a type-safe alternative, a second factory method is available. It ensures that the configuration is correct (its properties and types are validated by the compiler) but currently only works with the Agroal connection pool.

JDBCPool pool = JDBCPool.pool(
  vertx,
  // configure the connection
  new JDBCConnectOptions()
    // H2 connection string
    .setJdbcUrl("jdbc:h2:~/test")
    // username
    .setUser("sa")
    // password
    .setPassword(""),
  // configure the pool
  new PoolOptions()
    .setMaxSize(16)
    .setName("pool-name")
);

Using the Pool

Once you have a pool, you can start using your database. The pool operates in 2 modes:

  1. Managed connection mode

  2. Manual connection mode

When working in managed connection mode, you don’t need to worry about fetching and returning connections to the pool. Queries can be run directly against the pool, which ensures that a connection is acquired and then returned once the query has completed.

pool
  .query("SELECT * FROM user")
  .execute()
  .onFailure(e -> {
    // handle the failure
  })
  .onSuccess(rows -> {
    for (Row row : rows) {
      System.out.println(row.getString("FIRST_NAME"));
    }
  });

This is also applicable for prepared statements:

pool
  .preparedQuery("SELECT * FROM user WHERE emp_id > ?")
  // the emp id to look up
  .execute(Tuple.of(1000))
  .onFailure(e -> {
    // handle the failure
  })
  .onSuccess(rows -> {
    for (Row row : rows) {
      System.out.println(row.getString("FIRST_NAME"));
    }
  });

This mode is quite convenient as it allows you to focus on the business logic rather than on connection management. There are times, however, when order and causality need to be preserved. In those cases, execute the queries in manual connection mode:

pool
  .getConnection()
  .onFailure(e -> {
    // failed to get a connection
  })
  .onSuccess(conn -> {
    conn
      .query("SELECT * FROM user")
      .execute()
      .onFailure(e -> {
        // handle the failure

        // very important! don't forget to return the connection
        conn.close();
      })
      .onSuccess(rows -> {
        for (Row row : rows) {
          System.out.println(row.getString("FIRST_NAME"));
        }

        // very important! don't forget to return the connection
        conn.close();
      });
  });

Prepared statements are, of course, also applicable in this mode:

pool
  .getConnection()
  .onFailure(e -> {
    // failed to get a connection
  })
  .onSuccess(conn -> {
    conn
      .preparedQuery("SELECT * FROM user WHERE emp_id > ?")
      // the emp_id to look up
      .execute(Tuple.of(1000))
      .onFailure(e -> {
        // handle the failure

        // very important! don't forget to return the connection
        conn.close();
      })
      .onSuccess(rows -> {
        for (Row row : rows) {
          System.out.println(row.getString("FIRST_NAME"));
        }

        // very important! don't forget to return the connection
        conn.close();
      });
  });

Retrieving the generated keys

Retrieving generated keys is a common feature of JDBC drivers. The pool allows you to retrieve them using a special property: JDBCPool.GENERATED_KEYS. For example:

String sql = "INSERT INTO insert_table (FNAME, LNAME) VALUES (?, ?)";

pool
  .preparedQuery(sql)
  .execute(Tuple.of("Paulo", "Lopes"))
  .onSuccess(rows -> {
    // the generated keys are returned as an extra row
    Row lastInsertId = rows.property(JDBCPool.GENERATED_KEYS);
    // just refer to the position as usual:
    long newId = lastInsertId.getLong(0);
  });

Working with Procedures and Functions

Users who have used JDBC before know that calling either a function or a stored procedure requires the CallableStatement interface. For most database engines this abstraction is synthetic, as popular engines like PostgreSQL or MySQL have no special command that distinguishes callable statements from any other regular statement.

The SQL client API was designed to stay close to what wire protocols actually offer, rather than to adapt to the JDBC spec, so you will not find any dedicated way to deal with callable statements. This design choice introduces some complexity in the JDBC SQL client, as a regular call has to be adapted to respect the JDBC spec requirements and remain compatible with existing drivers.

Simple IN mapping

Mapping simple IN parameters is trivial. Consider the following procedure:

create procedure new_customer(firstname varchar(50), lastname varchar(50))
 modifies sql data
 insert into customers values (default, firstname, lastname, current_timestamp)

In order to call this procedure from the JDBC client, you would write code like this:

String sql = "{call new_customer(?, ?)}";

pool
  .preparedQuery(sql)
  // by default the "IN" argument types are extracted from the
  // type of the data in the tuple, as well as the order
  .execute(Tuple.of("Paulo", "Lopes"))
  .onFailure(e -> {
    // handle the failure
  })
  .onSuccess(rows -> {
    // ... success handler
  });

Complex IN / OUT mapping

The previous example showed how to create simple calls. However, it has some limitations: it assumes that arguments are always of type IN and that the argument type is the corresponding Java type of the value, which does not always hold, for example when dealing with null.

For these cases, the client provides the SqlOutParam helper, which lets you explicitly tag the desired type of the argument: not only the data type but also whether it is an IN or an OUT parameter.

Consider the following procedure:

create procedure customer_lastname(IN firstname varchar(50), OUT lastname varchar(50))
 modifies sql data
 select lastname into lastname from customers where firstname = firstname

This procedure returns the last names of customers when looking up a first name, so we need to map both the IN parameters and the OUT parameters.

String sql = "{call customer_lastname(?, ?)}";

pool
  .preparedQuery(sql)
  // by default the "IN" argument types are extracted from the
  // type of the data in the tuple, as well as the order
  //
  // Note that we now also declare the output parameter and its
  // type. The type can be a "String", "int" or "JDBCType" constant
  .execute(Tuple.of("John", SqlOutParam.OUT(JDBCType.VARCHAR)))
  .onFailure(e -> {
    // handle the failure
  })
  .onSuccess(rows -> {
    // we can verify whether any output was received from the callable statement
    if (rows.property(JDBCPool.OUTPUT)) {
      // and then iterate the results
      for (Row row : rows) {
        System.out.println(row.getString(0));
      }
    }
  });

Sometimes, you will need to map both IN and OUT parameters to the same variable. Again, it is all handled by the SqlOutParam helper.

SqlOutParam param;

// map IN as "int" as well as "OUT" as VARCHAR
param = SqlOutParam.INOUT(123456, JDBCType.VARCHAR);
// or
param = SqlOutParam.INOUT(123456, "VARCHAR");

// and then just add to the tuple as usual:
Tuple.of(param);

Legacy JDBC Client API

The JDBC Client API created in Vert.x 3 is now deprecated; the new SQL Client API should be used instead.

It will remain supported for the lifetime of Vert.x 4 so that applications can be migrated to the new SQL Client API.

Getting a connection

Once you’ve created a client you use getConnection to get a connection.

This will return the connection in the handler when one is ready from the pool.

client.getConnection(res -> {
  if (res.succeeded()) {

    SQLConnection connection = res.result();

    connection.query("SELECT * FROM some_table", res2 -> {
      if (res2.succeeded()) {

        ResultSet rs = res2.result();
        // Do something with results
      }
    });
  } else {
    // Failed to get connection - deal with it
  }
});

The connection is an instance of SQLConnection, a common interface used not only by the Vert.x JDBC Client.

You can learn how to use it in the common sql interface documentation.

Simple SQL Operations

There are times when you want to run a single SQL operation, e.g. a single select of a row, or an update to a set of rows, that does not need to be part of a transaction or depend on the previous or next operation.

For these cases, clients provide the boilerplate-free SQLOperations API. This interface performs the following steps for you:

  1. acquire a connection from the connection pool

  2. perform your action

  3. close and return the connection to the connection pool

An example where users get loaded from the USERS table could be:

client.query("SELECT * FROM USERS", ar -> {
  if (ar.succeeded()) {
    if (ar.succeeded()) {
      ResultSet result = ar.result();
    } else {
      // Failed!
    }
    // NOTE that you don't need to worry about
    // the connection management (e.g.: close)
  }
});

Queries, updates and calls can all be performed as simple "one shot" method calls directly on the client.

For further details on this API, please refer to the SQLOperations interface.
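
For example, a single prepared query can be run directly on the client without any manual connection handling. A minimal sketch (the USERS table and the looked-up id are illustrative):

client.querySingleWithParams("SELECT FNAME FROM USERS WHERE ID = ?", new JsonArray().add(101), ar -> {
  if (ar.succeeded()) {
    // the single row, or null if no row matched
    JsonArray row = ar.result();
    if (row != null) {
      System.out.println(row.getString(0));
    }
  } else {
    // Failed!
  }
});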

Creating the client

There are several ways to create a client. Let’s go through them all.

Using default shared data source

In most cases you will want to share a data source between different client instances.

E.g. you scale your application by deploying multiple instances of your verticle and you want each verticle instance to share the same data source so you don’t end up with multiple pools.

You do this as follows:

SQLClient client = JDBCClient.createShared(vertx, config);

The first call to JDBCClient.createShared will actually create the data source, and the specified config will be used.

Subsequent calls will return a new client instance that uses the same data source, so the configuration won’t be used.

Specifying a data source name

You can create a client specifying a data source name as follows

SQLClient client = JDBCClient.createShared(vertx, config, "MyDataSource");

If different clients are created using the same Vert.x instance and specifying the same data source name, they will share the same data source.

The first call to JDBCClient.createShared will actually create the data source, and the specified config will be used.

Subsequent calls will return a new client instance that uses the same data source, so the configuration won’t be used.

Use this way of creating if you wish different groups of clients to have different data sources, e.g. they’re interacting with different databases.

Creating a client with a non shared data source

In most cases you will want to share a data source between different client instances. However, it’s possible you want to create a client instance that doesn’t share its data source with any other client.

In that case you can use JDBCClient.create.

SQLClient client = JDBCClient.create(vertx, config);

This is equivalent to calling JDBCClient.createShared with a unique data source name each time.

Specifying a data source

If you already have a pre-existing data source, you can also create the client directly specifying that:

SQLClient client = JDBCClient.create(vertx, dataSource);

Closing the client

It’s fine to keep hold of the client for a long time (e.g. the lifetime of your verticle), but once you’re done with it you should close it.

Clients that share a data source with other client instances are reference counted. Once the last one that references the same data source is closed, the data source will be closed.
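
For example, a minimal close sketch (the handler form reports when the underlying resources have been released):

client.close(ar -> {
  if (ar.succeeded()) {
    // client closed; the data source is released once the last client using it is closed
  } else {
    // Failed!
  }
});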

Automatic clean-up in verticles

If you’re creating clients from inside verticles, the clients will be automatically closed when the verticle is undeployed.
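
As a sketch, a client created in a verticle’s start method needs no explicit close on undeploy (the verticle class name here is illustrative):

public class DatabaseVerticle extends AbstractVerticle {

  private SQLClient client;

  @Override
  public void start() {
    // created inside a verticle: closed automatically when the verticle is undeployed
    client = JDBCClient.createShared(vertx, config());
  }
}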

The SQL Connection

A connection to the database is represented by SQLConnection.

Auto-commit

When you obtain a connection auto commit is set to true. This means that each operation you perform will effectively execute in its own transaction.

If you wish to perform multiple operations in a single transaction you should set auto commit to false with setAutoCommit.

When the operation is complete, the handler will be called:

connection.setAutoCommit(false, res -> {
  if (res.succeeded()) {
    // OK!
  } else {
    // Failed!
  }
});

Executing queries

To execute a query, use query.

The query string is raw SQL that is passed through without changes to the actual database.

The handler will be called with the results, represented by ResultSet when the query has been run.

connection.query("SELECT ID, FNAME, LNAME, SHOE_SIZE from PEOPLE", res -> {
  if (res.succeeded()) {
    // Get the result set
    ResultSet resultSet = res.result();
  } else {
    // Failed!
  }
});

The ResultSet instance represents the results of a query.

The list of column names is available with getColumnNames, and the actual results are available with getResults.

The results are a list of JsonArray instances, one for each row of the results.

List<String> columnNames = resultSet.getColumnNames();

List<JsonArray> results = resultSet.getResults();

for (JsonArray row : results) {

  String id = row.getString(0);
  String fName = row.getString(1);
  String lName = row.getString(2);
  int shoeSize = row.getInteger(3);

}

You can also retrieve the rows as a list of Json object instances with getRows - this can give you a somewhat simpler API to work with, but please be aware that SQL results can contain duplicate column names - if that’s the case you should use getResults instead.

Here’s an example of iterating through the results as Json object instances:

List<JsonObject> rows = resultSet.getRows();

for (JsonObject row : rows) {

  String id = row.getString("ID");
  String fName = row.getString("FNAME");
  String lName = row.getString("LNAME");
  int shoeSize = row.getInteger("SHOE_SIZE");

}

Prepared statement queries

To execute a prepared statement query you can use queryWithParams.

This takes the query, containing the parameter placeholders, and a JsonArray of parameter values.

String query = "SELECT ID, FNAME, LNAME, SHOE_SIZE from PEOPLE WHERE LNAME=? AND SHOE_SIZE > ?";
JsonArray params = new JsonArray().add("Fox").add(9);

connection.queryWithParams(query, params, res -> {

  if (res.succeeded()) {
    // Get the result set
    ResultSet resultSet = res.result();
  } else {
    // Failed!
  }
});

Executing INSERT, UPDATE or DELETE

To execute an operation which updates the database use update.

The update string is raw SQL that is passed through without changes to the actual database.

The handler will be called with the results, represented by UpdateResult when the update has been run.

The update result holds the number of rows updated with getUpdated, and if the update generated keys, they are available with getKeys.

connection.update("INSERT INTO PEOPLE VALUES (null, 'john', 'smith', 9)", res -> {
  if (res.succeeded()) {

    UpdateResult result = res.result();
    System.out.println("Updated no. of rows: " + result.getUpdated());
    System.out.println("Generated keys: " + result.getKeys());

  } else {
    // Failed!
  }
});

Prepared statement updates

To execute a prepared statement update you can use updateWithParams.

This takes the update, containing the parameter placeholders, and a JsonArray of parameter values.

String update = "UPDATE PEOPLE SET SHOE_SIZE = 10 WHERE LNAME=?";
JsonArray params = new JsonArray().add("Fox");

connection.updateWithParams(update, params, res -> {

  if (res.succeeded()) {

    UpdateResult updateResult = res.result();

    System.out.println("No. of rows updated: " + updateResult.getUpdated());

  } else {

    // Failed!

  }
});

Callable statements

To execute a callable statement (either SQL functions or SQL procedures) you can use callWithParams.

This takes the callable statement using the standard JDBC format { call func_proc_name() }, optionally including parameter placeholders e.g.: { call func_proc_name(?, ?) }, a JsonArray containing the parameter values and finally a JsonArray containing the output types e.g.: [null, 'VARCHAR'].

Note that the indexes in the output types array are as important as those in the params array. If the return value is the second argument, then the output array must contain a null value as the first element.

A SQL function returns some output using the return keyword, and in this case one can call it like this:

String func = "{ call one_hour_ago() }";

connection.call(func, res -> {

  if (res.succeeded()) {
    ResultSet result = res.result();
  } else {
    // Failed!
  }
});

When working with procedures you can still return values from your procedures via their arguments; in the case where you do not return anything the usage is as follows:

String func = "{ call new_customer(?, ?) }";

connection.callWithParams(func, new JsonArray().add("John").add("Doe"), null, res -> {

  if (res.succeeded()) {
    // Success!
  } else {
    // Failed!
  }
});

However you can also return values like this:

String func = "{ call customer_lastname(?, ?) }";

connection.callWithParams(func, new JsonArray().add("John"), new JsonArray().addNull().add("VARCHAR"), res -> {

  if (res.succeeded()) {
    ResultSet result = res.result();
  } else {
    // Failed!
  }
});

Note that the index of the arguments matches the index of the ? placeholders and that the output parameters are expected to be Strings describing the type you want to receive.

To avoid ambiguity, implementations are expected to follow these rules:

  • When a placeholder in the IN array is NOT NULL it will be taken

  • When the IN value is NULL a check is performed on the OUT:

      • When the OUT value is not null it will be registered as an output parameter

      • When the OUT is also null it is expected that the IN value is the NULL value

The registered OUT parameters will be available as an array in the result set under the output property.
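
Building on the previous example, the registered OUT value could then be read from the result set like this (a sketch; it assumes the OUT values are exposed at the same positions at which they were registered):

connection.callWithParams(func, new JsonArray().add("John"), new JsonArray().addNull().add("VARCHAR"), res -> {
  if (res.succeeded()) {
    ResultSet result = res.result();
    // the registered OUT parameters, exposed under the "output" property
    JsonArray output = result.getOutput();
    // assumption: the value sits at the index where the OUT parameter was declared
    String lastname = output.getString(1);
  } else {
    // Failed!
  }
});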

Batch operations

The SQL common interface also defines how to execute batch operations. There are 3 types of batch operations: batched statements, batched prepared statements and batched callable statements.

A batched statement will execute a list of SQL statements, for example:

List<String> batch = new ArrayList<>();
batch.add("INSERT INTO emp (NAME) VALUES ('JOE')");
batch.add("INSERT INTO emp (NAME) VALUES ('JANE')");

connection.batch(batch, res -> {
  if (res.succeeded()) {
    List<Integer> result = res.result();
  } else {
    // Failed!
  }
});

A prepared or callable statement batch, on the other hand, will reuse the SQL statement and take a list of arguments, for example:

List<JsonArray> batch = new ArrayList<>();
batch.add(new JsonArray().add("joe"));
batch.add(new JsonArray().add("jane"));

connection.batchWithParams("INSERT INTO emp (name) VALUES (?)", batch, res -> {
  if (res.succeeded()) {
    List<Integer> result = res.result();
  } else {
    // Failed!
  }
});
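
Callable statements can be batched as well with batchCallableWithParams, which takes matching lists of IN and OUT argument arrays. A sketch reusing the new_customer procedure from above (which has no OUT parameters):

List<JsonArray> in = new ArrayList<>();
in.add(new JsonArray().add("John").add("Doe"));
in.add(new JsonArray().add("Jane").add("Doe"));

// no OUT parameters for this procedure, so the OUT arrays stay empty
List<JsonArray> out = new ArrayList<>();
out.add(new JsonArray());
out.add(new JsonArray());

connection.batchCallableWithParams("{ call new_customer(?, ?) }", in, out, res -> {
  if (res.succeeded()) {
    List<Integer> result = res.result();
  } else {
    // Failed!
  }
});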

Executing other operations

To execute any other database operation, e.g. a CREATE TABLE you can use execute.

The string is passed through without changes to the actual database. The handler is called when the operation is complete.

String sql = "CREATE TABLE PEOPLE (ID int generated by default as identity (start with 1 increment by 1) not null," +
  "FNAME varchar(255), LNAME varchar(255), SHOE_SIZE int);";

connection.execute(sql, execute -> {
  if (execute.succeeded()) {
    System.out.println("Table created !");
  } else {
    // Failed!
  }
});

Multiple ResultSet responses

In some cases your query might return more than one result set. In this case, to preserve compatibility when the returned result set object is converted to pure JSON, the subsequent result sets are chained to the current result set under the property next. A simple walk over all result sets can be achieved like this:

while (rs != null) {
  // do something with the result set...

  // next step
  rs = rs.getNext();
}

Streaming

When dealing with large data sets, it is not advisable to use the API just described; stream the data instead. Streaming avoids inflating the whole response into memory and JSON, and processes the data on a row-by-row basis, for example:

connection.queryStream("SELECT * FROM large_table", stream -> {
  if (stream.succeeded()) {
    stream.result().handler(row -> {
      // do something with the row...
    });
  }
});

You still have full control over when the stream is paused, resumed and ended. For cases where your query returns multiple result sets, use the result set ended event to fetch the next one if available. If there is more data the stream handler will receive the new data, otherwise the end handler is invoked.

connection.queryStream("SELECT * FROM large_table; SELECT * FROM other_table", stream -> {
  if (stream.succeeded()) {
    SQLRowStream sqlRowStream = stream.result();

    sqlRowStream
      .resultSetClosedHandler(v -> {
        // will ask to restart the stream with the new result set if any
        sqlRowStream.moreResults();
      })
      .handler(row -> {
        // do something with the row...
      })
      .endHandler(v -> {
        // no more data available...
      });
  }
});
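
Pausing and resuming works as on any other Vert.x read stream, for example to apply back-pressure while a row is processed asynchronously. A sketch (processRow is a hypothetical asynchronous handler):

connection.queryStream("SELECT * FROM large_table", stream -> {
  if (stream.succeeded()) {
    SQLRowStream sqlRowStream = stream.result();

    sqlRowStream
      .handler(row -> {
        // stop receiving rows while this one is processed
        sqlRowStream.pause();
        processRow(row, done -> {
          // ask for more rows once processing has finished
          sqlRowStream.resume();
        });
      })
      .endHandler(v -> {
        // no more data available...
      });
  }
});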

Using transactions

To use transactions first set auto-commit to false with setAutoCommit.

You then do your transactional operations and when you want to commit or rollback use commit or rollback.

Once the commit/rollback is complete the handler will be called and the next transaction will be automatically started.

connection.commit(res -> {
  if (res.succeeded()) {
    // Committed OK!
  } else {
    // Failed!
  }
});
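
Rolling back follows the same pattern, for example:

connection.rollback(res -> {
  if (res.succeeded()) {
    // Rolled back OK!
  } else {
    // Failed!
  }
});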

Closing connections

When you’re done with the connection you should return it to the pool with close.
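
For example (the handler form reports whether returning the connection succeeded):

connection.close(res -> {
  if (res.succeeded()) {
    // connection returned to the pool
  } else {
    // Failed!
  }
});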


Configuration

Configuration is passed to the client when creating or deploying it.

The following configuration properties generally apply:

provider_class

The class name of the class actually used to manage the database connections. By default this is the provider selected with the vertx.defaultDataSourceProvider system property, but if you want to use a different provider you can override this property and provide your own implementation.

row_stream_fetch_size

The size of the SQLRowStream internal cache, which is used to improve performance. The default is 128.

datasourceName

The name of the data source. Pool metrics are reported under this name. By default, a random UUID is generated.
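
As a sketch, these general properties go into the same JSON configuration as the provider-specific ones (the provider class name below is a hypothetical placeholder for your own DataSourceProvider implementation):

JsonObject config = new JsonObject()
  // hypothetical custom DataSourceProvider implementation
  .put("provider_class", "com.example.MyDataSourceProvider")
  // rows fetched per batch by SQLRowStream
  .put("row_stream_fetch_size", 256)
  // name under which pool metrics are reported
  .put("datasourceName", "my-datasource");

SQLClient client = JDBCClient.createShared(vertx, config);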

The default provider is defined by the vertx.defaultDataSourceProvider system property, which can be set to Hikari, Agroal or C3P0; if it is not set, a default is used (C3P0 for the legacy JDBC client and Agroal for the SQL client).

Assuming the C3P0 implementation is being used (the default), the following extra configuration properties apply:

url

the JDBC connection URL for the database

driver_class

the class of the JDBC driver

user

the username for the database

password

the password for the database

max_pool_size

the maximum number of connections to pool - default is 15

initial_pool_size

the number of connections to initialise the pool with - default is 3

min_pool_size

the minimum number of connections to pool

max_statements

the maximum number of prepared statements to cache - default is 0.

max_statements_per_connection

the maximum number of prepared statements to cache per connection - default is 0.

max_idle_time

number of seconds after which an idle connection will be closed - default is 0 (never expire).

Other Connection Pool providers are:

  • Hikari

  • Agroal

Like C3P0, they can be configured by passing the configuration values in the JSON config object. For the special case where you do not want to deploy your app as a fat jar but run it with a Vert.x distribution, Agroal is recommended if you have no write permissions to add the JDBC driver to the Vert.x lib directory and are passing it using the -cp command line flag.

If you want to configure any other C3P0 properties, you can add a file c3p0.properties to the classpath.

Here’s an example of configuring the client:

JsonObject config = new JsonObject()
  .put("url", "jdbc:hsqldb:mem:test?shutdown=true")
  .put("driver_class", "org.hsqldb.jdbcDriver")
  .put("max_pool_size", 30);

SQLClient client = JDBCClient.createShared(vertx, config);

Hikari uses a different set of properties:

  • jdbcUrl for the JDBC URL

  • driverClassName for the JDBC driver class name

  • maximumPoolSize for the pool size

  • username for the login (password for the password)

Refer to the Hikari documentation for further details. Also refer to the Agroal documentation to configure Agroal.
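
As an illustration, assuming the Hikari provider has been selected (via provider_class or the vertx.defaultDataSourceProvider system property), a configuration might look like this; the connection values are placeholders:

JsonObject config = new JsonObject()
  .put("jdbcUrl", "jdbc:hsqldb:mem:test?shutdown=true")
  .put("driverClassName", "org.hsqldb.jdbcDriver")
  .put("username", "sa")
  .put("password", "")
  .put("maximumPoolSize", 30);

SQLClient client = JDBCClient.createShared(vertx, config);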

JDBC Drivers

If you are using the default DataSourceProvider (relying on C3P0), you need to put the JDBC driver on your classpath.

If your application is packaged as a fat jar, be sure to embed the JDBC driver. If your application is launched with the vertx command line, copy the JDBC driver to ${VERTX_HOME}/lib.

The behavior may be different when using a different connection pool.

Data types

Since Vert.x supports many SQL databases from different vendors, we rely on the JDBC 4.2 specification to make a best effort at converting Java types to SQL types and vice versa.

If there are limitations on the data types accepted by your specific SQL database, you can override the standard out-of-the-box behaviour with your own:

  • The encoder JDBCEncoder converts Java input types to SQL types. The default implementation is io.vertx.ext.jdbc.spi.impl.JDBCEncoderImpl.

  • The decoder JDBCDecoder converts SQL result types to Java types. The default implementation is io.vertx.ext.jdbc.spi.impl.JDBCDecoderImpl.

You can register the overriding classes in the SPI files META-INF/services/io.vertx.ext.jdbc.spi.JDBCEncoder and META-INF/services/io.vertx.ext.jdbc.spi.JDBCDecoder so they are loaded at runtime, or you can configure them directly in the configuration with two properties:

  • encoderCls

  • decoderCls

JsonObject options = new JsonObject().put("url", "your_jdbc_url")
  .put("user", "your_database_user")
  .put("password", "your_database_password")
  .put("encoderCls", encoderClass.getName())
  .put("decoderCls", decoderClass.getName());
return JDBCClient.createShared(vertx, options);
JsonObject extraOptions = new JsonObject()
  .put("encoderCls", encoderClass.getName())
  .put("decoderCls", decoderClass.getName());
JDBCConnectOptions options = new JDBCConnectOptions().setJdbcUrl("your_jdbc_url")
  .setUser("your_database_user")
  .setPassword("your_database_password");
PoolOptions poolOptions = new PoolOptions().setMaxSize(1);
 DataSourceProvider provider = new AgroalCPDataSourceProvider().init(extraOptions);
return JDBCPool.pool(vertx, options, poolOptions, provider);