
A Vert.x client allowing applications to interact with a Cassandra service.

This module has Tech Preview status, which means the API can change between versions.

Using the Cassandra Client for Vert.x

To use this module, add the following to the dependencies section of your Maven POM file:

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-cassandra-client</artifactId>
  <version>3.6.3</version>
</dependency>

Or, if you use Gradle:

compile 'io.vertx:vertx-cassandra-client:3.6.3'
The Cassandra client is not compatible with the Vert.x Dropwizard Metrics library: the two depend on different major versions of the Dropwizard Metrics library, and the Cassandra driver won't upgrade to the most recent version because that would require dropping Java 7 support. The next major version of the Cassandra driver (4.x) will use a more recent Dropwizard Metrics version.

Creating a client

Cassandra is a distributed system that can have many nodes. To connect to Cassandra, you need to specify the addresses of some cluster nodes when creating a CassandraClientOptions object:

def options = [
  // these contact points are placeholder addresses; use your own nodes
  contactPoints:["node1.address", "node2.address"]
]
def client = CassandraClient.createShared(vertx, options)
By default, the Cassandra client for Vert.x will connect to the local machine’s port 9042.


After the client is created, you can connect to the cluster:

cassandraClient.connect({ connect ->
  if (connect.succeeded()) {
    println("Just connected")
  } else {
    println("Unable to connect")
  }
})

Disconnecting works in a similar way:

cassandraClient.disconnect({ disconnect ->
  if (disconnect.succeeded()) {
    println("Just disconnected")
  } else {
    println("Unable to disconnect")
  }
})

Using the API

The client API is represented by CassandraClient.


You can get query results in three different ways.


The streaming API is most appropriate when you need to consume results iteratively, e.g. when you want to process each row as it arrives. It is very efficient, especially for a large number of rows.

To give you some inspiration and ideas on how you can use the API, we'd like you to consider this example:

cassandraClient.queryStream("SELECT my_string_col FROM my_keyspace.my_table where my_key = 'my_value'", { queryStream ->
  if (queryStream.succeeded()) {
    def stream = queryStream.result()

    // resume the stream when the response queue is ready to accept buffers again
    response.drainHandler({ v ->
      stream.resume()
    })

    stream.handler({ row ->
      def value = row.getString("my_string_col")
      response.write(value)

      // pause the row stream when the response write queue is full
      if (response.writeQueueFull()) {
        stream.pause()
      }
    })

    // end the response when we reach the end of the stream
    stream.endHandler({ end ->
      response.end()
    })
  } else {
    // respond with an internal server error if we are not able to execute the given query
    response.setStatusCode(500).end("Unable to execute the query")
  }
})

In the example, we execute a query and stream the results via HTTP; response here is the HttpServerResponse of the HTTP request being served.
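The pause/resume contract the example relies on can be sketched without Vert.x at all. The following is an illustrative model in plain Java (the class and method names are invented for the sketch, not part of any API): the producer stops filling the write queue when it is full and continues once the consumer has drained it.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative model of stream backpressure: not the Vert.x API,
// just the pause/resume contract the streaming example relies on.
class BackpressureModel {
    static final int QUEUE_CAPACITY = 3;
    final Queue<String> writeQueue = new ArrayDeque<>();
    final List<String> delivered = new ArrayList<>();
    boolean paused = false;

    boolean writeQueueFull() {
        return writeQueue.size() >= QUEUE_CAPACITY;
    }

    // Producer side: emit rows, pausing whenever the queue is full.
    void emit(List<String> rows) {
        for (String row : rows) {
            writeQueue.add(row);
            if (writeQueueFull()) {
                paused = true;   // stream.pause()
                drain();         // consumer catches up, then the drain handler fires
                paused = false;  // stream.resume()
            }
        }
        drain(); // endHandler: flush whatever is left
    }

    // Consumer side: empty the queue, making room for more rows.
    void drain() {
        while (!writeQueue.isEmpty()) {
            delivered.add(writeQueue.remove());
        }
    }

    static List<String> run(int rowCount) {
        BackpressureModel model = new BackpressureModel();
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < rowCount; i++) rows.add("row-" + i);
        model.emit(rows);
        return model.delivered;
    }

    public static void main(String[] args) {
        // All rows are delivered in order, none lost while "paused".
        System.out.println(run(10));
    }
}
```

In the real client, pause() and resume() are methods on the row stream and the drain handler is invoked by Vert.x; the model only shows why no rows are lost while the stream is paused.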

Bulk fetching

This API should be used when you need to process all the rows at the same time.

cassandraClient.executeWithFullFetch("SELECT * FROM my_keyspace.my_table where my_key = 'my_value'", { executeWithFullFetch ->
  if (executeWithFullFetch.succeeded()) {
    def rows = executeWithFullFetch.result()
    rows.each { row ->
      // handle each row here
    }
  } else {
    println("Unable to execute the query")
  }
})
Use bulk fetching only if you can afford to load the full result set in memory.

Low level fetch

This API provides greater control over loading at the expense of being a bit lower-level than the streaming and bulk fetching APIs.

cassandraClient.execute("SELECT * FROM my_keyspace.my_table where my_key = 'my_value'", { execute ->
  if (execute.succeeded()) {
    def resultSet = execute.result()

    resultSet.one({ one ->
      if (one.succeeded()) {
        def row = one.result()
        println("One row successfully fetched")
      } else {
        println("Unable to fetch a row")
      }
    })

    resultSet.fetchMoreResults({ fetchMoreResults ->
      if (fetchMoreResults.succeeded()) {
        def availableWithoutFetching = resultSet.getAvailableWithoutFetching()
        println("Now we have ${availableWithoutFetching} rows fetched, but not consumed!")
        if (resultSet.isFullyFetched()) {
          println("The result is fully fetched, we don't need to call this method again!")
        } else {
          println("The result is still not fully fetched")
        }
      } else {
        println("Unable to fetch more results")
      }
    })
  } else {
    println("Unable to execute the query")
  }
})
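To make the paging semantics of getAvailableWithoutFetching(), isFullyFetched() and fetchMoreResults() concrete, here is a rough self-contained model in plain Java. It is illustrative only (the class and its behavior are invented for the sketch, not the driver's implementation): rows arrive page by page, and the three methods describe purely local state.

```java
import java.util.ArrayList;
import java.util.List;

// Rough model of driver-style paging. Illustrative only; not the
// Vert.x or DataStax driver API.
class PagingModel {
    final List<List<String>> pages;                  // pages still on the "server"
    final List<String> buffered = new ArrayList<>(); // fetched but not yet consumed
    int nextPage = 0;

    PagingModel(List<List<String>> pages) {
        this.pages = pages;
        fetchMoreResults(); // the first page arrives with the query itself
    }

    // How many rows can be consumed without touching the network.
    int getAvailableWithoutFetching() {
        return buffered.size();
    }

    // True once every page has been pulled into the local buffer.
    boolean isFullyFetched() {
        return nextPage >= pages.size();
    }

    // Pull one more page into the local buffer.
    void fetchMoreResults() {
        if (!isFullyFetched()) {
            buffered.addAll(pages.get(nextPage++));
        }
    }

    // Consume one buffered row, or null if none is available locally.
    String one() {
        return buffered.isEmpty() ? null : buffered.remove(0);
    }

    // Drain the whole result set page by page.
    static List<String> drain(List<List<String>> pages) {
        PagingModel rs = new PagingModel(pages);
        List<String> consumed = new ArrayList<>();
        while (true) {
            String row = rs.one();
            if (row != null) {
                consumed.add(row);
            } else if (!rs.isFullyFetched()) {
                rs.fetchMoreResults();
            } else {
                return consumed;
            }
        }
    }

    public static void main(String[] args) {
        List<List<String>> pages = List.of(List.of("a", "b"), List.of("c"));
        System.out.println(drain(pages));
    }
}
```

The real client performs the fetch asynchronously (hence the handler passed to fetchMoreResults), but the bookkeeping it exposes follows the same shape.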

Prepared queries

For security and efficiency reasons, it is a good idea to use prepared statements for all the queries you are using more than once.
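The security part of that advice can be illustrated with a small self-contained sketch (plain Java; illustrative only — the real driver sends bound values separately from the statement text rather than splicing them in): concatenating user input into CQL lets a crafted value rewrite the statement, while a placeholder keeps the statement text fixed.

```java
// Illustrative comparison: string concatenation vs. a bound placeholder.
// Not the driver API; real prepared statements transmit values out of band.
class PreparedSketch {
    // Naive concatenation: the user-supplied value becomes part of the CQL text.
    static String concatenated(String userInput) {
        return "SELECT * FROM my_keyspace.my_table where my_key = '" + userInput + "'";
    }

    // With a placeholder, the statement text never changes; only the binding does.
    static String prepared() {
        return "SELECT * FROM my_keyspace.my_table where my_key = ?";
    }

    public static void main(String[] args) {
        String hostile = "x' OR my_key > '";
        // The hostile value rewrites the concatenated query...
        System.out.println(concatenated(hostile));
        // ...but leaves the prepared statement text untouched.
        System.out.println(prepared());
    }
}
```

The efficiency benefit is separate: the server parses and plans a prepared statement once, and subsequent executions only ship the bound values.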

You can prepare a query:

cassandraClient.prepare("SELECT * FROM my_keyspace.my_table where my_key = ? ", { preparedStatementResult ->
  if (preparedStatementResult.succeeded()) {
    println("The query has successfully been prepared")
    def preparedStatement = preparedStatementResult.result()
    // now you can use this PreparedStatement object for the next queries
  } else {
    println("Unable to prepare the query")
  }
})

And then use the PreparedStatement for subsequent queries:

// You can execute your prepared statement using any of the ways to execute queries.

// Low level fetch API
cassandraClient.execute(preparedStatement.bind("my_value"), { done ->
  def results = done.result()
  // handle results here
})

// Bulk fetching API
cassandraClient.executeWithFullFetch(preparedStatement.bind("my_value"), { done ->
  def results = done.result()
  // handle results here
})

// Streaming API
cassandraClient.queryStream(preparedStatement.bind("my_value"), { done ->
  def results = done.result()
  // handle results here
})


If you'd like to execute several queries at once, you can use a BatchStatement:

def batchStatement = new com.datastax.driver.core.BatchStatement()
  .add(new com.datastax.driver.core.SimpleStatement("INSERT INTO NAMES (name) VALUES ('Pavel')"))
  .add(new com.datastax.driver.core.SimpleStatement("INSERT INTO NAMES (name) VALUES ('Thomas')"))
  .add(new com.datastax.driver.core.SimpleStatement("INSERT INTO NAMES (name) VALUES ('Julien')"))

cassandraClient.execute(batchStatement, { result ->
  if (result.succeeded()) {
    println("The given batch executed successfully")
  } else {
    println("Unable to execute the batch")
  }
})