Vert.x MongoDB Client

A Vert.x client allowing applications to interact with a MongoDB instance, whether that’s saving, retrieving, searching, or deleting documents. Mongo is a great match for persisting data in a Vert.x application as it natively handles JSON (BSON) documents.

Features

  • Completely non-blocking

  • Custom codec to support fast serialization to/from Vert.x JSON

  • Supports a majority of the configuration options from the MongoDB Java Driver

This client is based on the MongoDB ReactiveStreams Driver.

Using Vert.x MongoDB Client

To use this project, add the following dependency to the dependencies section of your build descriptor:

  • Maven (in your pom.xml):

<dependency>
 <groupId>io.vertx</groupId>
 <artifactId>vertx-mongo-client</artifactId>
 <version>4.5.11</version>
</dependency>
  • Gradle (in your build.gradle file):

compile 'io.vertx:vertx-mongo-client:4.5.11'

Creating a client

You can create a client in several ways:

Using the default shared pool

In most cases you will want to share a pool between different client instances.

For example, if you scale your application by deploying multiple instances of your verticle, you will usually want each verticle instance to share the same pool so you don’t end up with multiple pools.

The simplest way to do this is as follows:

MongoClient client = MongoClient.createShared(vertx, config);

The first call to MongoClient.createShared will actually create the pool, and the specified config will be used.

Subsequent calls will return a new client instance that uses the same pool, so the configuration won’t be used.

Specifying a pool source name

You can create a client specifying a pool source name as follows:

MongoClient client = MongoClient.createShared(vertx, config, "MyPoolName");

If different clients are created using the same Vert.x instance and specifying the same pool name, they will share the same pool.

The first call to MongoClient.createShared will actually create the pool, and the specified config will be used.

Subsequent calls will return a new client instance that uses the same pool, so the configuration won’t be used.

Use this way of creating clients if you wish different groups of clients to have different pools, e.g. when they interact with different databases.

Creating a client with a non-shared data pool

In most cases you will want to share a pool between different client instances. However, it’s possible you want to create a client instance that doesn’t share its pool with any other client.

In that case you can use MongoClient.create.

MongoClient client = MongoClient.create(vertx, config);

This is equivalent to calling MongoClient.createShared with a unique pool name each time.

Using the API

The client API is represented by MongoClient.

Saving documents

To save a document you use save.

If the document has no _id field, it is inserted; otherwise, it is upserted. Upserted means it is inserted if it doesn’t already exist, or updated if it does.

If the document is inserted and has no id, then the id field generated will be returned to the result handler.

Here’s an example of saving a document and getting the id back:

JsonObject document = new JsonObject()
  .put("title", "The Hobbit");
mongoClient.save("books", document, res -> {
  if (res.succeeded()) {
    String id = res.result();
    System.out.println("Saved book with id " + id);
  } else {
    res.cause().printStackTrace();
  }
});

And here’s an example of saving a document which already has an id.

JsonObject document = new JsonObject()
  .put("title", "The Hobbit")
  .put("_id", "123244");
mongoClient.save("books", document, res -> {
  if (res.succeeded()) {
    // ...
  } else {
    res.cause().printStackTrace();
  }
});

Inserting documents

To insert a document you use insert.

If the document is inserted and has no id, then the id field generated will be returned to the result handler.

JsonObject document = new JsonObject()
  .put("title", "The Hobbit");
mongoClient.insert("books", document, res -> {
  if (res.succeeded()) {
    String id = res.result();
    System.out.println("Inserted book with id " + id);
  } else {
    res.cause().printStackTrace();
  }
});

If a document is inserted with an id, and a document with that id already exists, the insert will fail:

JsonObject document = new JsonObject()
  .put("title", "The Hobbit")
  .put("_id", "123244");
mongoClient.insert("books", document, res -> {
  if (res.succeeded()) {
    //...
  } else {
    // Will fail if the book with that id already exists.
  }
});

Updating documents

To update documents you use updateCollection.

This updates one or multiple documents in a collection. The json object that is passed in the update parameter must contain Update Operators and determines how the document is updated.

The json object specified in the query parameter determines which documents in the collection will be updated.

Here’s an example of updating a document in the books collection:

JsonObject query = new JsonObject()
  .put("title", "The Hobbit");
// Set the author field
JsonObject update = new JsonObject().put("$set", new JsonObject()
  .put("author", "J. R. R. Tolkien"));
mongoClient.updateCollection("books", query, update, res -> {
  if (res.succeeded()) {
    System.out.println("Book updated !");
  } else {
    res.cause().printStackTrace();
  }
});

To specify if the update should upsert or update multiple documents, use updateCollectionWithOptions and pass in an instance of UpdateOptions.

This has the following fields:

multi

set to true to update multiple documents

upsert

set to true to insert the document if the query doesn’t match

writeConcern

the write concern for this operation

JsonObject query = new JsonObject()
  .put("title", "The Hobbit");
// Set the author field
JsonObject update = new JsonObject().put("$set", new JsonObject()
  .put("author", "J. R. R. Tolkien"));
UpdateOptions options = new UpdateOptions().setMulti(true);
mongoClient.updateCollectionWithOptions("books", query, update, options, res -> {
  if (res.succeeded()) {
    System.out.println("Book updated !");
  } else {
    res.cause().printStackTrace();
  }
});

Replacing documents

To replace documents you use replaceDocuments.

This is similar to the update operation; however, it does not take any update operators. Instead it replaces the entire document with the one provided.

Here’s an example of replacing a document in the books collection:

JsonObject query = new JsonObject()
  .put("title", "The Hobbit");
JsonObject replace = new JsonObject()
  .put("title", "The Lord of the Rings")
  .put("author", "J. R. R. Tolkien");
mongoClient.replaceDocuments("books", query, replace, res -> {
  if (res.succeeded()) {
    System.out.println("Book replaced !");
  } else {
    res.cause().printStackTrace();
  }
});

Bulk operations

To execute multiple insert, update, replace, or delete operations at once, use bulkWrite.

You can pass a list of BulkOperations, each working similarly to the matching single operation. You can pass as many operations as you wish, even of the same type.

To specify if the bulk operation should be executed in order, and with which write option, use bulkWriteWithOptions and pass an instance of BulkWriteOptions. For more explanation of what ordered means, see Execution of Operations.
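
Here’s a minimal sketch of a bulk write against the books collection used in the earlier examples, using the BulkOperation factory methods createInsert and createUpdate:

List<BulkOperation> operations = new ArrayList<>();
// Insert a new book
operations.add(BulkOperation.createInsert(new JsonObject()
  .put("title", "The Hobbit")));
// Update the author of all matching books, using an update operator
operations.add(BulkOperation.createUpdate(
  new JsonObject().put("title", "The Hobbit"),
  new JsonObject().put("$set", new JsonObject().put("author", "J. R. R. Tolkien"))));
mongoClient.bulkWrite("books", operations, res -> {
  if (res.succeeded()) {
    // res.result() reports how many documents were inserted, matched and modified
  } else {
    res.cause().printStackTrace();
  }
});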

Finding documents

To find documents you use find.

The query parameter is used to match the documents in the collection.

Here’s a simple example with an empty query that will match all books:

JsonObject query = new JsonObject();
mongoClient.find("books", query, res -> {
  if (res.succeeded()) {
    for (JsonObject json : res.result()) {
      System.out.println(json.encodePrettily());
    }
  } else {
    res.cause().printStackTrace();
  }
});

Here’s another example that will match all books by Tolkien:

JsonObject query = new JsonObject()
  .put("author", "J. R. R. Tolkien");
mongoClient.find("books", query, res -> {
  if (res.succeeded()) {
    for (JsonObject json : res.result()) {
      System.out.println(json.encodePrettily());
    }
  } else {
    res.cause().printStackTrace();
  }
});

The matching documents are returned as a list of json objects in the result handler.

To specify things like which fields to return, how many results to return, etc., use findWithOptions and pass in an instance of FindOptions. An illustrative sketch follows the field list below.

This has the following fields:

fields

The fields to return in the results. Defaults to null, meaning all fields will be returned

sort

The fields to sort by. Defaults to null.

limit

The limit of the number of results to return. Defaults to -1, meaning all results will be returned.

skip

The number of documents to skip before returning the results. Defaults to 0.

hint

The index to use. Defaults to empty String.
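
For instance, here’s a sketch of a query with findWithOptions that returns only the title field of at most ten books, sorted by title (reusing the query from the examples above):

JsonObject query = new JsonObject()
  .put("author", "J. R. R. Tolkien");
FindOptions options = new FindOptions()
  .setFields(new JsonObject().put("title", 1)) // only return the title field
  .setSort(new JsonObject().put("title", 1))   // sort by title, ascending
  .setLimit(10);
mongoClient.findWithOptions("books", query, options, res -> {
  if (res.succeeded()) {
    for (JsonObject json : res.result()) {
      System.out.println(json.encodePrettily());
    }
  } else {
    res.cause().printStackTrace();
  }
});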

Finding documents in batches

When dealing with large data sets, it is not advised to use the find and findWithOptions methods. In order to avoid inflating the whole response into memory, use findBatch:

JsonObject query = new JsonObject()
  .put("author", "J. R. R. Tolkien");
mongoClient.findBatch("book", query)
  .exceptionHandler(throwable -> throwable.printStackTrace())
  .endHandler(v -> System.out.println("End of search"))
  .handler(doc -> System.out.println("Found doc: " + doc.encodePrettily()));

The matching documents are emitted one by one by the ReadStream handler.

FindOptions has an extra parameter batchSize which you can use to set the number of documents to load at once:

JsonObject query = new JsonObject()
  .put("author", "J. R. R. Tolkien");
FindOptions options = new FindOptions().setBatchSize(100);
mongoClient.findBatchWithOptions("book", query, options)
  .exceptionHandler(throwable -> throwable.printStackTrace())
  .endHandler(v -> System.out.println("End of search"))
  .handler(doc -> System.out.println("Found doc: " + doc.encodePrettily()));

By default, batchSize is set to 20.

Finding a single document

To find a single document you use findOne.

This works just like find but returns only the first matching document.
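
Here’s a minimal sketch following the find examples above; the third argument restricts the fields to return, and passing null returns all fields:

JsonObject query = new JsonObject()
  .put("title", "The Hobbit");
mongoClient.findOne("books", query, null, res -> {
  if (res.succeeded()) {
    JsonObject book = res.result(); // null if no document matched the query
    System.out.println(book == null ? "Not found" : book.encodePrettily());
  } else {
    res.cause().printStackTrace();
  }
});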

Removing documents

To remove documents use removeDocuments.

The query parameter is used to match the documents in the collection to determine which ones to remove.

Here’s an example of removing all Tolkien books:

JsonObject query = new JsonObject()
  .put("author", "J. R. R. Tolkien");
mongoClient.removeDocuments("books", query, res -> {
  if (res.succeeded()) {
    System.out.println("Never much liked Tolkien stuff!");
  } else {
    res.cause().printStackTrace();
  }
});

Removing a single document

To remove a single document you use removeDocument.

This works just like removeDocuments but removes only the first matching document.
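
Here’s a minimal sketch mirroring the removeDocuments example above; the result reports the number of documents that were removed:

JsonObject query = new JsonObject()
  .put("author", "J. R. R. Tolkien");
mongoClient.removeDocument("books", query, res -> {
  if (res.succeeded()) {
    System.out.println("Removed " + res.result().getRemovedCount() + " document(s)");
  } else {
    res.cause().printStackTrace();
  }
});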

Counting documents

To count documents use count.

Here’s an example that counts the number of Tolkien books. The number is passed to the result handler.

JsonObject query = new JsonObject()
  .put("author", "J. R. R. Tolkien");
mongoClient.count("books", query, res -> {
  if (res.succeeded()) {
    long num = res.result();
  } else {
    res.cause().printStackTrace();
  }
});

Managing MongoDB collections

All MongoDB documents are stored in collections.

To get a list of all collections you can use getCollections

mongoClient.getCollections(res -> {
  if (res.succeeded()) {
    List<String> collections = res.result();
  } else {
    res.cause().printStackTrace();
  }
});

To create a new collection you can use createCollection

mongoClient.createCollection("mynewcollection", res -> {
  if (res.succeeded()) {
    // Created ok!
  } else {
    res.cause().printStackTrace();
  }
});

To drop a collection you can use dropCollection

Dropping a collection will delete all documents within it!

mongoClient.dropCollection("mynewcollection", res -> {
  if (res.succeeded()) {
    // Dropped ok!
  } else {
    res.cause().printStackTrace();
  }
});

Running other MongoDB commands

You can run arbitrary MongoDB commands with runCommand.

Commands can be used to run more advanced MongoDB features, such as using MapReduce. For more information see the mongo docs for supported Commands.

Here’s an example of running an aggregate command. Note that the command name must be specified as a parameter and also be contained in the JSON that represents the command. This is because JSON is not ordered but BSON is ordered and MongoDB expects the first BSON entry to be the name of the command. In order for us to know which of the entries in the JSON is the command name it must be specified as a parameter.

JsonObject command = new JsonObject()
  .put("aggregate", "collection_name")
  .put("pipeline", new JsonArray());
mongoClient.runCommand("aggregate", command, res -> {
  if (res.succeeded()) {
    JsonArray resArr = res.result().getJsonArray("result");
    // etc
  } else {
    res.cause().printStackTrace();
  }
});

MongoDB Extended JSON support

For now, only date, oid and binary types are supported (see MongoDB Extended JSON).

Here’s an example of inserting a document with a date field:

JsonObject document = new JsonObject()
  .put("title", "The Hobbit")
  //ISO-8601 date
  .put("publicationDate", new JsonObject().put("$date", "1937-09-21T00:00:00+00:00"));
mongoClient.save("publishedBooks", document).compose(id -> {
  return mongoClient.findOne("publishedBooks", new JsonObject().put("_id", id), null);
}).onComplete(res -> {
  if (res.succeeded()) {
    System.out.println("To retrieve ISO-8601 date : "
      + res.result().getJsonObject("publicationDate").getString("$date"));
  } else {
    res.cause().printStackTrace();
  }
});

Here’s an example (in Java) of inserting a document with a binary field and reading it back:

byte[] binaryObject = new byte[40];
JsonObject document = new JsonObject()
  .put("name", "Alan Turing")
  .put("binaryStuff", new JsonObject().put("$binary", binaryObject));
mongoClient.save("smartPeople", document).compose(id -> {
  return mongoClient.findOne("smartPeople", new JsonObject().put("_id", id), null);
}).onComplete(res -> {
  if (res.succeeded()) {
    byte[] reconstitutedBinaryObject = res.result().getJsonObject("binaryStuff").getBinary("$binary");
    //This could now be de-serialized into an object in real life
  } else {
    res.cause().printStackTrace();
  }
});

Here’s an example of inserting a base64-encoded string, storing it as a binary field, and reading it back:

String base64EncodedString = "a2FpbHVhIGlzIHRoZSAjMSBiZWFjaCBpbiB0aGUgd29ybGQ=";
JsonObject document = new JsonObject()
  .put("name", "Alan Turing")
  .put("binaryStuff", new JsonObject().put("$binary", base64EncodedString));
mongoClient.save("smartPeople", document).compose(id -> {
  return mongoClient.findOne("smartPeople", new JsonObject().put("_id", id), null);
}).onComplete(res -> {
  if (res.succeeded()) {
    String reconstitutedBase64EncodedString = res.result().getJsonObject("binaryStuff").getString("$binary");
    //This could now be converted back to bytes from the base64 string
  } else {
    res.cause().printStackTrace();
  }
});

Here’s an example of inserting an object ID and reading it back:

String individualId = new ObjectId().toHexString();
JsonObject document = new JsonObject()
  .put("name", "Stephen Hawking")
  .put("individualId", new JsonObject().put("$oid", individualId));
mongoClient.save("smartPeople", document).compose(id -> {
  JsonObject query = new JsonObject().put("_id", id);
  return mongoClient.findOne("smartPeople", query, null);
}).onComplete(res -> {
  if (res.succeeded()) {
    String reconstitutedIndividualId = res.result().getJsonObject("individualId").getString("$oid");
  } else {
    res.cause().printStackTrace();
  }
});

Getting distinct values

Here’s an example of getting distinct values:

JsonObject document = new JsonObject()
  .put("title", "The Hobbit");
mongoClient.save("books", document).compose(v -> {
  return mongoClient.distinct("books", "title", String.class.getName());
}).onComplete(res -> {
  if (res.succeeded()) {
    System.out.println("Title is : " + res.result().getJsonArray(0));
  } else {
    res.cause().printStackTrace();
  }
});

Here’s an example of getting distinct values in batch mode:

JsonObject document = new JsonObject()
  .put("title", "The Hobbit");
mongoClient.save("books", document, res -> {
  if (res.succeeded()) {
    mongoClient.distinctBatch("books", "title", String.class.getName())
      .handler(book -> System.out.println("Title is : " + book.getString("title")));
  } else {
    res.cause().printStackTrace();
  }
});

Here’s an example of getting distinct values with a query:

JsonObject document = new JsonObject()
  .put("title", "The Hobbit")
  .put("publicationDate", new JsonObject().put("$date", "1937-09-21T00:00:00+00:00"));
JsonObject query = new JsonObject()
  .put("publicationDate",
    new JsonObject().put("$gte", new JsonObject().put("$date", "1937-09-21T00:00:00+00:00")));
mongoClient.save("books", document).compose(v -> {
  return mongoClient.distinctWithQuery("books", "title", String.class.getName(), query);
}).onComplete(res -> {
  if (res.succeeded()) {
    System.out.println("Title is : " + res.result().getJsonArray(0));
  }
});

Here’s an example of getting distinct values in batch mode with a query:

JsonObject document = new JsonObject()
  .put("title", "The Hobbit")
  .put("publicationDate", new JsonObject().put("$date", "1937-09-21T00:00:00+00:00"));
JsonObject query = new JsonObject()
  .put("publicationDate", new JsonObject()
    .put("$gte", new JsonObject().put("$date", "1937-09-21T00:00:00+00:00")));
mongoClient.save("books", document, res -> {
  if (res.succeeded()) {
    mongoClient.distinctBatchWithQuery("books", "title", String.class.getName(), query)
      .handler(book -> System.out.println("Title is : " + book.getString("title")));
  }
});

Storing/Retrieving files and binary data

The client can store and retrieve files and binary data using MongoDB GridFS. The MongoGridFsClient can be used to upload or download files and streams to GridFS.

Get the MongoGridFsClient to interact with GridFS.

The MongoGridFsClient is created by calling createGridFsBucketService and providing a bucket name. In GridFS, the bucket name ends up being a collection that contains references to all of the objects that are stored. You can segregate objects into distinct buckets by providing a unique name.

This has the following fields:

bucketName : The name of the bucket to create

Here’s an example of getting a MongoGridFsClient with a custom bucket name:

mongoClient.createGridFsBucketService("bakeke", res -> {
  if (res.succeeded()) {
    //Interact with the GridFS client...
    MongoGridFsClient client = res.result();
  } else {
    res.cause().printStackTrace();
  }
});

GridFS uses a default bucket named "fs". If you prefer to get the default bucket instead of naming your own, call createDefaultGridFsBucketService.

Here’s an example of getting a MongoGridFsClient with the default bucket name.

mongoClient.createDefaultGridFsBucketService(res -> {
  if (res.succeeded()) {
    //Interact with the GridFS client...
    MongoGridFsClient client = res.result();
  } else {
    res.cause().printStackTrace();
  }
});

Drop an entire file bucket from GridFS.

An entire file bucket along with all of its contents can be dropped with drop. It will drop the bucket that was specified when the MongoGridFsClient was created.

Here is an example of dropping a file bucket.

gridFsClient.drop(res -> {
  if (res.succeeded()) {
    //The file bucket is dropped and all files in it, erased
  } else {
    res.cause().printStackTrace();
  }
});

Find all file IDs in a GridFS bucket.

A list of all of the file IDs in a bucket can be found with findAllIds. The files can be downloaded by ID using downloadFileByID.

Here is an example of retrieving the list of file IDs.

gridFsClient.findAllIds(res -> {
  if (res.succeeded()) {
    List<String> ids = res.result(); //List of file IDs
  } else {
    res.cause().printStackTrace();
  }
});

Find file IDs in a GridFS bucket matching a query.

A query can be specified to match files in the GridFS bucket. findIds will return a list of file IDs that match the query.

This has the following fields:

query : This is a json object that can match any of the file’s metadata using standard MongoDB query operators. An empty json object will match all documents. You can query on attributes of the GridFS files collection as described in the GridFS manual: https://docs.mongodb.com/manual/core/gridfs/#the-files-collection

The files can be downloaded by ID using downloadFileByID.

Here is an example of retrieving the list of file IDs based on a metadata query.

JsonObject query = new JsonObject().put("metadata.nick_name", "Puhi the eel");
gridFsClient.findIds(query, res -> {
  if (res.succeeded()) {
    List<String> ids = res.result(); //List of file IDs
  } else {
    res.cause().printStackTrace();
  }
});

Delete a file in GridFS based on its ID.

A file previously stored in GridFS can be deleted with delete by providing the ID of the file. The file IDs can be retrieved with a query using findIds.

This has the following fields:

id : The ID generated by GridFS when the file was stored

Here is an example of deleting a file by ID.

String id = "56660b074cedfd000570839c"; //The GridFS ID of the file
gridFsClient.delete(id, (AsyncResult<Void> res) -> {
  if (res.succeeded()) {
    //File deleted
  } else {
    //Something went wrong
    res.cause().printStackTrace();
  }
});

Upload a file in GridFS

A file can be stored by name with uploadFile. When it succeeds, the ID generated by GridFS will be returned. This ID can be used to retrieve the file later.

This has the following fields:

fileName : this is the name used to save the file in GridFS

gridFsClient.uploadFile("file.name", res -> {
  if (res.succeeded()) {
    String id = res.result();
    //The ID of the stored object in Grid FS
  } else {
    res.cause().printStackTrace();
  }
});

Upload a file in GridFS with options.

A file can be stored with additional options with uploadFileWithOptions passing in an instance of GridFsUploadOptions. When it succeeds, the ID generated by GridFS will be returned.

This has the following fields:

metadata : this is a json object that includes any metadata that may be useful in a later search

chunkSizeBytes : GridFS will break up the file into chunks of this size

Here is an example of a file upload that specifies the chunk size and metadata.

JsonObject metadata = new JsonObject();
metadata.put("nick_name", "Puhi the Eel");

GridFsUploadOptions options = new GridFsUploadOptions();
options.setChunkSizeBytes(1024);
options.setMetadata(metadata);

gridFsClient.uploadFileWithOptions("file.name", options, res -> {
  if (res.succeeded()) {
    String id = res.result();
    //The ID of the stored object in Grid FS
  } else {
    res.cause().printStackTrace();
  }
});

Download a file previously stored in GridFS

A file can be downloaded by its original name with downloadFile. When the download is complete, the result handler will return the length of the download as a Long.

This has the following fields:

fileName

the name of the file that was previously stored

Here is an example of downloading a file using the name that it was stored with in GridFS.

gridFsClient.downloadFile("file.name", res -> {
  if (res.succeeded()) {
    Long fileLength = res.result();
    //The length of the file stored in fileName
  } else {
    res.cause().printStackTrace();
  }
});

Download a file previously stored in GridFS given its ID

A file can be downloaded to a given file name by its ID with downloadFileByID. When the download succeeds, the result handler will return the length of the download as a Long.

This has the following fields:

id : The ID generated by GridFS when the file was stored

Here is an example of downloading a file using the ID that it was given when stored in GridFS.

String id = "56660b074cedfd000570839c";
String filename = "puhi.fil";
gridFsClient.downloadFileByID(id, filename, res -> {
  if (res.succeeded()) {
    Long fileLength = res.result();
    //The length of the file stored in fileName
  } else {
    res.cause().printStackTrace();
  }
});

Download a file from GridFS to a new name

A file can be resolved using its original name and then downloaded to a new name with downloadFileAs. When the download succeeds, the result handler will return the length of the download as a Long.

This has the following fields:

fileName : the name of the file that was previously stored

newFileName : the new name for which the file will be stored

gridFsClient.downloadFileAs("file.name", "new_file.name", res -> {
  if (res.succeeded()) {
    Long fileLength = res.result();
    //The length of the file stored in fileName
  } else {
    res.cause().printStackTrace();
  }
});

Upload a Stream to GridFS

Streams can be uploaded to GridFS using uploadByFileName. Once the stream is uploaded, the result handler will be called with the ID generated by GridFS.

This has the following fields:

stream : the ReadStream to upload

fileName : the name for which the stream will be stored

Here is an example of uploading a file stream to GridFS:

gridFsStreamClient.uploadByFileName(asyncFile, "kanaloa", stringAsyncResult -> {
  String id = stringAsyncResult.result();
});

Upload a Stream to GridFS with Options

Streams can be uploaded to GridFS using uploadByFileNameWithOptions passing in an instance of GridFsUploadOptions. Once the stream is uploaded, the result handler will be called with the ID generated by GridFS.

This has the following fields:

stream : the ReadStream to upload

fileName : the name for which the stream will be stored

options : an instance of GridFsUploadOptions

GridFsUploadOptions has the following fields:

metadata : this is a json object that includes any metadata that may be useful in a later search

chunkSizeBytes : GridFS will break up the file into chunks of this size

Here is an example of uploading a file stream with options to GridFS:

GridFsUploadOptions options = new GridFsUploadOptions();
options.setChunkSizeBytes(2048);
options.setMetadata(new JsonObject().put("category", "Polynesian gods"));
gridFsStreamClient.uploadByFileNameWithOptions(asyncFile, "kanaloa", options, stringAsyncResult -> {
  String id = stringAsyncResult.result();
});

Download a Stream from GridFS using File Name

Streams can be downloaded from GridFS using a file name with downloadByFileName. Once the stream is downloaded a result handler will be called with the length of the stream as a Long.

This has the following fields:

stream : the WriteStream to download to

fileName : the name of the file that will be downloaded to the stream

Here is an example of downloading a file to a stream:

gridFsStreamClient.downloadByFileName(asyncFile, "kamapuaa.fil", longAsyncResult -> {
  Long length = longAsyncResult.result();
});

Download a Stream with Options from GridFS using File Name

Streams can be downloaded from GridFS using a file name and download options with downloadByFileNameWithOptions passing in an instance of GridFsDownloadOptions. Once the stream is downloaded a result handler will be called with the length of the stream as a Long.

This has the following fields:

stream : the WriteStream to download to

fileName : the name of the file that will be downloaded to the stream

options : an instance of GridFsDownloadOptions

GridFsDownloadOptions has the following field:

revision : the revision of the file to download

Here is an example of downloading a file to a stream with options:

GridFsDownloadOptions options = new GridFsDownloadOptions();
options.setRevision(0);
gridFsStreamClient.downloadByFileNameWithOptions(asyncFile, "kamapuaa.fil", options, longAsyncResult -> {
  Long length = longAsyncResult.result();
});

Download a Stream from GridFS using ID

Streams can be downloaded using the ID generated by GridFS with downloadById. Once the stream is downloaded a result handler will be called with the length of the stream as a Long.

This has the following fields:

stream : the WriteStream to download to

id : the string representation of the ID generated by GridFS

Here is an example of downloading a file to a stream using the object’s ID:

String id = "58f61bf84cedfd000661af06";
gridFsStreamClient.downloadById(asyncFile, id, longAsyncResult -> {
  Long length = longAsyncResult.result();
});

Configuring the client

The client is configured with a json object.

The following configuration is supported by the mongo client:

db_name

Name of the database in the MongoDB instance to use. Defaults to default_db

useObjectId

Toggle this option to support persisting and retrieving ObjectIds as strings. If true, hex strings will be saved as native MongoDB ObjectId types in the document collection. This will allow the sorting of documents based on creation time. You can also derive the creation time from the hex string using ObjectId::getDate(). Set to false for other types of your choosing. If set to false, or left as the default, hex strings will be generated as the document _id if the _id is omitted from the document. Defaults to false.

The mongo client tries to support most options that are allowed by the driver. There are two ways to configure mongo for use by the driver, either by a connection string or by separate configuration options.

connection_string

The connection string the driver uses to create the client. E.g. mongodb://localhost:27017. For more information on the format of the connection string please consult the driver documentation.
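
For illustration, here’s a minimal configuration sketch that combines the options described in this section (the host, database name and option values are placeholders):

JsonObject config = new JsonObject()
  .put("connection_string", "mongodb://localhost:27017")
  .put("db_name", "mydb")
  .put("useObjectId", true);

MongoClient client = MongoClient.createShared(vertx, config);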

Specific driver configuration options

{
 // Single Cluster Settings
 "host" : "127.0.0.1", // string
 "port" : 27017,      // int

 // Multiple Cluster Settings
 "hosts" : [
   {
     "host" : "cluster1", // string
     "port" : 27000       // int
   },
   {
     "host" : "cluster2", // string
     "port" : 28000       // int
   },
   ...
 ],
 "replicaSet" :  "foo",    // string
 "serverSelectionTimeoutMS" : 30000, // long

 // Connection Pool Settings
 "maxPoolSize" : 50,                // int
 "minPoolSize" : 25,                // int
 "maxIdleTimeMS" : 300000,          // long
 "maxLifeTimeMS" : 3600000,         // long
 "waitQueueTimeoutMS" : 10000,      // long
 "maintenanceFrequencyMS" : 2000,   // long
 "maintenanceInitialDelayMS" : 500, // long

 // Credentials / Auth
 "username"   : "john",     // string
 "password"   : "passw0rd", // string
 "authSource" : "some.db"   // string
 // Auth mechanism
 "authMechanism"     : "GSSAPI",        // string
 "gssapiServiceName" : "myservicename", // string

 // Socket Settings
 "connectTimeoutMS" : 300000, // int
 "socketTimeoutMS"  : 100000, // int
 "sendBufferSize"    : 8192,  // int
 "receiveBufferSize" : 8192,  // int

 // Server Settings
 "heartbeatFrequencyMS"    : 1000, // long
 "minHeartbeatFrequencyMS" :  500, // long

 // SSL Settings
 "ssl" : false,                       // boolean
 "sslInvalidHostNameAllowed" : false, // boolean
 "trustAll" : false,                  // boolean
 "keyPath" : "key.pem",               // string
 "certPath" : "cert.pem",             // string
 "caPath" : "ca.pem",                 // string

 // Network compression Settings
 "compressors"           : ["zstd", "snappy", "zlib"],  // string array
 "zlibCompressionLevel"  : 6                            // int
}

Driver option descriptions

host

The host the MongoDB instance is running. Defaults to 127.0.0.1. This is ignored if hosts is specified

port

The port the MongoDB instance is listening on. Defaults to 27017. This is ignored if hosts is specified

hosts

An array representing the hosts and ports to support a MongoDB cluster (sharding / replication)

host

A host in the cluster

port

The port a host in the cluster is listening on

replicaSet

The name of the replica set, if the MongoDB instance is a member of a replica set

serverSelectionTimeoutMS

The time in milliseconds that the mongo driver will wait to select a server for an operation before raising an error.

maxPoolSize

The maximum number of connections in the connection pool. The default value is 100

minPoolSize

The minimum number of connections in the connection pool. The default value is 0

maxIdleTimeMS

The maximum idle time of a pooled connection. The default value is 0 which means there is no limit

maxLifeTimeMS

The maximum time a pooled connection can live for. The default value is 0 which means there is no limit

waitQueueTimeoutMS

The maximum time that a thread may wait for a connection to become available. Default value is 120000 (2 minutes)

maintenanceFrequencyMS

The time period between runs of the maintenance job. Default is 0.

maintenanceInitialDelayMS

The period of time to wait before running the first maintenance job on the connection pool. Default is 0.

username

The username to authenticate. Default is null (meaning no authentication required)

password

The password to use to authenticate.

authSource

The database name associated with the user’s credentials. Default value is the db_name value.

authMechanism

The authentication mechanism to use. See Authentication (http://docs.mongodb.org/manual/core/authentication/) for more details.

gssapiServiceName

The Kerberos service name if GSSAPI is specified as the authMechanism.

connectTimeoutMS

The time in milliseconds to attempt a connection before timing out. Default is 10000 (10 seconds)

socketTimeoutMS

The time in milliseconds to attempt a send or receive on a socket before the attempt times out. Default is 0 meaning there is no timeout

sendBufferSize

Sets the send buffer size (SO_SNDBUF) for the socket. Default is 0, meaning it will use the OS default for this option.

receiveBufferSize

Sets the receive buffer size (SO_RCVBUF) for the socket. Default is 0, meaning it will use the OS default for this option.

heartbeatFrequencyMS

The frequency that the cluster monitor attempts to reach each server. Default is 5000 (5 seconds)

minHeartbeatFrequencyMS

The minimum heartbeat frequency. The default value is 1000 (1 second)

ssl

Enable ssl between the vertx-mongo-client and mongo

sslInvalidHostNameAllowed

Accept hostnames not included in the server’s certificate

trustAll

When using ssl, trust ALL certificates. WARNING - Trusting ALL certificates will open you up to potential security issues such as MITM attacks.

keyPath

Set a path to a file that contains the client key that will be used to authenticate against the server when making SSL connections to mongo.

certPath

Set a path to a file that contains the certificate that will be used to authenticate against the server when making SSL connections to mongo.

caPath

Set a path to a file that contains a certificate that will be used as a source of trust when making SSL connections to mongo.

compressors

Sets the compression algorithm for network transmission. Valid values are snappy, zlib and zstd; the default value is null (meaning no compression).

For snappy and zstd compression support, additional dependencies must be added to your project build descriptor (snappy-java and zstd-jni, respectively); see the Maven sketch at the end of this section.

zlibCompressionLevel

Sets the compression level for zlib. Valid values are between -1 and 9, the default value is -1 if zlib is enabled.

Most of the default values listed above use the default values of the MongoDB Java Driver. Please consult the driver documentation for up-to-date information.
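
As an illustration, the extra compression dependencies could be declared in Maven along these lines; the coordinates follow the upstream snappy-java and zstd-jni projects, and the versions are placeholders to be aligned with your driver version:

<dependency>
 <groupId>org.xerial.snappy</groupId>
 <artifactId>snappy-java</artifactId>
 <version>x.y.z</version> <!-- placeholder -->
</dependency>
<dependency>
 <groupId>com.github.luben</groupId>
 <artifactId>zstd-jni</artifactId>
 <version>x.y.z</version> <!-- placeholder -->
</dependency>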

RxJava 3 API

The Mongo client provides an Rxified version of the original API.

Creating an Rxified client

To create an Rxified Mongo client, make sure to import the MongoClient class. Then use one of the create methods to get an instance:

MongoClient client = MongoClient.createShared(vertx, config);

Finding documents in batches

A ReadStream can be converted to a Flowable, which is handy when you have to deal with large data sets:

JsonObject query = new JsonObject()
  .put("author", "J. R. R. Tolkien");

ReadStream<JsonObject> books = mongoClient.findBatch("book", query);

// Convert the stream to a Flowable
Flowable<JsonObject> flowable = books.toFlowable();

flowable.subscribe(doc -> {
  System.out.println("Found doc: " + doc.encodePrettily());
}, throwable -> {
  throwable.printStackTrace();
}, () -> {
  System.out.println("End of research");
});